
Google Professional Data Engineer Exam: Practice Questions

Question 32

You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules:

    No interaction by the user on the site for 1 hour

    Has added more than $30 worth of products to the basket

    Has not completed a transaction

You use Google Cloud Dataflow to process the data and decide if a message should be sent. How should you design the pipeline?

Options:

A. Use a fixed-time window with a duration of 60 minutes.

B. Use a sliding time window with a duration of 60 minutes.

C. Use a session window with a gap time duration of 60 minutes.

D. Use a global window with a time-based trigger with a delay of 60 minutes.
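
For illustration, here is a minimal Apache Beam (Python SDK) sketch of the session-windowing technique named in option C, where a user's session closes after 60 minutes without activity. It runs on synthetic in-memory events; a production pipeline would read from Pub/Sub instead, and all names and values here are hypothetical.

    import apache_beam as beam
    from apache_beam import window

    # Synthetic clickstream: (user_id, event) pairs; "ts" is seconds of event time.
    events = [
        ("u1", {"type": "add_to_basket", "value": 35.0, "ts": 0}),
        ("u1", {"type": "page_view", "ts": 600}),
        ("u2", {"type": "add_to_basket", "value": 12.0, "ts": 100}),
    ]

    with beam.Pipeline() as p:
        (p
         | "CreateEvents" >> beam.Create(events)
         | "Timestamp" >> beam.Map(lambda kv: window.TimestampedValue(kv, kv[1]["ts"]))
         # A session window closes after a 60-minute gap with no events for the
         # same user, matching the "no interaction for 1 hour" rule.
         | "SessionWindow" >> beam.WindowInto(window.Sessions(gap_size=60 * 60))
         | "GroupByUser" >> beam.GroupByKey()
         | "Inspect" >> beam.Map(print))

Each grouped element is one user's completed session; the basket-value and no-transaction rules would be applied to that grouped history before deciding whether to send a message.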

Question 33

You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine. Which learning algorithm should you use?

Options:

A. Linear regression

B. Logistic classification

C. Recurrent neural network

D. Feedforward neural network
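
As a point of comparison for the options above, a linear model is the cheapest of the four to train: ordinary least squares reduces fitting to a single small matrix solve, which runs comfortably on one constrained VM. A minimal sketch on synthetic housing data (all features and prices are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 1000, 5                              # 1,000 synthetic listings, 5 features
    X = rng.random((n, d))
    true_w = np.array([120.0, 80.0, -15.0, 40.0, 5.0])
    y = X @ true_w + rng.normal(0.0, 1.0, n)    # noisy synthetic prices

    # Append a bias column and solve the least-squares problem in one step.
    Xb = np.hstack([X, np.ones((n, 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    print("learned weights:", np.round(w, 2))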

Question 34

Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other’s data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.)

Options:

A. Load data into different partitions.

B. Load data into a different dataset for each client.

C. Put each client’s BigQuery dataset into a different table.

D. Restrict a client’s dataset to approved users.

E. Only allow a service account to access the datasets.

F. Use the appropriate identity and access management (IAM) roles for each client’s users.
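
For context on the dataset-scoped access pattern several of these options describe, here is a hedged sketch using the google-cloud-bigquery client: it grants one approved group read access to a single client's dataset. The project, dataset, and group names are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client()
    # Hypothetical per-client dataset; each client would get its own.
    dataset = client.get_dataset("my-project.client_a_data")

    # Grant an approved group READER access to this dataset only.
    entries = list(dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role="READER",
            entity_type="groupByEmail",
            entity_id="client-a-analysts@example.com",
        )
    )
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])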

Question 35

Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do?

Options:

A. Create a Google Cloud Dataflow job to process the data.

B. Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.

C. Create a Hadoop cluster on Google Compute Engine that uses persistent disks.

D. Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.

E. Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.
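
As a sketch of the managed route, the google-cloud-dataproc client below creates a cluster whose jobs can read and write gs:// paths through the Cloud Storage connector that Dataproc preinstalls, so data persists after the cluster is deleted. Project ID, region, cluster name, and machine types are hypothetical.

    from google.cloud import dataproc_v1

    region = "us-central1"
    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )

    # A 30-worker cluster mirroring the existing on-premises Hadoop cluster.
    cluster = {
        "project_id": "my-project",
        "cluster_name": "hadoop-migration",
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
            "worker_config": {"num_instances": 30, "machine_type_uri": "n1-standard-4"},
        },
    }

    op = client.create_cluster(
        request={"project_id": "my-project", "region": region, "cluster": cluster}
    )
    print("Cluster created:", op.result().cluster_name)

Existing Hadoop jobs would then be submitted against gs:// input and output paths, for example with gcloud dataproc jobs submit hadoop.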
