Google Professional Data Engineer Exam

Question 52

You are operating a streaming Cloud Dataflow pipeline. Your engineers have a new version of the pipeline with a different windowing algorithm and triggering strategy. You want to update the running pipeline with the new version. You want to ensure that no data is lost during the update. What should you do?

Options:

A.

Update the Cloud Dataflow pipeline in-flight by passing the --update option with the --jobName set to the existing job name

B.

Update the Cloud Dataflow pipeline in-flight by passing the --update option with the --jobName set to a new unique job name

C.

Stop the Cloud Dataflow pipeline with the Cancel option. Create a new Cloud Dataflow job with the updated code

D.

Stop the Cloud Dataflow pipeline with the Drain option. Create a new Cloud Dataflow job with the updated code
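
The options turn on two Dataflow mechanisms: stopping a job with Drain (which stops ingesting new input, finishes processing data already buffered, then shuts down) versus Cancel (which discards in-flight data), and updating a job in place with the --update flag, which is subject to compatibility checks between the old and new job graphs. Below is a minimal sketch of the drain-then-relaunch sequence in Python; the pipeline file, project, region, and job values are hypothetical placeholders.

import subprocess

# Hypothetical placeholders; substitute real project, region, and job values.
PROJECT = "my-project"
REGION = "us-central1"
OLD_JOB_ID = "2024-01-01_00_00_00-1234567890"  # ID of the running streaming job

# Drain the running job: Dataflow stops pulling new input, finishes processing
# buffered data, then shuts down, so in-flight records are not lost.
subprocess.run(
    ["gcloud", "dataflow", "jobs", "drain", OLD_JOB_ID,
     "--project", PROJECT, "--region", REGION],
    check=True,
)

# After the job reaches the DRAINED state, launch the updated code as a new
# job; the new windowing and triggering logic starts from a clean slate.
subprocess.run(
    ["python", "updated_pipeline.py",
     "--runner", "DataflowRunner",
     "--project", PROJECT,
     "--region", REGION,
     "--job_name", "streaming-pipeline-v2",
     "--streaming"],
    check=True,
)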

Question 53

You have historical data covering the last three years in BigQuery and a data pipeline that delivers new data to BigQuery daily. You have noticed that when the Data Science team runs a query filtered on a date column and limited to 30–90 days of data, the query scans the entire table. You also noticed that your bill is increasing more quickly than you expected. You want to resolve the issue as cost-effectively as possible while maintaining the ability to conduct SQL queries. What should you do?

Options:

A.

Re-create the tables using DDL. Partition the tables by a column containing a TIMESTAMP or DATE type.

B.

Recommend that the Data Science team export the table to a CSV file on Cloud Storage and use Cloud Datalab to explore the data by reading the files directly.

C.

Modify your pipeline to maintain the last 30–90 days of data in one table and the longer history in a different table to minimize full table scans over the entire history.

D.

Write an Apache Beam pipeline that creates a BigQuery table per day. Recommend that the Data Science team use wildcards on the table name suffixes to select the data they need.
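
Option A refers to BigQuery partitioned tables: when a table is partitioned on a DATE or TIMESTAMP column, a filter on that column lets BigQuery prune partitions and scan only the requested date range instead of the full three years of history. Below is a minimal sketch using the google-cloud-bigquery client; the dataset, table, and column names are hypothetical.

from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical names; replace with the real dataset, tables, and date column.
ddl = """
CREATE TABLE `my_dataset.events_partitioned`
PARTITION BY DATE(event_ts)
AS
SELECT * FROM `my_dataset.events`
"""
client.query(ddl).result()  # re-create the table via DDL, partitioned by date

# A query that filters on the partitioning column scans only the matching
# partitions, e.g. the last 90 days rather than the entire table.
sql = """
SELECT user_id, COUNT(*) AS sessions
FROM `my_dataset.events_partitioned`
WHERE DATE(event_ts) >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
GROUP BY user_id
"""
rows = client.query(sql).result()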

Question 54

Your business users need a way to clean and prepare data before using the data for analysis. Your business users are less technically savvy and prefer to work with graphical user interfaces to define their transformations. After the data has been transformed, the business users want to perform their analysis directly in a spreadsheet. You need to recommend a solution that they can use. What should you do?

Options:

A.

Use Dataprep to clean the data, and write the results to BigQuery. Analyze the data by using Connected Sheets.

B.

Use Dataprep to clean the data, and write the results to BigQuery. Analyze the data by using Looker Studio.

C.

Use Dataflow to clean the data, and write the results to BigQuery. Analyze the data by using Connected Sheets.

D.

Use Dataflow to clean the data, and write the results to BigQuery. Analyze the data by using Looker Studio.

Question 55

You are designing a data mesh on Google Cloud with multiple distinct data engineering teams building data products. The typical data curation design pattern consists of landing files in Cloud Storage, transforming raw data in Cloud Storage and BigQuery datasets, and storing the final curated data product in BigQuery datasets. You need to configure Dataplex to ensure that each team can access only the assets needed to build their data products. You also need to ensure that teams can easily share the curated data product. What should you do?

Options:

A.

1. Create a single Dataplex virtual lake and create a single zone to contain landing, raw, and curated data.

2. Provide each data engineering team access to the virtual lake.

B.

1. Create a single Dataplex virtual lake and create a single zone to contain landing, raw, and curated data.

2. Build separate assets for each data product within the zone.

3. Assign permissions to the data engineering teams at the zone level.

C.

1. Create a Dataplex virtual lake for each data product, and create a single zone to contain landing, raw, and curated data.

2. Provide the data engineering teams with full access to the virtual lake assigned to their data product.

D.

1. Create a Dataplex virtual lake for each data product, and create multiple zones for landing, raw, and curated data.

2. Provide the data engineering teams with full access to the virtual lake assigned to their data product.
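
All four options rest on the Dataplex hierarchy: a lake contains zones (typically a raw zone for landing and raw data and a curated zone for the finished product), zones contain assets (the underlying Cloud Storage buckets and BigQuery datasets), and IAM can be granted at the lake, zone, or asset level. Below is a rough sketch of building that hierarchy for one data product by driving gcloud from Python; the project, region, bucket, and dataset names are hypothetical placeholders, and the exact flags should be checked against the current gcloud dataplex reference.

import subprocess

# Hypothetical placeholders for one data product's resources.
PROJECT = "my-project"
REGION = "us-central1"
LAKE = "sales-data-product"
BUCKET = "sales-landing-bucket"
DATASET = "sales_curated"

def gcloud(*args):
    """Run a gcloud command and fail loudly on error."""
    subprocess.run(["gcloud", *args, "--project", PROJECT], check=True)

# A lake groups everything one team owns for its data product.
gcloud("dataplex", "lakes", "create", LAKE, "--location", REGION)

# Zones separate stages of curation: RAW for landing and raw files,
# CURATED for the final BigQuery data product that gets shared.
gcloud("dataplex", "zones", "create", "raw-zone",
       "--lake", LAKE, "--location", REGION,
       "--type", "RAW", "--resource-location-type", "SINGLE_REGION")
gcloud("dataplex", "zones", "create", "curated-zone",
       "--lake", LAKE, "--location", REGION,
       "--type", "CURATED", "--resource-location-type", "SINGLE_REGION")

# Assets attach the actual storage to the zones; granting a team IAM roles on
# the lake, a zone, or a single asset scopes what they can see and build on.
gcloud("dataplex", "assets", "create", "landing-files",
       "--lake", LAKE, "--zone", "raw-zone", "--location", REGION,
       "--resource-type", "STORAGE_BUCKET",
       "--resource-name", f"projects/{PROJECT}/buckets/{BUCKET}")
gcloud("dataplex", "assets", "create", "curated-product",
       "--lake", LAKE, "--zone", "curated-zone", "--location", REGION,
       "--resource-type", "BIGQUERY_DATASET",
       "--resource-name", f"projects/{PROJECT}/datasets/{DATASET}")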
