
Google Cloud Certified Professional-Data-Engineer Full Course Free

Page: 7 / 14
Question 28

You are on the data governance team and are implementing security requirements to deploy resources. You need to ensure that resources are limited to only the europe-west3 region. You want to follow Google-recommended practices. What should you do?

Options:

A.

Deploy resources with Terraform and implement a variable validation rule to ensure that the region is set to the europe-west3 region for all resources.

B.

Set the constraints/gcp.resourceLocations organization policy constraint to in:eu-locations.

C.

Create a Cloud Function to monitor all resources created and automatically destroy the ones created outside the europe-west3 region.

D.

Set the constraints/gcp.resourceLocations organization policy constraint to in:europe-west3-locations.
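For reference, the organization policy that options B and D describe can be sketched as a Python dict mirroring the v2 YAML you would pass to `gcloud org-policies set-policy`. The helper name and the organization ID are illustrative placeholders, not part of the question.

```python
# Illustrative sketch (not from the question): the resourceLocations org policy
# as a dict mirroring the v2 YAML accepted by `gcloud org-policies set-policy`.
def resource_locations_policy(org_id: str, value_group: str) -> dict:
    """Allow resource creation only in the given location value group."""
    return {
        "name": f"organizations/{org_id}/policies/gcp.resourceLocations",
        "spec": {
            "rules": [
                {"values": {"allowedValues": [value_group]}},
            ]
        },
    }

# "in:europe-west3-locations" is the value group covering only europe-west3,
# which is why option D is narrower than option B's "in:eu-locations".
policy = resource_locations_policy("123456789", "in:europe-west3-locations")
```

Using a curated value group rather than a Terraform variable check (option A) enforces the restriction at the platform level for every deployment path, not only Terraform.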

Question 29

You’ve migrated a Hadoop job from an on-premises cluster to Dataproc and GCS. Your Spark job is a complicated analytical workload that consists of many shuffling operations, and the initial data are Parquet files (on average 200–400 MB each). You see some degradation in performance after the migration to Dataproc, so you’d like to optimize for it. Keep in mind that your organization is very cost-sensitive, so you’d like to continue using Dataproc on preemptible VMs (with only 2 non-preemptible workers) for this workload.

What should you do?

Options:

A.

Increase the size of your Parquet files to at least 1 GB each.

B.

Switch to the TFRecord format (approximately 200 MB per file) instead of Parquet files.

C.

Switch from HDDs to SSDs, copy the initial data from GCS to HDFS, run the Spark job, and copy the results back to GCS.

D.

Switch from HDDs to SSDs, override the preemptible VMs configuration to increase the boot disk size.
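As a hedged sketch of the cluster shape that options C and D imply, the following builds an illustrative `gcloud dataproc clusters create` command line with SSD persistent disks, so shuffle spills and HDFS I/O are not bottlenecked on standard HDDs. The cluster name, region, and secondary-worker count are assumptions for illustration, not a verified tuning recipe.

```python
# Hedged sketch: a gcloud command for a Dataproc cluster with pd-ssd boot disks,
# 2 non-preemptible primary workers, and preemptible secondary workers.
# Cluster name, region, and worker counts are illustrative assumptions.
def dataproc_create_cmd(cluster: str, secondary_workers: int) -> list[str]:
    return [
        "gcloud", "dataproc", "clusters", "create", cluster,
        "--region", "europe-west3",
        "--num-workers", "2",                          # non-preemptible workers
        "--num-secondary-workers", str(secondary_workers),
        "--master-boot-disk-type", "pd-ssd",
        "--worker-boot-disk-type", "pd-ssd",           # SSDs instead of HDDs
    ]

cmd = dataproc_create_cmd("spark-analytics", 8)
```

The shuffle-heavy workload is the key detail: preemptible workers can be reclaimed mid-shuffle, so faster local disk and HDFS on the stable workers reduce the cost of recomputing lost shuffle data.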

Question 30

Your company's customer_order table in BigQuery stores the order history for 10 million customers, with a table size of 10 PB. You need to create a dashboard for the support team to view the order history. The dashboard has two filters, country_name and username. Both are string data types in the BigQuery table. When a filter is applied, the dashboard fetches the order history from the table and displays the query results. However, the dashboard is slow to show the results when the filters are applied. How should you redesign the BigQuery table to support faster access?
Options:

A.

Cluster the table by country field, and partition by username field.

B.

Partition the table by country and username fields.

C.

Cluster the table by country and username fields.

D.

Partition the table by _PARTITIONTIME.
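A detail worth noting: BigQuery cannot partition a table on a STRING column, which rules out options A and B and points to clustering on both filter fields (option C). A minimal DDL sketch, assuming the table and column names from the question and a hypothetical `_raw` source table:

```python
# Illustrative sketch: DDL for the redesign in option C. BigQuery cannot
# partition on STRING columns, so both filter fields become cluster keys.
# The `_raw` source table is an assumption for illustration.
def cluster_ddl(table: str, cluster_cols: list[str]) -> str:
    """Build a CREATE TABLE ... CLUSTER BY statement over the given columns."""
    cols = ", ".join(cluster_cols)
    return (
        f"CREATE TABLE `{table}`\n"
        f"CLUSTER BY {cols}\n"
        f"AS SELECT * FROM `{table}_raw`"
    )

ddl = cluster_ddl("project.dataset.customer_order", ["country_name", "username"])
```

Clustering co-locates rows with equal filter values, so queries filtering on country_name and username scan far fewer blocks of the 10 PB table.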

Question 31

You have a BigQuery dataset named "customers". All tables will be tagged by using a Data Catalog tag template named "gdpr". The template contains one mandatory field, "has_sensitive_data", with a boolean value. All employees must be able to do a simple search and find tables in the dataset that have either true or false in the "has_sensitive_data" field. However, only the Human Resources (HR) group should be able to see the data inside the tables for which "has_sensitive_data" is true. You give the all employees group the bigquery.metadataViewer and bigquery.connectionUser roles on the dataset. You want to minimize configuration overhead. What should you do next?

Options:

A.

Create the "gdpr" tag template with private visibility. Assign the bigquery.dataViewer role to the HR group on the tables that contain sensitive data.

B.

Create the "gdpr" tag template with private visibility. Assign the datacatalog.tagTemplateViewer role on this tag to the all employees group, and assign the bigquery.dataViewer role to the HR group on the tables that contain sensitive data.

C.

Create the "gdpr" tag template with public visibility. Assign the bigquery.dataViewer role to the HR group on the tables that contain sensitive data.

D.

Create the "gdpr" tag template with public visibility. Assign the datacatalog.tagTemplateViewer role on this tag to the all employees group, and assign the bigquery.dataViewer role to the HR group on the tables that contain sensitive data.
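To make the access split concrete, here is an illustrative data-shape sketch of the role bindings the private-template options imply: tag metadata searchable by everyone, table data readable only by HR. Group emails and resource paths are placeholders, and this is not an API call.

```python
# Illustrative sketch of the role bindings a private "gdpr" template implies.
# Group emails and resource keys are placeholders, not real identifiers.
bindings = {
    # Private tag template: all employees may view/search the tag metadata only.
    "tagTemplates/gdpr": [
        {"role": "roles/datacatalog.tagTemplateViewer",
         "members": ["group:all-employees@example.com"]},
    ],
    # Only HR can read data in tables tagged has_sensitive_data = true.
    "tables/with-sensitive-data": [
        {"role": "roles/bigquery.dataViewer",
         "members": ["group:hr@example.com"]},
    ],
}
```

The design point: visibility of the tag (search) and visibility of the data (query) are granted on different resources, which is what lets everyone search while only HR reads sensitive tables.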

Exam Name: Google Professional Data Engineer Exam
Last Update: Dec 22, 2024
Questions: 372