
Associate-Data-Practitioner Exam Dumps - Google Cloud Platform Questions and Answers

Question # 24

You manage an ecommerce website that has a diverse range of products. You need to forecast future product demand accurately to ensure that your company has sufficient inventory to meet customer needs and avoid stockouts. Your company's historical sales data is stored in a BigQuery table. You need to create a scalable solution that accounts for seasonality in the historical data when predicting product demand. What should you do?

Options:

A.

Use the historical sales data to train and create a BigQuery ML time series model. Use the ML.FORECAST function call to output the predictions into a new BigQuery table.

B.

Use Colab Enterprise to create a Jupyter notebook. Use the historical sales data to train a custom prediction model in Python.

C.

Use the historical sales data to train and create a BigQuery ML linear regression model. Use the ML.PREDICT function call to output the predictions into a new BigQuery table.

D.

Use the historical sales data to train and create a BigQuery ML logistic regression model. Use the ML.PREDICT function call to output the predictions into a new BigQuery table.
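
For reference, the BigQuery ML workflow described in option A fits in two queries. This is a minimal sketch, assuming hypothetical table and column names (sales.daily_sales with sale_date, product_id, and units_sold); ARIMA_PLUS is BigQuery ML's time-series model type and models seasonality automatically:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Train an ARIMA_PLUS time-series model on the historical sales data.
    client.query("""
        CREATE OR REPLACE MODEL `sales.demand_model`
        OPTIONS (
          model_type = 'ARIMA_PLUS',
          time_series_timestamp_col = 'sale_date',
          time_series_data_col = 'units_sold',
          time_series_id_col = 'product_id'
        ) AS
        SELECT sale_date, product_id, units_sold
        FROM `sales.daily_sales`
    """).result()

    # Materialize a 30-day forecast per product into a new table.
    client.query("""
        CREATE OR REPLACE TABLE `sales.demand_forecast` AS
        SELECT *
        FROM ML.FORECAST(MODEL `sales.demand_model`,
                         STRUCT(30 AS horizon, 0.9 AS confidence_level))
    """).result()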

Question # 25

Your organization has a petabyte of application logs stored as Parquet files in Cloud Storage. You need to quickly perform a one-time SQL-based analysis of the files and join them to data that already resides in BigQuery. What should you do?

Options:

A.

Create a Dataproc cluster, and write a PySpark job to join the data from BigQuery to the files in Cloud Storage.

B.

Launch a Cloud Data Fusion environment, use plugins to connect to BigQuery and Cloud Storage, and use the SQL join operation to analyze the data.

C.

Create external tables over the files in Cloud Storage, and perform SQL joins to tables in BigQuery to analyze the data.

D.

Use the bq load command to load the Parquet files into BigQuery, and perform SQL joins to analyze the data.
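
The external-table approach in option C lets BigQuery query the Parquet files in place, so a petabyte never has to be loaded for a one-time analysis. A minimal sketch, with a hypothetical bucket path, dataset, and join columns:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Define an external table over the Parquet files; the schema is
    # inferred from the files and no data is copied into BigQuery.
    client.query("""
        CREATE OR REPLACE EXTERNAL TABLE `logs.app_logs_ext`
        OPTIONS (
          format = 'PARQUET',
          uris = ['gs://my-log-bucket/app-logs/*.parquet']
        )
    """).result()

    # Join the external table to a native BigQuery table with standard SQL.
    rows = client.query("""
        SELECT a.account_id, COUNT(*) AS error_count
        FROM `logs.app_logs_ext` AS l
        JOIN `analytics.accounts` AS a ON l.account_id = a.account_id
        WHERE l.severity = 'ERROR'
        GROUP BY a.account_id
    """).result()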

Question # 26

Your organization has several datasets in BigQuery. The datasets need to be shared with your external partners so that they can run SQL queries without needing to copy the data to their own projects. You have organized each partner’s data in its own BigQuery dataset. Each partner should be able to access only their data. You want to share the data while following Google-recommended practices. What should you do?

Options:

A.

Use Analytics Hub to create a listing on a private data exchange for each partner dataset. Allow each partner to subscribe to their respective listings.

B.

Create a Dataflow job that reads from each BigQuery dataset and pushes the data into a dedicated Pub/Sub topic for each partner. Grant each partner the pubsub.subscriber IAM role.

C.

Export the BigQuery data to a Cloud Storage bucket. Grant the partners the storage.objectUser IAM role on the bucket.

D.

Grant the partners the bigquery.user IAM role on the BigQuery project.
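
The Analytics Hub flow from option A can be sketched with the Python client library. The project, location, exchange, and dataset names below are hypothetical, and the calls assume the google-cloud-bigquery-analyticshub package:

    from google.cloud import bigquery_analyticshub_v1

    client = bigquery_analyticshub_v1.AnalyticsHubServiceClient()

    # Create one private data exchange to hold the partner listings.
    exchange = client.create_data_exchange(
        parent="projects/my-project/locations/us",
        data_exchange_id="partner_exchange",
        data_exchange=bigquery_analyticshub_v1.DataExchange(
            display_name="Partner exchange"
        ),
    )

    # Create a listing that points at one partner's BigQuery dataset.
    listing = client.create_listing(
        parent=exchange.name,
        listing_id="partner_a",
        listing=bigquery_analyticshub_v1.Listing(
            display_name="Partner A data",
            bigquery_dataset=bigquery_analyticshub_v1.Listing.BigQueryDatasetSource(
                dataset="projects/my-project/datasets/partner_a"
            ),
        ),
    )

Each partner is then granted the Analytics Hub subscriber role on only their own listing, so they can subscribe to a linked, read-only dataset in their project without any data being copied.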

Question # 27

Your organization uses a BigQuery table that is partitioned by ingestion time. You need to remove data that is older than one year to reduce your organization’s storage costs. You want to use the most efficient approach while minimizing cost. What should you do?

Options:

A.

Create a scheduled query that periodically runs an update statement in SQL that sets the "deleted" column to "yes" for data that is more than one year old. Create a view that filters out rows that have been marked deleted.

B.

Create a view that filters out rows that are older than one year.

C.

Require users to specify a partition filter using the ALTER TABLE statement in SQL.

D.

Set the table partition expiration period to one year using the ALTER TABLE statement in SQL.
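
The partition expiration in option D is a single DDL statement; BigQuery then drops partitions older than the expiration automatically, with no scheduled queries or scan costs. A minimal sketch, assuming a hypothetical daily ingestion-time partitioned table named logs.events:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Expire partitions after one year; deletion then happens automatically.
    client.query("""
        ALTER TABLE `logs.events`
        SET OPTIONS (partition_expiration_days = 365)
    """).result()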

Question # 28

You work for a healthcare company that has a large on-premises data system containing patient records with personally identifiable information (PII) such as names, addresses, and medical diagnoses. You need a standardized managed solution that de-identifies PII across all your data feeds prior to ingestion to Google Cloud. What should you do?

Options:

A.

Use Cloud Run functions to create a serverless data cleaning pipeline. Store the cleaned data in BigQuery.

B.

Use Cloud Data Fusion to transform the data. Store the cleaned data in BigQuery.

C.

Load the data into BigQuery, and inspect the data by using SQL queries. Use Dataflow to transform the data and remove any errors.

D.

Use Apache Beam to read the data and perform the necessary cleaning and transformation operations. Store the cleaned data in BigQuery.
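
Whichever managed tool runs the pipeline (Cloud Data Fusion, for example, ships a Cloud DLP plugin), the de-identification step itself is typically delegated to Sensitive Data Protection (Cloud DLP). A minimal sketch of its de-identify API, with a hypothetical project and record; the info types shown cover names and addresses only:

    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()

    # Replace detected PII with its info-type name before ingestion.
    response = dlp.deidentify_content(
        request={
            "parent": "projects/my-project/locations/global",
            "item": {"value": "Patient Jane Roe, 123 Main St, diagnosis: flu"},
            "inspect_config": {
                "info_types": [
                    {"name": "PERSON_NAME"},
                    {"name": "STREET_ADDRESS"},
                ]
            },
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {
                            "primitive_transformation": {
                                "replace_with_info_type_config": {}
                            }
                        }
                    ]
                }
            },
        }
    )
    print(response.item.value)
    # e.g. "Patient [PERSON_NAME], [STREET_ADDRESS], diagnosis: flu"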

Question # 29

You need to create a new data pipeline. You want a serverless solution that meets the following requirements:

• Data is streamed from Pub/Sub and is processed in real-time.

• Data is transformed before being stored.

• Data is stored in a location that will allow it to be analyzed with SQL using Looker.

Which Google Cloud services should you recommend for the pipeline?

Options:

A.

1. Dataproc Serverless

2. Bigtable

B.

1. Cloud Composer

2. Cloud SQL for MySQL

C.

1. BigQuery

2. Analytics Hub

D.

1. Dataflow

2. BigQuery
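
The pairing in option D maps directly onto the requirements: Dataflow handles the serverless streaming transform, and BigQuery stores the result where Looker can query it with SQL. A minimal Apache Beam sketch, with a hypothetical topic, table, and transform (it assumes the destination table already exists):

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def add_total(record):
        # Hypothetical transform: derive a total before storage.
        record["total"] = record["quantity"] * record["unit_price"]
        return record

    # Run with --runner=DataflowRunner for a serverless managed pipeline.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/orders")
            | "Parse" >> beam.Map(json.loads)
            | "Transform" >> beam.Map(add_total)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:sales.orders",
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )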

Exam Name: Google Cloud Associate Data Practitioner (ADP Exam)
Last Update: Mar 31, 2025
Questions: 106