
Google Professional Machine Learning Engineer (Professional-Machine-Learning-Engineer) Exam Dumps

Question 64

Your company stores a large number of audio files of phone calls made to your customer call center in an on-premises database. Each audio file is in WAV format and is approximately 5 minutes long. You need to analyze these audio files for customer sentiment. You plan to use the Speech-to-Text API. You want to use the most efficient approach. What should you do?

Options:

A.

1. Upload the audio files to Cloud Storage.

2. Call the speech:longrunningrecognize API endpoint to generate transcriptions.

3. Call the predict method of an AutoML sentiment analysis model to analyze the transcriptions.

B.

1. Upload the audio files to Cloud Storage.

2. Call the speech:longrunningrecognize API endpoint to generate transcriptions.

3. Create a Cloud Function that calls the Natural Language API by using the analyzeSentiment method.

C.

1. Iterate over your local files in Python.

2. Use the Speech-to-Text Python library to create a speech.RecognitionAudio object, and set the content to the audio file data.

3. Call the speech:recognize API endpoint to generate transcriptions.

4. Call the predict method of an AutoML sentiment analysis model to analyze the transcriptions.

D.

1. Iterate over your local files in Python.

2. Use the Speech-to-Text Python library to create a speech.RecognitionAudio object, and set the content to the audio file data.

3. Call the speech:longrunningrecognize API endpoint to generate transcriptions.

4. Call the Natural Language API by using the analyzeSentiment method.
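For reference, here is a minimal, hypothetical sketch of the Cloud Storage + speech:longrunningrecognize + Natural Language analyzeSentiment flow that several of the options describe, using the Google Cloud Python client libraries. The bucket and object names are placeholders.

```python
# Hypothetical sketch: transcribe a call recording stored in Cloud Storage with the
# asynchronous speech:longrunningrecognize endpoint, then score sentiment with the
# Natural Language API's analyzeSentiment method.
from google.cloud import speech
from google.cloud import language_v1

speech_client = speech.SpeechClient()
language_client = language_v1.LanguageServiceClient()

# Point the API at the file in Cloud Storage instead of streaming local bytes.
# For WAV input, the file header supplies the encoding and sample rate.
audio = speech.RecognitionAudio(uri="gs://example-bucket/calls/call-0001.wav")
config = speech.RecognitionConfig(language_code="en-US")

operation = speech_client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=600)  # wait for the asynchronous job to finish
transcript = " ".join(r.alternatives[0].transcript for r in response.results)

# Send the transcript to the Natural Language API for sentiment analysis.
document = language_v1.Document(
    content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
)
sentiment = language_client.analyze_sentiment(document=document).document_sentiment
print(f"score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")
```

Note that the synchronous speech:recognize endpoint only accepts roughly one minute of audio, which is why the long-running (asynchronous) endpoint appears in the options that read five-minute recordings from Cloud Storage.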

Question 65

You manage a team of data scientists who use a cloud-based backend system to submit training jobs. This system has become very difficult to administer, and you want to use a managed service instead. The data scientists you work with use many different frameworks, including Keras, PyTorch, Theano, scikit-learn, and custom libraries. What should you do?

Options:

A.

Use Vertex AI Training to submit training jobs using any framework.

B.

Configure Kubeflow to run on Google Kubernetes Engine and submit training jobs through TFJob.

C.

Create a library of VM images on Compute Engine, and publish these images on a centralized repository.

D.

Set up Slurm workload manager to receive jobs that can be scheduled to run on your cloud infrastructure.
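For context, the following is a minimal, hypothetical sketch of submitting a framework-agnostic custom training job to Vertex AI Training with the Python SDK. The project, region, bucket, training script, and container image are all placeholders; any framework can run as long as it is available in the training container.

```python
# Hypothetical sketch: run a local training script on Vertex AI Training.
from google.cloud import aiplatform

aiplatform.init(
    project="example-project",
    location="us-central1",
    staging_bucket="gs://example-staging-bucket",
)

# Package a local script and execute it on managed training infrastructure.
job = aiplatform.CustomTrainingJob(
    display_name="sklearn-training-job",
    script_path="train.py",  # local script; could use Keras, PyTorch, scikit-learn, etc.
    container_uri="us-docker.pkg.dev/vertex-ai/training/sklearn-cpu.1-0:latest",  # placeholder prebuilt image
)

job.run(
    args=["--epochs", "10"],
    replica_count=1,
    machine_type="n1-standard-4",
)
```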

Question 66

You are developing a training pipeline for a new XGBoost classification model based on tabular data. The data is stored in a BigQuery table. You need to complete the following steps:

1. Randomly split the data into training and evaluation datasets in a 65/35 ratio.

2. Conduct feature engineering.

3. Obtain metrics for the evaluation dataset.

4. Compare models trained in different pipeline executions.

How should you execute these steps?

Options:

A.

1. Using Vertex AI Pipelines, add a component to divide the data into training and evaluation sets, and add another component for feature engineering.

2. Enable autologging of metrics in the training component.

3. Compare pipeline runs in Vertex AI Experiments.

B.

1. Using Vertex AI Pipelines, add a component to divide the data into training and evaluation sets, and add another component for feature engineering.

2. Enable autologging of metrics in the training component.

3. Compare models using the artifacts' lineage in Vertex ML Metadata.

C.

1. In BigQuery ML, use the CREATE MODEL statement with BOOSTED_TREE_CLASSIFIER as the model type, and use BigQuery to handle the data splits.

2. Use a SQL view to apply feature engineering, and train the model using the data in that view.

3. Compare the evaluation metrics of the models by using a SQL query with the ML.TRAINING_INFO statement.

D.

1. In BigQuery ML, use the CREATE MODEL statement with BOOSTED_TREE_CLASSIFIER as the model type, and use BigQuery to handle the data splits.

2. Use the TRANSFORM clause to specify the feature engineering transformations, and train the model using the data in the table.

3. Compare the evaluation metrics of the models by using a SQL query with the ML.TRAINING_INFO statement.
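For context, here is a minimal, hypothetical sketch of recording and comparing run metrics with Vertex AI Experiments, the comparison mechanism referenced in option A. The experiment name, run name, parameters, and metric values below are placeholders; in a pipeline, autologging in the training component would typically capture metrics rather than explicit log_metrics calls.

```python
# Hypothetical sketch: track evaluation metrics per pipeline execution in
# Vertex AI Experiments and compare the runs afterwards.
from google.cloud import aiplatform

aiplatform.init(
    project="example-project",
    location="us-central1",
    experiment="xgboost-tabular-experiment",
)

# Log parameters and evaluation metrics for one training/evaluation run.
aiplatform.start_run("run-65-35-split")
aiplatform.log_params({"split_ratio": 0.65, "max_depth": 6})
aiplatform.log_metrics({"auc": 0.91, "accuracy": 0.87})  # placeholder values
aiplatform.end_run()

# Pull every run of the experiment into a DataFrame to compare executions.
runs_df = aiplatform.get_experiment_df()
print(runs_df[["run_name", "metric.auc", "metric.accuracy"]])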

Question 67

You need to train a natural language model to perform text classification on product descriptions that contain millions of examples and 100,000 unique words. You want to preprocess the words individually so that they can be fed into a recurrent neural network. What should you do?

Options:

A.

Create a one-hot encoding of words, and feed the encodings into your model.

B.

Identify word embeddings from a pre-trained model, and use the embeddings in your model.

C.

Sort the words by frequency of occurrence, and use the frequencies as the encodings in your model.

D.

Assign a numerical value to each word from 1 to 100,000 and feed the values as inputs in your model.
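For reference, the following is a minimal, hypothetical sketch of reusing pre-trained word embeddings (the approach option B describes) as a frozen Embedding layer feeding a recurrent network in Keras. The vocabulary size, sequence length, number of classes, and the randomly generated embedding matrix are placeholders standing in for vectors looked up from a real pre-trained model.

```python
# Hypothetical sketch: pre-trained word embeddings feeding an RNN text classifier.
import numpy as np
import tensorflow as tf

vocab_size = 100_000   # one row per unique word in the corpus
embedding_dim = 300    # dimension of the pre-trained vectors
max_length = 200       # product descriptions padded/truncated to this length
num_classes = 20       # placeholder number of product categories

# In practice each row would be copied from a pre-trained embedding lookup keyed
# by the word's integer index; random values here only keep the sketch self-contained.
embedding_matrix = np.random.normal(size=(vocab_size, embedding_dim)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_length,), dtype="int32"),
    tf.keras.layers.Embedding(
        input_dim=vocab_size,
        output_dim=embedding_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,  # keep the pre-trained embeddings frozen
    ),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```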
