PMLE04
Question List
1
Include the flag --runner=DataflowRunner in beam_pipeline_args to run the evaluation step on Dataflow.
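For context, a minimal sketch of what those Beam arguments could look like in a TFX pipeline definition. The project, region, and bucket values below are placeholders, not real resources:

```python
# Hypothetical Beam arguments for running TFX components on Dataflow.
# Every resource name here is a placeholder.
beam_pipeline_args = [
    "--runner=DataflowRunner",
    "--project=my-gcp-project",            # placeholder project ID
    "--region=us-central1",                # placeholder region
    "--temp_location=gs://my-bucket/tmp",  # placeholder staging bucket
]
print(beam_pipeline_args[0])  # --runner=DataflowRunner
```

Passing these args to the pipeline makes Beam-based components (such as the Evaluator) execute on Dataflow instead of locally.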
2
Apply one-hot encoding on the categorical variables in the test data
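The key point is that the test data must be encoded with the category vocabulary learned from the training split, so unseen test values do not shift the columns. A minimal pure-Python sketch (function names are illustrative):

```python
def fit_categories(train_values):
    """Learn a fixed category order from the training split only."""
    return sorted(set(train_values))

def one_hot(value, categories):
    """Encode one value against the training vocabulary.
    A value unseen in training maps to an all-zeros vector."""
    return [1 if value == c else 0 for c in categories]

cats = fit_categories(["red", "blue", "red", "green"])
print(cats)                     # ['blue', 'green', 'red']
print(one_hot("red", cats))     # [0, 0, 1]
print(one_hot("purple", cats))  # [0, 0, 0]  (unseen in training)
```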
3
Oversample the fraudulent transactions 10 times.
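A sketch of plain duplication-based oversampling for an imbalanced fraud dataset. The data and field names are made up, and real pipelines often prefer SMOTE or class weights over raw duplication:

```python
import random

def oversample(rows, is_minority, factor=10, seed=0):
    """Duplicate minority-class rows `factor` times, then shuffle.

    `rows` is a list of examples; `is_minority` flags the rare class.
    Illustrative only -- duplication is the simplest rebalancing scheme."""
    rng = random.Random(seed)
    out = []
    for row in rows:
        out.extend([row] * (factor if is_minority(row) else 1))
    rng.shuffle(out)
    return out

# Toy dataset: 98 legitimate transactions, 2 fraudulent ones.
data = [{"amount": 10, "fraud": 0}] * 98 + [{"amount": 9000, "fraud": 1}] * 2
balanced = oversample(data, lambda r: r["fraud"] == 1)
print(len(balanced))  # 118 = 98 + 2 * 10
```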
4
F1 score
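The F1 score is the harmonic mean of precision and recall, which is why it is a common pick for imbalanced classification. A minimal implementation:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall. Penalizes imbalance:
    a model cannot score well by maximizing only one of the two."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(1.0, 1.0))  # 1.0
print(f1_score(0.5, 0.5))  # 0.5
print(f1_score(0.9, 0.1))  # ~0.18 -- far below the arithmetic mean of 0.5
```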
5
Use the tf.distribute.Strategy API and run a distributed training job.
6
Train a classification Vertex AutoML model.
7
Verify that your model can obtain a low loss on a small subset of the dataset
8
Develop a regression model using BigQuery ML.
9
Add synthetic training data where those phrases are used in non-toxic ways.
10
Use Vertex Explainable AI. Submit each prediction request with the "explain" keyword to retrieve feature attributions using the sampled Shapley method.
11
The model with the highest recall where precision is greater than 0.5.
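That selection rule can be sketched as a filter-then-max over candidate models. The candidate names and metric values below are made up:

```python
def select_model(models, min_precision=0.5):
    """Among models meeting the precision floor, pick the one with
    the highest recall. Returns None if no model qualifies."""
    eligible = [m for m in models if m["precision"] > min_precision]
    return max(eligible, key=lambda m: m["recall"], default=None)

candidates = [
    {"name": "a", "precision": 0.45, "recall": 0.95},  # fails precision floor
    {"name": "b", "precision": 0.60, "recall": 0.80},
    {"name": "c", "precision": 0.90, "recall": 0.55},
]
print(select_model(candidates)["name"])  # b
```

Model "a" has the best recall overall but is excluded by the precision constraint, which is exactly the trade-off the answer describes.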
12
Train your model using Vertex AI Training with CPUs.
13
Use a low latency database for the customers’ historic purchase behavior.
14
Turn off auto-scaling for the online prediction service of your new model. Use manual scaling with one node always available.
15
Write a query that preprocesses the data by using BigQuery and creates a new table. Create a Vertex AI managed dataset with the new table as the data source.
16
Trigger GitHub Actions to run the tests, launch a Cloud Build workflow to build custom Docker images, push the images to Artifact Registry, and launch the pipeline in Vertex AI Pipelines.
17
Use the ML.ONE_HOT_ENCODER function on the categorical features and select the encoded categorical features and non-categorical features as inputs to create your model.
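A sketch of what such a query might look like, held as a string for submission through a BigQuery client. Table, column, and project names are placeholders; ML.ONE_HOT_ENCODER is documented as a BigQuery ML analytic function taking an empty OVER() clause, but check the BigQuery docs for the full signature before relying on it:

```python
# Hypothetical query text; all identifiers are placeholders.
query = """
SELECT
  ML.ONE_HOT_ENCODER(payment_type) OVER () AS payment_type_encoded,
  trip_distance,        -- non-categorical feature passed through as-is
  fare_amount AS label
FROM `my_project.my_dataset.trips`
"""
print("ML.ONE_HOT_ENCODER" in query)  # True
```

The encoded categorical column and the untouched numeric columns together form the model's input feature set.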
18
Import the labeled images as a managed dataset in Vertex AI and use AutoML to train the model.
19
Decrease the score threshold. Add more positive examples to the training set.
20
Deploy an online Vertex AI prediction endpoint. Set the max replica count to 100
21
Configure a v3-8 TPU VM. SSH into the VM to train and debug the model.
22
1. Create an experiment in Vertex AI Experiments. 2. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline’s parameters to include those you are investigating. 3. Submit multiple runs to the same experiment, using different values for the parameters.
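Step 3 above amounts to a parameter sweep: one pipeline run per combination, all logged to the same experiment so Vertex AI Experiments can compare them. A sketch of building that grid (the hyperparameter names and ranges are illustrative, not from any real pipeline):

```python
from itertools import product

# Hypothetical hyperparameters under investigation.
param_grid = {
    "learning_rate": [0.001, 0.01],
    "batch_size": [32, 64],
}

# One run per combination; each dict would become one pipeline
# submission logged under the shared experiment.
runs = [dict(zip(param_grid, values)) for values in product(*param_grid.values())]
for i, params in enumerate(runs):
    print(f"run-{i}: {params}")
print(len(runs))  # 4
```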
23
Update the model monitoring job to use the more recent training data that was used to retrain the model.
24
Use the features and the feature attributions for monitoring. Set a prediction-sampling-rate value that is closer to 0 than 1.
25
1. Wrap your model in a custom prediction routine (CPR), and build a container image from the CPR local model. 2. Upload your scikit-learn model container to Vertex AI Model Registry. 3. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job.
26
Write SQL queries to transform the data in-place in BigQuery.
27
Enable caching for the pipeline job, and disable caching for the model training step.
28
Upload the custom model to Vertex AI Model Registry and configure feature-based attribution by using sampled Shapley with input baselines.
29
Use the Predictor interface to implement a custom prediction routine. Build the custom container, upload the container to Vertex AI Model Registry and deploy it to a Vertex AI endpoint.
30
Set up a CI/CD pipeline that builds and tests your source code and then deploys built artifacts into a pre-production environment. After a successful pipeline run in the pre-production environment, deploy the pipeline to production.
31
Enable VPC Service Controls for peerings, and add Vertex AI to a service perimeter.
32
1. Create a new model. Set the parentModel parameter to the model ID of the currently deployed model. Upload the model to Vertex AI Model Registry. 2. Deploy the new model to the existing endpoint, and set the new model to 100% of the traffic
33
Increase the batch size
34
Use Vertex AI chronological split, and specify the sales timestamp feature as the time variable
35
1. Create a Vertex AI Model Monitoring job configured to monitor prediction drift 2. Configure alert monitoring to publish a message to a Pub/Sub queue when a monitoring alert is detected 3. Use a Cloud Function to monitor the Pub/Sub queue, and trigger retraining in BigQuery
36
Use the Kubeflow pipelines SDK to write code that specifies two components: - The first is a Dataproc Serverless component that launches the feature engineering job - The second is a custom component wrapped in the create_custom_training_job_from_component utility that launches the custom model training job Create a Vertex AI Pipelines job to link and run both components
37
Configure an appropriate minReplicaCount value based on expected baseline traffic
38
1. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint 2. Create a Vertex AI Model Monitoring job with feature drift detection as the monitoring objective, and provide an instance schema
39
Import the TensorFlow model by using the CREATE MODEL statement in BigQuery ML. Apply the historical data to the TensorFlow model
40
Decrease the sample_rate parameter in the RandomSampleConfig of the monitoring job
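A sketch of the relevant fragment of a model-monitoring job configuration. The field names follow the Vertex AI ModelDeploymentMonitoringJob REST API (loggingSamplingStrategy, randomSampleConfig, sampleRate), but the values are placeholders:

```python
# Lowering sampleRate reduces how many prediction requests are logged
# and analyzed by the monitoring job, which cuts its cost.
monitoring_job = {
    "loggingSamplingStrategy": {
        "randomSampleConfig": {
            "sampleRate": 0.1,  # e.g. down from 0.8: sample 10% of requests
        }
    }
}
rate = monitoring_job["loggingSamplingStrategy"]["randomSampleConfig"]["sampleRate"]
print(rate)  # 0.1
```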
41
Create a batch prediction job by using the actual sales data, and configure the job settings to generate feature attributions. Compare the results in the report.
42
Enable model monitoring on the Vertex AI endpoint. Configure Pub/Sub to call the Cloud Function when feature drift is detected
43
1. Upload the audio files to Cloud Storage. 2. Call the speech:longrunningrecognize API endpoint to generate transcriptions 3. Create a Cloud Function that calls the Natural Language API by using the analyzeSentiment method
44
Train the model by using AutoML Edge, and export it as a Core ML model. Configure your mobile application to use the .mlmodel file directly.
45
Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases. Deploy the model to a Vertex AI endpoint and enable feature attributions. Use the “explain” method to get feature attribution values for each individual prediction.
46
Use the Vertex AI Vision Occupancy Analytics model.
47
Use Tabular Workflow for TabNet through Vertex AI Pipelines to train attention-based models
48
Create a training job that uses Cloud TPU VMs. Use tf.distribute.TPUStrategy for distribution.
49
1. Use a Vertex AI Pipelines custom training job component to train your model. 2. Generate predictions by using a Vertex AI Pipelines model batch predict component.
50
Uptrain a Document AI custom extractor to parse the text in the comments section of each PDF file. Use the Natural Language API analyzeSentiment feature to infer overall satisfaction scores.