PMLE05
Question List
1
Change the components’ YAML filenames to export.yaml, preprocess.yaml, f"train-{dt}.yaml", and f"calibrate-{dt}.yaml".
2
Create a Vertex AI Workbench notebook with instance type n2-standard-4.
3
Vertex ML Metadata, Vertex AI Experiments, and Vertex AI TensorBoard
4
Train an object detection model in AutoML by using the annotated image data.
5
Use the Cloud Data Loss Prevention (DLP) API to de-identify the PII before performing data exploration and preprocessing.
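For reference, the de-identification step can be sketched locally. This is a regex-based stand-in for illustration only, not the DLP API itself (the real answer calls the Cloud DLP service, which detects many more infoTypes than the two hypothetical patterns below):

```python
import re

# Hypothetical local stand-in for DLP de-identification: mask email
# addresses and US-style phone numbers before data exploration.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace each detected PII span with an infoType placeholder."""
    for info_type, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{info_type}]", text)
    return text

masked = deidentify("Contact jane.doe@example.com or 555-123-4567.")
```

The key point the answer tests is ordering: masking happens before exploration and preprocessing, so raw PII never reaches the analysis environment.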
6
Use TensorFlow to create a deep learning-based model, and use Integrated Gradients to explain the model output.
7
Create a Vertex AI tabular dataset. Train a Vertex AI AutoML Forecasting model with number of beds as the target variable, number of scheduled surgeries as a covariate, and date as the time variable.
8
Use the Kubeflow Pipelines SDK to implement the pipeline. Use the BigQueryJobOp component to run the preprocessing script and the CustomTrainingJobOp component to launch a Vertex AI training job.
9
Configure the machines of the first two worker pools to have GPUs and to use a container image where your training code runs. Configure the third worker pool to use the reductionserver container image without accelerators, and choose a machine type that prioritizes bandwidth.
10
Refactor the transformation code in the batch data pipeline so that it can be used outside of the pipeline. Use the same code in the endpoint.
11
Create an Apache Beam pipeline to read the data from BigQuery and preprocess it by using TensorFlow Transform and Dataflow.
12
Implement a TPU Pod slice with --accelerator-type=v4-128 by using tf.distribute.TPUStrategy.
13
Use the TensorFlow Extended (TFX) SDK to create multiple components that use Dataflow and Vertex AI services. Deploy the workflow on Vertex AI Pipelines.
14
Chain the Vertex AI ModelUploadOp and ModelDeployOp components together
15
Use BigQuery ML to build a statistical ARIMA_PLUS model.
16
TextDatasetCreateOp, CustomTrainingJobOp, and ModelDeployOp
17
Configure a Cloud Build trigger with the event set to “Push to a branch”
18
Decrease the CPU utilization target in the autoscaling configurations
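The reasoning behind this answer can be checked with the usual target-tracking formula (the one used by, e.g., the Kubernetes HPA): a lower utilization target makes the autoscaler provision more replicas at the same load, adding headroom for traffic spikes.

```python
import math

def desired_replicas(current_replicas: int, current_util: float,
                     target_util: float) -> int:
    # Standard target-tracking formula: scale so that per-replica
    # utilization lands on the configured target.
    return math.ceil(current_replicas * current_util / target_util)

# With 4 replicas running at 80% CPU:
high_target = desired_replicas(4, 0.80, 0.75)  # 75% target
low_target = desired_replicas(4, 0.80, 0.50)   # 50% target
```

Lowering the target from 75% to 50% raises the desired replica count from 5 to 7 for the same observed load.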
19
Use the RunInference API with WatchFilePattern in a Dataflow job that wraps around the model and serves predictions.
20
1. Create a new service account and grant it the Notebook Viewer role 2. Grant the Service Account User role to each team member on the service account 3. Grant the Vertex AI User role to each team member 4. Provision a Vertex AI Workbench user-managed notebook instance that uses the new service account
21
Use AutoML Entity Extraction to train a medical entity extraction model
22
Keep the training dataset as is. Deploy both models to the same endpoint and submit a Vertex AI Model Monitoring job with a monitoring-config-from-file parameter that accounts for the model IDs and feature selections.
23
Download the weather data each week, and download the flu data each month. Deploy the model to a Vertex AI endpoint with feature drift monitoring, and retrain the model if a monitoring alert is detected.
24
Store parameters in Vertex ML Metadata, store the models’ source code in GitHub, and store the models’ binaries in Cloud Storage.
25
Define a fairness metric that is represented by accuracy across the sensitive features. Train a BigQuery ML boosted trees classification model with all features. Use the trained model to make predictions on a test set. Join the data back with the sensitive features, and calculate a fairness metric to investigate whether it meets your requirements.
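The join-and-compute step above can be sketched as follows. The group labels and predictions are made up for illustration; the metric is simply accuracy computed per sensitive group:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, sensitive):
    """Per-group accuracy, computed after joining predictions
    back with the sensitive feature."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, sensitive):
        total[g] += 1
        correct[g] += int(yt == yp)
    return {g: correct[g] / total[g] for g in total}

# Illustrative (made-up) test-set predictions:
acc = accuracy_by_group(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 1, 0, 0],
    sensitive=["a", "a", "a", "b", "b", "b"],
)
# A large gap between groups flags a potential fairness issue.
gap = max(acc.values()) - min(acc.values())
```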
26
Build a custom predictor class based on XGBoost Predictor from the Vertex AI SDK, and package the handler in a custom container image based on a Vertex built-in container image. Store a pickled model in Cloud Storage, and deploy the model to Vertex AI Endpoints.
27
Use Vertex Explainable AI with the sampled Shapley method, and enable Vertex AI Model Monitoring to check for feature distribution drift.
28
Use the lineage feature of Vertex AI Metadata to find the model artifact. Determine the version of the model and identify the step that creates the data copy and search in the metadata for its location.
29
Configure example-based explanations. Specify the embedding output layer to be used for the latent space representation.
30
Vertex AI Pipelines, Vertex AI Experiments, and Vertex AI Metadata
31
Perform preprocessing in BigQuery by using SQL. Use the BigQueryClient in TensorFlow to read the data directly from BigQuery.
32
1. Create a Dataflow job that creates sharded TFRecord files in a Cloud Storage directory. 2. Reference tf.data.TFRecordDataset in the training script. 3. Train the model by using Vertex AI Training with a V100 GPU.
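The sharding logic of step 1 can be sketched without Dataflow or TensorFlow. This is a conceptual stand-in: records are assigned round-robin to shard files named with the conventional NNNNN-of-NNNNN pattern that sharded TFRecord writers produce (the filename pattern and prefix here are assumptions for illustration):

```python
def shard_records(records, num_shards, prefix="train"):
    """Assign records round-robin to shard files, mirroring what a
    sharded-output Dataflow write would do."""
    shards = {
        f"{prefix}-{i:05d}-of-{num_shards:05d}.tfrecord": []
        for i in range(num_shards)
    }
    names = sorted(shards)
    for idx, rec in enumerate(records):
        shards[names[idx % num_shards]].append(rec)
    return shards

shards = shard_records(list(range(10)), num_shards=4)
```

Many similarly sized shards let tf.data.TFRecordDataset read files in parallel, keeping the V100 GPU fed during training.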
33
dsl.ParallelFor, dsl.component, and CustomTrainingJobOp
34
Import the new model to the same Vertex AI Model Registry as a different version of the existing model. Deploy the new model to the same Vertex AI endpoint as the existing model, and use traffic splitting to route 95% of production traffic to the BigQuery ML model and 5% of production traffic to the new model.
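The effect of the 95/5 traffic split can be simulated with weighted random routing. The model IDs below are illustrative, not Vertex AI identifiers; the endpoint performs this routing server-side:

```python
import random

def route(traffic_split, rng):
    """Pick a model ID according to the endpoint's traffic split,
    e.g. {"bqml-model": 95, "new-model": 5} (illustrative IDs)."""
    models = list(traffic_split)
    weights = [traffic_split[m] for m in models]
    return rng.choices(models, weights=weights, k=1)[0]

rng = random.Random(0)
split = {"bqml-model": 95, "new-model": 5}
counts = {"bqml-model": 0, "new-model": 0}
for _ in range(10_000):
    counts[route(split, rng)] += 1
share_new = counts["new-model"] / 10_000  # ~0.05
```

Routing only 5% of traffic to the new model limits the blast radius while its production behavior is validated.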
35
1. Use Vertex Explainable AI to generate feature attributions. Aggregate feature attributions over the entire dataset. 2. Analyze the aggregation result together with the standard model evaluation metrics.
36
Create a logistic regression model in BigQuery ML and register the model in Vertex AI Model Registry. Evaluate the model performance in Vertex AI.
37
Develop the model training code for image classification, and train a model by using Vertex AI custom training.
38
Increase the number of workers in your model server
39
Send user-submitted images to the Cloud Vision API. Use object localization to identify all objects in the image and compare the results against a list of animals.
40
Import the model into Vertex AI Model Registry. Create a Vertex AI endpoint that hosts the model, and make online inference requests.
41
Add a component to the Vertex AI pipeline that logs metrics to Vertex ML Metadata. Use Vertex AI Experiments to compare different executions of the pipeline. Use Vertex AI TensorBoard to visualize metrics.
42
Create a Vertex AI experiment. Submit all the pipelines as experiment runs. For models trained in notebooks, log parameters and metrics by using the Vertex AI SDK.
43
Set up Vertex AI Experiments to track metrics and parameters. Configure Vertex AI TensorBoard for visualization.
44
Deploy a Dataflow streaming pipeline with the RunInference API, and use automatic model refresh.
45
Compare the results to the evaluation results from a previous run. If the performance improved, deploy the model to a Vertex AI endpoint with training/serving skew threshold model monitoring. When the model monitoring threshold is triggered, redeploy the pipeline.
46
Alter the model by using BigQuery ML, and specify Vertex AI as the model registry. Deploy the model from Vertex AI Model Registry to a Vertex AI endpoint.
47
Create a Vertex AI Model Monitoring job. Enable feature attribution skew and drift detection for your model.
48
Pull the Docker image locally, and use the docker run command to launch it locally. Use the docker logs command to explore the error logs
49
Convert the images to TFRecords and store them in a Cloud Storage bucket. Read the TFRecords by using the tf.data.TFRecordDataset function.
50
Prepare the data in BigQuery and associate the data with a Vertex AI dataset. Create an AutoMLTabularTrainingJob to train a classification model.