PMLE07
Question List
1
Use Vertex Explainable AI to generate feature attributions, and use feature-based explanations for your models.
2
Use the Cloud Document AI API to extract information from the invoices and receipts.
3
Configure Vertex AI Vector Search as the search platform’s backend.
4
Deploy the model to a Vertex AI endpoint, and configure the model for batch prediction. Schedule the batch prediction to run weekly.
5
Use a combination of Vertex AI Pipelines and the Vertex AI SDK to integrate metadata tracking into the ML workflow.
6
Create a logistic regression model in BigQuery ML. Use the ML.CONFUSION_MATRIX function to evaluate the model performance.
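BigQuery ML's `ML.CONFUSION_MATRIX` tabulates predicted versus actual labels for a classifier. As a minimal pure-Python sketch of the same computation (the labels and scores below are invented for illustration):

```python
# Illustrative confusion-matrix computation, mirroring what BigQuery ML's
# ML.CONFUSION_MATRIX reports for a binary logistic regression model.

def confusion_matrix(actual, scores, threshold=0.5):
    """Return (tp, fp, fn, tn) for binary labels at a given score threshold."""
    tp = fp = fn = tn = 0
    for y, s in zip(actual, scores):
        pred = 1 if s >= threshold else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 1:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

actual = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.6, 0.4, 0.7, 0.1, 0.8, 0.3]
print(confusion_matrix(actual, scores))  # (3, 1, 1, 3)
```

In BigQuery ML the equivalent is `SELECT * FROM ML.CONFUSION_MATRIX(MODEL mydataset.mymodel)`, which computes these counts over the evaluation data.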
7
Create a dataset using manually labeled images. Ingest this dataset into AutoML. Train an image classification model and deploy into a Vertex AI endpoint. Integrate this endpoint with the image upload process to identify and block inappropriate uploads. Monitor predictions and periodically retrain the model.
8
Train and deploy a BigQuery ML classification model trained on historic loan default data. Enable feature-based explanations for each prediction. Report the prediction, probability of default, and feature attributions for each loan application.
9
Audit the training dataset to identify underrepresented groups and augment the dataset with additional samples before retraining the model.
10
Configure a Git repository trigger in Cloud Build to initiate retraining when there are new code commits to the model's repository and a Pub/Sub trigger when there is new data in Cloud Storage.
11
Use Colab Enterprise with Cloud Storage for data management. Use a Git repository for version control.
12
Configure one a2-highgpu-1g instance with an NVIDIA A100 GPU with 80 GB of RAM. Use bfloat16 quantization during model training.
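bfloat16 keeps float32's 8-bit exponent but shortens the mantissa to 7 bits, which is why it halves memory while preserving dynamic range. A stdlib-only sketch of the format (real frameworks round to nearest even; simple truncation of the low 16 bits keeps the illustration short):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Reduce a float32 value to bfloat16 precision by keeping only the
    top 16 bits of its bit pattern (sign, 8-bit exponent, 7-bit mantissa)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.140625))  # 3.140625 — exactly representable in 7 mantissa bits
print(to_bfloat16(1.0))       # 1.0
```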
13
Set up a Vertex AI Workbench instance with a Spark kernel.
14
Implement tf.data.Dataset.prefetch in the data pipeline.
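`tf.data.Dataset.prefetch` overlaps input loading with model training by having a background producer fill a bounded buffer while the consumer trains on the previous batch; the TensorFlow call itself is just `dataset.prefetch(buffer_size)`. A stdlib-only sketch of the same producer-consumer pattern:

```python
import queue
import threading

def prefetch(iterable, buffer_size=2):
    """Yield items from `iterable`, loading ahead in a background thread —
    the same load/train overlap that tf.data.Dataset.prefetch provides."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks end of the stream

    def producer():
        for item in iterable:
            q.put(item)  # blocks when the buffer is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while (item := q.get()) is not sentinel:
        yield item

batches = (f"batch-{i}" for i in range(4))
print(list(prefetch(batches)))  # ['batch-0', 'batch-1', 'batch-2', 'batch-3']
```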
15
Implement an orchestration framework such as Kubeflow Pipelines or Vertex AI Pipelines.
16
Use a boosted decision tree-based model architecture, and use SHAP values for interpretability.
17
Use feature attribution in Vertex AI to analyze model predictions and the impact of each feature on the model's predictions.
18
Enable request-response logging for the Vertex AI endpoint, and set up alerts using Cloud Logging. Review the feature attributions in the Google Cloud console when an alert is received.
19
Deploy the model on a Google Kubernetes Engine (GKE) cluster by using the deployment options in Model Garden.
20
Use Vertex AI Experiments for tracking iterations and comparison, and use Vertex AI TensorBoard for visualization and analysis of the training metrics and model architecture.
21
Decrease the probability threshold to classify a fraudulent transaction.
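Lowering the decision threshold flags more transactions as fraudulent, trading precision for recall — useful when missing a fraud case is costlier than a false alarm. A toy illustration with invented scores:

```python
def recall_at_threshold(actual, scores, threshold):
    """Fraction of true fraud cases (label 1) flagged at this threshold."""
    flagged = sum(1 for y, s in zip(actual, scores) if y == 1 and s >= threshold)
    return flagged / sum(actual)

actual = [1, 1, 1, 0, 0, 1]
scores = [0.9, 0.55, 0.35, 0.45, 0.2, 0.6]

print(recall_at_threshold(actual, scores, 0.5))  # 0.75 — one fraud case missed
print(recall_at_threshold(actual, scores, 0.3))  # 1.0  — all fraud cases caught
```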
22
Use the original audio sampling rate, and transcribe the audio by using the Speech-to-Text API with asynchronous recognition.
23
Split incoming traffic to distribute prediction requests among the versions. Monitor the performance of each version using Vertex AI's built-in monitoring tools.
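A Vertex AI endpoint routes requests according to a traffic-split mapping of deployed model versions to percentages. A pure-Python sketch of the weighted-routing idea (the version names and percentages are invented for illustration):

```python
import random

def route(traffic_split, rng=random):
    """Pick a model version according to its traffic percentage, as a
    Vertex AI endpoint does with its traffic-split configuration."""
    versions = list(traffic_split)
    weights = [traffic_split[v] for v in versions]
    return rng.choices(versions, weights=weights, k=1)[0]

split = {"model-v1": 80, "model-v2": 20}  # hypothetical 80/20 split
counts = {"model-v1": 0, "model-v2": 0}
for _ in range(10_000):
    counts[route(split)] += 1
print(counts)  # roughly 8000 / 2000
```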
24
Store the data in a Cloud Storage bucket, and create a custom container with your training application and its custom dependencies. In your training application, read the data from Cloud Storage and train the model.
25
Monitor the training/serving skew of feature values for requests sent to the endpoint.
26
Increase the number of maximum replicas to 6 nodes, each with 1 e2-standard-2 machine.
27
Fine-tune Llama 3 from Model Garden on Vertex AI Pipelines.
28
Deploy your models on Vertex AI endpoints.
29
Ask users to indicate all scenarios where they expect concise responses versus verbose responses. Modify the application's prompt to include these scenarios and their respective verbosity levels. Re-evaluate the verbosity of responses with updated prompts.
30
Use Vertex AI Experiments to track and compare model artifacts and versions, and use Vertex AI managed datasets to manage dataset versioning.
31
Retrain the model when a significant shift in the distribution of customer attributes is detected in the production data compared to the training data.
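One common way to quantify such a shift is the Population Stability Index (PSI) between the training and production histograms of an attribute; a PSI above roughly 0.2 is a widely used rule of thumb for a significant shift. A sketch with invented bin proportions:

```python
import math

def psi(expected_pct, actual_pct, eps=1e-6):
    """Population Stability Index between two binned distributions,
    each given as a list of bin proportions summing to 1."""
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected_pct, actual_pct)
    )

training = [0.25, 0.25, 0.25, 0.25]    # attribute distribution at training time
production = [0.10, 0.20, 0.30, 0.40]  # distribution observed in serving traffic

score = psi(training, production)
if score > 0.2:  # rule-of-thumb threshold for "significant shift"
    print(f"PSI={score:.3f}: retraining recommended")
```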
32
Upload the files into Cloud Storage. Use Python to preprocess and load the tabular data into BigQuery. Use time series forecasting models to predict weekly sales.
33
Build a logistic regression model in scikit-learn, and interpret the model's output coefficients to understand feature impact.
34
Create a Vertex AI Model Monitoring job to track the model's performance with production data, and trigger retraining when specific metrics drop below predefined thresholds.
35
1. Create a managed pipeline in Vertex AI Pipelines to train your model by using a Vertex AI CustomTrainingJobOp component. 2. Use the ModelUploadOp component to upload your model to Vertex AI Model Registry. 3. Use Cloud Scheduler and Cloud Run functions to run the Vertex AI pipeline weekly.
36
Deploy the model to a Vertex AI endpoint resource to automatically scale the serving backend based on the throughput. Configure the endpoint's autoscaling settings to minimize latency.
37
Fine-tune the model using a company-specific dataset.
38
Update the WorkerPoolSpec to use a machine with 24 vCPUs and 3 NVIDIA Tesla V100 GPUs.
39
Convert the model into a BigQuery ML model, and use SQL for inference.