PMLE06
Question List
1
1. In BigQuery ML, use the CREATE MODEL statement with BOOSTED_TREE_CLASSIFIER as the model type, and use BigQuery to handle the data splits. 2. Use the TRANSFORM clause to specify the feature engineering transformations, and train the model using the data in the table. 3. Compare the evaluation metrics of the models by using a SQL query with the ML.TRAINING_INFO statement.
2
Create a Vertex AI Workbench notebook to perform exploratory data analysis. Use IPython magics to create a new BigQuery table with input features, create the model, and validate the results by using the CREATE MODEL, ML.EVALUATE, and ML.PREDICT statements.
3
Store features in Vertex AI Feature Store.
4
Install the NLTK library from a Jupyter cell by using the !pip install nltk --user command.
5
Import the model into Vertex AI. On Vertex AI Pipelines, create a pipeline that uses the DataflowPythonJobOp and the ModelBatchPredictOp components.
6
1. Maintain the same machine type on the endpoint. Configure the endpoint to enable autoscaling based on vCPU usage. 2. Set up a monitoring job and an alert for CPU usage. 3. If you receive an alert, investigate the cause.
7
Use a prebuilt XGBoost Vertex container to create a model, and deploy it to Vertex AI Endpoints.
8
Use AutoML Translation to train a model. Configure a Translation Hub project, and use the trained model to translate the documents. Use human reviewers to evaluate the incorrect translations.
9
Expose each individual model as an endpoint in Vertex AI Endpoints. Use Cloud Run to orchestrate the workflow.
10
1. Create a Vertex AI TensorBoard instance, and use the Vertex AI SDK to create an experiment and associate the TensorBoard instance. 2. Use the log_time_series_metrics function to track the preprocessed data, and use the log_metrics function to log loss values.
11
Create a Vertex AI Workbench managed notebook to browse and query the tables directly from the JupyterLab interface.
12
Schedule an increase in the number of online serving nodes in your featurestore prior to the batch ingestion jobs.
13
1. Use TFX components with Dataflow to encode the text features and scale the numerical features. 2. Export results to Cloud Storage as TFRecords. 3. Feed the data into Vertex AI Training.
14
Build a random forest classification model in a Vertex AI Workbench notebook instance. Configure the model to generate feature importances after the model is trained.
15
Create a text dataset on Vertex AI for entity extraction. Create two entities called “ingredient” and “cookware”, and label at least 200 examples of each entity. Train an AutoML entity extraction model to extract occurrences of these entity types. Evaluate performance on a holdout dataset.
16
Deploy the new model to the existing Vertex AI endpoint. Use traffic splitting to send 5% of production traffic to the new model. Monitor end-user metrics, such as listening time. If end-user metrics improve between models over time, gradually increase the percentage of production traffic sent to the new model.
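The traffic-splitting behavior described above can be sketched locally as weighted routing (a minimal illustration in plain Python, not the Vertex AI deployment API; the model names are hypothetical):

```python
import random

def route_request(split, rng):
    """Pick a model version according to a traffic split.

    split: dict mapping model name -> traffic percentage (sums to 100).
    """
    models = list(split)
    weights = [split[m] for m in models]
    return rng.choices(models, weights=weights, k=1)[0]

# Send 5% of traffic to the new model, 95% to the current one.
split = {"current-model": 95, "new-model": 5}
rng = random.Random(0)
counts = {"current-model": 0, "new-model": 0}
for _ in range(10_000):
    counts[route_request(split, rng)] += 1
```

Gradually shifting traffic then amounts to updating the percentages in the split as end-user metrics confirm the new model performs better.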
17
Use BigQuery’s scheduling service to run the model retraining query periodically.
18
Use the aiplatform.log_classification_metrics function to log the F1 score and the confusion matrix.
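For reference, the two quantities being logged can be computed with a small pure-Python sketch (this illustrates the metrics themselves, not the Vertex AI SDK call):

```python
def confusion_matrix(y_true, y_pred):
    """2x2 confusion matrix for binary labels: [[tn, fp], [fn, tp]]."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return [[tn, fp], [fn, tp]]

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall."""
    (_, fp), (fn, tp) = confusion_matrix(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
```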
19
Collect a stratified sample of production traffic to build the training dataset. Conduct fairness tests across sensitive categories and demographics on the trained model.
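Stratified sampling means drawing the same fraction from every stratum so the sample preserves production proportions. A minimal sketch (the `demographic` field is a hypothetical stratum key):

```python
import random
from collections import defaultdict

def stratified_sample(records, key, fraction, seed=0):
    """Sample the same fraction from each stratum, preserving the
    group proportions of the original traffic."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in records:
        strata[key(r)].append(r)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(rng.sample(group, k))
    return sample

traffic = [{"demographic": "A"}] * 80 + [{"demographic": "B"}] * 20
sample = stratified_sample(traffic, key=lambda r: r["demographic"], fraction=0.1)
```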
20
1. Initialize the Vertex SDK with the name of your experiment. Log parameters and metrics for each experiment, and attach dataset and model artifacts as inputs and outputs to each execution. 2. After a successful experiment, create a Vertex AI pipeline.
21
Ensure that the Vertex AI Workbench instance is assigned the Identity and Access Management (IAM) Vertex AI User role.
22
Use Vertex AI Data Labeling Service to label the images, and train an AutoML image classification model. Deploy the model, and configure Pub/Sub to publish a message when an image is categorized into the failing class.
23
Deploy the training jobs by using TPU VMs with TPUv3 Pod slices, and use the TPUEmbedding API.
24
Use Vertex AI Model Monitoring. Enable prediction drift monitoring on the endpoint, and specify a notification email.
25
1. Specify sampled Shapley as the explanation method with a path count of 5. 2. Deploy the model to Vertex AI Endpoints. 3. Create a Model Monitoring job that uses prediction drift as the monitoring objective.
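Sampled Shapley estimates per-feature attributions by averaging marginal contributions over a number of random feature orderings; the path count is the number of orderings sampled. A toy pure-Python estimator (an illustrative sketch, not the Vertex Explainable AI implementation):

```python
import random

def sampled_shapley(predict_fn, instance, baseline, path_count=5, seed=0):
    """Average each feature's marginal contribution over `path_count`
    random orderings of feature insertion (baseline -> instance)."""
    rng = random.Random(seed)
    n = len(instance)
    attributions = [0.0] * n
    for _ in range(path_count):
        order = list(range(n))
        rng.shuffle(order)
        current = list(baseline)
        prev = predict_fn(current)
        for i in order:
            current[i] = instance[i]
            value = predict_fn(current)
            attributions[i] += value - prev
            prev = value
    return [a / path_count for a in attributions]

# For a linear model the attributions recover weight * (instance - baseline).
predict = lambda x: 2.0 * x[0] + 3.0 * x[1]
attr = sampled_shapley(predict, instance=[1.0, 1.0], baseline=[0.0, 0.0], path_count=5)
```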
26
Load the data in BigQuery. Use BigQuery ML to train a matrix factorization model.
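Matrix factorization learns user and item embeddings whose dot product approximates the observed ratings. A tiny SGD sketch of the idea (illustrative only; BigQuery ML trains this with `model_type='matrix_factorization'` at scale):

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.02, epochs=2000, seed=0):
    """Learn U (user) and V (item) embeddings so that
    dot(U[u], V[i]) approximates the rating r for each (u, i, r)."""
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * err * vf
                V[i][f] += lr * err * uf
    return U, V

# (user_id, item_id, rating) triples.
ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 4), (1, 1, 1), (2, 1, 5)]
U, V = factorize(ratings, n_users=3, n_items=2)
mse = sum((r - sum(U[u][f] * V[i][f] for f in range(2))) ** 2
          for u, i, r in ratings) / len(ratings)
```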
27
Create another Vertex AI endpoint in the asia-southeast1 region, and allow the application to choose the appropriate endpoint.
28
Store the data in a Cloud Storage bucket, and create a custom container with your training application. In your training application, read the data from Cloud Storage and train the model.
29
Create a pipeline in Vertex AI Pipelines. Create a Cloud Function that uses a Cloud Storage trigger and deploys the pipeline.
30
Enable caching in all the steps of the Kubeflow pipeline.
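Pipeline-step caching skips re-execution when a step runs again with identical inputs and reuses the stored output. A minimal sketch of that mechanism (not the Kubeflow Pipelines implementation):

```python
import hashlib
import json

class CachedStep:
    """Re-running the step with the same inputs returns the cached
    output instead of executing the step function again."""
    def __init__(self, fn):
        self.fn = fn
        self.cache = {}
        self.executions = 0

    def __call__(self, **inputs):
        # Key the cache on a stable hash of the step inputs.
        key = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
        if key not in self.cache:
            self.executions += 1
            self.cache[key] = self.fn(**inputs)
        return self.cache[key]

preprocess = CachedStep(lambda raw: [x * 2 for x in raw])
first = preprocess(raw=[1, 2, 3])   # executes the step
second = preprocess(raw=[1, 2, 3])  # cache hit, no re-execution
```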
31
Ingest the Avro files into BigQuery to perform analytics. Use a Dataflow pipeline to create the features, and store them in Vertex AI Feature Store for online prediction.
32
Use the original audio sampling rate, and transcribe the audio by using the Speech-to-Text API with asynchronous recognition.
33
Use the Document Translation feature of the Cloud Translation API to translate the documents.
34
Use the Vertex AI Metadata API inside the custom job to create context, execution, and artifacts for each model, and use events to link them together.
35
Create a Vertex AI hyperparameter tuning job.
36
Use the Cloud Vision API to automatically annotate objects in the images to help specialists with the annotation task.
37
Set up Vertex AI Pipelines to orchestrate the MLOps pipeline. Use the predefined Dataproc component for the PySpark-based workloads.
38
Configure the model deployment settings to use an n1-standard-4 machine type. Set the minReplicaCount value to 1 and the maxReplicaCount value to 8.
39
Use the TRANSFORM clause in the CREATE MODEL statement in the SQL query to calculate the required statistics.
40
Run one hyperparameter tuning job for 100 trials. Set num_hidden_layers and learning_rate as conditional hyperparameters based on their parent hyperparameter training_method.
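With a conditional search space, child hyperparameters are only sampled when their parent takes a value that makes them meaningful. A random-search sketch of the idea (the parameter names follow the answer above; the value ranges are illustrative assumptions):

```python
import random

def sample_trial(rng):
    """Sample one trial; num_hidden_layers and learning_rate are only
    drawn when the parent training_method makes them applicable."""
    trial = {"training_method": rng.choice(["linear", "dnn"])}
    if trial["training_method"] == "dnn":
        # Children conditioned on the "dnn" parent value.
        trial["num_hidden_layers"] = rng.randint(1, 4)
        trial["learning_rate"] = 10 ** rng.uniform(-4, -1)
    return trial

rng = random.Random(0)
trials = [sample_trial(rng) for _ in range(100)]
```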
41
Create a feature attribution drift monitoring job. Set the sampling rate to 0.1 and the monitoring frequency to weekly.
42
Run the TFX pipeline in Vertex AI Pipelines. Set the appropriate Apache Beam parameters in the pipeline to run the data preprocessing steps in Dataflow.
43
1. Use a Vertex AI Pipelines custom training job component to train your model. 2. Generate predictions by using a Vertex AI Pipelines model batch predict component.
44
Access BigQuery Studio in the Google Cloud console. Run the CREATE MODEL statement in the SQL editor to create an AutoML regression model.
45
Use a linear regression model. Perform one-hot encoding on categorical features, and create additional features based on the date, such as day of the week or month.
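The feature engineering described above can be sketched with the standard library alone (the category values are hypothetical):

```python
from datetime import date

def encode_row(category, categories, day):
    """One-hot encode a categorical feature and derive calendar
    features (day of week, month) from the date."""
    one_hot = [1.0 if category == c else 0.0 for c in categories]
    return one_hot + [float(day.weekday()), float(day.month)]

categories = ["store_a", "store_b", "store_c"]
features = encode_row("store_b", categories, date(2024, 3, 4))  # a Monday in March
```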
46
Use Vertex AI Agent Builder to create an agent. Securely index the organization’s internal documentation to the agent’s datastore. Send users’ queries to the agent and return the agent’s grounded responses to the users.
47
Use AutoML Vision to train a model using the image dataset.
48
Use the DLP API to scan and de-identify PII in chatbot conversations before storing the data.
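As a rough local stand-in for that flow, PII can be masked before storage with pattern-based redaction (an illustrative sketch only; the real DLP API detects many more infoTypes and offers richer transformations than these two regexes):

```python
import re

# Mask email addresses and US-style phone numbers before storing text.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def deidentify(text):
    """Replace each detected PII span with its infoType placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name}]", text)
    return text

masked = deidentify("Call me at 555-123-4567 or mail bob@example.com")
```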
49
Create a custom Vertex AI Pipelines component that reads the batch inference outputs from Cloud Storage, calculates evaluation metrics, and writes the results to a BigQuery table.
50
Use the DLP API to de-identify the sensitive data before loading it into BigQuery.