Question List
1
D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
2
A. Deploy compute optimized EC2 instances into a cluster placement group., E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
3
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
4
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
5
B. Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
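For the lifecycle half of this answer, a minimal boto3 sketch might look like the following (the bucket name is a placeholder; the File Gateway itself is created separately in Storage Gateway):

```python
import boto3

s3 = boto3.client("s3")

# Move every object written through the File Gateway to
# S3 Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-gateway-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "DeepArchiveAfter7Days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```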
6
D. Lock the EBS snapshots to prevent deletion.
7
B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.
8
B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
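A boto3 sketch of this answer, assuming the customer managed key is set as the bucket's default encryption (bucket name and key description are placeholders):

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed KMS key.
key = kms.create_key(Description="Customer managed key for S3 objects")
key_arn = key["KeyMetadata"]["Arn"]

# Make SSE-KMS with that key the bucket's default encryption.
s3.put_bucket_encryption(
    Bucket="example-bucket",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_arn,
                }
            }
        ]
    },
)
```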
9
A. Create a customer managed key. Use the key to encrypt the EBS volumes.
10
A. Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer., C. Migrate the database to an Amazon RDS Multi-AZ deployment.
11
B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance., E. Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances that are distributed across two Availability Zones.
12
B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.
13
B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
14
A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.
15
B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups across the organization.
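A sketch of the prefix-list setup with boto3 (the CIDRs and organization ARN are placeholders); security group rules across member accounts can then reference the shared prefix list ID instead of individual CIDRs:

```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Customer managed prefix list holding the approved CIDRs.
pl = ec2.create_managed_prefix_list(
    PrefixListName="approved-cidrs",
    AddressFamily="IPv4",
    MaxEntries=10,
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "Office A"},
        {"Cidr": "198.51.100.0/24", "Description": "Office B"},
    ],
)

# Share the prefix list with the whole organization via AWS RAM.
ram.create_resource_share(
    name="approved-cidrs-share",
    resourceArns=[pl["PrefixList"]["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
)
```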
16
A. Create an S3 bucket that has S3 Object Lock enabled., C. Configure a default retention period of 30 days for the objects., E. Configure an S3 Lifecycle policy to expire the objects after 30 days.
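Combining the three selected options, a boto3 sketch could look like this (the bucket name is a placeholder, and COMPLIANCE mode is an assumption; GOVERNANCE mode also provides a default retention period):

```python
import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled when the bucket is created.
# (Outside us-east-1, a CreateBucketConfiguration with a
# LocationConstraint is also required.)
s3.create_bucket(
    Bucket="example-retention-bucket",  # placeholder name
    ObjectLockEnabledForBucket=True,
)

# Default retention: every new object version is locked for 30 days.
s3.put_object_lock_configuration(
    Bucket="example-retention-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Expire objects once the 30-day lock has elapsed.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-retention-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ExpireAfter30Days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```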
17
C. Create a public Application Load Balancer. Specify the application target group., E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.
18
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.
19
B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
20
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
21
D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company's snapshot policy requirements.
22
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica., D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
23
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.
24
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.
25
D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.
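A boto3 sketch of scheduled scaling (the group name, sizes, and the 08:00/20:00 UTC weekday peak window are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out ahead of the known peak every weekday morning.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",  # placeholder group name
    ScheduledActionName="scale-out-before-peak",
    Recurrence="0 8 * * MON-FRI",  # cron expression, UTC by default
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in after the peak has passed.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="scale-in-after-peak",
    Recurrence="0 20 * * MON-FRI",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```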
26
C. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
27
A. Choose on-demand mode. Update the read and write capacity units appropriately.
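Switching an existing table to on-demand mode is a one-call change in boto3 (the table name is a placeholder); note that in on-demand mode no read/write capacity units are provisioned afterward:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Move the table to on-demand (pay-per-request) billing.
dynamodb.update_table(
    TableName="example-table",  # placeholder table name
    BillingMode="PAY_PER_REQUEST",
)
```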
28
D. Create a tag policy in Organizations that has a list of allowed application names.
29
B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
30
A. Create a managed node group that contains only Spot Instances.
31
B. Deploy the applications in AWS Local Zones by extending the company's VPC from eu-central-1 to the chosen Local Zone.
32
C. AWS Snowball Edge Storage Optimized
33
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.
34
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
35
A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
36
A. Create a gateway VPC endpoint to the S3 bucket.
37
B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
38
B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
39
D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.
40
B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
41
C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years.
42
C. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the API Gateway endpoint. Configure Route 53 to route traffic to the API Gateway endpoint.
43
C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required permission to the EC2 role to grant access to the secret.
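A boto3 sketch of the secret plus rotation (the secret name, credential values, and rotation Lambda ARN are placeholders; a rotation function must already be deployed):

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

# Store the database credentials as a secret.
secret = secrets.create_secret(
    Name="prod/db-credentials",  # placeholder name
    SecretString=json.dumps({"username": "appuser", "password": "change-me"}),
)

# Turn on automatic rotation every 30 days.
secrets.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN=(
        "arn:aws:lambda:us-east-1:111122223333:function:rotate-db-secret"
    ),
    RotationRules={"AutomaticallyAfterDays": 30},
)
```

The EC2 role then needs secretsmanager:GetSecretValue permission on this secret, as the answer notes.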
44
A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
45
A. Use AWS Config rules to define and detect resources that are not properly tagged.
46
B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select.
47
D. Enable AWS Shield Advanced and assign the ELB to it.
48
C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
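In boto3 the redirect is commonly expressed as the default action of the HTTP listener; a sketch, with the load balancer ARN as a placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Redirect all HTTP (port 80) traffic to HTTPS (port 443)
# with a permanent redirect.
elbv2.create_listener(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/web-alb/50dc6c495c0c9188"  # placeholder ARN
    ),
    Protocol="HTTP",
    Port=80,
    DefaultActions=[
        {
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Port": "443",
                "StatusCode": "HTTP_301",
            },
        }
    ],
)
```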
49
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.
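A sketch of the SNS-to-SQS fanout in boto3 (topic and queue names are placeholders; the queue access policy that lets SNS deliver to each queue is omitted for brevity):

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One topic fans out each published message to several queues.
topic_arn = sns.create_topic(Name="orders-topic")["TopicArn"]

for queue_name in ["billing-queue", "shipping-queue", "analytics-queue"]:
    queue_url = sqs.create_queue(QueueName=queue_name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    # Each subscription receives its own copy of every message.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```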
50
D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
51
C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
52
B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
53
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket., B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue.
54
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
55
B. Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions.
56
A. Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
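A boto3 sketch of the role, its S3 policy, and the instance profile that actually attaches it to EC2 (role, bucket, and policy names are placeholders):

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
iam.create_role(
    RoleName="app-s3-access",  # placeholder role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Inline policy granting read access to one bucket (placeholder ARN).
iam.put_role_policy(
    RoleName="app-s3-access",
    PolicyName="read-app-bucket",
    PolicyDocument=json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["s3:GetObject", "s3:ListBucket"],
                    "Resource": [
                        "arn:aws:s3:::example-bucket",
                        "arn:aws:s3:::example-bucket/*",
                    ],
                }
            ],
        }
    ),
)

# An instance profile is what attaches the role to an instance.
iam.create_instance_profile(InstanceProfileName="app-s3-access")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-s3-access", RoleName="app-s3-access"
)
```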
57
C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
58
A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the NLB as an AWS Global Accelerator endpoint in each Region.
59
B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
60
A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.
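A boto3 sketch of wiring the S3 PUT event to the Lambda function (the bucket name and function ARN are placeholders; the function's resource policy must already permit S3 to invoke it):

```python
import boto3

s3 = boto3.client("s3")

# Invoke the converter function on every PUT of a .pdf object.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",  # placeholder bucket name
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": (
                    "arn:aws:lambda:us-east-1:111122223333:function:pdf-to-jpg"
                ),
                "Events": ["s3:ObjectCreated:Put"],
                "Filter": {
                    "Key": {
                        "FilterRules": [{"Name": "suffix", "Value": ".pdf"}]
                    }
                },
            }
        ]
    },
)
```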
61
B. Create an Amazon S3 bucket and host the website there.
62
A. Use AWS Secrets Manager. Turn on automatic rotation.
63
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic., E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
64
D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.
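A boto3 sketch of the snapshot-and-fast-snapshot-restore flow (the volume ID and Availability Zones are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot a production volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="Refresh test environment",
)
snapshot_id = snapshot["SnapshotId"]

# Wait for the snapshot to complete before enabling FSR.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

# Fast snapshot restore makes volumes created from the snapshot
# fully initialized immediately in the listed Availability Zones.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    SourceSnapshotIds=[snapshot_id],
)
```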
65
A. Share the dashboard from the CloudWatch console. Enter the product manager's email address, and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.
66
B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
67
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
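A sketch of such a bucket policy applied with boto3 (the bucket name and organization ID are placeholders):

```python
import json

import boto3

# Allow access only to principals that belong to the organization.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgPrincipalsOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {
                "StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}
            },
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-bucket", Policy=json.dumps(bucket_policy)
)
```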
68
B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
69
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
70
B. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.
71
D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.
72
B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
73
A. Change the storage type to Provisioned IOPS SSD.
74
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
75
C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.
76
B. S3 Intelligent-Tiering
77
D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.
78
C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
79
C. Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances.
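A boto3 sketch of the security-group-to-security-group rule (the VPC ID and the private subnets' group ID are placeholders; port 3306 assumes MySQL):

```python
import boto3

ec2 = boto3.client("ec2")

# Security group for the DB instances.
db_sg = ec2.create_security_group(
    GroupName="db-sg",
    Description="Allow DB traffic only from the application tier",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# Allow inbound 3306 only from members of the application tier's
# security group, not from CIDR ranges.
ec2.authorize_security_group_ingress(
    GroupId=db_sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-0abc123def4567890"}],
        }
    ],
)
```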
80
B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
81
C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
82
A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule.
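Multi-Region replication can be requested when the secret is created; a boto3 sketch (the secret name, values, and Regions are placeholders; scheduled rotation is then configured as in question 43):

```python
import json

import boto3

# Create the secret in the primary Region and replicate it to others.
secrets = boto3.client("secretsmanager", region_name="us-east-1")
secrets.create_secret(
    Name="prod/api-credentials",  # placeholder name
    SecretString=json.dumps({"username": "svc", "password": "change-me"}),
    AddReplicaRegions=[
        {"Region": "eu-west-1"},
        {"Region": "ap-southeast-2"},
    ],
)
```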
83
B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.
84
C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
85
C. Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.
86
C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
87
D. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.
88
B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a remote SSH session.
89
C. Deploy a gateway VPC endpoint for Amazon S3.
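A boto3 sketch of the gateway endpoint (the VPC ID, route table ID, and the Region in the service name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3; traffic to S3 from the associated route
# tables then stays on the AWS network instead of the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # must match the VPC's Region
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table ID
)
```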
90
B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email., D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application's API for the data.
91
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.
92
A. Turn on AWS Config with the appropriate rules.
93
D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.
94
A. Enable versioning on the S3 bucket., B. Enable MFA Delete on the S3 bucket.
95
D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.
96
D. Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-premises workloads to use the FSx File Gateway.
97
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
98
D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
99
B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
100
A. Configure Amazon CloudFront in front of the website to use HTTPS functionality., D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.