Question List
1
Launch the EC2 instance with two Amazon EBS volumes and configure RAID 1.
2
Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution.
3
Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years.
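A minimal boto3 sketch of these two settings; the bucket name is a placeholder, and S3 Object Lock can only be configured on a bucket that was created with Object Lock enabled:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-records-bucket"  # hypothetical bucket name

# Transition objects from S3 Standard to S3 Glacier Deep Archive after 1 year.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-1-year",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)

# Default retention: compliance mode for 10 years.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)
```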
4
Amazon MQ
5
Add an Amazon CloudFront distribution. Add Aurora Replicas.
6
Configure a lifecycle policy to move the files to the EFS Infrequent Access (IA) storage class after 7 days.
7
Associate the Direct Connect gateway with a transit gateway
8
Use an Amazon S3 data lake as the original data store for the output from the support communications. Use Amazon Comprehend to process the text for sentiment analysis. Then store the outputs in Amazon Redshift.
9
Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.
10
Deploy a Landing Zone within AWS Control Tower. Allow department administrators to use the Landing Zone to create new member accounts and networking. Grant the department's AWS power user permissions on the created accounts.
11
Use AWS License Manager to manage the software licenses
12
Create an Amazon CloudWatch Events rule that triggers an Amazon SNS topic
13
AWS Private 5G
14
Amazon Redshift with Amazon Redshift Spectrum
15
Implement an IPsec VPN connection and use the same BGP prefix
16
Migrate the fortnightly reporting to an Aurora Replica.
17
Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
18
AWS AppSync
19
Amazon SNS
20
Create an Amazon SQS FIFO queue
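For illustration, a FIFO queue can be created with boto3 roughly as follows; the queue name is hypothetical, and note the required .fifo suffix:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queues require the .fifo suffix and the FifoQueue attribute.
response = sqs.create_queue(
    QueueName="orders.fifo",  # hypothetical queue name
    Attributes={
        "FifoQueue": "true",
        # Deduplicate identical messages sent within the 5-minute window.
        "ContentBasedDeduplication": "true",
    },
)
print(response["QueueUrl"])
```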
21
Pilot light
22
Create an Amazon S3 File Gateway, extending the company's storage space into the cloud. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 30 days.
23
Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
24
Use Amazon Kinesis Data Firehose to ingest the data. Use containers running on AWS Fargate to process the data.
25
Redeploy the application in Elastic Beanstalk with the .NET platform provisioned in a Multi-AZ configuration. Migrate from Oracle to Oracle on Amazon RDS using the AWS Database Migration Service (AWS DMS).
26
Create a gateway VPC endpoint and add an entry to the route table
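As a sketch, a gateway endpoint for S3 might be created like this with boto3; the VPC ID, Region, and route table ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoints are implemented as route-table entries, so the
# route table IDs are passed at creation time.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # adjust to your Region
    RouteTableIds=["rtb-0123456789abcdef0"],   # hypothetical route table
)
```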
27
Order multiple AWS Snowball devices to migrate the data to AWS.
28
Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform the analysis.
29
Create a file system with Amazon FSx for Windows File Server and enable Multi-AZ. Join Amazon FSx to Active Directory.
30
Use an Amazon Aurora global database with a warm standby disaster recovery strategy.
31
Deploy a database cache using Amazon ElastiCache.
32
Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB tier to allow inbound traffic on port 3306 from the web tier security group.
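A minimal boto3 sketch of the two ingress rules; both security group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

WEB_SG = "sg-0aaa1111bbbb22223"  # hypothetical web-tier security group
DB_SG = "sg-0ccc3333dddd44445"   # hypothetical DB-tier security group

# Web tier: HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# DB tier: MySQL/Aurora traffic only from the web-tier security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
)
```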
33
Create a snapshot of the database when the tests are completed. Terminate the DB instance. Create a new DB instance from the snapshot when required.
34
Set up an Amazon SQS queue and subscribe it to the SNS topic. Modify the Lambda function so it reads from an Amazon SQS queue.
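As a rough illustration of the consumer side, a Lambda handler wired to an SQS event source looks like this (process_order is a hypothetical function):

```python
import json

def handler(event, context):
    # Lambda's SQS event source delivers a batch of messages per invocation.
    for record in event["Records"]:
        message = json.loads(record["body"])
        process_order(message)  # hypothetical business logic
```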
35
Launch an Amazon EC2 instance in the same Region as the S3 bucket. Process the log files and upload the output to another S3 bucket in the same Region.
36
Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Create a new RDS instance from the encrypted snapshot.
37
Run your code at edge locations with CloudFront Functions.
38
Use Amazon Athena to query and analyze the data in Amazon S3 using standard SQL queries on demand.
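A small boto3 sketch of an on-demand Athena query; the database, table, and results bucket are assumptions:

```python
import boto3

athena = boto3.client("athena")

# Run a standard SQL query directly against data in S3.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},           # hypothetical
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/"},
)
print(response["QueryExecutionId"])
```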
39
Create an Amazon S3 interface VPC endpoint in the subnet where the EC2 instance is located. Add a resource policy to the S3 bucket to allow only the EC2 instance's IAM role access.
40
Use Amazon Athena on Amazon S3 to perform the queries.
41
Store the credentials in Systems Manager Parameter Store and update the function code and execution role
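For example, the function might read a SecureString parameter like this; the parameter name is hypothetical, and the execution role needs ssm:GetParameter (plus kms:Decrypt for SecureString values):

```python
import boto3

ssm = boto3.client("ssm")

# Decrypt the SecureString value at read time.
param = ssm.get_parameter(
    Name="/myapp/prod/db-password",  # hypothetical parameter name
    WithDecryption=True,
)
db_password = param["Parameter"]["Value"]
```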
42
Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Object Lambda to transform and process the images whenever a GET request is initiated on an object.
43
Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket
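A minimal sketch of such a stream-processing Lambda, assuming a hypothetical archive bucket:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "modified-items-archive"  # hypothetical bucket name

def handler(event, context):
    # Each record carries the DynamoDB stream view of one item change.
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            item = record["dynamodb"].get("NewImage", {})
            key = f"changes/{record['eventID']}.json"
            s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(item))
```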
44
Create a new trail in CloudTrail from within the management account with the organization trails option enabled.
45
Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
46
Use AWS Batch to deploy a multi-node parallel job
47
Install and configure the unified CloudWatch agent on the EC2 instances. Monitor Swap Utilization metrics in CloudWatch.
48
Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier
49
Use Capacity Reservations with Savings Plans
50
Use the AWS Schema Conversion Tool (AWS SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (AWS DMS) to migrate the data to Amazon DynamoDB
51
Configure the database security group to allow traffic only from the application security group
52
Create an AWS Config rule to check for the key age. Define an Amazon EventBridge rule to execute an AWS Lambda function that removes the key.
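One possible shape for the remediation Lambda, assuming the forwarded event identifies the noncompliant IAM user; the event field mapping and the 90-day threshold are assumptions:

```python
from datetime import datetime, timezone, timedelta
import boto3

iam = boto3.client("iam")
MAX_AGE = timedelta(days=90)  # assumed rotation threshold

def handler(event, context):
    # Assumes the rule forwards the offending IAM user name in the event.
    user_name = event["detail"]["resourceId"]  # hypothetical field mapping
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        if datetime.now(timezone.utc) - key["CreateDate"] > MAX_AGE:
            iam.delete_access_key(UserName=user_name,
                                  AccessKeyId=key["AccessKeyId"])
```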
53
AWS Compute Optimizer
54
Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add the s3:PutObjectLegalHold permission to the IAM policies of users who need to delete the objects.
55
Migrate both public IP addresses to AWS Global Accelerator. Create an AWS Global Accelerator and attach endpoints in each AWS Region.
56
Use AWS Snowball Edge devices to process the data locally.
57
Use S3 Access Points to apply different access policies to each team, and control the access points using service control policies (SCPs) within AWS Organizations.
58
Create a bucket policy that denies PUT requests that do not have an x-amz-server-side-encryption header set.
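A sketch of such a policy applied with boto3; the bucket name is a placeholder, and the Null condition denies any PutObject call that omits the header:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-secure-bucket"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        # True when the request has no x-amz-server-side-encryption header.
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```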
59
Amazon CloudFront and Amazon S3.
60
Redis AUTH command
61
Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
62
Set up a warm standby Amazon RDS for PostgreSQL database on AWS. Configure AWS Database Migration Service (AWS DMS) to use change data capture (CDC).
63
Use Amazon Aurora with MySQL compatibility. Direct the reporting functions to use one of the Aurora Replicas.
64
Configure a VPC peering connection between the us-west-2 VPC and the eu-central-1 VPC. Update the subnet route tables accordingly. Create an inbound rule in the eu-central-1 database security group that allows traffic from the us-west-2 application server IP addresses.
65
Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
66
Only the most recent snapshot. Snapshots are incremental, but the deletion process will ensure that no data is lost
67
Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a Multi-AZ DB instance deployment. Use Amazon ElastiCache for Redis with a replication group to manage session data and cache reads. Migrate the application server to an Auto Scaling group across three Availability Zones.
68
Use AWS Systems Manager Run Command to run a custom command that installs the tool on all the EC2 instances.
69
Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.
70
Use AWS Config to identify all untagged resources and tag them programmatically. Then, use AWS Backup to automate the backup of all AWS resources based on tags.
71
Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
72
Use Amazon Kinesis Data Firehose for data ingestion and Amazon Kinesis Data Analytics for real-time analysis.
73
Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.
74
Use AWS Config to track configuration changes and AWS CloudTrail to record API calls and track access patterns in the AWS Cloud.
75
Use Amazon Macie. Create an AWS Lambda function to filter the ‘SensitiveData:S3Object/Personal’ event type from Macie findings and trigger an Amazon Simple Notification Service (Amazon SNS) notification to the compliance team.
76
Use AWS KMS encryption keys for the S3 bucket and use Amazon Athena to query the data
77
Use an API Gateway canary release deployment. Initially direct a small percentage of user traffic to the new API version. After API verification, promote the canary stage to the production stage.
78
Add an attribute to each new item created in the table that has a value of the current timestamp plus 30 days. Configure this attribute as the TTL attribute.
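For illustration, enabling TTL and writing an item with a 30-day expiry might look like this; the table and attribute names are hypothetical:

```python
import time
import boto3

TABLE = "SessionData"  # hypothetical table name

# One-time setup: tell DynamoDB which attribute holds the expiry.
boto3.client("dynamodb").update_time_to_live(
    TableName=TABLE,
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Per item: store the expiry as epoch seconds, 30 days from now.
table = boto3.resource("dynamodb").Table(TABLE)
table.put_item(Item={
    "session_id": "abc123",
    "expires_at": int(time.time()) + 30 * 24 * 60 * 60,
})
```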
79
Deploy AWS Directory Service and integrate it with the corporate directory service. Set up AWS IAM Identity Center for authentication across accounts. Create a new AWS Organizations entity with all features enabled. Create the new AWS accounts within the organization.
80
Install an SMB client onto the on-premises servers and mount an Amazon FSx file system to the servers. Mount the same file system to the EC2 instances within the Amazon VPC. Use the existing Direct Connect connection to connect the on-premises data center to the Amazon VPC.
81
Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
82
Generate an API Gateway endpoint for the Lambda function. Provide the API Gateway endpoint to the third party for the webhook.
83
Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
84
Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
85
Store the session data in an Amazon DynamoDB table.
86
Warm standby
87
Create an AWS Transit Gateway and share it with each account using AWS Resource Access Manager
88
Write the log files to an Amazon S3 bucket. Create an event notification to invoke an AWS Lambda function that will process the files
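A minimal sketch of the processing Lambda; process_log_file is a hypothetical function, and object keys are URL-encoded in S3 event payloads:

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # S3 event notifications deliver one or more records per invocation.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        process_log_file(body)  # hypothetical processing function
```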
89
Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.
90
Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure deployment in the secondary Region by using AWS CloudFormation.
91
Share the dashboard from the CloudWatch console. Enter the client’s email address and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.
92
Amazon Elastic File System (Amazon EFS) with the Standard storage class.
93
Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
94
Set up VPC endpoints for Amazon EKS and Amazon ECR to enable nodes to communicate with the control plane.
95
Use AWS Config to detect resources that are not properly tagged. Create a Systems Manager automation document for remediation.
96
Use Amazon Elastic File System (Amazon EFS) and mount the file system using NFS
97
Migrate the data to Amazon FSx for Windows File Server using AWS DataSync
98
Create an Amazon EventBridge rule for the CreateImage API call. Configure the target as an Amazon SNS topic to send an alert when a CreateImage API call is detected.
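As a sketch, the rule and SNS target could be wired with boto3 as below; the rule name and topic ARN are placeholders, and matching API calls this way assumes CloudTrail is recording them:

```python
import json
import boto3

events = boto3.client("events")

# Match the CreateImage call as recorded by CloudTrail.
events.put_rule(
    Name="alert-on-create-image",  # hypothetical rule name
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventName": ["CreateImage"]},
    }),
)
events.put_targets(
    Rule="alert-on-create-image",
    Targets=[{"Id": "sns-alert",
              "Arn": "arn:aws:sns:us-east-1:123456789012:image-alerts"}],
)
```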
99
When an order is received, use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. For processing, configure the SQS FIFO queue to invoke an AWS Lambda function.
100
Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.