Question List
1
AWS Fargate, Amazon RDS for MySQL
2
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
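The 30-day transition above maps onto a single lifecycle rule. A minimal boto3 sketch, assuming a placeholder bucket name:

    import boto3

    s3 = boto3.client("s3")
    # One rule: move every object to S3 Standard-IA 30 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-logs",  # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "to-standard-ia-after-30-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # empty prefix = all objects
                    "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                }
            ]
        },
    )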
3
Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
4
Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC.
5
Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot
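A sketch of this snapshot-copy-restore sequence in boto3; the instance and snapshot identifiers are placeholders, and the copy uses the default aws/rds KMS key:

    import boto3

    rds = boto3.client("rds")

    # 1. Snapshot the unencrypted instance.
    rds.create_db_snapshot(DBInstanceIdentifier="mydb",
                           DBSnapshotIdentifier="mydb-snap")
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-snap")

    # 2. Copying with a KMS key produces an encrypted snapshot.
    rds.copy_db_snapshot(SourceDBSnapshotIdentifier="mydb-snap",
                         TargetDBSnapshotIdentifier="mydb-snap-enc",
                         KmsKeyId="alias/aws/rds")
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-snap-enc")

    # 3. Restoring from the encrypted snapshot yields an encrypted instance.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="mydb-encrypted",
        DBSnapshotIdentifier="mydb-snap-enc",
    )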
6
Configure encryption for the Amazon EBS volumes and Amazon RDS database with AWS KMS keys.
7
Amazon FSx for Lustre for high-performance parallel storage, Amazon S3 for cold data storage
8
Create an Amazon Elastic File System (Amazon EFS) file system and mount it on the individual Amazon EC2 instances
9
Set up an Amazon S3 bucket. Update the application to store documents in the S3 bucket. Store the object metadata in the existing database.
10
Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
11
Apply a bucket policy to restrict access to the S3 endpoint. Create a VPC endpoint for Amazon S3.
12
Configure AWS Transit Gateway between the accounts. Assign Direct Connect to the transit gateway and route network traffic to the on-premises servers.
13
Copy the data from all EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
14
Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change
15
Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy
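To make the mechanics concrete, here is a sketch of a boundary document that allows everything except attaching AdministratorAccess; the policy and role names are hypothetical:

    import boto3, json

    iam = boto3.client("iam")

    # Boundary: broad allow, plus an explicit deny on attaching AdministratorAccess.
    boundary = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "*", "Resource": "*"},
            {
                "Effect": "Deny",
                "Action": ["iam:AttachUserPolicy", "iam:AttachRolePolicy"],
                "Resource": "*",
                "Condition": {"ArnEquals": {
                    "iam:PolicyARN": "arn:aws:iam::aws:policy/AdministratorAccess"
                }},
            },
        ],
    }
    arn = iam.create_policy(PolicyName="DeveloperBoundary",   # placeholder name
                            PolicyDocument=json.dumps(boundary))["Policy"]["Arn"]
    iam.put_role_permissions_boundary(RoleName="developer-role",  # placeholder role
                                      PermissionsBoundary=arn)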
16
Create a read replica as a Multi-AZ DB instance
17
Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.
18
Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer.
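A minimal WAFv2 sketch of a rate-based rule; the 2,000-requests-per-5-minutes limit, all names, and the ALB ARN are placeholders (Scope must be REGIONAL for an Application Load Balancer):

    import boto3

    wafv2 = boto3.client("wafv2")

    visibility = {"SampledRequestsEnabled": True,
                  "CloudWatchMetricsEnabled": True,
                  "MetricName": "rate-limit"}
    acl = wafv2.create_web_acl(
        Name="rate-limit-acl",
        Scope="REGIONAL",                      # required for an ALB
        DefaultAction={"Allow": {}},
        Rules=[{
            "Name": "rate-limit",
            "Priority": 0,
            # Block any source IP above 2000 requests per 5-minute window.
            "Statement": {"RateBasedStatement": {"Limit": 2000,
                                                 "AggregateKeyType": "IP"}},
            "Action": {"Block": {}},
            "VisibilityConfig": visibility,
        }],
        VisibilityConfig=visibility,
    )
    alb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123"  # placeholder
    wafv2.associate_web_acl(WebACLArn=acl["Summary"]["ARN"], ResourceArn=alb_arn)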
19
Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB.
20
In a cluster placement group
21
Configure an AWS Storage Gateway file gateway.
22
Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance
23
Migrate the file share to Amazon FSx for Windows File Server
24
Put the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs Python code to process the documents as they arrive in the S3 bucket. Use Amazon Aurora DB clusters to store the results.
25
Enable multi-factor authentication for the root user. Ensure the root user uses a strong password.
26
Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.
27
Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue
28
Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device, and create a custom transformation job by using AWS Glue.
29
Generate a presigned URL and ask the vendor to download the log file before the URL expires.
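For reference, generating such a URL is a one-liner in boto3; the bucket, key, and one-hour expiry are illustrative:

    import boto3

    s3 = boto3.client("s3")
    # The URL embeds a signature and stops working after ExpiresIn seconds.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-logs", "Key": "vendor/app.log"},
        ExpiresIn=3600,
    )
    print(url)  # share this URL with the vendor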
30
Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
31
Launch the EC2 instances in a cluster placement group in one Availability Zone
32
Decouple the application components with an Amazon SQS queue. Configure a dead-letter queue to collect the requests that failed to process.
33
Create an Amazon EFS file share and establish an IAM role that allows Fargate to communicate with Amazon EFS.
34
Amazon ElastiCache for Redis
35
Amazon EBS General Purpose SSD (gp2)
36
Create an Amazon S3 bucket and host the website there.
37
Create an Amazon EFS file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.
38
Create a Multi-AZ RDS Read Replica of the production RDS DB instance
39
Amazon Aurora Global Database.
40
Transition the objects to the appropriate storage class by using an S3 Lifecycle configuration.
41
Create a service control policy in the root organizational unit to deny access to the services or actions
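A sketch of what such an SCP might look like when attached at the organization root; the denied actions and the root ID are placeholders:

    import boto3, json

    org = boto3.client("organizations")

    scp = {
        "Version": "2012-10-17",
        # An SCP caps the maximum available permissions for member accounts.
        "Statement": [{"Effect": "Deny",
                       "Action": ["dynamodb:*", "rds:Delete*"],  # placeholder actions
                       "Resource": "*"}],
    }
    policy = org.create_policy(Content=json.dumps(scp),
                               Description="Deny disallowed services",
                               Name="deny-disallowed-services",
                               Type="SERVICE_CONTROL_POLICY")
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                      TargetId="r-abcd")  # placeholder root ID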
42
Amazon Aurora Serverless
43
Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name.
44
Use CloudFormation with scripts
45
Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule
46
Set up AWS Global Accelerator and add endpoints
47
Deploy a Network Load Balancer in front of the EC2 instances in each Region. Use AWS Global Accelerator to route traffic to the optimal Regional endpoint.
48
Create an Amazon SQS queue and custom CloudWatch metric to measure the number of messages in the queue. Configure the ASG to scale based on the number of messages in the queue
49
Set up an Amazon API Gateway and use AWS Lambda functions
50
Create gateway VPC endpoints for Amazon S3 and DynamoDB.
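Both gateway endpoints can be created in a couple of calls. A sketch with placeholder VPC and route-table IDs, using the us-east-1 service names:

    import boto3

    ec2 = boto3.client("ec2")
    # Gateway endpoints add routes for S3/DynamoDB, keeping traffic off the internet.
    for service in ("com.amazonaws.us-east-1.s3",
                    "com.amazonaws.us-east-1.dynamodb"):
        ec2.create_vpc_endpoint(
            VpcEndpointType="Gateway",
            VpcId="vpc-0123456789abcdef0",            # placeholder
            ServiceName=service,
            RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
        )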
51
Set up S3 bucket policies to allow access from a VPC endpoint.
52
Use AWS Snowball.
53
* "Sid": "PutUpdateDeleteOnOrders", * "Effect": "Allow", * "Action": [ * "dynamodb:PutItem", * "dynamodb:UpdateItem", * "dynamodb:DeleteItem" * ], * "Resource": "arn:aws:dynamodb:us-east-1:227392126428:table/Orders"
54
Use Amazon FSx to create an SMB file share. Connect remote clients to the file share over a client VPN.
55
Enable an Amazon Route 53 health check.
56
Store data in STANDARD for 90 days, then transition the data to DEEP_ARCHIVE.
57
Deploy CloudFront with an S3 origin and configure an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on the CloudFront distribution.
58
Amazon DynamoDB
59
Create an Amazon SQS queue to store incoming requests. Configure the microservices to retrieve the requests from the queue for processing.
60
Use a target tracking policy to dynamically scale the Auto Scaling group
61
Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Use an Amazon EventBridge rule that invokes an AWS Lambda function to promote the DB cluster in us-west-2 when failure is detected.
62
AWS Lake Formation
63
Create an organization in AWS Organizations that includes all accounts and create a service control policy (SCP) that denies the launch of large EC2 instances.
64
Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard Infrequent Access (S3 Standard-IA)
65
Use a target tracking policy to dynamically scale the Auto Scaling group.
66
Set up an ECS cluster behind an Application Load Balancer on AWS Fargate. Use Amazon Quantum Ledger Database (QLDB) to manage the storage layer.
67
Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation
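A boto3 sketch of that trail; the trail name, log bucket, and reports-bucket ARN are placeholders:

    import boto3

    ct = boto3.client("cloudtrail")

    ct.create_trail(Name="report-audit-trail",
                    S3BucketName="audit-trail-logs",   # the new log bucket
                    EnableLogFileValidation=True)      # digest files enable validation
    # Record read and write data events only for the reports bucket.
    ct.put_event_selectors(
        TrailName="report-audit-trail",
        EventSelectors=[{
            "ReadWriteType": "All",
            "IncludeManagementEvents": False,
            "DataResources": [{"Type": "AWS::S3::Object",
                               "Values": ["arn:aws:s3:::reports-bucket/"]}],
        }],
    )
    ct.start_logging(Name="report-audit-trail")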
68
Configure a PrivateLink connection for the API into the client VPC, and access the API using the PrivateLink address. Configure a VPC peering connection between the two VPCs, and access the API using the private address.
69
Enable an Amazon Route 53 health check
70
Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
71
Deploy a NAT gateway in the public subnet. Modify the route table in the private subnet to direct all internet traffic to the NAT gateway.
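A sketch of the two moving parts, assuming placeholder subnet and route-table IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # A NAT gateway needs an Elastic IP and lives in the public subnet.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(SubnetId="subnet-public",   # placeholder
                                 AllocationId=eip["AllocationId"])
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Point the private subnet's default route at the NAT gateway.
    ec2.create_route(RouteTableId="rtb-private",             # placeholder
                     DestinationCidrBlock="0.0.0.0/0",
                     NatGatewayId=nat_id)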
72
Set up a virtual private gateway (VGW).
73
Use Amazon EC2 instance store.
74
An Amazon API Gateway REST API directly accesses the sales performance data in the DynamoDB table. An Amazon API Gateway REST API invokes an AWS Lambda function, and the Lambda function reads data from the DynamoDB table.
75
Deploy an AWS DataSync agent for the on-premises environment. Configure a task to replicate the data and connect it to a VPC endpoint.
76
Use an AWS Storage Gateway file gateway to provide a locally accessible file system that replicates data to the cloud, then analyze the data in the AWS Cloud.
77
Update the application to read from the Aurora Replica
78
Create an additional S3 bucket with versioning in another Region and configure cross-Region replication
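Versioning on both buckets is a prerequisite for replication. A sketch with placeholder bucket names and IAM role:

    import boto3

    s3 = boto3.client("s3")

    for bucket in ("docs-primary", "docs-replica"):  # placeholder buckets
        s3.put_bucket_versioning(Bucket=bucket,
                                 VersioningConfiguration={"Status": "Enabled"})
    s3.put_bucket_replication(
        Bucket="docs-primary",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/s3-replication",  # placeholder
            "Rules": [{
                "ID": "replicate-all",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = every object
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::docs-replica"},
            }],
        },
    )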
79
Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in the CloudWatch console.
80
Add a deny rule in the inbound table of the network ACL with a lower rule number than other rules
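Because network ACL rules are evaluated in ascending rule-number order, a deny with a lower number wins over a later allow. A sketch with placeholder ACL ID and source CIDR:

    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",  # placeholder
        RuleNumber=50,          # lower than the existing allow rules (e.g. 100+)
        Protocol="-1",          # all protocols
        RuleAction="deny",
        Egress=False,           # inbound table
        CidrBlock="203.0.113.0/24",  # placeholder source range
    )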
81
Use Amazon FSx for Windows File Server for the Windows instances and the Linux instances.
82
Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot
83
Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones.
84
Use Lambda@Edge to compress the files as they are sent to users.
85
Add additional VPNs to the Production VPC from a second customer gateway device.
86
Amazon GuardDuty
87
Create an Amazon SQS FIFO queue to decouple the application. Configure an AWS Lambda function to process messages from the queue.
88
Amazon FSx for Lustre
89
Use a target tracking policy that keeps the average aggregate CPU utilization at 40%
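The 40% target maps directly onto a predefined metric specification. A sketch with a placeholder Auto Scaling group name:

    import boto3

    autoscaling = boto3.client("autoscaling")
    # Target tracking adds or removes instances to hold average CPU near 40%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",  # placeholder
        PolicyName="cpu-40-target",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 40.0,
        },
    )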
90
Configure Amazon EBS encryption and Amazon RDS encryption with AWS KMS keys to encrypt instance and database volumes.
91
Set up an Amazon S3 bucket. Configure an Amazon FSx for Lustre file system and integrate it with the S3 bucket after importing the data. Then access the FSx for Lustre file system from the HPC cluster instances.
92
Administrators can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.1.2.28.
93
Amazon S3
94
Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch
95
Process each part using a separate ECS task. Create an Amazon SQS queue
96
Add read replicas for the RDS database and direct read traffic to the replicas.
97
Use Amazon SageMaker to build the machine learning part of the application and use AWS Data Exchange to gain access to the third-party telemetry data.
98
Set up VPC sharing with the Prod1 account as the owner and the Prod2 account as the participant to transfer the data.
99
Amazon DynamoDB
100
Multiple instance store volumes with software RAID 0.
30問 • 1年前問題一覧
1
AWS Fargate, Amazon RDS for MySQL
2
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
3
Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
4
Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC.
5
Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot
6
Configure encryption for the Amazon EBS volumes and Amazon RDS database with AWS KMS keys.
7
Amazon FSx for Lustre for high-performance parallel storage, Amazon S3 for cold data storage
8
Create an Amazon Elastic File System (Amazon EFS) file system and mount it on the individual Amazon EC2 instances
9
Set up an Amazon S3 bucket. The application should be updated to use S3 buckets to store documents. Store the object metadata in the existing database.
10
Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
11
Apply a bucket policy to restrict access to the S3 endpoint., Create a VPC endpoint for Amazon S3.
12
Configure AWS Transit Gateway between the accounts. Assign Direct Connect to the transit gateway and route network traffic to the on-premises servers.
13
Copy the data from all EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
14
Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change
15
Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy
16
Create a read replica as a Multi-AZ DB instance
17
Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.
18
Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer.
19
Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB.
20
In a cluster placement group
21
Configure an AWS Storage Gateway file gateway.
22
Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance
23
Migrate the file share to Amazon FSx for Windows File Server
24
Put the JSON documents in an Amazon S3 bucket. As documents arrive in the S3 bucket, create an AWS Lambda function that runs Python code to process them. Use Amazon Aurora DB clusters to store the results.
25
Enable multi-factor authentication for the root user, Ensure the root user uses a strong password
26
Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects., Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.
27
Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue
28
Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. and create a custom transformation job by using AWS Glue.
29
Generate a presigned URL and ask the vendor to download the log file before the URL expires.
30
Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
31
Launch the EC2 instances in a cluster placement group in one Availability Zone
32
Decouple the application components with an Amazon SQS queue. Configure a dead-letter queue to collect the requests that failed to process.
33
Create an Amazon EFS file share and establish an IAM role that allows Fargate to communicate with Amazon EFS.
34
Amazon ElastiCache for Redis
35
Amazon EBS General Purpose SSD (gp2)
36
Create an Amazon S3 bucket and host the website there.
37
Create an Amazon EFS file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.
38
Create a Multi-AZ RDS Read Replica of the production RDS DB instance
39
Amazon Aurora Global Database.
40
Transition the objects to the appropriate storage class by using an S3 Lifecycle configuration.
41
Create a service control policy in the root organizational unit to deny access to the services or actions
42
Amazon Aurora Serverless
43
Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name.
44
Use CloudFormation with scripts
45
Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule
46
Set up AWS Global Accelerator and add endpoints
47
Deploy a Network Load Balancer in front of the EC2 instances in each Region. Use AWS Global Accelerator to route traffic to the most optimal Regional endpoint.
48
Create an Amazon SQS queue and custom CloudWatch metric to measure the number of messages in the queue. Configure the ASG to scale based on the number of messages in the queue
49
Set up an Amazon API Gateway and use AWS Lambda functions
50
Create gateway VPC endpoints for Amazon S3 and DynamoDB.
51
Set up S3 bucket policies to allow access from a VPC endpoint.
52
Use AWS Snowball.
53
* "Sid": "PutUpdateDeleteOnOrders", * "Effect": "Allow", * "Action": [ * "dynamodb:PutItem", * "dynamodb:UpdateItem", * "dynamodb:DeleteItem" * ], * "Resource": "arn:aws:dynamodb:us-east-1:227392126428:table/Orders"
54
Use Amazon FSx to create an SMB file share. Connect remote clients to the file share over a client VPN.
55
Enable an Amazon Route 53 health check.
56
Store data in STANDARD for 90 days then transition the data to DEEP_ARCHIVE
57
Deploy CloudFront with an S3 origin and configure an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on the CloudFront distribution.
58
Amazon DynamoDB
59
Create an Amazon SQS queue to store incoming requests. Configure the microservices to retrieve the requests from the queue for processing.
60
Use a target tracking policy to dynamically scale the Auto Scaling group
61
Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Use an Amazon EventBridge rule that invokes an AWS Lambda function to promote the DB cluster in us-west-2 when failure is detected.
62
AWS Lake Formation
63
Create an organization in AWS Organizations that includes all accounts and create a service control policy (SCP) that denies the launch of large EC2 instances.
64
Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard Infrequent Access (S3 Standard-IA)
65
Use a target tracking policy to dynamically scale the Auto Scaling group.
66
Set up an ECS cluster behind an Application Load Balancer on AWS Fargate. Use Amazon Quantum Ledger Database (QLDB) to manage the storage layer.
67
Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation
68
Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address, Configure a VPC peering connection between the two VPCs. Access the API using the private address
69
Enable an Amazon Route 53 health check
70
Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
71
Deploy a NAT gateway in the public subnet. Modify the route table in the private subnet to direct all internet traffic to the NAT gateway.
72
Setup a Virtual Private Gateway (VPG)
73
Use Amazon Instance Store
74
An Amazon API Gateway REST API directly accesses the sales performance data in the DynamoDB table., An Amazon API Gateway REST API invokes an AWS Lambda function. The Lambda function reads data from the DynamoDB table.
75
Deploy an AWS DataSync agent for the on-premises environment. Configure a task to replicate the data and connect it to a VPC endpoint.
76
Use an AWS Storage Gateway file gateway to provide a locally accessible file system that replicates data to the cloud, then analyze the data in the AWS Cloud.
77
Update the application to read from the Aurora Replica
78
Create an additional S3 bucket with versioning in another Region and configure cross-Region replication
79
Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in the CloudWatch console.
80
Add a deny rule in the inbound table of the network ACL with a lower rule number than other rules
81
Use Amazon FSx for Windows File Server for the Windows instances and the Linux instances.
82
Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot
83
Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones.
84
Use Lambda@Edge to compress the files as they are sent to users.
85
Add additional VPNs to the Production VPC from a second customer gateway device.
86
Amazon GuardDuty
87
Create an Amazon SQS FIFO queue to decouple the application. Configure an AWS Lambda function to process messages from the queue.
88
Amazon FSx for Lustre
89
Use a target tracking policy that keeps the average aggregate CPU utilization at 40%
90
Configure Amazon EBS encryption and Amazon RDS encryption with AWS KMS keys to encrypt instance and database volumes.
91
Set up an Amazon S3 bucket. Configure an Amazon FSx for Lustre file system and integrate it with the S3 bucket after importing the data then access the FSx for Lustre file system from the HPC cluster instances.
92
Administrators can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.1.2.28.
93
Amazon S3
94
Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch
95
Process each part using a separate ECS task. Create an Amazon SQS queue
96
Add read replicas for the RDS database and direct read traffic to the replicas.
97
Use Amazon SageMaker to build the machine learning part of the application and use AWS Data Exchange to gain access to the third-party telemetry data.
98
Set up VPC sharing with the Prod1 account as the owner and the Prod2 account as the participant to transfer the data.
99
Amazon DynamoDB
100
Multiple instance store volumes with software RAID 0.