Question List
1
Enable Amazon S3 server access logging to capture all bucket-level and object-level events; Enable AWS CloudTrail data events to enable object-level logging for the S3 bucket
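A minimal boto3 sketch of both settings, using hypothetical bucket and trail names:

```python
import boto3

s3 = boto3.client("s3")
# Server access logging: requests against the source bucket are delivered
# as log objects to the target bucket.
s3.put_bucket_logging(
    Bucket="my-app-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",
            "TargetPrefix": "access-logs/",
        }
    },
)

cloudtrail = boto3.client("cloudtrail")
# CloudTrail data events: object-level API activity for the bucket.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::my-app-bucket/"],
        }],
    }],
)
```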
2
Set up an interface VPC endpoint for Kinesis Data Streams in the VPC. Ensure that the VPC endpoint policy allows traffic from the applications
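A sketch of creating the interface endpoint with boto3; the VPC, subnet, and security group IDs are hypothetical, and the service name assumes the us-east-1 Region:

```python
import boto3

ec2 = boto3.client("ec2")
# Interface endpoint so traffic to Kinesis Data Streams stays on
# PrivateLink instead of traversing the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    SecurityGroupIds=["sg-0ccc3333"],
    PrivateDnsEnabled=True,
)
```

The endpoint policy (the separate PolicyDocument parameter) would then be scoped to allow only the applications' principals.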
3
Enable the automated backup feature of Amazon RDS in a Multi-AZ deployment that creates backups in a single or multiple AWS Region(s); Use cross-Region Read Replicas
4
Create an inbound rule in the security group for the MySQL DB servers using the TCP protocol on port 3306. Set the source as the security group for the EC2 instance app servers; Create an outbound rule in the security group for the EC2 instance app servers using the TCP protocol on port 3306. Set the destination as the security group for the MySQL DB servers
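Both rules reference a security group rather than an IP range, which keeps the pairing intact as instances come and go. A boto3 sketch with hypothetical group IDs:

```python
import boto3

ec2 = boto3.client("ec2")
# Inbound on the DB tier: allow MySQL (TCP 3306) only from the app-tier group.
ec2.authorize_security_group_ingress(
    GroupId="sg-db1111111111",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-app222222222"}],
    }],
)
# Outbound on the app tier: allow MySQL traffic to the DB-tier group.
ec2.authorize_security_group_egress(
    GroupId="sg-app222222222",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-db1111111111"}],
    }],
)
```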
5
When a new Amazon S3 bucket is created, it takes up to 24 hours for the bucket name to propagate across all AWS Regions; CloudFront, by default, forwards the requests to the default S3 endpoint. Change the origin domain name of the distribution to include the Regional endpoint of the bucket
6
Use AWS Direct Connect along with a Site-to-Site VPN to establish a connection between the data center and the AWS Cloud
7
You can monitor storage capacity and file system activity using Amazon CloudWatch, and monitor end-user actions with file access auditing using Amazon CloudWatch Logs and Amazon Kinesis Data Firehose; Configure a new Amazon FSx for Windows File Server file system with a deployment type of Multi-AZ. Transfer data to the newly created file system using the AWS DataSync service. Point all the file system users to the new location. You can test the failover of your Multi-AZ file system by modifying its throughput capacity
8
Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the application logs to CloudWatch Logs; Set up the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances as well as set up tracing of SQL queries with the X-Ray SDK for Java; Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs
9
Set up Kinesis Data Firehose in the logging account and then subscribe the delivery stream to CloudWatch Logs streams in each application AWS account via subscription filters. Persist the log data in an Amazon S3 bucket inside the logging AWS account
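In each application account, a subscription filter forwards the log events toward the logging account. A hedged boto3 sketch, assuming the logging account exposes a CloudWatch Logs destination named central-firehose that fronts the Firehose delivery stream (all names and ARNs are hypothetical):

```python
import boto3

logs = boto3.client("logs")
# Run in each application account: stream every event in the log group
# to the logging account's destination; Firehose then persists to S3.
logs.put_subscription_filter(
    logGroupName="/app/service-a",
    filterName="to-central-logging",
    filterPattern="",  # empty pattern matches all log events
    destinationArn="arn:aws:logs:us-east-1:111122223333:destination:central-firehose",
)
```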
10
Store the data in Apache ORC format, partitioned by date and sorted by device type
11
Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift
12
Configure an S3 VPC endpoint and create an S3 bucket policy to allow access only from this VPC endpoint
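A sketch of the bucket policy, expressed with boto3 and a hypothetical bucket and endpoint ID; the Deny with StringNotEquals on aws:SourceVpce rejects every request that does not arrive through the endpoint:

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessThroughVPCEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::my-private-bucket",
            "arn:aws:s3:::my-private-bucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc1234"}},
    }],
}
boto3.client("s3").put_bucket_policy(
    Bucket="my-private-bucket", Policy=json.dumps(policy)
)
```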
13
Configure SAML-based authentication tied to an IAM role that has the PowerUserAccess managed policy attached to it. Attach a customer-managed policy that denies access to RDS in any AWS Region except us-east-1
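The deny policy can hinge on the aws:RequestedRegion condition key. A sketch, with a hypothetical policy name:

```python
import json
import boto3

# Deny all RDS actions in every Region except us-east-1; attach this
# customer-managed policy alongside PowerUserAccess on the SAML role.
deny_rds_outside_us_east_1 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "rds:*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-east-1"}},
    }],
}
boto3.client("iam").create_policy(
    PolicyName="DenyRDSOutsideUsEast1",
    PolicyDocument=json.dumps(deny_rds_outside_us_east_1),
)
```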
14
Change the API Gateway Regional endpoints to edge-optimized endpoints; Enable S3 Transfer Acceleration on the S3 bucket and configure the web application to use the Transfer Acceleration endpoints
15
Use an Amazon CloudFront distribution with the S3 bucket as the origin. This would speed up uploads as well as downloads for the video files; Enable Amazon S3 Transfer Acceleration for the S3 bucket. This would speed up uploads as well as downloads for the video files
16
Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single Region; Read Replicas follow asynchronous replication and can be within an Availability Zone, cross-AZ, or cross-Region
17
Use the enhanced fan-out feature of Kinesis Data Streams to support the desired read throughput for the downstream applications
18
Create a backup process to persist all the data to an S3 bucket A using the S3 Standard storage class in the Production Region. Set up cross-Region replication of this S3 bucket A to an S3 bucket B using the S3 Standard storage class in the DR Region, and set up a lifecycle policy in the DR Region to immediately move this data to Amazon Glacier
19
Process and analyze the Amazon CloudWatch Logs for the Lambda function to determine processing times for requested images at pre-configured intervals; Process and analyze the AWS X-Ray traces and analyze HTTP methods to determine the root cause of the HTTP errors
20
Use message timers to postpone the delivery of certain messages to the queue by one minute
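Message timers set a per-message delay, as opposed to a queue-level delivery delay. A minimal boto3 sketch with a hypothetical queue URL:

```python
import boto3

sqs = boto3.client("sqs")
# DelaySeconds on an individual message keeps it invisible to consumers
# for 60 seconds after it is sent.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders",
    MessageBody='{"orderId": 42}',
    DelaySeconds=60,
)
```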
21
When the Availability Zones become unbalanced, Amazon EC2 Auto Scaling compensates by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application; Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance
22
Use Amazon Aurora Global Database to enable fast local reads with low latency in each Region
23
Generate a separate certificate for each FQDN in each AWS Region using AWS Certificate Manager. Associate the certificates with the corresponding ALBs in the relevant AWS Region
24
Configure Route 53 latency-based routing to route to the nearest Region and activate the health checks. Host the website on S3 in each Region and use API Gateway with AWS Lambda for the application layer. Set up the data layer using DynamoDB global tables with DAX for caching
25
Use an S3 Glacier vault to store the sensitive archived data and then use a Vault Lock policy to enforce compliance controls
26
Instance X is in the default security group. The default rules for the default security group allow inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group. Instance Y is in a new security group. The default rules for a security group that you create allow no inbound traffic
27
Modify the size of the gp2 volume for each instance from 3 TB to 4 TB
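gp2 baseline performance scales at 3 IOPS per GiB, so growing the volume from 3 TiB (about 9,000 IOPS) to 4 TiB (about 12,000 IOPS) raises the IOPS ceiling, and Elastic Volumes applies the change while the volume stays attached and in use. A sketch with a hypothetical volume ID:

```python
import boto3

ec2 = boto3.client("ec2")
# Grow the gp2 volume in place; larger gp2 volumes get a higher
# IOPS baseline (3 IOPS per GiB).
ec2.modify_volume(
    VolumeId="vol-0abc1234def567890",
    Size=4096,  # new size in GiB (~4 TiB)
)
```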
28
Use the Serverless Application Model (SAM) and leverage the built-in traffic-shifting feature of SAM to deploy the new Lambda version via CodeDeploy, and use pre-traffic and post-traffic test functions to verify the code. Roll back in case CloudWatch alarms are triggered
29
Set up VPN CloudHub between the branch offices and corporate headquarters, which will enable the branch offices to send and receive data with each other as well as with their corporate headquarters
30
Refactor the application to run from S3 instead of EFS and upload the video files directly to an S3 bucket. Configure an S3 trigger to invoke a Lambda function on each video file upload to S3 that puts a message in an SQS queue containing the link and the video processing instructions. Change the video processing application to read from the SQS queue and the S3 bucket. Configure the queue depth metric to scale the size of the Auto Scaling group for video processing instances. Leverage EventBridge events to trigger an SNS notification to the user containing the links to the processed files
31
Automated backups, manual snapshots, and Read Replicas are supported across multiple Regions; Recovery time objective (RTO) represents the number of hours it takes to return the Amazon RDS database to a working state after a disaster; Database snapshots are user-initiated backups of your complete DB instance that serve as full backups. These snapshots can be copied and shared to different Regions and accounts
32
Create an application that will use the S3 Select ScanRange parameter to get the first 250 bytes and store that information in Elasticsearch; Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in Elasticsearch
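A sketch of the S3 Select variant with boto3; the bucket, key, and CSV serialization are hypothetical. ScanRange bounds the bytes S3 scans server-side, so only the leading 250 bytes of the object are processed:

```python
import boto3

s3 = boto3.client("s3")
resp = s3.select_object_content(
    Bucket="raw-device-data",
    Key="2024/01/01/device-0001.csv",
    Expression="SELECT * FROM S3Object s",
    ExpressionType="SQL",
    InputSerialization={"CSV": {}},
    OutputSerialization={"CSV": {}},
    ScanRange={"Start": 0, "End": 250},
)
# The response is an event stream; Records events carry the scanned bytes.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```

The Byte Range Fetch variant would instead call get_object with Range="bytes=0-249".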
33
Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking
34
Use Amazon Kinesis Data Streams to allow multiple applications to consume the same streaming data concurrently and independently
35
The error is the outcome of the company reaching its API Gateway account limit for calls per second; configure API keys as client identifiers and use usage plans to define the per-client throttling limits for premium customers
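A boto3 sketch of a usage plan keyed to a premium client; the API ID, stage, and limits are hypothetical:

```python
import boto3

apigw = boto3.client("apigateway")
# Usage plan with a higher per-client throttle for premium customers.
plan = apigw.create_usage_plan(
    name="premium",
    throttle={"rateLimit": 500.0, "burstLimit": 1000},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)
# The API key identifies the client; attaching it to the plan applies the limits.
key = apigw.create_api_key(name="premium-customer-1", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```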
36
Leverage AWS Config rules to audit changes to AWS resources and monitor the compliance of the configuration by running the evaluations for the rule at a frequency that you choose. Develop AWS Config custom rules to establish a test-driven development approach by triggering the evaluation when any resource that matches the rule's scope changes in configuration; Enable trails and set up CloudTrail events to review and monitor management activities of all AWS accounts by logging these activities into CloudWatch Logs using a KMS key. Ensure that CloudTrail is enabled for all accounts as well as all available AWS services
37
Configure connection draining on ELB
38
Create a CloudWatch alarm on the EC2 StatusCheckFailed metric, and then configure a health check that monitors the state of the alarm
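A sketch of the two pieces, assuming the health check is a Route 53 health check of type CLOUDWATCH_METRIC (the instance ID, alarm name, and Region are hypothetical):

```python
import boto3

cw = boto3.client("cloudwatch")
# Alarm fires when the instance's status checks fail.
cw.put_metric_alarm(
    AlarmName="i-0abc123-status-check",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc1234def567890"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)

r53 = boto3.client("route53")
# Health check whose healthy/unhealthy status tracks the alarm state.
r53.create_health_check(
    CallerReference="status-check-hc-1",
    HealthCheckConfig={
        "Type": "CLOUDWATCH_METRIC",
        "AlarmIdentifier": {"Region": "us-east-1",
                            "Name": "i-0abc123-status-check"},
        "InsufficientDataHealthStatus": "Unhealthy",
    },
)
```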
39
Configure the application security groups to ensure that only the necessary ports are open. Use Amazon Inspector to periodically scan the EC2 instances for vulnerabilities
40
Create an IAM role in your AWS account with a trust policy that trusts the Partner (Example Corp). Take a unique external ID value from Example Corp and include this external ID condition in the role’s trust policy
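A sketch of the trust policy with the external ID condition; the partner account ID and external ID value are hypothetical placeholders for what Example Corp would supply:

```python
import json
import boto3

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
        "Action": "sts:AssumeRole",
        # AssumeRole succeeds only when the partner passes this ExternalId.
        "Condition": {"StringEquals": {"sts:ExternalId": "ExampleCorp-12345"}},
    }],
}
boto3.client("iam").create_role(
    RoleName="ExampleCorpAccess",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```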
41
Leverage AWS Systems Manager to create and maintain a new AMI with the OS patches updated on an ongoing basis. Configure the Auto Scaling group to use the patched AMI and replace existing unpatched instances. Use AWS CodeDeploy to push the application code to the instances. Store and access the static dataset using Amazon EFS
42
Create a new ASG launch configuration that uses the newly created AMI. Double the size of the ASG and allow the new instances to become healthy and then reduce the ASG back to the original size. If the new instances do not work as expected, associate the ASG with the old launch configuration
43
Launch a new instance in the new subnet via an AMI created from the old instance. Direct traffic to this new instance using Route 53 and then terminate the old instance
44
In account B, create a cross-account IAM role. In account A, add the AssumeRole permission to account A's CodePipeline service role to allow it to assume the cross-account role in account B; In account B, create a service role for the CloudFormation stack that includes the required permissions for the services deployed by the stack. In account A, update the CodePipeline configuration to include the resources associated with account B; In account A, create a customer-managed AWS KMS key that grants usage permissions to account A's CodePipeline service role and account B. Also, create an Amazon Simple Storage Service (Amazon S3) bucket with a bucket policy that grants account B access to the bucket
45
Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking
46
Set up AWS Global Accelerator in front of all the AWS Regions
47
Use a Network Address Translation (NAT) gateway to map multiple private IP addresses to a single publicly exposed IP address
48
Capture the data in Kinesis Data Firehose and use an intermediary Lambda function to filter and transform the incoming stream before the output is delivered to S3
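A sketch of the transformation Lambda, following the Firehose record contract (recordId, result, base64-encoded data); the filter on a "level" field is a hypothetical example:

```python
import base64
import json

def handler(event, context):
    """Firehose transformation: keep ERROR records, drop everything else."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        if payload.get("level") == "ERROR":
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(
                    (json.dumps(payload) + "\n").encode()
                ).decode(),
            })
        else:
            # "Dropped" removes the record before delivery to S3.
            output.append({
                "recordId": record["recordId"],
                "result": "Dropped",
                "data": record["data"],
            })
    return {"records": output}
```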
49
Create a CloudFront distribution and configure CloudFront to cache objects from a custom origin. This will offload some traffic from the on-premises servers. Customize CloudFront cache behavior by setting Time To Live (TTL) to suit your business requirement
50
The throttle limit set on API Gateway is very low. During peak hours, the additional requests are not making their way to Lambda
51
Create a customer-managed AWS KMS key and configure the key policy to grant permissions to the Amazon S3 service principal
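A sketch of such a key policy; the account ID is hypothetical, and the statement set assumes the common pattern of keeping root-account administration alongside the service grant:

```python
import json
import boto3

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Retain administrative control of the key for the account.
            "Sid": "EnableRootAccount",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # Let the S3 service principal use the key on your behalf.
            "Sid": "AllowS3ServicePrincipal",
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
            "Resource": "*",
        },
    ],
}
boto3.client("kms").create_key(
    Policy=json.dumps(key_policy),
    Description="Customer-managed key for S3 encryption",
)
```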