Question List
1
Enable Amazon S3 server access logging to capture all bucket-level and object-level events, Enable AWS CloudTrail data events to turn on object-level logging for the S3 bucket
2
Set up an interface VPC endpoint for Kinesis Data Streams in the VPC. Ensure that the VPC endpoint policy allows traffic from the applications
3
Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates backups in a single or multiple AWS Region(s), Use cross-Region Read Replicas
4
Create an inbound rule in the security group for the MySQL DB servers using the TCP protocol on port 3306. Set the source as the security group for the EC2 instance app servers, Create an outbound rule in the security group for the EC2 instance app servers using the TCP protocol on port 3306. Set the destination as the security group for the MySQL DB servers
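A minimal boto3 sketch of the rule pair described in item 4 above; the security group IDs are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

APP_SG = "sg-0123456789app0000"  # hypothetical app-server security group ID
DB_SG = "sg-0123456789db00000"   # hypothetical MySQL DB-server security group ID

# Inbound rule on the DB security group: allow TCP 3306 from the app security group
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": APP_SG}],
    }],
)

# Outbound rule on the app security group: allow TCP 3306 to the DB security group
ec2.authorize_security_group_egress(
    GroupId=APP_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": DB_SG}],
    }],
)
```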
5
When a new Amazon S3 bucket is created, it takes up to 24 hours before the bucket name propagates across all AWS Regions, CloudFront, by default, forwards requests to the default S3 endpoint. Change the origin domain name of the distribution to include the Regional endpoint of the bucket
6
Use AWS Direct Connect along with a site-to-site VPN to establish a connection between the data center and AWS Cloud
7
You can monitor storage capacity and file system activity using Amazon CloudWatch, and monitor end-user actions with file access auditing using Amazon CloudWatch Logs and Amazon Kinesis Data Firehose, Configure a new Amazon FSx for Windows file system with a deployment type of Multi-AZ. Transfer data to the newly created file system using the AWS DataSync service. Point all the file system users to the new location. You can test the failover of your Multi-AZ file system by modifying its throughput capacity
8
Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the application logs to CloudWatch Logs, Set up the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances as well as set up tracing of SQL queries with the X-Ray SDK for Java, Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs
9
Set up Kinesis Data Firehose in the logging account and then subscribe the delivery stream to CloudWatch Logs streams in each application AWS account via subscription filters. Persist the log data in an Amazon S3 bucket inside the logging AWS account
10
Store the data in Apache ORC, partitioned by date and sorted by device type
11
Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift
12
Configure an S3 VPC endpoint and create an S3 bucket policy to allow access only from this VPC endpoint
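A minimal sketch of the bucket policy described in item 12, assuming a hypothetical bucket name and VPC endpoint ID:

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-private-bucket"            # hypothetical bucket name
VPC_ENDPOINT_ID = "vpce-0123456789abcdef0"   # hypothetical S3 gateway endpoint ID

# Deny all S3 actions on the bucket unless the request arrives through the VPC endpoint
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AccessViaVpcEndpointOnly",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": VPC_ENDPOINT_ID}},
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```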
13
Configure SAML-based authentication tied to an IAM role that has the PowerUserAccess managed policy attached to it. Attach a customer-managed policy that denies access to RDS in any AWS Region except us-east-1
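A sketch of the customer-managed deny policy described in item 13; the policy name is hypothetical, and the policy would then be attached to the SAML-federated role alongside PowerUserAccess:

```python
import json
import boto3

iam = boto3.client("iam")

# Deny all RDS actions unless the request targets us-east-1
deny_rds_outside_us_east_1 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRdsOutsideUsEast1",
        "Effect": "Deny",
        "Action": "rds:*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-east-1"}},
    }],
}

iam.create_policy(
    PolicyName="DenyRdsOutsideUsEast1",  # hypothetical policy name
    PolicyDocument=json.dumps(deny_rds_outside_us_east_1),
)
```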
14
Change the API Gateway Regional endpoints to edge-optimized endpoints, Enable S3 Transfer Acceleration on the S3 bucket and configure the web application to use the Transfer Acceleration endpoints
15
Use an Amazon CloudFront distribution with the S3 bucket as the origin. This would speed up uploads as well as downloads for the video files, Enable Amazon S3 Transfer Acceleration for the S3 bucket. This would speed up uploads as well as downloads for the video files
16
Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read Replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
17
Use the enhanced fan-out feature of Kinesis Data Streams to support the desired read throughput for the downstream applications
18
Create a backup process to persist all the data to an S3 bucket A using S3 standard storage class in the Production Region. Set up cross-Region replication of this S3 bucket A to an S3 bucket B using S3 standard storage class in the DR Region and set up a lifecycle policy in the DR Region to immediately move this data to Amazon Glacier
19
Process and analyze the Amazon CloudWatch Logs for the Lambda function to determine processing times for requested images at pre-configured intervals, Process and analyze the AWS X-Ray traces and HTTP methods to determine the root cause of the HTTP errors
20
Use message timers to postpone the delivery of certain messages to the queue by one minute
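A minimal sketch of an SQS message timer as described in item 20; the queue URL and message body are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/orders-queue"  # hypothetical queue

# Per-message timer: this message stays invisible to consumers for 60 seconds
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody='{"orderId": "1234", "action": "process"}',
    DelaySeconds=60,
)
```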
21
If the Availability Zones become unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application, Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance
22
Use Amazon Aurora Global Database to enable fast local reads with low latency in each region
23
Generate a separate certificate for each FQDN in each AWS Region using AWS Certificate Manager. Associate the certificates with the corresponding ALBs in the relevant AWS Region
24
Configure Route 53 latency-based routing to route to the nearest Region and activate the health checks. Host the website on S3 in each Region and use API Gateway with AWS Lambda for the application layer. Set up the data layer using DynamoDB global tables with DAX for caching
25
Use S3 Glacier vault to store the sensitive archived data and then use a vault lock policy to enforce compliance controls
26
Instance X is in the default security group. The default rules for the default security group allow inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group. Instance Y is in a new security group. The default rules for a security group that you create allow no inbound traffic
27
Modify the size of the gp2 volume for each instance from 3 TB to 4 TB
28
Use Serverless Application Model (SAM) and leverage the built-in traffic-shifting feature of SAM to deploy the new Lambda version via CodeDeploy and use pre-traffic and post-traffic test functions to verify code. Rollback in case CloudWatch alarms are triggered
29
Set up VPN CloudHub between branch offices and corporate headquarters which will enable branch offices to send and receive data with each other as well as with their corporate headquarters
30
Refactor the application to run from S3 instead of EFS and upload the video files directly to an S3 bucket. Configure an S3 trigger to invoke a Lambda function on each video file upload to S3 that puts a message in an SQS queue containing the link and the video processing instructions. Change the video processing application to read from the SQS queue and the S3 bucket. Configure the queue depth metric to scale the size of the Auto Scaling group for video processing instances. Leverage EventBridge events to trigger an SNS notification to the user containing the links to the processed files
31
Automated backups, manual snapshots and Read Replicas are supported across multiple Regions, Recovery time objective (RTO) represents the number of hours it takes to return the Amazon RDS database to a working state after a disaster, Database snapshots are user-initiated backups of your complete DB instance that serve as full backups. These snapshots can be copied and shared to different Regions and accounts
32
Create an application that will use the S3 Select ScanRange parameter to get the first 250 bytes and store that information in Elasticsearch, Create an application that will traverse the S3 bucket, issue a byte-range fetch for the first 250 bytes, and store that information in Elasticsearch
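A sketch of both retrieval approaches from item 32, assuming a hypothetical CSV object; the Elasticsearch indexing step is omitted:

```python
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "example-data-bucket", "logs/sample.csv"  # hypothetical object

# Option 1: S3 Select with ScanRange limited to the first 250 bytes
response = s3.select_object_content(
    Bucket=BUCKET,
    Key=KEY,
    ExpressionType="SQL",
    Expression="SELECT * FROM S3Object s",
    InputSerialization={"CSV": {}, "CompressionType": "NONE"},
    OutputSerialization={"CSV": {}},
    ScanRange={"Start": 0, "End": 250},
)
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())

# Option 2: a plain byte-range fetch of the first 250 bytes
first_250_bytes = s3.get_object(Bucket=BUCKET, Key=KEY, Range="bytes=0-249")["Body"].read()
```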
33
Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking
34
Use Amazon Kinesis Data Streams so that multiple applications can consume the same streaming data concurrently and independently
35
The error is the outcome of the company reaching its API Gateway account limit for calls per second, Configure API keys as client identifiers using usage plans to define the per-client throttling limits for premium customers
36
Leverage Config rules to audit changes to AWS resources and monitor the compliance of the configuration by running the evaluations for the rule at a frequency that you choose. Develop AWS Config custom rules to establish a test-driven development approach by triggering the evaluation when any resource that matches the rule's scope changes in configuration, Enable trails and set up CloudTrail events to review and monitor management activities of all AWS accounts by logging these activities into CloudWatch Logs using a KMS key. Ensure that CloudTrail is enabled for all accounts as well as all available AWS services
37
Configure connection draining on ELB
38
Configure a CloudWatch metric that checks the status of the EC2 StatusCheckFailed metric, add an alarm to the metric, and then configure a health check that monitors the state of the alarm
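A sketch of the alarm-plus-health-check pattern from item 38, assuming a hypothetical instance ID and alarm name, and that the alarm lives in us-east-1:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
route53 = boto3.client("route53")

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance

# Alarm on the instance's StatusCheckFailed metric
cloudwatch.put_metric_alarm(
    AlarmName="ec2-status-check-failed",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)

# Route 53 health check that tracks the state of that alarm
route53.create_health_check(
    CallerReference="status-check-alarm-hc-1",  # must be unique per request
    HealthCheckConfig={
        "Type": "CLOUDWATCH_METRIC",
        "AlarmIdentifier": {"Region": "us-east-1", "Name": "ec2-status-check-failed"},
        "InsufficientDataHealthStatus": "Unhealthy",
    },
)
```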
39
Configure the application security groups to ensure that only the necessary ports are open. Use Amazon Inspector to periodically scan the EC2 instances for vulnerabilities
40
Create an IAM role in your AWS account with a trust policy that trusts the Partner (Example Corp). Take a unique external ID value from Example Corp and include this external ID condition in the role’s trust policy
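A minimal sketch of the trust policy with an external ID condition described in item 40; the partner account ID, external ID, and role name are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

PARTNER_ACCOUNT_ID = "999988887777"    # hypothetical Example Corp account ID
EXTERNAL_ID = "ExampleCorpUnique7890"  # unique external ID supplied by Example Corp

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{PARTNER_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam.create_role(
    RoleName="ExampleCorpAccessRole",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```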
41
Leverage AWS Systems Manager to create and maintain a new AMI with the OS patches updated on an ongoing basis. Configure the Auto Scaling group to use the patched AMI and replace existing unpatched instances. Use AWS CodeDeploy to push the application code to the instances. Store and access the static dataset using Amazon EFS
42
Create a new ASG launch configuration that uses the newly created AMI. Double the size of the ASG and allow the new instances to become healthy and then reduce the ASG back to the original size. If the new instances do not work as expected, associate the ASG with the old launch configuration
43
Launch a new instance in the new subnet via an AMI created from the old instance. Direct traffic to this new instance using Route 53 and then terminate the old instance
44
In account B, create a cross-account IAM role. In account A, add the AssumeRole permission to account A's CodePipeline service role to allow it to assume the cross-account role in account B, In account B, create a service role for the CloudFormation stack that includes the required permissions for the services deployed by the stack. In account A, update the CodePipeline configuration to include the resources associated with account B, In account A, create a customer-managed AWS KMS key that grants usage permissions to account A's CodePipeline service role and account B. Also, create an Amazon Simple Storage Service (Amazon S3) bucket with a bucket policy that grants account B access to the bucket
45
Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking
46
Set up AWS Global Accelerator in front of all the AWS Regions
47
Use a Network Address Translation (NAT) gateway to map multiple IP addresses into a single publicly exposed IP address
48
Capture the data in Kinesis Data Firehose and use an intermediary Lambda function to filter and transform the incoming stream before the output is delivered to S3
49
Create a CloudFront distribution and configure CloudFront to cache objects from a custom origin. This will offload some traffic from the on-premises servers. Customize CloudFront cache behavior by setting Time To Live (TTL) to suit your business requirement
50
The throttle limit set on API Gateway is very low. During peak hours, the additional requests are not making their way to Lambda
51
Create a customer-managed AWS KMS key and configure the key policy to grant permissions to the Amazon S3 service principal
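A sketch of the key policy described in item 51, assuming a hypothetical account ID; the root-account statement is included so the key remains manageable:

```python
import json
import boto3

kms = boto3.client("kms")
ACCOUNT_ID = "111122223333"  # hypothetical account ID

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # keep the account root as key administrator
            "Sid": "EnableRootAccess",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # let the Amazon S3 service principal use the key
            "Sid": "AllowS3ServicePrincipal",
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": ["kms:GenerateDataKey*", "kms:Decrypt"],
            "Resource": "*",
        },
    ],
}

kms.create_key(
    Description="Customer-managed key used by Amazon S3",  # hypothetical description
    Policy=json.dumps(key_policy),
)
```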
52
Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every night
53
Set up database migration to Amazon Aurora MySQL. Deploy the application in an Auto Scaling group for Amazon EC2 instances that are fronted by an Application Load Balancer. Store sessions in an Amazon ElastiCache for Redis replication group
54
Leverage AWS DataSync to transfer the biological data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS Step Functions workflow for orchestrating an AWS Batch job that processes the biological data
55
Configure an Amazon VPC interface endpoint for the Secrets Manager service to enable access for your Secrets Manager Lambda rotation function and private Amazon Relational Database Service (Amazon RDS) instance
56
The research assistant does not need to pay any transfer charges for the image upload
57
Configure a public virtual interface on the 10-Gbps Direct Connect connection and then copy the data to S3 over the connection
58
Configure DMS data validation on the migration task so it can compare the source and target data for the DMS task and report any mismatches
59
The primary and standby DB instances are upgraded at the same time for RDS MySQL Multi-AZ. All instances are upgraded at the same time for Aurora MySQL, Multi-AZ deployments for both RDS MySQL and Aurora MySQL follow synchronous replication, Read Replicas can be manually promoted to a standalone database instance for RDS MySQL whereas Read Replicas for Aurora MySQL can be promoted to the primary instance
60
Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier
61
Attach an EFS file system to the on-premises servers to act as the NAS server. Mount the same EFS file system to the AWS based web servers running on EC2 instances to serve the content
62
Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS
63
Order three AWS Snowball Edge appliances, split and transfer the data to these three appliances and ship them to AWS which will then copy the data from the Snowball Edge appliances to S3
64
Dockerize each application and then deploy to an ECS cluster running behind an Application Load Balancer
65
Replatform the IT infrastructure to AWS Cloud by leveraging AWS OpsWorks as a configuration management service to automate the configurations of servers on AWS
66
Create an outbound endpoint on Route 53 Resolver and then Route 53 Resolver can conditionally forward queries to resolvers on the on-premises network via this endpoint, Create an inbound endpoint on Route 53 Resolver and then DNS resolvers on the on-premises network can forward DNS queries to Route 53 Resolver via this endpoint
67
Enable Aurora Auto Scaling for Aurora Replicas. Deploy the application on Amazon EC2 instances configured behind an Auto Scaling Group, Configure EC2 instances behind an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled
68
Set up Amazon Kinesis Data Firehose to buffer events and an AWS Lambda function to process and transform the events. Set up Amazon OpenSearch to receive the transformed events. Use the Kibana endpoint that is deployed with OpenSearch to create near-real-time visualizations and dashboards
69
Create a Read Replica in the same Region as the Master database and point the analytics workload there
70
Set up a Direct Connect to each on-premises data center from different service providers and configure routing to failover to the other on-premises data center's Direct Connect in case one connection fails. Make sure that no VPC CIDR blocks overlap one another or the on-premises network
71
Provision On-Demand Instances for the web and application tiers and Reserved Instances for the database tier
72
Ingest the orders in an SQS queue and trigger a Lambda function to process them
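A minimal sketch of wiring the queue to the function as described in item 72; the queue ARN and function name are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Event source mapping: Lambda polls the SQS queue and invokes the function per batch
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:orders-queue",  # hypothetical queue
    FunctionName="process-orders",                                     # hypothetical function
    BatchSize=10,
)
```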
73
Use Centralized VPC Endpoints for connecting with multiple VPCs, also known as shared services VPC
74
Set up DynamoDB Streams to capture and send updates to a Lambda function that outputs records to Kinesis Data Analytics (KDA) via Kinesis Data Streams (KDS). Detect and analyze anomalies in KDA and send notifications via SNS
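A sketch of the Lambda function in the middle of the pipeline from item 74, forwarding DynamoDB Streams records to a Kinesis data stream; the stream name and the partition-key attribute ("pk") are assumptions:

```python
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "orders-anomaly-stream"  # hypothetical Kinesis data stream

def handler(event, context):
    """Handler for DynamoDB Streams events; forwards each new item image to KDS."""
    for record in event["Records"]:
        new_image = record.get("dynamodb", {}).get("NewImage", {})
        if not new_image:
            continue
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(new_image).encode("utf-8"),
            PartitionKey=record["dynamodb"]["Keys"]["pk"]["S"],  # assumes a 'pk' string key
        )
```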
75
Register the Application Load Balancer behind a Network Load Balancer that will provide the necessary static IP address to the ALB
76
Create a single Amazon S3 bucket. Create an IAM user for each client. Group these users under an IAM policy that permits access to sub-folders within the bucket via the use of the 'username' Policy variable. Train the clients to use an S3 client instead of an FTP client
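A sketch of the per-user prefix policy from item 76 using the aws:username policy variable; the bucket, group, and policy names are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")
BUCKET = "example-client-uploads"  # hypothetical shared bucket

# Each IAM user is confined to the sub-folder (prefix) that matches their user name
per_client_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOwnPrefixOnly",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {
            "Sid": "ReadWriteOwnPrefixOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/${{aws:username}}/*",
        },
    ],
}

# Attach the policy inline to the group that holds all client users
iam.put_group_policy(
    GroupName="clients",               # hypothetical group name
    PolicyName="PerClientS3Prefix",    # hypothetical policy name
    PolicyDocument=json.dumps(per_client_policy),
)
```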
77
Set up a new Amazon FSx file system with a Multi-AZ deployment type. Leverage AWS DataSync to transfer data from the old file system to the new one. Point the application to the new Multi-AZ file system
78
Add the subnet CIDR range or IP address of the replication instance to the inbound rules of the Amazon Redshift cluster security group, Your Amazon Redshift cluster must be in the same account and the same AWS Region as the replication instance
79
Set up a new DynamoDB table each day and drop the table for the previous day after its data is written to S3, Set up an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput
80
Enable default encryption on the Amazon S3 bucket that uses Amazon S3-managed keys (SSE-S3) encryption (AES-256) for audit logging. Use Amazon Redshift Spectrum to query the data for monthly audits
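A minimal sketch of default bucket encryption with S3-managed keys as described in item 80; the bucket name is hypothetical:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-audit-logs"  # hypothetical audit-log bucket

# Default encryption: every new object is encrypted with S3-managed keys (AES-256)
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```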
81
Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations
82
Install the AWS Application Discovery Service on each of the VMs to collect the configuration and utilization data
83
A NAT gateway has to be configured to give internet access to the VPC-connected Lambda function
84
Ingest the data into Amazon S3 from S3 Glacier and query the required data with Amazon S3 Select
85
Deploy the VPC infrastructure using AWS CloudFormation and leverage a custom resource to request a unique CIDR range from an external IP address management (IPAM) service
86
Amazon S3 Glacier Instant Retrieval is the best fit for data accessed twice a year. Amazon S3 Glacier Deep Archive is cost-effective for data that is stored for long-term retention
87
Create an IPsec tunnel between your customer gateway appliance and the virtual private gateway, Create a VPC with a virtual private gateway, Set up a public virtual interface on the Direct Connect connection
88
The SSL certificate on the legacy web application server has expired. Reissue the SSL certificate on the web server that is signed by a globally recognized certificate authority (CA). Install the full certificate chain onto the legacy web application server
89
Set up Interface endpoints for Amazon S3
90
Replatform the server to Amazon EC2 while choosing an AMI of your choice to cater to the OS requirements. Use AWS Snowball to transfer the image data to Amazon S3
91
Set up VPC Flow Logs for the elastic network interfaces associated with the instances and configure the VPC Flow Logs to be filtered for rejected traffic. Publish the Flow Logs to CloudWatch Logs
92
Leverage AWS Config rules for auditing changes to AWS resources periodically and monitor the compliance of the configuration. Set up AWS Config custom rules using AWS Lambda to create a test-driven development approach, and finally automate the evaluation of configuration changes against the required controls, Leverage AWS CloudTrail events to review management activities of all AWS accounts. Make sure that CloudTrail is enabled in all accounts for the available AWS services. Enable CloudTrail trails and encrypt CloudTrail event log files with an AWS KMS key and monitor the recorded events via CloudWatch Logs
93
Amazon CloudFront with Lambda@Edge, Application Load Balancer
94
Design the serverless architecture using Amazon Simple Queue Service (SQS) with Amazon ECS Fargate. To save costs, run the Amazon SQS FIFO queues and Amazon ECS Fargate tasks only when needed
95
Store the intermediary query results in S3 Standard storage class
96
Configure a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days, Configure a Lifecycle Policy to transition all objects to Glacier after 180 days
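A sketch of the two lifecycle rules from item 96 in one configuration; the bucket name and prefix are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-archive-bucket"  # hypothetical bucket

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {   # only objects under this prefix move to Standard-IA after 45 days
                "ID": "prefix-to-standard-ia-45d",
                "Filter": {"Prefix": "reports/"},  # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [{"Days": 45, "StorageClass": "STANDARD_IA"}],
            },
            {   # all objects move to Glacier after 180 days
                "ID": "all-to-glacier-180d",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
            },
        ]
    },
)
```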
97
Establish connectivity between VPC V1 and VPC V2 using VPC peering. Enable DNS resolution from the source VPC for VPC peering. Establish the necessary routes, security group rules, and network access control list (ACL) rules to allow traffic between the VPCs
98
Use Athena to run SQL based analytics against S3 data
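A minimal sketch of an Athena query against S3 data as in item 98; the database, table, and results bucket are hypothetical and the table is assumed to be defined in the Glue Data Catalog:

```python
import boto3

athena = boto3.client("athena")

# Run a SQL query directly against data stored in S3
athena.start_query_execution(
    QueryString="SELECT device_type, count(*) AS events FROM sensor_data GROUP BY device_type",
    QueryExecutionContext={"Database": "analytics_db"},                       # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},   # hypothetical bucket
)
```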
99
Migrate mission-critical VMs using AWS Application Migration Service (MGN). Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball Edge. Leverage VM Import/Export to import the VMs into Amazon EC2
100
Analyze the cold data with Athena, Transition the data to S3 Standard IA after 30 days