Question List
1
Use Amazon CloudFront distribution in front of the Application Load Balancer, Use Amazon Aurora Replica
2
Amazon API Gateway, Amazon Simple Queue Service (Amazon SQS) and Amazon Kinesis
3
Use an Amazon SQS FIFO (First-In-First-Out) queue in batch mode with 4 messages per operation to process the messages at the peak rate
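A minimal boto3 sketch of what a 4-message batch send to such a FIFO queue could look like; the queue URL, message group ID, and payloads are placeholders, not details from the question.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # placeholder FIFO queue

entries = [
    {
        "Id": str(i),
        "MessageBody": json.dumps({"order_id": i}),   # placeholder payload
        "MessageGroupId": "orders",                   # messages in one group keep their order
        "MessageDeduplicationId": f"order-{i}",       # required unless content-based dedup is enabled
    }
    for i in range(4)
]

# One SendMessageBatch call with 4 entries counts as a single operation against the FIFO throughput limit.
sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)
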
4
Path-based Routing
5
Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system
6
Multi-AZ follows synchronous replication and spans at least two Availability Zones (AZs) within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone (AZ), Cross-AZ, or Cross-Region
7
Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for AWS Lambda, so the team needs to contact AWS support to raise the account limit
8
Use an Application Load Balancer for distributing traffic to the Amazon EC2 instances spread across different Availability Zones (AZs). Configure Auto Scaling group to mask any failure of an instance
9
Put the instance into the Standby state and then update the instance by applying the maintenance patch. Once the instance is ready, you can exit the Standby state and then return the instance to service, Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can manually set the instance's health status back to healthy and activate the ReplaceUnhealthy process type again
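Both approaches in this answer map to straightforward Auto Scaling API calls. A hedged boto3 sketch, with the group name and instance ID as placeholders:

import boto3

autoscaling = boto3.client("autoscaling")
asg_name = "web-asg"                    # placeholder
instance_id = "i-0123456789abcdef0"     # placeholder

# Approach 1: move the instance to Standby, patch it, then return it to service.
autoscaling.enter_standby(
    AutoScalingGroupName=asg_name,
    InstanceIds=[instance_id],
    ShouldDecrementDesiredCapacity=True,
)
# ... apply the maintenance patch here ...
autoscaling.exit_standby(AutoScalingGroupName=asg_name, InstanceIds=[instance_id])

# Approach 2: suspend ReplaceUnhealthy, patch, mark the instance healthy, then resume the process.
autoscaling.suspend_processes(AutoScalingGroupName=asg_name, ScalingProcesses=["ReplaceUnhealthy"])
# ... apply the maintenance patch here ...
autoscaling.set_instance_health(InstanceId=instance_id, HealthStatus="Healthy")
autoscaling.resume_processes(AutoScalingGroupName=asg_name, ScalingProcesses=["ReplaceUnhealthy"])
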
10
Enable multi-factor authentication (MFA) delete on the Amazon S3 bucket, Enable versioning on the Amazon S3 bucket
11
Tier-1 (32 terabytes)
12
Configure your Auto Scaling group by creating a scheduled action that kicks off at the designated hour on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour
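A minimal boto3 sketch of such a scheduled action; the group name, action name, and timestamp are assumptions used only for illustration.

from datetime import datetime, timezone
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",                   # placeholder
    ScheduledActionName="month-end-peak-scale-out",   # placeholder
    StartTime=datetime(2024, 6, 30, 18, 0, tzinfo=timezone.utc),  # last day of the month, before peak traffic
    DesiredCapacity=10,
)
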
13
Create an Amazon CloudWatch metric filter that processes AWS CloudTrail logs having API call details and looks at any errors by factoring in all the error codes that need to be tracked. Create an alarm based on this metric's rate to send an Amazon SNS notification to the required team
14
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS) standard queue, which is polled by an AWS Lambda function in batches and the data is written into an auto-scaled Amazon DynamoDB table for downstream processing
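A sketch of the AWS Lambda handler such a pipeline might use, assuming the SQS event batch carries JSON sensor readings and a hypothetical DynamoDB table named SensorReadings keyed on sensor_id and timestamp.

import json
from decimal import Decimal
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")   # hypothetical table name

def handler(event, context):
    # Each SQS record in the batch carries one raw sensor payload in its body.
    with table.batch_writer() as batch:
        for record in event["Records"]:
            reading = json.loads(record["body"], parse_float=Decimal)  # DynamoDB needs Decimal, not float
            batch.put_item(Item={
                "sensor_id": reading["sensor_id"],    # assumed partition key
                "timestamp": reading["timestamp"],    # assumed sort key
                "value": reading["value"],
            })
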
15
Partition placement group
16
The instance may be in Impaired status, The instance has failed the Elastic Load Balancing (ELB) health check, The health check grace period for the instance has not expired
17
Amazon DynamoDB, AWS Lambda
18
Enable storage auto-scaling for Amazon RDS MySQL
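Storage auto-scaling on an existing instance amounts to setting a storage ceiling; a hedged boto3 sketch with a placeholder identifier and size.

import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="mysql-prod",   # placeholder
    MaxAllocatedStorage=1000,            # RDS can now grow the allocated storage automatically up to 1,000 GiB
    ApplyImmediately=True,
)
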
19
Set up a read replica and modify the application to use the appropriate endpoint
20
Set up an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3
21
Amazon CloudWatch, Amazon Simple Notification Service (Amazon SNS)
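As an illustration of how the two services fit together, a minimal boto3 sketch of a CloudWatch alarm that notifies an SNS topic; the metric, thresholds, instance ID, and topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-01",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],   # SNS topic that fans out the notification
)
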
22
Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment
23
Leverage Amazon Aurora MySQL with Multi-AZ Aurora Replicas and create the dev database by restoring from the automated backups of Amazon Aurora
24
Write a one-time job to copy the videos from all Amazon EBS volumes to Amazon S3 and then modify the application to use Amazon S3 Standard for storing the videos, Mount Amazon Elastic File System (Amazon EFS) on all Amazon EC2 instances. Write a one-time job to copy the videos from all Amazon EBS volumes to Amazon EFS. Modify the application to use Amazon EFS for storing the videos
25
Remove the member account from the old organization. Send an invite to the member account from the new Organization. Accept the invite to the new organization from the member account
26
Deploy the web-tier Amazon EC2 instances in two Availability Zones (AZs), behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in Multi-AZ configuration
27
A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata, If your instance has a public IPv4 address, it retains the public IPv4 address after recovery
28
Set up an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data
29
Use Amazon Kinesis Data Streams to ingest the data, process it using AWS Lambda or run analytics using Amazon Kinesis Data Analytics
30
Amazon Aurora Serverless
31
Configure an Amazon CloudWatch alarm that triggers the recovery of the Amazon EC2 instance, in case the instance fails. The instance, however, should only be configured with an Amazon EBS volume
32
Amazon FSx for Windows File Server, File Gateway Configuration of AWS Storage Gateway
33
Internet Gateway (I1)
34
Use Amazon DynamoDB point in time recovery to restore the table to the state just before corrupted data was written
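Point-in-time recovery restores into a new table rather than overwriting the existing one; a hedged boto3 sketch with placeholder table names and timestamp.

from datetime import datetime, timezone
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.restore_table_to_point_in_time(
    SourceTableName="Orders",                # placeholder
    TargetTableName="Orders-restored",       # PITR always restores into a new table
    RestoreDateTime=datetime(2024, 6, 1, 11, 59, tzinfo=timezone.utc),   # just before the corrupted writes
)
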
35
With cross-zone load balancing enabled, one instance in Availability Zone A receives 20% traffic and four instances in Availability Zone B receive 20% traffic each. With cross-zone load balancing disabled, one instance in Availability Zone A receives 50% traffic and four instances in Availability Zone B receive 12.5% traffic each
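A quick arithmetic check of those percentages, assuming the load balancer node in each Availability Zone receives half of the total traffic when cross-zone load balancing is disabled.

instances_a, instances_b = 1, 4

# Cross-zone enabled: traffic is spread evenly across all five registered instances.
print(100 / (instances_a + instances_b))        # 20.0 -> every instance receives 20%

# Cross-zone disabled: each AZ gets 50%, split only among its own instances.
print(50 / instances_a, 50 / instances_b)       # 50.0 and 12.5
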
36
Provision Amazon Aurora Global Database
37
Amazon Kinesis Data Streams
38
Use Amazon FSx for Windows File Server as a shared storage solution
39
Make sure that the name of the FIFO (First-In-First-Out) queue ends with the .fifo suffix, Make sure that the throughput for the target FIFO (First-In-First-Out) queue does not exceed 3,000 messages per second, Delete the existing standard queue and recreate it as a FIFO (First-In-First-Out) queue
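For reference, a minimal boto3 sketch of creating such a FIFO queue; the queue name is a placeholder, and the .fifo suffix is the part the answer insists on.

import boto3

sqs = boto3.client("sqs")

sqs.create_queue(
    QueueName="orders.fifo",                     # the name must end with .fifo
    Attributes={
        "FifoQueue": "true",                     # cannot be toggled on an existing standard queue
        "ContentBasedDeduplication": "true",     # optional: deduplicate on a hash of the message body
    },
)
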
40
Amazon Kinesis Data Streams
41
Configure an Amazon Simple Queue Service (Amazon SQS) queue to decouple microservices running faster processes from the microservices running slower ones
42
Use Amazon Kinesis Data Streams to process the data streams as well as decouple the producers and consumers for the real-time data processor
43
Use delay queues to postpone the delivery of new messages to the queue for a few seconds
44
Use AWS Database Migration Service (AWS DMS) to replicate the data from the databases into Amazon Redshift
45
Instance Store
46
Amazon Kinesis with Amazon Simple Notification Service (Amazon SNS)
47
Use a VPC peering connection
48
Amazon DynamoDB
49
Use Amazon Aurora Read Replicas
50
Migrate the data to Amazon RDS for SQL Server database in a Multi-AZ deployment
51
Opt for Multi-AZ configuration with automatic failover functionality to help mitigate failure
52
test.example.com
53
Amazon CloudFront can route to multiple origins based on the content type, Use field-level encryption in Amazon CloudFront to protect sensitive data for specific content, Use an origin group with primary and secondary origins to configure Amazon CloudFront for high availability and failover
54
Copy data from the source bucket to the destination bucket using the aws s3 sync command, Set up Amazon S3 Batch Replication to copy objects across Amazon S3 buckets in another Region using the S3 console and then delete the replication configuration
55
Use Amazon EventBridge to decouple the system architecture
56
Use a Network Load Balancer with an Auto Scaling Group
57
Use Amazon RDS Read Replicas
58
AWS Snowball Edge Compute Optimized
59
Warm Standby
60
Use the multipart upload feature of Amazon S3
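With boto3, the switch to multipart upload can be driven by a transfer configuration; a hedged sketch with placeholder bucket, key, and sizes.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,   # use multipart upload for files larger than 100 MiB
    multipart_chunksize=100 * 1024 * 1024,   # upload in 100 MiB parts, in parallel
)

s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz", Config=config)
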
61
The Time To Live (TTL) is still in effect
62
Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for serverless orchestration of the containerized services, Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for serverless orchestration of the containerized services
63
The instance with the oldest launch template or launch configuration will be terminated in AZ-B
64
Use AWS CloudTrail to analyze API calls
65
Create a Golden Amazon Machine Image (AMI) with the static installation components already set up, Use Amazon EC2 user data to customize the dynamic installation parts at boot time
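A hedged boto3 sketch of launching from such a golden AMI while passing the dynamic pieces as user data; the AMI ID, instance type, and script contents are placeholders.

import boto3

ec2 = boto3.client("ec2")

user_data = """#!/bin/bash
# Only environment-specific configuration runs here; the heavy installs are baked into the AMI.
echo "APP_ENV=production" >> /etc/environment
systemctl restart myapp
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # golden AMI with the static components pre-installed
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # boto3 base64-encodes the script before sending it
)
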
66
Pilot Light
67
Create an Amazon Machine Image (AMI) after installing the software and copy the AMI across all Regions. Use this Region-specific AMI to run the recovery process in the respective Regions
68
The Auto Scaling group should be configured with the minimum capacity set to 4, with 2 instances each in two different Availability Zones. The maximum capacity of the Auto Scaling group should be set to 6
69
A process replaces an existing object and immediately tries to read it. Amazon S3 always returns the latest version of the object
70
Network Load Balancer
71
Create a new launch configuration to use the correct instance type. Modify the Auto Scaling group to use this new launch configuration. Delete the old launch configuration as it is no longer needed
72
Use database cloning to create multiple clones of the production database and use each clone as a test database
73
The Auto Scaling group is using the Amazon EC2-based health check and the Application Load Balancer is using the ALB-based health check
74
3
75
Set up an Amazon Route 53 active-passive failover routing policy. If the Amazon Route 53 health check determines the Application Load Balancer endpoint to be unhealthy, the traffic will be diverted to a static error page hosted on an Amazon S3 bucket
76
Use cross-Region Read Replicas, Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates backups across multiple Regions
77
By default, cross-zone load balancing is enabled for Application Load Balancer and disabled for Network Load Balancer
78
The CNAME record will be updated to point to the standby database
79
http://169.254.169.254/latest/meta-data/public-ipv4
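A small Python sketch of reading that metadata path from inside the instance, using an IMDSv2 session token (which also works when IMDSv1 remains enabled).

import urllib.request

token_request = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_request).read().decode()

metadata_request = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/public-ipv4",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(metadata_request).read().decode())   # the instance's public IPv4 address
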
80
Enable an Amazon Route 53 health check
81
http://bucket-name.s3-website-Region.amazonaws.com, http://bucket-name.s3-website.Region.amazonaws.com
82
Only a standard Amazon SQS queue is allowed as an Amazon S3 event notification destination, whereas a FIFO SQS queue is not allowed
83
Handle all read operations for your application by connecting to the reader endpoint of the Amazon Aurora cluster so that Aurora can spread the load for read-only connections across the Aurora Replicas, Create an Aurora Replica in another Availability Zone to improve availability, as the replica can serve as a failover target
84
Use a dead-letter queue to handle message processing failures
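Attaching a dead-letter queue is a redrive policy on the source queue; a hedged boto3 sketch with placeholder queue URL, DLQ ARN, and receive count.

import json
import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",   # placeholder source queue
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
            "maxReceiveCount": "5",   # after 5 failed receives the message is moved to the DLQ
        })
    },
)
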
85
Auto Scaling group scheduled action
86
Create two Amazon SQS standard queues: one for pro and one for lite. Set up Amazon EC2 instances to prioritize polling for the pro queue over the lite queue
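A sketch of the consumer-side priority, under the assumption that each worker drains the pro queue first and only falls back to the lite queue when the pro queue is empty; the queue URLs and the process() body are placeholders.

import boto3

sqs = boto3.client("sqs")
PRO_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/requests-pro"    # placeholder
LITE_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/requests-lite"  # placeholder

def process(body):
    print("processing", body)   # stand-in for the real request handling

def poll_once():
    for queue_url in (PRO_QUEUE, LITE_QUEUE):
        response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=2)
        messages = response.get("Messages", [])
        if messages:
            for message in messages:
                process(message["Body"])
                sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
            return   # pro messages were found, so skip the lite queue this cycle
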
87
Use Amazon CloudFront signed URLs, Use Amazon CloudFront signed cookies
88
Amazon ElastiCache for Memcached
89
Any database engine level upgrade for an Amazon RDS database instance with Multi-AZ deployment triggers both the primary and standby database instances to be upgraded at the same time. This causes downtime until the upgrade is complete
90
Create an Amazon S3 Event Notification that sends a message to an Amazon SQS queue. Make the Amazon EC2 instances read from the Amazon SQS queue
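A hedged boto3 sketch of that S3 event notification, assuming the queue's access policy already allows S3 to send messages; the bucket name and queue ARN are placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",   # placeholder
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:uploads",   # placeholder
                "Events": ["s3:ObjectCreated:*"],   # fire on every new object
            }
        ]
    },
)
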
91
Create an Auto Scaling group with a desired capacity of two Amazon EC2 instances spread across two Availability Zones. Configure an Application Load Balancer with a target group containing these Amazon EC2 instances. Set up an Amazon RDS MySQL DB in a Multi-AZ configuration
92
Run the custom scripts as user data scripts on the Amazon EC2 instances
93
Amazon MQ
94
Create a read replica with the same compute capacity and the same storage capacity as the primary. Point the reporting queries to run against the read replica
95
Provisioned IOPS SSD Amazon EBS volumes
96
Amazon Aurora Serverless
97
Opt for two separate AWS Direct Connect connections terminating on separate devices in more than one Direct Connect location
98
Amazon RDS applies operating system updates by performing maintenance on the standby, then promoting the standby to primary and finally performing maintenance on the old primary, which becomes the new standby, Amazon RDS automatically initiates a failover to the standby, in case the primary database fails for any reason
99
Application Load Balancer + dynamic port mapping
100
Set up Amazon EBS as the Amazon EC2 instance root volume and then configure the application to use Amazon S3 as the document store