Question List
1
Amazon DynamoDB
2
This is an example of scaling up, i.e. vertical scalability
3
Use Amazon ElastiCache to improve the performance of compute-intensive workloads, Use Amazon ElastiCache to improve latency and throughput for read-heavy application workloads
4
Elastic Fabric Adapter (EFA)
5
Use Amazon CloudFront with Amazon S3 as the storage solution for the static assets
6
Use AWS Volume Gateway in cached volume mode to store the most frequently accessed logs locally for low-latency access, while storing the full volume with all logs in its Amazon S3 service bucket
7
Use AWS CloudFormation StackSets to deploy the same template across AWS accounts and regions
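The StackSets answer above can be sketched with the AWS SDK for Python (boto3). The stack-set name, template URL, accounts, and Regions below are illustrative assumptions, and the actual API calls are left commented out so the sketch runs without AWS credentials.

```python
# Sketch: deploy one CloudFormation template across accounts and Regions
# via StackSets. Names, accounts, and Regions below are placeholders.

def build_stack_set_requests(name, template_url, accounts, regions):
    """Build the request payloads for CreateStackSet / CreateStackInstances."""
    create_stack_set = {
        "StackSetName": name,
        "TemplateURL": template_url,
        "PermissionModel": "SELF_MANAGED",
    }
    create_instances = {
        "StackSetName": name,
        "Accounts": accounts,
        "Regions": regions,
    }
    return create_stack_set, create_instances

stack_set, instances = build_stack_set_requests(
    "shared-baseline",                                    # hypothetical name
    "https://s3.amazonaws.com/example-bucket/base.yaml",  # hypothetical URL
    accounts=["111111111111", "222222222222"],
    regions=["us-east-1", "eu-west-1"],
)

# With credentials in place, the deployment itself would be:
# import boto3
# cfn = boto3.client("cloudformation")
# cfn.create_stack_set(**stack_set)
# cfn.create_stack_instances(**instances)
```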
8
Provisioned IOPS SSD (io1)
9
By default, AWS Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once an AWS Lambda function is VPC-enabled, it will need a route through a Network Address Translation gateway (NAT gateway) in a public subnet to access public resources, Since AWS Lambda functions can scale extremely quickly, it's a good idea to deploy an Amazon CloudWatch alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold, If you intend to reuse code in more than one AWS Lambda function, you should consider creating an AWS Lambda Layer for the reusable code
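The CloudWatch alarm part of the answer above can be sketched as a PutMetricAlarm payload on the AWS/Lambda ConcurrentExecutions metric. The function name, threshold, and SNS topic ARN are illustrative assumptions, and the API call is commented out so the sketch runs without credentials.

```python
# Sketch: a CloudWatch alarm on the Lambda ConcurrentExecutions metric.
# Function name, threshold, and SNS topic ARN are placeholders.

alarm = {
    "AlarmName": "lambda-concurrency-high",
    "Namespace": "AWS/Lambda",
    "MetricName": "ConcurrentExecutions",
    "Dimensions": [{"Name": "FunctionName", "Value": "my-function"}],
    "Statistic": "Maximum",
    "Period": 60,                  # seconds per evaluation window
    "EvaluationPeriods": 3,
    "Threshold": 500,              # expected ceiling; tune per workload
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111111111111:oncall"],  # hypothetical
}

# With credentials in place:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```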
10
Geoproximity routing
11
Amazon Neptune
12
Use AWS CloudFormation to manage Amazon RDS databases
13
Use Tape Gateway, which can be used to move on-premises tape data onto the AWS Cloud. Then, Amazon S3 archiving storage classes can be used to store data cost-effectively for years
14
Use Amazon DynamoDB Accelerator (DAX)
15
Use an Auto Scaling Group
16
Set up AWS Global Accelerator. Register the Application Load Balancers in different Regions to the AWS Global Accelerator. Configure the on-premises firewall's rule to allow static IP addresses associated with the AWS Global Accelerator
17
Each of the four targets in AZ-A receives 12.5% of the traffic
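The 12.5% figure follows from per-AZ splitting. The sketch below assumes the usual setup behind this question (an assumption on my part): cross-zone load balancing disabled and two enabled AZs, so each AZ first receives 50% of the traffic and AZ-A's four targets share that slice evenly.

```python
# Per-target traffic share behind a load balancer when cross-zone load
# balancing is disabled: each enabled AZ first gets an equal slice, then
# targets within an AZ share that slice evenly.

def per_target_share(num_azs, targets_in_az):
    az_share = 100.0 / num_azs        # equal split across enabled AZs
    return az_share / targets_in_az   # even split within the AZ

print(per_target_share(num_azs=2, targets_in_az=4))  # → 12.5
```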
18
Leverage Amazon Kinesis Data Streams to capture the data from the website and feed it into Amazon Kinesis Data Analytics which can query the data in real time. Lastly, the analyzed feed is output into Amazon Kinesis Data Firehose to persist the data on Amazon S3
19
Use an Amazon CloudFront distribution
20
Select a cluster placement group while launching Amazon EC2 instances
21
Set up Amazon DynamoDB table in the on-demand capacity mode
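On-demand capacity mode is selected at table creation with `BillingMode="PAY_PER_REQUEST"` instead of provisioned read/write capacity units. A minimal sketch, with a hypothetical table and key name, and the API call commented out so it runs without credentials:

```python
# Sketch: a DynamoDB table in on-demand capacity mode. Note there is no
# ProvisionedThroughput block; capacity is billed per request.

table_spec = {
    "TableName": "orders",   # hypothetical
    "AttributeDefinitions": [
        {"AttributeName": "order_id", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "order_id", "KeyType": "HASH"},
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity mode
}

# With credentials in place:
# import boto3
# boto3.client("dynamodb").create_table(**table_spec)
```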
22
Use AWS Transit Gateway to connect the Amazon VPCs to the on-premises networks
23
Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams
24
Create an AWS Transit Gateway with equal cost multipath routing and add additional VPN tunnels
25
Amazon MQ
26
Amazon Elastic Compute Cloud (Amazon EC2)
27
Amazon DynamoDB Streams + AWS Lambda
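The DynamoDB Streams + AWS Lambda pattern above can be sketched as a handler that walks the stream records in the invocation event. The event shape follows the documented stream record format; the per-record processing is placeholder logic.

```python
# Sketch: the shape of an AWS Lambda handler triggered by DynamoDB Streams.
# Each record carries an eventName (INSERT/MODIFY/REMOVE) and item images.

def handler(event, context):
    processed = []
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            new_image = record["dynamodb"]["NewImage"]
            processed.append(new_image["pk"]["S"])  # placeholder logic
    return processed

# Local smoke test with a hand-built stream event:
sample_event = {
    "Records": [
        {"eventName": "INSERT",
         "dynamodb": {"NewImage": {"pk": {"S": "user#1"}}}},
        {"eventName": "REMOVE",
         "dynamodb": {"Keys": {"pk": {"S": "user#2"}}}},
    ]
}
print(handler(sample_event, None))  # → ['user#1']
```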
28
Use a Cluster placement group
29
Use Amazon Redshift Spectrum to create Amazon Redshift cluster tables pointing to the underlying historical data in Amazon S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift
30
Feed the streaming transactions into Amazon Kinesis Data Streams. Leverage AWS Lambda integration to remove sensitive data from every transaction and then store the cleansed transactions in Amazon DynamoDB. The internal applications can consume the raw transactions off the Amazon Kinesis Data Stream
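The Lambda cleansing step in the answer above amounts to dropping sensitive fields before the DynamoDB write. A minimal sketch as a pure function; the sensitive field names are illustrative assumptions.

```python
# Sketch: the cleansing step, as a pure function. The sensitive field
# names below are placeholders, not a definitive list.

SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}

def cleanse(transaction):
    """Return a copy of the transaction with sensitive fields removed."""
    return {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}

raw = {"txn_id": "t-1", "amount": 42.5, "card_number": "4111111111111111"}
print(cleanse(raw))  # → {'txn_id': 't-1', 'amount': 42.5}
```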
31
Use Amazon Transcribe to convert audio files to text and Amazon Athena to perform SQL-based analysis to understand the underlying customer sentiment
32
Amazon DynamoDB
33
Create an Application Load Balancer
34
Use Amazon S3 to host the static website and Amazon CloudFront to distribute the content for low latency access
35
Amazon Cognito
36
The Amazon EBS volume was configured as the root volume of the Amazon EC2 instance. On termination of the instance, the default behavior is to also delete the attached root volume
37
Amazon EBS volumes are Availability Zone (AZ) locked
38
Enable Amazon S3 Transfer Acceleration (Amazon S3TA) for the Amazon S3 bucket. This would speed up uploads as well as downloads for the video files, Use an Amazon CloudFront distribution with the Amazon S3 bucket as the origin. This would speed up uploads as well as downloads for the video files
39
Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
40
Use Amazon ElastiCache for distributed in-memory cache-based session management
41
Amazon EC2 with Amazon EBS volume of Provisioned IOPS SSD (io1) type
42
Set up AWS Global Accelerator and add endpoints to cater to users in different geographic locations
43
NAT Gateways deployed in your public subnet
44
Install the Amazon CloudWatch Logs agent on the Amazon EC2 instances to send logs to Amazon CloudWatch
45
Use batch messages
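Batching refers to Amazon SQS's SendMessageBatch action, which accepts up to 10 entries per call (each with a batch-unique Id) and so cuts per-request cost versus one SendMessage call per message. A chunking sketch; the queue URL is hypothetical and the API call is commented out.

```python
# Sketch: batching SQS sends. SendMessageBatch takes at most 10 entries
# per call, each entry needing an Id unique within that batch.

def to_batches(messages, batch_size=10):
    """Chunk message bodies into SendMessageBatch entry lists."""
    batches = []
    for start in range(0, len(messages), batch_size):
        chunk = messages[start:start + batch_size]
        batches.append([
            {"Id": str(start + i), "MessageBody": body}
            for i, body in enumerate(chunk)
        ])
    return batches

batches = to_batches([f"msg-{n}" for n in range(23)])
print([len(b) for b in batches])  # → [10, 10, 3]

# With credentials in place (queue URL is hypothetical):
# import boto3
# sqs = boto3.client("sqs")
# for entries in batches:
#     sqs.send_message_batch(
#         QueueUrl="https://sqs.us-east-1.amazonaws.com/111111111111/work",
#         Entries=entries,
#     )
```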
46
Use Amazon ElastiCache for Redis
47
Amazon Kinesis Data Streams
48
Use AWS DataSync to migrate existing data to Amazon S3 and then use File Gateway to retain access to the migrated data for ongoing updates from the on-premises applications
49
Use the enhanced fan-out feature of Amazon Kinesis Data Streams
50
AWS Glue, Amazon EMR