Question List
1
Change the application architecture to create customer-specific custom prefixes within the single Amazon S3 bucket and then upload the daily files into those prefixed locations
2
The Amazon EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low network latency and high network throughput
3
Use AWS Direct Connect plus virtual private network (VPN) to establish a connection between the data center and AWS Cloud
4
Versioning
5
Use Amazon CloudFront with a custom origin pointing to the on-premises servers
6
Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%
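A target tracking policy like this one can be expressed as a parameter dict for the EC2 Auto Scaling `PutScalingPolicy` API. The sketch below builds that dict in plain Python (the group and policy names are hypothetical); with boto3 you would pass it to `autoscaling.put_scaling_policy(**params)`.

```python
# Minimal sketch, assuming a hypothetical Auto Scaling group "web-asg".
# Target tracking keeps average CPU at the target: the group scales out
# when CPU rises above 50% and scales in when it falls below.

def target_tracking_policy(asg_name: str, target_cpu: float = 50.0) -> dict:
    """Build PutScalingPolicy parameters for a CPU target tracking policy."""
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": "cpu-50-target-tracking",  # hypothetical name
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                # Average CPU utilization across all instances in the group
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": target_cpu,
        },
    }

params = target_tracking_policy("web-asg")
print(params["TargetTrackingConfiguration"]["TargetValue"])  # 50.0
```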
7
Amazon FSx for Windows File Server
8
Push score updates to Amazon Kinesis Data Streams which uses an AWS Lambda function to process these updates and then store these processed updates in Amazon DynamoDB
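The Lambda side of this pattern receives Kinesis records whose payloads are base64-encoded. A minimal sketch of the consumer is below; the score fields are hypothetical, and the DynamoDB write is left as a comment so the decoding logic stands on its own.

```python
import base64
import json

def parse_kinesis_records(event: dict) -> list:
    """Decode base64-encoded Kinesis record payloads into Python dicts."""
    updates = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        updates.append(json.loads(payload))
    return updates

def handler(event, context):
    updates = parse_kinesis_records(event)
    # In the real function you would write each processed update to
    # DynamoDB, e.g. with boto3: table.put_item(Item=update).
    return {"processed": len(updates)}
```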
9
AWS Global Accelerator
10
Amazon API Gateway creates RESTful APIs that enable stateless client-server communication and Amazon API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server
11
Amazon FSx for Lustre
12
Leverage Amazon API Gateway with Amazon Kinesis Data Analytics
13
Power the on-demand, live leaderboard using Amazon DynamoDB with DynamoDB Accelerator (DAX) as it meets the in-memory, high availability, low latency requirements, Power the on-demand, live leaderboard using Amazon ElastiCache for Redis as it meets the in-memory, high availability, low latency requirements
14
1 Amazon EC2 instance, 1 AMI and 1 snapshot exist in Region B
15
Upload the compressed file using multipart upload with Amazon S3 Transfer Acceleration (Amazon S3TA)
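Multipart uploads are bounded by two S3 limits: at most 10,000 parts per upload, and every part except the last must be at least 5 MiB. The sketch below picks a valid part size for a given file size; the actual transfer would go through the accelerated endpoint (`<bucket>.s3-accelerate.amazonaws.com`) when S3 Transfer Acceleration is enabled.

```python
import math

MIN_PART = 5 * 1024 * 1024  # 5 MiB minimum part size (all parts but the last)
MAX_PARTS = 10_000          # S3 multipart upload part-count limit

def choose_part_size(file_size: int) -> int:
    """Smallest valid part size (rounded up to a whole MiB) for this file."""
    needed = math.ceil(file_size / MAX_PARTS)
    mib = 1024 * 1024
    # Round up to a whole MiB and respect the 5 MiB floor
    return max(MIN_PART, math.ceil(needed / mib) * mib)

size = 500 * 1024 ** 3  # e.g. a 500 GiB compressed file
part = choose_part_size(size)
assert math.ceil(size / part) <= MAX_PARTS
```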
16
Throughput Optimized Hard disk drive (st1), Cold Hard disk drive (sc1)
17
Use Instance Store based Amazon EC2 instances
18
Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed file shares in Amazon FSx for Windows File Server. The applications deployed on AWS can access this data directly from Amazon FSx in AWS
19
The junior scientist does not need to pay any transfer charges for the image upload
20
Amazon Kinesis Data Streams
21
The spreadsheet on the Amazon Elastic File System (Amazon EFS) can be accessed in other AWS regions by using an inter-region VPC peering connection
22
Amazon ElastiCache, Amazon DynamoDB Accelerator (DAX)
23
If the master database is encrypted, the read replicas are encrypted
24
Set up an Amazon Aurora Global Database cluster
25
Use an AWS Glue ETL job to write the transformed data in the refined zone using a compressed file format, Set up a lifecycle policy to transition the raw zone data into Amazon S3 Glacier Deep Archive after 1 day of object creation
26
Amazon Relational Database Service (Amazon RDS)
27
Amazon Kinesis Data Firehose
28
Kinesis Agent cannot write to an Amazon Kinesis Data Firehose delivery stream whose source is already set to Amazon Kinesis Data Streams
29
Amazon FSx for Windows File Server
30
Use Amazon EC2 Instance Hibernate
31
Max I/O
32
Use an Amazon Simple Queue Service (Amazon SQS) FIFO (First-In-First-Out) queue, and make sure the telemetry data is sent with a Group ID attribute representing the value of the Desktop ID
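Per-desktop ordering comes from the `MessageGroupId`: a FIFO queue delivers messages within one group strictly in the order they were sent. The sketch below builds the `send_message` parameters (queue URL and payload fields are hypothetical); with boto3 you would call `sqs.send_message(**msg)`.

```python
import json

# Hypothetical FIFO queue URL; FIFO queue names must end in ".fifo".
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/telemetry.fifo"

def build_telemetry_message(desktop_id: str, payload: dict) -> dict:
    """Build SendMessage parameters keyed by desktop for in-order delivery."""
    return {
        "QueueUrl": QUEUE_URL,
        "MessageBody": json.dumps(payload),
        # All messages from one desktop share a group, so SQS preserves
        # their send order relative to each other.
        "MessageGroupId": desktop_id,
        # Without content-based deduplication, an explicit
        # MessageDeduplicationId would also be required; omitted here.
    }
```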
33
By default, user data runs only during the boot cycle when you first launch an instance, By default, scripts entered as user data are executed with root user privileges
34
Set up another fleet of Amazon EC2 instances for the web tier in the eu-west-1 region. Enable latency routing policy in Amazon Route 53, Create Amazon Aurora read replicas in the eu-west-1 region
35
Configure an AWS DataSync agent on the on-premises server that has access to the NFS file system. Transfer data over the AWS Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Set up an AWS DataSync scheduled task to send the video files to the Amazon EFS file system every 24 hours
36
Use Amazon Aurora Global Database to enable fast local reads with low latency in each region
37
Build the website as a static website hosted on Amazon S3. Create an Amazon CloudFront distribution with Amazon S3 as the origin. Use Amazon Route 53 to create an alias record that points to your Amazon CloudFront distribution
38
AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD)
39
Use AWS DataSync to automate and accelerate online data transfers to the given AWS storage services
40
The instances launched by both Launch Template LT1 and Launch Template LT2 will have dedicated instance tenancy
41
Use AWS Global Accelerator to provide a low latency way to distribute live sports results
42
Use Amazon S3 for hosting the web application and use Amazon S3 Transfer Acceleration (Amazon S3TA) to reduce the latency that geographically dispersed users might face
43
Set up database migration from Amazon RDS MySQL to Amazon Aurora MySQL. Swap out the MySQL read replicas with Aurora Replicas. Configure Aurora Auto Scaling
44
Traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance
45
Use Availability Zone (AZ) ID to uniquely identify the Availability Zones across the two AWS Accounts
46
NAT instance supports port forwarding, NAT instance can be used as a bastion server, Security Groups can be associated with a NAT instance
47
Host the static content on Amazon S3 and use AWS Lambda with Amazon DynamoDB for the serverless web application that handles dynamic content. Amazon CloudFront will sit in front of AWS Lambda for distribution across diverse regions
48
Launch AWS Global Accelerator and create endpoints for all the Regions. Register the Application Load Balancers of each Region to the corresponding endpoints
49
Create a read replica and connect the report generation tool/application to it
50
Configure AWS Auto Scaling to scale out the Amazon ECS cluster when the ECS service's CPU utilization rises above a threshold
51
Amazon DynamoDB
52
This is a scale-up example of vertical scalability
53
Use Amazon ElastiCache to improve the performance of compute-intensive workloads, Use Amazon ElastiCache to improve latency and throughput for read-heavy application workloads
54
Elastic Fabric Adapter (EFA)
55
Use Amazon CloudFront with Amazon S3 as the storage solution for the static assets
56
Use AWS Storage Gateway's Volume Gateway in cached mode to store the most frequently accessed logs locally for low-latency access while storing the full volume with all logs in its Amazon S3 service bucket
57
Use AWS CloudFormation StackSets to deploy the same template across AWS accounts and regions
58
Provisioned IOPS SSD (io1)
59
By default, AWS Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once an AWS Lambda function is VPC-enabled, it will need a route through a Network Address Translation gateway (NAT gateway) in a public subnet to access public resources, Since AWS Lambda functions can scale extremely quickly, it's a good idea to deploy an Amazon CloudWatch alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold, If you intend to reuse code in more than one AWS Lambda function, you should consider creating an AWS Lambda Layer for the reusable code
60
Geoproximity routing
61
Amazon Neptune
62
Use AWS CloudFormation to manage Amazon RDS databases
63
Use Tape Gateway, which can be used to move on-premises tape data to the AWS Cloud. Then, Amazon S3 archiving storage classes can be used to store data cost-effectively for years
64
Use Amazon DynamoDB Accelerator (DAX)
65
Use an Auto Scaling Group
66
Set up AWS Global Accelerator. Register the Application Load Balancers in different Regions to the AWS Global Accelerator. Configure the on-premises firewall's rule to allow static IP addresses associated with the AWS Global Accelerator
67
Each of the four targets in AZ-A receives 12.5% of the traffic
68
Leverage Amazon Kinesis Data Streams to capture the data from the website and feed it into Amazon Kinesis Data Analytics which can query the data in real time. Lastly, the analyzed feed is output into Amazon Kinesis Data Firehose to persist the data on Amazon S3
69
Use an Amazon CloudFront distribution
70
Select a cluster placement group while launching Amazon EC2 instances
71
Set up Amazon DynamoDB table in the on-demand capacity mode
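On-demand capacity mode is selected at table creation (or via an update) by setting `BillingMode` to `PAY_PER_REQUEST`, so no read/write capacity units are provisioned. The sketch below builds the `CreateTable` parameters as a plain dict (table and attribute names are hypothetical); boto3's `dynamodb.create_table(**params)` would consume it.

```python
def on_demand_table(table_name: str, pk: str) -> dict:
    """Build CreateTable parameters for an on-demand (pay-per-request) table."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [{"AttributeName": pk, "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": pk, "KeyType": "HASH"}],
        # On-demand capacity mode: billed per request, no capacity planning,
        # absorbs spiky or unpredictable traffic automatically.
        "BillingMode": "PAY_PER_REQUEST",
    }
```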
72
Use AWS Transit Gateway to connect the Amazon VPCs to the on-premises networks
73
Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams
74
Create an AWS Transit Gateway with equal cost multipath routing and add additional VPN tunnels
75
Amazon MQ
76
Amazon Elastic Compute Cloud (Amazon EC2)
77
Amazon DynamoDB Streams + AWS Lambda
78
Use a Cluster placement group
79
Use Amazon Redshift Spectrum to create Amazon Redshift cluster tables pointing to the underlying historical data in Amazon S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift
80
Feed the streaming transactions into Amazon Kinesis Data Streams. Leverage AWS Lambda integration to remove sensitive data from every transaction and then store the cleansed transactions in Amazon DynamoDB. The internal applications can consume the raw transactions off the Amazon Kinesis Data Stream
81
Use Amazon Transcribe to convert audio files to text and Amazon Athena to perform SQL based analysis to understand the underlying customer sentiments
82
Amazon DynamoDB
83
Create an Application Load Balancer
84
Use Amazon S3 to host the static website and Amazon CloudFront to distribute the content for low latency access
85
Amazon Cognito
86
The Amazon EBS volume was configured as the root volume of Amazon EC2 instance. On termination of the instance, the default behavior is to also terminate the attached root volume
87
Amazon EBS volumes are Availability Zone (AZ) locked
88
Enable Amazon S3 Transfer Acceleration (Amazon S3TA) for the Amazon S3 bucket. This would speed up uploads as well as downloads for the video files, Use Amazon CloudFront distribution with origin as the Amazon S3 bucket. This would speed up uploads as well as downloads for the video files
89
Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
90
Use Amazon ElastiCache for distributed in-memory cache-based session management
91
Amazon EC2 with Amazon EBS volume of Provisioned IOPS SSD (io1) type
92
Set up AWS Global Accelerator and add endpoints to cater to users in different geographic locations
93
NAT Gateways deployed in your public subnet
94
Install the Amazon CloudWatch Logs agent on the Amazon EC2 instances to send logs to Amazon CloudWatch
95
Use batch messages
96
Use Amazon ElastiCache for Redis
97
Amazon Kinesis Data Streams
98
Use AWS DataSync to migrate existing data to Amazon S3 and then use File Gateway to retain access to the migrated data for ongoing updates from the on-premises applications
99
Use the Enhanced Fan-Out feature of Amazon Kinesis Data Streams
100
AWS Glue, Amazon EMR