Question List
1. Change the application architecture to create customer-specific custom prefixes within the single Amazon S3 bucket and then upload the daily files into those prefixed locations
2. The Amazon EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low network latency and high network throughput
3. Use AWS Direct Connect plus virtual private network (VPN) to establish a connection between the data center and AWS Cloud
4. Versioning
5. Use Amazon CloudFront with a custom origin pointing to the on-premises servers
6. Configure the Auto Scaling group to use a target tracking policy and set CPU utilization as the target metric with a target value of 50%
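A minimal boto3 sketch of attaching such a target tracking policy; the Auto Scaling group name and policy name are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group and policy names.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Scale out/in to hold average CPU utilization at 50%.
        "TargetValue": 50.0,
    },
)
```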
7. Amazon FSx for Windows File Server
8. Push score updates to Amazon Kinesis Data Streams, use an AWS Lambda function to process these updates, and then store the processed updates in Amazon DynamoDB
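As a sketch of this pipeline, a Lambda handler subscribed to the stream might look like the following; the table name, key schema, and payload fields are hypothetical:

```python
import base64
import json

import boto3

# Hypothetical table; the Kinesis stream is attached to this function
# through an event source mapping, so records arrive in `event`.
table = boto3.resource("dynamodb").Table("GameScores")

def handler(event, context):
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded.
        update = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item={
            "PlayerId": update["player_id"],  # assumed partition key
            "GameId": update["game_id"],      # assumed sort key
            "Score": update["score"],
        })
```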
9. AWS Global Accelerator
10. Amazon API Gateway creates RESTful APIs that enable stateless client-server communication, and it also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server
11. Amazon FSx for Lustre
12. Leverage Amazon API Gateway with Amazon Kinesis Data Analytics
13. Power the on-demand, live leaderboard using Amazon DynamoDB with DynamoDB Accelerator (DAX) as it meets the in-memory, high availability, low latency requirements; Power the on-demand, live leaderboard using Amazon ElastiCache for Redis as it meets the in-memory, high availability, low latency requirements
14. 1 Amazon EC2 instance, 1 AMI, and 1 snapshot exist in Region B
15. Upload the compressed file using multipart upload with Amazon S3 Transfer Acceleration (Amazon S3TA)
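One way this could look with boto3, assuming the bucket already has Transfer Acceleration enabled (the bucket and file names are hypothetical). boto3's upload_file switches to a multipart upload automatically once the file exceeds the configured threshold:

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Route requests through the Transfer Acceleration endpoint; the bucket
# itself must already have acceleration enabled.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# upload_file performs a multipart upload automatically past the threshold.
transfer_config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # 8 MB
    multipart_chunksize=8 * 1024 * 1024,
)
s3.upload_file(
    "daily-export.tar.gz",         # hypothetical local file
    "example-destination-bucket",  # hypothetical bucket
    "uploads/daily-export.tar.gz",
    Config=transfer_config,
)
```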
16. Throughput Optimized HDD (st1), Cold HDD (sc1)
17. Use Instance Store based Amazon EC2 instances
18. Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed file shares in Amazon FSx for Windows File Server. The applications deployed on AWS can access this data directly from Amazon FSx in AWS
19. The junior scientist does not need to pay any transfer charges for the image upload
20. Amazon Kinesis Data Streams
21. The spreadsheet on the Amazon Elastic File System (Amazon EFS) can be accessed in other AWS regions by using an inter-region VPC peering connection
22. Amazon ElastiCache, Amazon DynamoDB Accelerator (DAX)
23. If the master database is encrypted, the read replicas are encrypted
24. Set up an Amazon Aurora Global Database cluster
25. Use an AWS Glue ETL job to write the transformed data in the refined zone using a compressed file format; set up a lifecycle policy to transition the raw zone data into Amazon S3 Glacier Deep Archive after 1 day of object creation
26. Amazon Relational Database Service (Amazon RDS)
27. Amazon Kinesis Data Firehose
28. Kinesis Agent cannot write to Amazon Kinesis Data Firehose for which the delivery stream source is already set as Amazon Kinesis Data Streams
29. Amazon FSx for Windows File Server
30. Use Amazon EC2 Instance Hibernate
31. Max I/O
32. Use an Amazon Simple Queue Service (Amazon SQS) FIFO (First-In-First-Out) queue, and make sure the telemetry data is sent with a Group ID attribute representing the value of the Desktop ID
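In the SQS API, the "Group ID attribute" corresponds to the MessageGroupId request parameter. A minimal producer sketch, with a hypothetical queue URL and payload:

```python
import json
import uuid

import boto3

sqs = boto3.client("sqs")

# Hypothetical FIFO queue URL.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/telemetry.fifo"

def send_telemetry(desktop_id: str, payload: dict) -> None:
    # Messages sharing a MessageGroupId are delivered in order, so each
    # desktop's telemetry stays strictly ordered relative to itself.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(payload),
        MessageGroupId=desktop_id,
        # Or enable content-based deduplication on the queue instead.
        MessageDeduplicationId=str(uuid.uuid4()),
    )
```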
33. By default, user data runs only during the boot cycle when you first launch an instance; by default, scripts entered as user data are executed with root user privileges
34. Set up another fleet of Amazon EC2 instances for the web tier in the eu-west-1 region and enable the latency routing policy in Amazon Route 53; create Amazon Aurora read replicas in the eu-west-1 region
35. Configure an AWS DataSync agent on the on-premises server that has access to the NFS file system. Transfer data over the AWS Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Set up an AWS DataSync scheduled task to send the video files to the Amazon EFS file system every 24 hours
36. Use Amazon Aurora Global Database to enable fast local reads with low latency in each region
37. Build the website as a static website hosted on Amazon S3. Create an Amazon CloudFront distribution with Amazon S3 as the origin. Use Amazon Route 53 to create an alias record that points to your Amazon CloudFront distribution
38. AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD)
39. Use AWS DataSync to automate and accelerate online data transfers to the given AWS storage services
40. The instances launched by both Launch Template LT1 and Launch Template LT2 will have dedicated instance tenancy
41. Use AWS Global Accelerator to provide a low latency way to distribute live sports results
42. Use Amazon S3 for hosting the web application and use Amazon S3 Transfer Acceleration (Amazon S3TA) to reduce the latency that geographically dispersed users might face
43. Set up database migration from Amazon RDS MySQL to Amazon Aurora MySQL. Swap out the MySQL read replicas with Aurora Replicas. Configure Aurora Auto Scaling
44. Traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance
45. Use the Availability Zone (AZ) ID to uniquely identify the Availability Zones across the two AWS accounts
46. A NAT instance supports port forwarding; a NAT instance can be used as a bastion server; Security Groups can be associated with a NAT instance
47. Host the static content on Amazon S3 and use AWS Lambda with Amazon DynamoDB for the serverless web application that handles dynamic content. Amazon CloudFront will sit in front of AWS Lambda for distribution across diverse regions
48. Launch AWS Global Accelerator and create endpoints for all the Regions. Register the Application Load Balancers of each Region to the corresponding endpoints
49. Create a read replica and connect the report generation tool/application to it
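A sketch of creating such a replica with boto3; both instance identifiers are hypothetical, and the reporting tool would then connect to the replica's endpoint once it is available:

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers: spin up a read replica of the primary so
# reporting queries no longer compete with production traffic.
response = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-db-report-replica",
    SourceDBInstanceIdentifier="prod-db",
)
print(response["DBInstance"]["DBInstanceStatus"])  # e.g. "creating"
```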
50. Configure AWS Auto Scaling to scale out the Amazon ECS cluster when the ECS service's CPU utilization rises above a threshold
51. Amazon ECS with the EC2 launch type is charged based on the EC2 instances and EBS volumes used; Amazon ECS with the Fargate launch type is charged based on the vCPU and memory resources that the containerized application requests
52. Order 10 AWS Snowball Edge Storage Optimized devices to complete the one-time data transfer; set up AWS Site-to-Site VPN to establish ongoing connectivity between the on-premises data center and AWS Cloud
53. AWS Storage Gateway - File Gateway
54. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
55. Store the intermediary query results in the Amazon S3 Standard storage class
56. Amazon S3 Intelligent-Tiering => Amazon S3 Standard, Amazon S3 One Zone-IA => Amazon S3 Standard-IA
57. Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once
58. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days
59. Use Amazon EC2 Spot Instances to run the workflow processes
60. Ingest the data in Amazon Kinesis Data Firehose and use an intermediary AWS Lambda function to filter and transform the incoming stream before the output is delivered to Amazon S3
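A sketch of the intermediary transformation Lambda: Firehose hands the function a batch of base64-encoded records and expects each one back with a recordId, a result, and the (possibly modified) data. The filter rule and field names here are purely hypothetical:

```python
import base64
import json

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        if payload.get("level") == "DEBUG":  # hypothetical filter rule
            # Dropped records are filtered out of the delivery stream.
            output.append({
                "recordId": record["recordId"],
                "result": "Dropped",
                "data": record["data"],
            })
            continue
        payload["processed"] = True          # hypothetical transformation
        encoded = base64.b64encode(
            (json.dumps(payload) + "\n").encode()
        ).decode()
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": encoded,
        })
    return {"records": output}
```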
61. Enable Amazon DynamoDB Accelerator (DAX) for Amazon DynamoDB and Amazon CloudFront for Amazon S3
62. Use multipart uploads for faster file uploads into the destination Amazon S3 bucket; use Amazon S3 Transfer Acceleration (Amazon S3TA) to enable faster file uploads into the destination S3 bucket
63. Cost of test file storage on Amazon S3 Standard < Cost of test file storage on Amazon EFS < Cost of test file storage on Amazon EBS
64. Use Amazon EC2 Reserved Instances (RIs) for the production application and On-Demand Instances for the dev application
65. Amazon Elastic File System (EFS) Standard-IA storage class
66. Instance B
67. Create an AWS Snowball job and target an Amazon S3 bucket. Create a lifecycle policy to transition this data to Amazon S3 Glacier Deep Archive on the same day
68. There are data transfer charges for replicating data across AWS Regions
69. Purchase 70 Reserved Instances (RIs) and 30 Spot Instances
70. Amazon EFS Infrequent Access
71. Run the workload on a Spot Fleet
72. You can only use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost
73. Create a virtual private cloud (VPC) in an account and share one or more of its subnets with the other accounts using AWS Resource Access Manager
74. Deploy the instances in three Availability Zones (AZs). Launch two instances in each Availability Zone (AZ)
75. Purchase 80 Reserved Instances (RIs). Provision additional On-Demand and Spot Instances per the workload demand (use an Auto Scaling group with a launch template to provision the mix of On-Demand and Spot Instances)
76. Use SQS long polling to retrieve messages from your Amazon SQS queues
77. Use Amazon EC2 Dedicated Hosts
78. Set up a VPC gateway endpoint for Amazon S3. Attach an endpoint policy to the endpoint. Update the route table to direct the S3-bound traffic to the VPC endpoint
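A minimal boto3 sketch, with hypothetical VPC, route table, and bucket identifiers; the endpoint policy here narrows the endpoint to a single bucket:

```python
import json

import boto3

ec2 = boto3.client("ec2")

# Hypothetical VPC, route table, and bucket identifiers.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    # Routes for S3-bound traffic are added to these route tables.
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument=json.dumps({
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }]
    }),
)
```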
79. You can change the tenancy of an instance from dedicated to host; you can change the tenancy of an instance from host to dedicated
80. Configure Amazon CloudFront to distribute the data hosted on Amazon S3 cost-effectively
81. Schedule a weekly Amazon EventBridge rule with a cron expression to invoke an AWS Lambda function that runs the database rollover job
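One possible shape of this with boto3 (the rule name, schedule, and function ARN are hypothetical); the Lambda function additionally needs a resource-based permission allowing events.amazonaws.com to invoke it, which is omitted here:

```python
import boto3

events = boto3.client("events")

# Hypothetical rule name and function ARN; runs Mondays at 03:00 UTC.
events.put_rule(
    Name="weekly-db-rollover",
    ScheduleExpression="cron(0 3 ? * MON *)",
)
events.put_targets(
    Rule="weekly-db-rollover",
    Targets=[{
        "Id": "rollover-fn",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:db-rollover",
    }],
)
```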
82. AWS Database Migration Service (AWS DMS), AWS Schema Conversion Tool (AWS SCT)
83. Create an alias record for covid19survey.com that routes traffic to www.covid19survey.com
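A sketch with boto3, using a hypothetical hosted zone ID; an alias A record lets the zone apex route to another record in the zone, which a CNAME at the apex cannot do:

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone ID for covid19survey.com.
route53.change_resource_record_sets(
    HostedZoneId="Z00000000HYPOTHETICAL",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "covid19survey.com",
            "Type": "A",
            "AliasTarget": {
                # Same zone, since the alias targets a record in this zone.
                "HostedZoneId": "Z00000000HYPOTHETICAL",
                "DNSName": "www.covid19survey.com",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```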
84. When you cancel an active Spot request, it does not terminate the associated instance; if a Spot request is persistent, then it is opened again after your Spot Instance is interrupted; Spot Fleets can maintain target capacity by launching replacement instances after Spot Instances in the fleet are terminated
85. Create a CNAME record
86. Use AWS Cost Explorer Resource Optimization to get a report of Amazon EC2 instances that are either idle or have low utilization, and use AWS Compute Optimizer to look at instance type recommendations
87. Convert the Amazon EC2 instance EBS volume to gp2
88. Set up Amazon ElastiCache in front of Amazon RDS
89. Use Reserved Instances (RIs) for the minimum capacity; set the minimum capacity to 2
90. Create a lifecycle policy to transition objects to Amazon S3 Standard-IA using a prefix after 45 days; create a lifecycle policy to transition all objects to Amazon S3 Glacier after 180 days
91. Create an Amazon CloudFront distribution
92. Enable Amazon API Gateway caching
93. Amazon Simple Queue Service (Amazon SQS), Amazon EC2 Spot Instances
94. Use Amazon Athena to run SQL-based analytics against Amazon S3 data
95. Use Amazon EC2 instances with Instance Store as the storage option
96. Use the Amazon S3 Intelligent-Tiering storage class to optimize Amazon S3 storage costs
97. AWS Trusted Advisor
98. Distribute the static content through Amazon S3
99. Deploy the visualization tool in the same AWS region as the data warehouse. Access the visualization tool over a Direct Connect connection at a location in the same region
100. Set up an Amazon S3 bucket lifecycle policy to move files from Amazon S3 Standard to Amazon S3 Standard-IA 30 days after object creation. Delete the files 5 years after object creation
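A minimal boto3 sketch of such a lifecycle configuration, with a hypothetical bucket name (5 years approximated as 1825 days):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket: Standard -> Standard-IA at 30 days, delete at ~5 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "ia-at-30d-delete-at-5y",
        "Status": "Enabled",
        "Filter": {},  # applies to the whole bucket
        "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        "Expiration": {"Days": 1825},
    }]},
)
```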