Question List
1
Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure the application tier in Europe to use the local reader endpoint.
2
Use Amazon Kinesis Data Streams for real-time events with a partition key for each device. Use Amazon Kinesis Data Firehose to save data to Amazon S3
3
Create a snapshot of the existing RDS DB instance. Create an encrypted copy of the snapshot. Create a new RDS DB instance from the encrypted snapshot and update the application. Use AWS DMS to synchronize data between the source and destination RDS DBs.
4
Create an SCP with a deny rule that denies all but the specific instance types
5
Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination, Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region
6
Download the AWS-provided root certificates. Use the certificates when connecting to the RDS DB instance.
7
Enable MFA Delete on the bucket, Enable versioning on the bucket
8
Use Amazon CloudFront to serve the application and deny access to blocked countries
9
Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage.
10
Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address
11
Create an Application Load Balancer and associate three public subnets from the same Availability Zones as the private instances. Add the private instances to the ALB.
12
Create a CloudFront origin group using the us-east-1 bucket as the primary bucket and the ap-southeast-1 bucket as the secondary bucket. Add an origin for ap-southeast-1 to CloudFront.
13
Launch EC2 instances into multiple regions behind an NLB and use AWS Global Accelerator
14
On-Demand Instances
15
Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance
16
Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.
17
Kinesis Data Firehose can be connected to the VPC using AWS PrivateLink. Install a 1 Gbps AWS Direct Connect connection between the on-premises network and AWS. To send data from on-premises to Kinesis Data Firehose, use the PrivateLink endpoint.
18
Amazon S3 Standard
19
Create Amazon Route 53 records with a geolocation routing policy.
20
Add an Amazon CloudFront distribution in front of the ALB, Add Amazon Aurora Replicas
21
Modify the Auto Scaling group to use two instances across each of three Availability Zones.
22
AWS DataSync over AWS Direct Connect.
23
Update the permission policy on the SQS queue to grant the sqs:SendMessage permission to the partner’s AWS account.
24
Use Amazon CloudFront with the S3 bucket as its origin
25
On-Demand Capacity Reservations
26
Amazon SNS
27
Connect the backup applications to an AWS Storage Gateway using an iSCSI virtual tape library (VTL).
28
Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth.
29
Create a gateway VPC endpoint and add an entry to the route table
30
AWS Lambda, Amazon DynamoDB
31
AWS DataSync
32
Store the data in an Amazon EFS filesystem. Mount the file system on the application instances
33
Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones
34
Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages
35
Create a VPC peering connection between VPC-TEST1 and VPC-TEST2.
36
Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use an Amazon RDS MySQL database with Multi-AZ enabled.
37
Encrypt a snapshot from the master DB instance, create a new encrypted master DB instance, and then create an encrypted cross-region Read Replica
38
Create a gateway endpoint for DynamoDB, Create a route table entry for the endpoint
39
Hibernate the instance outside business hours. Start the instance again when required.
40
Process and store the images using AWS Snowball Edge devices.
41
Elastic Fabric Adapter (EFA)
42
Amazon FSx
43
Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Provision 64,000 IOPS for the volume.
44
Create an EC2 Auto Scaling group and Application Load Balancer that span multiple AZs. Create new public and private subnets in a different AZ. Migrate the database to an Amazon RDS Multi-AZ deployment.
45
Amazon FSx
46
Configure a static website using Amazon S3 and create a Route 53 failover routing policy.
47
Set a password policy for the entire AWS account.
48
Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance.
49
Use Amazon CloudFront with a custom origin pointing to the on-premises servers.
50
Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in ap-southeast-2
51
Use an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3.
52
Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically, Configure a Network Load Balancer in front of the EC2 instances
53
Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS
54
Enable MFA Delete on the S3 bucket. Enable versioning on the S3 bucket.
55
Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs, Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment
56
Provide an Amazon Simple Queue Service (Amazon SQS) queue for the sender and processor applications. Set up a dead-letter queue to collect failed messages.
57
Launch the containers on Amazon Elastic Kubernetes Service (EKS) and EKS worker nodes.
58
S3 Intelligent-Tiering
59
Use an AWS Storage Gateway volume gateway to replace the block storage. Use an AWS Storage Gateway file gateway to replace the NFS storage.
60
Create an IAM policy with permissions to DynamoDB and assign it to a task using the taskRoleArn parameter
61
Create an Amazon EFS file system with mount targets in each Availability Zone. Configure the application instances to mount the file system.
62
Amazon EBS General Purpose SSD (gp2)
63
Create an Amazon CloudFront distribution and set the price class to use only the U.S., Canada, and Mexico.
64
On-demand capacity reservations for the development environment, Use Reserved instances for the production environment
65
Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.
66
Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached.
67
Configure the route table for the private subnet to route outbound traffic to an AWS Network Firewall firewall, and then configure domain list rule groups.
68
Create an Aurora Replica and use the replica endpoint for reporting.
69
Create a read replica of the primary database and instruct the business analysts to direct queries to the replica.
70
Configure DX connections at multiple DX locations.
71
Modify the Auto Scaling group to use four instances across each of two Availability Zones
72
Set up an HTTPS endpoint in Amazon API Gateway. To process the messages and save the results to Amazon DynamoDB, configure an API Gateway endpoint to invoke an AWS Lambda function.
73
Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution
74
Update the script to copy data to an AWS Storage Gateway file gateway virtual appliance instead of the on-premises file server.
75
Use AWS Storage Gateway to move the existing data to Amazon S3. Use AWS CloudTrail to log management events.
76
Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
77
Use Amazon ElastiCache to cache the database layer.
78
Use AWS PrivateLink to expose the application as an endpoint service, Create a Network Load Balancer (NLB)
79
Amazon FSx for Lustre with Amazon S3.
80
Create an Amazon FSx for Lustre file system. Connect the file system to the origin server. Ensure that the file system is connected to the application server.
81
Amazon CloudFront
82
Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon S3.
83
Enable Amazon DynamoDB Streams on the table. Use triggers to publish changes to a single Amazon Simple Notification Service (Amazon SNS) topic that consumers can subscribe to.
84
Amazon EC2 instance store
85
Transition the objects to ONEZONE_IA after 30 days. Expire the objects after 90 days.
86
Move the documents and media files to an Amazon FSx for Windows File Server file system.
87
Migrate the account using the AWS Organizations console
88
Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue
89
Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
90
Migrate the databases to Amazon Aurora Serverless (Aurora MySQL).
91
Use client-side encryption with a master key stored in AWS KMS.
92
Set a permissions boundary on the developer IAM role that denies attaching administrator access.
93
Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period
94
Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0 and to allow outbound traffic on port 1433 to the RDS instance.
95
Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn.
96
Amazon Redshift for both use cases
97
Order 7 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier
98
Enable Amazon RDS Proxy for the RDS database.
99
Create an IAM role with least privilege permissions and attach it to the EC2 instance profile.
100
Migrate data using AWS Snowball. Provision an AWS VPN initially and order an AWS Direct Connect connection
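The SCP in answer 4 ("deny all but the specific instance types") can be sketched as a policy document. This is a minimal sketch only: the allowed instance types and the statement ID below are hypothetical examples, not part of the original answer.

```python
import json

# Minimal SCP sketch: deny ec2:RunInstances for any instance type other
# than the approved ones. The allowed types here are hypothetical.
ALLOWED_INSTANCE_TYPES = ["t3.micro", "t3.small"]

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllButApprovedInstanceTypes",  # hypothetical Sid
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {"ec2:InstanceType": ALLOWED_INSTANCE_TYPES}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because the statement denies rather than allows, launches of any non-listed type are blocked regardless of what IAM policies in the member accounts permit.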
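The queue policy in answer 23 (grant `sqs:SendMessage` to the partner's AWS account) might look like the following sketch. The account ID and queue ARN are placeholder values, not taken from the original question.

```python
import json

# Sketch of an SQS resource policy granting a partner account permission
# to send messages. Account ID and queue ARN are hypothetical placeholders.
PARTNER_ACCOUNT_ID = "111122223333"
QUEUE_ARN = "arn:aws:sqs:us-east-1:444455556666:orders-queue"

queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPartnerSendMessage",  # hypothetical Sid
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{PARTNER_ACCOUNT_ID}:root"},
            "Action": "sqs:SendMessage",
            "Resource": QUEUE_ARN,
        }
    ],
}

print(json.dumps(queue_policy, indent=2))
```

Scoping the `Principal` to the partner account's root ARN delegates the grant to that account; the partner then controls which of its own principals may actually send.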
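The lifecycle rule in answer 85 (transition to ONEZONE_IA after 30 days, expire after 90) can be written out as an S3 lifecycle configuration. A minimal sketch; the rule ID is a hypothetical name and the empty prefix filter (apply to all objects) is an assumption.

```python
import json

# Sketch of the S3 lifecycle rule: One Zone-IA at day 30, expire at day 90.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "onezone-then-expire",  # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},    # assumed: apply to every object
            "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            "Expiration": {"Days": 90},
        }
    ]
}

print(json.dumps(lifecycle_configuration, indent=2))
```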
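The security group rules in answer 94 can be sketched as plain rule definitions. The group IDs below are hypothetical placeholders and no AWS API call is made; this only shows the rule shapes implied by the answer.

```python
# Sketch of the two-tier security group rules from answer 94.
# Group IDs are hypothetical placeholders.
WEB_TIER_SG = "sg-0webtier000000000"
DB_TIER_SG = "sg-0dbtier0000000000"

# Web tier: allow inbound HTTPS (443) from anywhere.
web_ingress = [{
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
}]

# Database tier: allow inbound SQL Server (1433) only from the web tier
# security group, referencing the group rather than an IP range.
db_ingress = [{
    "IpProtocol": "tcp",
    "FromPort": 1433,
    "ToPort": 1433,
    "UserIdGroupPairs": [{"GroupId": WEB_TIER_SG}],
}]

print(web_ingress, db_ingress)
```

Referencing the web tier's security group in the database rule (instead of a CIDR) means instances can scale in and out of the web tier without any rule changes.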