Question List
1
Create a new cross-account IAM role in the Production account with write access to the S3 bucket. Modify the build pipeline to assume this role to upload the files to the Production Account.
2
Store each incoming record in an Amazon DynamoDB table. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 90 days.
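A minimal sketch of that TTL setup in Python with boto3, assuming a hypothetical table named IncomingRecords and a numeric expires_at attribute:

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# Turn on TTL for the table; the named attribute must hold an epoch timestamp.
dynamodb.update_time_to_live(
    TableName="IncomingRecords",  # hypothetical table name
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each incoming record carries a timestamp 90 days in the future, after
# which DynamoDB expires the item automatically.
dynamodb.put_item(
    TableName="IncomingRecords",
    Item={
        "record_id": {"S": "example-record-1"},  # hypothetical key schema
        "expires_at": {"N": str(int(time.time()) + 90 * 24 * 60 * 60)},
    },
)
```

TTL deletion is asynchronous, so expired items may remain readable briefly before DynamoDB removes them at no extra cost.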
3
Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at each OU level. Leave the default AWS managed SCP at the root level. For any specific exceptions for an OU, remove the standard deny list SCP and add a new deny list SCP for that OU.
4
Set up a 1 Gbps AWS Direct Connect connection. Then, provision a private virtual interface, and use AWS Application Migration Service (MGN) to migrate the VMs into Amazon EC2.
5
Use AWS Elastic Beanstalk and create a secondary environment configured as a deployment target for the CI/CD pipeline. To deploy, swap the staging and production environment URLs.
6
Use GitHub webhooks to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.
7
Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, log in as a CO, and set the quorum minimum value to two using the setMValue command. Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, log in as a CO, and register a key for signing with the registerMofnPubKey command.
8
Use AWS Lambda to create daily EBS snapshots and copy them to the disaster recovery Region. Implement an Aurora Replica in the DR Region. Use Amazon Route 53 with an active-passive failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery Region.
9
Use Amazon CloudFront with Amazon S3 to host the web application. Use AWS AppSync to build the application APIs. Use Amazon Cognito groups for RBAC. Authorize data access by leveraging Cognito groups in AWS AppSync resolvers.
10
Create an Auto Scaling group of Amazon EC2 instances across three Availability Zones behind an Application Load Balancer. Create an Amazon Aurora PostgreSQL database in one AZ and add Aurora Replicas in two more AZs.
11
Enable versioning for the AWS Lambda function and associate an alias for every new version. Use the AWS CLI ‘update-alias’ command with the ‘routing-config’ parameter to distribute the load.
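A sketch of the same weighted routing through boto3 (the update-alias CLI command with the routing-config parameter maps to this call); the function name, alias, and version numbers are assumptions:

```python
import boto3

lam = boto3.client("lambda")

# Keep version 1 as the primary target of the "live" alias and send 10%
# of invocations to version 2 (canary-style traffic shifting).
lam.update_alias(
    FunctionName="my-function",  # hypothetical function name
    Name="live",                 # hypothetical alias
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
)
```

Raising the weight step by step, then pointing FunctionVersion at the new version and clearing RoutingConfig, completes the shift.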
12
Verify that the IAM role has permission to decrypt the referenced KMS key.
13
Create an AWS WAF web ACL with a rule to allow access from the IP addresses used by the companies. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
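One possible boto3 sketch of the usage-plan portion; the plan name, API ID, stage, and limits are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# Usage plan with a steady-state throttle and a monthly request quota,
# bound to one deployed API stage.
plan = apigw.create_usage_plan(
    name="partner-plan",                                   # hypothetical name
    throttle={"rateLimit": 100.0, "burstLimit": 200},      # requests per second
    quota={"limit": 500000, "period": "MONTH"},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # hypothetical API
)

# API key handed to the partner companies, attached to the plan so their
# requests are metered against it.
key = apigw.create_api_key(name="partner-key", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```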
14
Store the expense claim data in Amazon S3. Use Amazon Athena and Amazon QuickSight to generate the reports using Amazon S3 as the data source. Deploy the application front end to an Amazon S3 bucket served by Amazon CloudFront. Deploy the application backend using Amazon API Gateway with an AWS Lambda proxy integration.
15
Configure an Aurora global database for storage-based cross-Region replication. Use Amazon S3 with cross-Region replication for static content and resources and create Amazon CloudFront distributions. Use Amazon Route 53 with a latency-based routing policy. Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions.
16
Deploy an Amazon S3 File Gateway, configuring it to store both patient records and diagnostic images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA), accessible via SMB.
17
Enable AWS CloudTrail and keep all CloudTrail trails and logs in the management account. Create user accounts in the Production and Development accounts.
18
In each regional account, establish the SecurityAudit IAM role and grant permission to the central account to assume this role.
19
Configure a Multi-AZ Auto Scaling group using the application's AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Create an Amazon Kinesis Data Firehose with a destination of the third-party auditing application. Create a web ACL in AWS WAF, associate it with the ALB, and enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the web ACL as the subscriber.
20
Establish a new Amazon Elastic File System (Amazon EFS) using the Max I/O performance mode and mount this EFS file system on each EC2 instance in the cluster.
21
Deploy each application to a single-instance AWS Elastic Beanstalk environment without a load balancer.
22
Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected.
23
Set up a monitoring system in the organization's central account using AWS Budgets. Focus on tracking the hours of EC2 instance operation, setting a monitoring interval to daily. Define a budget limit that is 15% above the 45-day average usage of EC2, as determined by AWS Cost Explorer, and configure alerts for the architecture team when this limit is reached.
24
Enable resource sharing from the AWS Organizations management account. Create a resource share in AWS Resource Access Manager in the operations account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.
25
Use the AWS Application Discovery Service agent for data collection on physical servers and Hyper-V. Use the AWS Agentless Discovery Connector for data collection on VMware. Store the collected data in Amazon S3. Query the data with Amazon Athena. Generate reports by using Amazon QuickSight.
26
Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and AWS Lambda for processing the images.
27
Migrate the database to an Amazon RDS Aurora MySQL configuration. Host the web application on an Auto Scaling configuration of Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.
28
Create a bucket policy that denies any unencrypted operations in the S3 bucket that the web application uses. Turn on S3 server-side encryption for the S3 bucket in use. Configure redirection of HTTP requests to HTTPS requests in CloudFront.
29
Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
30
Use Auto Scaling groups for the web application and use DynamoDB auto scaling.
31
Use Auto Scaling groups for the EC2 instances and enable RDS auto scaling to dynamically adjust the database capacity based on demand.
32
Migrate the NAS data to AWS using AWS Storage Gateway. Migrate the virtual machines with AWS Application Migration Service.
33
Set up an S3 gateway VPC endpoint in the VPC. Attach an endpoint policy to the endpoint to allow the required actions on the S3 bucket.
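A sketch of creating the gateway endpoint with a scoped endpoint policy in boto3; the VPC ID, route table, Region, bucket name, and action list are assumptions:

```python
import json

import boto3

ec2 = boto3.client("ec2")

# Endpoint policy limited to the required actions on the one bucket.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",       # hypothetical bucket
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",               # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",    # Region is an assumption
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],     # hypothetical route table
    PolicyDocument=json.dumps(endpoint_policy),
)
```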
34
Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF) format. Create an Amazon S3 bucket to store the image in the destination AWS Region. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import command.
35
Use AWS Budgets to create a budget for RI coverage and set the threshold to 70%. Configure an alert that notifies the DevOps team.
36
Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe.
37
Create a VPC Endpoint Service that accepts TCP traffic and host it behind a Network Load Balancer. Enable access to the IT services over the DX connection.
38
Export the data from the DB instance and import the data into an unencrypted DB instance.
39
Create a customer managed policy document for each project that requires access to AWS resources. Specify full control of the resources that belong to the project. Attach the project-specific policy document to an IAM group. Change the group membership when developers change projects. Update the policy document when the set of resources changes.
40
Use AWS CodeBuild for automated testing. Use CloudFormation change sets to evaluate changes ahead of deployment. Use AWS CodeDeploy to leverage blue/green deployment patterns.
41
Integrate an Amazon ElastiCache for Redis layer to cache database query results. Update the Lambda functions to retrieve data from this cache when available.
42
The ECS task IAM role was modified.
43
Create an Auto Scaling group for the front end with a combination of Reserved Instances and Spot Instances to reduce costs. Convert the tables in the Oracle database into Amazon DynamoDB tables.
44
Use Application Auto Scaling to scale out write capacity on the DynamoDB table based on a schedule.
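A minimal boto3 sketch, assuming a hypothetical Orders table and a daily peak starting at 08:00 UTC:

```python
import boto3

aas = boto3.client("application-autoscaling")

TABLE = "table/Orders"  # hypothetical table resource ID

# Register the table's write capacity as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=TABLE,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=1000,
)

# Raise the write-capacity floor shortly before the known peak window.
aas.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ResourceId=TABLE,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    ScheduledActionName="pre-peak-write-scale-out",
    Schedule="cron(0 8 * * ? *)",  # 08:00 UTC daily; adjust to the real peak
    ScalableTargetAction={"MinCapacity": 500, "MaxCapacity": 1000},
)
```

A second scheduled action after the peak would lower MinCapacity again so the table is not over-provisioned all day.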
45
Implement a blue/green deployment strategy.
46
The bucket has the BlockPublicAcls setting set to TRUE.
47
Configure the security group on the interface endpoint to allow connectivity to the AWS services.
48
Create a VPN connection between the company’s corporate network and the VPC. Configure security groups for the EC2 instances to only allow traffic from the VPN connection.
49
Configure an AWS Glue crawler to crawl the databases and create tables in the AWS Glue Data Catalog. Create an AWS Glue ETL job that loads data from the RDS databases to Amazon S3. Use Amazon Athena to run the queries.
50
Set the authorization to AWS_IAM for the API Gateway method. Create a permissions policy that grants execute-api:Invoke permission on the REST API resource and attach it to a group containing the IAM user accounts.
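A sketch of the permissions-policy half in boto3; the policy name, group name, and the ARN's Region, account, API ID, and stage are placeholders:

```python
import json

import boto3

iam = boto3.client("iam")

# Identity-based policy allowing invocation of every method and path on
# the API's prod stage.
policy = iam.create_policy(
    PolicyName="InvokeInternalApi",  # hypothetical policy name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/prod/*/*",
        }],
    }),
)

# Attach the policy to the group that holds the IAM user accounts.
iam.attach_group_policy(GroupName="api-users", PolicyArn=policy["Policy"]["Arn"])
```

Group members must then sign their API requests with Signature Version 4 credentials.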
51
Create an Amazon EventBridge rule that triggers an AWS Lambda function to use AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish a message to an Amazon SNS topic to alert the cloud team.
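A rough sketch of the Lambda handler, assuming a Business or Enterprise Support plan (the Trusted Advisor API requires one), a hypothetical SNS topic ARN, and the commonly documented metadata column order for the Service Limits check (region, service, limit name, limit amount, current usage, status) — verify that ordering before relying on it:

```python
import boto3

# The AWS Support API is only available in us-east-1.
support = boto3.client("support", region_name="us-east-1")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:limit-alerts"  # hypothetical topic

def handler(event, context):
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    limits_check = next(c for c in checks if c["name"] == "Service Limits")
    result = support.describe_trusted_advisor_check_result(
        checkId=limits_check["id"]
    )["result"]

    for res in result.get("flaggedResources", []):
        meta = res["metadata"]  # assumed: [region, service, limit, max, usage, status]
        if not meta[3] or not meta[4]:
            continue
        limit, usage = float(meta[3]), float(meta[4])
        if limit > 0 and usage / limit > 0.8:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Message=f"{meta[1]} '{meta[2]}' at {usage:.0f}/{limit:.0f} in {meta[0]}",
            )
```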
52
Use AWS Compute Optimizer. Call the “ExportLambdaFunctionRecommendations” operation for the Lambda functions. Export the .csv file to an S3 bucket. Create an Amazon EventBridge rule to schedule the Lambda function to run every 2 weeks.
53
An inbound rule for port 443 from source 10.1.0.0/24. An outbound rule for ports 1024 through 65535 to destination 10.1.0.0/24.
54
Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify the code. Roll back if an Amazon CloudWatch alarm is triggered.
55
Use multipart upload for the backup jobs. Create a lifecycle policy for the incomplete multipart uploads on the S3 bucket to prevent new failed uploads from accumulating.
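A boto3 sketch of the lifecycle rule, with a hypothetical bucket name and a 7-day abort window:

```python
import boto3

s3 = boto3.client("s3")

# Abort multipart uploads that have not completed within 7 days so the
# parts left behind by failed backup jobs stop accruing storage charges.
s3.put_bucket_lifecycle_configuration(
    Bucket="backup-bucket",  # hypothetical bucket name
    LifecycleConfiguration={"Rules": [{
        "ID": "abort-stale-multipart-uploads",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # empty prefix applies the rule bucket-wide
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    }]},
)
```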
56
The throttle limit on the REST API is configured too low. During busy periods some requests are being throttled and are not reaching the Lambda function.
57
Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for PHP. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs. Configure the Aurora MySQL DB cluster to generate slow query logs by setting parameters in the parameter group.
58
Configure two private subnets in the Neptune VPC and route internet traffic via a NAT gateway. Deploy the Lambda functions in these private subnets. Create two new subnets in the Neptune VPC, specifically for hosting the Lambda functions. Implement a VPC endpoint for DynamoDB to facilitate direct access from these subnets.
59
Configure Lambda to use the stored database credentials in AWS Secrets Manager and enable automatic rotation. Create encrypted database credentials in AWS Secrets Manager for the Amazon RDS database.
60
Create an IAM role with the AmazonSSMManagedInstanceCore managed policy attached. Attach the IAM role to all the EC2 instances. Remove all security group rules attached to the EC2 instances that allow inbound TCP on port 22. Have the engineers install the AWS Systems Manager Session Manager plugin for their devices and remotely access the instances by using the start-session API call from Systems Manager.
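A sketch of the role and instance-profile setup in boto3 (resource names are hypothetical); engineers then connect with aws ssm start-session --target <instance-id>, so no inbound port 22 rule is needed:

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy letting EC2 instances assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="SessionManagerRole",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# The managed policy the SSM agent needs to register with Systems Manager.
iam.attach_role_policy(
    RoleName="SessionManagerRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# EC2 attaches roles to instances via an instance profile.
iam.create_instance_profile(InstanceProfileName="SessionManagerProfile")
iam.add_role_to_instance_profile(
    InstanceProfileName="SessionManagerProfile", RoleName="SessionManagerRole"
)
```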
61
Use the Amazon CloudWatch Logs agent to stream log messages directly to CloudWatch Logs. Configure the batch_count parameter to 1.
62
Deploy Amazon EC2 instances in a placement group. Use Amazon EC2 instances that support Elastic Fabric Adapter (EFA).
63
Download the Lambda function package from the source account. Use the deployment package and create new Lambda functions in the target account. Share the Aurora DB cluster with the target account by using AWS Resource Access Manager (AWS RAM). Grant the target account permission to clone the Aurora DB cluster.
64
Create an Auto Scaling group for the EC2 instances and use an Application Load Balancer to direct incoming requests. Use Amazon DynamoDB to save the authenticated connection details.
65
Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount an Amazon EFS file system with the static data to the EC2 instances at launch time.
66
Identify the IP addresses in Amazon S3 requests with Amazon S3 access logs and Amazon Athena. Use AWS Config with Auto Remediation to remediate any changes to S3 bucket policies. Configure alerting with AWS Config and Amazon SNS.
67
Attach an endpoint policy to the gateway endpoint that restricts access to the specific S3 bucket. Assign an IAM role to the EC2 instances and attach a policy to the S3 bucket that grants access only to this role.
68
Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If any credentials are found, disable them and notify the user.
69
Re-architect the application. Load the data into Amazon S3. Use AWS Glue to transform the data. Store the table schema in an AWS Glue Data Catalog. Use Amazon Athena to query the data.
70
Implement AWS Application Discovery Service with the installation of its data collection agent on each server in the organization's data center to gather detailed server usage and network data.
71
Set up a standby ECS cluster and service on Fargate in a different Region. Create a cross-Region RDS read replica in this new Region. Design an AWS Lambda function to promote the read replica to a primary database and reconfigure Route 53 to reroute traffic to the standby ECS cluster. Adjust the EventBridge rule to include this Lambda function as a target.
72
Use AWS DMS to migrate the database to Amazon RDS. Replicate the client VMs into AWS using AWS SMS. Create Route 53 A records for each client VM.
73
Use Amazon ECS Spot instances and configure Spot Instance Draining.
74
Check if the S3 bucket is encrypted using AWS KMS. Check if the S3 block public access option is enabled on the S3 bucket.
75
Change the default encryption to server-side encryption with AWS KMS managed encryption keys (SSE-KMS) on the S3 bucket. Set an S3 bucket policy to deny unencrypted PutObject requests. Use the AWS CLI to re-upload all objects in the S3 bucket.
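A boto3 sketch of the first two steps, using the common deny pattern on the s3:x-amz-server-side-encryption condition key; the bucket name and KMS key ARN are placeholders:

```python
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "app-data-bucket"  # hypothetical bucket name

# Default encryption: objects uploaded without encryption headers are
# encrypted with the specified KMS key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={"Rules": [{
        "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example",
        },
    }]},
)

# Deny PutObject requests that do not request SSE-KMS encryption.
s3.put_bucket_policy(
    Bucket=BUCKET,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"StringNotEquals": {
                "s3:x-amz-server-side-encryption": "aws:kms",
            }},
        }],
    }),
)
```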
76
Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Set up S3 cross-Region replication from us-west-1 to eu-central-1.
77
When the RDS Aurora MySQL database is fully synchronized, change the DNS entry to point to the Aurora DB instance and stop replication. Launch an RDS Aurora MySQL DB instance and load the database data from the Snowball export. Configure replication from the on-premises database to the RDS Aurora instance using the VPN. Export the data from the database using database-native tools and import the data to AWS using AWS Snowball.
78
Create an SSL certificate by using AWS Certificate Manager (ACM). Include the domains as Subject Alternative Names. Create an Amazon CloudFront distribution and deploy a Lambda@Edge function. Create an Application Load Balancer that includes HTTP and HTTPS listeners.
79
Create an IAM account for the new employee and add the account to the security team IAM group. Set a permissions boundary that grants access to manage Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. When the employee takes on new management responsibilities, add the additional services to the permissions boundary IAM policy.
80
Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use Direct Connect Gateway to access data in other AWS Regions.
81
Create an AWS Direct Connect (DX) gateway and attach the DX gateway to a transit gateway. Enable route propagation with BGP. Create an AWS transit gateway and add attachments for all of the VPCs. Configure the route tables in the VPCs to send traffic to the transit gateway.
82
Deploy a scaled-down version of the production environment in a separate AWS Region, ensuring the minimum distance requirements are met. The DR environment should include one instance for the web tier and one instance for the application tier. Create another database instance and configure source-replica replication for MySQL. Configure Auto Scaling for the web and app tiers so they can scale based on load. Use Amazon Route 53 to switch traffic to the DR Region.
83
Create an AWS Service Catalog product from the environment template and add a launch constraint to the product with the existing role. Give users in the testing team permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
84
Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for internal.company.local that point to the inbound resolver.
85
Create an interface VPC endpoint for API Gateway in the VPC. Enable private DNS naming for the VPC endpoint and configure an API resource policy that allows access from the VPC endpoint. Use the API endpoint's DNS names to access the API from the EC2 instance.
86
An inbound rule in ALB-SG allowing port 80 from source 0.0.0.0/0. An inbound rule in WebAppSG allowing port 80 from source ALB-SG.
87
Refactor the application onto AWS Lambda functions. Use AWS Step Functions to orchestrate the application.
88
Create a separate AWS account for identities where IAM user accounts can be created. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
89
Configure the S3 bucket policy to permit access using an aws:sourceVpce condition to match the S3 endpoint ID. Configure the EMR cluster to use an AWS CloudHSM appliance for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3.
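A sketch of the bucket-policy step in boto3, expressed as the usual deny-unless-endpoint pattern; the bucket name and endpoint ID are placeholders:

```python
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "emr-data-bucket"  # hypothetical bucket name

# Deny every request that does not arrive through the gateway endpoint,
# restricting bucket access to traffic from inside the VPC.
s3.put_bucket_policy(
    Bucket=BUCKET,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AccessViaVpcEndpointOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {
                "aws:sourceVpce": "vpce-0123456789abcdef0",  # hypothetical endpoint ID
            }},
        }],
    }),
)
```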
90
Create a separate pipeline in CodePipeline and trigger execution using CodeCommit branches. Use AWS CodeBuild for running unit tests and stage the artifacts in an S3 bucket in a separate testing account.
91
Configure the application to send Set-Cookie headers to the viewer and control access to the files using signed cookies.
92
Create an Amazon SQS queue. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.
93
Set up the tasks using the awsvpc network mode for enhanced network isolation and control. Attach security groups to the individual tasks and utilize IAM roles specifically designed for tasks to access other AWS resources.
94
Configure an Amazon RDS instance with a cross-Region read replica in an alternative Region. Should the primary Region fail, promote the read replica to become the new primary database.
95
Take a snapshot of the EBS volume by using Amazon Data Lifecycle Manager (Amazon DLM). Use the EBS direct APIs to copy the data from the snapshot to Amazon S3.
96
Upload the data to the S3 bucket using S3 Transfer Acceleration.
97
Launch the Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address to the NAT gateway that can be whitelisted on the external API service.
98
Set the value of Evaluate Target Health to Yes on the latency alias resources for both us-east-2 and us-west-1. Write a custom health check that verifies successful access to the database endpoints in each Region. Add the health check within the latency-based routing policy in Amazon Route 53.
99
Add the member accounts to a single organizational unit (OU). Create a service control policy (SCP) that denies access to the specific set of services and attach it to the OU.
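A boto3 sketch of that SCP workflow; the policy name, denied services, and OU ID are placeholders, and service control policies must already be enabled for the organization:

```python
import json

import boto3

org = boto3.client("organizations")

# Deny-list SCP: accounts under the OU lose access to the listed services
# regardless of their IAM permissions.
scp = org.create_policy(
    Name="DenyRestrictedServices",  # hypothetical policy name
    Description="Blocks services the member accounts must not use",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["redshift:*", "sagemaker:*"],  # example blocked services
            "Resource": "*",
        }],
    }),
)

org.attach_policy(
    PolicyId=scp["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-ab12-examplei01",  # hypothetical OU ID
)
```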
100
Install a second DX connection from a different network carrier and attach it to the same virtual private gateway as the first DX connection.