Question List
1
Stream the environmental data to Amazon Kinesis Data Streams, analyze it using an AWS Lambda function, and configure Amazon SNS to send immediate alerts to the management team if anomalies are detected.
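The consumer side of answer 1 can be sketched as a Lambda handler for Kinesis events. The `temperature_c` field, the threshold, and the SNS topic are illustrative assumptions, not part of the original answer; the real publish call is shown commented out so the sketch stays self-contained.

```python
import base64
import json

# Hypothetical threshold and field name -- tune to the actual sensor schema.
TEMP_THRESHOLD_C = 60.0

def decode_record(record):
    """Decode one Kinesis record payload (base64-encoded JSON)."""
    payload = base64.b64decode(record["kinesis"]["data"])
    return json.loads(payload)

def detect_anomalies(event):
    """Return the readings whose temperature exceeds the threshold."""
    anomalies = []
    for record in event["Records"]:
        reading = decode_record(record)
        if reading.get("temperature_c", 0) > TEMP_THRESHOLD_C:
            anomalies.append(reading)
    return anomalies

def handler(event, context):
    anomalies = detect_anomalies(event)
    if anomalies:
        # In the deployed function this would alert the management team, e.g.:
        # boto3.client("sns").publish(TopicArn=ALERT_TOPIC_ARN,
        #                             Message=json.dumps(anomalies))
        pass
    return {"anomalies": len(anomalies)}
```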
2
Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization based on the peak usage.
Provision a DynamoDB Accelerator (DAX) cluster with the correct number and type of nodes. Tune the item and query cache configuration for an optimal user experience.
3
Stream the data into an Amazon Kinesis data stream from API Gateway and process the data in batches.
Increase the memory available to the Lambda functions.
4
Use an Amazon Aurora global database with the primary in the active Region and the secondary in the failover Region.
5
Use AWS Organizations to create a single organization in the parent account with all features enabled. Then, invite each business unit’s AWS account to join the organization.
Create an SCP that allows only approved services and features, then apply the policy to the business unit AWS accounts.
6
AWS Application Discovery Service, AWS Cloud Adoption Readiness Tool (CART), AWS Migration Hub
7
Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
8
Deploy an AWS WAF web ACL that includes a rule group that blocks the attack traffic. Associate the web ACL with the Amazon CloudFront distribution.
Create an Amazon CloudFront distribution with the ALB as the origin and configure a custom header and secret value. Configure the ALB to conditionally forward traffic only if the header and value match.
9
Configure the Kinesis Data Firehose delivery stream to partition the data in Amazon S3 by date and event type. Redefine the Athena table to include these partitions and modify the queries to specifically target relevant partitions.
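The partitioning scheme in answer 9 can be sketched with Hive-style `date=`/`event_type=` keys, which both Firehose dynamic partitioning and Athena partition projection understand. The `events` table name is hypothetical.

```python
from datetime import date

def partition_prefix(day: date, event_type: str) -> str:
    """Hive-style S3 prefix that the Firehose delivery stream would write to,
    e.g. date=2024-01-02/event_type=click/."""
    return f"date={day.isoformat()}/event_type={event_type}/"

def pruned_query(day: date, event_type: str) -> str:
    """Athena query that targets only the relevant partitions, so Athena
    scans one prefix instead of the whole bucket."""
    return (
        "SELECT * FROM events "  # 'events' is a hypothetical table name
        f"WHERE date = '{day.isoformat()}' AND event_type = '{event_type}'"
    )
```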
10
Create an authorization to associate the private hosted zone in the management account with the new VPC in the production account.
Associate a new VPC in the production account with a hosted zone in the management account. Delete the association authorization in the management account.
11
Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the organization's deny list SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS DMS are complete.
12
Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then expire the documents from Amazon S3 Glacier that are more than 7 years old.
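Answer 12's lifecycle policy, expressed as the configuration dict that could be passed to S3's `put_bucket_lifecycle_configuration`; 90 and 2555 days are approximations of 3 months and 7 years.

```python
def document_lifecycle_rule():
    """Lifecycle rule: transition to Glacier after ~3 months, expire after
    ~7 years. S3 requires the expiration to be later than the transition."""
    return {
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},  # ~7 years
            }
        ]
    }
```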
13
Create an Amazon API Gateway HTTP API. Configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables.
14
Use an Amazon SQS FIFO queue to process messages in the correct order. Use Reserved Instances in multiple Availability Zones for processing.
15
Configure CloudFront to add a custom header to requests that it sends to the origin.
Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB.
Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only.
16
Create an Amazon SQS queue and decouple the application and database layers. Configure an AWS Lambda function to write items from the queue into the database.
17
Set up an AWS Client VPN endpoint, associate it with a subnet in the VPC, and configure a Client VPN self-service portal. Instruct the developers to connect using the Client VPN client.
18
Configure Service Control Policies (SCPs) within AWS Control Tower to disallow assigning public IP addresses to EC2 instances across all OUs.
19
Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.
20
Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load different resources based on the User-Agent HTTP header.
21
The trust policies of the IAM roles created for the federated users or groups set the SAML provider as the principal.
The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP.
The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
22
Create an AWS Budgets alert action to send an Amazon SNS notification when the budgeted amount is reached. Invoke an AWS Lambda function to terminate all services.
Use the AWS Budgets service to define a fixed monthly budget for each development account.
Create an SCP that denies access to expensive services. Apply the SCP to an OU containing the development accounts.
23
Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company.
Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers and add all users to the group. Apply launch constraints.
24
Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
Enable API caching on API Gateway to reduce the number of Lambda function invocations.
Enable Auto Scaling in DynamoDB.
25
A large number of reads and writes exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
26
Enable a cross-Region read replica for the RDS database. In the case of an outage, promote the replica to be a standalone DB instance. Point applications to the new DB endpoint and create a read replica to maintain high availability.
27
Use Amazon Inspector to run the CVE assessment package on the EC2 instances launched from the approved AMIs.
Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.
28
Create a new DX connection to the same Region. Provision a Direct Connect gateway and establish new private VIFs to a virtual private gateway in the VPCs in each Region.
29
Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization.
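The SCP in answer 29 is small enough to show in full; this sketch builds the policy JSON that would be attached to the organization root.

```python
import json

def ri_purchase_deny_scp() -> str:
    """SCP JSON denying Reserved Instance purchases across the organization.
    SCPs never grant permissions, so a single Deny statement is sufficient."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyRIPurchases",
                "Effect": "Deny",
                "Action": "ec2:PurchaseReservedInstancesOffering",
                "Resource": "*",
            }
        ],
    }
    return json.dumps(policy)
```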
30
Create an AWS Lambda function that will update user metadata. Create an Amazon SQS queue and configure it as an event source for the Lambda function. Update the web application to send jobs to the queue.
31
Configure the reserved concurrency limit for the new Lambda function. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
32
Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Allow the Discovery Agent to collect data for a period of time.
33
Deploy a hot standby of the application tiers to another Region.
Create a cross-Region Aurora MySQL Replica of the database.
34
Create an AWS WAF web ACL with a geo-match rule to block requests from outside the specified country. Associate this rule with the web ACL, and then attach the web ACL to the ALB.
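A sketch of the WAFv2 rule from answer 34: a geo-match statement wrapped in a NOT statement, so anything originating outside the allowed country is blocked. The country code `US` is illustrative.

```python
def geo_block_rule(allowed_country: str = "US") -> dict:
    """WAFv2 rule blocking requests that do NOT originate in allowed_country.
    This dict matches the Rules shape used by wafv2 create-web-acl."""
    return {
        "Name": "BlockOutsideCountry",
        "Priority": 0,
        "Statement": {
            "NotStatement": {
                "Statement": {
                    "GeoMatchStatement": {"CountryCodes": [allowed_country]}
                }
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "BlockOutsideCountry",
        },
    }
```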
35
Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database using Reserved Instances.
36
Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. Run the web service in both Regions as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record and a latency-based routing policy with health checks to distribute traffic between the two ALBs.
37
Perform multiple copy operations at one time by running each command from a separate terminal window, in separate instances of the Snowball client.
38
Implement an AWS Lambda function that initiates image processing in response to messages in the SQS queue.
Configure the mobile app to send image uploads directly to Amazon S3. Configure S3 to trigger an Amazon Simple Queue Service (Amazon SQS) standard queue message upon each upload.
Use Amazon Simple Notification Service (Amazon SNS) to send push notifications to the mobile app once the image processing is finished.
39
Connect an RDS Proxy connection pool to the reader endpoint of the Aurora database.
Move Lambda function code for opening the database connection outside of the event handler.
40
Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Configure a Lambda function to retrieve messages from the SQS queue and call the StartExecution API.
41
The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role's Amazon Resource Name (ARN), including the external ID in the IAM role's trust policy, when requesting access to perform the required tasks.
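The external-ID pattern in answer 41 lives in the role's trust policy; this sketch builds that document (account ID and external ID are placeholders). The `sts:ExternalId` condition is what mitigates the confused-deputy problem.

```python
def cross_account_trust_policy(customer_account_id: str,
                               external_id: str) -> dict:
    """Trust policy allowing the customer account to assume the role only
    when it supplies the agreed external ID in the AssumeRole call."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": f"arn:aws:iam::{customer_account_id}:root"
                },
                "Action": "sts:AssumeRole",
                "Condition": {
                    "StringEquals": {"sts:ExternalId": external_id}
                },
            }
        ],
    }
```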
42
Modify the existing VPC to include an Amazon-provided IPv6 CIDR block for the VPC and its subnets. For the public subnets, update the route tables to route IPv6 traffic (::/0) to the internet gateway. For the private subnets, update the route tables to route IPv6 traffic (::/0) to an egress-only internet gateway.
43
Use AWS Systems Manager Patch Manager to deploy patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch compliance reports.
44
Migrate the database to Amazon RDS for MySQL. Configure the RDS instance to use a Multi-AZ deployment.
Configure the application to store the user's session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between application instances.
Put the application instances in an Amazon EC2 Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes unhealthy.
45
Add a custom 'flag as spam' button to the Contact Control Panel (CCP) in Amazon Connect. This button triggers an AWS Lambda function to update call attributes and log the number in an Amazon DynamoDB table. Adapt the contact flows to reference these attributes and interact with the DynamoDB table for future call filtering.
46
Deploy Amazon EC2 instances in a cluster placement group.
Use Amazon EC2 instance types and AMIs that support EFA.
47
Create an AWS Service Catalog portfolio for each team. Add each team's Amazon Redshift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a product.
Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.
48
Use AWS DataSync to schedule a daily task that replicates data between the on-premises file share and Amazon FSx.
49
Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.
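The tag-enforcement SCP in answer 49 could look like this sketch. `ec2:RunInstances` is shown as one example action, and the tag keys are assumptions; the `Null` condition operator evaluates to true when the tag key is absent from the request.

```python
def require_tags_scp(required_tags=("cost-center", "project-id")) -> dict:
    """SCP denying ec2:RunInstances unless every required tag is supplied.
    One Deny statement per tag, so a request missing ANY tag is denied."""
    statements = [
        {
            "Sid": f"DenyMissingTag{i}",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            # Null=true means the tag key was not provided in the request.
            "Condition": {"Null": {f"aws:RequestTag/{tag}": "true"}},
        }
        for i, tag in enumerate(required_tags)
    ]
    return {"Version": "2012-10-17", "Statement": statements}
```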
50
Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
51
Create an AWS Lambda function triggered by Amazon EventBridge to monitor and automatically apply encryption to any newly created or existing unencrypted S3 buckets.
Establish an AWS Organizations structure, implement AWS Control Tower, and activate the necessary security guardrails. Consolidate all AWS accounts under this organization and organize them into Organizational Units (OUs) based on their function.
52
A canary deployment
53
Change the PIOPS volume to a 1-TB EBS General Purpose SSD (gp2) volume.
54
Configure AWS Secrets Manager for managing the database credentials, creating separate secret keys for the development and production environments. Enable automatic secret rotation. Pass the Secrets Manager secret ARNs to the Lambda functions through environment variables. Assign appropriate IAM roles to the Lambda functions for accessing the secrets.
55
Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
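Answer 55's setting maps to the `PublicAccessBlockConfiguration` passed to S3's `put_public_access_block`. Only `IgnorePublicAcls` is enabled here, to isolate the setting the answer calls for; the other three flags are shown at False as placeholders for whatever the bucket already uses.

```python
def public_access_block_config() -> dict:
    """PublicAccessBlock configuration for the bucket. IgnorePublicAcls
    makes S3 ignore any public ACLs already set on the bucket or objects."""
    return {
        "BlockPublicAcls": False,       # placeholder: keep existing value
        "IgnorePublicAcls": True,       # the setting the answer requires
        "BlockPublicPolicy": False,     # placeholder: keep existing value
        "RestrictPublicBuckets": False, # placeholder: keep existing value
    }
```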
56
Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a public subnet to route requests to the internet.
57
Use AWS CodePipeline to create a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeBuild. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
58
Define an outbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.
59
Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to round_robin.
60
Deploy the AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances using Session Manager, restricting access to users with permission to manage the instances.
61
Check for the permissions boundaries set for the IAM user.
Check the SCPs set at the organizational units (OUs).
62
Check the security group for the EC2 instances to ensure it allows ingress from the customer office.
63
Migrate the MySQL databases to Amazon RDS for MySQL using AWS DMS.
Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
64
Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates.
Apply environment, cost center, and application name tags to all resources that accept tags.
65
Use a Network Load Balancer (NLB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB's Elastic IP address.
Enable AWS Shield Advanced on all public-facing resources.
Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.
66
Run the web and application tiers in both Regions in an active/passive configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
Use Amazon DynamoDB with a global table across both Regions so reads and writes can occur in either location.
67
Add a security group rule to the ALB to allow traffic from the AWS managed prefix list for CloudFront only.
68
Configure the S3 bucket to use S3 Transfer Acceleration.
Redeploy the application to use Amazon S3 multipart upload.
69
Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
70
Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP.
71
Define the AWS resources using JavaScript or TypeScript. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates from the developers' code and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in CodePipeline.
72
Use Amazon Kinesis Data Streams to collect the inbound data, analyze the data with Kinesis clients, and save the results to an Amazon Redshift cluster using Amazon EMR.
73
Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an issue.
Enable Amazon S3 cross-Region replication on the buckets that contain images.
Enable DynamoDB global tables to achieve multi-Region table replication.
74
Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data. Implement the refactored API in Amazon API Gateway and enable API caching.
75
Update the CloudWatch Events rule to trigger on Amazon EC2 "Instance Launch Successful" and "Instance Terminate Successful" events for the Auto Scaling group used by the cluster.
Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.
Configure a Lambda function to retrieve messages from an Amazon SQS queue. Modify the Lambda function to retrieve a maximum of 10 messages then batch the messages by Amazon Route 53 API call type and submit. Delete the messages from the SQS queue after successful API calls.
76
Replace Amazon EFS with Amazon FSx for Lustre.
Enable an Elastic Fabric Adapter (EFA) on a supported EC2 instance type.
Ensure the HPC cluster is launched within a single Availability Zone.
77
Convert the Aurora Serverless v1 database to a multi-Region Aurora MySQL database, ensuring continuous data replication across the primary and a secondary Region. Use AWS SAM to script the application deployment in the secondary Region for rapid recovery.
78
Create an alias for new versions of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
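The weighted-alias mechanism in answer 78 maps to Lambda's `UpdateAlias` API (`update-alias` in the CLI). This sketch builds the request parameters; the function name, alias name, and versions are placeholders.

```python
def weighted_alias_update(function_name: str, alias: str,
                          stable_version: str, canary_version: str,
                          canary_weight: float) -> dict:
    """Parameters for lambda UpdateAlias that send canary_weight of traffic
    to the new version; the remainder goes to the alias's main version."""
    if not 0.0 <= canary_weight <= 1.0:
        raise ValueError("canary_weight must be between 0 and 1")
    return {
        "FunctionName": function_name,
        "Name": alias,
        "FunctionVersion": stable_version,
        "RoutingConfig": {
            # version -> fraction of invocations routed to it
            "AdditionalVersionWeights": {canary_version: canary_weight}
        },
    }
```

In practice these kwargs would be passed to `boto3.client("lambda").update_alias(**params)`, shifting `canary_weight` gradually toward 1.0 as confidence in the new version grows.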
79
Establish an AWS Direct Connect connection from the on-premises data center to AWS.
Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer Library to put the data into a Kinesis data stream.
Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
80
Configure on-demand capacity mode for the table to enable pay-per-request pricing for read and write requests.
81
Create a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
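The normalization logic from answer 81 is pure string handling, so it can be shown in full as a Python Lambda@Edge viewer-request function; lowercasing both names and values follows the answer's wording and is an assumption about the site's URLs being case-insensitive.

```python
from urllib.parse import parse_qsl, urlencode

def normalize_querystring(querystring: str) -> str:
    """Lowercase parameter names and values and sort by name, so equivalent
    URLs produce one cache key instead of many."""
    pairs = [(k.lower(), v.lower()) for k, v in parse_qsl(querystring)]
    return urlencode(sorted(pairs))

def handler(event, context):
    """Lambda@Edge viewer-request trigger: rewrite the query string before
    CloudFront computes the cache key."""
    request = event["Records"][0]["cf"]["request"]
    request["querystring"] = normalize_querystring(request["querystring"])
    return request
```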
82
Implement an Amazon Kinesis Data Firehose for ingesting sales transactions and process them using AWS Lambda functions before storing in an Amazon RDS instance.
83
Add a behavior to the CloudFront distribution for the path pattern and the origin of the static assets.
Add another origin to the CloudFront distribution for the static assets.
84
Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance.
Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data.
85
Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
86
Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.
Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the IAM identity provider.
87
Create an Amazon CloudFront distribution in front of the processed images bucket.
Replace the EC2 instance with AWS Lambda to run the image processing tasks.
88
Implement the REST API using Amazon API Gateway. Run the business logic in AWS Lambda. Store trader session data in Amazon DynamoDB with on-demand capacity.
89
Use an Amazon S3 static website for the web application. Store uploaded videos in an S3 bucket. Use S3 event notification to publish events to the SQS queue. Process the queue with an AWS Lambda functions that calls the Amazon Rekognition API to perform facial analysis.
90
Create an Amazon EventBridge rule with a pattern that looks for AWS CloudTrail events where the API calls involve the root user account. Configure an Amazon SQS queue as a target for the rule.
Update the Lambda function to poll the Amazon SQS queue for messages and to return successfully when the ticketing system API has processed the request.
91
Create an authorization to associate the private hosted zone in the Management account with the new VPC in the Production account.
Associate a new VPC in the Production account with a hosted zone in the Management account. Delete the association authorization in the Management account.
92
Create a DX connection to a second AWS Region. Use DynamoDB global tables to replicate data to the second Region. Modify the application to fail over to the second Region.
93
Create an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeBuild build.
Create an AWS CodeBuild project that pulls the latest container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.
Create an Amazon ECR repository for the image. Create an AWS CodeCommit repository containing code for the tool being deployed to the container image in Amazon ECR.
94
Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint and associate the security group with the endpoint.
95
Use Amazon WorkSpaces for providing cloud desktops. Connect it to the on-premises network via VPN, integrate with the on-premises Active Directory using an AD Connector, and set up a RADIUS server to enable MFA.
96
Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security.
Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer.
Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ.
97
Migrate the CRM system to Amazon EC2 instances.
Implement Amazon RDS to host the CRM's database.
98
Configure CodePipeline with a deployment stage using AWS CodeDeploy for blue/green deployments. After deploying the new version, monitor its performance and security, and use CodeDeploy's rollback feature in case of any issues.
99
Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.
100
Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named cloud.myservice.com and assign the NLB DNS name to the record set.
xj9 - 19628 - a
xj9 - 19628 - a
critical flaw · 98問 · 2年前xj9 - 19628 - a
xj9 - 19628 - a
98問 • 2年前xj9 - 19628 - b
xj9 - 19628 - b
critical flaw · 30問 · 2年前xj9 - 19628 - b
xj9 - 19628 - b
30問 • 2年前xj9 - 19628 - c
xj9 - 19628 - c
critical flaw · 99問 · 1年前xj9 - 19628 - c
xj9 - 19628 - c
99問 • 1年前xj9 - 19628 - d1
xj9 - 19628 - d1
critical flaw · 99問 · 1年前xj9 - 19628 - d1
xj9 - 19628 - d1
99問 • 1年前xj9 - 19628 - d2
xj9 - 19628 - d2
critical flaw · 98問 · 1年前xj9 - 19628 - d2
xj9 - 19628 - d2
98問 • 1年前1. Shattershot
1. Shattershot
critical flaw · 50問 · 1年前1. Shattershot
1. Shattershot
50問 • 1年前Conquest Book 1
Conquest Book 1
critical flaw · 100問 · 1年前Conquest Book 1
Conquest Book 1
100問 • 1年前k3ch - 2910116 - D1 - A
k3ch - 2910116 - D1 - A
critical flaw · 100問 · 1年前k3ch - 2910116 - D1 - A
k3ch - 2910116 - D1 - A
100問 • 1年前k3ch - 2910116 - D1 - B
k3ch - 2910116 - D1 - B
critical flaw · 65問 · 1年前k3ch - 2910116 - D1 - B
k3ch - 2910116 - D1 - B
65問 • 1年前k3ch - 2910116 - D2 - A
k3ch - 2910116 - D2 - A
critical flaw · 100問 · 1年前k3ch - 2910116 - D2 - A
k3ch - 2910116 - D2 - A
100問 • 1年前k3ch - 2910116 - D2 - B
k3ch - 2910116 - D2 - B
critical flaw · 55問 · 1年前k3ch - 2910116 - D2 - B
k3ch - 2910116 - D2 - B
55問 • 1年前k3ch - 2910116 - D3 - A
k3ch - 2910116 - D3 - A
critical flaw · 100問 · 1年前k3ch - 2910116 - D3 - A
k3ch - 2910116 - D3 - A
100問 • 1年前k3ch - 2910116 - D3 - B
k3ch - 2910116 - D3 - B
critical flaw · 63問 · 1年前k3ch - 2910116 - D3 - B
k3ch - 2910116 - D3 - B
63問 • 1年前k3ch - 2910116 - D4 - A
k3ch - 2910116 - D4 - A
critical flaw · 100問 · 1年前k3ch - 2910116 - D4 - A
k3ch - 2910116 - D4 - A
100問 • 1年前1. X-Tinction Agenda
1. X-Tinction Agenda
critical flaw · 100問 · 1年前1. X-Tinction Agenda
1. X-Tinction Agenda
100問 • 1年前2. X-Tinction Agenda
2. X-Tinction Agenda
critical flaw · 100問 · 1年前2. X-Tinction Agenda
2. X-Tinction Agenda
100問 • 1年前3. X-Tinction Agenda
3. X-Tinction Agenda
critical flaw · 100問 · 1年前3. X-Tinction Agenda
3. X-Tinction Agenda
100問 • 1年前4. X-Tinction Agenda
4. X-Tinction Agenda
critical flaw · 90問 · 1年前4. X-Tinction Agenda
4. X-Tinction Agenda
90問 • 1年前Executioner's Song Book 1
Executioner's Song Book 1
critical flaw · 30問 · 1年前Executioner's Song Book 1
Executioner's Song Book 1
30問 • 1年前問題一覧
1
Stream the environmental data to Amazon Kinesis Data Streams, analyze it using an AWS Lambda function, and configure Amazon SNS to send immediate alerts to the management team if anomalies are detected.
2
Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization based on the peak usage., Provision a DynamoDB Accelerator (DAX) cluster with the correct number and type of nodes. Tune the item and query cache configuration for an optimal user experience.
3
Stream the data into an Amazon Kinesis data stream from API Gateway and process the data in batches., Increase the memory available to the Lambda functions.
4
Use an Amazon Aurora global database with the primary in the active Region and the secondary in the failover Region.
5
Use AWS Organizations to create a single organization in the parent account with all features enabled. Then, invite each business unit’s AWS account to join the organization., Create an SCP that allows only approved services and features, then apply the policy to the business unit AWS accounts.
6
AWS Application Discovery Service, AWS Cloud Adoption Readiness Tool (CART), AWS Migration Hub
7
Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
8
Deploy an AWS WAF web ACL that includes a rule group that blocks the attack traffic. Associate the web ACL with the Amazon CloudFront distribution., Create an Amazon CloudFront distribution with the ALB as the origin and configure a custom header and secret value. Configure the ALB to conditionally forward traffic only if the header and value match.
9
Configure the Kinesis Data Firehose delivery stream to partition the data in Amazon S3 by date and event type. Redefine the Athena table to include these partitions and modify the queries to specifically target relevant partitions.
10
Create an authorization to associate the private hosted zone in the management account with the new VPC in the production account., Associate a new VPC in the production account with a hosted zone in the management account. Delete the association authorization in the management account.
11
Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the organization's deny list SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS DMS are complete.
12
Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then expire the documents from Amazon S3 Glacier that are more than 7 years old.
13
Create an Amazon API Gateway HTTP API. Configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables.
14
Use Amazon SQS with FIFO to queue messages in the correct order. Use Reserved Instances in multiple Availability Zones for processing.
15
Configure CloudFront to add a custom header to requests that it sends to the origin., Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB., Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only.
16
Create an Amazon SQS queue and decouple the application and database layers. Configure an AWS Lambda function to write items from the queue into the database.
17
Set up an AWS Client VPN endpoint, associate it with a subnet in the VPC, and configure a Client VPN self-service portal. Instruct the developers to connect using the Client VPN client.
18
Configure Service Control Policies (SCPs) within AWS Control Tower to disallow assigning public IP addresses to EC2 instances across all OUs.
19
Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.
20
Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load different resources based on the User-Agent HTTP header.
21
The IAM roles created for the federated users' or federated groups' trust policy have set the SAML provider as the principal., The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from IdP., The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
22
Create an AWS Budgets alert action to send an Amazon SNS notification when the budgeted amount is reached. Invoke an AWS Lambda function to terminate all services., Use the AWS Budgets service to define a fixed monthly budget for each development account., Create an SCP that denies access to expensive services. Apply the SCP to an OU containing the development accounts.
23
Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company., Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.
24
Migrate the MySQL database server to Amazon RDS for MySQL with a Multi-AZ deployment., Enable API caching on API Gateway to reduce the number of Lambda function invocations., Enable Auto Scaling in DynamoDB.
25
A large number of reads and writes exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
26
Enable a cross-Region read replica for the RDS database. In the case of an outage, promote the replica to be a standalone DB instance. Point applications to the new DB endpoint and create a read replica to maintain high availability.
27
Use Amazon Inspector to run the CVE assessment package on the EC2 instances launched from the approved AMIs., Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.
28
Create a new DX connection to the same Region. Provision a Direct Connect gateway and establish new private VIFs to a virtual private gateway in the VPCs in each Region.
29
Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization.
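A minimal sketch of what such an SCP document might look like (the Sid is illustrative, and the serialized form would be passed to the Organizations CreatePolicy API):

```python
import json

# A minimal SCP that denies purchasing Reserved Instances anywhere in the
# organization. Attaching it to the organization root applies it to every
# member account (SCPs never restrict the management account itself).
deny_ri_purchases_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyReservedInstancePurchases",
            "Effect": "Deny",
            "Action": "ec2:PurchaseReservedInstancesOffering",
            "Resource": "*",
        }
    ],
}

# Serialized form, as it would be passed to
# organizations.create_policy(Content=..., Type="SERVICE_CONTROL_POLICY").
scp_content = json.dumps(deny_ri_purchases_scp)
```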
30
Create an AWS Lambda function that will update user metadata. Create an Amazon SQS queue and configure it as an event source for the Lambda function. Update the web application to send jobs to the queue.
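A sketch of the consuming Lambda function, assuming a hypothetical job format in which the web application enqueues JSON messages of the form `{"user_id": ..., "metadata": {...}}`:

```python
import json

def handler(event, context):
    """Lambda handler invoked by the SQS event source mapping.

    Each SQS record's body is assumed to carry a JSON job describing one
    metadata update; the real update call is left as a placeholder.
    """
    updated = []
    for record in event.get("Records", []):
        job = json.loads(record["body"])
        # Placeholder for the actual user-metadata update logic.
        updated.append(job["user_id"])
    return {"updated": updated}
```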
31
Configure the reserved concurrency limit for the new Lambda function. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
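As a sketch, the two pieces can be expressed as API parameters (function names, the SNS topic ARN, and the concurrency value are placeholders):

```python
# As passed to the Lambda PutFunctionConcurrency API: reserves a slice of the
# account's concurrency pool for the new function, which also caps it.
reserved_concurrency_params = {
    "FunctionName": "new-function",
    "ReservedConcurrentExecutions": 100,
}

# As passed to cloudwatch.put_metric_alarm(): alarm whenever an existing
# critical function records any throttles in a one-minute period.
throttle_alarm_params = {
    "AlarmName": "critical-function-throttles",
    "Namespace": "AWS/Lambda",
    "MetricName": "Throttles",
    "Dimensions": [{"Name": "FunctionName", "Value": "critical-function"}],
    "Statistic": "Sum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
}
```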
32
Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Allow the Discovery Agent to collect data for a period of time.
33
Deploy a hot standby of the application tiers to another Region., Create a cross-Region Aurora MySQL Replica of the database.
34
Create an AWS WAF web ACL with a geo-match rule to block requests from outside the specified country. Associate this rule with the web ACL, and then attach the web ACL to the ALB.
35
Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database using Reserved Instances.
36
Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. Run the web service in both Regions as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record and a latency-based routing policy with health checks to distribute traffic between the two ALBs.
37
Perform multiple copy operations at one time by running each command from a separate terminal window, in separate instances of the Snowball client.
38
Implement an AWS Lambda function that initiates image processing in response to messages in the SQS queue., Configure the mobile app to send image uploads directly to Amazon S3. Configure S3 to trigger an Amazon Simple Queue Service (Amazon SQS) standard queue message upon each upload., Use Amazon Simple Notification Service (Amazon SNS) to send push notifications to the mobile app once the image processing is finished.
39
Connect an RDS Proxy connection pool to the reader endpoint of the Aurora database., Move Lambda function code for opening the database connection outside of the event handler.
40
Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Configure a Lambda function to retrieve messages from the SQS queue and call the StartExecution API.
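One detail of the listing Lambda worth showing: the SQS SendMessageBatch API accepts at most 10 entries per call, so the file list has to be chunked. A small helper, under the assumption that each message body is simply the file key:

```python
def to_sqs_batches(file_keys, batch_size=10):
    """Split file keys into SendMessageBatch entry lists.

    SendMessageBatch accepts at most 10 entries per call, so the listing
    Lambda loops over these batches when enqueuing one message per file.
    """
    batches = []
    for start in range(0, len(file_keys), batch_size):
        chunk = file_keys[start:start + batch_size]
        entries = [
            {"Id": str(start + i), "MessageBody": key}
            for i, key in enumerate(chunk)
        ]
        batches.append(entries)
    return batches
```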
41
The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role's Amazon Resource Name (ARN), including the external ID in the IAM role's trust policy, when requesting access to perform the required tasks.
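A sketch of the two halves of this arrangement, with placeholder account IDs and external ID: the role's trust policy requires the external ID via the `sts:ExternalId` condition key, and the customer supplies the same value when calling AssumeRole:

```python
# Trust policy attached to the company's IAM role: only the customer's
# account may assume it, and only when the agreed external ID is presented.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"sts:ExternalId": "example-external-id"}
            },
        }
    ],
}

# Parameters the customer would pass to sts.assume_role() with the role ARN.
assume_role_params = {
    "RoleArn": "arn:aws:iam::111122223333:role/CustomerAccessRole",
    "RoleSessionName": "customer-session",
    "ExternalId": "example-external-id",
}
```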
42
Modify the existing VPC to include an Amazon-provided IPv6 CIDR block for the VPC and its subnets. For the public subnets, update the route tables to route IPv6 traffic (::/0) to the internet gateway. For the private subnets, update the route tables to route IPv6 traffic (::/0) to an egress-only internet gateway.
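The two route-table updates can be sketched as CreateRoute parameters (route table and gateway IDs are placeholders):

```python
# Public subnets: default IPv6 route (::/0) to the internet gateway.
public_ipv6_route = {
    "RouteTableId": "rtb-public",
    "DestinationIpv6CidrBlock": "::/0",
    "GatewayId": "igw-0123456789abcdef0",
}

# Private subnets: default IPv6 route to an egress-only internet gateway,
# which permits outbound IPv6 while blocking unsolicited inbound connections.
private_ipv6_route = {
    "RouteTableId": "rtb-private",
    "DestinationIpv6CidrBlock": "::/0",
    "EgressOnlyInternetGatewayId": "eigw-0123456789abcdef0",
}
```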
43
Use AWS Systems Manager Patch Manager to deploy patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch compliance reports.
44
Migrate the database to Amazon RDS for MySQL. Configure the RDS instance to use a Multi-AZ deployment., Configure the application to store the user's session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between application instances., Put the application instances in an Amazon EC2 Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes unhealthy.
45
Add a custom 'flag as spam' button to the Contact Control Panel (CCP) in Amazon Connect. This button triggers an AWS Lambda function to update call attributes and log the number in an Amazon DynamoDB table. Adapt the contact flows to reference these attributes and interact with the DynamoDB table for future call filtering.
46
Deploy Amazon EC2 instances in a cluster placement group., Use Amazon EC2 instance types and AMIs that support EFA.
47
Create an AWS Service Catalog portfolio for each team. Add each team's Amazon Redshift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a product., Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.
48
Use AWS DataSync to schedule a daily task that replicates data between the on-premises file share and Amazon FSx.
49
Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.
50
Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
51
Create an AWS Lambda function triggered by Amazon EventBridge to monitor and automatically apply encryption to any newly created or existing unencrypted S3 buckets., Establish an AWS Organizations structure, implement AWS Control Tower, and activate the necessary security guardrails. Consolidate all AWS accounts under this organization and organize them into Organizational Units (OUs) based on their function.
52
A canary deployment
53
Change the PIOPS volume for a 1-TB EBS General Purpose SSD (gp2) volume.
54
Configure AWS Secrets Manager for managing the database credentials, creating separate secrets for the development and production environments. Enable automatic secret rotation. Pass the Secrets Manager secret ARNs to the Lambda functions through environment variables. Assign appropriate IAM roles to the Lambda functions for accessing the secrets.
55
Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
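The setting maps to the S3 PutPublicAccessBlock API; a sketch of the configuration, with the remaining flags shown in their off position purely for context:

```python
# As passed to s3.put_public_access_block(Bucket=...,
# PublicAccessBlockConfiguration=...). Only IgnorePublicAcls is required by
# this answer; with it set, existing and future public ACLs are ignored.
public_access_block = {
    "IgnorePublicAcls": True,
    "BlockPublicAcls": False,
    "BlockPublicPolicy": False,
    "RestrictPublicBuckets": False,
}
```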
56
Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a public subnet to route requests to the internet.
57
Use AWS CodePipeline to create a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeBuild. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
58
Define an outbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers., Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.
59
Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to round_robin.
60
Deploy the AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances using Session Manager restricting access to users with permission to manage the instances.
61
Check for the permissions boundaries set for the IAM user., Check the SCPs set at the organizational units (OUs).
62
Check the security group for the EC2 instances to ensure it allows ingress from the customer office.
63
Migrate the MySQL databases to Amazon RDS for MySQL using AWS DMS., Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
64
Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates., Apply environment, cost center, and application name tags to all resources that accept tags.
65
Use a Network Load Balancer (NLB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB's Elastic IP address., Enable AWS Shield Advanced on all public-facing resources., Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.
66
Run the web and application tiers in both Regions in an active/passive configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources., Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds., Use Amazon DynamoDB with a global table across both Regions so reads and writes can occur in either location.
67
Add a security group rule to the ALB to allow traffic from the AWS managed prefix list for CloudFront only.
68
Configure the S3 bucket to use S3 Transfer Acceleration., Redeploy the application to use Amazon S3 multipart upload.
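A small piece of arithmetic behind the multipart choice: S3 multipart uploads allow at most 10,000 parts, so the part size bounds the object sizes you can handle. A sketch (the 16 MiB default below is an illustrative choice, not an S3 default):

```python
import math

def multipart_part_count(object_size, part_size=16 * 1024 * 1024):
    """Number of parts a multipart upload would use for an object.

    S3 allows at most 10,000 parts per upload, and every part except the
    last must be at least 5 MiB, so the part size must be chosen with the
    largest expected object in mind.
    """
    parts = math.ceil(object_size / part_size)
    if parts > 10_000:
        raise ValueError("part size too small for this object")
    return parts
```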
69
Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
70
Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP.
71
Define the AWS resources using JavaScript or TypeScript. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates from the developers' code and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in CodePipeline.
72
Use Amazon Kinesis Data Streams to collect the inbound data, analyze the data with Kinesis clients, and save the results to an Amazon Redshift cluster using Amazon EMR.
73
Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an issue., Enable Amazon S3 cross-Region replication on the buckets that contain images., Enable DynamoDB global tables to achieve multi-Region table replication.
74
Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data. Implement the refactored API in Amazon API Gateway and enable API caching.
75
Update the CloudWatch Events rule to trigger on Amazon EC2 "Instance Launch Successful" and "Instance Terminate Successful" events for the Auto Scaling group used by the cluster., Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule., Configure a Lambda function to retrieve messages from an Amazon SQS queue. Modify the Lambda function to retrieve a maximum of 10 messages then batch the messages by Amazon Route 53 API call type and submit. Delete the messages from the SQS queue after successful API calls.
76
Replace Amazon EFS with Amazon FSx for Lustre., Enable an Elastic Fabric Adapter (EFA) on a supported EC2 instance type., Ensure the HPC cluster is launched within a single Availability Zone.
77
Convert the Aurora Serverless v1 database to a multi-Region Aurora MySQL database, ensuring continuous data replication across the primary and a secondary Region. Use AWS SAM to script the application deployment in the secondary Region for rapid recovery.
78
Create an alias for new versions of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
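As a sketch, the weighted-alias configuration as it would be passed to the Lambda UpdateAlias API (function name, alias name, and version numbers are placeholders); here the alias keeps version 1 as primary and shifts 10% of invocations to version 2:

```python
# Parameters for the Lambda UpdateAlias API call.
update_alias_params = {
    "FunctionName": "my-function",
    "Name": "live",
    "FunctionVersion": "1",
    "RoutingConfig": {"AdditionalVersionWeights": {"2": 0.1}},
}

# Equivalent AWS CLI invocation:
#   aws lambda update-alias --function-name my-function --name live \
#       --function-version 1 \
#       --routing-config '{"AdditionalVersionWeights": {"2": 0.1}}'
```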
79
Establish an AWS Direct Connect connection from the on-premises data center to AWS., Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer Library to put the data into a Kinesis data stream., Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
80
Configure on-demand capacity mode for the table to enable pay-per-request pricing for read and write requests.
81
Create a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
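A minimal sketch of such a viewer-request function in Python (Lambda@Edge also supports Node.js); normalizing the query string this way collapses equivalent URLs onto a single cache key:

```python
def handler(event, context):
    """CloudFront viewer-request trigger that normalizes the query string.

    Lowercases the query string and sorts its parameters by name so that
    equivalent URLs share one CloudFront cache entry.
    """
    request = event["Records"][0]["cf"]["request"]
    querystring = request.get("querystring", "")
    if querystring:
        params = sorted(querystring.lower().split("&"))
        request["querystring"] = "&".join(params)
    return request
```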
82
Implement an Amazon Kinesis Data Firehose for ingesting sales transactions and process them using AWS Lambda functions before storing in an Amazon RDS instance.
83
Add a behavior to the CloudFront distribution for the path pattern and the origin of the static assets., Add another origin to the CloudFront distribution for the static assets.
84
Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance., Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data.
85
Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
86
Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization., Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the IAM identity provider.
87
Create an Amazon CloudFront distribution in front of the processed images bucket., Replace the EC2 instance with AWS Lambda to run the image processing tasks.
88
Implement the REST API using Amazon API Gateway. Run the business logic in AWS Lambda. Store trader session data in Amazon DynamoDB with on-demand capacity.
89
Use an Amazon S3 static website for the web application. Store uploaded videos in an S3 bucket. Use S3 event notification to publish events to the SQS queue. Process the queue with an AWS Lambda function that calls the Amazon Rekognition API to perform facial analysis.
90
Create an Amazon EventBridge rule with a pattern that looks for AWS CloudTrail events where the API calls involve the root user account. Configure an Amazon SQS queue as a target for the rule., Update the Lambda function to poll the Amazon SQS queue for messages and to return successfully when the ticketing system API has processed the request.
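A sketch of the event pattern, as it might be passed to the EventBridge PutRule API: it matches CloudTrail-delivered API calls whose user identity type is the root user.

```python
import json

# Pattern for events.put_rule(EventPattern=...): matches any API call
# recorded by CloudTrail and made with root-user credentials.
root_api_call_pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

event_pattern = json.dumps(root_api_call_pattern)
```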
91
Create an authorization to associate the private hosted zone in the Management account with the new VPC in the Production account., Associate a new VPC in the Production account with a hosted zone in the Management account. Delete the association authorization in the Management account.
92
Create a DX connection to a second AWS Region. Use DynamoDB global tables to replicate data to the second Region. Modify the application to fail over to the second Region.
93
Create an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeBuild build., Create an AWS CodeBuild project that pulls the latest container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR., Create an Amazon ECR repository for the image. Create an AWS CodeCommit repository containing code for the tool being deployed to the container image in Amazon ECR.
94
Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint and associate the security group with the endpoint.
95
Use Amazon WorkSpaces for providing cloud desktops. Connect it to the on-premises network via VPN, integrate with the on-premises Active Directory using an AD Connector, and set up a RADIUS server to enable MFA.
96
Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security., Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer., Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ.
97
Migrate the CRM system to Amazon EC2 instances., Implement Amazon RDS to host the CRM's database.
98
Configure CodePipeline with a deployment stage using AWS CodeDeploy for blue/green deployments. After deploying the new version, monitor its performance and security, and use CodeDeploy's rollback feature in case of any issues.
99
Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.
100
Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named cloud.myservice.com and assign the NLB DNS name to the record set.