Question List
1
Store the data in Amazon S3 using Apache Hive partitioning, with a key that includes a date. Store the data in Amazon S3 using Apache Parquet or Apache ORC formats.
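As a minimal sketch of how Hive-style partitioning looks on S3 (the bucket, prefix, and file names below are hypothetical), a Parquet object written under year=/month=/day= keys lets engines such as Athena or Glue prune partitions by date:

    import boto3

    s3 = boto3.client("s3")
    # Hive-style partition keys embedded in the object key mean a date filter
    # only scans the matching prefixes instead of the whole dataset.
    key = "sales/year=2024/month=05/day=14/part-0000.snappy.parquet"
    with open("part-0000.snappy.parquet", "rb") as f:  # a locally produced Parquet file
        s3.put_object(Bucket="example-data-lake", Key=key, Body=f)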
2
Create a pipeline in CodePipeline with a deploy stage that uses a blue/green deployment strategy. Monitor the application, and if there are any issues, trigger a manual rollback using CodeDeploy.
3
Create an S3 bucket for the pipeline. Configure S3 caching for the CodeBuild projects that are in the pipeline. Update the build specifications of the CodeBuild projects. Add the data file directory to the cache definition.
4
Use AWS Organizations to create a management account, and create each team's account from the management account. Create a security account for cross-account access. Apply service control policies to each account and grant the security team cross-account access to all accounts. The security team will create IAM policies to provide least-privilege access.
5
Create a SAML-based identity provider in a central account and map IAM roles that provide the necessary permissions for users. Map users in the on-premises IdP groups to IAM roles. Use cross-account access to the other AWS accounts.
6
Migrate the applications to Docker containers on Amazon ECS. Create a separate ECS task and service for each application. Enable service Auto Scaling based on memory utilization and set the threshold to 75%. Monitor services and hosts by using Amazon CloudWatch.
7
Build an API with API Gateway and AWS Lambda, use Amazon S3 to host static web resources, and create an Amazon CloudFront distribution with the S3 bucket as the origin. Use Amazon Cognito to provide user management and authentication functions.
8
Adjust the workload configuration to utilize topology spread constraints based on different Availability Zones.
9
Implement AWS Global Accelerator with a standard accelerator configuration. Associate each regional deployment's ALB with the Global Accelerator and distribute its static IP addresses to customers.
10
Create a cross-Region read replica in us-west-1. Use Amazon EventBridge to trigger an AWS Lambda function that promotes the read replica to primary and updates the DNS endpoint address for the database.
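A rough sketch of such a promotion Lambda, assuming hypothetical replica and hosted zone identifiers (note that waiting for the promotion to finish inside one invocation may exceed the Lambda timeout in practice):

    import boto3

    REPLICA_ID = "app-db-replica"      # hypothetical replica identifier
    ZONE_ID = "Z0EXAMPLE"              # hypothetical private hosted zone ID

    def handler(event, context):
        rds = boto3.client("rds", region_name="us-west-1")
        rds.promote_read_replica(DBInstanceIdentifier=REPLICA_ID)
        rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=REPLICA_ID)
        endpoint = rds.describe_db_instances(DBInstanceIdentifier=REPLICA_ID)[
            "DBInstances"][0]["Endpoint"]["Address"]
        # Repoint the application's DNS name at the newly promoted primary.
        boto3.client("route53").change_resource_record_sets(
            HostedZoneId=ZONE_ID,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {"Name": "db.example.internal",
                                      "Type": "CNAME", "TTL": 60,
                                      "ResourceRecords": [{"Value": endpoint}]},
            }]},
        )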
11
With cross-zone load balancing enabled, the one instance in Availability Zone X receives 20% of the traffic and each of the four instances in Availability Zone Y receives 20% of the traffic. With cross-zone load balancing disabled, the one instance in Availability Zone X receives 50% of the traffic and each of the four instances in Availability Zone Y receives 12.5% of the traffic.
12
Use AWS Organizations to set up a multi-account environment. Organize the accounts into the following organizational units (OUs): Security, Infrastructure, Workloads, Suspended, and Exceptions. Configure an AWS Budgets alert to move an AWS account to the Exceptions OU if the account reaches a predefined budget threshold, and use Service Control Policies (SCPs) to limit or block resource usage in the Exceptions OU. Configure a Suspended OU to hold workload accounts with retired resources, and use SCPs to limit or block resource usage in the Suspended OU. Designate an account within the AWS Organizations organization as the GuardDuty delegated administrator, create an SNS topic in this account, and subscribe the security team to the topic so that it receives alerts from GuardDuty via SNS.
13
You, as the bucket owner, still own any objects that were written to the bucket while the bucket owner enforced setting was applied; these objects are not owned by the object writer, even if you re-enable ACLs. If you used object ACLs for permissions management before you applied the bucket owner enforced setting and you didn't migrate these object ACL permissions to your bucket policy, these permissions are restored after you re-enable ACLs.
14
Create an AWS Organizations organization-wide AWS Config rule that mandates that all resources in the selected OUs be associated with the AWS WAF rules. Configure automated remediation actions by using AWS Systems Manager Automation documents to fix non-compliant resources. Set up the AWS WAF rules by using an AWS CloudFormation stack set targeting the same OUs where the AWS Config rule is applied.
15
Make sure that all AWS accounts are assigned to organizational units (OUs) within an AWS Organizations structure operating in all-features mode. Set up a Service Control Policy (SCP) that contains a deny rule for the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions, and attach the SCP to each organizational unit (OU) of the AWS Organizations structure.
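A minimal boto3 sketch of creating and attaching such an SCP (the policy name and OU ID are hypothetical):

    import json
    import boto3

    org = boto3.client("organizations")
    scp = org.create_policy(
        Name="DenyRIPurchases",  # hypothetical policy name
        Description="Block Reserved Instance purchases and modifications",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Deny",
                "Action": ["ec2:PurchaseReservedInstancesOffering",
                           "ec2:ModifyReservedInstances"],
                "Resource": "*",
            }],
        }),
    )
    # Attach the SCP to each OU (the OU ID below is a placeholder).
    org.attach_policy(PolicyId=scp["Policy"]["PolicySummary"]["Id"],
                      TargetId="ou-root-example1")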
16
Inspect the VPC Flow Logs using the CloudWatch console and select the log group that contains the NAT gateway's ENI and the EC2 instance's ENI. Use a query filter with the destination address set to match 205.1 and the source address set to match 198.21.200.1. Run the stats command to sum the bytes transferred, grouped by source address and destination address.
17
Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin. Dynamic content, as determined at request time (a cache behavior configured to forward all headers), is also not cached.
18
Ingest the sensor data into an Amazon SQS standard queue, which is polled in batches by a Lambda function that writes the data into an auto-scaled DynamoDB table for downstream processing.
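A sketch of the Lambda handler for the SQS-to-DynamoDB step, assuming a hypothetical SensorReadings table and JSON message bodies keyed by a unique attribute:

    import json
    import boto3

    table = boto3.resource("dynamodb").Table("SensorReadings")  # hypothetical table

    def handler(event, context):
        # Lambda's SQS event source delivers records in batches; each body is
        # assumed to be one JSON sensor reading.
        with table.batch_writer() as batch:
            for record in event["Records"]:
                batch.put_item(Item=json.loads(record["body"]))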
19
Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment
20
Create a gateway endpoint for Amazon S3 in the data lake VPC and attach an endpoint policy that allows access to the S3 buckets only via the access points, specifying the route table that is used to access the buckets. In the AWS account that owns the S3 buckets, create an S3 access point for each bucket that the applications must use to access the data, and set up all applications in a single data lake VPC. Add a bucket policy on the buckets to deny access from applications outside the data lake VPC.
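A sketch of the gateway endpoint with an endpoint policy limiting S3 access to access points (all IDs, the Region, and the access point ARN pattern are hypothetical):

    import json
    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0datalake",                         # hypothetical VPC ID
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0example"],                # route table used to reach S3
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": "*",
                # Only allow requests that arrive through one of the account's
                # S3 access points.
                "Condition": {"StringLike": {
                    "s3:DataAccessPointArn":
                        "arn:aws:s3:us-east-1:111122223333:accesspoint/*"
                }},
            }],
        }),
    )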
21
A Kinesis Agent cannot write to a Kinesis Data Firehose delivery stream whose source is already set to Kinesis Data Streams.
22
Use AWS DataSync to automate and accelerate online data transfers to the given AWS storage services
23
Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for Lambda, so the team needs to contact AWS Support to raise the account limit.
24
SCPs do not affect service-linked roles. If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can't perform that action. SCPs affect all users and roles in attached accounts, including the root user.
25
Set up a lifecycle policy to transition the raw zone data into S3 Glacier Deep Archive 1 day after object creation. Use a Glue ETL job to write the transformed data into the curated zone using a compressed file format.
26
To use private hosted zones, DNS hostnames and DNS resolution should be enabled for the VPC
27
Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket. Use multipart uploads for faster file uploads into the destination S3 bucket.
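A brief sketch combining both answers with boto3, assuming the bucket already has Transfer Acceleration enabled and using hypothetical bucket and file names:

    import boto3
    from boto3.s3.transfer import TransferConfig
    from botocore.config import Config

    # Route requests through the S3 accelerate endpoint.
    s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3.upload_file(
        "backup.tar.gz", "example-dest-bucket", "uploads/backup.tar.gz",
        Config=TransferConfig(
            multipart_threshold=8 * 1024 * 1024,  # use multipart for files > 8 MiB
            max_concurrency=10,                   # upload parts in parallel
        ),
    )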
28
Use Redshift Spectrum to create external tables in the Redshift cluster pointing to the underlying historical data in S3. The analytics team can then query this historical data to cross-reference it with the daily reports from Redshift.
29
To upload video files to the Amazon S3 bucket, leverage the multipart upload feature. Configure the application to use S3 Transfer Acceleration endpoints to improve upload performance and also to optimize the multipart uploads.
30
Create a private virtual interface on a Direct Connect connection in us-east-1. Set up an interface VPC endpoint for Amazon S3 and configure the on-premises systems to access S3 via this endpoint.
31
Configure the applications behind private Network Load Balancers (NLBs) in separate VPCs. Set up each NLB as an AWS PrivateLink endpoint service with associated VPC endpoints in the centralized VPC. Set up a public Application Load Balancer (ALB) in the centralized VPC and point the target groups to the private IP addresses of each endpoint. Set up host-based routing to route application traffic to the corresponding target group through the ALB
32
During SAML-based federation, pass an attribute for DevelopmentDept as an AWS Security Token Service (AWS STS) session tag. The policy of the assumed IAM role used by the developers should be updated with a deny action and a StringNotEquals condition comparing the DevelopmentDept resource tag with aws:PrincipalTag/DevelopmentDept.
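The deny statement could look roughly like this ABAC sketch (only the tag name comes from the scenario; the surrounding policy and role wiring are assumed):

    # Deny everything unless the resource's DevelopmentDept tag matches the
    # developer's session tag of the same name.
    deny_statement = {
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:ResourceTag/DevelopmentDept":
                    "${aws:PrincipalTag/DevelopmentDept}"
            }
        },
    }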
33
If you're creating failover records in a private hosted zone, you must assign a public IP address to an instance in the VPC in order to check the health of an endpoint within the VPC by IP address. Records without a health check are always considered healthy, and if no record is healthy, all records are deemed healthy.
34
In the centralized account, configure an IAM role that has the Lambda service as a trusted entity, and add an inline policy to assume the roles of the other AWS accounts. In the other AWS accounts, configure an IAM role that has minimal permissions, and add the Lambda execution role of the centralized account as a trusted entity.
35
Configure a Route 53 Resolver inbound endpoint in the EFS-specific VPC. Create a Route 53 private hosted zone and add a new CNAME record with the value of the EFS DNS name. Configure forwarding rules on the on-premises DNS servers to forward queries for the custom domain host to the Route 53 private hosted zone.
36
If a user has an IAM policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user cannot perform that action. The specified actions from an attached SCP affect all IAM identities, including the root user of the member account.
37
Discard the existing subnet in VPC B and create two new subnets, 192.168.2.0/28 and 192.168.2.16/28, in VPC B. Move b-1 to subnet 192.168.2.0/28 and b-2 to subnet 192.168.2.16/28 by launching a new instance in the new subnet from an AMI created from the old instance. Create two route tables in VPC B: one with a route for destination VPC A and another with a route for destination VPC C.
38
Decouple the RDS DB instance from the Beanstalk environment (environment A) and leverage an Elastic Beanstalk blue (environment A)/green (environment B) deployment to connect to the decoupled database after the upgrade.
39
Enable AWS Organizations and attach the AWS accounts of all business units to it. Create a Service Control Policy to deny access to the Non-Core Regions and attach the policy to the root OU
40
Create a CloudFormation template describing the application infrastructure in the Resources section. Use a CloudFormation stack set from an administrator account to launch stack instances that deploy the application to various other Regions.
41
Storage Gateway doesn't automatically update the cache when you upload a file directly to Amazon S3. Perform a RefreshCache operation to see the changes on the file share
42
Use Amazon S3 Intelligent-Tiering storage class to store the video files. Configure this S3 bucket as the origin of an Amazon CloudFront distribution for delivering the contents to the customers
43
Suspend the Auto Scaling group's Terminate process. Use Session Manager to log in to an instance that is marked as unhealthy and analyze the system logs to figure out the root cause
44
Use AWS Web Application Firewall (WAF) as the first line of defense to protect the API Gateway APIs against malicious exploits and DDoS attacks. Install the Amazon Inspector agent on the EC2 instances to check for vulnerabilities. Configure Amazon GuardDuty to monitor for any malicious attempts to access the APIs illegally.
45
kms:GenerateDataKey
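For context, kms:GenerateDataKey is the permission behind envelope-encryption calls like the following sketch (the key alias is hypothetical):

    import boto3

    kms = boto3.client("kms")
    # GenerateDataKey returns a plaintext data key for local encryption plus a
    # KMS-encrypted copy to store alongside the ciphertext.
    resp = kms.generate_data_key(KeyId="alias/example-app-key", KeySpec="AES_256")
    plaintext_key = resp["Plaintext"]       # use for local AES encryption, then discard
    encrypted_key = resp["CiphertextBlob"]  # persist with the data; recover via kms.decrypt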
46
Use AWS X-Ray to analyze the microservices applications through request tracing. Configure Amazon CloudWatch for monitoring containers, latency, web server requests, and incoming load-balancer requests and create CloudWatch alarms to send out notifications if system latency is increasing
47
Set up new Amazon DynamoDB tables for the application with on-demand capacity. Use a gateway VPC endpoint for DynamoDB so that the application can have a private and encrypted connection to the DynamoDB tables
48
Configure Amazon S3 for hosting the web application while using AWS AppSync for database access services. Use Amazon Simple Queue Service (Amazon SQS) for queuing orders and AWS Lambda for business logic. Use an Amazon SQS dead-letter queue for tracking and re-processing failed orders.
49
Objects can't be encrypted by AWS Key Management Service (AWS KMS). The AWS account that owns the bucket must also own the object.
50
Create the cluster with the auth-token parameter and make sure that the parameter is included in all subsequent commands to the cluster. Configure the security group for the ElastiCache cluster with the required rules to allow inbound traffic from the cluster itself as well as from the cluster's clients on port 6379. Configure the ElastiCache cluster to have both in-transit and at-rest encryption.
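A boto3 sketch of a cluster creation that satisfies these answers (group name, token, and node type are hypothetical placeholders):

    import boto3

    elasticache = boto3.client("elasticache")
    elasticache.create_replication_group(
        ReplicationGroupId="example-redis",
        ReplicationGroupDescription="Redis with AUTH and encryption",
        Engine="redis",
        CacheNodeType="cache.t3.micro",
        AuthToken="example-strong-token-1234567890",  # required on every client connection
        TransitEncryptionEnabled=True,   # AUTH tokens require in-transit encryption
        AtRestEncryptionEnabled=True,
    )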
51
Create a snapshot copy grant in the destination Region for a KMS key in the destination Region. Configure Redshift cross-Region snapshots in the source Region
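A minimal sketch of the two calls involved, with hypothetical names, Regions, and key ARN:

    import boto3

    # The grant, created in the destination Region, lets Redshift encrypt
    # copied snapshots with a destination-Region KMS key.
    boto3.client("redshift", region_name="us-west-2").create_snapshot_copy_grant(
        SnapshotCopyGrantName="example-grant",
        KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/example",
    )
    # Then enable cross-Region snapshot copy from the source Region.
    boto3.client("redshift", region_name="us-east-1").enable_snapshot_copy(
        ClusterIdentifier="example-cluster",
        DestinationRegion="us-west-2",
        SnapshotCopyGrantName="example-grant",
    )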
52
Set up a VPC peering connection between the two VPCs and add a route to the routing table of VPC X that points to the IP address range of 172.30.0.0/16. Set up a VPC peering connection between the two VPCs and add a route to the routing table of VPC Y that points to the IP address range of 172.20.0.0/16.
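A boto3 sketch of the peering setup, assuming VPC X is 172.20.0.0/16 and VPC Y is 172.30.0.0/16 (all IDs are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")
    pcx = ec2.create_vpc_peering_connection(VpcId="vpc-x", PeerVpcId="vpc-y")
    pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)
    # Route each VPC's traffic for the peer's CIDR over the peering connection.
    ec2.create_route(RouteTableId="rtb-vpc-x", DestinationCidrBlock="172.30.0.0/16",
                     VpcPeeringConnectionId=pcx_id)
    ec2.create_route(RouteTableId="rtb-vpc-y", DestinationCidrBlock="172.20.0.0/16",
                     VpcPeeringConnectionId=pcx_id)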
53
Use CloudFront signed URLs to restrict access to the application installation file. Use CloudFront signed cookies to restrict access to all the files in the members' area of the website.
54
Apply patch baselines using the AWS-RunPatchBaseline SSM document. Set up the Systems Manager Agent on all instances to manage patching. Test patches in pre-production and then deploy them as a maintenance window task with the appropriate approval.
55
Set up separate Lambda functions to provision and terminate the Elastic Beanstalk environment. Configure a Lambda execution role granting the required Elastic Beanstalk environment permissions and assign the role to the Lambda functions. Configure cron-expression-based Amazon EventBridge rules to trigger the Lambda functions.
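A sketch of one of the scheduling rules (rule name, schedule, and function ARN are hypothetical; the Lambda function also needs a resource-based permission for events.amazonaws.com, omitted here):

    import boto3

    events = boto3.client("events")
    # One rule starts the environment each weekday morning; a mirror rule
    # would invoke the terminate function each evening.
    events.put_rule(
        Name="start-beanstalk-env",
        ScheduleExpression="cron(0 8 ? * MON-FRI *)",  # 08:00 UTC, weekdays
        State="ENABLED",
    )
    events.put_targets(
        Rule="start-beanstalk-env",
        Targets=[{"Id": "provision-fn",
                  "Arn": "arn:aws:lambda:us-east-1:111122223333:function:provision-env"}],
    )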
56
By default, Lambda functions operate from an AWS-owned VPC and hence have access to any public internet address or public AWS API; once a Lambda function is VPC-enabled, it needs a route through a NAT gateway in a public subnet to access public resources. Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold. If you intend to reuse code in more than one Lambda function, consider creating a Lambda layer for the reusable code.
57
Use AWS Storage Gateway's Volume Gateway in cached volume mode to store the most frequently accessed results locally for low-latency access, while storing the full volume with all results in its Amazon S3 service bucket.
58
The instances launched by both Launch Configuration LC-A and Launch Configuration LC-B will have dedicated instance tenancy
59
Configure a Lambda function as one of the SNS topic subscribers; it is invoked to secure the objects in the S3 bucket. Enable object-level logging for S3, set up an EventBridge event pattern that matches when a PutObject API call with public-read permission is detected in the AWS CloudTrail logs, and set the target as an SNS topic for downstream notifications.
60
Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations
61
Develop the leaderboard using ElastiCache for Redis, as it meets the in-memory, high-availability, low-latency requirements. Develop the leaderboard using DynamoDB with DynamoDB Accelerator (DAX), as it meets the in-memory, high-availability, low-latency requirements.
62
Set up a CloudFormation stack set for Redshift cluster creation so it can be launched in another Region and configure Amazon Redshift to automatically copy snapshots for the cluster to the other AWS Region. In case of a disaster, restore the cluster in the other AWS Region from that Region's snapshot
63
API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server
64
Use AWS DataSync to migrate existing data to Amazon S3 and then use File Gateway for low latency access to the migrated data for ongoing updates from the on-premises applications
65
Use an Amazon SQS FIFO queue in batch mode with 8 messages per operation to process the messages at the peak rate.
66
Use Amazon Route 53 to distribute traffic. Move the static content to Amazon S3 and front it with an Amazon CloudFront distribution. Configure another layer of protection by adding AWS Web Application Firewall (AWS WAF) to the CloudFront distribution.
67
Use EFS as the data tier of the storage layer. Use EC2 Instance Store as the service tier of the storage layer.
68
Use a WAF IP set statement that specifies the IP addresses that you want to allow through. Use a WAF geo match statement listing the countries that you want to block.
69
After a Route 53 health checker receives the HTTP status code, it must receive the response body from the endpoint within the next two seconds with the SearchString string that you specified; the string must appear entirely in the first 5,120 bytes of the response body or the endpoint fails the health check. HTTPS health checks don't validate SSL/TLS certificates, so checks don't fail if a certificate is invalid or expired. If you configure Route 53 to use the HTTPS protocol to check the health of your endpoint, that endpoint must support TLS.
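A sketch of a string-matching HTTPS health check in boto3 (domain, path, and search string are hypothetical):

    import uuid
    import boto3

    route53 = boto3.client("route53")
    route53.create_health_check(
        CallerReference=str(uuid.uuid4()),  # idempotency token
        HealthCheckConfig={
            "Type": "HTTPS_STR_MATCH",      # endpoint must support TLS; cert not validated
            "FullyQualifiedDomainName": "app.example.com",
            "Port": 443,
            "ResourcePath": "/health",
            "SearchString": "OK",           # must appear in the first 5,120 bytes
        },
    )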
70
Amazon Inspector, Amazon SNS
71
Use AWS Elemental MediaConvert for file-based video processing and Amazon CloudFront for delivery. Use video streaming protocols like Apple’s HTTP Live Streaming (HLS) and create a manifest file. Point the CloudFront distribution at the manifest
72
Use a custom routing accelerator of Global Accelerator to deterministically route one or more users to a specific instance using VPC subnet endpoints.
73
Create a VPC gateway endpoint and create the file gateway using this VPC endpoint. Create a VPC interface endpoint and create the file gateway using this VPC endpoint.
74
Store the data in Amazon S3 in a columnar format such as Apache Parquet. Partition the data in Amazon S3 using Apache Hive partitioning, with a date column as the partition key.
75
Configure a public virtual interface on the Direct Connect connection. Create an AWS Site-to-Site VPN between the customer gateway and the virtual private gateway in the VPC
76
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:Describe*", "Resource":" *" }, { "Effect": "Deny", "Action": "s3:*", "Resource": "*" } ] }
77
Create a new private subnet in the same VPC as the Amazon RDS DB instance. Create a new security group with necessary inbound rules for QuickSight in the same VPC. Sign in to QuickSight as a QuickSight admin and create a new QuickSight VPC connection. Create a new dataset from the RDS DB instance
78
Configure Amazon Kinesis Data Firehose to stream data to Amazon Redshift. Create a business intelligence dashboard by using Amazon QuickSight that has Amazon Redshift as a data source
79
Configure CloudFront to use a custom header and configure an AWS WAF rule on the origin’s Application Load Balancer to accept only traffic that contains that header
80
Use host conditions in the ALB listener to route *.ecomm.com to the appropriate target groups. Use host conditions in the ALB listener to route ecomm.com to the appropriate target groups.
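A sketch of one such host-header rule in boto3 (both ARNs are hypothetical placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                    "listener/app/example/123/456",
        Priority=10,
        # Forward any subdomain of ecomm.com to the matching target group; a
        # second rule would handle the bare ecomm.com host.
        Conditions=[{"Field": "host-header",
                     "HostHeaderConfig": {"Values": ["*.ecomm.com"]}}],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                                    "111122223333:targetgroup/subdomains/789"}],
    )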
81
Update the Security Groups for the application servers to only allow incoming traffic on port 80 from the ELB
82
Enable CloudTrail log file integrity validation. Use Amazon S3 MFA Delete on the S3 bucket that holds the CloudTrail logs and digest files.
83
The aws:PrincipalOrgID global condition key can be used with the Principal element in a resource-based policy with AWS KMS. You need to specify the Organization ID in the Condition element
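A minimal sketch of such a KMS key policy statement (the organization ID and action list are hypothetical):

    # Allow any principal, but only if it belongs to the organization.
    org_statement = {
        "Sid": "AllowOrgPrincipals",
        "Effect": "Allow",
        "Principal": "*",  # narrowed by the condition below
        "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
        "Resource": "*",
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
    }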
84
Update the network ACL associated with the subnet to allow outbound traffic
85
Create a new RDS read replica from your Multi-AZ primary database and generate reports by querying the read replica.
86
Send score updates to Kinesis Data Streams, use a Lambda function to process these updates, and then store the processed updates in DynamoDB.
87
Each KCL application must use its own DynamoDB table. You can only use DynamoDB for checkpointing in the KCL.
88
Create a new Amazon S3 bucket to be used for replication. Create a new S3 Replication Time Control (S3 RTC) rule on the source S3 bucket that filters data based on the prefix (high-value claim type) and replicates it to the new S3 bucket. Leverage an Amazon S3 event notification to trigger a notification when the time to copy the claim data exceeds the desired threshold
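A sketch of the S3 RTC rule in boto3, with hypothetical bucket names, role ARN, and prefix:

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_replication(
        Bucket="claims-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/replication-role",
            "Rules": [{
                "ID": "high-value-claims",
                "Status": "Enabled",
                "Priority": 1,
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Filter": {"Prefix": "high-value/"},  # replicate only this claim type
                "Destination": {
                    "Bucket": "arn:aws:s3:::claims-replica-bucket",
                    # S3 RTC: replicate within 15 minutes and emit metrics/events
                    # that can drive the threshold notification.
                    "ReplicationTime": {"Status": "Enabled",
                                        "Time": {"Minutes": 15}},
                    "Metrics": {"Status": "Enabled",
                                "EventThreshold": {"Minutes": 15}},
                },
            }],
        },
    )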
89
Set up an AWS Web Application Firewall (WAF) web ACL. Create a rule to deny any requests that do not originate from the specified country. Attach the rule to the web ACL, and attach the web ACL to the ALB.
90
Configure traffic mirroring on the source EC2 instances hosting the VoIP program, set up a network monitoring program on a target EC2 instance, and stream the logs to an S3 bucket for further analysis.