Question List
1
Use Amazon CloudFront distribution in front of the Application Load Balancer, Use Amazon Aurora Replica
2
Amazon API Gateway, Amazon Simple Queue Service (Amazon SQS) and Amazon Kinesis
3
Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 4 messages per operation to process the messages at the peak rate
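A minimal boto3 sketch of the batching idea above: one SendMessageBatch call carries 4 messages to a FIFO queue, so each API operation moves 4 messages instead of 1. The queue URL, message group, and payloads are hypothetical.

    # Hypothetical sketch: send a batch of 4 messages to an SQS FIFO queue with boto3.
    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # hypothetical queue

    entries = [
        {
            "Id": str(i),                            # unique within the batch
            "MessageBody": json.dumps({"order": i}),
            "MessageGroupId": "orders",              # messages in one group are processed in order
            "MessageDeduplicationId": f"order-{i}",  # or enable content-based deduplication on the queue
        }
        for i in range(4)
    ]

    # One API operation delivers all 4 messages.
    response = sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)
    print(response.get("Successful", []))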
4
Path-based Routing
5
Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system
6
Multi-AZ follows synchronous replication and spans at least two Availability Zones (AZs) within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone (AZ), Cross-AZ, or Cross-Region
7
Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for AWS Lambda, so the team needs to contact AWS support to raise the account limit
8
Use an Application Load Balancer for distributing traffic to the Amazon EC2 instances spread across different Availability Zones (AZs). Configure Auto Scaling group to mask any failure of an instance
9
Put the instance into the Standby state and then update the instance by applying the maintenance patch. Once the instance is ready, you can exit the Standby state and then return the instance to service, Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can manually set the instance's health status back to healthy and activate the ReplaceUnhealthy process type again
10
Enable multi-factor authentication (MFA) delete on the Amazon S3 bucket, Enable versioning on the Amazon S3 bucket
11
Tier-1 (32 terabytes)
12
Configure your Auto Scaling group by creating a scheduled action that kicks off at the designated hour on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour
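A minimal sketch of creating that scheduled action with boto3; the Auto Scaling group name and the timestamp are placeholders, not values from the question.

    # Hypothetical sketch: a one-off scheduled action that raises desired capacity to 10
    # just before the month-end peak.
    from datetime import datetime, timezone
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="reporting-asg",                          # hypothetical group
        ScheduledActionName="month-end-scale-out",
        StartTime=datetime(2024, 1, 31, 18, 0, tzinfo=timezone.utc),   # designated hour, last day of month
        MinSize=10,
        MaxSize=10,
        DesiredCapacity=10,
    )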
13
Create an Amazon CloudWatch metric filter that processes AWS CloudTrail logs having API call details and looks at any errors by factoring in all the error codes that need to be tracked. Create an alarm based on this metric's rate to send an Amazon SNS notification to the required team
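A sketch of the two pieces described above: a metric filter over the CloudTrail log group and an alarm that notifies an SNS topic. The log group name, filter pattern, threshold, and topic ARN are assumptions for illustration.

    # Hypothetical sketch: metric filter on CloudTrail logs plus an SNS-notifying alarm.
    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    logs.put_metric_filter(
        logGroupName="CloudTrail/management-events",   # hypothetical log group
        filterName="api-error-codes",
        filterPattern='{ ($.errorCode = "AccessDenied") || ($.errorCode = "*UnauthorizedOperation") }',
        metricTransformations=[{
            "metricName": "TrackedApiErrors",
            "metricNamespace": "Custom/CloudTrail",
            "metricValue": "1",
        }],
    )

    cloudwatch.put_metric_alarm(
        AlarmName="tracked-api-errors",
        Namespace="Custom/CloudTrail",
        MetricName="TrackedApiErrors",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=5,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
    )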
14
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS) standard queue, which is polled by an AWS Lambda function in batches and the data is written into an auto-scaled Amazon DynamoDB table for downstream processing
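A sketch of the Lambda side of that pipeline: the function receives a batch of SQS records and writes each reading to DynamoDB. The table name and payload shape are assumptions.

    # Hypothetical sketch: Lambda handler for the SQS-to-DynamoDB path.
    import json
    import os
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(os.environ.get("TABLE_NAME", "SensorReadings"))  # hypothetical table

    def handler(event, context):
        # Each invocation receives a batch of SQS records (batch size is set on the event source mapping).
        with table.batch_writer() as batch:
            for record in event["Records"]:
                reading = json.loads(record["body"])
                batch.put_item(Item={
                    "sensor_id": reading["sensor_id"],
                    "timestamp": reading["timestamp"],
                    "value": reading["value"],
                })
        return {"processed": len(event["Records"])}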
15
Partition placement group
16
The instance may be in Impaired status, The instance has failed the Elastic Load Balancing (ELB) health check, The health check grace period for the instance has not expired
17
Amazon DynamoDB, AWS Lambda
18
Enable storage auto-scaling for Amazon RDS MySQL
19
Set up a read replica and modify the application to use the appropriate endpoint
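A sketch of creating the read replica and looking up the separate endpoint that read-only traffic would use; instance identifiers are placeholders.

    # Hypothetical sketch: create an RDS read replica and read its endpoint.
    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="reports-replica",    # hypothetical replica name
        SourceDBInstanceIdentifier="prod-mysql",   # hypothetical source instance
    )

    # Wait until the replica is available, then point reporting queries at its own endpoint
    # instead of the primary's endpoint.
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="reports-replica")
    replica = rds.describe_db_instances(DBInstanceIdentifier="reports-replica")
    print(replica["DBInstances"][0]["Endpoint"]["Address"])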
20
Set up an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3
21
Amazon CloudWatch, Amazon Simple Notification Service (Amazon SNS)
22
Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment
23
Leverage Amazon Aurora MySQL with Multi-AZ Aurora Replicas and create the dev database by restoring from the automated backups of Amazon Aurora
24
Write a one-time job to copy the videos from all Amazon EBS volumes to Amazon S3 and then modify the application to use Amazon S3 Standard for storing the videos, Mount Amazon Elastic File System (Amazon EFS) on all Amazon EC2 instances. Write a one-time job to copy the videos from all Amazon EBS volumes to Amazon EFS. Modify the application to use Amazon EFS for storing the videos
25
Remove the member account from the old organization. Send an invite to the member account from the new organization. Accept the invite to the new organization from the member account
26
Deploy the web-tier Amazon EC2 instances in two Availability Zones (AZs), behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in Multi-AZ configuration
27
A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata, If your instance has a public IPv4 address, it retains the public IPv4 address after recovery
28
Set up an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data
29
Use Amazon Kinesis Data Streams to ingest the data, process it using AWS Lambda or run analytics using Amazon Kinesis Data Analytics
30
Amazon Aurora Serverless
31
Configure an Amazon CloudWatch alarm that triggers the recovery of the Amazon EC2 instance, in case the instance fails. The instance, however, should only be configured with an Amazon EBS volume
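A sketch of such an alarm: it watches the system status check and uses the EC2 recover action, which only applies to EBS-backed instances as the answer notes. The instance ID and region are placeholders.

    # Hypothetical sketch: alarm on StatusCheckFailed_System with the EC2 recover action.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="recover-app-instance",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        # Recover action works only for instances backed by Amazon EBS volumes.
        AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
    )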
32
Amazon FSx for Windows File Server, File Gateway Configuration of AWS Storage Gateway
33
Internet Gateway (I1)
34
Use Amazon DynamoDB point in time recovery to restore the table to the state just before corrupted data was written
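A sketch of a point-in-time restore call; the table names and timestamp are placeholders, and the restore always lands in a new table.

    # Hypothetical sketch: restore the table to just before the corrupted writes.
    from datetime import datetime, timezone
    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.restore_table_to_point_in_time(
        SourceTableName="orders",            # hypothetical source table
        TargetTableName="orders-restored",   # PITR always restores into a new table
        RestoreDateTime=datetime(2024, 1, 15, 9, 55, tzinfo=timezone.utc),
    )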
35
With cross-zone load balancing enabled, one instance in Availability Zone A receives 20% traffic and four instances in Availability Zone B receive 20% traffic each. With cross-zone load balancing disabled, one instance in Availability Zone A receives 50% traffic and four instances in Availability Zone B receive 12.5% traffic each
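A quick arithmetic check of the percentages quoted above (1 instance in AZ A, 4 instances in AZ B):

    instances_a, instances_b = 1, 4

    # Cross-zone enabled: traffic is spread evenly across all registered instances.
    per_instance_enabled = 100 / (instances_a + instances_b)   # 20.0% each

    # Cross-zone disabled: each AZ gets 50% of the traffic, split among its own instances.
    per_instance_a_disabled = 50 / instances_a                 # 50.0%
    per_instance_b_disabled = 50 / instances_b                 # 12.5%

    print(per_instance_enabled, per_instance_a_disabled, per_instance_b_disabled)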
36
Provision Amazon Aurora Global Database
37
Amazon Kinesis Data Streams
38
Use Amazon FSx for Windows File Server as a shared storage solution
39
Make sure that the name of the FIFO (First-In-First-Out) queue ends with the .fifo suffix, Make sure that the throughput for the target FIFO (First-In-First-Out) queue does not exceed 3,000 messages per second, Delete the existing standard queue and recreate it as a FIFO (First-In-First-Out) queue
40
Amazon Kinesis Data Streams
41
Configure an Amazon Simple Queue Service (Amazon SQS) queue to decouple microservices running faster processes from the microservices running slower ones
42
Use Amazon Kinesis Data Streams to process the data streams as well as decouple the producers and consumers for the real-time data processor
43
Use delay queues to postpone the delivery of new messages to the queue for a few seconds
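A sketch of creating a delay queue with boto3; every message sent to it stays invisible to consumers for the configured number of seconds. The queue name and delay are placeholders.

    # Hypothetical sketch: create an SQS delay queue.
    import boto3

    sqs = boto3.client("sqs")

    sqs.create_queue(
        QueueName="order-events-delayed",     # hypothetical queue
        Attributes={"DelaySeconds": "10"},    # new messages are hidden for 10 seconds
    )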
44
Use AWS Database Migration Service (AWS DMS) to replicate the data from the databases into Amazon Redshift
45
Instance Store
46
Amazon Kinesis with Amazon Simple Notification Service (Amazon SNS)
47
Use a VPC peering connection
48
Amazon DynamoDB
49
Use Amazon Aurora Read Replicas
50
Migrate the data to Amazon RDS for SQL Server database in a Multi-AZ deployment
51
{ "Action": [ "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::example-bucket/*" ], "Effect": "Allow" }
52
VPC Flow Logs, Domain Name System (DNS) logs, AWS CloudTrail events
53
Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
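A sketch of assuming such a role from the other account and using the temporary credentials it returns; the role ARN is a placeholder.

    # Hypothetical sketch: assume the production role via STS and use the temporary credentials.
    import boto3

    sts = boto3.client("sts")

    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/ProdAccess",   # hypothetical production role
        RoleSessionName="dev-user-session",
    )["Credentials"]

    # Credentials are temporary and scoped to the role's permissions in the production account.
    s3_prod = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(s3_prod.list_buckets()["Buckets"])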
54
Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin, Dynamic content, as determined at request time (cache-behavior configured to forward all headers)
55
As the AWS KMS key was deleted a day ago, it must be in the 'pending deletion' status and hence you can just cancel the KMS key deletion and recover the key
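A sketch of the recovery steps: cancelling the pending deletion brings the key back in the Disabled state, so it also has to be re-enabled. The key ID is a placeholder.

    # Hypothetical sketch: cancel the scheduled deletion and re-enable the KMS key.
    import boto3

    kms = boto3.client("kms")
    KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"   # hypothetical key ID

    kms.cancel_key_deletion(KeyId=KEY_ID)   # only possible while the key is in 'Pending deletion'
    kms.enable_key(KeyId=KEY_ID)            # cancelled keys come back in the Disabled state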
56
Use server-side encryption with AWS Key Management Service keys (SSE-KMS) to encrypt the user data on Amazon S3
57
Use geo restriction to prevent users in specific geographic locations from accessing content that you're distributing through an Amazon CloudFront web distribution, Use Amazon Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights
58
Configure AWS Web Application Firewall (AWS WAF) on the Application Load Balancer in an Amazon Virtual Private Cloud (Amazon VPC)
59
When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version, Different versions of a single object can have different retention modes and periods
60
Use permissions boundary to control the maximum permissions employees can grant to the IAM principals
61
Use Amazon GuardDuty to monitor any malicious activity on data stored in Amazon S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on Amazon EC2 instances
62
Enable Multi-Factor Authentication (MFA) for the AWS account root user, Create a strong password for the AWS account root user
63
Enable AWS Multi-Factor Authentication (AWS MFA) for privileged users, Configure AWS CloudTrail to log all AWS Identity and Access Management (AWS IAM) actions
64
Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for the users and games_played tables
65
Disable the service in the general settings
66
For security group B: Add an inbound rule that allows traffic only from security group A on port 1433. For security group A: Add an inbound rule that allows traffic from all sources on port 443, and add an outbound rule with the destination as security group B on port 1433
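A sketch of those rules expressed with boto3; the security group IDs are placeholders. (Note that a newly created security group already allows all outbound traffic, so the default egress rule would also need to be revoked to make the outbound restriction effective.)

    # Hypothetical sketch: the two rule sets described above.
    import boto3

    ec2 = boto3.client("ec2")
    SG_A = "sg-0aaa1111bbbb2222c"   # web tier (hypothetical)
    SG_B = "sg-0ddd3333eeee4444f"   # SQL Server tier (hypothetical)

    # Security group A: HTTPS in from anywhere, SQL Server out only to security group B.
    ec2.authorize_security_group_ingress(
        GroupId=SG_A,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )
    ec2.authorize_security_group_egress(
        GroupId=SG_A,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
                        "UserIdGroupPairs": [{"GroupId": SG_B}]}],
    )

    # Security group B: SQL Server in only from security group A.
    ec2.authorize_security_group_ingress(
        GroupId=SG_B,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
                        "UserIdGroupPairs": [{"GroupId": SG_A}]}],
    )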
67
AWS Secrets Manager
68
For each developer, define an IAM permissions boundary that will restrict the managed policies they can attach to themselves
69
AWS Transit Gateway
70
Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) with automatic key rotation
71
Dedicated Instances
72
Attach the appropriate IAM role to the Amazon EC2 instance profile so that the instance can access Amazon S3 and Amazon DynamoDB
73
Use Amazon S3 Bucket Policies
74
Create a new Amazon S3 bucket in the us-east-1 region with replication enabled from this new bucket into another bucket in us-west-1 region. Enable SSE-KMS encryption on the new bucket in us-east-1 region by using an AWS KMS multi-region key. Copy the existing data from the current Amazon S3 bucket in us-east-1 region into this new Amazon S3 bucket in us-east-1 region
75
{ "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:ListBucket" ], "Resource":"arn:aws:s3:::mybucket" }, { "Effect":"Allow", "Action":[ "s3:GetObject" ], "Resource":"arn:aws:s3:::mybucket/*" } ] }
76
Create an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key
77
Enable DNS hostnames and DNS resolution for private hosted zones
78
It allows running Amazon EC2 instances only in the eu-west-1 region, and the API call can be made from anywhere in the world
79
It allows starting an Amazon EC2 instance only when the IP where the call originates is within the 34.50.31.0/24 CIDR block
80
Build a shared services Amazon Virtual Private Cloud (Amazon VPC)
81
Set the DeleteOnTermination attribute to false
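A sketch of flipping that attribute on a running instance's root volume; the instance ID and device name are placeholders and the actual root device name should be checked first.

    # Hypothetical sketch: keep the root EBS volume when the instance terminates.
    import boto3

    ec2 = boto3.client("ec2")

    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",   # hypothetical instance
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",      # verify against the instance's actual root device
            "Ebs": {"DeleteOnTermination": False},
        }],
    )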
82
Take a snapshot of the database, copy it as an encrypted snapshot, and restore a database from the encrypted snapshot. Terminate the previous database
83
Users belonging to the IAM user group can terminate an Amazon EC2 instance in the us-west-1 region when the user's source IP is 10.200.200.200
84
Leverage AWS Config managed rule to check if any third-party SSL/TLS certificates imported into ACM are marked for expiration within 30 days. Configure the rule to trigger an Amazon SNS notification to the security team if any certificate expires within 30 days
85
By default, an Amazon S3 object is owned by the AWS account that uploaded it. So the Amazon S3 bucket owner will not implicitly have access to the objects written by the Amazon Redshift cluster
86
Use Amazon Cognito Authentication via Cognito User Pools for your Application Load Balancer
87
Create an IP match condition in the AWS WAF to block the malicious IP address
88
Create an IAM role for the AWS Lambda function that grants access to the Amazon S3 bucket. Set the IAM role as the AWS Lambda function's execution role. Make sure that the bucket policy also grants access to the AWS Lambda function's execution role
89
Security Groups are stateful, so allowing inbound traffic to the necessary ports enables the connection. Network access control list (network ACL) are stateless, so you must allow both inbound and outbound traffic
90
Use Amazon Cognito User Pools
91
Server-Side Encryption with Customer-Provided Keys (SSE-C)
92
The security group of the Amazon EC2 instances should have an inbound rule from the security group of the Application Load Balancer on port 80, The security group of Amazon RDS should have an inbound rule from the security group of the Amazon EC2 instances in the Auto Scaling group on port 5432, The security group of the Application Load Balancer should have an inbound rule from anywhere on port 443
93
Use a target tracking scaling policy based on a custom Amazon SQS queue metric
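A sketch of a target tracking policy on a custom "backlog per instance" metric that would be published separately (queue depth divided by instance count); the group name, namespace, and target value are assumptions.

    # Hypothetical sketch: target tracking scaling on a custom SQS backlog metric.
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="worker-asg",            # hypothetical group
        PolicyName="sqs-backlog-per-instance",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "MetricName": "BacklogPerInstance",   # published by a separate process
                "Namespace": "Custom/SQS",
                "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
                "Statistic": "Average",
            },
            "TargetValue": 10.0,                      # acceptable backlog per instance
        },
    )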
94
Security Groups are stateful, so allowing inbound traffic to the necessary ports enables the connection. Network ACLs are stateless, so you must allow both inbound and outbound traffic
95
Copying an Amazon Machine Image (AMI) backed by an encrypted snapshot cannot result in an unencrypted target snapshot, You can share an Amazon Machine Image (AMI) with another AWS account, You can copy an Amazon Machine Image (AMI) across AWS Regions
96
Create an inbound endpoint on Amazon Route 53 Resolver and then DNS resolvers on the on-premises network can forward DNS queries to Amazon Route 53 Resolver via this endpoint, Create an outbound endpoint on Amazon Route 53 Resolver and then Amazon Route 53 Resolver can conditionally forward queries to resolvers on the on-premises network via this endpoint
97
Configure an origin access identity (OAI) and associate it with the Amazon CloudFront distribution. Set up the permissions in the Amazon S3 bucket policy so that only the OAI can read the objects, Create an AWS WAF ACL and use an IP match condition to allow traffic only from those IPs that are allowed in the Amazon EC2 security group. Associate this new AWS WAF ACL with the Amazon CloudFront distribution
98
Service control policy (SCP) does not affect service-linked roles, Service control policy (SCP) affects all users and roles in the member accounts, including the root user of the member accounts, If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable service control policy (SCP), the user or role can't perform that action
99
AWS VPN CloudHub
100
You can use an Internet Gateway ID as the custom source for the inbound rule