Question List
1
Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every night
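A minimal boto3 sketch of the scheduled-task piece, assuming the NFS source and EFS destination DataSync locations already exist; the ARNs and the nightly cron expression are placeholders:

    import boto3

    datasync = boto3.client("datasync")

    # Nightly task from the on-premises NFS location to the EFS location.
    # Both location ARNs are assumed to have been created beforehand.
    datasync.create_task(
        SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-nfs-EXAMPLE",
        DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-efs-EXAMPLE",
        Name="nightly-image-sync",
        Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},  # every night at 02:00 UTC
    )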
2
Set up database migration to Amazon Aurora MySQL. Deploy the application in an Auto Scaling group for Amazon EC2 instances that are fronted by an Application Load Balancer. Store sessions in an Amazon ElastiCache for Redis replication group
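A minimal sketch of the session-store idea using the redis-py client, assuming the replication group's primary endpoint is reachable from the application instances; the endpoint, TTL, and key layout are illustrative:

    import json
    import uuid

    import redis

    # Primary endpoint of the ElastiCache for Redis replication group (placeholder).
    r = redis.Redis(host="my-sessions.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379)

    def save_session(data, ttl_seconds=3600):
        # Write the session with an expiry so stale sessions age out on their own.
        session_id = str(uuid.uuid4())
        r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))
        return session_id

    def load_session(session_id):
        raw = r.get(f"session:{session_id}")
        return json.loads(raw) if raw else None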
3
Leverage AWS DataSync to transfer the biological data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS Step Functions workflow for orchestrating an AWS Batch job that processes the biological data
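A minimal sketch of the Lambda piece, assuming the state machine that orchestrates the AWS Batch job already exists; the state machine name and payload shape are hypothetical:

    import json

    import boto3

    sfn = boto3.client("stepfunctions")

    # Hypothetical state machine that orchestrates the AWS Batch processing job.
    STATE_MACHINE_ARN = "arn:aws:states:us-east-1:111122223333:stateMachine:process-biological-data"

    def lambda_handler(event, context):
        # Triggered by an S3 event; starts one workflow execution per new object.
        for record in event["Records"]:
            payload = {
                "bucket": record["s3"]["bucket"]["name"],
                "key": record["s3"]["object"]["key"],
            }
            sfn.start_execution(
                stateMachineArn=STATE_MACHINE_ARN,
                input=json.dumps(payload),
            )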
4
Configure an Amazon VPC interface endpoint for the Secrets Manager service to enable access for your Secrets Manager Lambda rotation function and private Amazon Relational Database Service (Amazon RDS) instance
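A minimal boto3 sketch of creating the interface endpoint; the VPC, subnet, and security group IDs are placeholders, and the service name assumes us-east-1:

    import boto3

    ec2 = boto3.client("ec2")

    # Interface endpoint so the rotation function and RDS-bound traffic can reach
    # Secrets Manager without leaving the VPC.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.secretsmanager",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,  # default Secrets Manager DNS name resolves privately
    )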
5
The research assistant does not need to pay any transfer charges for the image upload
6
Configure a public virtual interface on the 10-Gbps Direct Connect connection and then copy the data to S3 over the connection
7
Configure DMS data validation on the migration task so it can compare the source and target data for the DMS task and report any mismatches
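A sketch of creating a DMS task with validation enabled via the task settings JSON; the endpoint and replication instance ARNs are placeholders, and the table mapping simply includes everything:

    import json

    import boto3

    dms = boto3.client("dms")

    # Task settings that turn on row-by-row validation between source and target.
    task_settings = {"ValidationSettings": {"EnableValidation": True, "ThreadCount": 5}}

    # Selection rule that includes every schema and table.
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }

    dms.create_replication_task(
        ReplicationTaskIdentifier="migration-with-validation",
        SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC-EXAMPLE",
        TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT-EXAMPLE",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:RI-EXAMPLE",
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps(table_mappings),
        ReplicationTaskSettings=json.dumps(task_settings),
    )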
8
The primary and standby DB instances are upgraded at the same time for RDS MySQL Multi-AZ. All instances are upgraded at the same time for Aurora MySQL; Multi-AZ deployments for both RDS MySQL and Aurora MySQL follow synchronous replication; Read Replicas can be manually promoted to a standalone database instance for RDS MySQL, whereas Read Replicas for Aurora MySQL can be promoted to the primary instance
9
Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into Amazon S3 Glacier
10
Attach an EFS file system to the on-premises servers to act as the NAS server. Mount the same EFS file system to the AWS-based web servers running on EC2 instances to serve the content
11
Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS
12
Order three AWS Snowball Edge appliances, split and transfer the data to these three appliances, and ship them to AWS, which will then copy the data from the Snowball Edge appliances to S3
13
Dockerize each application and then deploy to an ECS cluster running behind an Application Load Balancer
14
Replatform the IT infrastructure to AWS Cloud by leveraging AWS OpsWorks as a configuration management service to automate the configurations of servers on AWS
15
Create an outbound endpoint on Route 53 Resolver and then Route 53 Resolver can conditionally forward queries to resolvers on the on-premises network via this endpoint; Create an inbound endpoint on Route 53 Resolver and then DNS resolvers on the on-premises network can forward DNS queries to Route 53 Resolver via this endpoint
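A sketch of the outbound half with a conditional forwarding rule, assuming subnets in two AZs (Resolver endpoints need at least two IP addresses); the IDs, domain, and on-premises resolver IP are placeholders:

    import boto3

    resolver = boto3.client("route53resolver")

    # Outbound endpoint: Route 53 Resolver forwards queries out through these ENIs.
    out_ep = resolver.create_resolver_endpoint(
        CreatorRequestId="outbound-2024-01-01",
        Name="to-on-premises",
        Direction="OUTBOUND",
        SecurityGroupIds=["sg-0123456789abcdef0"],
        IpAddresses=[{"SubnetId": "subnet-aaaa1111"}, {"SubnetId": "subnet-bbbb2222"}],
    )

    # Forwarding rule: send queries for the on-premises zone to on-premises resolvers.
    resolver.create_resolver_rule(
        CreatorRequestId="rule-2024-01-01",
        RuleType="FORWARD",
        DomainName="corp.example.com",
        TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],
        ResolverEndpointId=out_ep["ResolverEndpoint"]["Id"],
    )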
16
Enable Aurora Auto Scaling for Aurora Replicas. Deploy the application on Amazon EC2 instances configured behind an Auto Scaling Group; Configure EC2 instances behind an Application Load Balancer with the Round Robin routing algorithm and sticky sessions enabled
17
Set up Amazon Kinesis Data Firehose to buffer events and an AWS Lambda function to process and transform the events. Set up Amazon OpenSearch to receive the transformed events. Use the Kibana endpoint that is deployed with OpenSearch to create near-real-time visualizations and dashboards
18
Create a Read Replica in the same Region as the Master database and point the analytics workload there
19
Set up a Direct Connect connection to each on-premises data center from different service providers and configure routing to fail over to the other on-premises data center's Direct Connect connection in case one connection fails. Make sure that no VPC CIDR blocks overlap one another or the on-premises network
20
Provision On-Demand Instances for the web and application tiers and Reserved Instances for the database tier
21
Ingest the orders in an SQS queue and trigger a Lambda function to process them
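A minimal sketch of the consuming Lambda function, assuming an SQS event source mapping delivers batches of order messages; process_order is a hypothetical stand-in for the business logic:

    import json

    def process_order(order):
        # Hypothetical business logic for one order.
        print(f"processing order {order.get('id')}")

    def lambda_handler(event, context):
        # Invoked by the SQS event source mapping with a batch of messages.
        for record in event["Records"]:
            order = json.loads(record["body"])
            process_order(order)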
22
Use centralized VPC endpoints for connecting with multiple VPCs, an arrangement also known as a shared services VPC
23
Set up DynamoDB Streams to capture and send updates to a Lambda function that outputs records to Kinesis Data Analytics (KDA) via Kinesis Data Streams (KDS). Detect and analyze anomalies in KDA and send notifications via SNS
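A sketch of the Lambda stage that fans the stream records into Kinesis Data Streams; the stream name is a hypothetical KDA input, and the record shape follows the DynamoDB Streams event format:

    import json

    import boto3

    kinesis = boto3.client("kinesis")

    def lambda_handler(event, context):
        # Invoked by the DynamoDB stream; forwards each change record to KDS.
        for record in event["Records"]:
            change = record["dynamodb"]  # keys, new/old images, sequence number
            kinesis.put_record(
                StreamName="anomaly-input",  # hypothetical stream consumed by KDA
                Data=json.dumps(change, default=str).encode(),
                PartitionKey=record["eventID"],
            )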
24
Register the Application Load Balancer behind a Network Load Balancer that will provide the necessary static IP address to the ALB
25
Create a single Amazon S3 bucket. Create an IAM user for each client. Group these users under an IAM policy that permits access to sub-folders within the bucket via the ${aws:username} policy variable. Train the clients to use an S3 client instead of an FTP client
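A sketch of such a policy created with boto3; the bucket name and policy name are placeholders, and IAM resolves ${aws:username} at request time so each user sees only their own prefix:

    import json

    import boto3

    iam = boto3.client("iam")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::client-uploads",
                "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": "arn:aws:s3:::client-uploads/${aws:username}/*",
            },
        ],
    }

    iam.create_policy(PolicyName="per-client-subfolder", PolicyDocument=json.dumps(policy))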
26
Set up a new Amazon FSx file system with a Multi-AZ deployment type. Leverage AWS DataSync to transfer data from the old file system to the new one. Point the application to the new Multi-AZ file system
27
Add the subnet CIDR range or the IP address of the replication instance to the inbound rules of the Amazon Redshift cluster security group; Your Amazon Redshift cluster must be in the same account and the same AWS Region as the replication instance
28
Set up a new DynamoDB table each day and drop the table for the previous day after its data is written to S3; Set up an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce the provisioned write throughput
29
Enable default encryption on the Amazon S3 bucket that uses Amazon S3-managed keys (SSE-S3) encryption (AES-256) for audit logging. Use Amazon Redshift Spectrum to query the data for monthly audits
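A minimal sketch of enabling default SSE-S3 encryption on the bucket; the bucket name is a placeholder:

    import boto3

    s3 = boto3.client("s3")

    # Default SSE-S3 (AES-256) encryption for every new object in the audit bucket.
    s3.put_bucket_encryption(
        Bucket="audit-logs-example",
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
            ]
        },
    )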
30
Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations
31
Install the AWS Application Discovery Agent on each of the VMs to collect the configuration and utilization data
32
A NAT gateway has to be configured to give internet access to the VPC-connected Lambda function
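A sketch of the two-step setup, assuming the NAT gateway lives in a public subnet and the Lambda function's private subnet has its own route table; all IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # NAT gateway in a public subnet, addressed by an Elastic IP allocation.
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-public-EXAMPLE",
        AllocationId="eipalloc-EXAMPLE",
    )

    # Default route in the Lambda function's private subnet route table.
    ec2.create_route(
        RouteTableId="rtb-private-EXAMPLE",
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat["NatGateway"]["NatGatewayId"],
    )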
33
Ingest the data into Amazon S3 from S3 Glacier and query the required data with Amazon S3 Select
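A sketch of pulling only the matching rows out of one restored CSV object with S3 Select; the bucket, key, column names, and filter are illustrative:

    import boto3

    s3 = boto3.client("s3")

    resp = s3.select_object_content(
        Bucket="restored-archive",
        Key="2019/records.csv",
        ExpressionType="SQL",
        Expression="SELECT s.id, s.amount FROM S3Object s WHERE s.region = 'EU'",
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
        OutputSerialization={"CSV": {}},
    )

    # The response is an event stream; Records events carry the selected rows.
    for event in resp["Payload"]:
        if "Records" in event:
            print(event["Records"]["Payload"].decode())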
34
Deploy the VPC infrastructure using AWS CloudFormation and leverage a custom resource to request a unique CIDR range from an external IP address management (IPAM) service
35
Amazon S3 Glacier Instant Retrieval is the best fit for data accessed twice a year. Amazon S3 Glacier Deep Archive is cost-effective for data that is stored for long-term retention
36
Create an IPsec tunnel between your customer gateway appliance and the virtual private gateway; Create a VPC with a virtual private gateway; Set up a public virtual interface on the Direct Connect connection
37
The SSL certificate on the legacy web application server has expired. Reissue the SSL certificate on the web server that is signed by a globally recognized certificate authority (CA). Install the full certificate chain onto the legacy web application server
38
Set up Interface endpoints for Amazon S3
39
Replatform the server to Amazon EC2 while choosing an AMI of your choice to cater to the OS requirements. Use AWS Snowball to transfer the image data to Amazon S3
40
Set up VPC Flow Logs for the elastic network interfaces associated with the instances and configure the VPC Flow Logs to be filtered for rejected traffic. Publish the Flow Logs to CloudWatch Logs
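A sketch of creating the REJECT-only flow log on an instance's ENI; the ENI ID, log group, and IAM role ARN are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Capture only rejected traffic on the ENI and publish it to CloudWatch Logs.
    ec2.create_flow_logs(
        ResourceIds=["eni-0123456789abcdef0"],
        ResourceType="NetworkInterface",
        TrafficType="REJECT",
        LogDestinationType="cloud-watch-logs",
        LogGroupName="/vpc/rejected-traffic",
        DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-to-cw",
    )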
41
Leverage AWS Config rules for auditing changes to AWS resources periodically and monitor the compliance of the configuration. Set up AWS Config custom rules using AWS Lambda to create a test-driven development approach, and finally automate the evaluation of configuration changes against the required controls; Leverage AWS CloudTrail events to review management activities of all AWS accounts. Make sure that CloudTrail is enabled in all accounts for the available AWS services. Enable CloudTrail trails and encrypt CloudTrail event log files with an AWS KMS key and monitor the recorded events via CloudWatch Logs
42
Amazon CloudFront with Lambda@Edge; Application Load Balancer
43
Design the serverless architecture using Amazon Simple Queue Service (SQS) with Amazon ECS Fargate. To save costs, run the Amazon SQS FIFO queues and Amazon ECS Fargate tasks only when needed
44
Store the intermediate query results in the S3 Standard storage class
45
Configure a Lifecycle Policy to transition objects to S3 Standard-IA using a prefix after 45 days; Configure a Lifecycle Policy to transition all objects to Glacier after 180 days
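A sketch of both rules in one lifecycle configuration; the bucket name and prefix are placeholders:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="media-assets",
        LifecycleConfiguration={
            "Rules": [
                {   # only objects under this prefix move to Standard-IA at 45 days
                    "ID": "prefix-to-standard-ia",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "raw/"},
                    "Transitions": [{"Days": 45, "StorageClass": "STANDARD_IA"}],
                },
                {   # every object moves to Glacier at 180 days
                    "ID": "all-to-glacier",
                    "Status": "Enabled",
                    "Filter": {},
                    "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
                },
            ]
        },
    )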
46
Establish connectivity between VPC V1 and VPC V2 using VPC peering. Enable DNS resolution from the source VPC for VPC peering. Establish the necessary routes, security group rules, and network access control list (ACL) rules to allow traffic between the VPCs
47
Use Athena to run SQL based analytics against S3 data
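A minimal sketch of submitting one Athena query; the database, table, and results bucket are placeholders:

    import boto3

    athena = boto3.client("athena")

    # Submit the query; Athena writes the result set to the S3 output location.
    athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
        QueryExecutionContext={"Database": "weblogs"},
        ResultConfiguration={"OutputLocation": "s3://athena-results-example/"},
    )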
48
Migrate mission-critical VMs using AWS Application Migration Service (MGN). Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball Edge. Leverage VM Import/Export to import the VMs into Amazon EC2
49
Analyze the cold data with Athena; Transition the data to S3 Standard-IA after 30 days