
3. X-Cutioner's Song
100 questions • 7 months ago

    Question List

  • 1

    The security team at a company has put forth a requirement to track the external IP address when a customer or a third party uploads files to the Amazon Simple Storage Service (Amazon S3) bucket owned by the company. How will you track the external IP address used for each upload? (Select two)

    • Enable Amazon S3 server access logging to capture all bucket-level and object-level events
    • Enable AWS CloudTrail data events to enable object-level logging for S3 bucket
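    For illustration, a minimal boto3 sketch of turning on CloudTrail data events (object-level logging) for a bucket; the trail and bucket names are hypothetical. The captured events include the caller's sourceIPAddress field.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Log S3 object-level API activity (e.g. PutObject) for one bucket.
cloudtrail.put_event_selectors(
    TrailName="my-trail",  # assumed existing trail
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # trailing slash = all objects in the bucket
                    "Values": ["arn:aws:s3:::example-uploads-bucket/"],
                }
            ],
        }
    ],
)
```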

  • 2

    A social media company manages a multi-AZ VPC environment consisting of public subnets and private subnets. Each public subnet contains a NAT Gateway as well as an Internet Gateway. Most of the company's applications are deployed in the private subnets and these applications read and write data to Kinesis Data Streams. The company has hired you as an AWS Certified Solutions Architect Professional to reduce costs and optimize the applications. Upon analysis in the AWS Cost Explorer, you notice that the cost in the EC2-Other category is consistently high due to the increasing NAT Gateway data transfer charges. What do you recommend to address this requirement?

    Set up an interface VPC endpoint for Kinesis Data Streams in the VPC. Ensure that the VPC endpoint policy allows traffic from the applications
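    A rough boto3 sketch of creating such an interface endpoint; the service name follows the com.amazonaws.&lt;region&gt;.kinesis-streams pattern, and the IDs, role ARN, and endpoint policy shown are placeholders:

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint so Kinesis Data Streams traffic stays on the AWS
# network instead of going through the NAT Gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0123456789abcdef1", "subnet-0123456789abcdef2"],
    SecurityGroupIds=["sg-0123456789abcdef3"],
    PrivateDnsEnabled=True,
    # Endpoint policy allowing the applications' role to use the streams
    PolicyDocument=json.dumps({
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-role"},
            "Action": "kinesis:*",
            "Resource": "*",
        }]
    }),
)
```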

  • 3

    A social learning platform allows students to connect with other students as well as experts and professionals from academia, research institutes, and industry. The engineering team at the company manages 5 Amazon EC2 instances that make read-heavy database requests to the Amazon RDS for PostgreSQL DB cluster. As an AWS Certified Solutions Architect Professional, you have been asked to make the database cluster resilient from a disaster recovery perspective. Which of the following features will help you prepare for database disaster recovery? (Select two)

    • Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates backups in a single or multiple AWS Region(s)
    • Use cross-Region Read Replicas

  • 4

    A financial services company had a security incident recently and wants to review the security of its two-tier server architecture. The company wants to ensure that it follows the principle of least privilege while configuring the security groups for access between the EC2 instance-based app servers and RDS MySQL database servers. The security group for the EC2 instances as well as the security group for the MySQL database servers has no inbound and outbound rules configured currently. As an AWS Certified Solutions Architect Professional, which of the following options would you recommend to adhere to the given requirements? (Select two)

    • Create an inbound rule in the security group for the MySQL DB servers using TCP protocol on port 3306. Set the source as the security group for the EC2 instance app servers
    • Create an outbound rule in the security group for the EC2 instance app servers using TCP protocol on port 3306. Set the destination as the security group for the MySQL DB servers
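    A minimal boto3 sketch of these two rules, assuming placeholder security group IDs:

```python
import boto3

ec2 = boto3.client("ec2")

APP_SG = "sg-0aaa1111bbbb22222"  # placeholder: app-server security group
DB_SG = "sg-0ccc3333dddd44444"   # placeholder: MySQL DB security group

# Inbound on the DB security group: allow MySQL (TCP 3306) only from the
# app servers' security group, not from a CIDR range.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": APP_SG}],
    }],
)

# Outbound on the app security group: allow MySQL traffic only towards
# the DB security group.
ec2.authorize_security_group_egress(
    GroupId=APP_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": DB_SG}],
    }],
)
```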

  • 5

    Recently, an Amazon CloudFront distribution has been configured with an Amazon S3 bucket as the origin. However, users are getting an HTTP 307 Temporary Redirect response from Amazon S3. What could be the reason for this behavior and how will you resolve the issue? (Select two)

    • When a new Amazon S3 bucket is created, it takes up to 24 hours before the bucket name propagates across all AWS Regions
    • CloudFront, by default, forwards the requests to the default S3 endpoint. Change the origin domain name of the distribution to include the Regional endpoint of the bucket

  • 6

    The engineering team at a retail company wants to establish a dedicated, encrypted, low-latency, and high-throughput connection between its data center and AWS Cloud. The engineering team has set aside sufficient time to account for the operational overhead of establishing this connection. Which of the following options represents the MOST optimal solution with the LEAST infrastructure setup required for provisioning the end-to-end connection?

    Use AWS Direct Connect along with a site-to-site VPN to establish a connection between the data center and AWS Cloud

  • 7

    A company uses Amazon FSx for Windows File Server with deployment type of Single-AZ 2 as its file storage service for its non-core functions. With a change in the company's policy that mandates high availability of data for all its functions, the company needs to change the existing configuration. The company also needs to monitor the file system activity as well as the end-user actions on the Amazon FSx file server. Which solutions will you combine to implement these requirements? (Select two)

    • You can monitor storage capacity and file system activity using Amazon CloudWatch, and monitor end-user actions with file access auditing using Amazon CloudWatch Logs and Amazon Kinesis Data Firehose
    • Configure a new Amazon FSx for Windows file system with a deployment type of Multi-AZ. Transfer data to the newly created file system using the AWS DataSync service. Point all the file system users to the new location. You can test the failover of your Multi-AZ file system by modifying its throughput capacity

  • 8

    An e-commerce company is investigating user reports of its Java-based web application errors on the day of the Thanksgiving sale. The development team recovered the logs created by the EC2 instance-hosted web servers and reviewed Aurora DB cluster performance metrics. Some of the web servers were terminated before logs could be collected and the Aurora metrics were inadequate for query performance analysis. Which of the following steps would you recommend to make the monitoring process more reliable to troubleshoot any future events due to traffic spikes? (Select three)

    • Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the application logs to CloudWatch Logs
    • Set up the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances as well as set up tracing of SQL queries with the X-Ray SDK for Java
    • Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs

  • 9

    A financial services company has multiple AWS accounts hosting its portfolio of IT applications that serve the company's retail and enterprise customers. A CloudWatch Logs agent is installed on each of the EC2 instances running these IT applications. The company wants to aggregate all security events in a centralized AWS account dedicated to log storage. The centralized operations team at the company needs to perform near-real-time gathering and collating events across multiple AWS accounts. As a Solutions Architect Professional, which of the following solutions would you suggest to meet these requirements?

    Set up Kinesis Data Firehose in the logging account and then subscribe the delivery stream to CloudWatch Logs streams in each application AWS account via subscription filters. Persist the log data in an Amazon S3 bucket inside the logging AWS account

  • 10

    A data analytics company uses Amazon S3 as the data lake to store the input data that is ingested from the IoT field devices on an hourly basis. The ingested data has attributes such as the device type, ID of the device, the status of the device, the timestamp of the event, the source IP address, etc. The data runs into millions of records per day and the company wants to run complex analytical queries on this data daily for product improvements for each device type. Which is the most optimal way to save this data to get the best performance from the millions of data points processed daily?

    Store the data in Apache ORC, partitioned by date and sorted by device type

  • 11

    A leading pharmaceutical company has significant investments in running Oracle and PostgreSQL services on Amazon RDS which provide their scientists with near real-time analysis of millions of rows of manufacturing data generated by continuous manufacturing equipment with 1,600 data points per row. The business analytics team has been running ad-hoc queries on these databases to prepare daily reports for senior management. The engineering team has observed that the database performance takes a hit whenever these reports are run by the analytics team. To facilitate the business analytics reporting, the engineering team now wants to replicate this data with high availability and consolidate these databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift. As a Solutions Architect Professional, which of the following would you recommend as the MOST resource-efficient solution that requires the LEAST amount of development time without the need to manage the underlying infrastructure?

    Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift

  • 12

    A digital marketing company uses S3 to store artifacts that may only be accessible to EC2 instances running in a private VPC. The security team at the company is apprehensive about an attack vector wherein any team member with access to this instance could also set up an EC2 instance in another VPC to access these artifacts. As an AWS Certified Solutions Architect Professional, which of the following solutions will you recommend to prevent such unauthorized access to the artifacts in S3?

    Configure an S3 VPC endpoint and create an S3 bucket policy to allow access only from this VPC endpoint
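    A minimal sketch of such a bucket policy applied with boto3; the bucket name and VPC endpoint ID are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny every request to the artifacts bucket unless it arrives through the
# specific VPC endpoint of the private VPC.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AccessViaVpcEndpointOnly",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-artifacts-bucket",
            "arn:aws:s3:::example-artifacts-bucket/*",
        ],
        "Condition": {
            "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}

s3.put_bucket_policy(Bucket="example-artifacts-bucket", Policy=json.dumps(policy))
```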

  • 13

    A retail company recently saw a huge spike in its monthly AWS spend. Upon further investigation, it was found that some developers had accidentally launched Amazon RDS instances in unexpected Regions. The company has hired you as an AWS Certified Solutions Architect Professional to establish best practices around least privileges for developers and control access to on-premises as well as AWS Cloud resources using Active Directory. The company has mandated you to institute a mechanism to control costs by restricting the level of access that developers have to the AWS Management Console without impacting their productivity. The company would also like to allow developers to launch RDS instances only in us-east-1 Region without limiting access to other services in any Region. How can you help the company achieve the new security mandate while minimizing the operational burden on the DevOps team?

    Configure SAML-based authentication tied to an IAM role that has the PowerUserAccess managed policy attached to it. Attach a customer-managed policy that denies access to RDS in any AWS Region except us-east-1
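    A minimal sketch of the deny policy using the aws:RequestedRegion global condition key; the policy name, role name, and account are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

# Customer-managed policy: deny all RDS actions unless the request targets
# us-east-1. Attached alongside PowerUserAccess on the SAML-federated role.
deny_rds_outside_us_east_1 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "rds:*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-east-1"}},
    }],
}

resp = iam.create_policy(
    PolicyName="DenyRDSOutsideUsEast1",
    PolicyDocument=json.dumps(deny_rds_outside_us_east_1),
)
iam.attach_role_policy(
    RoleName="DeveloperFederatedRole",  # placeholder role assumed via SAML
    PolicyArn=resp["Policy"]["Arn"],
)
```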

  • 14

    A company has built a serverless electronic document management system for users to upload their documents. The system also has a web application that connects to an Amazon API Gateway with Regional endpoints which in turn invokes AWS Lambda functions. The Lambda functions write the metadata of the documents to the Amazon Aurora Serverless database before uploading the actual documents to the Amazon S3 bucket. While the serverless architecture has been tested in the US East (N. Virginia) Region, the solution should be scalable for other AWS Regions too. As an AWS Certified Solutions Architect Professional, which options would you recommend to make the architecture scalable while offering low latency service to customers of any AWS region? (Select two)

    • Change the API Gateway Regional endpoints to edge-optimized endpoints
    • Enable S3 Transfer Acceleration on the S3 bucket and configure the web application to use the Transfer Acceleration endpoints

  • 15

    A global apparel, footwear, and accessories retailer uses Amazon S3 for centralized storage of the static media assets such as images and videos for its products. The product planning specialists typically upload and download video files (about 100MB each) to the same S3 bucket as part of their day to day work. Initially, the product planning specialists were based out of a single region and there were no performance issues. However, as the company grew and started running offices from multiple countries, it resulted in poor latency while accessing data from S3 and uploading data to S3. The company wants to continue with the serverless solution for its storage requirements but wants to improve its performance. As a solutions architect, which of the following solutions do you propose to address this issue? (Select two)

    • Use Amazon CloudFront distribution with origin as the S3 bucket. This would speed up uploads as well as downloads for the video files
    • Enable Amazon S3 Transfer Acceleration for the S3 bucket. This would speed up uploads as well as downloads for the video files

  • 16

    A company allows property owners and travelers to connect with each other for the purpose of renting unique vacation spaces around the world. The engineering team at the company uses Amazon MySQL RDS DB cluster because it simplifies much of the time-consuming administrative tasks typically associated with databases. The team uses Multi-Availability Zone (Multi-AZ) deployment to further automate its database replication and augment data durability. The current cluster configuration also uses Read Replicas. An intern has joined the team and wants to understand the replication capabilities for Multi-AZ as well as Read Replicas for the given RDS cluster. As a Solutions Architect Professional, which of the following capabilities would you identify as correct for the given database?

    Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read Replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region

  • 17

    An Internet-of-Things (IoT) company is using Kinesis Data Streams (KDS) to process IoT data from field devices. Multiple consumer applications are using the incoming data streams and the engineers have noticed a performance lag for the data delivery speed between producers and consumers of the data streams. As a Solutions Architect Professional, which of the following would you recommend to improve the performance for the given use-case?

    Use the Enhanced Fan-Out feature of Kinesis Data Streams to support the desired read throughput for the downstream applications

  • 18

    A digital media company has hired you as an AWS Certified Solutions Architect Professional to optimize the architecture for its backup solution for applications running on the AWS Cloud. Currently, all of the applications running on AWS use at least two Availability Zones (AZs). The updated backup policy at the company mandates that all nightly backups for its data are durably stored in at least two geographically distinct Regions for Production and Disaster Recovery (DR) and the backup processes for both Regions must be fully automated. The new backup solution must ensure that the backup is available to be restored immediately for the Production Region and should be restored within 24 hours in the DR Region. Which of the following represents the MOST cost-effective solution that will address the given use-case?

    Create a backup process to persist all the data to an S3 bucket A using S3 standard storage class in the Production Region. Set up cross-Region replication of this S3 bucket A to an S3 bucket B using S3 standard storage class in the DR Region and set up a lifecycle policy in the DR Region to immediately move this data to Amazon Glacier

  • 19

    A retail company has hired you as an AWS Certified Solutions Architect Professional to provide consultancy for managing a serverless application that consists of multiple API gateways, Lambda functions, S3 buckets and DynamoDB tables. The company is getting reports from customers that some of the application components seem to be lagging while loading dynamic images and some are timing out with the "504 Gateway Timeout" error. As part of your investigations to identify the root cause behind this issue, you can confirm that DynamoDB monitoring metrics are at acceptable levels. Which of the following steps would you recommend to address these application issues? (Select two)

    • Process and analyze the Amazon CloudWatch Logs for Lambda function to determine processing times for requested images at pre-configured intervals
    • Process and analyze the AWS X-Ray traces and analyze HTTP methods to determine the root cause of the HTTP errors

  • 20

    A leading hotel reviews website has a repository of more than one million high-quality digital images. When this massive volume of images became too cumbersome to handle in-house, the company decided to offload the content to a central repository on Amazon S3 as part of its hybrid cloud strategy. The company now wants to reprocess its entire collection of photographic images to change the watermarks. The company wants to use Amazon EC2 instances and Amazon SQS in an integrated workflow to generate the sizes they need for each photo. The team wants to process a few thousand photos each night, using Amazon EC2 Spot Instances. The team uses Amazon SQS to communicate the photos that need to be processed and the status of the jobs. To handle certain sensitive photos, the team wants to postpone the delivery of certain messages to the queue by one minute while all other messages need to be delivered immediately to the queue. As a Solutions Architect Professional, which of the following solutions would you suggest to the company to handle the workflow for sensitive photos?

    Use message timers to postpone the delivery of certain messages to the queue by one minute
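    A minimal boto3 sketch of a per-message timer; the queue name and message body are hypothetical. Messages sent without DelaySeconds are delivered immediately.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="photo-processing-queue")["QueueUrl"]

# Message timer: this individual message becomes visible to consumers
# only after 60 seconds.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"photo_key": "sensitive/img-001.jpg", "action": "watermark"}',
    DelaySeconds=60,
)
```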

  • 21

    The engineering team at a retail company has deployed a fleet of EC2 instances under an Auto Scaling group (ASG). The instances under the ASG span two Availability Zones (AZ) within the eu-west-1 region. All the incoming requests are handled by an Application Load Balancer (ALB) that routes the requests to the EC2 instances under the ASG. A planned migration went wrong last week when two instances (belonging to AZ 1) were manually terminated and desired capacity was reduced causing the Availability Zones to become unbalanced. Later that day, another instance (belonging to AZ 2) was detected as unhealthy by the Application Load Balancer's health check. Which of the following options represent the correct outcomes for the aforesaid events? (Select two)

    • As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application
    • Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance

  • 22

    A leading telecommunications company has developed its cloud storage solution on Amazon RDS for MySQL but it's running into performance issues despite using Read Replicas. The company has hired you as an AWS Certified Solutions Architect Professional to address these performance-related challenges on an urgent basis without moving away from the underlying relational database schema. The company has branch offices across the world, and it needs the solution to work on a global scale. Which of the following will you recommend as the MOST cost-effective and high-performance solution?

    Use Amazon Aurora Global Database to enable fast local reads with low latency in each region

  • 23

    A web-hosting startup manages more than 500 public web applications on AWS Cloud which are deployed in a single AWS Region. The fully qualified domain names (FQDNs) of all of the applications are configured to use HTTPS and are served via Application Load Balancers (ALBs). These ALBs are configured to use public SSL/TLS certificates. The startup has hired you as an AWS Certified Solutions Architect Professional to migrate the web applications to a multi-Region architecture. You must ensure that all HTTPS services continue to work without interruption. Which of the following solutions would you suggest to address these requirements?

    Generate a separate certificate for each FQDN in each AWS Region using AWS Certificate Manager. Associate the certificates with the corresponding ALBs in the relevant AWS Region

  • 24

    A gaming company runs its flagship application with an SLA of 99.99%. Global users access the application 24/7. The application is currently hosted on the on-premises data centers and it routinely fails to meet its SLA, especially when hundreds of thousands of users access the application concurrently. The engineering team has also received complaints from some users about high latency. As a Solutions Architect Professional, how would you redesign this application for scalability and also allow for automatic failover at the lowest possible cost?

    Configure Route 53 latency-based routing to route to the nearest Region and activate the health checks. Host the website on S3 in each Region and use API Gateway with AWS Lambda for the application layer. Set up the data layer using DynamoDB global tables with DAX for caching

  • 25

    A stock trading firm uses AWS Cloud for its IT infrastructure. The firm runs several trading-risk simulation applications, developing complex algorithms to simulate diverse scenarios in order to evaluate the financial health of its customers. The firm stores customers' financial records on Amazon S3. The engineering team needs to implement an archival solution based on Amazon S3 Glacier to enforce regulatory and compliance controls on the archived data. As a Solutions Architect Professional, which of the following solutions would you recommend?

    Use S3 Glacier vault to store the sensitive archived data and then use a vault lock policy to enforce compliance controls

  • 26

    The DevOps team at a financial services company has provisioned a new GPU optimized EC2 instance X by choosing the default security group of the default VPC. The team can ping instance X from other instances in the VPC. The other instances were also created using the default security group. The next day, the team launches another GPU optimized instance Y by creating a new security group and attaching it to instance Y. All other configuration options for instance Y are chosen as default. However, the team is not able to ping instance Y from other instances in the VPC. As a Solutions Architect Professional, which of the following would you identify as the root cause of the issue?

    Instance X is in the default security group. The default rules for the default security group allow inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group. Instance Y is in a new security group. The default rules for a security group that you create allow no inbound traffic

  • 27

    The engineering team at a data analytics company is currently optimizing a production workload on AWS that is I/O intensive with frequent read/write/update operations and it's currently constrained on the IOPS. This workload consists of a single-tier with 15 r6g.8xlarge instances, each with 3 TB gp2 volume. The number of processing jobs has increased recently, resulting in an increase in latency as well. The team has concluded that they need to increase the IOPS by 3,000 for each of the instances for the application to perform efficiently. As an AWS Certified Solutions Architect Professional, which of the following solutions will you suggest to meet the performance goal in the MOST cost-efficient way?

    Modify the size of the gp2 volume for each instance from 3 TB to 4 TB
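    The reasoning behind this option: gp2 delivers a baseline of 3 IOPS per provisioned GiB (minimum 100, maximum 16,000 IOPS), so growing each volume by roughly 1 TiB adds about 3,000 IOPS. A quick check, treating the stated sizes as TiB:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline performance: 3 IOPS per GiB, floor 100, ceiling 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(3 * 1024))  # 3 TiB volume -> 9216 IOPS
print(gp2_baseline_iops(4 * 1024))  # 4 TiB volume -> 12288 IOPS (~+3,000)
```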

  • 28

    A social media company has a serverless application stack that consists of CloudFront, API Gateway and Lambda functions. The company has hired you as an AWS Certified Solutions Architect Professional to improve the current deployment process which creates a new version of the Lambda function and then runs an AWS CLI script for deployment. In case the new version errors out, then another CLI script is invoked to deploy the previous working version of the Lambda function. The company has mandated you to decrease the time to deploy new versions of the Lambda functions and also reduce the time to detect and rollback when errors are identified. Which of the following solutions would you suggest for the given use-case?

    Use Serverless Application Model (SAM) and leverage the built-in traffic-shifting feature of SAM to deploy the new Lambda version via CodeDeploy and use pre-traffic and post-traffic test functions to verify code. Rollback in case CloudWatch alarms are triggered

  • 29

    A social media company has its corporate headquarters in New York with an on-premises data center using an AWS Direct Connect connection to the AWS VPC. The branch offices in San Francisco and Miami use Site-to-Site VPN connections to connect to the AWS VPC. The company is looking for a solution to have the branch offices send and receive data with each other as well as with their corporate headquarters. As a Solutions Architect Professional, which of the following solutions would you recommend to meet these requirements?

    Set up VPN CloudHub between branch offices and corporate headquarters which will enable branch offices to send and receive data with each other as well as with their corporate headquarters

  • 30

    A silicon valley based unicorn startup recently launched a video-sharing social networking service called KitKot. The startup uses AWS Cloud to manage the IT infrastructure. Users upload video files up to 1 GB in size to a single EC2 instance based application server which stores them on a shared EFS file system. Another set of EC2 instances managed via an Auto Scaling group, periodically scans the EFS share directory for new files to process and generate new videos (for thumbnails and composite visual effects) according to the video processing instructions that are uploaded alongside the raw video files. Post-processing, the raw video files are deleted from the EFS file system and the results are stored in an S3 bucket. Links to the processed video files are sent via in-app notifications to the users. The startup has recently found that even as more instances are added to the Auto Scaling Group, many files are processed twice, therefore image processing speed is not improved. As an AWS Certified Solutions Architect Professional, what would you recommend to improve the reliability of the solution as well as eliminate the redundant processing of video files?

    Refactor the application to run from S3 instead of EFS and upload the video files directly to an S3 bucket. Configure an S3 trigger to invoke a Lambda function on each video file upload to S3 that puts a message in an SQS queue containing the link and the video processing instructions. Change the video processing application to read from the SQS queue and the S3 bucket. Configure the queue depth metric to scale the size of the Auto Scaling group for video processing instances. Leverage EventBridge events to trigger an SNS notification to the user containing the links to the processed files

  • 31

    A US-based retailer wants to ensure website availability as the company’s traditional infrastructure hasn’t been easy to scale. By moving its e-commerce platform to AWS, the company wants to scale with demand and ensure better availability. Last year, the company handled record Black Friday sale orders at a rate of nearly 10,000 orders/hour. The engineering team at the company now wants to finetune the disaster recovery strategy for its database tier. As an AWS Certified Solutions Architect Professional, you have been asked to implement a disaster recovery strategy for all the Amazon RDS databases that the company owns. Which of the following points do you need to consider for creating a robust recovery plan? (Select three)

    • Automated backups, manual snapshots and Read Replicas are supported across multiple Regions
    • Recovery time objective (RTO) represents the number of hours it takes to return the Amazon RDS database to a working state after a disaster
    • Database snapshots are user-initiated backups of your complete DB instance that serve as full backups. These snapshots can be copied and shared to different Regions and accounts

  • 32

    The engineering team at a social media company is building an ElasticSearch based index for all the existing files in S3. To build this index, it only needs to read the first 250 bytes of each object in S3, which contains some metadata about the content of the file itself. There are over 100,000 files in your S3 bucket, adding up to 50TB of data. As a Solutions Architect Professional, which of the following solutions can be used to build this index MOST efficiently? (Select two)

    • Create an application that will use the S3 Select ScanRange parameter to get the first 250 bytes and store that information in ElasticSearch
    • Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in ElasticSearch
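    A minimal boto3 sketch of the byte-range approach (the bucket name is a placeholder); the S3 Select ScanRange variant works similarly by limiting the range of each object that is scanned:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-media-bucket"  # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        # Byte-range fetch: download only the first 250 bytes of metadata
        # instead of the whole (potentially multi-GB) object.
        resp = s3.get_object(Bucket=BUCKET, Key=obj["Key"], Range="bytes=0-249")
        metadata = resp["Body"].read()
        # ... index `metadata` into the Elasticsearch/OpenSearch cluster here
```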

  • 33

    An e-commerce web application is hosted on Amazon EC2 instances that are fronted by Application Load Balancer (ALB) configured with an Auto Scaling group (ASG). Enhanced security is provided to the ALB by AWS WAF web ACLs. As per the company's security policy, AWS CloudTrail is activated and logs are configured to be stored on Amazon S3 and CloudWatch Logs. A discount sales offer was run on the application for a week. The support team has noticed that a few of the instances have rebooted taking down the log files and all temporary data with them. Initial analysis has confirmed that the incident took place during off-peak hours. Even though the incident did not cause any sales or revenue loss, the CTO has asked the security team to fix the security error that has allowed the incident to go unnoticed and eventually untraceable. What steps will you implement to permanently record all traffic coming into the application?

    Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking

  • 34

    A leading Internet-of-Things (IoT) solutions company needs to develop a platform that would analyze real-time clickstream events from embedded sensors in consumer electronic devices. The company has hired you as an AWS Certified Solutions Architect Professional to consult the engineering team and develop a solution using the AWS Cloud. The company wants to use clickstream data to perform data science, develop algorithms, and create visualizations and dashboards to support the business stakeholders. Each of these groups would work independently and would need real-time access to this clickstream data for their applications. Which of the following options would provide a highly available and fault-tolerant solution to capture the clickstream events from the source and also provide a simultaneous feed of the data stream to the downstream applications?

    Use Amazon Kinesis Data Streams to allow multiple applications to consume the same streaming data concurrently and independently

  • 35

    A company has built its serverless solution using Amazon API Gateway REST API and AWS Lambda across multiple AWS Regions configured into a single AWS account. During peak hours, customers began to receive 429 Too Many Requests errors from multiple API methods. While troubleshooting the issue, the team realized that AWS Lambda function(s) have not been invoked for these API methods. Also, the company wants to provide a separate quota for its premium customers to access the APIs. Which solution will you offer to meet this requirement?

    The error is the outcome of the company reaching its API Gateway account limit for calls per second. Configure API keys as client identifiers and use usage plans to define the per-client throttling limits for premium customers
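    A minimal boto3 sketch of a usage plan keyed to premium customers; the API ID, stage, limits, and key name are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# Usage plan with higher throttling/quota for premium customers.
plan = apigw.create_usage_plan(
    name="premium-tier",
    throttle={"rateLimit": 500.0, "burstLimit": 1000},
    quota={"limit": 1_000_000, "period": "MONTH"},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)

# Each premium customer gets an API key associated with the plan; the key
# is sent in the x-api-key header and identifies the client for throttling.
key = apigw.create_api_key(name="premium-customer-42", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```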

  • 36

    An IT company wants to move all its clients belonging to the regulated and security-sensitive industries such as financial services and healthcare to the AWS Cloud as it wants to leverage the out-of-box security-specific capabilities offered by AWS. The Security team at the company is developing a framework to validate the adoption of AWS best practices and industry-recognized compliance standards. The AWS Management Console is the preferred method for the in-house teams wanting to provision resources. You have been hired as an AWS Certified Solutions Architect Professional to spearhead this strategic initiative. Which of the following strategies would you adopt to address these business requirements for continuously assessing, auditing and monitoring the configurations of AWS resources? (Select two)

    • Leverage Config rules to audit changes to AWS resources and monitor the compliance of the configuration by running the evaluations for the rule at a frequency that you choose. Develop AWS Config custom rules to establish a test-driven development approach by triggering the evaluation when any resource that matches the rule's scope changes in configuration
    • Enable trails and set up CloudTrail events to review and monitor management activities of all AWS accounts by logging these activities into CloudWatch Logs using a KMS key. Ensure that CloudTrail is enabled for all accounts as well as all available AWS services

  • 37

    A company has an Elastic Load Balancer (ELB) that is configured with an Auto Scaling Group (ASG) having a minimum of 4, a maximum of 10, and the desired value of 4 instances. The ASG cooldown and the termination policies are configured to the default values. Monitoring reports indicate a general usage requirement of 4 instances, while any traffic spikes result in an additional 10 instances. Customers have been complaining of request timeouts and partially loaded pages. As an AWS Certified Solutions Architect Professional, which of the following options will you suggest to fix this issue?

    Configure connection draining on ELB

  • 38

    A solutions architect at a retail company has configured a private hosted zone using Route 53. The architect needs to configure health checks for record sets within the private hosted zone that are associated with EC2 instances. How can the architect build a solution to address the given use case?

    Configure a CloudWatch metric that checks the status of the EC2 StatusCheckFailed metric, add an alarm to the metric, and then configure a health check that monitors the state of the alarm
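    A minimal boto3 sketch of wiring the alarm to a Route 53 health check; the instance ID and alarm name are placeholders:

```python
import uuid
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
route53 = boto3.client("route53")

# 1) Alarm on the instance's StatusCheckFailed metric.
cloudwatch.put_metric_alarm(
    AlarmName="web-instance-status-check",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)

# 2) Route 53 health check that tracks the state of that alarm; it can then
# be attached to record sets in the private hosted zone.
route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "CLOUDWATCH_METRIC",
        "AlarmIdentifier": {"Region": "us-east-1", "Name": "web-instance-status-check"},
        "InsufficientDataHealthStatus": "LastKnownStatus",
    },
)
```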

  • 39

    A standard three-tier application is hosted on Amazon EC2 instances that are fronted by an Application Load Balancer. The application maintenance team has reported several small-scale malicious attacks on the application. The solutions architect wants to ramp up the security of the application. Which of the following would you recommend as part of the best practices to scan and mitigate the known vulnerabilities?

    Configure the application security groups to ensure that only the necessary ports are open. Use Amazon Inspector to periodically scan the EC2 instances for vulnerabilities

  • 40

    You have hired a Cloud consulting agency, Example Corp, to monitor your AWS account and help optimize costs. To track daily spending, Example Corp needs access to your AWS resources; therefore, you allow Example Corp to assume an IAM role in your account. However, Example Corp also tracks spending for other customers, and there could be a configuration issue in the Example Corp environment that allows another customer to compel Example Corp to attempt to take an action in your AWS account, even though that customer should only be able to take the action in their account. How will you mitigate the risk of such a cross-account access scenario?

    Create an IAM role in your AWS account with a trust policy that trusts the Partner (Example Corp). Take a unique external ID value from Example Corp and include this external ID condition in the role’s trust policy
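    A minimal sketch of such a trust policy created with boto3; Example Corp's account ID and the external ID value are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that only lets Example Corp assume the role when it passes
# the unique external ID agreed between you and Example Corp.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # Example Corp account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "ExampleCorp-unique-id-12345"}},
    }],
}

iam.create_role(
    RoleName="ExampleCorpBillingAudit",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```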

  • 41

    A company has hired you as an AWS Certified Solutions Architect Professional to develop a deployment plan for its flagship application deployed on EC2 instances across multiple Availability Zones in the us-east-1 Region. Your solution must meet these constraints: 1) A 300 GB static dataset must be available to the application before it can be started 2) The application layer must scale on-demand with the least amount of starting time possible 3) The development team must be able to change the code multiple times in a day 4) Any patches for critical operating systems (OS) must be applied within 24 hours of release Which of the following represents the best solution for this requirement?

    Leverage AWS Systems Manager to create and maintain a new AMI with the OS patches updated on an ongoing basis. Configure the Auto Scaling group to use the patched AMI and replace existing unpatched instances. Use AWS CodeDeploy to push the application code to the instances. Store and access the static dataset using Amazon EFS

  • 42

    A web application is running on a fleet of Amazon EC2 instances that are configured to operate in an Auto Scaling group (ASG). The instances are fronted by an Elastic Load Balancer (ELB). To enhance the system performance, a new Amazon Machine Image (AMI) was created and the ASG was configured to use the new AMI. However, after the production deployment, users complained of aberrations in the expected application functionality. A cross-check on the ELB has confirmed that all the instances are healthy and running as expected. As a solutions architect, which option would you suggest to rectify these issues and guarantee that later deployments are successful?

    Create a new ASG launch configuration that uses the newly created AMI. Double the size of the ASG and allow the new instances to become healthy and then reduce the ASG back to the original size. If the new instances do not work as expected, associate the ASG with the old launch configuration

  • 43

    A company has a web application running on an EC2 instance with a single elastic network interface in a subnet in a VPC. As part of the network re-architecture, the CTO at the company wants the web application to be moved to a different subnet in the same Availability Zone. Which of the following solutions would you suggest to meet these requirements?

    Launch a new instance in the new subnet via an AMI created from the old instance. Direct traffic to this new instance using Route 53 and then terminate the old instance

  • 44

    For deployments across AWS accounts, a company has decided to use AWS CodePipeline to deploy an AWS CloudFormation stack in an AWS account (account A) to a different AWS account (account B). As a solutions architect, what combination of steps will you take to configure this requirement? (Select three)

    • In account B, create a cross-account IAM role. In account A, add the AssumeRole permission to account A's CodePipeline service role to allow it to assume the cross-account role in account B
    • In account B, create a service role for the CloudFormation stack that includes the required permissions for the services deployed by the stack. In account A, update the CodePipeline configuration to include the resources associated with account B
    • In account A, create a customer-managed AWS KMS key that grants usage permissions to account A's CodePipeline service role and account B. Also, create an Amazon Simple Storage Service (Amazon S3) bucket with a bucket policy that grants account B access to the bucket

  • 45

    A web application is hosted on Amazon EC2 instances that are fronted by Application Load Balancer (ALB) configured with an Auto Scaling group (ASG). Enhanced security is provided to the ALB by AWS WAF web ACLs. As per the company's security policy, AWS CloudTrail is activated and logs are configured to be stored on Amazon S3 and CloudWatch Logs. A holiday sales offer was run on the application for a week. The development team has noticed that a few of the instances have rebooted taking down the log files and all temporary data with them. Initial analysis has confirmed that the incident took place during off-peak hours. Even though the incident did not cause any sales or revenue loss, the CTO has asked the development team to fix the security error that has allowed the incident to go unnoticed and eventually untraceable. Which of the following steps will you implement to permanently record all traffic coming into the application?

    Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking

  • 46

    The development team at a gaming company has been tasked to reduce the in-game latency and jitters. The team wants traffic from its end users to be routed to the AWS Region that is closest to the end users geographically. When maintenance occurs in an AWS Region, traffic must be routed to the next closest AWS Region with no changes to the IP addresses being used as connections by the end-users. As an AWS Certified Solutions Architect Professional, which solution will you suggest to meet these requirements?

    Set up AWS Global Accelerator in front of all the AWS Regions

  • 47

    A supply-chain manufacturing company manages its AWS resources in an Elastic Beanstalk environment. For implementing a new security requirement, the company needs to assign a single static IP address to a load-balanced Elastic Beanstalk environment. Subsequently, this IP address will be used to uniquely identify traffic coming from the Elastic Beanstalk environment. As a solutions architect, which of the following would you recommend as the BEST solution that requires minimal maintenance?

    Use a Network Address Translation (NAT) gateway to map multiple IP addresses into a single publicly exposed IP address

  • 48

    A weather monitoring agency stores and manages the global weather data for the last 50 years. The data has a velocity of 1GB per minute. You would like to store the data with only the most relevant attributes to build a predictive model for weather patterns. Which of the following solutions would you use to build the most cost-effective solution with the LEAST amount of infrastructure maintenance?

    Capture the data in Kinesis Data Firehose and use an intermediary Lambda function to filter and transform the incoming stream before the output is dumped on S3
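    For illustration, a skeleton of a Firehose transformation Lambda that keeps only the relevant attributes before delivery to S3; the attribute names are assumptions:

```python
import base64
import json

def lambda_handler(event, context):
    """Kinesis Data Firehose transformation: keep only the attributes we need."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Hypothetical attributes kept for the weather prediction model
        slim = {
            "station_id": payload.get("station_id"),
            "timestamp": payload.get("timestamp"),
            "temperature": payload.get("temperature"),
            "pressure": payload.get("pressure"),
        }
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # use "Dropped" to filter a record out entirely
            "data": base64.b64encode((json.dumps(slim) + "\n").encode()).decode(),
        })
    return {"records": output}
```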

  • 49

    An e-commerce company runs its flagship website on its on-premises Linux servers. Recently, the company suffered outages after announcing huge discounts on its website. The web tier of the application is fronted by an Elastic Load Balancer while the database tier is built on an RDS MySQL database. The company is planning to run heavy discounts for the upcoming holiday sales season. The company is looking for a solution to avoid any similar outages as well as quickly ramp up the ability to handle huge traffic spikes. As an AWS Certified Solutions Architect Professional, which of the following would you suggest as the most optimal solution that can enhance the application's capabilities to handle the sudden spikes in user traffic without significant development effort?

    Create a CloudFront distribution and configure CloudFront to cache objects from a custom origin. This will offload some traffic from the on-premises servers. Customize CloudFront cache behavior by setting Time To Live (TTL) to suit your business requirement

  • 50

    An e-commerce business has recently moved to AWS serverless infrastructure with the help of Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application performs as expected on a normal day. But, during peak periods, when thousands of concurrent requests are submitted, the user requests are initially failing before finally succeeding. The development team examined the logs for each component with a special focus on the Amazon CloudWatch Logs for Lambda. None of the components, services, or applications have logged any errors. What could be the most probable reason for this failure?

    The throttle limit set on API Gateway is very low. During peak hours, the additional requests are not making their way to Lambda

  • 51

    A company uses Amazon S3 storage service for storing its business data. Multiple S3 event notifications have been configured to be delivered to Amazon Simple Queue Service (Amazon SQS) queue when objects pass through the storage lifecycle. The team has noticed that notifications are not being delivered to the queue. Amazon SQS queue has server-side encryption (SSE) turned on. What should be done to receive the S3 event notifications to an Amazon SQS queue that uses SSE?

    Create a customer-managed AWS KMS key and configure the key policy to grant permissions to the Amazon S3 service principal
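    A minimal boto3 sketch of creating such a key; the account ID is a placeholder, and the S3 statement follows the documented pattern of granting kms:GenerateDataKey and kms:Decrypt to the s3.amazonaws.com service principal:

```python
import json
import boto3

kms = boto3.client("kms")

# Key policy: the account root keeps administrative control, and the S3
# service principal can use the key to encrypt event notification messages
# destined for the SSE-enabled SQS queue.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccountAdmin",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowS3ToUseKey",
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
            "Resource": "*",
        },
    ],
}

kms.create_key(Policy=json.dumps(key_policy), Description="CMK for SSE-SQS queue")
```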

  • 52

    A company manages a healthcare diagnostics application that writes thousands of lab images to a mounted NFS file system each night from 10 PM - 5 AM. The company wants to migrate this application from its on-premises data center to AWS Cloud over a private network. The company has already established an AWS Direct Connect connection to AWS to facilitate this migration. This application is slated to be moved to Amazon EC2 instances with the Elastic File System (Amazon EFS) file system as the storage service. Which of the following represents the MOST optimal way of replicating all images to the cloud before the application is fully migrated to the cloud?

    Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every night

  • 53

    A company manages a stateful web application that persists data on a MySQL database. The application stack is hosted in the company's on-premises data center using a single server. The company is looking at increasing its market presence through promotions and campaigns. While the user experience has been good so far, the current application architecture will not support the growth that the company envisages. The company has hired you as an AWS Certified Solutions Architect Professional to migrate the current architecture to AWS which should continue to support SQL-based queries. The proposed solution should offer maximum reliability with better performance. What would you recommend?

    Set up database migration to Amazon Aurora MySQL. Deploy the application in an Auto Scaling group for Amazon EC2 instances that are fronted by an Application Load Balancer. Store sessions in an Amazon ElastiCache for Redis with replication group

  • 54

    A bioinformatics company leverages multiple open source tools to manage data analysis workflows running on its on-premises servers to process biological data which is generated and stored on a Network Attached Storage (NAS). The existing workflow receives around 100 GB of input biological data for each job run and individual jobs can take several hours to process the data. The CTO at the company wants to re-architect its proprietary analytics workflow on AWS to meet the workload demands and reduce the turnaround time from months to days. The company has provisioned a high-speed AWS Direct Connect connection. The final result needs to be stored in Amazon S3. The company is expecting approximately 20 job requests each day. Which of the following options would you recommend for the given use case?

    Leverage AWS DataSync to transfer the biological data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS Step Functions workflow for orchestrating an AWS Batch job that processes the biological data

  • 55

    A team has recently created a secret using AWS Secrets Manager to access their private Amazon Relational Database Service (Amazon RDS) instance. When the team tried to rotate the AWS Secrets Manager secret in an Amazon Virtual Private Cloud (Amazon VPC), the operation failed. On analyzing the Amazon CloudWatch Logs, the team realized that the AWS Lambda task timed out. Which of the following solutions needs to be implemented for rotating the secret successfully?

    Configure an Amazon VPC interface endpoint for the Secrets Manager service to enable access for your Secrets Manager Lambda rotation function and private Amazon Relational Database Service (Amazon RDS) instance

  • 56

    A leading medical imaging equipment and diagnostic imaging solutions provider uses AWS Cloud to run its healthcare data flows through more than 500,000 medical imaging devices globally. The solutions provider stores close to one petabyte of medical imaging data on Amazon S3 to provide the durability and reliability needed for their critical data. A research assistant working with the radiology department is trying to upload a high-resolution image into S3 via the public internet. The image size is approximately 5GB. The research assistant is using S3 Transfer Acceleration (S3TA) for faster image upload. It turns out that S3TA did not result in an accelerated transfer. Given this scenario, which of the following is correct regarding the charges for this image transfer?

    The research assistant does not need to pay any transfer charges for the image upload

  • 57

    An e-commerce company runs a data archival workflow once a month for its on-premises data center which is connected to the AWS Cloud over a minimally used 10-Gbps Direct Connect connection using a private virtual interface to its virtual private cloud (VPC). The company internet connection is 200 Mbps, and the usual archive size is around 140 TB that is created on the first Friday of a month. The archive must be transferred and available in Amazon S3 by the next Monday morning. As a Solutions Architect Professional, which of the following options would you recommend as the LEAST expensive way to address the given use-case?

    Configure a public virtual interface on the 10-Gbps Direct Connect connection and then copy the data to S3 over the connection

  • 58

    A company wants to migrate its on-premises Oracle database to Aurora MySQL. The company has hired an AWS Certified Solutions Architect Professional to carry out the migration with minimal downtime using AWS DMS. The company has mandated that the migration must have minimal impact on the performance of the source database and the Solutions Architect must validate that the data was migrated accurately from the source to the target before the cutover. Which of the following solutions will MOST effectively address this use-case?

    Configure DMS data validation on the migration task so it can compare the source and target data for the DMS task and report any mismatches

  • 59

    The engineering team at a company is evaluating the Multi-AZ and Read Replica capabilities of RDS MySQL vs Aurora MySQL before they implement the solution in their production environment. The company has hired you as an AWS Certified Solutions Architect Professional to provide a detailed report on this technical requirement. Which of the following would you identify as correct regarding the given use-case? (Select three)

    • The primary and standby DB instances are upgraded at the same time for RDS MySQL Multi-AZ. All instances are upgraded at the same time for Aurora MySQL
    • Multi-AZ deployments for both RDS MySQL and Aurora MySQL follow synchronous replication
    • Read Replicas can be manually promoted to a standalone database instance for RDS MySQL whereas Read Replicas for Aurora MySQL can be promoted to the primary instance

  • 60

    A multi-national digital media company wants to exit out of the business of owning and maintaining its own IT infrastructure so it can redeploy resources toward innovation in Artificial Intelligence and related areas to create a better customer experience. As part of this digital transformation, the media company wants to archive about 9 PB of data in its on-premises data center to durable long term storage. As a Solutions Architect Professional, what is your recommendation to migrate and store this data in the quickest and MOST cost-optimal way?

    Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier

  • 61

    A blog hosting company has an existing SaaS product architected as an on-premises three-tier web application. The blog content is posted and updated several times a day by multiple authors, so the Linux web servers serve content from a centralized file share on a NAS server. The CTO at the company has done an extensive technical review and highlighted to the company management that the existing infrastructure is not optimized. The company would like to migrate to AWS so that the resources can be dynamically scaled in response to load. The on-premises infrastructure and AWS Cloud are connected using Direct Connect. As a Solutions Architect Professional, which of the following solutions would you recommend to the company so that it can migrate the web infrastructure to AWS without delaying the content update process?

    Attach an EFS file system to the on-premises servers to act as the NAS server. Mount the same EFS file system to the AWS based web servers running on EC2 instances to serve the content

  • 62

    A solo entrepreneur is working on a new digital media startup and wants to have a hands-on understanding of the comparative pricing for various storage types available on AWS Cloud. The entrepreneur has created a test file of size 5 GB with some random data. Next, he uploads this test file into the Amazon S3 Standard storage class, provisions an EBS volume (General Purpose SSD (gp2)) with 50 GB of provisioned storage and copies the test file into the EBS volume, and lastly copies the test file into an EFS Standard Storage filesystem. At the end of the month, he analyses the bill for costs incurred on the respective storage types for the test file. Which of the following represents the correct order of the storage charges incurred for the test file on these three storage types?

    Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS
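    The ordering follows from how each service bills: S3 and EFS charge per GB actually stored, while EBS charges for every provisioned GB. A back-of-the-envelope check with illustrative us-east-1 prices (verify against the current pricing pages):

```python
# Rough monthly cost comparison (illustrative prices only).
FILE_GB = 5
EBS_PROVISIONED_GB = 50

s3_standard = FILE_GB * 0.023          # ~$0.023 per GB-month stored
efs_standard = FILE_GB * 0.30          # ~$0.30 per GB-month stored
ebs_gp2 = EBS_PROVISIONED_GB * 0.10    # ~$0.10 per GB-month *provisioned*

print(f"S3  : ${s3_standard:.2f}")     # ~$0.12
print(f"EFS : ${efs_standard:.2f}")    # ~$1.50
print(f"EBS : ${ebs_gp2:.2f}")         # ~$5.00 (billed for all 50 provisioned GB)
```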

  • 63

    A social media company is transitioning its IT infrastructure from its on-premises data center to the AWS Cloud. The company wants to move its data artifacts, 200 TB in total size, to Amazon S3 on the AWS Cloud in the shortest possible time. The company has hired you as an AWS Certified Solutions Architect Professional to provide consultancy for this data migration. In terms of the networking infrastructure, the company has a 500 Mbps Direct Connect connection to the AWS Cloud as well as an IPSec based AWS VPN connection using the public internet that supports a bandwidth of 1 Gbps. Which of the following solutions would you recommend to address the given use-case?

    Order three AWS Snowball Edge appliances, split and transfer the data to these three appliances and ship them to AWS which will then copy the data from the Snowball Edge appliances to S3

  • 64

    A web development studio runs hundreds of Proof-of-Concept (PoC) and demo applications on virtual machines running on an on-premises server. Many of the applications are simple PHP, JavaScript or Python web applications which are no longer actively developed and serve little traffic. As a Solutions Architect Professional, which of the following approaches would you suggest to migrate these applications to AWS with the lowest infrastructure cost and least development effort?

    Dockerize each application and then deploy to an ECS cluster running behind an Application Load Balancer

  • 65

    The DevOps team at a leading social media company uses Chef to automate the configurations of servers in the on-premises data center. The CTO at the company now wants to migrate the IT infrastructure to AWS Cloud with minimal changes to the server configuration workflows and at the same time account for less operational overhead post-migration to AWS. The company has hired you as an AWS Certified Solutions Architect Professional to recommend a solution for this migration. Which of the following solutions would you recommend to address the given use-case?

    Replatform the IT infrastructure to AWS Cloud by leveraging AWS OpsWorks as a configuration management service to automate the configurations of servers on AWS

  • 66

    A company wants to use SharePoint to deploy a content and collaboration platform with document and records management functionality. The company wants to establish an AWS Direct Connect link to connect the AWS Cloud with the internal corporate network using AWS Storage Gateway. Using AWS Direct Connect would enable the company to deliver on its performance benchmark requirements including a three second or less response time for sending small documents across the internal network. To facilitate this goal, the company wants to be able to resolve DNS queries for any resources in the on-premises network from the AWS VPC and also resolve any DNS queries for resources in the AWS VPC from the on-premises network. As a Solutions Architect Professional, which of the following solutions would you recommend for this use-case? (Select two)

    Create an outbound endpoint on Route 53 Resolver and then Route 53 Resolver can conditionally forward queries to resolvers on the on-premises network via this endpoint, Create an inbound endpoint on Route 53 Resolver and then DNS resolvers on the on-premises network can forward DNS queries to Route 53 Resolver via this endpoint
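    A minimal boto3 sketch of both Resolver endpoints plus a conditional forwarding rule. The subnet IDs, security group, VPC ID, corporate domain, and on-premises DNS server IP are placeholders.

```python
import boto3

r53r = boto3.client("route53resolver")

# Inbound endpoint: on-premises resolvers forward queries for VPC resources here.
r53r.create_resolver_endpoint(
    CreatorRequestId="inbound-2024-01",
    Name="from-on-prem",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-aaaa1111"}, {"SubnetId": "subnet-bbbb2222"}],
)

# Outbound endpoint plus a forwarding rule: the VPC conditionally forwards
# queries for the corporate domain to the on-premises DNS servers.
outbound = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-2024-01",
    Name="to-on-prem",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-aaaa1111"}, {"SubnetId": "subnet-bbbb2222"}],
)

rule = r53r.create_resolver_rule(
    CreatorRequestId="fwd-corp-2024-01",
    Name="forward-corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],
    ResolverEndpointId=outbound["ResolverEndpoint"]["Id"],
)

# Associate the rule with the VPC so its queries are forwarded on-premises.
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"], VPCId="vpc-0123456789abcdef0"
)
```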

  • 67

    A company runs its two-tier web application from an on-premises data center. The web servers connect to a PostgreSQL database running on a different server. With the consistent increase in users, both the web servers and the database are underperforming leading to a bad user experience. The company has decided to migrate to AWS Cloud and has chosen Amazon Aurora PostgreSQL as its database solution. The company needs a solution that can scale the web servers and the database layer based on user traffic. Which of the following options will you combine to improve the application scalability and improve the user experience? (Select two)

    Enable Aurora Auto Scaling for Aurora Replicas. Deploy the application on Amazon EC2 instances configured behind an Auto Scaling Group, Configure EC2 instances behind an Application Load Balancer with Round Robin routing algorithm and sticky sessions enabled

  • 68

    A data analytics company stores event data in its on-premises PostgreSQL database. With the increase in the number of clients, the company is spending a lot of resources managing and maintaining the infrastructure while performance seems to be dwindling. The company has established connectivity between its on-premises systems and AWS Cloud already and wants a hybrid solution that can automatically buffer and transform event data in a scalable way and create visualizations to track and monitor events in real time. The transformed event data would be in semi-structured JSON format and have dynamic schemas. Which combination of services/technologies will you suggest to implement the requirements?

    Set up Amazon Kinesis Data Firehose to buffer events and an AWS Lambda function to process and transform the events. Set up Amazon OpenSearch to receive the transformed events. Use the Kibana endpoint that is deployed with OpenSearch to create near-real-time visualizations and dashboards
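    A minimal sketch of the Kinesis Data Firehose transformation Lambda in this pipeline. Firehose hands the function base64-encoded records and expects each recordId echoed back with a result status; the event field names being normalized here are hypothetical.

```python
import base64
import json

def lambda_handler(event, context):
    """Kinesis Data Firehose transformation Lambda: normalize raw events to JSON."""
    output = []
    for record in event["records"]:
        raw = base64.b64decode(record["data"])
        try:
            evt = json.loads(raw)
            # Hypothetical normalization: keep only the fields the dashboards need.
            transformed = {
                "device_id": evt.get("device_id"),
                "event_type": evt.get("type"),
                "timestamp": evt.get("ts"),
            }
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(json.dumps(transformed).encode()).decode(),
            })
        except json.JSONDecodeError:
            output.append({
                "recordId": record["recordId"],
                "result": "ProcessingFailed",
                "data": record["data"],
            })
    return {"records": output}
```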

  • 69

    A global SaaS company has recently migrated its technology infrastructure from its on-premises data center to AWS Cloud. The engineering team has provisioned an RDS MySQL DB cluster for the company's flagship application. An analytics workload also runs on the same database which publishes near real-time reports for the management of the company. When the analytics workload runs, it slows down the SaaS application as well, resulting in bad user experience. As a Solutions Architect Professional, which of the following would you recommend as the MOST cost-optimal solution to fix this issue?

    Create a Read Replica in the same Region as the Master database and point the analytics workload there

  • 70

    The CTO at a multi-national retail company is pursuing an IT re-engineering effort to set up a hybrid network architecture that would facilitate the company's envisaged long-term data center migration from multiple on-premises data centers to the AWS Cloud. The current on-premises data centers are in different locations and are inter-linked via a private fiber. Due to the unique constraints of the existing legacy applications, using NAT is not an option. During the migration period, many critical applications will need access to other applications deployed in both the on-premises data centers and AWS Cloud. As a Solutions Architect Professional, which of the following options would you suggest to set up a hybrid network architecture that is highly available and supports high bandwidth for a multi-Region deployment post-migration?

    Set up a Direct Connect to each on-premises data center from different service providers and configure routing to failover to the other on-premises data center's Direct Connect in case one connection fails. Make sure that no VPC CIDR blocks overlap one another or the on-premises network

  • 71

    An e-commerce company has hired an AWS Certified Solutions Architect Professional to transform a standard three-tier web application architecture in AWS. Currently, the web and application tiers run on EC2 instances and the database tier runs on RDS MySQL. The company wants to redesign the web and application tiers to use API Gateway with Lambda Functions with the final goal of deploying the new application within 6 months. As an immediate short-term task, the Engineering Manager has mandated the Solutions Architect to reduce costs for the existing stack. Which of the following options should the Solutions Architect recommend as the MOST cost-effective and reliable solution?

    Provision On-Demand Instances for the web and application tiers and Reserved Instances for the database tier

  • 72

    An e-commerce company is planning to migrate its IT infrastructure from the on-premises data center to AWS Cloud to ramp up its capabilities well in time for the upcoming Holiday Sale season. The company’s CTO has hired you as an AWS Certified Solutions Architect Professional to design a distributed, highly available and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in a DynamoDB table. The application has seen sporadic traffic spikes in the past and the CTO wants the application to be able to scale during marketing campaigns to process the orders with minimal disruption. Which of the following options would you recommend as the MOST reliable solution to address these requirements?

    Ingest the orders in an SQS queue and trigger a Lambda function to process them
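    A minimal sketch of wiring the queue to the processing function, assuming an existing SQS queue and an existing Lambda function named "process-orders" (both placeholders). Lambda scales its pollers automatically as the queue depth grows, which absorbs the sporadic traffic spikes.

```python
import boto3

lambda_client = boto3.client("lambda")

# Poll the orders queue with the existing "process-orders" Lambda function.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders-queue",
    FunctionName="process-orders",
    BatchSize=10,
)
```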

  • 73

    A multi-national retail company has built a hub-and-spoke network with AWS Transit Gateway. VPCs have been provisioned into multiple AWS accounts to facilitate network isolation and to enable delegated network administration. The organization is looking at a cost-effective, quick and secure way of maintaining this distributed architecture so that it provides access to services required by workloads in each of the VPCs. As a Solutions Architect Professional, which of the following options would you recommend for the given use-case?

    Use centralized VPC endpoints hosted in a shared services VPC to provide connectivity to multiple VPCs

  • 74

    A big data analytics company is leveraging AWS Cloud to process Internet of Things (IoT) sensor data from the field devices of an agricultural sciences company. The analytics company stores the IoT sensor data in Amazon DynamoDB tables. To detect anomalous behaviors and respond quickly, all changes to the items stored in the DynamoDB tables must be logged in near real-time. As an AWS Certified Solutions Architect Professional, which of the following solutions would you recommend to meet the requirements of the given use-case so that it requires minimal custom development and infrastructure maintenance?

    Set up DynamoDB Streams to capture and send updates to a Lambda function that outputs records to Kinesis Data Analytics (KDA) via Kinesis Data Streams (KDS). Detect and analyze anomalies in KDA and send notifications via SNS
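    A minimal sketch of the Lambda function in the middle of this chain: it is triggered by DynamoDB Streams and forwards each item-level change to a Kinesis data stream for downstream analysis. The stream name and record layout are placeholders.

```python
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "iot-item-changes"   # placeholder Kinesis Data Stream

def lambda_handler(event, context):
    """Triggered by DynamoDB Streams; forwards each item-level change to KDS."""
    for record in event["Records"]:
        change = {
            "event_name": record["eventName"],          # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "new_image": record["dynamodb"].get("NewImage"),
        }
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(change).encode(),
            PartitionKey=json.dumps(record["dynamodb"].get("Keys", {})),
        )
    return {"forwarded": len(event["Records"])}
```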

  • 75

    A web application is hosted on a fleet of Amazon EC2 instances running behind an Application Load Balancer (ALB). A custom functionality requires a static IP address for the ALB. As a solutions architect, how will you implement this requirement while keeping the costs to a minimum?

    Register the Application Load Balancer behind a Network Load Balancer that will provide the necessary static IP address to the ALB

  • 76

    A web development company uses FTP servers for its growing list of 200-odd clients to facilitate remote data sharing of media assets. To reduce management costs and time, the company has decided to move to AWS Cloud. The company is looking for an AWS solution that can offer increased scalability with reduced costs. Also, the company's policy mandates complete privacy and isolation of data for each client. Which solution will you recommend for these requirements?

    Create a single Amazon S3 bucket. Create an IAM user for each client. Group these users under an IAM policy that permits access to sub-folders within the bucket via the use of the 'username' Policy variable. Train the clients to use an S3 client instead of an FTP client
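    A minimal sketch of the group policy using the ${aws:username} policy variable, so each client's IAM user can only list and touch objects under its own prefix. The bucket name and group name are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Let each user list only their own prefix.
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::client-media-assets",
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {   # Full object access, but only under the user's own prefix.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::client-media-assets/${aws:username}/*",
        },
    ],
}

iam.put_group_policy(
    GroupName="clients",
    PolicyName="per-client-s3-prefix",
    PolicyDocument=json.dumps(policy),
)
```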

  • 77

    An e-commerce company is migrating from its on-premises data center to AWS Cloud in a phased manner. As part of the test deployments, the company chose Amazon FSx for Windows File Server with Single-AZ 2 deployment as one of the solutions. After viability testing, it became apparent that the company will need a highly available and fault-tolerant shared Windows file data system to cater to its data storage requirements. As a solutions architect, what changes will you suggest in the current configuration to make it highly available while keeping the downtime low?

    Set up a new Amazon FSx file system with a Multi-AZ deployment type. Leverage AWS DataSync to transfer data from the old file system to the new one. Point the application to the new Multi-AZ file system

  • 78

    A company has decided to move its existing data warehouse solution to Amazon Redshift. Being apprehensive about moving its critical data directly, the company has decided to test run and migrate a part of its data warehouse to Amazon Redshift using an AWS Database Migration Service (DMS) task. As a solutions architect, which of the following would you suggest as the key points of consideration while running the DMS task? (Select two)

    Add subnet CIDR range, or IP address of the replication instance in the inbound rules of the Amazon Redshift cluster security group, Your Amazon Redshift cluster must be in the same account and same AWS Region as the replication instance

  • 79

    A company runs a mobile app-based health tracking solution. The mobile app sends 2 KB of data to the company’s backend servers every 2 minutes. The user data is stored in a DynamoDB table. The development team runs a nightly procedure to scan the table for extracting and aggregating the data from the previous day. These insights are then stored on Amazon S3 in JSON files for each user (daily average file size per user is approximately 1 MB). Approximately 50,000 end-users in the US are then alerted via SNS push notifications the next morning, as the new insights are available to be parsed and visualized in the mobile app. You have been hired as an AWS Certified Solutions Architect Professional to recommend a cost-efficient solution to optimize the backend design. Which of the following options would you suggest? (Select two)

    Set up a new DynamoDB table each day and drop the table for the previous day after its data is written on S3, Set up an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput

  • 80

    An Amazon Redshift cluster is used to store sensitive information of a business-critical application. The compliance guidelines mandate tracking audit logs of the Redshift cluster. The business needs to store the audit logs securely by encrypting the logs at rest. The logs are to be stored for a year at least and audits need to be conducted on the audit logs every month. Which of the following is a cost-effective solution that fulfills the requirement of storing the logs securely while having access to the logs for monthly audits?

    Enable default encryption on the Amazon S3 bucket that uses Amazon S3-managed keys (SSE-S3) encryption (AES-256) for audit logging. Use Amazon Redshift Spectrum to query the data for monthly audits
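    A minimal sketch of this setup: apply SSE-S3 default encryption to the log bucket and point the cluster's audit logging at it. The bucket name and cluster identifier are placeholders, and the bucket would still need the bucket policy that permits Redshift log delivery.

```python
import boto3

s3 = boto3.client("s3")
redshift = boto3.client("redshift")

BUCKET = "redshift-audit-logs-example"   # placeholder log bucket

# Encrypt all audit log objects at rest with Amazon S3-managed keys (SSE-S3 / AES-256).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Deliver the cluster's connection, user, and activity audit logs to the bucket.
redshift.enable_logging(
    ClusterIdentifier="prod-redshift-cluster",
    BucketName=BUCKET,
    S3KeyPrefix="audit/",
)
```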

  • 81

    A media company uses Amazon S3 under the hood to power its offerings which allow the customers to upload and view the media files immediately. Currently, all the customer files are uploaded directly under a single S3 bucket. The systems administration team has started seeing scalability issues where customer file uploads are failing during the peak access hours with more than 5000 requests per second. Which of the following represents the MOST resource-efficient and cost-optimal way of resolving this issue?

    Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations
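    This works because S3 request-rate limits apply per prefix: each prefix independently supports roughly 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second, so spreading customers across prefixes multiplies the aggregate rate the bucket can absorb. A minimal sketch of such a key layout, with a placeholder bucket name and a hypothetical key scheme:

```python
import datetime
import boto3

s3 = boto3.client("s3")
BUCKET = "media-uploads-example"   # placeholder bucket

def upload_customer_file(customer_id: str, filename: str, body: bytes) -> str:
    """Write each customer's daily files under that customer's own prefix."""
    today = datetime.date.today().isoformat()
    key = f"{customer_id}/{today}/{filename}"   # e.g. cust-42/2024-05-01/photo.jpg
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)
    return key
```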

  • 82

    An on-premises data center, set up a decade ago, hosts all the applications of a business. The business now wants to move to AWS Cloud. The documentation of these systems is outdated and complete knowledge of all existing workloads is absent. The data center hosts a mix of Windows and Linux virtual machines. As a solutions architect, you need to provide a plan to migrate all the applications to the cloud. How will you gather the necessary data of the existing machines?

    Install the Discovery Agent of the AWS Application Discovery Service on each of the VMs to collect the configuration and utilization data

  • 83

    An e-commerce company traditionally hosted its application APIs on Amazon EC2 instances. Recently, the company has started migrating to a serverless architecture that is built using Amazon API Gateway, AWS Lambda functions, and Amazon DynamoDB. The Lambda functions and EC2 instances share the same Virtual Private Cloud (VPC). The Lambda functions hold the logic to fetch data from a third-party service provider. After moving a portion of functionality to the serverless model, users have started complaining of API Gateway 5XX errors. The third-party service provider is unable to see any requests from the serverless architecture. Upon inspection, the development team can see that the Lambda functions have created some entries in the generated logs. Which solution would you recommend to troubleshoot this issue?

    A NAT Gateway has to be configured to give internet access to the VPC-connected Lambda function

  • 84

    A research agency processes multiple compressed (gzip) CSV files containing data about contagious diseases for the past month aggregated from healthcare facilities. The files are about ~200 GB and are stored in Amazon S3 Glacier Flexible Storage Class. As per the reporting guidelines, the agency needs to query a portion of this data to prepare a report every month. Which of the following is the most cost-effective way to query this data?

    Ingest the data into Amazon S3 from S3 Glacier and query the required data with Amazon S3 Select
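    A minimal sketch of one way to read this answer, with placeholder bucket, key, and column names: restore the archived object, copy it back into the S3 Standard storage class once the restore completes, and then pull only the needed rows with S3 Select against the gzip-compressed CSV.

```python
import boto3

s3 = boto3.client("s3")
BUCKET, ARCHIVED_KEY = "disease-reports-archive", "2024/05/cases.csv.gz"  # placeholders
RESTORED_KEY = "restored/2024/05/cases.csv.gz"

# 1) Restore the archived object (a temporary copy becomes available when the job completes).
s3.restore_object(
    Bucket=BUCKET,
    Key=ARCHIVED_KEY,
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)

# 2) After the restore completes, ingest it back into the S3 Standard storage class.
s3.copy_object(
    Bucket=BUCKET,
    Key=RESTORED_KEY,
    CopySource={"Bucket": BUCKET, "Key": ARCHIVED_KEY},
    StorageClass="STANDARD",
)

# 3) Query only the rows needed for the monthly report with S3 Select.
resp = s3.select_object_content(
    Bucket=BUCKET,
    Key=RESTORED_KEY,
    ExpressionType="SQL",
    Expression="SELECT s.region, s.cases FROM S3Object s WHERE s.month = '2024-05'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode(), end="")
```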

  • 85

    A solutions architect at a company is managing the migration of the company's IT infrastructure from its on-premises data center to AWS Cloud. The architect needs to automate VPC creation to enforce the company's network and security standards which mandate that each application is isolated in its own VPC. The solution must also ensure that the CIDR range used in each VPC is unique. Which of the following options would you recommend to address these requirements?

    Deploy the VPC infrastructure using AWS CloudFormation and leverage a custom resource to request a unique CIDR range from an external IP address management (IPAM) service

  • 86

    The research department at a healthcare company stores its entire data on Amazon S3. The research department is concerned about the increased costs of storing large amounts of data, most of which is in the form of images. As of now, all data is stored using the S3 Standard storage class. The research department has the following data archival requirements: 1. Need optimum storage for medical reports that are accessed infrequently (about twice a year). But, when accessed, the data has to be retrieved in real-time. 2. Need optimum storage for medical images that are accessed very rarely but have to be stored durably for up to 10 years. These images can be retrieved in a flexible time frame. What will you recommend as the most cost-effective storage option that addresses the given requirements?

    Amazon S3 Glacier Instant Retrieval is the best fit for data accessed twice a year. Amazon S3 Glacier Deep Archive is cost-effective for data that is stored for long-term retention

  • 87

    A healthcare company is migrating sensitive data from its on-premises data center to AWS Cloud via an existing AWS Direct Connect connection. The company must ensure confidentiality and integrity of the data in transit to the AWS VPC. Which of the following options should be combined to set up the most cost-effective connection between your on-premises data center and AWS? (Select three)

    Create an IPsec tunnel between your customer gateway appliance and the virtual private gateway, Create a VPC with a virtual private gateway, Set up a public virtual interface on the Direct Connect connection

  • 88

    A social media company is migrating its legacy web application to the AWS Cloud. Since the application is complex and may take several months to refactor, the CTO at the company tasked the development team to build an ad-hoc solution of using CloudFront with a custom origin pointing to the SSL endpoint URL for the legacy web application until the replacement is ready and deployed. The ad-hoc solution has worked for several weeks, however, all browser connections recently began showing an HTTP 502 Bad Gateway error with the header "X-Cache: Error from CloudFront". Network monitoring services show that the HTTPS port 443 on the legacy web application is open and responding to requests. As an AWS Certified Solutions Architect Professional, which of the following options will you attribute as the likely cause of the error, and what is your recommendation to resolve this issue?

    The SSL certificate on the legacy web application server has expired. Reissue the SSL certificate on the web server that is signed by a globally recognized certificate authority (CA). Install the full certificate chain onto the legacy web application server

  • 89

    A solutions architect at a company is looking at connecting the company's Amazon EC2 instances to the confidential data stored on Amazon S3 storage. The architect has a requirement to use private IP addresses from the company's VPC to access Amazon S3 while also having the ability to access S3 buckets from the company's on-premises systems. In a few months, the S3 buckets will also be accessed from a VPC in another AWS Region. What is the BEST way to build a solution to meet this requirement?

    Set up Interface endpoints for Amazon S3

  • 90

    A legacy web application runs 24/7 and it is currently hosted on an on-premises server with an outdated version of the Operating System (OS). The OS support will end soon and the team wants to expedite migration to an Amazon EC2 instance with an updated version of the OS. The application also references 90 TB of static data in the form of images that need to be moved to AWS. How should this be accomplished most cost-effectively?

    Replatform the server to Amazon EC2 while choosing an AMI of your choice to cater to the OS requirements. Use AWS Snowball to transfer the image data to Amazon S3

  • 91

    A retail company is deploying a critical application on multiple EC2 instances in a VPC. Per the company policy, any failed client connections to the EC2 instances must be logged. Which of the following options would you recommend as the MOST cost-effective solution to address these requirements?

    Set up VPC Flow Logs for the elastic network interfaces associated with the instances and configure the VPC Flow Logs to be filtered for rejected traffic. Publish the Flow Logs to CloudWatch Logs
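    A minimal sketch of creating such flow logs with boto3, capturing only rejected traffic on the instances' network interfaces and publishing to CloudWatch Logs. The ENI IDs, log group name, and IAM role ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["eni-0123456789abcdef0", "eni-0fedcba9876543210"],  # instance ENIs
    ResourceType="NetworkInterface",
    TrafficType="REJECT",                    # log only failed/rejected connections
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/rejected-connections",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-to-cw",
)
```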

  • 92

    A financial services firm intends to migrate its IT operations to AWS. The security team is establishing a framework to ensure that AWS best practices are being followed. AWS management console is the only way used by the IT teams to provision AWS resources. As per the firm's compliance requirements, the AWS resources need to be maintained in a particular configuration and audited regularly for unauthorized changes. As an AWS Certified Solutions Architect Professional, how will you implement this requirement? (Select two)

    Leverage AWS Config rules for auditing changes to AWS resources periodically and monitor the compliance of the configuration. Set up AWS Config custom rules using AWS Lambda to create a test-driven development approach, and finally automate the evaluation of configuration changes against the required controls, Leverage AWS CloudTrail events to review management activities of all AWS accounts. Make sure that CloudTrail is enabled in all accounts for the available AWS services. Enable CloudTrail trails and encrypt CloudTrail event log files with an AWS KMS key and monitor the recorded events via CloudWatch Logs

  • 93

    A media company has its users accessing the content from different platforms including mobile, tablet, and desktop. Each platform is customized to provide a different user experience based on various viewing modes. Path-based headers are used to serve the content for different platforms, hosted on different Amazon EC2 instances. An Auto Scaling group (ASG) has also been configured for the EC2 instances to ensure that the solution is highly scalable. Which of the following combination of services can help minimize the cost while maximizing the performance? (Select two)

    Amazon CloudFront with Lambda@Edge, Application Load Balancer

  • 94

    A payment service provider company has a legacy application built on a high-throughput, resilient queueing system to send messages to the customers. The implementation relied on a manually-managed RabbitMQ cluster and consumers. The system was able to process a large load of messages within a reasonable delivery time. The cluster and consumers were both deployed on Amazon Elastic Compute Cloud (Amazon EC2) instances. However, when the messages in the queue piled up due to network failures on the customer side, the latency of the overall flow was affected, resulting in a breach of the service level agreement (SLA). The development team had to manually scale the queues to resolve the issue. Also, while doing manual upgrades on RabbitMQ and the hosting operating system, the company faced downtime. The company is growing and has to maintain a strict delivery time SLA. The company is now looking for a serverless solution for its messaging queues. The queue functions of handling concurrency, message delays and retries, maintaining message order, secure delivery, and scalability are needed in the proposed solution architecture. Which of the following would you propose as a cost-effective solution for the requirement?

    Design the serverless architecture by use of Amazon Simple Queue Service (SQS) with Amazon ECS Fargate. To save costs, run the Amazon SQS FIFO queues and Amazon ECS Fargate tasks only when needed

  • 95

    A media streaming service delivers billions of hours of content from Amazon S3 to customers around the world. Amazon S3 also serves as the data lake for its data analytics solution. The data lake has a staging zone where intermediary query results are kept only for 24 hours. These results are also heavily referenced by other parts of the analytics pipeline. Which of the following is the MOST cost-effective solution to store this intermediary query data?

    Store the intermediary query results in S3 Standard storage class

  • 96

    A company has an S3 bucket that contains files in two different folders - s3://my-bucket/images and s3://my-bucket/thumbnails. When an image is newly uploaded, it is viewed several times. After a detailed analysis, the company has noticed that after 45 days those image files are rarely requested, but the thumbnails still are. After 180 days, the company would like to archive the image files and the thumbnails. Overall, the company would like the solution to remain highly available in order to withstand the loss of an entire AZ. Which of the following options can be combined to represent the most cost-efficient solution for the given scenario? (Select two)

    Configure a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days, Configure a Lifecycle Policy to transition all objects to Glacier after 180 days
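    A minimal sketch of a single lifecycle configuration that expresses both rules: the images/ prefix moves to Standard-IA after 45 days (thumbnails stay in Standard), and everything transitions to Glacier after 180 days. The bucket name mirrors the one in the question.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {   # Images are rarely read after 45 days; thumbnails keep S3 Standard.
                "ID": "images-to-standard-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": "images/"},
                "Transitions": [{"Days": 45, "StorageClass": "STANDARD_IA"}],
            },
            {   # Archive both images and thumbnails after 180 days.
                "ID": "all-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
            },
        ]
    },
)
```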

  • 97

    A firm has created different AWS Virtual Private Cloud (VPCs) for each project belonging to a client. For inter-project functionality, the firm needs to connect to a load balancer in VPC V1 from the Amazon EC2 instance in VPC V2. How will you set up the access to the internal load balancer for this use case in the most cost-effective manner?

    Establish connectivity between VPC V1 and VPC V2 using VPC peering. Enable DNS resolution from the source VPC for VPC peering. Establish the necessary routes, security group rules, and network access control list (ACL) rules to allow traffic between the VPCs

  • 98

    A solutions architect at a retail company has set up a workflow to ingest the clickstream data into the raw zone of the S3 data lake. The architect wants to run some SQL-based data sanity checks on the raw zone of the data lake. What AWS services would you suggest for this requirement such that the solution is cost-effective and easy to maintain?

    Use Athena to run SQL based analytics against S3 data
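    A minimal sketch of running one such sanity check with boto3 and polling for completion. The database, table, column, and query-results location are placeholders; the table over the raw zone would typically be defined beforehand via a Glue crawler or DDL.

```python
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT count(*) AS null_user_ids FROM clickstream_raw WHERE user_id IS NULL",
    QueryExecutionContext={"Database": "datalake_raw"},
    ResultConfiguration={"OutputLocation": "s3://datalake-athena-results/raw-checks/"},
)["QueryExecutionId"]

# Poll until the query finishes, then read the single-row result.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows[1]["Data"][0]["VarCharValue"], "rows with a NULL user_id")
```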

  • 99

    A company wants to migrate its on-premises resources to AWS. The IT environment consists of 200 virtual machines (VMs) with a combined storage capacity of 50 TB. While the majority of VMs may be taken down for migration since they are only used during business hours, others are mission-critical, so the downtime must be minimized. The on-premises network engineer has allocated 10 Mbps of internet bandwidth for the migration. The capacity of the on-premises network has peaked and increasing it would be prohibitively expensive. You have been hired as an AWS Certified Solutions Architect Professional to develop a migration strategy that can be implemented in the next three months. Which of the following would you recommend?

    Migrate mission-critical VMs using AWS Application Migration Service (MGN). Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball Edge. Leverage VM Import/Export to import the VMs into Amazon EC2

  • 100

    An e-commerce company has created a data warehouse using Redshift that is used to analyze data from Amazon S3. From the usage patterns, the analytics team has detected that after 30 days, the data is rarely queried in Redshift and it's not "hot data" anymore. The team would like to preserve the SQL querying capability on the data and get the queries started immediately. Also, the team wants to adopt a pricing model that allows the company to save the maximum amount of cost on Redshift. Which of the following options would you recommend? (Select two)

    Analyze the cold data with Athena, Transition the data to S3 Standard IA after 30 days

  • 6

    The engineering team at a retail company wants to establish a dedicated, encrypted, low latency, and high throughput connection between its data center and AWS Cloud. The engineering team has set aside sufficient time to account for the operational overhead of establishing this connection. Which of the following options represents the MOST optimal solution with the LEAST infrastructure set up required for provisioning the end to end connection?

    Use AWS Direct Connect along with a site-to-site VPN to establish a connection between the data center and AWS Cloud

  • 7

    A company uses Amazon FSx for Windows File Server with deployment type of Single-AZ 2 as its file storage service for its non-core functions. With a change in the company's policy that mandates high availability of data for all its functions, the company needs to change the existing configuration. The company also needs to monitor the file system activity as well as the end-user actions on the Amazon FSx file server. Which solutions will you combine to implement these requirements? (Select two)

    You can monitor storage capacity and file system activity using Amazon CloudWatch, and monitor end-user actions with file access auditing using Amazon CloudWatch Logs and Amazon Kinesis Data Firehose, Configure a new Amazon FSx for Windows file system with a deployment type of Multi-AZ. Transfer data to the newly created file system using the AWS DataSync service. Point all the file system users to the new location. You can test the failover of your Multi-AZ file system by modifying its throughput capacity

  • 8

    An e-commerce company is investigating user reports of errors in its Java-based web application on the day of the Thanksgiving sale. The development team recovered the logs created by the EC2 instance-hosted web servers and reviewed Aurora DB cluster performance metrics. Some of the web servers were terminated before logs could be collected and the Aurora metrics were inadequate for query performance analysis. Which of the following steps would you recommend to make the monitoring process more reliable to troubleshoot any future events due to traffic spikes? (Select three)

    Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the application logs to CloudWatch Logs, Set up the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances as well as set up tracing of SQL queries with the X-Ray SDK for Java, Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs

  • 9

    A financial services company has multiple AWS accounts hosting its portfolio of IT applications that serve the company's retail and enterprise customers. A CloudWatch Logs agent is installed on each of the EC2 instances running these IT applications. The company wants to aggregate all security events in a centralized AWS account dedicated to log storage. The centralized operations team at the company needs to perform near-real-time gathering and collating events across multiple AWS accounts. As a Solutions Architect Professional, which of the following solutions would you suggest to meet these requirements?

    Set up Kinesis Data Firehose in the logging account and then subscribe the delivery stream to CloudWatch Logs streams in each application AWS account via subscription filters. Persist the log data in an Amazon S3 bucket inside the logging AWS account

  • 10

    A data analytics company uses Amazon S3 as the data lake to store the input data that is ingested from the IoT field devices on an hourly basis. The ingested data has attributes such as the device type, device ID, device status, event timestamp, source IP address, etc. The data runs into millions of records per day and the company wants to run complex analytical queries on this data daily for product improvements for each device type. Which is the optimal way to store this data to get the best performance from the millions of data points processed daily?

    Store the data in Apache ORC format, partitioned by date and sorted by device type

  • 11

    A leading pharmaceutical company has significant investments in running Oracle and PostgreSQL services on Amazon RDS which provide their scientists with near real-time analysis of millions of rows of manufacturing data generated by continuous manufacturing equipment with 1,600 data points per row. The business analytics team has been running ad-hoc queries on these databases to prepare daily reports for senior management. The engineering team has observed that the database performance takes a hit whenever these reports are run by the analytics team. To facilitate the business analytics reporting, the engineering team now wants to replicate this data with high availability and consolidate these databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift. As a Solutions Architect Professional, which of the following would you recommend as the MOST resource-efficient solution that requires the LEAST amount of development time without the need to manage the underlying infrastructure?

    Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift

  • 12

    A digital marketing company uses S3 to store artifacts that may only be accessible to EC2 instances running in a private VPC. The security team at the company is apprehensive about an attack vector wherein any team member with access to this instance could also set up an EC2 instance in another VPC to access these artifacts. As an AWS Certified Solutions Architect Professional, which of the following solutions will you recommend to prevent such unauthorized access to the artifacts in S3?

    Configure an S3 VPC endpoint and create an S3 bucket policy to allow access only from this VPC endpoint
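    A minimal sketch of this combination: a gateway endpoint for S3 in the VPC, plus a bucket policy that denies any request not arriving through that endpoint. The VPC ID, route table ID, and bucket name are placeholders.

```python
import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint for S3, attached to the private subnets' route table.
endpoint = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
vpce_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Deny any request on the artifacts bucket that does not come through the endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyFromThisVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::artifact-bucket", "arn:aws:s3:::artifact-bucket/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
    }],
}
s3.put_bucket_policy(Bucket="artifact-bucket", Policy=json.dumps(policy))
```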

  • 13

    A retail company recently saw a huge spike in its monthly AWS spend. Upon further investigation, it was found that some developers had accidentally launched Amazon RDS instances in unexpected Regions. The company has hired you as an AWS Certified Solutions Architect Professional to establish best practices around least privileges for developers and control access to on-premises as well as AWS Cloud resources using Active Directory. The company has mandated you to institute a mechanism to control costs by restricting the level of access that developers have to the AWS Management Console without impacting their productivity. The company would also like to allow developers to launch RDS instances only in us-east-1 Region without limiting access to other services in any Region. How can you help the company achieve the new security mandate while minimizing the operational burden on the DevOps team?

    Configure SAML-based authentication tied to an IAM role that has the PowerUserAccess managed policy attached to it. Attach a customer-managed policy that denies access to RDS in any AWS Region except us-east-1
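    A minimal sketch of the customer-managed deny policy attached to the federated role (role name is a placeholder). The explicit Deny overrides the Allow granted by the PowerUserAccess managed policy for RDS actions in every Region other than us-east-1.

```python
import json
import boto3

iam = boto3.client("iam")

deny_rds_outside_us_east_1 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "rds:*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-east-1"}},
    }],
}

# Explicit deny overrides the Allow in the PowerUserAccess managed policy.
iam.put_role_policy(
    RoleName="developer-federated-role",
    PolicyName="deny-rds-outside-us-east-1",
    PolicyDocument=json.dumps(deny_rds_outside_us_east_1),
)
```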

  • 14

    A company has built a serverless electronic document management system for users to upload their documents. The system also has a web application that connects to an Amazon API Gateway with Regional endpoints which in turn invokes AWS Lambda functions. The Lambda functions write the metadata of the documents to the Amazon Aurora Serverless database before uploading the actual documents to the Amazon S3 bucket. While the serverless architecture has been tested in the US East (N. Virginia) Region, the solution should be scalable for other AWS Regions too. As an AWS Certified Solutions Architect Professional, which options would you recommend to make the architecture scalable while offering low latency service to customers of any AWS region? (Select two)

    Change the API Gateway Regional endpoints to edge-optimized endpoints, Enable S3 Transfer Acceleration on the S3 bucket and configure the web application to use the Transfer Acceleration endpoints

  • 15

    A global apparel, footwear, and accessories retailer uses Amazon S3 for centralized storage of the static media assets such as images and videos for its products. The product planning specialists typically upload and download video files (about 100MB each) to the same S3 bucket as part of their day to day work. Initially, the product planning specialists were based out of a single region and there were no performance issues. However, as the company grew and started running offices from multiple countries, it resulted in poor latency while accessing data from S3 and uploading data to S3. The company wants to continue with the serverless solution for its storage requirements but wants to improve its performance. As a solutions architect, which of the following solutions do you propose to address this issue? (Select two)

    Use Amazon CloudFront distribution with origin as the S3 bucket. This would speed up uploads as well as downloads for the video files, Enable Amazon S3 Transfer Acceleration for the S3 bucket. This would speed up uploads as well as downloads for the video files
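    A minimal sketch of the Transfer Acceleration part: enable it once on the bucket, then have the remote-office clients use the accelerate endpoint, which routes traffic to the nearest edge location and onto the AWS backbone. The bucket and file names are placeholders.

```python
import boto3
from botocore.config import Config

BUCKET = "apparel-media-assets"   # placeholder bucket

# One-time: enable Transfer Acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients in remote offices upload and download via the accelerate endpoint.
accelerated_s3 = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
accelerated_s3.upload_file("product-video.mp4", BUCKET, "videos/product-video.mp4")
```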

  • 16

    A company allows property owners and travelers to connect with each other for the purpose of renting unique vacation spaces around the world. The engineering team at the company uses Amazon MySQL RDS DB cluster because it simplifies much of the time-consuming administrative tasks typically associated with databases. The team uses Multi-Availability Zone (Multi-AZ) deployment to further automate its database replication and augment data durability. The current cluster configuration also uses Read Replicas. An intern has joined the team and wants to understand the replication capabilities for Multi-AZ as well as Read Replicas for the given RDS cluster. As a Solutions Architect Professional, which of the following capabilities would you identify as correct for the given database?

    Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read Replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region

  • 17

    An Internet-of-Things (IoT) company is using Kinesis Data Streams (KDS) to process IoT data from field devices. Multiple consumer applications are using the incoming data streams and the engineers have noticed a performance lag for the data delivery speed between producers and consumers of the data streams. As a Solutions Architect Professional, which of the following would you recommend to improve the performance for the given use-case?

    Use Enhanced Fanout feature of Kinesis Data Streams to support the desired read throughput for the downstream applications
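    A minimal sketch of registering each consumer application for enhanced fan-out, so each one gets its own dedicated read throughput per shard instead of sharing the default per-shard read limit. The stream ARN and consumer names are placeholders.

```python
import boto3

kinesis = boto3.client("kinesis")
STREAM_ARN = "arn:aws:kinesis:us-east-1:123456789012:stream/iot-telemetry"  # placeholder

# Each registered consumer gets dedicated per-shard read throughput (enhanced fan-out).
for app in ["anomaly-detector", "dashboard-feeder", "archiver"]:
    kinesis.register_stream_consumer(StreamARN=STREAM_ARN, ConsumerName=app)

print(kinesis.list_stream_consumers(StreamARN=STREAM_ARN)["Consumers"])
```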

  • 18

    A digital media company has hired you as an AWS Certified Solutions Architect Professional to optimize the architecture for its backup solution for applications running on the AWS Cloud. Currently, all of the applications running on AWS use at least two Availability Zones (AZs). The updated backup policy at the company mandates that all nightly backups for its data are durably stored in at least two geographically distinct Regions for Production and Disaster Recovery (DR) and the backup processes for both Regions must be fully automated. The new backup solution must ensure that the backup is available to be restored immediately for the Production Region and should be restored within 24 hours in the DR Region. Which of the following represents the MOST cost-effective solution that will address the given use-case?

    Create a backup process to persist all the data to an S3 bucket A using S3 standard storage class in the Production Region. Set up cross-Region replication of this S3 bucket A to an S3 bucket B using S3 standard storage class in the DR Region and set up a lifecycle policy in the DR Region to immediately move this data to Amazon Glacier

  • 19

    A retail company has hired you as an AWS Certified Solutions Architect Professional to provide consultancy for managing a serverless application that consists of multiple API gateways, Lambda functions, S3 buckets and DynamoDB tables. The company is getting reports from customers that some of the application components seem to be lagging while loading dynamic images and some are timing out with the "504 Gateway Timeout" error. As part of your investigations to identify the root cause behind this issue, you can confirm that DynamoDB monitoring metrics are at acceptable levels. Which of the following steps would you recommend to address these application issues? (Select two)

    Process and analyze the Amazon CloudWatch Logs for Lambda function to determine processing times for requested images at pre-configured intervals, Process and analyze the AWS X-Ray traces and analyze HTTP methods to determine the root cause of the HTTP errors

  • 20

    A leading hotel reviews website has a repository of more than one million high-quality digital images. When this massive volume of images became too cumbersome to handle in-house, the company decided to offload the content to a central repository on Amazon S3 as part of its hybrid cloud strategy. The company now wants to reprocess its entire collection of photographic images to change the watermarks. The company wants to use Amazon EC2 instances and Amazon SQS in an integrated workflow to generate the sizes they need for each photo. The team wants to process a few thousand photos each night, using Amazon EC2 Spot Instances. The team uses Amazon SQS to communicate the photos that need to be processed and the status of the jobs. To handle certain sensitive photos, the team wants to postpone the delivery of certain messages to the queue by one minute while all other messages need to be delivered immediately to the queue. As a Solutions Architect Professional, which of the following solutions would you suggest to the company to handle the workflow for sensitive photos?

    Use message timers to postpone the delivery of certain messages to the queue by one minute
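    A minimal sketch of a per-message timer: setting DelaySeconds on individual send_message calls delays only the sensitive photos by one minute while everything else is delivered immediately. The queue URL and message shape are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/photo-jobs"  # placeholder

def enqueue_photo_job(photo_key: str, sensitive: bool) -> None:
    """Send a processing job; sensitive photos get a 60-second message timer."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"photo_key": photo_key}),
        DelaySeconds=60 if sensitive else 0,   # per-message timer, 0-900 seconds
    )

enqueue_photo_job("photos/beach.jpg", sensitive=False)   # visible immediately
enqueue_photo_job("photos/id-card.jpg", sensitive=True)  # visible after one minute
```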

  • 21

    The engineering team at a retail company has deployed a fleet of EC2 instances under an Auto Scaling group (ASG). The instances under the ASG span two Availability Zones (AZ) within the eu-west-1 region. All the incoming requests are handled by an Application Load Balancer (ALB) that routes the requests to the EC2 instances under the ASG. A planned migration went wrong last week when two instances (belonging to AZ 1) were manually terminated and desired capacity was reduced causing the Availability Zones to become unbalanced. Later that day, another instance (belonging to AZ 2) was detected as unhealthy by the Application Load Balancer's health check. Which of the following options represent the correct outcomes for the aforesaid events? (Select two)

    As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application, Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance

  • 22

    A leading telecommunications company has developed its cloud storage solution on Amazon RDS for MySQL but it's running into performance issues despite using Read Replicas. The company has hired you as an AWS Certified Solutions Architect Professional to address these performance-related challenges on an urgent basis without moving away from the underlying relational database schema. The company has branch offices across the world, and it needs the solution to work on a global scale. Which of the following will you recommend as the MOST cost-effective and high-performance solution?

    Use Amazon Aurora Global Database to enable fast local reads with low latency in each region

  • 23

    A web-hosting startup manages more than 500 public web applications on AWS Cloud which are deployed in a single AWS Region. The fully qualified domain names (FQDNs) of all of the applications are configured to use HTTPS and are served via Application Load Balancers (ALBs). These ALBs are configured to use public SSL/TLS certificates. The startup has hired you as an AWS Certified Solutions Architect Professional to migrate the web applications to a multi-Region architecture. You must ensure that all HTTPS services continue to work without interruption. Which of the following solutions would you suggest to address these requirements?

    Generate a separate certificate for each FQDN in each AWS Region using AWS Certificate Manager. Associate the certificates with the corresponding ALBs in the relevant AWS Region

  • 24

    A gaming company runs its flagship application with an SLA of 99.99%. Global users access the application 24/7. The application is currently hosted on the on-premises data centers and it routinely fails to meet its SLA, especially when hundreds of thousands of users access the application concurrently. The engineering team has also received complaints from some users about high latency. As a Solutions Architect Professional, how would you redesign this application for scalability and also allow for automatic failover at the lowest possible cost?

    Configure Route 53 latency-based routing to route to the nearest Region and activate the health checks. Host the website on S3 in each Region and use API Gateway with AWS Lambda for the application layer. Set up the data layer using DynamoDB global tables with DAX for caching

  • 25

    A stock trading firm uses AWS Cloud for its IT infrastructure. The firm runs several trading-risk simulation applications, developing complex algorithms to simulate diverse scenarios in order to evaluate the financial health of its customers. The firm stores customers' financial records on Amazon S3. The engineering team needs to implement an archival solution based on Amazon S3 Glacier to enforce regulatory and compliance controls on the archived data. As a Solutions Architect Professional, which of the following solutions would you recommend?

    Use S3 Glacier vault to store the sensitive archived data and then use a vault lock policy to enforce compliance controls

  • 26

    The DevOps team at a financial services company has provisioned a new GPU optimized EC2 instance X by choosing the default security group of the default VPC. The team can ping instance X from other instances in the VPC. The other instances were also created using the default security group. The next day, the team launches another GPU optimized instance Y by creating a new security group and attaching it to instance Y. All other configuration options for instance Y are chosen as default. However, the team is not able to ping instance Y from other instances in the VPC. As a Solutions Architect Professional, which of the following would you identify as the root cause of the issue?

    Instance X is in the default security group. The default rules for the default security group allow inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group. Instance Y is in a new security group. The default rules for a security group that you create allow no inbound traffic

  • 27

    The engineering team at a data analytics company is currently optimizing a production workload on AWS that is I/O intensive with frequent read/write/update operations and it's currently constrained on the IOPS. This workload consists of a single-tier with 15 r6g.8xlarge instances, each with 3 TB gp2 volume. The number of processing jobs has increased recently, resulting in an increase in latency as well. The team has concluded that they need to increase the IOPS by 3,000 for each of the instances for the application to perform efficiently. As an AWS Certified Solutions Architect Professional, which of the following solutions will you suggest to meet the performance goal in the MOST cost-efficient way?

    Modify the size of the gp2 volume for each instance from 3 TB to 4 TB
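    This works because gp2 baseline performance scales at 3 IOPS per GiB (up to 16,000 IOPS), so growing each volume from 3 TiB to 4 TiB adds roughly 3,000 IOPS without switching to a costlier volume type. A quick check of the arithmetic, plus the online resize call (the volume ID is a placeholder):

```python
import boto3

GIB_PER_TIB = 1024
IOPS_PER_GIB = 3   # gp2 baseline: 3 IOPS per GiB, capped at 16,000 IOPS

current = 3 * GIB_PER_TIB * IOPS_PER_GIB   # 9,216 IOPS at 3 TiB
target = 4 * GIB_PER_TIB * IOPS_PER_GIB    # 12,288 IOPS at 4 TiB
print(f"Extra IOPS gained: {target - current}")   # ~3,072 >= the required 3,000

# Resize one instance's data volume in place (repeated for each of the 15 instances).
boto3.client("ec2").modify_volume(VolumeId="vol-0123456789abcdef0", Size=4096)
```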

  • 28

    A social media company has a serverless application stack that consists of CloudFront, API Gateway and Lambda functions. The company has hired you as an AWS Certified Solutions Architect Professional to improve the current deployment process which creates a new version of the Lambda function and then runs an AWS CLI script for deployment. In case the new version errors out, then another CLI script is invoked to deploy the previous working version of the Lambda function. The company has mandated you to decrease the time to deploy new versions of the Lambda functions and also reduce the time to detect and rollback when errors are identified. Which of the following solutions would you suggest for the given use-case?

    Use Serverless Application Model (SAM) and leverage the built-in traffic-shifting feature of SAM to deploy the new Lambda version via CodeDeploy and use pre-traffic and post-traffic test functions to verify code. Rollback in case CloudWatch alarms are triggered

  • 29

    A social media company has its corporate headquarters in New York with an on-premises data center using an AWS Direct Connect connection to the AWS VPC. The branch offices in San Francisco and Miami use Site-to-Site VPN connections to connect to the AWS VPC. The company is looking for a solution to have the branch offices send and receive data with each other as well as with their corporate headquarters. As a Solutions Architect Professional, which of the following solutions would you recommend to meet these requirements?

    Set up VPN CloudHub between branch offices and corporate headquarters which will enable branch offices to send and receive data with each other as well as with their corporate headquarters

  • 30

    A Silicon Valley based unicorn startup recently launched a video-sharing social networking service called KitKot. The startup uses AWS Cloud to manage the IT infrastructure. Users upload video files up to 1 GB in size to a single EC2 instance-based application server which stores them on a shared EFS file system. Another set of EC2 instances, managed via an Auto Scaling group, periodically scans the EFS share directory for new files to process and generate new videos (for thumbnails and composite visual effects) according to the video processing instructions that are uploaded alongside the raw video files. Post-processing, the raw video files are deleted from the EFS file system and the results are stored in an S3 bucket. Links to the processed video files are sent via in-app notifications to the users. The startup has recently found that even as more instances are added to the Auto Scaling Group, many files are processed twice, so video processing speed is not improved. As an AWS Certified Solutions Architect Professional, what would you recommend to improve the reliability of the solution as well as eliminate the redundant processing of video files?

    Refactor the application to run from S3 instead of EFS and upload the video files directly to an S3 bucket. Configure an S3 trigger to invoke a Lambda function on each video file upload to S3 that puts a message in an SQS queue containing the link and the video processing instructions. Change the video processing application to read from the SQS queue and the S3 bucket. Configure the queue depth metric to scale the size of the Auto Scaling group for video processing instances. Leverage EventBridge events to trigger an SNS notification to the user containing the links to the processed files
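
    A minimal sketch of the Lambda handler described above, which reads the S3 upload event and enqueues the object link plus a pointer to its processing instructions. The queue URL environment variable and the instructions-file naming convention are assumptions:

        import json
        import os
        import boto3

        sqs = boto3.client("sqs")
        QUEUE_URL = os.environ["VIDEO_QUEUE_URL"]  # assumed environment variable

        def handler(event, context):
            # Each record corresponds to one uploaded object in the S3 trigger event.
            for record in event["Records"]:
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                message = {
                    "video": f"s3://{bucket}/{key}",
                    # Assumed convention: instructions uploaded alongside the raw video.
                    "instructions": f"s3://{bucket}/{key}.instructions.json",
                }
                sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message))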

  • 31

    A US-based retailer wants to ensure website availability as the company’s traditional infrastructure hasn’t been easy to scale. By moving its e-commerce platform to AWS, the company wants to scale with demand and ensure better availability. Last year, the company handled record Black Friday sale orders at a rate of nearly 10,000 orders/hour. The engineering team at the company now wants to finetune the disaster recovery strategy for its database tier. As an AWS Certified Solutions Architect Professional, you have been asked to implement a disaster recovery strategy for all the Amazon RDS databases that the company owns. Which of the following points do you need to consider for creating a robust recovery plan? (Select three)

    Automated backups, manual snapshots and Read Replicas are supported across multiple Regions, Recovery time objective (RTO) represents the number of hours it takes to return the Amazon RDS database to a working state after a disaster, Database snapshots are user-initiated backups of your complete DB instance that serve as full backups. These snapshots can be copied and shared to different Regions and accounts

  • 32

    The engineering team at a social media company is building an ElasticSearch based index for all the existing files in S3. To build this index, it only needs to read the first 250 bytes of each object in S3, which contains some metadata about the content of the file itself. There are over 100,000 files in your S3 bucket, adding up to 50TB of data. As a Solutions Architect Professional, which of the following solutions can be used to build this index MOST efficiently? (Select two)

    Create an application that will use the S3 Select ScanRange parameter to get the first 250 bytes and store that information in ElasticSearch, Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in ElasticSearch
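
    Both options boil down to reading only the first 250 bytes of each object instead of downloading the full 50 TB. A minimal boto3 sketch of the byte-range fetch, with hypothetical bucket and key names; S3 Select with the ScanRange parameter achieves the same effect for supported formats:

        import boto3

        s3 = boto3.client("s3")

        def read_metadata(bucket: str, key: str) -> bytes:
            # A ranged GET transfers only the requested bytes, not the whole object.
            response = s3.get_object(Bucket=bucket, Key=key, Range="bytes=0-249")
            return response["Body"].read()

        # Example usage with hypothetical names; the result would then be indexed
        # into the Elasticsearch cluster by a separate call.
        metadata = read_metadata("my-media-bucket", "archives/file-000001.dat")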

  • 33

    An e-commerce web application is hosted on Amazon EC2 instances that are fronted by an Application Load Balancer (ALB) configured with an Auto Scaling group (ASG). Enhanced security is provided to the ALB by AWS WAF web ACLs. As per the company's security policy, AWS CloudTrail is activated and logs are configured to be stored on Amazon S3 and CloudWatch Logs. A discount sales offer was run on the application for a week. The support team has noticed that a few of the instances have rebooted, taking down the log files and all temporary data with them. Initial analysis has confirmed that the incident took place during off-peak hours. Even though the incident did not cause any sales or revenue loss, the CTO has asked the security team to fix the security error that has allowed the incident to go unnoticed and eventually become untraceable. What steps will you implement to permanently record all traffic coming into the application?

    Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking

  • 34

    A leading Internet-of-Things (IoT) solutions company needs to develop a platform that would analyze real-time clickstream events from embedded sensors in consumer electronic devices. The company has hired you as an AWS Certified Solutions Architect Professional to consult the engineering team and develop a solution using the AWS Cloud. The company wants to use clickstream data to perform data science, develop algorithms, and create visualizations and dashboards to support the business stakeholders. Each of these groups would work independently and would need real-time access to this clickstream data for their applications. Which of the following options would provide a highly available and fault-tolerant solution to capture the clickstream events from the source and also provide a simultaneous feed of the data stream to the downstream applications?

    Use Amazon Kinesis Data Streams so that multiple applications can consume the same streaming data concurrently and independently

  • 35

    A company has built its serverless solution using Amazon API Gateway REST API and AWS Lambda across multiple AWS Regions configured into a single AWS account. During peak hours, customers began to receive 429 Too Many Requests errors from multiple API methods. While troubleshooting the issue, the team realized that AWS Lambda function(s) have not been invoked for these API methods. Also, the company wants to provide a separate quota for its premium customers to access the APIs. Which solution will you offer to meet this requirement?

    The error is the outcome of the company reaching its API Gateway account limit for calls per second. Configure API keys as client identifiers and use usage plans to define the per-client throttling limits for premium customers
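
    A minimal boto3 sketch of carving out a separate quota for premium customers with a usage plan and an API key; the API ID, stage name, and limits are illustrative assumptions:

        import boto3

        apigw = boto3.client("apigateway")

        # Usage plan with its own throttling and quota for premium customers.
        plan = apigw.create_usage_plan(
            name="premium-customers",
            apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # hypothetical API/stage
            throttle={"rateLimit": 500.0, "burstLimit": 1000},
            quota={"limit": 1000000, "period": "MONTH"},
        )

        # API key acting as the client identifier, then attached to the plan.
        key = apigw.create_api_key(name="premium-customer-key", enabled=True)
        apigw.create_usage_plan_key(
            usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
        )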

  • 36

    An IT company wants to move all its clients belonging to the regulated and security-sensitive industries such as financial services and healthcare to the AWS Cloud as it wants to leverage the out-of-the-box security-specific capabilities offered by AWS. The Security team at the company is developing a framework to validate the adoption of AWS best practices and industry-recognized compliance standards. The AWS Management Console is the preferred method for the in-house teams wanting to provision resources. You have been hired as an AWS Certified Solutions Architect Professional to spearhead this strategic initiative. Which of the following strategies would you adopt to address these business requirements for continuously assessing, auditing and monitoring the configurations of AWS resources? (Select two)

    Leverage Config rules to audit changes to AWS resources and monitor the compliance of the configuration by running the evaluations for the rule at a frequency that you choose. Develop AWS Config custom rules to establish a test-driven development approach by triggering the evaluation when any resource that matches the rule's scope changes in configuration, Enable trails and set up CloudTrail events to review and monitor management activities of all AWS accounts by logging these activities into CloudWatch Logs using a KMS key. Ensure that CloudTrail is enabled for all accounts as well as all available AWS services

  • 37

    A company has an Elastic Load Balancer (ELB) that is configured with an Auto Scaling Group (ASG) having a minimum of 4, a maximum of 10, and a desired capacity of 4 instances. The ASG cooldown and the termination policies are configured to the default values. Monitoring reports indicate a general usage requirement of 4 instances, while any traffic spikes result in an additional 10 instances. Customers have been complaining of request timeouts and partially loaded pages. As an AWS Certified Solutions Architect Professional, which of the following options will you suggest to fix this issue?

    Configure connection draining on ELB

  • 38

    A solutions architect at a retail company has configured a private hosted zone using Route 53. The architect needs to configure health checks for record sets within the private hosted zone that are associated with EC2 instances. How can the architect build a solution to address the given use case?

    Configure a CloudWatch metric that checks the status of the EC2 StatusCheckFailed metric, add an alarm to the metric, and then configure a health check that monitors the state of the alarm
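
    Health checks in a private hosted zone cannot probe private IP addresses directly, which is why the answer routes through a CloudWatch alarm. A minimal boto3 sketch of a Route 53 health check that tracks an existing alarm; the alarm name and Region are placeholders:

        import uuid
        import boto3

        route53 = boto3.client("route53")

        # Health check whose status mirrors an existing CloudWatch alarm on the
        # EC2 StatusCheckFailed metric (alarm name and Region are hypothetical).
        route53.create_health_check(
            CallerReference=str(uuid.uuid4()),
            HealthCheckConfig={
                "Type": "CLOUDWATCH_METRIC",
                "AlarmIdentifier": {"Region": "us-east-1", "Name": "ec2-status-check-failed"},
                "InsufficientDataHealthStatus": "Unhealthy",
            },
        )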

  • 39

    A standard three-tier application is hosted on Amazon EC2 instances that are fronted by an Application Load Balancer. The application maintenance team has reported several small-scale malicious attacks on the application. The solutions architect wants to ramp up the security of the application. Which of the following would you recommend as part of the best practices to scan and mitigate the known vulnerabilities?

    Configure the application security groups to ensure that only the necessary ports are open. Use Amazon Inspector to periodically scan the EC2 instances for vulnerabilities

  • 40

    You have hired a Cloud consulting agency, Example Corp, to monitor your AWS account and help optimize costs. To track daily spending, Example Corp needs access to your AWS resources, therefore, you allow Example Corp to assume an IAM role in your account. However, Example Corp also tracks spending for other customers, and there could be a configuration issue in the Example Corp environment that allows another customer to compel Example Corp to attempt to take an action in your AWS account, even though that customer should only be able to take the action in their account. How will you mitigate the risk of such a cross-account access scenario?

    Create an IAM role in your AWS account with a trust policy that trusts the Partner (Example Corp). Take a unique external ID value from Example Corp and include this external ID condition in the role’s trust policy
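
    This is the classic confused-deputy mitigation. A minimal boto3 sketch of the cross-account role with the external ID condition; the account ID, role name, and external ID value are placeholders:

        import json
        import boto3

        iam = boto3.client("iam")

        # Trust policy allowing Example Corp's account to assume the role only when it
        # passes the agreed external ID; all identifiers below are hypothetical.
        trust_policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # Example Corp account
                "Action": "sts:AssumeRole",
                "Condition": {"StringEquals": {"sts:ExternalId": "ExampleCorp-unique-id-98765"}},
            }],
        }

        iam.create_role(
            RoleName="ExampleCorpBillingAccess",
            AssumeRolePolicyDocument=json.dumps(trust_policy),
        )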

  • 41

    A company has hired you as an AWS Certified Solutions Architect Professional to develop a deployment plan for its flagship application deployed on EC2 instances across multiple Availability Zones in the us-east-1 Region. Your solution must meet these constraints: 1) A 300 GB static dataset must be available to the application before it can be started 2) The application layer must scale on-demand with the least amount of starting time possible 3) The development team must be able to change the code multiple times in a day 4) Any patches for critical operating systems (OS) must be applied within 24 hours of release Which of the following represents the best solution for this requirement?

    Leverage AWS Systems Manager to create and maintain a new AMI with the OS patches updated on an ongoing basis. Configure the Auto Scaling group to use the patched AMI and replace existing unpatched instances. Use AWS CodeDeploy to push the application code to the instances. Store and access the static dataset using Amazon EFS

  • 42

    A web application is running on a fleet of Amazon EC2 instances that are configured to operate in an Auto Scaling group (ASG). The instances are fronted by an Elastic Load Balancer (ELB). To enhance the system performance, a new Amazon Machine Image (AMI) was created and the ASG was configured to use the new AMI. However, after the production deployment, users complained of aberrations in the expected application functionality. A cross-check on the ELB has confirmed that all the instances are healthy and running as expected. As a solutions architect, which option would you suggest to rectify these issues and guarantee that later deployments are successful?

    Create a new ASG launch configuration that uses the newly created AMI. Double the size of the ASG and allow the new instances to become healthy and then reduce the ASG back to the original size. If the new instances do not work as expected, associate the ASG with the old launch configuration

  • 43

    A company has a web application running on an EC2 instance with a single elastic network interface in a subnet in a VPC. As part of the network re-architecture, the CTO at the company wants the web application to be moved to a different subnet in the same Availability Zone. Which of the following solutions would you suggest to meet these requirements?

    Launch a new instance in the new subnet via an AMI created from the old instance. Direct traffic to this new instance using Route 53 and then terminate the old instance

  • 44

    For deployments across AWS accounts, a company has decided to use AWS CodePipeline to deploy an AWS CloudFormation stack in an AWS account (account A) to a different AWS account (account B). As a solutions architect, what combination of steps will you take to configure this requirement? (Select three)

    In account B, create a cross-account IAM role. In account A, add the AssumeRole permission to account A's CodePipeline service role to allow it to assume the cross-account role in account B, In account B, create a service role for the CloudFormation stack that includes the required permissions for the services deployed by the stack. In account A, update the CodePipeline configuration to include the resources associated with account B, In account A, create a customer-managed AWS KMS key that grants usage permissions to account A's CodePipeline service role and account B. Also, create an Amazon Simple Storage Service (Amazon S3) bucket with a bucket policy that grants account B access to the bucket

  • 45

    A web application is hosted on Amazon EC2 instances that are fronted by an Application Load Balancer (ALB) configured with an Auto Scaling group (ASG). Enhanced security is provided to the ALB by AWS WAF web ACLs. As per the company's security policy, AWS CloudTrail is activated and logs are configured to be stored on Amazon S3 and CloudWatch Logs. A holiday sales offer was run on the application for a week. The development team has noticed that a few of the instances have rebooted, taking down the log files and all temporary data with them. Initial analysis has confirmed that the incident took place during off-peak hours. Even though the incident did not cause any sales or revenue loss, the CTO has asked the development team to fix the security error that has allowed the incident to go unnoticed and eventually become untraceable. Which of the following steps will you implement to permanently record all traffic coming into the application?

    Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking

  • 46

    The development team at a gaming company has been tasked with reducing in-game latency and jitter. The team wants traffic from its end users to be routed to the AWS Region that is closest to the end users geographically. When maintenance occurs in an AWS Region, traffic must be routed to the next closest AWS Region with no change to the IP addresses used by the end-users' connections. As an AWS Certified Solutions Architect Professional, which solution will you suggest to meet these requirements?

    Set up AWS Global Accelerator in front of all the AWS Regions

  • 47

    A supply-chain manufacturing company manages its AWS resources in an Elastic Beanstalk environment. For implementing a new security requirement, the company needs to assign a single static IP address to a load-balanced Elastic Beanstalk environment. Subsequently, this IP address will be used to uniquely identify traffic coming from the Elastic Beanstalk environment. As a solutions architect, which of the following would you recommend as the BEST solution that requires minimal maintenance?

    Use a Network Address Translation (NAT) gateway to map multiple IP addresses into a single publicly exposed IP address

  • 48

    A weather monitoring agency stores and manages the global weather data for the last 50 years. The data has a velocity of 1GB per minute. You would like to store the data with only the most relevant attributes to build a predictive model for weather patterns. Which of the following solutions would you use to build the most cost-effective solution with the LEAST amount of infrastructure maintenance?

    Capture the data in Kinesis Data Firehose and use an intermediary Lambda function to filter and transform the incoming stream before the output is dumped on S3
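
    A minimal sketch of the intermediary Lambda transformation function, which keeps only the relevant attributes before Kinesis Data Firehose delivers the output to S3. The attribute names are assumptions; the record contract (recordId, result, base64 data) is what Firehose expects from a transformation function:

        import base64
        import json

        # Assumed set of attributes worth keeping for the predictive model.
        RELEVANT_ATTRIBUTES = ("station_id", "timestamp", "temperature", "pressure")

        def handler(event, context):
            output = []
            for record in event["records"]:
                payload = json.loads(base64.b64decode(record["data"]))
                filtered = {k: payload[k] for k in RELEVANT_ATTRIBUTES if k in payload}
                output.append({
                    "recordId": record["recordId"],
                    "result": "Ok",
                    "data": base64.b64encode((json.dumps(filtered) + "\n").encode()).decode(),
                })
            return {"records": output}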

  • 49

    An e-commerce company runs its flagship website on its on-premises Linux servers. Recently, the company suffered outages after announcing huge discounts on its website. The web tier of the application is fronted by an Elastic Load Balancer while the database tier is built on an RDS MySQL database. The company is planning to run heavy discounts for the upcoming holiday sales season. The company is looking for a solution to avoid any similar outages as well as quickly ramp up the ability to handle huge traffic spikes. As an AWS Certified Solutions Architect Professional, which of the following would you suggest as the most optimal solution that can enhance the application's capabilities to handle the sudden spikes in user traffic without significant development effort?

    Create a CloudFront distribution and configure CloudFront to cache objects from a custom origin. This will offload some traffic from the on-premises servers. Customize CloudFront cache behavior by setting Time To Live (TTL) to suit your business requirement

  • 50

    An e-commerce business has recently moved to AWS serverless infrastructure with the help of Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application performs as expected on a normal day. But, during peak periods, when thousands of concurrent requests are submitted, the user requests are initially failing before finally succeeding. The development team examined the logs for each component with a special focus on the Amazon CloudWatch Logs for Lambda. None of the components, services, or applications have logged any errors. What could be the most probable reason for this failure?

    The throttle limit set on API Gateway is very low. During peak hours, the additional requests are not making their way to Lambda

  • 51

    A company uses Amazon S3 storage service for storing its business data. Multiple S3 event notifications have been configured to be delivered to Amazon Simple Queue Service (Amazon SQS) queue when objects pass through the storage lifecycle. The team has noticed that notifications are not being delivered to the queue. Amazon SQS queue has server-side encryption (SSE) turned on. What should be done to receive the S3 event notifications to an Amazon SQS queue that uses SSE?

    Create a customer-managed AWS KMS key and configure the key policy to grant permissions to the Amazon S3 service principal
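
    A minimal boto3 sketch of the customer-managed key whose policy lets the Amazon S3 service principal use it, so S3 event notifications can be delivered to the SSE-enabled SQS queue. The account ID in the root statement is a placeholder:

        import json
        import boto3

        kms = boto3.client("kms")

        key_policy = {
            "Version": "2012-10-17",
            "Statement": [
                {   # Keep the account root as key administrator (account ID is a placeholder).
                    "Effect": "Allow",
                    "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
                    "Action": "kms:*",
                    "Resource": "*",
                },
                {   # Allow S3 to encrypt the notifications it delivers to the queue.
                    "Effect": "Allow",
                    "Principal": {"Service": "s3.amazonaws.com"},
                    "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
                    "Resource": "*",
                },
            ],
        }

        kms.create_key(
            Policy=json.dumps(key_policy),
            Description="Customer-managed key for the SSE-enabled SQS queue",
        )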

  • 52

    A company manages a healthcare diagnostics application that writes thousands of lab images to a mounted NFS file system each night from 10 PM - 5 AM. The company wants to migrate this application from its on-premises data center to AWS Cloud over a private network. The company has already established an AWS Direct Connect connection to AWS to facilitate this migration. This application is slated to be moved to Amazon EC2 instances with the Elastic File System (Amazon EFS) file system as the storage service. Which of the following represents the MOST optimal way of replicating all images to the cloud before the application is fully migrated to the cloud?

    Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every night

  • 53

    A company manages a stateful web application that persists data on a MySQL database. The application stack is hosted in the company's on-premises data center using a single server. The company is looking at increasing its market presence through promotions and campaigns. While the user experience has been good so far, the current application architecture will not support the growth that the company envisages. The company has hired you as an AWS Certified Solutions Architect Professional to migrate the current architecture to AWS which should continue to support SQL-based queries. The proposed solution should offer maximum reliability with better performance. What would you recommend?

    Set up database migration to Amazon Aurora MySQL. Deploy the application in an Auto Scaling group for Amazon EC2 instances that are fronted by an Application Load Balancer. Store sessions in an Amazon ElastiCache for Redis with replication group

  • 54

    A bioinformatics company leverages multiple open source tools to manage data analysis workflows running on its on-premises servers to process biological data which is generated and stored on a Network Attached Storage (NAS). The existing workflow receives around 100 GB of input biological data for each job run and individual jobs can take several hours to process the data. The CTO at the company wants to re-architect its proprietary analytics workflow on AWS to meet the workload demands and reduce the turnaround time from months to days. The company has provisioned a high-speed AWS Direct Connect connection. The final result needs to be stored in Amazon S3. The company is expecting approximately 20 job requests each day. Which of the following options would you recommend for the given use case?

    Leverage AWS DataSync to transfer the biological data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS Step Functions workflow for orchestrating an AWS Batch job that processes the biological data

  • 55

    A team has recently created a secret using AWS Secrets Manager to access their private Amazon Relational Database Service (Amazon RDS) instance. When the team tried to rotate the AWS Secrets Manager secret in an Amazon Virtual Private Cloud (Amazon VPC), the operation failed. On analyzing the Amazon CloudWatch Logs, the team realized that the AWS Lambda task timed out. Which of the following solutions needs to be implemented for rotating the secret successfully?

    Configure an Amazon VPC interface endpoint for the Secrets Manager service to enable access for your Secrets Manager Lambda rotation function and private Amazon Relational Database Service (Amazon RDS) instance
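
    A minimal boto3 sketch of creating the interface endpoint so the rotation Lambda function running inside the VPC can reach Secrets Manager without internet access; the VPC, subnet, security group IDs, and the Region in the service name are placeholders:

        import boto3

        ec2 = boto3.client("ec2")

        # Interface endpoint for Secrets Manager inside the VPC; all IDs are hypothetical.
        ec2.create_vpc_endpoint(
            VpcEndpointType="Interface",
            VpcId="vpc-0123456789abcdef0",
            ServiceName="com.amazonaws.us-east-1.secretsmanager",
            SubnetIds=["subnet-0123456789abcdef0"],
            SecurityGroupIds=["sg-0123456789abcdef0"],
            PrivateDnsEnabled=True,  # lets the Lambda function use the default endpoint DNS name
        )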

  • 56

    A leading medical imaging equipment and diagnostic imaging solutions provider uses AWS Cloud to run its healthcare data flows through more than 500,000 medical imaging devices globally. The solutions provider stores close to one petabyte of medical imaging data on Amazon S3 to provide the durability and reliability needed for their critical data. A research assistant working with the radiology department is trying to upload a high-resolution image into S3 via the public internet. The image size is approximately 5GB. The research assistant is using S3 Transfer Acceleration (S3TA) for faster image upload. It turns out that S3TA did not result in an accelerated transfer. Given this scenario, which of the following is correct regarding the charges for this image transfer?

    The research assistant does not need to pay any transfer charges for the image upload

  • 57

    An e-commerce company runs a data archival workflow once a month for its on-premises data center which is connected to the AWS Cloud over a minimally used 10-Gbps Direct Connect connection using a private virtual interface to its virtual private cloud (VPC). The company's internet connection is 200 Mbps, and the usual archive size is around 140 TB, created on the first Friday of each month. The archive must be transferred and available in Amazon S3 by the next Monday morning. As a Solutions Architect Professional, which of the following options would you recommend as the LEAST expensive way to address the given use-case?

    Configure a public virtual interface on the 10-Gbps Direct Connect connection and then copy the data to S3 over the connection

  • 58

    A company wants to migrate its on-premises Oracle database to Aurora MySQL. The company has hired an AWS Certified Solutions Architect Professional to carry out the migration with minimal downtime using AWS DMS. The company has mandated that the migration must have minimal impact on the performance of the source database and the Solutions Architect must validate that the data was migrated accurately from the source to the target before the cutover. Which of the following solutions will MOST effectively address this use-case?

    Configure DMS data validation on the migration task so it can compare the source and target data for the DMS task and report any mismatches

  • 59

    The engineering team at a company is evaluating the Multi-AZ and Read Replica capabilities of RDS MySQL vs Aurora MySQL before they implement the solution in their production environment. The company has hired you as an AWS Certified Solutions Architect Professional to provide a detailed report on this technical requirement. Which of the following would you identify as correct regarding the given use-case? (Select three)

    The primary and standby DB instances are upgraded at the same time for RDS MySQL Multi-AZ. All instances are upgraded at the same time for Aurora MySQL, Multi-AZ deployments for both RDS MySQL and Aurora MySQL follow synchronous replication, Read Replicas can be manually promoted to a standalone database instance for RDS MySQL whereas Read Replicas for Aurora MySQL can be promoted to the primary instance

  • 60

    A multi-national digital media company wants to exit out of the business of owning and maintaining its own IT infrastructure so it can redeploy resources toward innovation in Artificial Intelligence and related areas to create a better customer experience. As part of this digital transformation, the media company wants to archive about 9 PB of data in its on-premises data center to durable long term storage. As a Solutions Architect Professional, what is your recommendation to migrate and store this data in the quickest and MOST cost-optimal way?

    Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier

  • 61

    A blog hosting company has an existing SaaS product architected as an on-premises three-tier web application. The blog content is posted and updated several times a day by multiple authors, so the Linux web servers serve content from a centralized file share on a NAS server. The CTO at the company has done an extensive technical review and highlighted to the company management that the existing infrastructure is not optimized. The company would like to migrate to AWS so that the resources can be dynamically scaled in response to load. The on-premises infrastructure and AWS Cloud are connected using Direct Connect. As a Solutions Architect Professional, which of the following solutions would you recommend to the company so that it can migrate the web infrastructure to AWS without delaying the content update process?

    Attach an EFS file system to the on-premises servers to act as the NAS server. Mount the same EFS file system to the AWS based web servers running on EC2 instances to serve the content

  • 62

    A solo entrepreneur is working on a new digital media startup and wants to have a hands-on understanding of the comparative pricing for various storage types available on AWS Cloud. The entrepreneur has created a test file of size 5 GB with some random data. Next, he uploads this test file into the Amazon S3 Standard storage class, provisions an EBS volume (General Purpose SSD (gp2)) with 50 GB of provisioned storage and copies the test file into the EBS volume, and lastly copies the test file into an EFS Standard Storage filesystem. At the end of the month, he analyses the bill for costs incurred on the respective storage types for the test file. Which of the following represents the correct order of the storage charges incurred for the test file on these three storage types?

    Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS
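
    The ordering follows from how each service bills: S3 charges only for the 5 GB stored, EFS Standard also charges for the 5 GB but at a higher per-GB rate, and EBS bills for the full 50 GB provisioned regardless of how much data is written. A rough back-of-the-envelope calculation using assumed us-east-1 list prices (actual rates vary by Region and change over time):

        # Assumed illustrative monthly prices (USD per GB-month); check current pricing.
        S3_STANDARD = 0.023
        EFS_STANDARD = 0.30
        EBS_GP2 = 0.10

        file_gb = 5
        provisioned_ebs_gb = 50  # EBS bills provisioned capacity, not data written

        s3_cost = file_gb * S3_STANDARD             # ~0.12 USD
        efs_cost = file_gb * EFS_STANDARD           # ~1.50 USD
        ebs_cost = provisioned_ebs_gb * EBS_GP2     # ~5.00 USD

        print(f"S3: ${s3_cost:.2f} < EFS: ${efs_cost:.2f} < EBS: ${ebs_cost:.2f}")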

  • 63

    A social media company is transitioning its IT infrastructure from its on-premises data center to the AWS Cloud. The company wants to move its data artifacts, 200 TB in total size, to Amazon S3 on the AWS Cloud in the shortest possible time. The company has hired you as an AWS Certified Solutions Architect Professional to provide consultancy for this data migration. In terms of the networking infrastructure, the company has a 500 Mbps Direct Connect connection to the AWS Cloud as well as an IPSec based AWS VPN connection using the public internet that supports a bandwidth of 1 Gbps. Which of the following solutions would you recommend to address the given use-case?

    Order three AWS Snowball Edge appliances, split and transfer the data to these three appliances and ship them to AWS which will then copy the data from the Snowball Edge appliances to S3

  • 64

    A web development studio runs hundreds of Proof-of-Concept (PoC) and demo applications on virtual machines running on an on-premises server. Many of the applications are simple PHP, JavaScript or Python web applications which are no longer actively developed and serve little traffic. As a Solutions Architect Professional, which of the following approaches would you suggest to migrate these applications to AWS with the lowest infrastructure cost and least development effort?

    Dockerize each application and then deploy to an ECS cluster running behind an Application Load Balancer

  • 65

    The DevOps team at a leading social media company uses Chef to automate the configurations of servers in the on-premises data center. The CTO at the company now wants to migrate the IT infrastructure to AWS Cloud with minimal changes to the server configuration workflows and at the same time account for less operational overhead post-migration to AWS. The company has hired you as an AWS Certified Solutions Architect Professional to recommend a solution for this migration. Which of the following solutions would you recommend to address the given use-case?

    Replatform the IT infrastructure to AWS Cloud by leveraging AWS OpsWorks as a configuration management service to automate the configurations of servers on AWS

  • 66

    A company wants to use SharePoint to deploy a content and collaboration platform with document and records management functionality. The company wants to establish an AWS Direct Connect link to connect the AWS Cloud with the internal corporate network using AWS Storage Gateway. Using AWS Direct Connect would enable the company to deliver on its performance benchmark requirements including a three second or less response time for sending small documents across the internal network. To facilitate this goal, the company wants to be able to resolve DNS queries for any resources in the on-premises network from the AWS VPC and also resolve any DNS queries for resources in the AWS VPC from the on-premises network. As a Solutions Architect Professional, which of the following solutions would you recommend for this use-case? (Select two)

    Create an outbound endpoint on Route 53 Resolver and then Route 53 Resolver can conditionally forward queries to resolvers on the on-premises network via this endpoint, Create an inbound endpoint on Route 53 Resolver and then DNS resolvers on the on-premises network can forward DNS queries to Route 53 Resolver via this endpoint

  • 67

    A company runs its two-tier web application from an on-premises data center. The web servers connect to a PostgreSQL database running on a different server. With the consistent increase in users, both the web servers and the database are underperforming leading to a bad user experience. The company has decided to migrate to AWS Cloud and has chosen Amazon Aurora PostgreSQL as its database solution. The company needs a solution that can scale the web servers and the database layer based on user traffic. Which of the following options will you combine to improve the application scalability and improve the user experience? (Select two)

    Enable Aurora Auto Scaling for Aurora Replicas. Deploy the application on Amazon EC2 instances configured behind an Auto Scaling Group, Configure EC2 instances behind an Application Load Balancer with Round Robin routing algorithm and sticky sessions enabled

  • 68

    A data analytics company stores event data in its on-premises PostgreSQL database. With the increase in the number of clients, the company is spending a lot of resources managing and maintaining the infrastructure while performance seems to be dwindling. The company has established connectivity between its on-premises systems and AWS Cloud already and wants a hybrid solution that can automatically buffer and transform event data in a scalable way and create visualizations to track and monitor events in real time. The transformed event data would be in semi-structured JSON format and have dynamic schemas. Which combination of services/technologies will you suggest to implement the requirements?

    Set up Amazon Kinesis Data Firehose to buffer events and an AWS Lambda function to process and transform the events. Set up Amazon OpenSearch to receive the transformed events. Use the Kibana endpoint that is deployed with OpenSearch to create near-real-time visualizations and dashboards

  • 69

    A global SaaS company has recently migrated its technology infrastructure from its on-premises data center to AWS Cloud. The engineering team has provisioned an RDS MySQL DB cluster for the company's flagship application. An analytics workload also runs on the same database which publishes near real-time reports for the management of the company. When the analytics workload runs, it slows down the SaaS application as well, resulting in bad user experience. As a Solutions Architect Professional, which of the following would you recommend as the MOST cost-optimal solution to fix this issue?

    Create a Read Replica in the same Region as the Master database and point the analytics workload there

  • 70

    The CTO at a multi-national retail company is pursuing an IT re-engineering effort to set up a hybrid network architecture that would facilitate the company's envisaged long-term data center migration from multiple on-premises data centers to the AWS Cloud. The current on-premises data centers are in different locations and are inter-linked via a private fiber. Due to the unique constraints of the existing legacy applications, using NAT is not an option. During the migration period, many critical applications will need access to other applications deployed in both the on-premises data centers and AWS Cloud. As a Solutions Architect Professional, which of the following options would you suggest to set up a hybrid network architecture that is highly available and supports high bandwidth for a multi-Region deployment post-migration?

    Set up a Direct Connect to each on-premises data center from different service providers and configure routing to failover to the other on-premises data center's Direct Connect in case one connection fails. Make sure that no VPC CIDR blocks overlap one another or the on-premises network

  • 71

    An e-commerce company has hired an AWS Certified Solutions Architect Professional to transform a standard three-tier web application architecture in AWS. Currently, the web and application tiers run on EC2 instances and the database tier runs on RDS MySQL. The company wants to redesign the web and application tiers to use API Gateway with Lambda Functions with the final goal of deploying the new application within 6 months. As an immediate short-term task, the Engineering Manager has mandated the Solutions Architect to reduce costs for the existing stack. Which of the following options should the Solutions Architect recommend as the MOST cost-effective and reliable solution?

    Provision On-Demand Instances for the web and application tiers and Reserved Instances for the database tier

  • 72

    An e-commerce company is planning to migrate its IT infrastructure from the on-premises data center to AWS Cloud to ramp up its capabilities well in time for the upcoming Holiday Sale season. The company’s CTO has hired you as an AWS Certified Solutions Architect Professional to design a distributed, highly available and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in a DynamoDB table. The application has seen sporadic traffic spikes in the past and the CTO wants the application to be able to scale during marketing campaigns to process the orders with minimal disruption. Which of the following options would you recommend as the MOST reliable solution to address these requirements?

    Ingest the orders in an SQS queue and trigger a Lambda function to process them

  • 73

    A multi-national retail company has built a hub-and-spoke network with AWS Transit Gateway. VPCs have been provisioned into multiple AWS accounts to facilitate network isolation and to enable delegated network administration. The organization is looking at a cost-effective, quick and secure way of maintaining this distributed architecture so that it provides access to services required by workloads in each of the VPCs. As a Solutions Architect Professional, which of the following options would you recommend for the given use-case?

    Use Centralized VPC Endpoints for connecting with multiple VPCs, also known as shared services VPC

  • 74

    A big data analytics company is leveraging AWS Cloud to process Internet of Things (IoT) sensor data from the field devices of an agricultural sciences company. The analytics company stores the IoT sensor data in Amazon DynamoDB tables. To detect anomalous behaviors and respond quickly, all changes to the items stored in the DynamoDB tables must be logged in near real-time. As an AWS Certified Solutions Architect Professional, which of the following solutions would you recommend to meet the requirements of the given use-case so that it requires minimal custom development and infrastructure maintenance?

    Set up DynamoDB Streams to capture and send updates to a Lambda function that outputs records to Kinesis Data Analytics (KDA) via Kinesis Data Streams (KDS). Detect and analyze anomalies in KDA and send notifications via SNS

  • 75

    A web application is hosted on a fleet of Amazon EC2 instances running behind an Application Load Balancer (ALB). A custom functionality has mandated the need for a static IP address for the ALB. As a solutions architect, how will you implement this requirement while keeping the costs to a minimum?

    Register the Application Load Balancer behind a Network Load Balancer that will provide the necessary static IP address to the ALB

  • 76

    A web development company uses FTP servers for their growing list of 200 odd clients to facilitate remote data sharing of media assets. To reduce management costs and time, the company has decided to move to AWS Cloud. The company is looking for an AWS solution that can offer increased scalability with reduced costs. Also, the company's policy mandates complete privacy and isolation of data for each client. Which solution will you recommend for these requirements?

    Create a single Amazon S3 bucket. Create an IAM user for each client. Group these users under an IAM policy that permits access to sub-folders within the bucket via the use of the 'username' Policy variable. Train the clients to use an S3 client instead of an FTP client
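
    A minimal sketch of the IAM policy using the aws:username policy variable so each client only reaches their own prefix in the shared bucket; the bucket name and group name are hypothetical:

        import json
        import boto3

        iam = boto3.client("iam")

        # Policy scoping every client (IAM user) to their own prefix in one shared bucket.
        # ${aws:username} resolves to the name of the requesting IAM user at evaluation time.
        policy = {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": "s3:ListBucket",
                    "Resource": "arn:aws:s3:::client-media-share",
                    "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
                },
                {
                    "Effect": "Allow",
                    "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                    "Resource": "arn:aws:s3:::client-media-share/${aws:username}/*",
                },
            ],
        }

        iam.put_group_policy(
            GroupName="ftp-clients",                 # hypothetical group holding all client users
            PolicyName="per-client-prefix-access",
            PolicyDocument=json.dumps(policy),
        )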

  • 77

    An e-commerce company is migrating from its on-premises data center to AWS Cloud in a phased manner. As part of the test deployments, the company chose Amazon FSx for Windows File Server with Single-AZ 2 deployment as one of the solutions. After viability testing, it became apparent that the company will need a highly available and fault-tolerant shared Windows file data system to cater to its data storage requirements. As a solutions architect, what changes will you suggest in the current configuration to make it highly available while keeping the downtime low?

    Set up a new Amazon FSx file system with a Multi-AZ deployment type. Leverage AWS DataSync to transfer data from the old file system to the new one. Point the application to the new Multi-AZ file system

  • 78

    A company has decided to move their existing data warehouse solution to Amazon Redshift. Being apprehensive about moving their critical data directly, the company has decided to test run and migrate a part of their data warehouse to Amazon Redshift using AWS Database Migration Service (DMS) task. As a solutions architect, which of the following would you suggest as the key points of consideration while running the DMS task? (Select two)

    Add the subnet CIDR range or the IP address of the replication instance to the inbound rules of the Amazon Redshift cluster security group, Your Amazon Redshift cluster must be in the same account and the same AWS Region as the replication instance

  • 79

    A company runs a mobile app-based health tracking solution. The mobile app sends 2 KB of data to the company’s backend servers every 2 minutes. The user data is stored in a DynamoDB table. The development team runs a nightly procedure to scan the table for extracting and aggregating the data from the previous day. These insights are then stored on Amazon S3 in JSON files for each user (daily average file size per user is approximately 1 MB). Approximately 50,000 end-users in the US are then alerted via SNS push notifications the next morning, as the new insights are available to be parsed and visualized in the mobile app. You have been hired as an AWS Certified Solutions Architect Professional to recommend a cost-efficient solution to optimize the backend design. Which of the following options would you suggest? (Select two)

    Set up a new DynamoDB table each day and drop the table for the previous day after its data is written on S3, Set up an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput

  • 80

    An Amazon Redshift cluster is used to store sensitive information of a business-critical application. The compliance guidelines mandate tracking audit logs of the Redshift cluster. The business needs to store the audit logs securely by encrypting the logs at rest. The logs are to be stored for a year at least and audits need to be conducted on the audit logs every month. Which of the following is a cost-effective solution that fulfills the requirement of storing the logs securely while having access to the logs for monthly audits?

    Enable default encryption on the Amazon S3 bucket that uses Amazon S3-managed keys (SSE-S3) encryption (AES-256) for audit logging. Use Amazon Redshift Spectrum to query the data for monthly audits

  • 81

    A media company uses Amazon S3 under the hood to power its offerings which allow the customers to upload and view the media files immediately. Currently, all the customer files are uploaded directly under a single S3 bucket. The systems administration team has started seeing scalability issues where customer file uploads are failing during the peak access hours with more than 5000 requests per second. Which of the following represents the MOST resource-efficient and cost-optimal way of resolving this issue?

    Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations

  • 82

    An on-premises data center, set up a decade ago, hosts all the applications of a business. The business now wants to move to AWS Cloud. The documentation of these systems is outdated and complete knowledge of all existing workloads is absent. The data center hosts a mix of Windows and Linux virtual machines. As a solutions architect, you need to provide a plan to migrate all the applications to the cloud. How will you gather the necessary data of the existing machines?

    Install the AWS Application Discovery Service agent on each of the VMs to collect the configuration and utilization data

  • 83

    An e-commerce company traditionally hosted its application APIs on Amazon EC2 instances. Recently, the company has started migrating to a serverless architecture that is built using Amazon API Gateway, AWS Lambda functions, and Amazon DynamoDB. The Lambda functions and EC2 instances share the same Virtual Private Cloud (VPC). The Lambda functions hold the logic to fetch data from a third-party service provider. After moving a portion of functionality to the serverless model, users have started complaining of API Gateway 5XX errors. The third-party service provider is unable to see any requests from the serverless architecture. Upon inspection, the development team can see that the Lambda functions have created some entries in the generated logs. Which solution would you recommend to troubleshoot this issue?

    NAT Gateway has to be configured to give internet access to the Amazon VPC connected Lambda function

  • 84

    A research agency processes multiple compressed (gzip) CSV files containing data about contagious diseases for the past month aggregated from healthcare facilities. The files total about 200 GB and are stored in the Amazon S3 Glacier Flexible Retrieval storage class. As per the reporting guidelines, the agency needs to query a portion of this data to prepare a report every month. Which of the following is the most cost-effective way to query this data?

    Ingest the data into Amazon S3 from S3 Glacier and query the required data with Amazon S3 Select
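
    Once the data is back in Amazon S3, S3 Select can pull just the needed rows and columns from a compressed CSV in place instead of downloading the full ~200 GB. A minimal boto3 sketch; the bucket, key, and column names are placeholders:

        import boto3

        s3 = boto3.client("s3")

        # Query a gzip-compressed CSV in place; only the matching rows are returned.
        response = s3.select_object_content(
            Bucket="disease-reports",                    # hypothetical bucket
            Key="2024-05/aggregated.csv.gz",             # hypothetical key
            ExpressionType="SQL",
            Expression="SELECT s.facility_id, s.cases FROM s3object s WHERE s.region = 'NE'",
            InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
            OutputSerialization={"JSON": {}},
        )

        # The result arrives as an event stream of Records payloads.
        for event in response["Payload"]:
            if "Records" in event:
                print(event["Records"]["Payload"].decode())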

  • 85

    A solutions architect at a company is managing the migration of the company's IT infrastructure from its on-premises data center to AWS Cloud. The architect needs to automate VPC creation to enforce the company's network and security standards which mandate that each application is isolated in its own VPC. The solution must also ensure that the CIDR range used in each VPC is unique. Which of the following options would you recommend to address these requirements?

    Deploy the VPC infrastructure using AWS CloudFormation and leverage a custom resource to request a unique CIDR range from an external IP address management (IPAM) service

  • 86

    The research department at a healthcare company stores its entire data on Amazon S3. The research department is concerned about the increased costs of storing large amounts of data, most of which is in the form of images. As of now, all data is stored using the S3 Standard storage class. The research department has the following data archival requirements: 1. Need optimum storage for medical reports that are accessed infrequently (about twice a year). But, when accessed, the data has to be retrieved in real-time. 2. Need optimum storage for medical images that are accessed very rarely but have to be stored durably for up to 10 years. These images can be retrieved in a flexible time frame. What will you recommend as the most cost-effective storage option that addresses the given requirements?

    Amazon S3 Glacier Instant Retrieval is the best fit for data accessed twice a year. Amazon S3 Glacier Deep Archive is cost-effective for data that is stored for long-term retention

  • 87

    A healthcare company is migrating sensitive data from its on-premises data center to AWS Cloud via an existing AWS Direct Connect connection. The company must ensure confidentiality and integrity of the data in transit to the AWS VPC. Which of the following options should be combined to set up the most cost-effective connection between your on-premises data center and AWS? (Select three)

    Create an IPsec tunnel between your customer gateway appliance and the virtual private gateway, Create a VPC with a virtual private gateway, Set up a public virtual interface on the Direct Connect connection

  • 88

    A social media company is migrating its legacy web application to the AWS Cloud. Since the application is complex and may take several months to refactor, the CTO at the company tasked the development team to build an ad-hoc solution of using CloudFront with a custom origin pointing to the SSL endpoint URL for the legacy web application until the replacement is ready and deployed. The ad-hoc solution has worked for several weeks, however, all browser connections recently began showing an HTTP 502 Bad Gateway error with the header "X-Cache: Error from CloudFront". Network monitoring services show that the HTTPS port 443 on the legacy web application is open and responding to requests. As an AWS Certified Solutions Architect Professional, which of the following options will you attribute as the likely cause of the error, and what is your recommendation to resolve this issue?

    The SSL certificate on the legacy web application server has expired. Reissue the SSL certificate on the web server that is signed by a globally recognized certificate authority (CA). Install the full certificate chain onto the legacy web application server

  • 89

    A solutions architect at a company is looking at connecting the company's Amazon EC2 instances to the confidential data stored on Amazon S3 storage. The architect has a requirement to use private IP addresses from the company's VPC to access Amazon S3 while also having the ability to access S3 buckets from the company's on-premises systems. In a few months, the S3 buckets will also be accessed from a VPC in another AWS Region. What is the BEST way to build a solution to meet this requirement?

    Set up Interface endpoints for Amazon S3

  • 90

    A legacy web application runs 24/7 and it is currently hosted on an on-premises server with an outdated version of the Operating System (OS). The OS support will end soon and the team wants to expedite migration to an Amazon EC2 instance with an updated version of the OS. The application also references 90 TB of static data in the form of images that need to be moved to AWS. How should this be accomplished most cost-effectively?

    Replatform the server to Amazon EC2 while choosing an AMI of your choice to cater to the OS requirements. Use AWS Snowball to transfer the image data to Amazon S3

  • 91

    A retail company is deploying a critical application on multiple EC2 instances in a VPC. Per the company policy, any failed client connections to the EC2 instances must be logged. Which of the following options would you recommend as the MOST cost-effective solution to address these requirements?

    Set up VPC Flow Logs for the elastic network interfaces associated with the instances and configure the VPC Flow Logs to be filtered for rejected traffic. Publish the Flow Logs to CloudWatch Logs
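
    A minimal boto3 sketch of flow logs that capture only rejected traffic on the instances' network interfaces and publish it to CloudWatch Logs; the ENI IDs, log group name, and IAM role ARN are placeholders:

        import boto3

        ec2 = boto3.client("ec2")

        # Capture only rejected connections on the instances' ENIs and publish them
        # to CloudWatch Logs; all identifiers below are hypothetical.
        ec2.create_flow_logs(
            ResourceType="NetworkInterface",
            ResourceIds=["eni-0123456789abcdef0", "eni-0fedcba9876543210"],
            TrafficType="REJECT",
            LogDestinationType="cloud-watch-logs",
            LogGroupName="/vpc/critical-app/rejected",
            DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-to-cwlogs",
        )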

  • 92

    A financial services firm intends to migrate its IT operations to AWS. The security team is establishing a framework to ensure that AWS best practices are being followed. AWS management console is the only way used by the IT teams to provision AWS resources. As per the firm's compliance requirements, the AWS resources need to be maintained in a particular configuration and audited regularly for unauthorized changes. As an AWS Certified Solutions Architect Professional, how will you implement this requirement? (Select two)

    Leverage AWS Config rules for auditing changes to AWS resources periodically and monitor the compliance of the configuration. Set up AWS Config custom rules using AWS Lambda to create a test-driven development approach, and finally automate the evaluation of configuration changes against the required controls, Leverage AWS CloudTrail events to review management activities of all AWS accounts. Make sure that CloudTrail is enabled in all accounts for the available AWS services. Enable CloudTrail trails and encrypt CloudTrail event log files with an AWS KMS key and monitor the recorded events via CloudWatch Logs

  • 93

    A media company has its users accessing the content from different platforms including mobile, tablet, and desktop. Each platform is customized to provide a different user experience based on various viewing modes. Path-based headers are used to serve the content for different platforms, hosted on different Amazon EC2 instances. An Auto Scaling group (ASG) has also been configured for the EC2 instances to ensure that the solution is highly scalable. Which of the following combination of services can help minimize the cost while maximizing the performance? (Select two)

    Amazon CloudFront with Lambda@Edge, Application Load Balancer

  • 94

    A payment service provider company has a legacy application built on a high-throughput, resilient queueing system to send messages to its customers. The implementation relied on a manually-managed RabbitMQ cluster and consumers. The system was able to process a large load of messages within a reasonable delivery time. The cluster and consumers were both deployed on Amazon Elastic Compute Cloud (Amazon EC2) instances. However, when the messages in the queue piled up due to network failures on the customer side, the latency of the overall flow was affected, resulting in a breach of the service level agreement (SLA). The development team had to manually scale the queues to resolve the issue. Also, while doing manual upgrades on RabbitMQ and the hosting operating system, the company faced downtime. The company is growing and has to maintain a strict delivery time SLA. The company is now looking for a serverless solution for its messaging queues. The queue functions of handling concurrency, message delays and retries, maintaining message order, secure delivery, and scalability are needed in the proposed solution architecture. Which of the following would you propose for a cost-effective solution for the requirement?

    Design the serverless architecture by use of Amazon Simple Queue Service (SQS) with Amazon ECS Fargate. To save costs, run the Amazon SQS FIFO queues and Amazon ECS Fargate tasks only when needed

  • 95

    A media streaming service delivers billions of hours of content from Amazon S3 to customers around the world. Amazon S3 also serves as the data lake for its data analytics solution. The data lake has a staging zone where intermediary query results are kept only for 24 hours. These results are also heavily referenced by other parts of the analytics pipeline. Which of the following is the MOST cost-effective solution to store this intermediary query data?

    Store the intermediary query results in S3 Standard storage class

  • 96

    A company has an S3 bucket that contains files in two different folders - s3://my-bucket/images and s3://my-bucket/thumbnails. When an image is newly uploaded, it is viewed several times. After a detailed analysis, the company has noticed that after 45 days those image files are rarely requested, but the thumbnails still are. After 180 days, the company would like to archive the image files and the thumbnails. Overall, the company would like the solution to remain highly available and resilient to the failure of a whole AZ. Which of the following options can be combined to represent the most cost-efficient solution for the given scenario? (Select two)

    Configure a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days, Configure a Lifecycle Policy to transition all objects to Glacier after 180 days
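
    A minimal boto3 sketch of the two lifecycle rules: objects under the images/ prefix transition to Standard-IA after 45 days, and all objects transition to Glacier after 180 days. The bucket name matches the one given in the question:

        import boto3

        s3 = boto3.client("s3")

        # Rule 1: only objects under images/ move to Standard-IA after 45 days.
        # Rule 2: all objects (empty prefix) move to Glacier after 180 days.
        s3.put_bucket_lifecycle_configuration(
            Bucket="my-bucket",
            LifecycleConfiguration={
                "Rules": [
                    {
                        "ID": "images-to-ia",
                        "Status": "Enabled",
                        "Filter": {"Prefix": "images/"},
                        "Transitions": [{"Days": 45, "StorageClass": "STANDARD_IA"}],
                    },
                    {
                        "ID": "all-to-glacier",
                        "Status": "Enabled",
                        "Filter": {"Prefix": ""},
                        "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
                    },
                ]
            },
        )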

  • 97

    A firm has created different AWS Virtual Private Cloud (VPCs) for each project belonging to a client. For inter-project functionality, the firm needs to connect to a load balancer in VPC V1 from the Amazon EC2 instance in VPC V2. How will you set up the access to the internal load balancer for this use case in the most cost-effective manner?

    Establish connectivity between VPC V1 and VPC V2 using VPC peering. Enable DNS resolution from the source VPC for VPC peering. Establish the necessary routes, security group rules, and network access control list (ACL) rules to allow traffic between the VPCs

  • 98

    A solutions architect at a retail company has set up a workflow to ingest the clickstream data into the raw zone of the S3 data lake. The architect wants to run some SQL-based data sanity checks on the raw zone of the data lake. What AWS services would you suggest for this requirement such that the solution is cost-effective and easy to maintain?

    Use Athena to run SQL based analytics against S3 data
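
    A minimal boto3 sketch of submitting one such sanity-check query with Athena; the database, table, column, and results-bucket names are hypothetical:

        import boto3

        athena = boto3.client("athena")

        # Example sanity check: count clickstream rows with a missing user_id in the raw zone.
        athena.start_query_execution(
            QueryString="SELECT count(*) FROM clickstream_raw WHERE user_id IS NULL",
            QueryExecutionContext={"Database": "raw_zone"},
            ResultConfiguration={"OutputLocation": "s3://query-results-bucket/sanity-checks/"},
        )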

  • 99

    A company wants to migrate its on-premises resources to AWS. The IT environment consists of 200 virtual machines (VMs) with a combined storage capacity of 50 TB. While the majority of VMs may be taken down for migration since they are only used during business hours, others are mission-critical, so the downtime must be minimized. The on-premises network engineer has allocated 10 Mbps of internet bandwidth for the migration. The capacity of the on-premises network has peaked and increasing it would be prohibitively expensive. You have been hired as an AWS Certified Solutions Architect Professional to develop a migration strategy that can be implemented in the next three months. Which of the following would you recommend?

    Migrate mission-critical VMs using AWS Application Migration Service (MGN). Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball Edge. Leverage VM Import/Export to import the VMs into Amazon EC2

  • 100

    An e-commerce company has created a data warehouse using Redshift that is used to analyze data from Amazon S3. From the usage patterns, the analytics team has detected that after 30 days, the data is rarely queried in Redshift and it's not "hot data" anymore. The team would like to preserve the SQL querying capability on the data and get the queries started immediately. Also, the team wants to adopt a pricing model that allows the company to save the maximum amount of cost on Redshift. Which of the following options would you recommend? (Select two)

    Analyze the cold data with Athena, Transition the data to S3 Standard IA after 30 days