AWS Certified Solutions Architect - Associate SAA-C02 Practice Test
Question 1
A company is backing up on-premises databases to local file server shares using the SMB protocol. The company requires
immediate access to 1 week of backup files to meet recovery objectives. Recovery after a week is less likely to occur, and
the company can tolerate a delay in accessing those older backup files.
What should a solutions architect do to meet these requirements with the LEAST operational effort?
-
A. Deploy Amazon FSx for Windows File Server to create a file system with exposed file shares with sufficient storage to hold all the desired backups.
-
B. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway.
-
C. Deploy Amazon Elastic File System (Amazon EFS) to create a file system with exposed NFS shares with sufficient storage to hold all the desired backups.
-
D. Continue to back up to the existing file shares. Deploy AWS Database Migration Service (AWS DMS) and define a copy task to copy backup files older than 1 week to Amazon S3, and delete the backup files from the local file store.
Answer:
A
Question 2
A company serves content to its subscribers across the world using an application running on AWS. The application has
several Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to a recent change in
copyright restrictions, the chief information officer (CIO) wants to block access for certain countries.
Which action will meet these requirements?
-
A. Modify the ALB security group to deny incoming traffic from blocked countries.
-
B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries.
-
C. Use Amazon CloudFront to serve the application and deny access to blocked countries.
-
D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.
Answer:
C
Explanation:
"block access for certain countries." You can use geo restriction, also known as geo blocking, to prevent users in specific
geographic locations from accessing content that you're distributing through a CloudFront web distribution.
Reference: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
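As a sketch of how the geo restriction in option C is expressed: a CloudFront distribution config carries a `Restrictions.GeoRestriction` block that lists ISO 3166-1 country codes to deny. The helper below only builds that parameter shape (the country codes here are placeholders, not real restrictions); in practice the block would be embedded in the config passed to CloudFront's `UpdateDistribution` API, for example via boto3's `update_distribution`.

```python
def geo_restriction_config(blocked_countries):
    """Build the GeoRestriction block CloudFront expects when
    denying access from specific countries (a 'blacklist')."""
    return {
        "GeoRestriction": {
            "RestrictionType": "blacklist",     # deny the listed countries
            "Quantity": len(blocked_countries),  # CloudFront requires the count
            "Items": list(blocked_countries),    # ISO 3166-1 alpha-2 codes
        }
    }

# Placeholder country codes for illustration only.
restrictions = geo_restriction_config(["XX", "YY"])
print(restrictions["GeoRestriction"]["Quantity"])  # → 2
```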
Question 3
A company requires a durable backup storage solution for its on-premises database servers while ensuring on-premises
applications maintain access to these backups for quick recovery. The company will use AWS storage services as the
destination for these backups. A solutions architect is designing a solution with minimal operational overhead.
Which solution should the solutions architect implement?
-
A. Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.
-
B. Back up the databases to an AWS Storage Gateway volume gateway and access it using the Amazon S3 API.
-
C. Transfer the database backup files to an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance.
-
D. Back up the database directly to an AWS Snowball device and use lifecycle rules to move the data to Amazon S3 Glacier Deep Archive.
Answer:
A
Question 4
A company is using Amazon DynamoDB with provisioned throughput for the database tier of its ecommerce website. During
flash sales, customers experience periods of time when the database cannot handle the high number of transactions taking
place. This causes the company to lose transactions. During normal periods, the database performs appropriately.
Which solution solves the performance problem the company faces?
-
A. Switch DynamoDB to on-demand mode during flash sales.
-
B. Implement DynamoDB Accelerator for fast in-memory performance.
-
C. Use Amazon Kinesis to queue transactions for processing to DynamoDB.
-
D. Use Amazon Simple Queue Service (Amazon SQS) to queue transactions to DynamoDB.
Answer:
A
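Option A works because switching the table to on-demand (pay-per-request) billing removes the provisioned-throughput ceiling that throttles writes during flash sales. A minimal sketch of the parameters one would pass to DynamoDB's `UpdateTable` call (the table name is hypothetical); with boto3 this dict would be supplied as `dynamodb.update_table(**params)` before the sale begins.

```python
def on_demand_switch_params(table_name):
    """Parameters for DynamoDB UpdateTable that flip a table from
    provisioned throughput to on-demand (pay-per-request) billing."""
    return {
        "TableName": table_name,
        "BillingMode": "PAY_PER_REQUEST",  # on-demand mode
    }

# Hypothetical table name for illustration.
params = on_demand_switch_params("orders")
print(params["BillingMode"])  # → PAY_PER_REQUEST
```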
Question 5
A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon
EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency
between nodes.
Which combination of network solutions will meet these requirements? (Choose two.)
-
A. Enable and configure enhanced networking on each EC2 instance.
-
B. Group the EC2 instances in separate accounts.
-
C. Run the EC2 instances in a cluster placement group.
-
D. Attach multiple elastic network interfaces to each EC2 instance.
-
E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
Answer:
C, D
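A cluster placement group (option C) packs instances physically close together within a single Availability Zone, which minimizes network latency between nodes. A sketch of the parameter shapes for EC2's `CreatePlacementGroup` call and the `Placement` field on `RunInstances` (the group name is made up for illustration):

```python
def cluster_group_params(group_name):
    """Parameters for EC2 CreatePlacementGroup requesting the
    'cluster' strategy, which co-locates instances for low latency."""
    return {"GroupName": group_name, "Strategy": "cluster"}

def launch_placement(group_name):
    """The Placement block passed to EC2 RunInstances so a new
    instance is launched inside the cluster placement group."""
    return {"GroupName": group_name}

# Hypothetical group name for illustration.
group = cluster_group_params("stream-nodes")
placement = launch_placement("stream-nodes")
print(group["Strategy"])  # → cluster
```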
Question 6
A company is running an online transaction processing (OLTP) workload on AWS. This workload uses an unencrypted
Amazon RDS DB instance in a Multi-AZ deployment. Daily database snapshots are taken from this instance.
What should a solutions architect do to ensure the database and snapshots are always encrypted moving forward?
-
A. Encrypt a copy of the latest DB snapshot. Replace the existing DB instance by restoring the encrypted snapshot.
-
B. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB instance.
-
C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the encrypted snapshot to an existing DB instance.
-
D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS) managed keys (SSE-KMS).
Answer:
A
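Option A reflects the standard pattern for encrypting an existing unencrypted RDS instance: copy a snapshot with a KMS key (the copy becomes encrypted), then restore the encrypted copy as a new DB instance that replaces the original. A sketch of the parameter shapes for `CopyDBSnapshot` and `RestoreDBInstanceFromDBSnapshot` (all identifiers and the KMS key alias are hypothetical):

```python
def encrypted_copy_params(source_snapshot, target_snapshot, kms_key_id):
    """Parameters for RDS CopyDBSnapshot; supplying KmsKeyId makes
    the copy encrypted even though the source snapshot is not."""
    return {
        "SourceDBSnapshotIdentifier": source_snapshot,
        "TargetDBSnapshotIdentifier": target_snapshot,
        "KmsKeyId": kms_key_id,  # enables encryption on the copy
    }

def restore_params(target_snapshot, new_instance_id):
    """Parameters for RDS RestoreDBInstanceFromDBSnapshot; the
    restored instance inherits the snapshot's encryption."""
    return {
        "DBInstanceIdentifier": new_instance_id,
        "DBSnapshotIdentifier": target_snapshot,
    }

# Hypothetical identifiers for illustration.
copy = encrypted_copy_params("oltp-daily", "oltp-daily-encrypted",
                             "alias/rds-backups")
restore = restore_params("oltp-daily-encrypted", "oltp-encrypted")
print("KmsKeyId" in copy)  # → True
```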
Question 7
A company's database is hosted on an Amazon Aurora MySQL DB cluster in the us-east-1 Region. The database is 4 TB in
size. The company needs to expand its disaster recovery strategy to the us-west-2 Region. The company must have the
ability to fail over to us-west-2 with a recovery time objective (RTO) of 15 minutes.
What should a solutions architect recommend to meet these requirements?
-
A. Create a Multi-Region Aurora MySQL DB cluster in us-east-1 and us-west-2. Use an Amazon Route 53 health check to monitor us-east-1 and fail over to us-west-2 upon failure.
-
B. Take a snapshot of the DB cluster in us-east-1. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to copy the snapshot to us-west-2 and restore the snapshot in us-west-2 when failure is detected.
-
C. Create an AWS CloudFormation script to create another Aurora MySQL DB cluster in us-west-2 in case of failure. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to deploy the AWS CloudFormation stack in us-west-2 when failure is detected.
-
D. Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to promote the DB cluster in us-west-2 when failure is detected.
Answer:
B
Explanation:
Reference: https://docs.aws.amazon.com/aws-backup/latest/devguide/eventbridge.html
Question 8
A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its
storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no
additional cost.
How can these requirements be met?
-
A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.
-
B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
-
C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
-
D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
Answer:
B
Question 9
A company wants to move its on-premises network-attached storage (NAS) to AWS. The company wants to make the data
available to any Linux instances within its VPC and ensure changes are automatically synchronized across all instances
accessing the data store. The majority of the data is accessed very rarely, and some files are accessed by multiple users at
the same time.
Which solution meets these requirements and is MOST cost-effective?
-
A. Create an Amazon Elastic Block Store (Amazon EBS) snapshot containing the data. Share it with users within the VPC.
-
B. Create an Amazon S3 bucket that has a lifecycle policy set to transition the data to S3 Standard-Infrequent Access (S3 Standard-IA) after the appropriate number of days.
-
C. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the throughput mode to Provisioned with the required amount of IOPS to support concurrent usage.
-
D. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the lifecycle policy to transition the data to EFS Infrequent Access (EFS IA) after the appropriate number of days.
Answer:
D
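Option D relies on EFS lifecycle management: files that have not been read for a set number of days are moved automatically to the EFS Infrequent Access (EFS IA) storage class, which cuts cost for rarely accessed data. A sketch of the `LifecyclePolicies` parameter for EFS's `PutLifecycleConfiguration` API (EFS accepts a fixed set of transition values, such as `AFTER_30_DAYS`; the choice of 30 days here is illustrative):

```python
def efs_ia_lifecycle(days):
    """LifecyclePolicies for EFS PutLifecycleConfiguration that
    transition files to Infrequent Access after `days` with no reads.
    `days` must map to one of the values EFS supports (7, 14, 30, ...)."""
    return [{"TransitionToIA": f"AFTER_{days}_DAYS"}]

# Illustrative: transition to EFS IA after 30 days without access.
policies = efs_ia_lifecycle(30)
print(policies[0]["TransitionToIA"])  # → AFTER_30_DAYS
```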
Question 10
A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several
applications that write to the same tables. The applications need to be migrated one by one with a month in between each
migration. Management has expressed concerns that the database has a high number of reads and writes. The data must be
kept in sync across both databases throughout the migration.
What should a solutions architect recommend?
-
A. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC) replication task and a table mapping to select all tables.
-
B. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
-
C. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
-
D. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.
Answer:
B
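In DMS terms, option B's "full load plus CDC" is the migration type `full-load-and-cdc`, and "select all tables" is a table-mapping selection rule with `%` wildcards, so the initial copy plus ongoing replication keeps both databases in sync during the month-by-month application cutover. A sketch of the parameter shape for DMS's `CreateReplicationTask` API (the task identifier and ARNs are placeholders):

```python
import json

def dms_task_params(task_id, source_arn, target_arn, instance_arn):
    """Parameters for DMS CreateReplicationTask: an initial full load
    followed by ongoing change data capture (CDC)."""
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all-tables",
            # '%' wildcards select every schema and every table
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": task_id,
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load-and-cdc",  # full load + CDC
        "TableMappings": json.dumps(table_mappings),
    }

# Placeholder identifier and ARNs for illustration.
params = dms_task_params("oracle-to-aurora", "arn:src", "arn:tgt", "arn:ri")
print(params["MigrationType"])  # → full-load-and-cdc
```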