Amazon AWS Certified Solutions Architect - Professional SAP-C01 Practice Test


Question 1

The CFO of a company wants to allow one of his employees to view only the AWS usage report page.
Which of the following IAM policy statements allows the user to access the AWS usage report page?

  • A. "Effect": "Allow", "Action": ["Describe"], "Resource": "Billing"
  • B. "Effect": "Allow", "Action": ["aws-portal:ViewBilling"], "Resource": "*"
  • C. "Effect": "Allow", "Action": ["aws-portal:ViewUsage"], "Resource": "*"
  • D. "Effect": "Allow", "Action": ["AccountUsage"], "Resource": "*"
Answer:

C

Explanation:
AWS Identity and Access Management (IAM) is a web service that allows organizations to manage users and user permissions
for various AWS services. If the CFO wants to allow only AWS usage report page access, the policy for that IAM user would be
as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "aws-portal:ViewUsage"
      ],
      "Resource": "*"
    }
  ]
}
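For reference, the policy above could be attached to the employee's IAM user programmatically. The following is a minimal boto3 sketch; the user name "finance-analyst" and the policy name are placeholders, not part of the question.

import json
import boto3

iam = boto3.client("iam")

# Policy granting access only to the AWS usage report page (same document as above).
usage_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["aws-portal:ViewUsage"], "Resource": "*"}
    ],
}

# Attach it as an inline policy on the employee's IAM user.
iam.put_user_policy(
    UserName="finance-analyst",
    PolicyName="ViewUsageReportOnly",
    PolicyDocument=json.dumps(usage_policy),
)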
Reference:
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-permissions-ref.html


Question 2

A company hosts a large on-premises MySQL database at its main office that supports an issue tracking system used by
employees around the world. The company already uses AWS for some workloads and has created an Amazon Route 53
entry for the database endpoint that points to the on-premises database. Management is concerned about the database
being a single point of failure and wants a solutions architect to migrate the database to AWS without any data loss or
downtime.
Which set of actions should the solutions architect implement?

  • A. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load from the on-premises database to Aurora. Update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.
  • B. During nonbusiness hours, shut down the on-premises database and create a backup. Restore this backup to an Amazon Aurora DB cluster. When the restoration is complete, update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.
  • C. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load with continuous replication from the on-premises database to Aurora. When the migration is complete, update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.
  • D. Create a backup of the database and restore it to an Amazon Aurora multi-master cluster. This Aurora cluster will be in a master-master replication configuration with the on-premises database. Update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.
Answer:

C
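The key to option C is that AWS DMS can perform a full load followed by ongoing change data capture (CDC), so the Aurora cluster stays in sync until cutover and the migration needs no downtime or data loss. A rough boto3 sketch is shown below; all ARNs, identifiers, and the record name are placeholders.

import json
import boto3

dms = boto3.client("dms")
route53 = boto3.client("route53")

# Full load plus ongoing replication (CDC) keeps Aurora in sync with the on-premises MySQL database.
dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "all-tables",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)

# Once replication has caught up, repoint the existing Route 53 record at the Aurora cluster endpoint.
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "db.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{"Value": "tracker.cluster-abc123.us-east-1.rds.amazonaws.com"}],
            },
        }]
    },
)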


Question 3

A company has an application that runs on a fleet of Amazon EC2 instances and stores 70 GB of device data for each
instance in Amazon S3. Recently, some of the S3 uploads have been failing. At the same time, the company is seeing an
unexpected increase in storage data costs. The application code cannot be modified.
What is the MOST efficient way to upload the device data to Amazon S3 while managing storage costs?

  • A. Upload device data using a multipart upload. Use the AWS CLI to list incomplete parts to address the failed S3 uploads. Enable the lifecycle policy for the incomplete multipart uploads on the S3 bucket to delete the old uploads and prevent new failed uploads from accumulating.
  • B. Upload device data using S3 Transfer Acceleration. Use the AWS Management Console to address the failed S3 uploads. Use the Multi-Object Delete operation nightly to delete the old uploads.
  • C. Upload device data using a multipart upload. Use the AWS Management Console to list incomplete parts to address the failed S3 uploads. Configure a lifecycle policy to archive continuously to Amazon S3 Glacier.
  • D. Upload device data using S3 Transfer Acceleration. Use the AWS Management Console to list incomplete parts to address the failed S3 uploads. Enable the lifecycle policy for the incomplete multipart uploads on the S3 bucket to delete the old uploads and prevent new failed uploads from accumulating.
Answer:

C

Explanation:
Reference: https://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
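Incomplete multipart uploads are a common source of hidden storage costs, because parts that were already uploaded keep accruing charges until the upload is completed or aborted. The sketch below is the programmatic equivalent of inspecting them, plus a lifecycle rule that continuously archives device data to S3 Glacier as in option C; the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")
bucket = "device-data-bucket"  # placeholder bucket name

# List in-progress (incomplete) multipart uploads, whose stored parts still incur charges.
for upload in s3.list_multipart_uploads(Bucket=bucket).get("Uploads", []):
    print(upload["Key"], upload["UploadId"], upload["Initiated"])

# Lifecycle rule that continuously archives device data to S3 Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-device-data",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        }]
    },
)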


Question 4

In AWS, which security aspects are the customer's responsibility? (Choose four.)

  • A. Security Group and ACL (Access Control List) settings
  • B. Decommissioning storage devices
  • C. Patch management on the EC2 instance's operating system
  • D. Life-cycle management of IAM credentials
  • E. Controlling physical access to compute resources
  • F. Encryption of EBS (Elastic Block Storage) volumes
Answer:

A, C, D, F


Question 5

A company hosts a legacy application that runs on an Amazon EC2 instance inside a VPC without internet access. Users
access the application with a desktop program installed on their corporate laptops. Communication between the laptops and
the VPC flows through AWS Direct Connect (DX). A new requirement states that all data in transit must be encrypted
between users and the VPC.
Which strategy should a solutions architect use to maintain consistent network performance while meeting this new
requirement?

  • A. Create a client VPN endpoint and configure the laptops to use an AWS client VPN to connect to the VPC over the internet.
  • B. Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
  • C. Create a new Site-to-Site VPN that connects to the VPC over the internet.
  • D. Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private virtual interface.
Answer:

D


Question 6

In AWS IAM, which of the following predefined policy condition keys checks how long ago (in seconds) the MFA-validated
security credentials making the request were issued using multi-factor authentication (MFA)?

  • A. aws:MultiFactorAuthAge
  • B. aws:MultiFactorAuthLast
  • C. aws:MFAAge
  • D. aws:MultiFactorAuthPrevious
Answer:

A

Explanation:
aws:MultiFactorAuthAge is one of the predefined keys provided by AWS that can be included within a Condition element of
an IAM policy. The key allows you to check how long ago (in seconds) the MFA-validated security credentials making the request
were issued using multi-factor authentication (MFA).
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html
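As an illustration, the condition key is normally paired with a numeric comparison operator. The boto3 sketch below creates a managed policy (the name and the one-hour threshold are illustrative) that allows EC2 actions only when the MFA-validated credentials are less than an hour old.

import json
import boto3

iam = boto3.client("iam")

# Allow EC2 actions only when the MFA-validated session is less than one hour (3600 seconds) old.
mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:*",
        "Resource": "*",
        "Condition": {"NumericLessThan": {"aws:MultiFactorAuthAge": "3600"}},
    }],
}

iam.create_policy(
    PolicyName="RequireRecentMFA",  # placeholder policy name
    PolicyDocument=json.dumps(mfa_policy),
)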


Question 7

A company is moving a business-critical application onto AWS. It is a traditional three-tier web application using an Oracle
database. Data must be encrypted in transit and at rest. The database hosts 12 TB of data. Network connectivity to the
source Oracle database over the internet is allowed, and the company wants to reduce operational costs by using AWS
Managed Services where possible. All resources within the web and application tiers have been migrated. The database has
a few tables and a simple schema using primary keys only; however, it contains many Binary Large Object (BLOB) fields. It
was not possible to use the database's native replication tools because of licensing restrictions.
Which database migration solution will result in the LEAST amount of impact to the application's availability?

  • A. Provision an Amazon RDS for Oracle instance. Host the RDS database within a virtual private cloud (VPC) subnet with internet access, and set up the RDS database as an encrypted Read Replica of the source database. Use SSL to encrypt the connection between the two databases. Monitor the replication performance by watching the RDS ReplicaLag metric. During the application maintenance window, shut down the on-premises database and switch over the application connection to the RDS instance when there is no more replication lag. Promote the Read Replica into a standalone database instance.
  • B. Provision an Amazon EC2 instance and install the same Oracle database software. Create a backup of the source database using the supported tools. During the application maintenance window, restore the backup into the Oracle database running in the EC2 instance. Set up an Amazon RDS for Oracle instance, and create an import job between the databases hosted in AWS. Shut down the source database and switch over the database connections to the RDS instance when the job is complete.
  • C. Use AWS DMS to load and replicate the dataset between the on-premises Oracle database and the replication instance hosted on AWS. Provision an Amazon RDS for Oracle instance with Transparent Data Encryption (TDE) enabled and configure it as a target for the replication instance. Create a customer-managed AWS KMS master key to set it as the encryption key for the replication instance. Use AWS DMS tasks to load the data into the target RDS instance. During the application maintenance window and after the load tasks reach the ongoing replication phase, switch the database connections to the new database.
  • D. Create a compressed full database backup of the on-premises Oracle database during an application maintenance window. While the backup is being performed, provision a 10 Gbps AWS Direct Connect connection to increase the transfer speed of the database backup files to Amazon S3, and shorten the maintenance window period. Use SSL/TLS to copy the files over the Direct Connect connection. When the backup files are successfully copied, start the maintenance window, and use any of the Amazon RDS-supported tools to import the data into a newly provisioned Amazon RDS for Oracle instance with encryption enabled. Wait until the data is fully loaded and switch over the database connections to the new database. Delete the Direct Connect connection to cut unnecessary charges.
Answer:

C

Explanation:
Reference: https://aws.amazon.com/blogs/apn/oracle-database-encryption-options-on-amazon-rds/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.AdvSecurity.htm
(DMS in transit encryption)
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html
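On the DMS side, option C's encryption-at-rest requirement can be sketched as follows: create a customer managed AWS KMS key and reference it when provisioning the replication instance. The identifiers, instance class, and storage size below are placeholders.

import boto3

kms = boto3.client("kms")
dms = boto3.client("dms")

# Customer managed key used to encrypt the replication instance's storage.
key = kms.create_key(Description="DMS replication instance encryption key")
key_id = key["KeyMetadata"]["KeyId"]

# Replication instance encrypted with the customer managed key.
dms.create_replication_instance(
    ReplicationInstanceIdentifier="oracle-migration",
    ReplicationInstanceClass="dms.r5.xlarge",
    AllocatedStorage=200,
    KmsKeyId=key_id,
)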


Question 8

A company plans to refactor a monolithic application into a modern application design deployed on AWS. The CI/CD pipeline
needs to be upgraded to support the modern design for the application with the following requirements:
It should allow changes to be released several times every hour. It should be able to roll back the changes as quickly as possible.
Which design will meet these requirements?

  • A. Deploy a CI/CD pipeline that incorporates AMIs to contain the application and its configuration. Deploy the application by replacing Amazon EC2 instances.
  • B. Specify AWS Elastic Beanstalk to stage in a secondary environment as the deployment target for the CI/CD pipeline of the application. To deploy, swap the staging and production environment URLs.
  • C. Use AWS Systems Manager to re-provision the infrastructure for each deployment. Update the Amazon EC2 user data to pull the latest code artifact from Amazon S3 and use Amazon Route 53 weighted routing to point to the new environment.
  • D. Roll out the application updates as part of an Auto Scaling event using prebuilt AMIs. Use new versions of the AMIs to add instances, and phase out all instances that use the previous AMI version with the configured termination policy during a deployment event.
Answer:

A


Question 9

A company has a data center that must be migrated to AWS as quickly as possible. The data center has a 500 Mbps AWS
Direct Connect link and a separate, fully available 1 Gbps ISP connection. A Solutions Architect must transfer 20 TB of data
from the data center to an Amazon S3 bucket.
What is the FASTEST way to transfer the data?

  • A. Upload the data to the S3 bucket using the existing DX link.
  • B. Send the data to AWS using the AWS Import/Export service.
  • C. Upload the data using an 80 TB AWS Snowball device.
  • D. Upload the data to the S3 bucket using S3 Transfer Acceleration.
Answer:

B

Explanation:
Import/Export supports importing and exporting data into and out of Amazon S3 buckets. For significant data sets, AWS
Import/Export is often faster than internet transfer and more cost-effective than upgrading your connectivity.
Reference: https://stackshare.io/stackups/aws-direct-connect-vs-aws-import-export
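For context, transfer time over a network link can be estimated as data size divided by sustained throughput. The short sketch below compares the two links mentioned in the question; it ignores protocol overhead and assumes each link can be fully utilized, whereas a device- or disk-based transfer also involves shipping and processing turnaround time.

# Rough transfer-time estimate: size / bandwidth.
data_bits = 20 * 1e12 * 8  # 20 TB expressed in bits

for name, bps in [("500 Mbps Direct Connect", 500e6), ("1 Gbps ISP link", 1e9)]:
    days = data_bits / bps / 86400
    print(f"{name}: ~{days:.1f} days")  # ~3.7 days and ~1.9 days respectively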


Question 10

A company is moving a business-critical, multi-tier application to AWS. The architecture consists of a desktop client
application and server infrastructure. The server infrastructure resides in an on-premises data center that frequently fails to
maintain the application uptime SLA of 99.95%. A Solutions Architect must re-architect the application to ensure that it can
meet or exceed the SLA.
The application contains a PostgreSQL database running on a single virtual machine. The business logic and presentation
layers are load balanced between multiple virtual machines. Remote users complain about slow load times while using this
latency-sensitive application.
Which of the following will meet the availability requirements with little change to the application while improving user
experience and minimizing costs?

  • A. Migrate the database to a PostgreSQL database in Amazon EC2. Host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces WorkSpace for each end user to improve the user experience.
  • B. Migrate the database to an Amazon RDS Aurora PostgreSQL configuration. Host the application and presentation layers in an Auto Scaling configuration on Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.
  • C. Migrate the database to an Amazon RDS PostgreSQL Multi-AZ configuration. Host the application and presentation layers in automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience.
  • D. Migrate the database to an Amazon Redshift cluster with at least two nodes. Combine and host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience.
Answer:

B
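The database portion of option B could be provisioned roughly as shown below; the identifiers, instance class, and credentials are placeholders, and the existing PostgreSQL data would still need to be migrated separately (for example, with AWS DMS).

import boto3

rds = boto3.client("rds")

# Aurora PostgreSQL cluster; durability comes from Aurora's replicated storage layer.
rds.create_db_cluster(
    DBClusterIdentifier="issue-tracker",
    Engine="aurora-postgresql",
    MasterUsername="appadmin",
    MasterUserPassword="REPLACE_ME",  # placeholder credential
)

# At least one instance is needed to serve queries; add readers for failover capacity.
rds.create_db_instance(
    DBInstanceIdentifier="issue-tracker-1",
    DBClusterIdentifier="issue-tracker",
    DBInstanceClass="db.r5.large",
    Engine="aurora-postgresql",
)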
