A company's software development team needs an Amazon RDS Multi-AZ cluster. The RDS cluster
will serve as a backend for a desktop client that is deployed on premises. The desktop client requires
direct connectivity to the RDS cluster.
The company must give the development team the ability to connect to the cluster by using the
client when the team is in the office.
Which solution provides the required connectivity MOST securely?
B
Explanation:
Requirement Analysis: Need secure, direct connectivity from an on-premises client to an RDS cluster,
accessible only when in the office.
VPC with Private Subnets: Ensures the RDS cluster is not publicly accessible, enhancing security.
Site-to-Site VPN: Provides secure, encrypted connection between on-premises office and AWS VPC.
Implementation:
Create a VPC with two private subnets.
Launch the RDS cluster in the private subnets.
Set up a Site-to-Site VPN connection with a customer gateway in the office.
Conclusion: This setup ensures secure and direct connectivity with minimal exposure, meeting the
requirement for secure access from the office.
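As an illustration, a minimal boto3 sketch of the VPN side of this setup (the BGP ASN, office public IP, and VPC ID below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer gateway: represents the office router (placeholder ASN and IP).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"
)["CustomerGateway"]

# Virtual private gateway attached to the VPC that holds the private subnets.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0abc1234", VpnGatewayId=vgw["VpnGatewayId"])

# Site-to-Site VPN connection between the two gateways.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```

The RDS Multi-AZ cluster is then launched into a DB subnet group that spans the two private subnets, with PubliclyAccessible set to false.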
Reference:
AWS Site-to-Site VPN: AWS Site-to-Site VPN Documentation
Amazon RDS: Amazon RDS Documentation
A social media company wants to store its database of user profiles, relationships, and interactions in
the AWS Cloud. The company needs an application to monitor any changes in the database. The
application needs to analyze the relationships between the data entities and to provide
recommendations to users.
Which solution will meet these requirements with the LEAST operational overhead?
B
Explanation:
Amazon Neptune: Neptune is a fully managed graph database service that is optimized for storing
and querying highly connected data. It supports both property graph and RDF graph models, making
it suitable for applications that need to analyze relationships between data entities.
Neptune Streams: Neptune Streams captures every change to the graph in an ordered change log that applications can read through a REST endpoint on the cluster. This is useful for applications that need to monitor and respond to changes in near real time, such as providing recommendations based on user interactions and relationships.
Least Operational Overhead: Using Neptune Streams directly with Amazon Neptune ensures that the
solution is tightly integrated, reducing the need for additional components and minimizing
operational overhead. This integration simplifies the architecture by eliminating the need for a
separate service like Kinesis for change processing.
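As a sketch, Neptune Streams is enabled through the neptune_streams parameter in a custom DB cluster parameter group; the group name below is a placeholder, and the cluster is assumed to already use that group:

```python
import boto3

neptune = boto3.client("neptune")

# Enable Neptune Streams via the cluster parameter group; the change is
# applied at the next reboot of the cluster.
neptune.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="social-graph-cluster-params",  # placeholder
    Parameters=[{
        "ParameterName": "neptune_streams",
        "ParameterValue": "1",
        "ApplyMethod": "pending-reboot",
    }],
)
```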
Reference:
Amazon Neptune Documentation
Neptune Streams Documentation
A company uses an Amazon S3 bucket as its data lake storage platform. The S3 bucket contains a
massive amount of data that is accessed randomly by multiple teams and hundreds of applications.
The company wants to reduce the S3 storage costs and provide immediate availability for frequently
accessed objects.
What is the MOST operationally efficient solution that meets these requirements?
A
Explanation:
Amazon S3 Intelligent-Tiering: This storage class is designed to optimize costs by automatically
moving objects between access tiers as access patterns change. It provides cost savings without
performance impact or operational overhead.
S3 Lifecycle Rules: By creating an S3 Lifecycle rule, the company can automatically transition objects
to the Intelligent-Tiering storage class. This eliminates the need for manual intervention and ensures
that objects are moved to the most cost-effective storage tier based on their access patterns.
Operational Efficiency: Intelligent-Tiering requires no additional management and delivers
immediate availability for frequently accessed objects. This makes it the most operationally efficient
solution for the given requirements.
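A minimal boto3 sketch of such a lifecycle rule, assuming a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule that moves every object to Intelligent-Tiering immediately
# (Days=0 transitions objects as soon as the rule is evaluated).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "move-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
            ],
        }]
    },
)
```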
Reference:
Amazon S3 Intelligent-Tiering
S3 Lifecycle Policies
A company needs to optimize its Amazon S3 storage costs for an application that generates many
files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard
storage.
The company must store the files for 4 years before the files can be deleted. The files must be
immediately accessible. The files are frequently accessed in the first 30 days of object creation, but
they are rarely accessed after the first 30 days.
Which solution will meet these requirements MOST cost-effectively?
C
Explanation:
Amazon S3 Standard-IA: This storage class is designed for data that is accessed less frequently but
requires rapid access when needed. It offers lower storage costs compared to S3 Standard while still
providing high availability and durability.
Access Patterns: Since the files are frequently accessed in the first 30 days and rarely accessed
afterward, transitioning them to S3 Standard-IA after 30 days aligns with their access patterns and
reduces storage costs significantly.
Lifecycle Policy: Implementing a lifecycle policy to transition the files to S3 Standard-IA ensures
automatic management of the data lifecycle, moving files to a lower-cost storage class without
manual intervention. Deleting the files after 4 years further optimizes costs by removing data that is
no longer needed.
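A hedged boto3 sketch of the lifecycle policy (the bucket name is a placeholder, and 4 years is approximated as 1,460 days):

```python
import boto3

s3 = boto3.client("s3")

# Transition to Standard-IA at 30 days (the minimum allowed for IA),
# then expire the objects after roughly 4 years (365 * 4 = 1460 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "ia-after-30-days-delete-after-4-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
            "Expiration": {"Days": 1460},
        }]
    },
)
```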
Reference:
Amazon S3 Storage Classes
S3 Lifecycle Configuration
A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default
route to the internet through an Amazon EC2 NAT instance. The Lambda function processes input
data and saves its output as an object to Amazon S3.
Intermittently, the Lambda function times out while trying to upload the object because of saturated
traffic on the NAT instance's network. The company wants to access Amazon S3 without traversing
the internet.
Which solution will meet these requirements?
C
Explanation:
Gateway Endpoint for Amazon S3: A VPC endpoint for Amazon S3 allows you to privately connect
your VPC to Amazon S3 without requiring an internet gateway, NAT device, VPN connection, or AWS
Direct Connect connection.
Provisioning the Endpoint:
Navigate to the VPC Dashboard.
Select "Endpoints" and create a new endpoint.
Choose the service name for S3 (com.amazonaws.region.s3).
Select the VPC and the route tables that are associated with the Lambda function's subnets.
Update Route Tables: When the gateway endpoint is created, AWS adds a prefix-list route for S3 to
each selected route table. Traffic destined for S3 then bypasses the NAT instance, avoiding the
saturated network path and eliminating the timeouts.
Operational Efficiency: This solution minimizes operational overhead by removing dependency on
the NAT instance and avoiding internet traffic, leading to more stable and secure S3 interactions.
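The same provisioning can be scripted; a minimal boto3 sketch with placeholder VPC and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3. Associating the route tables adds the S3
# prefix-list route to them automatically.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",                       # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0def5678"],             # placeholder
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```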
Reference:
VPC Endpoints for Amazon S3
Creating a Gateway Endpoint
A company needs to design a hybrid network architecture. The company's workloads are currently
stored in the AWS Cloud and in on-premises data centers. The workloads require single-digit
millisecond latencies to communicate. The company uses an AWS Transit Gateway transit gateway to
connect multiple VPCs.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)
B,D
Explanation:
AWS Direct Connect: Provides a dedicated network connection from your on-premises data center to
AWS, ensuring low latency and consistent network performance.
Direct Connect Gateway Association:
Direct Connect Gateway: Acts as a global network transit hub to connect VPCs across different AWS
regions.
Association with Transit Gateway: Enables communication between on-premises data centers and
multiple VPCs connected to the transit gateway.
Transit Virtual Interface (VIF):
Create Transit VIF: To connect Direct Connect with a transit gateway.
Setup Steps:
Establish a Direct Connect connection.
Create a transit VIF to the Direct Connect gateway.
Associate the Direct Connect gateway with the transit gateway attached to the VPCs.
Cost Efficiency: This combination avoids the recurring costs and potential performance variability of
VPN connections, providing a robust, low-latency hybrid network solution.
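For illustration, a boto3 sketch of the three steps; the connection ID, VLAN, BGP ASN, and transit gateway ID are placeholders, and the Direct Connect connection itself is assumed to already exist:

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# 1. Direct Connect gateway (a global resource).
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="hybrid-dxgw", amazonSideAsn=64512
)["directConnectGateway"]

# 2. Transit VIF on the existing Direct Connect connection.
dx.create_transit_virtual_interface(
    connectionId="dxcon-ffabc123",  # placeholder
    newTransitVirtualInterface={
        "virtualInterfaceName": "transit-vif",
        "vlan": 100,       # placeholder
        "asn": 65010,      # placeholder on-premises BGP ASN
        "directConnectGatewayId": dxgw["directConnectGatewayId"],
    },
)

# 3. Associate the Direct Connect gateway with the existing transit gateway.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGatewayId"],
    gatewayId="tgw-0abc1234",  # placeholder
)
```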
Reference:
AWS Direct Connect
Transit Gateway and Direct Connect Gateway
A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS
database. Compliance regulations mandate that all personally identifiable information (PII) be
encrypted at rest.
Which solution should a solutions architect recommend to meet this requirement with the LEAST
amount of changes to the infrastructure?
D
Explanation:
EBS Encryption:
Default EBS Encryption: Can be enabled for new EBS volumes.
Use of AWS KMS: Specify AWS KMS keys to handle encryption and decryption of data transparently.
Amazon RDS Encryption:
RDS Encryption: Encrypts the underlying storage for RDS instances using AWS KMS.
Configuration: Enable encryption when creating the RDS instance. An existing unencrypted instance
cannot be modified in place; instead, copy a snapshot with encryption enabled and restore from the
encrypted copy.
Least Amount of Changes:
Both EBS and RDS support seamless encryption with AWS KMS, requiring minimal changes to the
existing infrastructure.
Enables compliance with regulatory requirements without modifying the application.
Operational Efficiency: Using AWS KMS for both EBS and RDS ensures a consistent, managed
approach to encryption, simplifying key management and enhancing security.
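A minimal boto3 sketch of both halves, with a placeholder KMS key ARN and instance settings chosen only for illustration:

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/example-key-id"  # placeholder

# Turn on EBS encryption by default for all new volumes in this Region,
# using the customer managed KMS key.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId=KEY_ARN)

# Create the RDS instance with storage encryption enabled from the start.
rds.create_db_instance(
    DBInstanceIdentifier="pii-db",
    DBInstanceClass="db.m6i.large",
    Engine="postgres",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,
    AllocatedStorage=100,
    StorageEncrypted=True,
    KmsKeyId=KEY_ARN,
)
```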
Reference:
Amazon EBS Encryption
Amazon RDS Encryption
AWS Key Management Service
A global ecommerce company runs its critical workloads on AWS. The workloads use an Amazon RDS
for PostgreSQL DB instance that is configured for a Multi-AZ deployment.
Customers have reported application timeouts when the company undergoes database failovers. The
company needs a resilient solution to reduce failover time.
Which solution will meet these requirements?
A
Explanation:
Amazon RDS Proxy: RDS Proxy is a fully managed, highly available database proxy that makes
applications more resilient to database failures by pooling and sharing connections, and it can
automatically handle database failovers.
Reduced Failover Time: By using RDS Proxy, the connection management between the application
and the database is improved, reducing failover times significantly. RDS Proxy maintains connections
in a connection pool and reduces the time required to re-establish connections during a failover.
Configuration:
Create an RDS Proxy instance.
Configure the proxy to connect to the RDS for PostgreSQL DB instance.
Modify the application configuration to use the RDS Proxy endpoint instead of the direct database
endpoint.
Operational Benefits: This solution provides high availability and reduces application timeouts during
failovers with minimal changes to the application code.
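A boto3 sketch of the proxy setup; the secret ARN, IAM role, subnet IDs, and DB instance identifier are placeholders:

```python
import boto3

rds = boto3.client("rds")

# RDS Proxy in front of the PostgreSQL instance. The Secrets Manager
# secret holds the database credentials that the proxy uses.
rds.create_db_proxy(
    DBProxyName="pg-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0abc1234", "subnet-0def5678"],
)

# Register the Multi-AZ DB instance as the proxy target. The application
# then connects to the proxy endpoint instead of the instance endpoint.
rds.register_db_proxy_targets(
    DBProxyName="pg-proxy",
    DBInstanceIdentifiers=["prod-postgres"],
)
```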
Reference:
Amazon RDS Proxy
Setting Up RDS Proxy
A company is planning to deploy its application on an Amazon Aurora PostgreSQL Serverless v2
cluster. The application will receive large amounts of traffic. The company wants to optimize the
storage performance of the cluster as the load on the application increases.
Which solution will meet these requirements MOST cost-effectively?
D
Explanation:
Aurora I/O-Optimized: This storage configuration is designed to provide consistent high performance
for Aurora databases. It automatically scales IOPS as the workload increases, without needing to
provision IOPS separately.
Cost-Effectiveness: With Aurora I/O-Optimized, you pay for compute and storage but incur no
per-request I/O charges, which makes costs predictable and cost-effective for applications with high
or variable I/O demands.
Implementation:
During the creation of the Aurora PostgreSQL Serverless v2 cluster, select the I/O-Optimized storage
configuration.
The storage system will automatically handle scaling and performance optimization based on the
application load.
Operational Efficiency: This configuration reduces the need for manual tuning and ensures optimal
performance without additional administrative overhead.
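For illustration, a boto3 sketch that creates the cluster with the aurora-iopt1 storage type; the identifiers and capacity range are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Aurora Serverless v2 cluster using the I/O-Optimized storage type.
rds.create_db_cluster(
    DBClusterIdentifier="app-cluster",
    Engine="aurora-postgresql",
    StorageType="aurora-iopt1",  # selects Aurora I/O-Optimized
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 64},
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,
)

# Serverless v2 instances use the special db.serverless instance class.
rds.create_db_instance(
    DBInstanceIdentifier="app-cluster-instance-1",
    DBClusterIdentifier="app-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)
```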
Reference:
Amazon Aurora I/O-Optimized
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data
must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be
rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
D
Explanation:
SSE-KMS: Server-side encryption with AWS Key Management Service (SSE-KMS) provides robust
encryption of data at rest, integrated with AWS KMS for key management and auditing.
Automatic Key Rotation: By enabling automatic rotation for the KMS keys, the system ensures that
keys are rotated annually without manual intervention, meeting compliance requirements.
Logging and Auditing: AWS KMS automatically logs all key usage and management actions in AWS
CloudTrail, providing the necessary audit logs.
Implementation:
Create a KMS key with automatic rotation enabled.
Configure the S3 bucket to use SSE-KMS with the created KMS key.
Ensure CloudTrail is enabled for logging KMS key usage.
Operational Efficiency: This solution provides encryption, automatic key management, and auditing
in a seamless, fully managed way, reducing operational overhead.
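A minimal boto3 sketch of these steps, with a placeholder bucket name:

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed KMS key with automatic annual rotation.
key_id = kms.create_key(Description="S3 confidential data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Default bucket encryption with SSE-KMS; AWS KMS logs every key use to
# AWS CloudTrail for auditing.
s3.put_bucket_encryption(
    Bucket="example-confidential-bucket",  # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            },
            "BucketKeyEnabled": True,
        }]
    },
)
```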
Reference:
AWS KMS Automatic Key Rotation
Amazon S3 Server-Side Encryption
A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network
storage servers. The company wants to reduce the number of these servers by moving to the AWS
Cloud. A solutions architect must provide low-latency access to frequently used data and reduce the
dependency on on-premises servers with a minimal number of infrastructure changes.
Which solution will meet these requirements?
D
Explanation:
Storage Gateway Volume Gateway (Cached Volumes): This configuration allows you to store your
primary data in Amazon S3 while retaining frequently accessed data locally in a cache for low-latency
access.
Low-Latency Access: Frequently accessed data is cached locally on-premises, providing low-latency
access while the less frequently accessed data is stored cost-effectively in Amazon S3.
Implementation:
Deploy a Storage Gateway appliance on-premises or in a virtual environment.
Configure it as a volume gateway with cached volumes.
Create volumes and configure your applications to use these volumes.
Minimal Infrastructure Changes: This solution integrates seamlessly with existing on-premises
infrastructure, requiring minimal changes and reducing dependency on on-premises storage servers.
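A hedged boto3 sketch of the volume creation step, assuming the gateway appliance has already been deployed and activated (the gateway ARN and the appliance's local IP are placeholders):

```python
import boto3
import uuid

sgw = boto3.client("storagegateway", region_name="us-east-1")

# Cached iSCSI volume on an activated volume gateway: primary data lives
# in S3 while frequently accessed blocks stay in the local cache.
volume = sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-ABCD1234",
    VolumeSizeInBytes=500 * 1024**3,   # 500 GiB
    TargetName="app-volume-1",
    NetworkInterfaceId="10.0.1.25",    # placeholder: appliance's local IP
    ClientToken=str(uuid.uuid4()),
)
print(volume["VolumeARN"])
```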
Reference:
AWS Storage Gateway Volume Gateway
Volume Gateway Cached Volumes
A marketing company receives a large amount of new clickstream data in Amazon S3 from a
marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then
the company needs to determine whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?
B
Explanation:
AWS Glue Crawler: AWS Glue is a fully managed ETL (Extract, Transform, Load) service that makes it
easy to prepare and load data for analytics. A Glue crawler can automatically discover new data and
schema in Amazon S3, making it easy to keep the data catalog up-to-date.
Crawling the Data:
Set up an AWS Glue crawler to scan the S3 bucket containing the clickstream data.
The crawler will automatically detect the schema and create/update the tables in the AWS Glue Data
Catalog.
Amazon Athena:
Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard
SQL.
Once the data catalog is updated by the Glue crawler, use Athena to query the clickstream data
directly in S3.
Operational Efficiency: This solution leverages fully managed services, reducing operational
overhead. Glue crawlers automate data cataloging, and Athena provides a serverless, pay-per-query
model for quick data analysis without the need to set up or manage infrastructure.
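For illustration, a boto3 sketch of the crawler and a follow-up Athena query; the IAM role, database, table, and bucket names are placeholders:

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawler that catalogs the clickstream prefix in the Glue Data Catalog.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="AWSGlueServiceRole-clickstream",  # placeholder role
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/clickstream/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the crawler has populated the catalog, query the data in place.
# "clicks" is the placeholder table name the crawler would create.
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clicks GROUP BY page",
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
```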
Reference:
AWS Glue
Amazon Athena
A company has applications that run on Amazon EC2 instances in a VPC. One of the applications
needs to call the Amazon S3 API to store and read objects. According to the company's security
regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?
A
Explanation:
VPC Endpoint for S3: A gateway endpoint for Amazon S3 enables you to privately connect your VPC
to S3 without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect
connection.
Configuration Steps:
In the VPC console, navigate to "Endpoints" and create a new endpoint.
Select the service name for S3 (com.amazonaws.region.s3).
Choose the VPC and the subnets where your EC2 instances are running.
Update the route tables for the selected subnets to include a route pointing to the endpoint.
Security Compliance: By configuring an S3 gateway endpoint, all traffic between the VPC and S3 stays
within the AWS network, complying with the company's security regulations to avoid internet
traversal.
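Beyond creating the endpoint (the creation call itself looks the same as in the earlier gateway endpoint example), the bucket policy can optionally enforce that the endpoint is the only path in; a sketch of that hardening step, with a placeholder bucket name and endpoint ID:

```python
import boto3
import json

s3 = boto3.client("s3")

# Deny all access to the bucket unless the request arrives through the
# gateway endpoint, identified by the aws:sourceVpce condition key.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessThroughVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-app-bucket",
            "arn:aws:s3:::example-app-bucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0abc1234"}},
    }],
}
s3.put_bucket_policy(Bucket="example-app-bucket", Policy=json.dumps(policy))
```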
Reference:
VPC Endpoints for Amazon S3
A company wants to isolate its workloads by creating an AWS account for each workload. The
company needs a solution that centrally manages networking components for the workloads. The
solution also must create accounts with automatic security controls (guardrails).
Which solution will meet these requirements with the LEAST operational overhead?
A
Explanation:
AWS Control Tower: Provides a managed service to set up and govern a secure, multi-account AWS
environment based on AWS best practices. It automates the setup of AWS Organizations and applies
security controls (guardrails).
Networking Account:
Create a centralized networking account that includes a VPC with both private and public subnets.
This centralized VPC will manage and control the networking resources.
AWS Resource Access Manager (AWS RAM):
Use AWS RAM to share the subnets from the networking account with the other workload accounts.
This allows different workload accounts to utilize the shared networking resources without the need
to manage their own VPCs.
Operational Efficiency: Using AWS Control Tower simplifies the setup and governance of multiple
AWS accounts, while AWS RAM facilitates centralized management of networking resources,
reducing operational overhead and ensuring consistent security and compliance.
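As a sketch, sharing a subnet from the networking account with a workload account looks like the following; the subnet ARN and account ID are placeholders, and resource sharing with AWS Organizations is assumed to be enabled in AWS RAM:

```python
import boto3

ram = boto3.client("ram")

# Share a subnet from the central networking account with a workload
# account inside the same organization.
ram.create_resource_share(
    name="shared-network-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc1234"  # placeholder
    ],
    principals=["222222222222"],        # placeholder workload account ID
    allowExternalPrincipals=False,      # restrict sharing to the organization
)
```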
Reference:
AWS Control Tower
AWS Resource Access Manager
A company's SAP application has a backend SQL Server database in an on-premises environment. The
company wants to migrate its on-premises application and database server to AWS. The company
needs an instance type that meets the high demands of its SAP database. On-premises performance
data shows that both the SAP application and the database have high memory utilization.
Which solution will meet these requirements?
C
Explanation:
Memory Optimized Instances: These instances are designed to deliver fast performance for
workloads that process large data sets in memory. They are ideal for high-performance databases
like SAP and applications with high memory utilization.
High Memory Utilization: Both the SAP application and the SQL Server database have high memory
demands as per the on-premises performance data. Memory optimized instances provide the
necessary memory capacity and performance.
Instance Types:
For the SAP application, using a memory optimized instance ensures the application has sufficient
memory to handle the high workload efficiently.
For the SQL Server database, memory optimized instances ensure optimal database performance
with high memory throughput.
Operational Efficiency: Using the same instance family for both the application and the database
simplifies management and ensures both components meet performance requirements.
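As a quick way to compare candidates, the instance catalog can be queried directly; the instance types below are examples of the memory optimized family, not a specific SAP sizing recommendation:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up vCPU and memory figures for a few memory optimized types.
resp = ec2.describe_instance_types(
    InstanceTypes=["r6i.4xlarge", "r6i.8xlarge", "x2iedn.2xlarge"]
)
for it in resp["InstanceTypes"]:
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    print(f"{it['InstanceType']}: {vcpus} vCPU, {mem_gib:.0f} GiB")
```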
Reference:
Amazon EC2 Instance Types
SAP on AWS