Amazon AWS Certified Solutions Architect - Associate Practice Test

Last exam update: Nov 18, 2025
Page 1 of 36. Viewing questions 1-15 of 527.

Question 1

A company's software development team needs an Amazon RDS Multi-AZ cluster. The RDS cluster
will serve as a backend for a desktop client that is deployed on premises. The desktop client requires
direct connectivity to the RDS cluster.
The company must give the development team the ability to connect to the cluster by using the
client when the team is in the office.
Which solution provides the required connectivity MOST securely?

  • A. Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Use AWS Site-to-Site VPN with a customer gateway in the company's office.
  • B. Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use AWS Site-to-Site VPN with a customer gateway in the company's office.
  • C. Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use RDS security groups to allow the company's office IP ranges to access the cluster.
  • D. Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Create a cluster user for each developer. Use RDS security groups to allow the users to access the cluster.
Answer: B


Explanation:
Requirement Analysis: Need secure, direct connectivity from an on-premises client to an RDS cluster,
accessible only when in the office.
VPC with Private Subnets: Ensures the RDS cluster is not publicly accessible, enhancing security.
Site-to-Site VPN: Provides secure, encrypted connection between on-premises office and AWS VPC.
Implementation:
Create a VPC with two private subnets.
Launch the RDS cluster in the private subnets.
Set up a Site-to-Site VPN connection with a customer gateway in the office.
Conclusion: This setup ensures secure and direct connectivity with minimal exposure, meeting the
requirement for secure access from the office.
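
For illustration, the AWS side of option B's VPN could be provisioned with boto3 roughly as follows; the VPC ID, office public IP, and BGP ASN are placeholder values:

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway: represents the office's on-premises VPN device.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000  # placeholders
)["CustomerGateway"]

# Virtual private gateway: the AWS side of the VPN, attached to the VPC
# that holds the private subnets and the RDS cluster.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0", VpnGatewayId=vgw["VpnGatewayId"])

# The Site-to-Site VPN connection that links the two gateways.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```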
Reference:
AWS Site-to-Site VPN: AWS Site-to-Site VPN Documentation
Amazon RDS: Amazon RDS Documentation


Question 2

A social media company wants to store its database of user profiles, relationships, and interactions in
the AWS Cloud. The company needs an application to monitor any changes in the database. The
application needs to analyze the relationships between the data entities and to provide
recommendations to users.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use Amazon Neptune to store the information. Use Amazon Kinesis Data Streams to process changes in the database.
  • B. Use Amazon Neptune to store the information. Use Neptune Streams to process changes in the database.
  • C. Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Amazon Kinesis Data Streams to process changes in the database.
  • D. Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Neptune Streams to process changes in the database.
Answer: B


Explanation:
Amazon Neptune: Neptune is a fully managed graph database service that is optimized for storing
and querying highly connected data. It supports both property graph and RDF graph models, making
it suitable for applications that need to analyze relationships between data entities.
Neptune Streams: Neptune Streams captures changes to the graph and streams these changes to
other AWS services. This is useful for applications that need to monitor and respond to changes in
real-time, such as providing recommendations based on user interactions and relationships.
Least Operational Overhead: Using Neptune Streams directly with Amazon Neptune ensures that the
solution is tightly integrated, reducing the need for additional components and minimizing
operational overhead. This integration simplifies the architecture by eliminating the need for a
separate service like Kinesis for change processing.
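
A minimal sketch of polling the Neptune Streams REST endpoint, assuming streams are enabled on the cluster and IAM database authentication is off (with IAM auth the request must be SigV4-signed); the cluster endpoint is a placeholder:

```python
import requests

# Placeholder cluster endpoint; streams must be enabled on the cluster
# (the neptune_streams cluster parameter set to 1).
ENDPOINT = "my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com"
STREAM_URL = f"https://{ENDPOINT}:8182/propertygraph/stream"

# Read the oldest available change records. Each record describes one
# graph mutation (vertex/edge added or removed, property changed, ...).
resp = requests.get(STREAM_URL, params={"iteratorType": "TRIM_HORIZON", "limit": 100})
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["op"], record["data"])
```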
Reference:
Amazon Neptune Documentation
Neptune Streams Documentation


Question 3

A company uses an Amazon S3 bucket as its data lake storage platform. The S3 bucket contains a
massive amount of data that is accessed randomly by multiple teams and hundreds of applications.
The company wants to reduce the S3 storage costs and provide immediate availability for frequently
accessed objects.
What is the MOST operationally efficient solution that meets these requirements?

  • A. Create an S3 Lifecycle rule to transition objects to the S3 Intelligent-Tiering storage class.
  • B. Store objects in Amazon S3 Glacier. Use S3 Select to provide applications with access to the data.
  • C. Use data from S3 storage class analysis to create S3 Lifecycle rules to automatically transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
  • D. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an AWS Lambda function to transition objects to the S3 Standard storage class when they are accessed by an application.
Answer: A


Explanation:
Amazon S3 Intelligent-Tiering: This storage class is designed to optimize costs by automatically
moving data between two access tiers (frequent and infrequent) when access patterns change. It
provides cost savings without performance impact or operational overhead.
S3 Lifecycle Rules: By creating an S3 Lifecycle rule, the company can automatically transition objects
to the Intelligent-Tiering storage class. This eliminates the need for manual intervention and ensures
that objects are moved to the most cost-effective storage tier based on their access patterns.
Operational Efficiency: Intelligent-Tiering requires no additional management and delivers
immediate availability for frequently accessed objects. This makes it the most operationally efficient
solution for the given requirements.
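
A sketch of such a lifecycle rule using boto3; the bucket name is hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; Days=0 transitions objects to
# Intelligent-Tiering as soon as the rule is evaluated.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)
```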
Reference:
Amazon S3 Intelligent-Tiering
S3 Lifecycle Policies


Question 4

A company needs to optimize its Amazon S3 storage costs for an application that generates many
files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard
storage.
The company must store the files for 4 years before the files can be deleted. The files must be
immediately accessible. The files are frequently accessed in the first 30 days of object creation, but
they are rarely accessed after the first 30 days.
Which solution will meet these requirements MOST cost-effectively?

  • A. Create an S3 Lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation.
  • B. Create an S3 Lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days after object creation. Delete the files 4 years after object creation.
  • C. Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Delete the files 4 years after object creation.
  • D. Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Move the files to S3 Glacier Flexible Retrieval 4 years after object creation.
Answer: C


Explanation:
Amazon S3 Standard-IA: This storage class is designed for data that is accessed less frequently but
requires rapid access when needed. It offers lower storage costs compared to S3 Standard while still
providing high availability and durability.
Access Patterns: Since the files are frequently accessed in the first 30 days and rarely accessed
afterward, transitioning them to S3 Standard-IA after 30 days aligns with their access patterns and
reduces storage costs significantly.
Lifecycle Policy: Implementing a lifecycle policy to transition the files to S3 Standard-IA ensures
automatic management of the data lifecycle, moving files to a lower-cost storage class without
manual intervention. Deleting the files after 4 years further optimizes costs by removing data that is
no longer needed.
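
A boto3 sketch of this lifecycle policy; the bucket name is hypothetical and 4 years is approximated as 1,460 days:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "ia-after-30-days-delete-after-4-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            # Move to Standard-IA once access becomes rare.
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            # Delete after roughly 4 years (1,460 days).
            "Expiration": {"Days": 1460},
        }]
    },
)
```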
Reference:
Amazon S3 Storage Classes
S3 Lifecycle Configuration


Question 5

A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default
route to the internet through an Amazon EC2 NAT instance. The Lambda function processes input
data and saves its output as an object to Amazon S3.
Intermittently, the Lambda function times out while trying to upload the object because of saturated
traffic on the NAT instance's network. The company wants to access Amazon S3 without traversing
the internet.
Which solution will meet these requirements?

  • A. Replace the EC2 NAT instance with an AWS managed NAT gateway.
  • B. Increase the size of the EC2 NAT instance in the VPC to a network-optimized instance type.
  • C. Provision a gateway endpoint for Amazon S3 in the VPC. Update the route tables of the subnets accordingly.
  • D. Provision a transit gateway. Place transit gateway attachments in the private subnets where the Lambda function is running.
Answer: C


Explanation:
Gateway Endpoint for Amazon S3: A VPC endpoint for Amazon S3 allows you to privately connect
your VPC to Amazon S3 without requiring an internet gateway, NAT device, VPN connection, or AWS
Direct Connect connection.
Provisioning the Endpoint:
Navigate to the VPC Dashboard.
Select "Endpoints" and create a new endpoint.
Choose the service name for S3 (com.amazonaws.region.s3).
Select the VPC and the route tables that the Lambda function's private subnets use.
The endpoint automatically adds a route for S3's prefix list to those route tables.
Update Route Tables: Modify the route tables of the subnets to direct traffic destined for S3 to the
newly created endpoint. This ensures that traffic to S3 does not go through the NAT instance,
avoiding the saturated network and eliminating timeouts.
Operational Efficiency: This solution minimizes operational overhead by removing dependency on
the NAT instance and avoiding internet traffic, leading to more stable and secure S3 interactions.
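
A boto3 sketch of creating the gateway endpoint; the VPC ID, Region, and route table ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint is associated with route tables; AWS then inserts
# a route for S3's prefix list, so S3 traffic bypasses the NAT instance.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```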
Reference:
VPC Endpoints for Amazon S3
Creating a Gateway Endpoint


Question 6

A company needs to design a hybrid network architecture. The company's workloads are currently
stored in the AWS Cloud and in on-premises data centers. The workloads require single-digit
millisecond latencies to communicate. The company uses an AWS Transit Gateway to connect
multiple VPCs.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

  • A. Establish an AWS Site-to-Site VPN connection to each VPC.
  • B. Associate an AWS Direct Connect gateway with the transit gateway that is attached to the VPCs.
  • C. Establish an AWS Site-to-Site VPN connection to an AWS Direct Connect gateway.
  • D. Establish an AWS Direct Connect connection. Create a transit virtual interface (VIF) to a Direct Connect gateway.
  • E. Associate AWS Site-to-Site VPN connections with the transit gateway that is attached to the VPCs.
Answer: B, D


Explanation:
AWS Direct Connect: Provides a dedicated network connection from your on-premises data center to
AWS, ensuring low latency and consistent network performance.
Direct Connect Gateway Association:
Direct Connect Gateway: Acts as a global network transit hub to connect VPCs across different AWS
regions.
Association with Transit Gateway: Enables communication between on-premises data centers and
multiple VPCs connected to the transit gateway.
Transit Virtual Interface (VIF):
Create Transit VIF: To connect Direct Connect with a transit gateway.
Setup Steps:
Establish a Direct Connect connection.
Create a transit VIF to the Direct Connect gateway.
Associate the Direct Connect gateway with the transit gateway attached to the VPCs.
Cost Efficiency: This combination avoids the recurring costs and potential performance variability of
VPN connections, providing a robust, low-latency hybrid network solution.
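
A rough boto3 sketch of steps B and D; all connection IDs, gateway IDs, and ASNs are placeholders:

```python
import boto3

dx = boto3.client("directconnect")

# Direct Connect gateway: the global hub between the transit VIF and
# the transit gateway.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="hybrid-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Transit VIF on an existing Direct Connect connection (option D).
dx.create_transit_virtual_interface(
    connectionId="dxcon-ffabc123",
    newTransitVirtualInterface={
        "virtualInterfaceName": "transit-vif",
        "vlan": 100,
        "asn": 65000,  # on-premises BGP ASN
        "directConnectGatewayId": dxgw["directConnectGatewayId"],
    },
)

# Associate the Direct Connect gateway with the transit gateway (option B).
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGatewayId"],
    gatewayId="tgw-0123456789abcdef0",
)
```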
Reference:
AWS Direct Connect
Transit Gateway and Direct Connect Gateway


Question 7

A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS
database. Compliance regulations mandate that all personally identifiable information (PII) be
encrypted at rest.
Which solution should a solutions architect recommend to meet this requirement with the LEAST
amount of changes to the infrastructure?

  • A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume.
  • B. Deploy AWS CloudHSM, generate encryption keys, and use the keys to encrypt database volumes.
  • C. Configure SSL encryption using AWS Key Management Service (AWS KMS) keys to encrypt database volumes.
  • D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.
Answer: D


Explanation:
EBS Encryption:
Default EBS Encryption: Can be enabled for new EBS volumes.
Use of AWS KMS: Specify AWS KMS keys to handle encryption and decryption of data transparently.
Amazon RDS Encryption:
RDS Encryption: Encrypts the underlying storage for RDS instances using AWS KMS.
Configuration: Enable encryption when creating the RDS instance. An existing unencrypted instance
cannot be encrypted in place; instead, copy a snapshot with encryption enabled and restore from the
encrypted copy.
Least Amount of Changes:
Both EBS and RDS support seamless encryption with AWS KMS, requiring minimal changes to the
existing infrastructure.
Enables compliance with regulatory requirements without modifying the application.
Operational Efficiency: Using AWS KMS for both EBS and RDS ensures a consistent, managed
approach to encryption, simplifying key management and enhancing security.
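
A boto3 sketch of both pieces, assuming a new DB instance is being created; identifiers, credentials, and the KMS key alias are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Account-level default: new EBS volumes are encrypted automatically.
ec2.enable_ebs_encryption_by_default()

# RDS encryption at rest must be set when the instance is created.
rds.create_db_instance(
    DBInstanceIdentifier="pii-db",
    DBInstanceClass="db.m6i.large",
    Engine="postgres",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-immediately",
    StorageEncrypted=True,
    KmsKeyId="alias/aws/rds",
)
```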
Reference:
Amazon EBS Encryption
Amazon RDS Encryption
AWS Key Management Service


Question 8

A global ecommerce company runs its critical workloads on AWS. The workloads use an Amazon RDS
for PostgreSQL DB instance that is configured for a Multi-AZ deployment.
Customers have reported application timeouts when the company undergoes database failovers. The
company needs a resilient solution to reduce failover time.
Which solution will meet these requirements?

  • A. Create an Amazon RDS Proxy. Assign the proxy to the DB instance.
  • B. Create a read replica for the DB instance. Move the read traffic to the read replica.
  • C. Enable Performance Insights. Monitor the CPU load to identify the timeouts.
  • D. Take regular automatic snapshots. Copy the automatic snapshots to multiple AWS Regions.
Answer: A


Explanation:
Amazon RDS Proxy: RDS Proxy is a fully managed, highly available database proxy that makes
applications more resilient to database failures by pooling and sharing connections, and it can
automatically handle database failovers.
Reduced Failover Time: By using RDS Proxy, the connection management between the application
and the database is improved, reducing failover times significantly. RDS Proxy maintains connections
in a connection pool and reduces the time required to re-establish connections during a failover.
Configuration:
Create an RDS Proxy instance.
Configure the proxy to connect to the RDS for PostgreSQL DB instance.
Modify the application configuration to use the RDS Proxy endpoint instead of the direct database
endpoint.
Operational Benefits: This solution provides high availability and reduces application timeouts during
failovers with minimal changes to the application code.
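
A boto3 sketch of provisioning the proxy; the names, ARNs, and subnet IDs are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Create the proxy; credentials come from an existing Secrets Manager
# secret, and the role grants the proxy access to that secret.
proxy = rds.create_db_proxy(
    DBProxyName="pg-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:pg-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0a1b2c3d", "subnet-4e5f6a7b"],
)["DBProxy"]

# Point the proxy at the Multi-AZ DB instance.
rds.register_db_proxy_targets(
    DBProxyName="pg-proxy",
    DBInstanceIdentifiers=["prod-postgres"],
)

# The application connects to the proxy endpoint, not the instance.
print(proxy["Endpoint"])
```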
Reference:
Amazon RDS Proxy
Setting Up RDS Proxy


Question 9

A company is planning to deploy its application on an Amazon Aurora PostgreSQL Serverless v2
cluster. The application will receive large amounts of traffic. The company wants to optimize the
storage performance of the cluster as the load on the application increases.
Which solution will meet these requirements MOST cost-effectively?

  • A. Configure the cluster to use the Aurora Standard storage configuration.
  • B. Configure the cluster storage type as Provisioned IOPS.
  • C. Configure the cluster storage type as General Purpose.
  • D. Configure the cluster to use the Aurora I/O-Optimized storage configuration.
Answer: D


Explanation:
Aurora I/O-Optimized: This storage configuration is designed to provide consistent high performance
for Aurora databases. It automatically scales IOPS as the workload increases, without needing to
provision IOPS separately.
Cost-Effectiveness: With Aurora I/O-Optimized, I/O operations incur no separate charges; pricing
covers only compute and storage, making costs predictable and cost-effective for I/O-intensive
applications with varying and unpredictable I/O demands.
Implementation:
During the creation of the Aurora PostgreSQL Serverless v2 cluster, select the I/O-Optimized storage
configuration.
The storage system will automatically handle scaling and performance optimization based on the
application load.
Operational Efficiency: This configuration reduces the need for manual tuning and ensures optimal
performance without additional administrative overhead.
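
A boto3 sketch of creating the cluster with this storage configuration; identifiers, credentials, and the capacity range are placeholders:

```python
import boto3

rds = boto3.client("rds")

# A DB instance of class "db.serverless" must be added to the cluster
# afterward to actually run Serverless v2 capacity.
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-pg",
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-immediately",
    StorageType="aurora-iopt1",  # Aurora I/O-Optimized
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 64},
)
```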
Reference:
Amazon Aurora I/O-Optimized


Question 10

A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data
must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be
rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?

  • A. Server-side encryption with customer-provided keys (SSE-C)
  • B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
  • C. Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation
  • D. Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation
Answer: D


Explanation:
SSE-KMS: Server-side encryption with AWS Key Management Service (SSE-KMS) provides robust
encryption of data at rest, integrated with AWS KMS for key management and auditing.
Automatic Key Rotation: By enabling automatic rotation for the KMS keys, the system ensures that
keys are rotated annually without manual intervention, meeting compliance requirements.
Logging and Auditing: AWS KMS automatically logs all key usage and management actions in AWS
CloudTrail, providing the necessary audit logs.
Implementation:
Create a KMS key with automatic rotation enabled.
Configure the S3 bucket to use SSE-KMS with the created KMS key.
Ensure CloudTrail is enabled for logging KMS key usage.
Operational Efficiency: This solution provides encryption, automatic key management, and auditing
in a seamless, fully managed way, reducing operational overhead.
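
A boto3 sketch of the key and bucket configuration; the bucket name is hypothetical:

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and turn on automatic (yearly) rotation.
key_id = kms.create_key(Description="S3 confidential data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Default bucket encryption with SSE-KMS using that key.
s3.put_bucket_encryption(
    Bucket="example-confidential-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            }
        }]
    },
)
```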
Reference:
AWS KMS Automatic Key Rotation
Amazon S3 Server-Side Encryption


Question 11

A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network
storage servers. The company wants to reduce the number of these servers by moving to the AWS
Cloud. A solutions architect must provide low-latency access to frequently used data and reduce the
dependency on on-premises servers with a minimal number of infrastructure changes.
Which solution will meet these requirements?

  • A. Deploy an Amazon S3 File Gateway.
  • B. Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3.
  • C. Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes.
  • D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.
Answer: D


Explanation:
Storage Gateway Volume Gateway (Cached Volumes): This configuration allows you to store your
primary data in Amazon S3 while retaining frequently accessed data locally in a cache for low-latency
access.
Low-Latency Access: Frequently accessed data is cached locally on-premises, providing low-latency
access while the less frequently accessed data is stored cost-effectively in Amazon S3.
Implementation:
Deploy a Storage Gateway appliance on-premises or in a virtual environment.
Configure it as a volume gateway with cached volumes.
Create volumes and configure your applications to use these volumes.
Minimal Infrastructure Changes: This solution integrates seamlessly with existing on-premises
infrastructure, requiring minimal changes and reducing dependency on on-premises storage servers.
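
A boto3 sketch of creating a cached volume on an already-activated gateway; the ARN, target name, and network interface are placeholders:

```python
import boto3

sgw = boto3.client("storagegateway")

# The gateway must already be activated and configured with cache disks.
sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    VolumeSizeInBytes=500 * 1024**3,  # 500 GiB volume backed by S3
    TargetName="app-volume-1",
    NetworkInterfaceId="10.0.1.15",
    ClientToken="app-volume-1-token",
)
```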
Reference:
AWS Storage Gateway Volume Gateway
Volume Gateway Cached Volumes


Question 12

A marketing company receives a large amount of new clickstream data in Amazon S3 from a
marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then
the company needs to determine whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
  • B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
  • C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
  • D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query the data.
Answer: B


Explanation:
AWS Glue Crawler: AWS Glue is a fully managed ETL (Extract, Transform, Load) service that makes it
easy to prepare and load data for analytics. A Glue crawler can automatically discover new data and
schema in Amazon S3, making it easy to keep the data catalog up-to-date.
Crawling the Data:
Set up an AWS Glue crawler to scan the S3 bucket containing the clickstream data.
The crawler will automatically detect the schema and create/update the tables in the AWS Glue Data
Catalog.
Amazon Athena:
Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard
SQL.
Once the data catalog is updated by the Glue crawler, use Athena to query the clickstream data
directly in S3.
Operational Efficiency: This solution leverages fully managed services, reducing operational
overhead. Glue crawlers automate data cataloging, and Athena provides a serverless, pay-per-query
model for quick data analysis without the need to set up or manage infrastructure.
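
A boto3 sketch of the crawler-then-Athena flow; the role ARN, database, table, and bucket names are hypothetical:

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the raw clickstream data and populate the Glue Data Catalog.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::123456789012:role/glue-crawler-role",
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://example-clickstream-bucket/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the crawler finishes, query the cataloged table with Athena.
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clicks "
                "GROUP BY page ORDER BY hits DESC LIMIT 10",
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```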
Reference:
AWS Glue
Amazon Athena


Question 13

A company has applications that run on Amazon EC2 instances in a VPC. One of the applications
needs to call the Amazon S3 API to store and read objects. According to the company's security
regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?

  • A. Configure an S3 gateway endpoint.
  • B. Create an S3 bucket in a private subnet.
  • C. Create an S3 bucket in the same AWS Region as the EC2 instances.
  • D. Configure a NAT gateway in the same subnet as the EC2 instances.
Answer: A


Explanation:
VPC Endpoint for S3: A gateway endpoint for Amazon S3 enables you to privately connect your VPC
to S3 without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect
connection.
Configuration Steps:
In the VPC console, navigate to "Endpoints" and create a new endpoint.
Select the service name for S3 (com.amazonaws.region.s3).
Choose the VPC, then select the route tables that are used by the subnets where the EC2 instances run.
The endpoint automatically adds a route for S3's prefix list to the selected route tables.
Security Compliance: By configuring an S3 gateway endpoint, all traffic between the VPC and S3 stays
within the AWS network, complying with the company's security regulations to avoid internet
traversal.
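
The endpoint itself is created as shown in Question 5. As an optional hardening step beyond what the question asks, a bucket policy can deny any request that does not arrive through the endpoint; the bucket name and endpoint ID below are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny every S3 action on the bucket unless the request came in
# through the specified VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessOutsideVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-app-bucket",
            "arn:aws:s3:::example-app-bucket/*",
        ],
        "Condition": {
            "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}
s3.put_bucket_policy(Bucket="example-app-bucket", Policy=json.dumps(policy))
```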
Reference:
VPC Endpoints for Amazon S3


Question 14

A company wants to isolate its workloads by creating an AWS account for each workload. The
company needs a solution that centrally manages networking components for the workloads. The
solution also must create accounts with automatic security controls (guardrails).
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
  • B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
  • C. Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
  • D. Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
Answer: A


Explanation:
AWS Control Tower: Provides a managed service to set up and govern a secure, multi-account AWS
environment based on AWS best practices. It automates the setup of AWS Organizations and applies
security controls (guardrails).
Networking Account:
Create a centralized networking account that includes a VPC with both private and public subnets.
This centralized VPC will manage and control the networking resources.
AWS Resource Access Manager (AWS RAM):
Use AWS RAM to share the subnets from the networking account with the other workload accounts.
This allows different workload accounts to utilize the shared networking resources without the need
to manage their own VPCs.
Operational Efficiency: Using AWS Control Tower simplifies the setup and governance of multiple
AWS accounts, while AWS RAM facilitates centralized management of networking resources,
reducing operational overhead and ensuring consistent security and compliance.
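
A boto3 sketch of the AWS RAM share from the networking account; the subnet ARNs and organization ARN are placeholders:

```python
import boto3

ram = boto3.client("ram")

# Sharing with the whole organization requires RAM's integration with
# AWS Organizations to be enabled.
ram.create_resource_share(
    name="shared-network-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0a1b2c3d",
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-4e5f6a7b",
    ],
    principals=["arn:aws:organizations::111111111111:organization/o-example12345"],
)
```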
Reference:
AWS Control Tower
AWS Resource Access Manager


Question 15

A company's SAP application has a backend SQL Server database in an on-premises environment. The
company wants to migrate its on-premises application and database server to AWS. The company
needs an instance type that meets the high demands of its SAP database. On-premises performance
data shows that both the SAP application and the database have high memory utilization.
Which solution will meet these requirements?

  • A. Use the compute optimized instance family for the application. Use the memory optimized instance family for the database.
  • B. Use the storage optimized instance family for both the application and the database.
  • C. Use the memory optimized instance family for both the application and the database.
  • D. Use the high performance computing (HPC) optimized instance family for the application. Use the memory optimized instance family for the database.
Answer: C


Explanation:
Memory Optimized Instances: These instances are designed to deliver fast performance for
workloads that process large data sets in memory. They are ideal for high-performance databases
like SAP and applications with high memory utilization.
High Memory Utilization: Both the SAP application and the SQL Server database have high memory
demands as per the on-premises performance data. Memory optimized instances provide the
necessary memory capacity and performance.
Instance Types:
For the SAP application, using a memory optimized instance ensures the application has sufficient
memory to handle the high workload efficiently.
For the SQL Server database, memory optimized instances ensure optimal database performance
with high memory throughput.
Operational Efficiency: Using the same instance family for both the application and the database
simplifies management and ensures both components meet performance requirements.
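
For illustration, launching a memory optimized instance with boto3; the AMI ID and instance size are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# The R family (e.g., r6i, r7i) is the memory optimized family; High
# Memory (u-) instances exist for the largest SAP HANA-class workloads.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="r6i.4xlarge",
    MinCount=1,
    MaxCount=1,
)
```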
Reference:
Amazon EC2 Instance Types
SAP on AWS
