Amazon AWS Certified SysOps Administrator - Associate practice test

Last exam update: Nov 18, 2025

Question 1

[Security and Compliance]
A SysOps administrator creates an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that uses
AWS Fargate. The cluster is deployed successfully. The SysOps administrator needs to manage the
cluster by using the kubectl command line tool.
Which of the following must be configured on the SysOps administrator's machine so that kubectl
can communicate with the cluster API server?

  • A. The kubeconfig file
  • B. The kube-proxy Amazon EKS add-on
  • C. The Fargate profile
  • D. The eks-connector.yaml file
Answer: A


Explanation:
The kubeconfig file is a configuration file used to store cluster authentication information, which is
required to make requests to the Amazon EKS cluster API server. The kubeconfig file will need to be
configured on the SysOps administrator's machine in order for kubectl to be able to communicate
with the cluster API server.
Reference: https://aws.amazon.com/blogs/developer/running-a-kubernetes-job-in-amazon-eks-on-aws-fargate-using-aws-stepfunctions/
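
In practice, the kubeconfig entry is usually generated with the AWS CLI command aws eks update-kubeconfig --region <region> --name <cluster>. As a rough illustration of what that command gathers, the boto3 sketch below fetches the cluster endpoint and certificate authority data that the kubeconfig file must contain; the cluster name and Region are placeholders.

Example (Python sketch):
import boto3

# Hypothetical cluster name and Region; substitute your own values.
eks = boto3.client("eks", region_name="us-east-1")
cluster = eks.describe_cluster(name="my-eks-cluster")["cluster"]

# A kubeconfig entry needs the API server endpoint and the cluster CA
# certificate; `aws eks update-kubeconfig` writes these (plus an IAM
# authentication section) into ~/.kube/config for kubectl to use.
print("API server endpoint:", cluster["endpoint"])
print("Cluster CA data:", cluster["certificateAuthority"]["data"][:40] + "...")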


Question 2

[Security and Compliance]
A SysOps administrator needs to configure automatic rotation for Amazon RDS database credentials.
The credentials must rotate every 30 days. The solution must integrate with Amazon RDS.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Store the credentials in AWS Systems Manager Parameter Store as a secure string. Configure automatic rotation with a rotation interval of 30 days.
  • B. Store the credentials in AWS Secrets Manager. Configure automatic rotation with a rotation interval of 30 days.
  • C. Store the credentials in a file in an Amazon S3 bucket. Deploy an AWS Lambda function to automatically rotate the credentials every 30 days.
  • D. Store the credentials in AWS Secrets Manager. Deploy an AWS Lambda function to automatically rotate the credentials every 30 days.
Answer: B


Explanation:
Storing the credentials in AWS Secrets Manager and configuring automatic rotation with a rotation
interval of 30 days is the most efficient way to meet the requirements with the least operational
overhead. AWS Secrets Manager automatically rotates the credentials at the specified interval, so
there is no need for an additional AWS Lambda function or manual rotation. Additionally, Secrets
Manager is integrated with Amazon RDS, so the credentials can be easily used with the RDS
database.
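
A minimal boto3 sketch of enabling the 30-day rotation on an existing secret; the secret ID and rotation Lambda ARN are placeholders (for RDS secrets, Secrets Manager can create the rotation function for you when you enable rotation in the console).

Example (Python sketch):
import boto3

secrets = boto3.client("secretsmanager")

# Hypothetical secret ID and rotation function ARN.
secrets.rotate_secret(
    SecretId="prod/rds/app-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-rotator",
    RotationRules={"AutomaticallyAfterDays": 30},
)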


Question 3

[Deployment, Provisioning, and Automation]
A company has an application that runs only on Amazon EC2 Spot Instances. The instances run in an
Amazon EC2 Auto Scaling group with scheduled scaling actions.
However, the capacity does not always increase at the scheduled times, and instances terminate
many times a day. A SysOps administrator must ensure that the instances launch on time and have
fewer interruptions.
Which action will meet these requirements?

  • A. Specify the capacity-optimized allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.
  • B. Specify the capacity-optimized allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.
  • C. Specify the lowest-price allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.
  • D. Specify the lowest-price allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.
Answer: A


Explanation:
Specifying the capacity-optimized allocation strategy for Spot Instances and adding more instance
types to the Auto Scaling group is the best action to meet the requirements. The capacity-optimized
strategy launches instances into the Spot capacity pools with the most spare capacity, and
diversifying across more instance types gives the Auto Scaling group more pools to draw from, which
improves launch reliability and reduces interruptions. Increasing the size of the instances in the
Auto Scaling group will not necessarily help with launch time or reduce interruptions, as the Spot
Instances could still be interrupted even with larger instance sizes.
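
A sketch of this configuration with boto3, assuming an existing launch template; the group name, subnets, and instance types are illustrative only.

Example (Python sketch):
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="spot-app-asg",
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # hypothetical subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "app-spot-template",  # hypothetical
                "Version": "$Latest",
            },
            # Diversify across several instance types (more Spot pools).
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "m4.large"},
                {"InstanceType": "c5.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,  # all Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)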


Question 4

[Monitoring, Reporting, and Automation]
A company stores its data in an Amazon S3 bucket. The company is required to classify the data and
find any sensitive personal information in its S3 files.
Which solution will meet these requirements?

  • A. Create an AWS Config rule to discover sensitive personal information in the S3 files and mark them as noncompliant.
  • B. Create an S3 event-driven artificial intelligence/machine learning (AI/ML) pipeline to classify sensitive personal information by using Amazon Rekognition.
  • C. Enable Amazon GuardDuty. Configure S3 protection to monitor all data inside Amazon S3.
  • D. Enable Amazon Macie. Create a discovery job that uses the managed data identifier.
Answer: D


Explanation:
Amazon Macie is a security service that uses machine learning to automatically discover, classify,
and protect sensitive data stored in Amazon S3. Creating a discovery job that uses the managed data
identifiers allows Macie to identify sensitive personal information in the S3 files and classify it
accordingly. AWS Config and Amazon GuardDuty will not meet this requirement because they are not
designed to automatically classify and protect data.
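
A rough boto3 sketch of a one-time Macie discovery job that uses the managed data identifiers; the account ID and bucket name are placeholders, and Macie must already be enabled in the account.

Example (Python sketch):
import boto3

macie = boto3.client("macie2")

macie.create_classification_job(
    jobType="ONE_TIME",
    name="pii-discovery-job",
    # Use all of Macie's managed data identifiers (built-in detectors
    # for names, credentials, financial data, and so on).
    managedDataIdentifierSelector="ALL",
    s3JobDefinition={
        "bucketDefinitions": [
            {
                "accountId": "123456789012",         # hypothetical account ID
                "buckets": ["example-data-bucket"],  # hypothetical bucket
            }
        ]
    },
)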


Question 5

[Monitoring, Reporting, and Automation]
A company has an application that customers use to search for records on a website. The
application's data is stored in an Amazon Aurora DB cluster. The application's usage varies by season
and by day of the week.
The website's popularity is increasing, and the website is experiencing slower performance because
of increased load on the DB cluster during periods of peak activity. The application logs show that the
performance issues occur when users are searching for information. The same search is rarely
performed multiple times.
A SysOps administrator must improve the performance of the platform by using a solution that
maximizes resource efficiency.
Which solution will meet these requirements?

  • A. Deploy an Amazon ElastiCache for Redis cluster in front of the DB cluster. Modify the application to check the cache before the application issues new queries to the database. Add the results of any queries to the cache.
  • B. Deploy an Aurora Replica for the DB cluster. Modify the application to use the reader endpoint for search operations. Use Aurora Auto Scaling to scale the number of replicas based on load.
  • C. Use Provisioned IOPS on the storage volumes that support the DB cluster to improve performance sufficiently to support the peak load on the application.
  • D. Increase the instance size in the DB cluster to a size that is sufficient to support the peak load on the application. Use Aurora Auto Scaling to scale the instance size based on load.
Answer: A


Explanation:
Step-by-Step
Understand the Problem:
The application experiences slower performance during peak activity due to increased load on the
Amazon Aurora DB cluster.
Performance issues occur primarily during search operations.
The goal is to improve performance and maximize resource efficiency.
Analyze the Requirements:
The solution should improve the performance of the platform.
It should maximize resource efficiency, which implies cost-effective and scalable options.
Evaluate the Options:
Option A: Deploy an Amazon ElastiCache for Redis cluster.
ElastiCache for Redis is a managed in-memory caching service that can significantly reduce the load
on the database by caching frequently accessed data.
By modifying the application to check the cache before querying the database, repeated searches for
the same information will be served from the cache, reducing the number of database reads.
This is efficient and cost-effective as it reduces database load and improves response times.
Option B: Deploy an Aurora Replica and use Auto Scaling.
Adding Aurora Replicas can help distribute read traffic and improve performance.
Aurora Auto Scaling can adjust the number of replicas based on the load.
However, this option may not be as efficient in terms of resource usage compared to caching
because it still involves querying the database.
Option C: Use Provisioned IOPS.
Provisioned IOPS can improve performance by providing fast and consistent I/O.
This option focuses on improving the underlying storage performance but doesn't address the
inefficiency of handling repeated searches directly.
Option D: Increase the instance size and use Auto Scaling.
Increasing the instance size can provide more resources to handle peak loads.
Aurora Auto Scaling can adjust instance sizes based on the load.
This option can be costly and may not be as efficient as caching in handling repeated searches.
Select the Best Solution:
Option A is the best solution because it leverages caching to reduce the load on the database, which
directly addresses the issue of repeated searches causing performance problems. Caching is
generally more resource-efficient and cost-effective compared to scaling database instances or
storage.
Amazon ElastiCache for Redis Documentation
Amazon Aurora Documentation
AWS Auto Scaling
Using ElastiCache for Redis aligns with best practices for improving application performance by
offloading repetitive read queries from the database, leading to faster response times and more
efficient resource usage.
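
A minimal cache-aside sketch in Python, assuming a Redis endpoint and a run_db_query helper that performs the actual Aurora search; the endpoint, key scheme, and TTL are all placeholders to adapt.

Example (Python sketch):
import json
import redis  # pip install redis

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

CACHE_TTL_SECONDS = 300  # tune to how fresh results must be

def search(term: str) -> list:
    key = f"search:{term}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)      # cache hit: skip the database
    results = run_db_query(term)       # cache miss: query Aurora (assumed helper)
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(results))
    return results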


Question 6

[Security and Compliance]
The security team is concerned because the number of AWS Identity and Access Management (IAM)
policies being used in the environment is increasing. The team has tasked a SysOps administrator
with reporting on the current number of IAM policies in use and the total available IAM policies.
Which AWS service should the administrator use to check how current IAM policy usage compares to
current service limits?

  • A. AWS Trusted Advisor
  • B. Amazon Inspector
  • C. AWS Config
  • D. AWS Organizations
Answer: A


Explanation:
Step-by-Step
Understand the Problem:
The security team is concerned about the increasing number of IAM policies.
The task is to report on the current number of IAM policies and compare them to the service limits.
Analyze the Requirements:
The solution should help in checking the usage of IAM policies against the service limits.
Evaluate the Options:
Option A: AWS Trusted Advisor
AWS Trusted Advisor provides real-time guidance to help you provision your resources following AWS
best practices.
It includes a service limits check that alerts you when you are approaching the limits of your AWS
service usage, including IAM policies.
Option B: Amazon Inspector
Amazon Inspector is an automated security assessment service that helps improve the security and
compliance of applications deployed on AWS. It does not report on IAM policy usage.
Option C: AWS Config
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your
AWS resources. While useful for compliance, it does not provide a comparison against service limits.
Option D: AWS Organizations
AWS Organizations helps you centrally manage and govern your environment as you grow and scale
your AWS resources. It does not provide insights into IAM policy limits.
Select the Best Solution:
Option A: AWS Trusted Advisor is the correct answer because it includes a service limits check that
can report on the current number of IAM policies in use and compare them to the service limits.
AWS Trusted Advisor Documentation
IAM Service Limits
AWS Trusted Advisor is the appropriate tool for monitoring IAM policy usage and comparing it
against service limits, providing the necessary insights to manage and optimize IAM policies
effectively.
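
Trusted Advisor data can also be read programmatically through the AWS Support API. A minimal sketch, assuming a Business-level or higher support plan; the category string used to filter is an assumption based on the published check categories.

Example (Python sketch):
import boto3

# The AWS Support API is only available in us-east-1 and requires a
# Business, Enterprise On-Ramp, or Enterprise Support plan.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    if check["category"] == "service_limits":
        print(check["id"], check["name"])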


Question 7

[Deployment, Provisioning, and Automation]
A company has a stateless application that is hosted on a fleet of 10 Amazon EC2 On-Demand
Instances in an Auto Scaling group. A minimum of 6 instances are needed to meet service
requirements.
Which action will maintain uptime for the application MOST cost-effectively?

  • A. Use a Spot Fleet with an On-Demand capacity of 6 instances.
  • B. Update the Auto Scaling group with a minimum of 6 On-Demand Instances and a maximum of 10 On-Demand Instances.
  • C. Update the Auto Scaling group with a minimum of 1 On-Demand Instance and a maximum of 6 On-Demand Instances.
  • D. Use a Spot Fleet with a target capacity of 6 instances.
Answer: A


Explanation:
Step-by-Step
Understand the Problem:
The company has a stateless application on 10 EC2 On-Demand Instances in an Auto Scaling group.
At least 6 instances are needed to meet service requirements.
The goal is to maintain uptime cost-effectively.
Analyze the Requirements:
Maintain a minimum of 6 instances to meet service requirements.
Optimize costs by using a mix of instance types.
Evaluate the Options:
Option A: Use a Spot Fleet with an On-Demand capacity of 6 instances.
Spot Fleets allow you to request a combination of On-Demand and Spot Instances.
Ensuring a minimum of 6 On-Demand Instances guarantees the required capacity while leveraging
lower-cost Spot Instances to meet additional demand.
Option B: Update the Auto Scaling group with a minimum of 6 On-Demand Instances and a maximum
of 10 On-Demand Instances.
This option ensures the minimum required capacity but does not optimize costs since it only uses On-
Demand Instances.
Option C: Update the Auto Scaling group with a minimum of 1 On-Demand Instance and a maximum
of 6 On-Demand Instances.
This does not meet the requirement of maintaining at least 6 instances at all times.
Option D: Use a Spot Fleet with a target capacity of 6 instances.
This option relies entirely on Spot Instances, which may not always be available, risking insufficient
capacity.
Select the Best Solution:
Option A: Using a Spot Fleet with an On-Demand capacity of 6 instances ensures the necessary
uptime with a cost-effective mix of On-Demand and Spot Instances.
Amazon EC2 Auto Scaling
Amazon EC2 Spot Instances
Spot Fleet Documentation
Using a Spot Fleet with a combination of On-Demand and Spot Instances offers a cost-effective
solution while ensuring the required minimum capacity for the application.
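
A rough boto3 sketch of such a fleet: a total capacity of 10 with an On-Demand floor of 6, so the Spot portion only adds capacity on top of the guaranteed baseline. The fleet role and launch template names are placeholders.

Example (Python sketch):
import boto3

ec2 = boto3.client("ec2")

ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        # Hypothetical fleet role; Spot Fleet needs permission to launch
        # and tag instances on your behalf.
        "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
        "TargetCapacity": 10,          # total instances to maintain
        "OnDemandTargetCapacity": 6,   # guaranteed On-Demand floor
        "AllocationStrategy": "capacityOptimized",
        "LaunchTemplateConfigs": [
            {
                "LaunchTemplateSpecification": {
                    "LaunchTemplateName": "stateless-app-template",  # hypothetical
                    "Version": "$Latest",
                }
            }
        ],
    }
)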


Question 8

[Monitoring, Reporting, and Automation]
A SysOps administrator has launched a large general purpose Amazon EC2 instance to regularly
process large data files. The instance has an attached 1 TB General Purpose SSD (gp2) Amazon Elastic
Block Store (Amazon EBS) volume. The instance is also EBS-optimized. To save costs, the SysOps
administrator stops the instance each evening and restarts the instance each morning.
When data processing is active, Amazon CloudWatch metrics on the instance show a consistent 3,000
VolumeReadOps. The SysOps administrator must improve the I/O performance while ensuring data
integrity.
Which action will meet these requirements?

  • A. Change the instance type to a large, burstable, general purpose instance.
  • B. Change the instance type to an extra large general purpose instance.
  • C. Increase the EBS volume to a 2 TB General Purpose SSD (gp2) volume.
  • D. Move the data that resides on the EBS volume to the instance store.
Answer: C


Explanation:
Step-by-Step
Understand the Problem:
The EC2 instance processes large data files and uses a 1 TB General Purpose SSD (gp2) EBS volume.
CloudWatch metrics show consistent high VolumeReadOps.
The requirement is to improve I/O performance while ensuring data integrity.
Analyze the Requirements:
Improve I/O performance.
Maintain data integrity.
Evaluate the Options:
Option A: Change the instance type to a large, burstable, general-purpose instance.
Burstable instances provide a baseline level of CPU performance with the ability to burst to a higher
level when needed. However, this does not address the I/O performance directly.
Option B: Change the instance type to an extra-large general-purpose instance.
A larger instance type might improve performance, but it does not directly address the I/O
performance of the EBS volume.
Option C: Increase the EBS volume to a 2 TB General Purpose SSD (gp2) volume.
Increasing the size of a General Purpose SSD (gp2) volume can increase its IOPS. The larger the
volume, the higher the baseline performance in terms of IOPS.
Option D: Move the data that resides on the EBS volume to the instance store.
Instance store volumes provide high I/O performance but are ephemeral, meaning data will be lost if
the instance is stopped or terminated. This does not ensure data integrity.
Select the Best Solution:
Option C: Increasing the EBS volume size to 2 TB will provide higher IOPS, improving I/O
performance while maintaining data integrity.
Amazon EBS Volume Types
General Purpose SSD (gp2) Volumes
Increasing the size of the General Purpose SSD (gp2) volume is an effective way to improve I/O
performance while ensuring data integrity remains intact.
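
For reference, gp2 baseline performance scales linearly with volume size at 3 IOPS per GiB (minimum 100 IOPS, capped at 16,000 IOPS). A 1 TiB volume therefore has a baseline of 1,024 GiB × 3 = 3,072 IOPS, which the observed 3,000 VolumeReadOps is already pressing against, while a 2 TiB volume doubles the baseline to 2,048 GiB × 3 = 6,144 IOPS.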


Question 9

[Monitoring, Reporting, and Automation]
With the threat of ransomware viruses encrypting and holding company data hostage, which action
should be taken to protect an Amazon S3 bucket?

  • A. Deny Post, Put, and Delete on the bucket.
  • B. Enable server-side encryption on the bucket.
  • C. Enable Amazon S3 versioning on the bucket.
  • D. Enable snapshots on the bucket.
Answer: C


Explanation:
Step-by-Step
Understand the Problem:
The threat of ransomware encrypting and holding company data hostage.
Need to protect an Amazon S3 bucket.
Analyze the Requirements:
Ensure that data in the S3 bucket is protected against unauthorized encryption or deletion.
Evaluate the Options:
Option A: Deny Post, Put, and Delete on the bucket.
Denying these actions would prevent any uploads or modifications to the bucket, making it unusable.
Option B: Enable server-side encryption on the bucket.
Server-side encryption protects data at rest but does not prevent the encryption of data by
ransomware.
Option C: Enable Amazon S3 versioning on the bucket.
S3 versioning keeps multiple versions of an object in the bucket. If a file is overwritten or encrypted
by ransomware, previous versions of the file can still be accessed.
Option D: Enable snapshots on the bucket.
Amazon S3 does not have a snapshot feature; this option is not applicable.
Select the Best Solution:
Option C: Enabling Amazon S3 versioning is the best solution as it allows access to previous versions
of objects, providing protection against ransomware encryption by retaining prior, unencrypted
versions.
Amazon S3 Versioning
Best Practices for Protecting Data with Amazon S3
Enabling S3 versioning ensures that previous versions of objects are preserved, providing a safeguard
against ransomware by allowing recovery of unencrypted versions of data.
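
Enabling versioning is a single API call; a minimal boto3 sketch with a placeholder bucket name.

Example (Python sketch):
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name. With versioning enabled, an overwrite or
# delete creates a new version (or delete marker) instead of destroying
# the previous object, so pre-ransomware versions remain recoverable.
s3.put_bucket_versioning(
    Bucket="example-data-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)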


Question 10

[High Availability, Backup, and Recovery]
A SysOps administrator is evaluating Amazon Route 53 DNS options to address concerns about high
availability for an on-premises website. The website consists of two servers: a primary active server
and a secondary passive server. Route 53 should route traffic to the primary server if the associated
health check returns 2xx or 3xx HTTP codes. All other traffic should be directed to the secondary
passive server. The failover record type, set ID, and routing policy have been set appropriately for
both primary and secondary servers.
Which next step should be taken to configure Route 53?

  • A. Create an A record for each server. Associate the records with the Route 53 HTTP health check.
  • B. Create an A record for each server. Associate the records with the Route 53 TCP health check.
  • C. Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 HTTP health check.
  • D. Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 TCP health check.
Answer: C


Explanation:
To configure Route 53 for high availability with failover between a primary and a secondary server,
the following steps should be taken:
Create Health Checks:
Create HTTP health checks for both the primary and secondary servers. Ensure these health checks
are configured to look for HTTP 2xx or 3xx status codes.
Reference: Creating and Updating Health Checks
Create Alias Records:
Create an alias record for the primary server. Set "Evaluate Target Health" to Yes. Associate this
record with the primary server's HTTP health check.
Create an alias record for the secondary server. Set "Evaluate Target Health" to Yes. Associate this
record with the secondary server's HTTP health check.
Reference: Creating Records by Using the Amazon Route 53 Console
Set Routing Policy:
Ensure the routing policy for both records is set to "Failover."
Assign appropriate "Set IDs" and configure the primary record as the primary failover record and the
secondary record as the secondary failover record.
Reference: Route 53 Routing Policies
Test Configuration:
Test the failover configuration to ensure that when the primary server health check fails, traffic is
routed to the secondary server.
Reference: Testing Failover
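
A rough boto3 sketch of the primary side of this setup (the secondary record is analogous with Failover set to SECONDARY); the hosted zone ID, record names, and server IP are placeholders, and the alias target is assumed to be another record in the same hosted zone that points at the primary server.

Example (Python sketch):
import boto3

route53 = boto3.client("route53")

# HTTP health check for the primary server; 2xx/3xx responses pass.
health = route53.create_health_check(
    CallerReference="primary-check-001",  # any unique string
    HealthCheckConfig={
        "Type": "HTTP",
        "IPAddress": "203.0.113.10",  # hypothetical primary server IP
        "Port": 80,
        "ResourcePath": "/",
    },
)

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "Z1EXAMPLE",
                    "DNSName": "primary.example.com",  # hypothetical target record
                    "EvaluateTargetHealth": True,
                },
                "HealthCheckId": health["HealthCheck"]["Id"],
            },
        }]
    },
)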


Question 11

[Monitoring, Reporting, and Automation]
A SysOps administrator noticed that a large number of Elastic IP addresses are being created on the
company's AWS account, but they are not being associated with Amazon EC2 instances, and are
incurring Elastic IP address charges in the monthly bill.
How can the administrator identify who is creating the Elastic IP addresses?

  • A. Attach a cost-allocation tag to each requested Elastic IP address with the IAM user name of the developer who creates it.
  • B. Query AWS CloudTrail logs by using Amazon Athena to search for Elastic IP address events.
  • C. Create a CloudWatch alarm on the EIPCreated metric and send an Amazon SNS notification when the alarm triggers.
  • D. Use Amazon Inspector to get a report of all Elastic IP addresses created in the last 30 days.
Answer: B


Explanation:
To identify who is creating the Elastic IP addresses, the following steps should be taken:
Enable CloudTrail Logging:
Ensure AWS CloudTrail is enabled to log all API activities in your AWS account.
Reference: Setting Up AWS CloudTrail
Create an Athena Table for CloudTrail Logs:
Set up an Athena table that points to the S3 bucket where CloudTrail logs are stored.
Reference: Creating Tables in Athena
Query CloudTrail Logs:
Use Athena to run SQL queries to search for AllocateAddress events, which represent the creation of
Elastic IP addresses.
Example Query:
SELECT userIdentity.userName, eventTime, eventSource, eventName, requestParameters
FROM cloudtrail_logs
WHERE eventName = 'AllocateAddress';
Reference: Analyzing AWS CloudTrail Logs
Review Results:
Review the results to identify which IAM user or role is creating the Elastic IP addresses.
Reference: AWS CloudTrail Log Analysis


Question 12

[Monitoring, Reporting, and Automation]
A company has an Amazon CloudFront distribution that uses an Amazon S3 bucket as its origin.
During a review of the access logs, the company determines that some requests are going directly to
the S3 bucket by using the website hosting endpoint. A SysOps administrator must secure the S3
bucket to allow requests only from CloudFront.
What should the SysOps administrator do to meet this requirement?

  • A. Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Remove access to and from other principals in the S3 bucket policy. Update the S3 bucket policy to allow access only from the OAI.
  • B. Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
  • C. Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
  • D. Update the S3 bucket policy to allow access only from the CloudFront distribution. Remove access to and from other principals in the S3 bucket policy. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
Answer: A


Explanation:
To secure the S3 bucket and allow access only from CloudFront, the following steps should be taken:
Create an OAI in CloudFront:
In the CloudFront console, create an origin access identity (OAI) and associate it with your
CloudFront distribution.
Reference: Restricting Access to S3 Buckets
Update S3 Bucket Policy:
Modify the S3 bucket policy to allow access only from the OAI. This involves adding a policy
statement that grants the OAI permission to get objects from the bucket and removing any other
public access permissions.
Example Policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}
Reference: Bucket Policy Examples
Test Configuration:
Ensure that the S3 bucket is not publicly accessible and that requests to the bucket through the
CloudFront distribution are successful.
Reference: Testing CloudFront OAI


Question 13

[Security and Compliance]
A SysOps administrator must create an IAM policy for a developer who needs access to specific AWS
services. Based on the requirements, the SysOps administrator creates the following policy:

Which actions does this policy allow? (Select TWO.)

  • A. Create an AWS Storage Gateway.
  • B. Create an IAM role for an AWS Lambda function.
  • C. Delete an Amazon Simple Queue Service (Amazon SQS) queue.
  • D. Describe AWS load balancers.
  • E. Invoke an AWS Lambda function.
Answer: D, E


Explanation:
The provided IAM policy grants the following permissions:
Describe AWS Load Balancers:
The policy allows actions with the prefix elasticloadbalancing:. This includes actions like
DescribeLoadBalancers and other Describe* actions related to Elastic Load Balancing.
Reference: Elastic Load Balancing API Actions
Invoke AWS Lambda Function:
The policy allows actions with the prefix lambda:, which includes InvokeFunction and other actions
that allow listing and describing Lambda functions.
Reference: AWS Lambda API Actions
The actions related to AWS Storage Gateway (create), IAM role (create), and Amazon SQS (delete)
are not allowed by this policy. The policy only grants describe/list permissions for storagegateway,
elasticloadbalancing, lambda, and list permissions for SQS.


Question 14

[Networking and Content Delivery]
A company is trying to connect two applications. One application runs in an on-premises data center
that has a hostname of host1.onprem.private. The other application runs on an Amazon EC2 instance
that has a hostname of host1.awscloud.private. An AWS Site-to-Site VPN connection is in place
between the on-premises network and AWS.
The application that runs in the data center tries to connect to the application that runs on the EC2
instance, but DNS resolution fails. A SysOps administrator must implement DNS resolution between
on-premises and AWS resources.
Which solution allows the on-premises application to resolve the EC2 instance hostname?

  • A. Set up an Amazon Route 53 inbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the inbound resolver endpoint.
  • B. Set up an Amazon Route 53 inbound resolver endpoint. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the inbound resolver endpoint.
  • C. Set up an Amazon Route 53 outbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the outbound resolver endpoint.
  • D. Set up an Amazon Route 53 outbound resolver endpoint. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the outbound resolver endpoint.
Answer: A


Explanation:
Step-by-Step
Understand the Problem:
There are two applications, one in an on-premises data center and the other on an Amazon EC2
instance.
DNS resolution fails when the on-premises application tries to connect to the EC2 instance.
The goal is to implement DNS resolution between on-premises and AWS resources.
Analyze the Requirements:
Need to resolve the hostname of the EC2 instance from the on-premises network.
Utilize the existing AWS Site-to-Site VPN connection for DNS queries.
Evaluate the Options:
Option A: Set up an Amazon Route 53 inbound resolver endpoint with a forwarding rule for the
onprem.private hosted zone.
This allows DNS queries from on-premises to be forwarded to Route 53 for resolution.
The resolver endpoint is associated with the VPC, enabling resolution of AWS resources.
Option B: Set up an Amazon Route 53 inbound resolver endpoint without specifying the forwarding
rule.
This option does not address the specific need to resolve onprem.private DNS queries.
Option C: Set up an Amazon Route 53 outbound resolver endpoint.
Outbound resolver endpoints are used for forwarding DNS queries from AWS to on-premises, not
vice versa.
Option D: Set up an Amazon Route 53 outbound resolver endpoint without specifying the forwarding
rule.
Similar to Option C, this does not meet the requirement of resolving on-premises queries in AWS.
Select the Best Solution:
Option A: Setting up an inbound resolver endpoint with a forwarding rule for onprem.private and
associating it with the VPC ensures that DNS queries from on-premises can resolve AWS resources
effectively.
Amazon Route 53 Resolver
Integrating AWS and On-Premises Networks with Route 53
Using an Amazon Route 53 inbound resolver endpoint with a forwarding rule ensures that on-
premises applications can resolve EC2 instance hostnames effectively.
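
A rough boto3 sketch of creating the inbound resolver endpoint in the VPC of the EC2 instance; the security group and subnet IDs are placeholders, and the on-premises DNS server would then be configured to forward the relevant queries to the endpoint's IP addresses.

Example (Python sketch):
import boto3

resolver = boto3.client("route53resolver")

endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="inbound-endpoint-001",  # any unique string
    Name="onprem-to-aws-inbound",
    Direction="INBOUND",  # accepts DNS queries coming from on premises
    # Security group must allow DNS (TCP/UDP 53) from the on-premises CIDR.
    SecurityGroupIds=["sg-0123456789abcdef0"],  # hypothetical
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111"},  # hypothetical subnets in the
        {"SubnetId": "subnet-0bbb2222"},  # EC2 instance's VPC
    ],
)
print(endpoint["ResolverEndpoint"]["Id"])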


Question 15

[Security and Compliance]
A large company is using AWS Organizations to manage its multi-account AWS environment.
According to company policy, all users should have read-level access to a particular Amazon S3
bucket in a central account. The S3 bucket data should not be available outside the organization. A
SysOps administrator must set up the permissions and add a bucket policy to the S3 bucket.
Which parameters should be specified to accomplish this in the MOST efficient manner?

  • A. Specify "*" as the principal and PrincipalOrgId as a condition.
  • B. Specify all account numbers as the principal.
  • C. Specify PrincipalOrgId as the principal.
  • D. Specify the organization's management account as the principal.
Answer: A


Explanation:
Step-by-Step
Understand the Problem:
Ensure all users in the organization have read-level access to a specific S3 bucket.
The data should not be accessible outside the organization.
Analyze the Requirements:
Grant read access to users within the organization.
Prevent access from outside the organization.
Evaluate the Options:
Option A: Specify "*" as the principal and PrincipalOrgId as a condition.
This grants access to all AWS principals but restricts it to those within the specified organization using
the PrincipalOrgId condition.
Option B: Specify all account numbers as the principal.
This is impractical for a large organization and requires constant updates if accounts are added or
removed.
Option C: Specify PrincipalOrgId as the principal.
The PrincipalOrgId condition must be used within a policy, not as a principal.
Option D: Specify the organization's management account as the principal.
This grants access only to the management account, not to all users within the organization.
Select the Best Solution:
Option A: Using "*" as the principal with the PrincipalOrgId condition ensures all users within the
organization have the required access while preventing external access.
Amazon S3 Bucket Policies
AWS Organizations Policy Examples
Using "*" as the principal with the PrincipalOrgId condition efficiently grants read access to the S3
bucket for all users within the organization.
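
A minimal sketch of such a bucket policy applied with boto3; the bucket name and organization ID (o-exampleorgid) are placeholders, and the condition key is aws:PrincipalOrgID.

Example (Python sketch):
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",  # any principal...
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-central-bucket",
            "arn:aws:s3:::example-central-bucket/*",
        ],
        # ...but only if it belongs to this AWS organization.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
    }],
}

s3.put_bucket_policy(Bucket="example-central-bucket", Policy=json.dumps(policy))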
