Amazon AWS Certified DevOps Engineer - Professional Exam Practice Test

Last exam update: Nov 18, 2025

Question 1

A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB).
The ALB routes requests to an AWS Lambda function. Many different versions of the application are
in use at any given time, including versions that are in testing by a subset of users. The version of the
application is defined in the user-agent header that is sent with all requests to the API.
After a series of recent changes to the API, the company has observed issues with the application.
The company needs to gather a metric for each API operation by response code for each version of
the application that is in use. A DevOps engineer has modified the Lambda function to extract the
API operation name, the version information from the user-agent header, and the response code.
Which additional set of actions should the DevOps engineer take to gather the required metrics?

  • A. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
  • B. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric.
  • C. Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
  • D. Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.
Answer: A


Explanation:
"Note that the metric filter is different from a log insights query, where the experience is interactive
and provides immediate search results for the user to investigate. No automatic action can be
invoked from an insights query. Metric filters, on the other hand, will generate metric data in the
form of a time series. This lets you create alarms that integrate into your ITSM processes, execute
AWS Lambda functions, or even create anomaly detection models."
https://aws.amazon.com/blogs/mt/quantify-custom-application-metrics-with-amazon-cloudwatch-logs-and-metric-filters/
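
As a rough sketch of option A, assume the Lambda function writes a structured JSON log line per request; a metric filter can then extract the dimensions directly from the log fields. The log group, namespace, and field names below are hypothetical.

```python
import boto3

logs = boto3.client("logs")

# Assumes the Lambda function logs JSON lines such as:
# {"operation": "GetOrders", "statusCode": 200, "appVersion": "2.3.1"}
logs.put_metric_filter(
    logGroupName="/aws/lambda/api-handler",      # hypothetical log group
    filterName="ApiOperationsByVersion",
    filterPattern='{ $.operation = "*" }',       # match every logged API call
    metricTransformations=[
        {
            "metricName": "ApiOperationCount",
            "metricNamespace": "MobileApp/Api",  # hypothetical namespace
            "metricValue": "1",                  # increment by 1 per matching line
            # Dimensions pull their values straight from the JSON log fields.
            "dimensions": {
                "Operation": "$.operation",
                "ResponseCode": "$.statusCode",
                "AppVersion": "$.appVersion",
            },
        }
    ],
)
```

Note that CloudWatch Logs metric filters allow at most three dimensions, which is exactly what this question needs.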


Question 2

A company provides an application to customers. The application has an Amazon API Gateway REST
API that invokes an AWS Lambda function. On initialization, the Lambda function loads a large
amount of data from an Amazon DynamoDB table. The data load process results in long cold-start
times of 8-10 seconds. The DynamoDB table has DynamoDB Accelerator (DAX) configured.
Customers report that the application intermittently takes a long time to respond to requests. The
application receives thousands of requests throughout the day. In the middle of the day, the
application experiences 10 times more requests than at any other time of the day. Near the end of
the day, the application's request volume decreases to 10% of its normal total.
A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day.
Which solution will meet these requirements?

  • A. Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.
  • B. Configure reserved concurrency on the Lambda function with a concurrency value of 0.
  • C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
  • D. Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.
Answer: C


Explanation:
The following are the steps that the DevOps engineer should take to reduce the latency of the
Lambda function at all times of the day:
Configure provisioned concurrency on the Lambda function.
Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency
values set to a minimum of 1 and a maximum of 100.
Provisioned concurrency keeps a configured number of execution environments initialized and ready
to respond, which eliminates cold starts for those environments. Application Auto Scaling then
adjusts the provisioned concurrency between the minimum of 1 and the maximum of 100 based on
utilization, so the function can absorb the tenfold midday spike and scale back down when traffic
drops near the end of the day.
The following are the reasons why the other options are not correct:
Option A is incorrect because a provisioned concurrency value of 1 cannot absorb the midday spike,
and deleting the DAX cluster would increase the DynamoDB read latency that the cache currently
hides.
Option B is incorrect because a reserved concurrency value of 0 prevents the function from being
invoked at all.
Option D is incorrect because reserved concurrency does not keep execution environments
initialized, and Application Auto Scaling does not manage reserved concurrency on an API Gateway
API.
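
A minimal sketch of option C with boto3; the function name, alias, and target utilization are hypothetical.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the alias's provisioned concurrency as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId="function:order-api:prod",  # hypothetical function:alias
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,    # always at least one warm environment, so no cold starts
    MaxCapacity=100,  # headroom for the 10x midday spike
)

# Target tracking keeps provisioned concurrency utilization near 70%.
autoscaling.put_scaling_policy(
    PolicyName="order-api-provisioned-concurrency",
    ServiceNamespace="lambda",
    ResourceId="function:order-api:prod",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.7,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)
```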


Question 3

A company is adopting AWS CodeDeploy to automate its application deployments for a Java Apache
Tomcat application with an Apache web server. The development team started with a proof of
concept, created a deployment group for a developer environment, and performed functional tests
within the application. After completion, the team will create additional deployment groups for
staging and production.
The current log level is configured within the Apache settings, but the team wants to change this
configuration dynamically when the deployment occurs, so that they can set different log level
configurations depending on the deployment group without having a different application revision
for each group.
How can these requirements be met with the LEAST management overhead and without requiring
different script versions for each deployment group?

  • A. Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.
  • B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
  • C. Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.
  • D. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.
Answer: B


Explanation:
The following are the steps that the company can take to change the log level dynamically when the
deployment occurs:
Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to
identify which deployment group the instance is part of.
Use this information to configure the log level settings.
Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
The DEPLOYMENT_GROUP_NAME environment variable is automatically set by CodeDeploy when
the deployment is triggered. This means that the script does not need to call the metadata service or
the EC2 API to identify the deployment group.
This solution is the least complex and requires the least management overhead. It also does not
require different script versions for each deployment group.
The following are the reasons why the other options are not correct:
Option A is incorrect because tagging instances and querying the metadata service and the EC2 API
adds avoidable management overhead; CodeDeploy already exposes the deployment group name to
hook scripts.
Option C is incorrect because CodeDeploy does not support user-defined custom environment
variables, and the ValidateService hook runs after installation, which is too late to set
installation-time configuration.
Option D is incorrect because the Install lifecycle event is reserved for CodeDeploy itself to copy
the revision files and cannot run custom scripts, and DEPLOYMENT_GROUP_ID is an opaque
identifier that is harder to map to an environment than the group name.
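
A sketch of the BeforeInstall hook script from option B, written in Python; the log-level mapping and the config file path are hypothetical. CodeDeploy exports DEPLOYMENT_GROUP_NAME into the environment of every lifecycle hook, and the script would be referenced under the BeforeInstall hook in appspec.yml.

```python
#!/usr/bin/env python3
"""BeforeInstall hook: set the Apache log level per deployment group."""
import os

# CodeDeploy sets this variable automatically for every lifecycle hook.
group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")

# Hypothetical mapping from deployment group name to Apache LogLevel.
log_levels = {"developer": "debug", "staging": "info", "production": "warn"}
level = log_levels.get(group, "warn")

# Hypothetical config fragment picked up by the Apache configuration.
with open("/etc/httpd/conf.d/loglevel.conf", "w") as f:
    f.write(f"LogLevel {level}\n")
```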


Question 4

A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an
account to indicate a desired backup frequency. This requirement includes EBS volumes that do not
require backups. The company uses custom tags named Backup_Frequency that have values of none,
daily, or weekly that correspond to the desired backup frequency. An audit finds that developers are
occasionally not tagging the EBS volumes.
A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so
that the company can perform backups at least weekly unless a different value is specified.
Which solution will meet these requirements?

  • A. Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
  • B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
  • C. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
  • D. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
Answer: B


Explanation:
The following are the steps that the DevOps engineer should take to ensure that all EBS volumes
always have the Backup_Frequency tag so that the company can perform backups at least weekly
unless a different value is specified:
Set up AWS Config in the account.
Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a
Backup Frequency tag applied.
Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to
apply the Backup_Frequency tag with a value of weekly.
The AWS Config managed rule required-tags, scoped to the AWS::EC2::Volume resource type,
returns a compliance failure for any EBS volume that does not have the Backup_Frequency tag
applied. The remediation action then uses the Systems Manager Automation runbook to apply the
Backup_Frequency tag with a value of weekly to the EBS volume. Because AWS Config evaluates
existing resources as well as new ones, this covers both the untagged volumes that already exist and
volumes created in the future, which the EventBridge options (C and D) cannot do.
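
A sketch of option B with boto3, using the required-tags managed rule scoped to EBS volumes. The rule name and remediation runbook name are hypothetical; the runbook itself would be the custom Automation runbook the team writes.

```python
import json
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-backup-frequency-tag",  # hypothetical name
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({"tag1Key": "Backup_Frequency"}),
    }
)

# Auto-remediate by tagging noncompliant volumes via a custom runbook.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "ebs-backup-frequency-tag",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "Custom-ApplyBackupFrequencyTag",  # hypothetical runbook
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                # AWS Config passes the noncompliant volume's ID to the runbook.
                "VolumeId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            },
        }
    ]
)
```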


Question 5

A company is using an Amazon Aurora cluster as the data store for its application. The Aurora cluster
is configured with a single DB instance. The application performs read and write operations on the
database by using the cluster's instance endpoint.
The company has scheduled an update to be applied to the cluster during an upcoming maintenance
window. The cluster must remain available with the least possible interruption during the
maintenance window.
What should a DevOps engineer do to meet these requirements?

  • A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads.
  • B. Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
  • C. Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.
  • D. Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
Answer: C


Explanation:
To meet the requirements, the DevOps engineer should do the following:
Turn on the Multi-AZ option on the Aurora cluster.
Update the application to use the Aurora cluster endpoint for write operations.
Update the Aurora cluster's reader endpoint for reads.
For Aurora, turning on the Multi-AZ option adds an Aurora Replica (reader instance) in a different
Availability Zone. During the maintenance window, Aurora can fail over to the reader, typically in
well under a minute, so the cluster remains available with the least possible interruption.
Updating the application to use the cluster endpoint for write operations ensures that writes always
reach the current primary instance, even after a failover. Updating the application to use the reader
endpoint for reads offloads read traffic to the replica and keeps reads available while the primary is
being updated.
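
A sketch of adding a reader in another AZ and locating the two endpoints the application should use; the cluster and instance identifiers are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Add a reader instance to the existing cluster (Aurora places it in a
# different AZ from the writer when possible).
rds.create_db_instance(
    DBInstanceIdentifier="app-db-reader-1",  # hypothetical
    DBClusterIdentifier="app-db-cluster",    # hypothetical
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)

# The application should use these endpoints instead of the instance
# endpoint: cluster endpoint for writes, reader endpoint for reads.
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="app-db-cluster"
)["DBClusters"][0]
print("writes ->", cluster["Endpoint"])
print("reads  ->", cluster["ReaderEndpoint"])
```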


Question 6

A company must encrypt all AMIs that the company shares across accounts. A DevOps engineer has
access to a source account where an unencrypted custom AMI has been built. The DevOps engineer
also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2
instances from the AMI. The DevOps engineer must share the AMI with the target account.
The company has created an AWS Key Management Service (AWS KMS) key in the source account.
Which additional steps should the DevOps engineer perform to meet the requirements? (Choose
three.)

  • A. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.
  • B. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the default Amazon Elastic Block Store (Amazon EBS) encryption key in the copy action.
  • C. In the source account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role in the target account.
  • D. In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.
  • E. In the source account, share the unencrypted AMI with the target account.
  • F. In the source account, share the encrypted AMI with the target account.
Answer: A, D, F


Explanation:
An AMI whose snapshots are encrypted under a customer managed KMS key can be used by another
account only if that account can use the key. The required steps are:
In the source account, copy the unencrypted AMI to an encrypted AMI and specify the customer
managed KMS key in the copy action (option A). The default EBS encryption key (option B) is an
AWS managed key and cannot be shared across accounts.
In the source account, modify the key policy to give the target account permission to create a grant.
In the target account, create a KMS grant that delegates decryption permissions to the Auto Scaling
group service-linked role (option D). The grant must be created in the target account for its own
service-linked role, which is why option C is incorrect.
In the source account, share the encrypted AMI with the target account (option F). Sharing the
unencrypted AMI (option E) would violate the encryption requirement.
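
A sketch of steps A, D, and F with boto3; the account IDs, AMI ID, and key ARN are hypothetical. The first two calls run with source-account credentials, and the create_grant call runs with target-account credentials after the key policy allows the target account to create grants.

```python
import boto3

SOURCE_KMS_KEY = "arn:aws:kms:us-east-1:111111111111:key/EXAMPLE"  # hypothetical
TARGET_ACCOUNT = "222222222222"                                    # hypothetical

ec2 = boto3.client("ec2")  # source-account credentials

# Step A: copy the unencrypted AMI to an encrypted AMI under the KMS key.
copy = ec2.copy_image(
    Name="app-ami-encrypted",
    SourceImageId="ami-0123456789abcdef0",  # hypothetical unencrypted AMI
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId=SOURCE_KMS_KEY,
)

# Step F: share the encrypted AMI with the target account.
ec2.modify_image_attribute(
    ImageId=copy["ImageId"],
    LaunchPermission={"Add": [{"UserId": TARGET_ACCOUNT}]},
)

# Step D (target-account credentials): delegate decrypt permissions on the
# source key to the EC2 Auto Scaling service-linked role.
kms_target = boto3.client("kms")
kms_target.create_grant(
    KeyId=SOURCE_KMS_KEY,
    GranteePrincipal=(
        f"arn:aws:iam::{TARGET_ACCOUNT}:role/aws-service-role/"
        "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
    ),
    Operations=["Decrypt", "GenerateDataKeyWithoutPlaintext", "ReEncryptFrom",
                "ReEncryptTo", "CreateGrant", "DescribeKey"],
)
```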


Question 7

A company uses AWS CodePipeline pipelines to automate releases of its application. A typical
pipeline consists of three stages: build, test, and deployment. The company has been using a separate
AWS CodeBuild project to run scripts for each stage. However, the company now wants to use AWS
CodeDeploy to handle the deployment stage of the pipelines.
The company has packaged the application as an RPM package and must deploy the application to a
fleet of Amazon EC2 instances. The EC2 instances are in an EC2 Auto Scaling group and are launched
from a common AMI.
Which combination of steps should a DevOps engineer perform to meet these requirements?
(Choose two.)

  • A. Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy.
  • B. Create a new version of the common AMI with the CodeDeploy agent installed. Create an AppSpec file that contains application deployment scripts and grants access to CodeDeploy.
  • C. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Add a step to the CodePipeline pipeline to use EC2 Image Builder to create a new AMI. Configure CodeDeploy to deploy the newly created AMI.
  • D. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.
  • E. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the EC2 instances that are launched from the common AMI as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.
Answer: A, D


Explanation:
The CodeDeploy agent must be present on the instances, so it is baked into the common AMI, and
the instances' IAM role must allow them to interact with CodeDeploy (option A). Creating a
CodeDeploy application with an in-place deployment type that targets the Auto Scaling group
ensures that instances launched by future scaling events also receive the latest revision, and the
CodeDeploy action replaces the CodeBuild-scripted deployment stage in the pipeline (option D).
https://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws-auto-scaling.html
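
A sketch of the CodeDeploy setup in option D; the application, deployment group, service role, and Auto Scaling group names are hypothetical.

```python
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_application(
    applicationName="rpm-app",   # hypothetical
    computePlatform="Server",    # EC2/on-premises deployments
)

# In-place deployments to every instance in the Auto Scaling group; instances
# launched later by the group automatically receive the latest revision.
codedeploy.create_deployment_group(
    applicationName="rpm-app",
    deploymentGroupName="production",
    serviceRoleArn="arn:aws:iam::111111111111:role/CodeDeployServiceRole",  # hypothetical
    autoScalingGroups=["rpm-app-asg"],  # hypothetical ASG name
    deploymentStyle={
        "deploymentType": "IN_PLACE",
        "deploymentOption": "WITHOUT_TRAFFIC_CONTROL",
    },
)
```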


Question 8

A company’s security team requires that all external Application Load Balancers (ALBs) and Amazon
API Gateway APIs are associated with AWS WAF web ACLs. The company has hundreds of AWS
accounts, all of which are included in a single organization in AWS Organizations. The company has
configured AWS Config for the organization. During an audit, the company finds some externally
facing ALBs that are not associated with AWS WAF web ACLs.
Which combination of steps should a DevOps engineer take to prevent future violations? (Choose
two.)

  • A. Delegate AWS Firewall Manager to a security account.
  • B. Delegate Amazon GuardDuty to a security account.
  • C. Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
  • D. Create an Amazon GuardDuty policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
  • E. Configure an AWS Config managed rule to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
Answer: A, C


Explanation:
AWS Firewall Manager, administered from a delegated security account (option A), can enforce web
ACL association across every account in the organization (option C). From the Firewall Manager
documentation: "If instead you want to automatically apply the policy to existing in-scope resources,
choose Auto remediate any noncompliant resources. This option creates a web ACL in each
applicable account within the AWS organization and associates the web ACL with the resources in
the accounts. When you choose Auto remediate any noncompliant resources, you can also choose to
remove existing web ACL associations from in-scope resources, for the web ACLs that aren't
managed by another active Firewall Manager policy. If you choose this option, Firewall Manager first
associates the policy's web ACL with the resources, and then removes the prior associations. If a
resource has an association with another web ACL that's managed by a different active Firewall
Manager policy, this choice doesn't affect that association."
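
A sketch of the two chosen steps with boto3. The associate_admin_account call must run from the organization's management account, the account ID is hypothetical, and the ManagedServiceData body shown is a simplified assumption; consult the Firewall Manager documentation for the full WAFV2 schema.

```python
import json
import boto3

fms = boto3.client("fms")

# Step A: delegate Firewall Manager administration to the security account.
fms.associate_admin_account(AdminAccount="333333333333")  # hypothetical account

# Step C: a Firewall Manager policy that attaches a web ACL to every ALB.
# (A second policy would cover API Gateway stages.)
fms.put_policy(
    Policy={
        "PolicyName": "require-waf-on-albs",
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            # Abbreviated, assumed schema: no rule groups, allow by default.
            "ManagedServiceData": json.dumps({
                "type": "WAFV2",
                "preProcessRuleGroups": [],
                "postProcessRuleGroups": [],
                "defaultAction": {"type": "ALLOW"},
                "overrideCustomerWebACLAssociation": False,
            }),
        },
        "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,  # auto-attach to new and existing ALBs
    }
)
```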


Question 9

A company uses AWS Key Management Service (AWS KMS) keys and manual key rotation to meet
regulatory compliance requirements. The security team wants to be notified when any keys have not
been rotated after 90 days.
Which solution will accomplish this?

  • A. Configure AWS KMS to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
  • B. Configure an Amazon EventBridge event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon Simple Notification Service (Amazon SNS) topic.
  • C. Develop an AWS Config custom rule that publishes to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
  • D. Configure AWS Security Hub to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
Answer: C


Explanation:
AWS KMS does not natively publish notifications based on key age, and neither Trusted Advisor nor
Security Hub tracks manual key rotation. An AWS Config custom rule can evaluate each key's last
rotation date, mark keys older than 90 days as noncompliant, and publish the compliance change to
the SNS topic.
https://aws.amazon.com/blogs/security/how-to-use-aws-config-to-determine-compliance-of-aws-kms-key-policies-to-your-specifications/
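
A compressed sketch of the custom rule's evaluation Lambda. It assumes, as a labeled convention, that operators record each manual rotation in a key tag named LastRotated (hypothetical), since KMS does not expose a rotation timestamp for manually rotated keys. A noncompliant evaluation can then be routed to the SNS topic through AWS Config notifications.

```python
import datetime
import json
import boto3

config = boto3.client("config")
kms = boto3.client("kms")

def handler(event, context):
    item = json.loads(event["invokingEvent"])["configurationItem"]
    key_id = item["resourceId"]

    # Hypothetical convention: operators tag the key on each manual rotation.
    tags = kms.list_resource_tags(KeyId=key_id)["Tags"]
    rotated = {t["TagKey"]: t["TagValue"] for t in tags}.get("LastRotated")

    age_days = (
        (datetime.datetime.utcnow()
         - datetime.datetime.strptime(rotated, "%Y-%m-%d")).days
        if rotated else 9999  # never rotated -> treat as overdue
    )
    compliance = "COMPLIANT" if age_days <= 90 else "NON_COMPLIANT"

    # Report the result back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": key_id,
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```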


Question 10

A security review has identified that an AWS CodeBuild project is downloading a database population
script from an Amazon S3 bucket using an unauthenticated request. The security team does not
allow unauthenticated requests to S3 buckets for this project.
How can this issue be corrected in the MOST secure manner?

  • A. Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.
  • B. Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.
  • C. Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.
  • D. Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.
Answer: C


Explanation:
A bucket policy is a resource-based policy that defines who can access a specific S3 bucket and what
actions they can perform on it. By removing unauthenticated access from the bucket policy, you can
prevent anyone without valid credentials from accessing the bucket. A service role is an IAM role that
allows an AWS service, such as CodeBuild, to perform actions on your behalf. By modifying the
service role for the CodeBuild project to include Amazon S3 access, you can grant the project
permission to read the objects it needs from the S3 bucket. The AWS CLI is a command-line tool for
interacting with AWS services such as S3. When the build runs, the AWS CLI signs each request with
the service role's temporary credentials, so the download is authenticated without long-lived access
keys (which makes option D less secure) and without any unauthenticated bucket access.
For more information, see: Using bucket policies and user policies (Amazon Simple Storage Service),
Create a service role for CodeBuild (AWS CodeBuild), and the AWS Command Line Interface
documentation.
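
A sketch of option C's two changes with boto3; the bucket, role, and object names are hypothetical. Removing anonymous access is shown here via the Block Public Access setting, which achieves the same goal as the bucket policy the option describes.

```python
import json
import boto3

BUCKET = "build-scripts-bucket"          # hypothetical
ROLE = "codebuild-project-service-role"  # hypothetical CodeBuild service role

# Ensure the bucket cannot be read anonymously.
boto3.client("s3").put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Grant the CodeBuild service role read access to the script.
boto3.client("iam").put_role_policy(
    RoleName=ROLE,
    PolicyName="read-db-population-script",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/scripts/populate.sql",  # hypothetical key
        }],
    }),
)
# The buildspec then downloads the script with a signed request:
#   aws s3 cp s3://build-scripts-bucket/scripts/populate.sql .
```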


Question 11

An ecommerce company has chosen AWS to host its new platform. The company's DevOps team has
started building an AWS Control Tower landing zone. The DevOps team has set the identity store
within AWS IAM Identity Center (AWS Single Sign-On) to external identity provider (IdP) and has
configured SAML 2.0.
The DevOps team wants a robust permission model that applies the principle of least privilege. The
model must allow the team to build and manage only the team's own resources.
Which combination of steps will meet these requirements? (Choose three.)

  • A. Create IAM policies that include the required permissions. Include the aws:PrincipalTag condition key.
  • B. Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions.
  • C. Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in IAM Identity Center.
  • D. Create a group in the IdP. Place users in the group. Assign the group to OUs and IAM policies.
  • E. Enable attributes for access control in IAM Identity Center. Apply tags to users. Map the tags as key-value pairs.
  • F. Enable attributes for access control in IAM Identity Center. Map attributes from the IdP as key-value pairs.
Answer: B, C, F


Explanation:
With attribute-based access control (ABAC), attributes from the external IdP, such as a team or
department attribute in the SAML assertion, are mapped into IAM Identity Center as access control
attributes (option F). The permission set's inline policy then uses the aws:PrincipalTag condition key
so that a signed-in user can act only on resources whose tags match that user's own attributes
(option B). Assigning the IdP group to accounts and permission sets in IAM Identity Center
(option C) completes the model. Together these steps scope each team's permissions to the team's
own resources, which satisfies the principle of least privilege.
https://docs.aws.amazon.com/singlesignon/latest/userguide/abac.html
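
A sketch of option B's inline policy attached to a permission set; the instance ARN, permission set ARN, action set, and the "team" tag key are hypothetical.

```python
import json
import boto3

sso = boto3.client("sso-admin")

# Allow actions only on resources whose "team" tag matches the signed-in
# user's "team" attribute, which IAM Identity Center passes as a principal tag.
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:*", "s3:*"],  # illustrative action set
        "Resource": "*",
        "Condition": {
            "StringEquals": {"aws:ResourceTag/team": "${aws:PrincipalTag/team}"}
        },
    }],
}

sso.put_inline_policy_to_permission_set(
    InstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE",  # hypothetical
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    InlinePolicy=json.dumps(inline_policy),
)
```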


Question 12

An ecommerce company is receiving reports that its order history page is experiencing delays in
reflecting the processing status of orders. The order processing system consists of an AWS Lambda
function that uses reserved concurrency. The Lambda function processes order messages from an
Amazon Simple Queue Service (Amazon SQS) queue and inserts processed orders into an Amazon
DynamoDB table. The DynamoDB table has auto scaling enabled for read and write capacity.
Which actions should a DevOps engineer take to resolve this delay? (Choose two.)

  • A. Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Increase the Lambda function concurrency limit.
  • B. Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Configure a redrive policy on the SQS queue.
  • C. Check the NumberOfMessagesSent metric for the SQS queue. Increase the SQS queue visibility timeout.
  • D. Check the WriteThrottleEvents metric for the DynamoDB table. Increase the maximum write capacity units (WCUs) for the table's scaling policy.
  • E. Check the Throttles metric for the Lambda function. Increase the Lambda function timeout.
Answer: A, D


Explanation:
A: If the ApproximateAgeOfOldestMessage metric shows that orders remain in the SQS queue longer
than expected, the Lambda function's reserved concurrency limit is likely too low to keep up with the
number of orders entering the queue, and invocations are being throttled. Increasing the concurrency
limit lets the function drain the queue faster.
D: The DynamoDB table uses auto scaling, which adjusts provisioned read and write capacity
between the minimum and maximum set in the scaling policy. The WriteThrottleEvents metric
indicates whether writes to the table are being throttled; raising the maximum write capacity units
(WCUs) in the table's scaling policy resolves the throttling.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
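
A sketch of both remediations with boto3; the function name, table name, and new limits are hypothetical.

```python
import boto3

# A: raise the Lambda function's reserved concurrency so more queue
# messages can be processed in parallel.
boto3.client("lambda").put_function_concurrency(
    FunctionName="process-orders",     # hypothetical
    ReservedConcurrentExecutions=200,  # hypothetical new limit
)

# D: raise the ceiling of the DynamoDB table's write auto scaling policy.
boto3.client("application-autoscaling").register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",  # hypothetical table
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=2000,           # hypothetical new maximum WCU
)
```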


Question 13

A company has a single AWS account that runs hundreds of Amazon EC2 instances in a single AWS
Region. New EC2 instances are launched and terminated each hour in the account. The account also
includes existing EC2 instances that have been running for longer than a week.
The company's security policy requires all running EC2 instances to use an EC2 instance profile. If an
EC2 instance does not have an instance profile attached, the EC2 instance must use a default instance
profile that has no IAM permissions assigned.
A DevOps engineer reviews the account and discovers EC2 instances that are running without an
instance profile. During the review, the DevOps engineer also observes that new EC2 instances are
being launched without an instance profile.
Which solution will ensure that an instance profile is attached to all existing and future EC2 instances
in the Region?

  • A. Configure an Amazon EventBridge rule that reacts to EC2 RunInstances API calls. Configure the rule to invoke an AWS Lambda function to attach the default instance profile to the EC2 instances.
  • B. Configure the ec2-instance-profile-attached AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.
  • C. Configure an Amazon EventBridge rule that reacts to EC2 StartInstances API calls. Configure the rule to invoke an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.
  • D. Configure the iam-role-managed-policy-check AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Lambda function to attach the default instance profile to the EC2 instances.
Answer: B


Explanation:
The ec2-instance-profile-attached managed rule with a configuration-changes trigger evaluates
existing instances as well as newly launched ones, and the automatic remediation runbook attaches
the default instance profile to any noncompliant instance. The EventBridge options (A and C) only
react to new API calls, so they would never remediate the instances that are already running without
a profile.
https://docs.aws.amazon.com/config/latest/developerguide/ec2-instance-profile-attached.html
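
A sketch of option B with boto3. The managed-rule identifier is real; the runbook shown (AWS-AttachIAMToInstance) is an AWS-owned Automation document assumed to take the parameters below, so verify the exact names and parameters in your Region.

```python
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-instance-profile-attached",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "EC2_INSTANCE_PROFILE_ATTACHED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)

config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "ec2-instance-profile-attached",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "AWS-AttachIAMToInstance",  # assumed runbook name
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            # AWS Config passes the noncompliant instance ID to the runbook.
            "InstanceId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            "RoleName": {"StaticValue": {"Values": ["default-instance-role"]}},  # hypothetical
        },
    }]
)
```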


Question 14

A DevOps engineer is building a continuous deployment pipeline for a serverless application that
uses AWS Lambda functions. The company wants to reduce the customer impact of an unsuccessful
deployment. The company also wants to monitor for issues.
Which deploy stage configuration will meet these requirements?

  • A. Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.
  • B. Use AWS CloudFormation to publish a new stack update, and include Amazon CloudWatch alarms on all resources. Set up an AWS CodePipeline approval action for a developer to verify and approve the AWS CloudFormation change set.
  • C. Use AWS CloudFormation to publish a new version on every stack update, and include Amazon CloudWatch alarms on all resources. Use the RoutingConfig property of the AWS::Lambda::Alias resource to update the traffic routing during the stack update.
  • D. Use AWS CodeBuild to add sample event payloads for testing to the Lambda functions. Publish a new version of the functions, and include Amazon CloudWatch alarms. Update the production alias to point to the new version. Configure rollbacks to occur when an alarm is in the ALARM state.
Answer: D


Explanation:
Use routing configuration on an alias to send a portion of traffic to a second function version. For
example, you can reduce the risk of deploying a new version by configuring the alias to send most of
the traffic to the existing version, and only a small percentage of traffic to the new version.
https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html
This deploy stage configuration meets the requirements as follows:
Use AWS CodeBuild to run the Lambda functions against sample event payloads so that defects are
caught before the release reaches customers.
Publish a new, immutable version of the functions, and create Amazon CloudWatch alarms on them.
Update the production alias to point to the new version.
Configure rollbacks to occur when an alarm is in the ALARM state.
Because only the alias changes, an unsuccessful deployment is reverted by pointing the alias back at
the previous version, which the alarm-triggered rollback does automatically; this limits the customer
impact. The CloudWatch alarms provide the required monitoring and alert on any problems with the
new version.
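
A sketch of publishing a version and shifting the production alias, with an optional weighted-routing step as described in the quoted documentation; the function name, alias name, and weights are hypothetical.

```python
import boto3

lam = boto3.client("lambda")

# Publish the code that was just tested as an immutable version.
new_version = lam.publish_version(FunctionName="checkout")["Version"]  # hypothetical fn

# Optionally canary first: keep the alias on the old version but send
# 10% of traffic to the new one while the CloudWatch alarms watch it.
lam.update_alias(
    FunctionName="checkout",
    Name="prod",  # hypothetical production alias
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.10}},
)

# Once the alarms stay healthy, cut the alias fully over; on an ALARM
# state, the rollback re-points it to the previous version instead.
lam.update_alias(FunctionName="checkout", Name="prod", FunctionVersion=new_version)
```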


Question 15

To run an application, a DevOps engineer launches Amazon EC2 instances with public IP addresses
in a public subnet. A user data script obtains the application artifacts and installs them on the
instances upon launch. A change to the security classification of the application now requires the
instances to run with no access to the internet. While the instances launch successfully and show as
healthy, the application does not seem to be installed.
Which of the following should successfully install the application while complying with the new rule?

  • A. Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterwards.
  • B. Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet's route table to use the NAT gateway as the default route.
  • C. Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.
  • D. Create a security group for the application instances and allow only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.
Answer: C


Explanation:
EC2 instances running in private subnets of a VPC can have controlled access to S3 buckets, objects,
and API functions in the same Region through a gateway VPC endpoint, with no internet access
required. An S3 bucket policy can additionally restrict access to specific VPCs or VPC endpoints.
Options A, B, and D all give the instances internet access, which violates the new security
classification.
https://aws.amazon.com/pt/blogs/aws/new-vpc-endpoint-for-amazon-s3/
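
A sketch of option C; the VPC ID, route table ID, Region, and bucket name are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint: S3 traffic from the private subnets stays on the AWS
# network and needs no internet gateway or NAT.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",             # hypothetical
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],   # hypothetical subnet route table
)

# The instance profile then needs s3:GetObject on the artifact bucket so the
# user data script can run, for example:
#   aws s3 cp s3://app-artifacts-bucket/app.tar.gz /opt/app/   (hypothetical)
```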
