Google Associate Cloud Engineer Practice Test

Associate Cloud Engineer

Last exam update: Apr 19, 2024
Page 1 of 13. Viewing questions 1-15 of 192

Question 1

You have a Google Cloud Platform account with access to both production and development projects. You need to create an
automated process to list all compute instances in development and production projects on a daily basis. What should you
do?

  • A. Create two configurations using gcloud config. Write a script that sets configurations as active, individually. For each configuration, use gcloud compute instances list to get a list of compute resources.
  • B. Create two configurations using gsutil config. Write a script that sets configurations as active, individually. For each configuration, use gsutil compute instances list to get a list of compute resources.
  • C. Go to Cloud Shell and export this information to Cloud Storage on a daily basis.
  • D. Go to GCP Console and export this information to Cloud SQL on a daily basis.
Answer: A
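A minimal sketch of the gcloud configurations approach in option A; the project IDs are placeholders, and the listing commands could be run daily from cron or Cloud Scheduler:

    # one-time setup: one configuration per project (placeholder project IDs)
    gcloud config configurations create dev
    gcloud config set project dev-project-id --configuration=dev
    gcloud config configurations create prod
    gcloud config set project prod-project-id --configuration=prod

    # daily script: activate each configuration and list its instances
    gcloud config configurations activate dev
    gcloud compute instances list
    gcloud config configurations activate prod
    gcloud compute instances list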


Question 2

You need to create a Compute Engine instance in a new project that doesn’t exist yet. What should you do?

  • A. Using the Cloud SDK, create a new project, enable the Compute Engine API in that project, and then create the instance specifying your new project.
  • B. Enable the Compute Engine API in the Cloud Console, use the Cloud SDK to create the instance, and then use the --project flag to specify a new project.
  • C. Using the Cloud SDK, create the new instance, and use the --project flag to specify the new project. Answer yes when prompted by Cloud SDK to enable the Compute Engine API.
  • D. Enable the Compute Engine API in the Cloud Console. Go to the Compute Engine section of the Console to create a new instance, and look for the Create In A New Project option in the creation form.
Answer: A

Explanation:
The new project does not exist yet, so it must be created first; only then can the Compute Engine API be enabled in it and the instance created in that project.
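A hedged sketch of that sequence with placeholder project, instance, and zone names:

    gcloud projects create my-new-project
    gcloud config set project my-new-project
    gcloud services enable compute.googleapis.com
    gcloud compute instances create my-instance --zone=us-central1-a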


Question 3

You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains
sensitive data. You want access to the content to be removed after four hours. The external company does not have a
Google account to which you can grant specific user-based access privileges. You want to use the most secure method that
requires the fewest steps. What should you do?

  • A. Create a signed URL with a four-hour expiration and share the URL with the company.
  • B. Set object access to ‘public’ and use object lifecycle management to remove the object after four hours.
  • C. Configure the storage bucket as a static website and furnish the object's URL to the company. Delete the object from the storage bucket after four hours.
  • D. Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.
Answer: A
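One way to generate the signed URL from the command line (the bucket, object, and service-account key file are placeholders); the link stops working when the four-hour duration expires:

    gsutil signurl -d 4h service-account-key.json gs://example-bucket/sensitive-object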


Question 4

Your auditor wants to view your organization's use of data in Google Cloud. The auditor is most interested in auditing who
accessed data in Cloud Storage buckets. You need to help the auditor access the data they need. What should you do?

  • A. Turn on Data Access Logs for the buckets they want to audit, and then build a query in the log viewer that filters on Cloud Storage.
  • B. Assign the appropriate permissions, and then create a Data Studio report on Admin Activity Audit Logs.
  • C. Assign the appropriate permissions, and then use Cloud Monitoring to review metrics.
  • D. Use the export logs API to provide the Admin Activity Audit Logs in the format they want.
Answer: A


Explanation:
Admin Activity audit logs do not record reads of object data; Data Access audit logs must be enabled for the buckets and then filtered on Cloud Storage in the log viewer.
Reference: https://cloud.google.com/storage/docs/audit-logging
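Once Data Access logs are enabled for Cloud Storage, a query along these lines (the project ID is a placeholder) surfaces the bucket access entries:

    gcloud logging read 'logName:"cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket"' --project=my-project --limit=50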


Question 5

You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud
Storage bucket. You want to follow Google-recommended practices. What should you do?

  • A. Create a service account with an access scope. Use the access scope https://www.googleapis.com/auth/devstorage.write_only.
  • B. Create a service account with an access scope. Use the access scope ‘https://www.googleapis.com/auth/cloud-platform’.
  • C. Create a service account and add it to the IAM role ‘storage.objectCreator’ for that bucket.
  • D. Create a service account and add it to the IAM role ‘storage.objectAdmin’ for that bucket.
Answer: D
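A hedged sketch of the IAM-role approach (service-account, project, bucket, and instance names are placeholders); the role shown matches the answer above, and the instances then run as this service account:

    gcloud iam service-accounts create bucket-writer
    gcloud storage buckets add-iam-policy-binding gs://example-bucket \
        --member="serviceAccount:bucket-writer@my-project.iam.gserviceaccount.com" \
        --role="roles/storage.objectAdmin"
    gcloud compute instances create my-instance --zone=us-central1-a \
        --service-account=bucket-writer@my-project.iam.gserviceaccount.com \
        --scopes=cloud-platform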


Question 6

Your management has asked an external auditor to review all the resources in a specific project. The security team has
enabled the Organization Policy called Domain Restricted Sharing on the organization node by specifying only your Cloud
Identity domain. You want the auditor to only be able to view, but not modify, the resources in that project. What should you
do?

  • A. Ask the auditor for their Google account, and give them the Viewer role on the project.
  • B. Ask the auditor for their Google account, and give them the Security Reviewer role on the project.
  • C. Create a temporary account for the auditor in Cloud Identity, and give that account the Viewer role on the project.
  • D. Create a temporary account for the auditor in Cloud Identity, and give that account the Security Reviewer role on the project.
Answer: C
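After the temporary account exists in Cloud Identity, granting view-only access is a single binding (the project ID and account are placeholders):

    gcloud projects add-iam-policy-binding my-project \
        --member="user:auditor@your-cloud-identity-domain.com" \
        --role="roles/viewer"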


Question 7

You are working for a hospital that stores its medical images in an on-premises data room. The hospital wants to use Cloud
Storage for archival storage of these images. The hospital wants an automated process to upload any new medical images
to Cloud Storage. You need to design and implement a solution. What should you do?

  • A. Create a Pub/Sub topic, and enable a Cloud Storage trigger for the Pub/Sub topic. Create an application that sends all medical images to the Pub/Sub topic.
  • B. Deploy a Dataflow job from the batch template, Datastore to Cloud Storage. Schedule the batch job on the desired interval.
  • C. Create a script that uses the gsutil command line interface to synchronize the on-premises storage with Cloud Storage. Schedule the script as a cron job.
  • D. In the Cloud Console, go to Cloud Storage. Upload the relevant images to the appropriate bucket.
Answer: C
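A minimal sketch of the scheduled sync (the local path, bucket, and schedule are placeholders):

    # crontab entry: sync new images to Cloud Storage every night at 01:00
    0 1 * * * gsutil -m rsync -r /data/medical-images gs://example-archive-bucket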


Question 8

You significantly changed a complex Deployment Manager template and want to confirm that the dependencies of all defined
resources are properly met before committing it to the project. You want the most rapid feedback on your changes. What
should you do?

  • A. Use granular logging statements within a Deployment Manager template authored in Python.
  • B. Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console.
  • C. Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures.
  • D. Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.
Answer: D


Explanation:
Reference: https://cloud.google.com/deployment-manager/docs/deployments/updating-deployments
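A sketch of the preview workflow (the deployment name and config file are placeholders):

    gcloud deployment-manager deployments update my-deployment --config config.yaml --preview
    # after reviewing the previewed resources and their dependencies:
    gcloud deployment-manager deployments update my-deployment          # commit the preview
    gcloud deployment-manager deployments cancel-preview my-deployment  # or discard it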


Question 9

You have sensitive data stored in three Cloud Storage buckets and have enabled data access logging. You want to verify
activities for a particular user for these buckets, using the fewest possible steps.
You need to verify the addition of metadata labels and which files have been viewed from those buckets. What should you
do?

  • A. Using the GCP Console, filter the Activity log to view the information.
  • B. Using the GCP Console, filter the Stackdriver log to view the information.
  • C. View the bucket in the Storage section of the GCP Console.
  • D. Create a trace in Stackdriver to view the information.
Answer: A
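The same information is available from the command line if needed; a hedged filter narrowed to one user (the email address is a placeholder, and protoPayload.methodName can be added to separate metadata updates from object reads):

    gcloud logging read 'logName:"cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket" AND protoPayload.authenticationInfo.principalEmail="user@example.com"' --limit=50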


Question 10

You have an application that uses Cloud Spanner as a backend database. The application has a very predictable traffic
pattern. You want to automatically scale up or down the number of Spanner nodes depending on traffic. What should you
do?

  • A. Create a cron job that runs on a scheduled basis to review Cloud Monitoring metrics, and then resize the Spanner instance accordingly.
  • B. Create a Cloud Monitoring alerting policy to send an alert to oncall SRE emails when Cloud Spanner CPU exceeds the threshold. SREs would scale resources up or down accordingly.
  • C. Create a Cloud Monitoring alerting policy to send an alert to Google Cloud Support email when Cloud Spanner CPU exceeds your threshold. Google support would scale resources up or down accordingly.
  • D. Create a Cloud Monitoring alerting policy to send an alert to webhook when Cloud Spanner CPU is over or under your threshold. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.
Answer: D
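Whatever triggers the resize (the Cloud Function in option D, or a scheduled job), the scaling step itself boils down to updating the instance's node count; the instance name and count are placeholders:

    gcloud spanner instances update my-spanner-instance --nodes=5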


Question 11

You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-
production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models.
You want to minimize effort and cost. What should you do?

  • A. Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.
  • B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
  • C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
  • D. Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
Answer: D

Explanation:
Adding a dedicated GPU node pool avoids recreating the cluster or running a separate one, and the nodeSelector ensures that only the ML workloads are scheduled onto the GPU nodes.
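A hedged sketch with placeholder cluster, zone, and pool names:

    gcloud container node-pools create gpu-pool \
        --cluster=my-cluster --zone=us-central1-a \
        --accelerator=type=nvidia-tesla-p100,count=1 \
        --num-nodes=1
    # pods then target these nodes with:
    #   nodeSelector: {cloud.google.com/gke-accelerator: nvidia-tesla-p100}
    #   resources.limits: {nvidia.com/gpu: 1}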


Question 12

You are developing a new web application that will be deployed on Google Cloud Platform. As part of your release cycle, you
want to test updates to your application on a small portion of real user traffic. The majority of the users should still be
directed towards a stable version of your application. What should you do?

  • A. Deploy the application on App Engine. For each update, create a new version of the same service. Configure traffic splitting to send a small percentage of traffic to the new version.
  • B. Deploy the application on App Engine. For each update, create a new service. Configure traffic splitting to send a small percentage of traffic to the new service.
  • C. Deploy the application on Kubernetes Engine. For a new release, update the deployment to use the new version.
  • D. Deploy the application on Kubernetes Engine. For a new release, create a new deployment for the new version. Update the service to use the new deployment.
Answer: A


Explanation:
App Engine traffic splitting is configured between versions of the same service, so each update should be deployed as a new version of the existing service and given a small share of traffic.
Reference: https://cloud.google.com/appengine/docs/admin-api/migrating-splitting-traffic
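A hedged sketch, assuming an existing version v1 on the default service and a new version ID v2:

    gcloud app deploy --version=v2 --no-promote
    gcloud app services set-traffic default --splits=v1=0.95,v2=0.05 --split-by=random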


Question 13

You have an application on a general-purpose Compute Engine instance that is experiencing excessive disk read throttling
on its Zonal SSD Persistent Disk. The application primarily reads large files from disk. The disk size is currently 350 GB. You
want to provide the maximum amount of throughput while minimizing costs. What should you do?

  • A. Increase the size of the disk to 1 TB.
  • B. Increase the allocated CPU to the instance.
  • C. Migrate to use a Local SSD on the instance.
  • D. Migrate to use a Regional SSD on the instance.
Answer: C


Explanation:
Reference: https://cloud.google.com/compute/docs/disks/performance
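Local SSDs can only be attached at instance creation time, so moving to them means recreating the VM and copying the data over; a hedged sketch with placeholder names (each Local SSD partition is a fixed 375 GB, which covers the 350 GB of data):

    gcloud compute instances create my-instance --zone=us-central1-a \
        --local-ssd=interface=NVME
    # note: Local SSD data does not survive stopping or deleting the instance, so keep a durable copy elsewhere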


Question 14

A team of data scientists infrequently needs to use a Google Kubernetes Engine (GKE) cluster that you manage. They
require GPUs for some long-running, non-restartable jobs. You want to minimize cost. What should you do?

  • A. Enable node auto-provisioning on the GKE cluster.
  • B. Create a VerticalPodAutoscaler for those workloads.
  • C. Create a node pool with preemptible VMs and GPUs attached to those VMs.
  • D. Create a node pool of instances with GPUs, and enable autoscaling on this node pool with a minimum size of 1.
Answer: A


Explanation:
Preemptible VMs can be reclaimed at any time, which does not suit non-restartable jobs, and a node pool with a minimum size of 1 keeps paying for GPUs while idle; node auto-provisioning creates GPU nodes only when the jobs request them and removes them when they finish.
Reference: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus
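A hedged sketch of enabling auto-provisioning with GPU limits, assuming the accelerator-limit flags described in the node auto-provisioning documentation; the cluster, zone, and resource limits are placeholders:

    gcloud container clusters update my-cluster --zone=us-central1-a \
        --enable-autoprovisioning \
        --min-cpu=1 --max-cpu=64 --min-memory=1 --max-memory=256 \
        --max-accelerator=type=nvidia-tesla-p100,count=4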


Question 15

You create a new Google Kubernetes Engine (GKE) cluster and want to make sure that it always runs a supported and
stable version of Kubernetes. What should you do?

  • A. Enable the Node Auto-Repair feature for your GKE cluster.
  • B. Enable the Node Auto-Upgrades feature for your GKE cluster.
  • C. Select the latest available cluster version for your GKE cluster.
  • D. Select “Container-Optimized OS (cos)” as a node image for your GKE cluster.
Answer: B
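Auto-upgrade can be turned on per node pool on an existing cluster; a hedged sketch with placeholder cluster, zone, and pool names:

    gcloud container node-pools update default-pool \
        --cluster=my-cluster --zone=us-central1-a \
        --enable-autoupgrade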
