Professional Cloud Architect on Google Cloud Platform

Note: Test Case questions are at the end of the exam
Last exam update: April 18, 2024
Page 1 out of 18. Viewing questions 1-15 out of 259

Question 1 Topic 6, Mixed Questions

Your operations team currently stores 10 TB of data in an object storage service from a third-party provider. They want to
move this data to a Cloud Storage bucket as quickly as possible, following Google-recommended practices. They want to
minimize the cost of this data migration. Which approach should they use?

  • A. Use the gsutil mv command to move the data.
  • B. Use the Storage Transfer Service to move the data.
  • C. Download the data to a Transfer Appliance, and ship it to Google.
  • D. Download the data to the on-premises data center, and upload it to the Cloud Storage bucket.
Answer:

B


Explanation:
Reference: https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets
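
For illustration, a one-time Storage Transfer Service job from an S3-compatible third-party bucket can be created roughly like this (bucket names and the credentials file are placeholders):

    # Sketch: pull data from a third-party S3-compatible bucket into Cloud Storage.
    gcloud transfer jobs create s3://partner-bucket gs://my-destination-bucket \
        --source-creds-file=aws-creds.json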

Discussions
Sanjay191985
5 months, 1 week ago

Use the Storage Transfer Service to move the data.


Question 2 Topic 6, Mixed Questions

Your company has a Google Cloud project that uses BigQuery for data warehousing. There are some tables that contain
personally identifiable information (PII). Only the compliance team may access the PII. The other information in the tables
must be available to the data science team. You want to minimize cost and the time it takes to assign appropriate access to
the tables. What should you do?

  • A. 1. From the dataset where you have the source data, create views of tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign access controls to the dataset that contains the view.
  • B. 1. From the dataset where you have the source data, create materialized views of tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign access controls to the dataset that contains the view.
  • C. 1. Create a dataset for the data science team. 2. Create views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.
  • D. 1. Create a dataset for the data science team. 2. Create materialized views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.
Answer:

C


Explanation:
Reference: https://cloud.google.com/blog/topics/developers-practitioners/bigquery-admin-reference-guide-data-governance?skip_cache=true
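
As a rough sketch of the steps in answer C using the bq CLI (project, dataset, view, and column names are hypothetical):

    # 1. Create a separate dataset for the data science team.
    bq mk --dataset myproject:sci_shared
    # 2. Create a view that excludes the PII columns.
    bq mk --use_legacy_sql=false \
        --view='SELECT user_id, country FROM `myproject.source.users`' \
        sci_shared.users_no_pii
    # 5. Authorize the view against the source dataset by adding a
    # "view" entry to the dataset's access list.
    bq show --format=prettyjson myproject:source > acl.json
    # ...edit acl.json to add the authorized view, then:
    bq update --source acl.json myproject:source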

Discussions
Sanjay191985
5 months, 1 week ago

Create a dataset for the data science team. 2. Create views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.


Question 3 Topic 6, Mixed Questions

Your company has an application running on Google Cloud that is collecting data from thousands of physical devices that
are globally distributed. Data is published to Pub/Sub and streamed in real time into an SSD Cloud Bigtable cluster via a
Dataflow pipeline. The operations team informs you that your Cloud Bigtable cluster has a hotspot, and queries are taking
longer than expected. You need to resolve the problem and prevent it from happening in the future. What should you do?

  • A. Advise your clients to use HBase APIs instead of NodeJS APIs.
  • B. Delete records older than 30 days.
  • C. Review your RowKey strategy and ensure that keys are evenly spread across the alphabet.
  • D. Double the number of nodes you currently have.
Answer:

C

Discussions
Sanjay191985
5 months, 1 week ago

Review your RowKey strategy and ensure that keys are evenly spread across the alphabet.
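
As a sketch of a hotspot-resistant key (the instance, table, and column family are hypothetical), a common pattern is to prefix a short hash of the device ID so that sequential timestamps do not all land on one tablet:

    # Row key layout: <salt>#<device_id>#<timestamp>
    SALT=$(printf '%s' "$DEVICE_ID" | sha256sum | cut -c1-4)
    cbt -instance telemetry-instance set device_data \
        "${SALT}#${DEVICE_ID}#$(date +%s)" metrics:temp=21.5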


Question 4 Topic 6, Mixed Questions

Your company has just recently activated Cloud Identity to manage users. The Google Cloud Organization has been
configured as well. The security team needs to secure projects that will be part of the Organization. They want to prohibit
IAM users outside the domain from gaining permissions from now on. What should they do?

  • A. Configure an organization policy to restrict identities by domain.
  • B. Configure an organization policy to block creation of service accounts.
  • C. Configure Cloud Scheduler to trigger a Cloud Function every hour that removes all users that don't belong to the Cloud Identity domain from all projects.
  • D. Create a technical user (e.g., [email protected]), and give it the project owner role at root organization level. Write a bash script that: Lists all the IAM rules of all projects within the organization. Deletes all users that do not belong to the company domain. Create a Compute Engine instance in a project within the Organization and configure gcloud to be executed with technical user credentials. Configure a cron job that executes the bash script every hour.
Answer:

D


Explanation:
Reference: https://sysdig.com/blog/gcp-security-best-practices/
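
For reference, the domain restriction described in option A uses the iam.allowedPolicyMemberDomains constraint, roughly as follows (the Organization ID and Cloud Identity customer ID are placeholders):

    # Allow only identities from your own Cloud Identity domain.
    gcloud resource-manager org-policies allow \
        iam.allowedPolicyMemberDomains C0abc123 \
        --organization=123456789012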

Discussions
Sanjay191985
5 months, 1 week ago

Create a technical user (e.g., [email protected]), and give it the project owner role at root organization level. Write a bash script that: Lists all the IAM rules of all projects within the organization. Deletes all users that do not belong to the company domain. Create a Compute Engine instance in a project within the Organization and configure gcloud to be executed with technical user credentials. Configure a cron job that executes the bash script every hour.


Question 5 Topic 6, Mixed Questions

Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs.
For compliance reasons, the logs need to be kept for at least two years. The logs need to be available for active query for 30
days. After that, they just need to be retained for audit purposes. You want to implement a storage solution that is compliant,
minimizes costs, and follows Google-recommended practices. What should you do?

  • A. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level using bucket lock.
  • B. 1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.
  • C. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days.
  • D. 1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table. 2. Set a time_partitioning_expiration of 30 days.
Answer:

C

Discussions
Sanjay191985
5 months, 1 week ago

1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level using bucket lock.

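A rough sketch of option A's plumbing (sink name, bucket, and filter are placeholders; lifecycle.json would contain a SetStorageClass: COLDLINE rule with age: 30):

    # Export instance logs to a regional Cloud Storage bucket.
    gcloud logging sinks create compliance-logs \
        storage.googleapis.com/my-log-archive \
        --log-filter='resource.type="gce_instance"'
    # Move objects to Coldline after 30 days.
    gsutil lifecycle set lifecycle.json gs://my-log-archive
    # Lock a two-year retention policy for compliance.
    gsutil retention set 2y gs://my-log-archive
    gsutil retention lock gs://my-log-archive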


Question 6 Topic 6, Mixed Questions

You want to allow your operations team to store logs from all the production projects in your Organization, without including
logs from other projects. All of the production projects are contained in a folder. You want to ensure that all logs for existing
and new production projects are captured automatically. What should you do?

  • A. Create an aggregated export on the Production folder. Set the log sink to be a Cloud Storage bucket in an operations project.
  • B. Create an aggregated export on the Organization resource. Set the log sink to be a Cloud Storage bucket in an operations project.
  • C. Create log exports in the production projects. Set the log sinks to be a Cloud Storage bucket in an operations project.
  • D. Create log exports in the production projects. Set the log sinks to be BigQuery datasets in the production projects, and grant IAM access to the operations team to run queries on the datasets.
Answer:

B


Explanation:
Reference: https://cloud.google.com/logging/docs/audit
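
For illustration, the aggregated export in answer B can be created roughly like this (sink name, bucket, and Organization ID are placeholders; option A's folder-scoped variant would use --folder=FOLDER_ID instead):

    gcloud logging sinks create prod-logs \
        storage.googleapis.com/ops-logs-archive \
        --organization=123456789012 --include-children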


Question 7 Topic 6, Mixed Questions

You are working with a data warehousing team that performs data analysis. The team needs to process data from external
partners, but the data contains personally identifiable information (PII). You need to process and store the data without
storing any of the PII data. What should you do?

  • A. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.
  • B. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, store all non-PII data in BigQuery and store all PII data in a Cloud Storage bucket that has a retention policy set.
  • C. Ask the external partners to upload all data on Cloud Storage. Configure Bucket Lock for the bucket. Create a Dataflow pipeline to read the data from the bucket. As part of the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.
  • D. Ask the external partners to import all data in your BigQuery dataset. Create a Dataflow pipeline to copy the data into a new table. As part of the Dataflow pipeline, skip all data in columns that have PII data.
Answer:

A
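
As a minimal sketch of the de-identification step in answer A (the project ID and payload are hypothetical; a real Dataflow pipeline would call the Cloud DLP client library rather than curl):

    curl -s -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        "https://dlp.googleapis.com/v2/projects/my-project/content:deidentify" \
        -d '{
              "item": {"value": "Contact: jane@example.com"},
              "inspectConfig": {"infoTypes": [{"name": "EMAIL_ADDRESS"}]},
              "deidentifyConfig": {"infoTypeTransformations": {"transformations": [
                {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
              ]}}
            }'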


Question 8 Topic 6, Mixed Questions

The operations team in your company wants to save Cloud VPN log events for one year. You need to configure the cloud
infrastructure to save the logs. What should you do?

  • A. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to save.
  • B. Enable the Compute Engine API, and then enable logging on the firewall rules that match the traffic you want to save.
  • C. Set up a Cloud Logging Dashboard titled Cloud VPN Logs, and then add a chart that queries for the VPN metrics over a one-year time period.
  • D. Set up a filter in Cloud Logging and a topic in Pub/Sub to publish the logs.
Answer:

A


Explanation:
Reference: https://cloud.google.com/network-connectivity/docs/vpn/how-to/viewing-logs-metrics
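
A rough sketch of answer A (sink and bucket names are placeholders; the resource type shown is illustrative and should be checked against the VPN log entries you actually see in Cloud Logging):

    # Export matching log entries to a Cloud Storage bucket.
    gcloud logging sinks create vpn-logs \
        storage.googleapis.com/vpn-log-archive \
        --log-filter='resource.type="vpn_gateway"'
    # A one-year retention policy can then be set on the bucket, e.g.
    # gsutil retention set 1y gs://vpn-log-archive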


Question 9 Topic 6, Mixed Questions

Your company has an application running on Compute Engine that allows users to play their favorite music. There are a
fixed number of instances. Files are stored in Cloud Storage, and data is streamed directly to users. Users are reporting that
they sometimes need to attempt to play popular songs multiple times before they are successful. You need to improve the
performance of the application. What should you do?

  • A. 1. Mount the Cloud Storage bucket using gcsfuse on all backend Compute Engine instances. 2. Serve music files directly from the backend Compute Engine instance.
  • B. 1. Create a Cloud Filestore NFS volume and attach it to the backend Compute Engine instances. 2. Download popular songs in Cloud Filestore. 3. Serve music files directly from the backend Compute Engine instance.
  • C. 1. Copy popular songs into CloudSQL as a blob. 2. Update application code to retrieve data from CloudSQL when Cloud Storage is overloaded.
  • D. 1. Create a managed instance group with Compute Engine instances. 2. Create a global load balancer and configure it with two backends: the managed instance group and the Cloud Storage bucket. 3. Enable Cloud CDN on the bucket backend.
Answer:

A


Explanation:
Reference: https://cloud.google.com/compute/docs/logging/usage-export
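
For reference, the mount in the keyed answer looks roughly like this (bucket name and mount point are placeholders; gcsfuse must already be installed on the instance):

    gcsfuse --implicit-dirs my-music-bucket /mnt/music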


Question 10 Topic 6, Mixed Questions

Your company has a Google Workspace account and Google Cloud Organization. Some developers in the company have
created Google Cloud projects outside of the Google Cloud Organization.
You want to create an Organization structure that allows developers to create projects, but prevents them from modifying
production projects. You want to manage policies for all projects centrally and be able to set more restrictive policies for
production projects.
You want to minimize disruption to users and developers when business needs change in the future. You want to follow
Google-recommended practices. How should you design the Organization structure?

  • A. 1. Create a second Google Workspace account and Organization. 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies for all projects on both Organizations. 5. Additionally, set the production policies on the original Organization.
  • B. 1. Create a folder under the Organization resource named Production. 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the Production folder.
  • C. 1. Create folders under the Organization resource named Development and Production. 2. Grant all developers the Project Creator IAM role on the Development folder. 3. Move the developer projects into the Development folder. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the Production folder.
  • D. 1. Designate the Organization for production projects only. 2. Ensure that developers do not have the Project Creator IAM role on the Organization. 3. Create development projects outside of the Organization using the developer Google Workspace accounts. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the individual production projects.
Answer:

D


Explanation:
Reference: https://cloud.google.com/resource-manager/docs/creating-managing-organization
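
For reference, the folder-based options rely on commands along these lines (display name, Organization ID, project ID, and folder ID are placeholders):

    # Create a folder under the Organization.
    gcloud resource-manager folders create \
        --display-name=Production --organization=123456789012
    # Move an existing project into a folder (a beta command at the time of writing).
    gcloud beta projects move my-dev-project --folder=987654321098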


Question 11 Topic 6, Mixed Questions

One of your primary business objectives is being able to trust the data stored in your application. You want to log all changes
to the application data.
How can you design your logging system to verify authenticity of your logs?

  • A. Write the log concurrently in the cloud and on premises
  • B. Use a SQL database and limit who can modify the log table
  • C. Digitally sign each timestamp and log entry and store the signature
  • D. Create a JSON dump of each log entry and store it in Google Cloud Storage
Answer:

C
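
A minimal sketch of option C using openssl (the key pair and log line are hypothetical):

    # Sign a timestamped log entry and store the signature alongside it.
    echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) user=42 action=update" > entry.log
    openssl dgst -sha256 -sign private.pem -out entry.sig entry.log
    # Later, anyone with the public key can verify the entry was not altered.
    openssl dgst -sha256 -verify public.pem -signature entry.sig entry.log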


Question 12 Topic 6, Mixed Questions

Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9
months to design and deploy a more cloud-native solution. Specifically, you want a system that is no-ops and auto-scaling.
Which two compute products should you choose? (Choose two.)

  • A. Compute Engine with containers
  • B. Google Kubernetes Engine with containers
  • C. Google App Engine Standard Environment
  • D. Compute Engine with custom instance types
  • E. Compute Engine with managed instance groups
Answer:

B C


Explanation:
B: With Container Engine (now Google Kubernetes Engine), Google automatically deploys your cluster and updates, patches, and secures the nodes. Kubernetes Engine's cluster autoscaler automatically resizes clusters based on the demands of the workloads you want to run.
C: Solutions like Datastore, BigQuery, and App Engine are truly NoOps. App Engine by default scales the number of running instances up and down to match the load, providing consistent performance for your app at all times while minimizing idle instances and thus reducing cost.
Note: At a high level, NoOps means that there is no infrastructure to build out and manage while using the platform. Typically, the compromise you make with NoOps is that you lose control of the underlying infrastructure.
Reference: https://www.quora.com/How-well-does-Google-Container-Engine-support-Google-Cloud-Platform%E2%80%99s-NoOps-claim
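
As a sketch, the cluster autoscaling described above is enabled like this (cluster name, zone, and node limits are placeholders):

    gcloud container clusters create my-cluster \
        --zone=us-central1-a \
        --enable-autoscaling --min-nodes=1 --max-nodes=10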


Question 13 Topic 6, Mixed Questions

Your marketing department wants to send out a promotional email campaign. The development team wants to minimize
direct operation management. They project a wide range of possible customer responses, from 100 to 500,000 click-throughs
per day. The link leads to a simple website that explains the promotion and collects user information and preferences. Which
infrastructure should you recommend? (Choose two.)

  • A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data.
  • B. Use a Google Container Engine cluster to serve the website and store data to persistent disk.
  • C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data.
  • D. Use a single Compute Engine virtual machine (VM) to host a web server, backed by Google Cloud SQL.
Answer:

A C


Explanation:

Reference: https://cloud.google.com/storage-options/
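
A minimal sketch of the App Engine side of the keyed answer (region and project ID are placeholders; app.yaml is assumed to exist):

    # App Engine standard scales automatically across the projected
    # 100 to 500,000 click-throughs per day.
    gcloud app create --region=us-central
    gcloud app deploy app.yaml --project=my-promo-project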


Question 14 Topic 6, Mixed Questions

You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes.
What should you do?

  • A. Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
  • B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10
  • C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
  • D. Create a new Kubernetes Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application
Answer:

C
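
Note that in current gcloud releases the autoscaling flags in option C have graduated from alpha; a present-day equivalent is roughly (cluster and node pool names are placeholders):

    gcloud container clusters update mycluster \
        --enable-autoscaling --min-nodes=1 --max-nodes=10 \
        --node-pool=default-pool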


Question 15 Topic 6, Mixed Questions

Your company places a high value on being responsive and meeting customer needs quickly. Their primary business
objectives are release speed and agility. You want to reduce the chance of security errors being accidentally introduced.
Which two actions can you take? (Choose two.)

  • A. Ensure every code check-in is peer reviewed by a security SME
  • B. Use source code security analyzers as part of the CI/CD pipeline
  • C. Ensure you have stubs to unit test all interfaces between components
  • D. Enable code signing and a trusted binary repository integrated with your CI/CD pipeline
  • E. Run a vulnerability security scanner as part of your continuous-integration /continuous-delivery (CI/CD) pipeline
Answer:

B E
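
For illustration, a pipeline stage implementing answer E might call Artifact Analysis on-demand scanning (the image path is a placeholder, and this assumes the On-Demand Scanning API is enabled in the project):

    # Scan a freshly built image for known vulnerabilities before deploying.
    gcloud artifacts docker images scan \
        us-docker.pkg.dev/my-project/my-repo/app:latest --remote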
