Professional-Cloud-Developer: Google Certified Professional - Cloud Developer Questions and Answers

Question # 4

You have an application running on Google Kubernetes Engine (GKE). The application is currently using a logging library and is outputting to standard output. You need to export the logs to Cloud Logging, and you need the logs to include metadata about each request. You want to use the simplest method to accomplish this. What should you do?

A.

Change your application's logging library to the Cloud Logging library, and configure your application to export logs to Cloud Logging.

B.

Update your application to output logs in CSV format, and add the necessary metadata to the CSV.

C.

Install the Fluent Bit agent on each of your GKE nodes, and have the agent export all logs from /var/log.

D.

Update your application to output logs in JSON format, and add the necessary metadata to the JSON.

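For context on option D, below is a minimal Python sketch of structured logging to standard output; on GKE the logging agent parses each JSON line into a Cloud Logging entry. The helper name, project ID, and metadata values are illustrative, not part of the question.

```python
import json
import sys


def log(message, severity="INFO", **metadata):
    """Hypothetical helper: emit one JSON line to stdout. On GKE the logging
    agent parses it into a structured Cloud Logging entry."""
    entry = {"severity": severity, "message": message, **metadata}
    json.dump(entry, sys.stdout)
    sys.stdout.write("\n")
    sys.stdout.flush()


# "logging.googleapis.com/trace" and "httpRequest" are special fields that
# Cloud Logging recognizes for request correlation (project ID is illustrative).
log(
    "checkout completed",
    **{
        "logging.googleapis.com/trace": "projects/my-project/traces/abc123",
        "httpRequest": {"requestMethod": "POST", "requestUrl": "/checkout", "status": 200},
    },
)
```
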
Question # 5

Your data is stored in Cloud Storage buckets. Fellow developers have reported that data downloaded from Cloud Storage is resulting in slow API performance. You want to research the issue to provide details to the GCP support team. Which command should you run?

A.

gsutil test -o output.json gs://my-bucket

B.

gsutil perfdiag -o output.json gs://my-bucket

C.

gcloud compute scp example-instance:~/test-data -o output.json gs://my-bucket

D.

gcloud services test -o output.json gs://my-bucket

Question # 6

You are building a mobile application that will store hierarchical data structures in a database. The application will enable users working offline to sync changes when they are back online. A backend service will enrich the data in the database using a service account. The application is expected to be very popular and needs to scale seamlessly and securely. Which database and IAM role should you use?

A.

Use Cloud SQL, and assign the roles/cloudsql.editor role to the service account.

B.

Use Bigtable, and assign the roles/bigtable.viewer role to the service account.

C.

Use Firestore in Native mode and assign the roles/datastore.user role to the service account.

D.

Use Firestore in Datastore mode and assign the roles/datastore.viewer role to the service account.

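For reference, writing hierarchical data with the Firestore client library (Native mode) from the backend service might look like the sketch below; collection, document, and field names are illustrative. Offline sync for the mobile users is handled by the mobile SDKs, while this is the server-side enrichment path.

```python
from google.cloud import firestore  # pip install google-cloud-firestore

# The backend service runs as a service account holding roles/datastore.user,
# which allows reading and writing documents.
db = firestore.Client()

# Hierarchical data: a user document with a nested subcollection.
user_ref = db.collection("users").document("user-123")
user_ref.set({"displayName": "Ada", "lastSync": firestore.SERVER_TIMESTAMP})
user_ref.collection("tasks").document("task-1").set({"title": "buy milk", "done": False})
```
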
Question # 7

You developed a JavaScript web application that needs to access Google Drive’s API and obtain permission from users to store files in their Google Drives. You need to select an authorization approach for your application. What should you do?

A.

Create an API key.

B.

Create a SAML token.

C.

Create a service account.

D.

Create an OAuth Client ID.

Question # 8

You have containerized a legacy application that stores its configuration on an NFS share. You need to deploy this application to Google Kubernetes Engine (GKE) and do not want the application serving traffic until after the configuration has been retrieved. What should you do?

A.

Use the gsutil utility to copy files from within the Docker container at startup, and start the service using an ENTRYPOINT script.

B.

Create a PersistentVolumeClaim on the GKE cluster. Access the configuration files from the volume, and start the service using an ENTRYPOINT script.

C.

Use the COPY statement in the Dockerfile to load the configuration into the container image. Verify that the configuration is available, and start the service using an ENTRYPOINT script.

D.

Add a startup script to the GKE instance group to mount the NFS share at node startup. Copy the configuration files into the container, and start the service using an ENTRYPOINT script.

Question # 9

You made a typo in a low-level Linux configuration file that prevents your Compute Engine instance from booting to a normal run level. You just created the Compute Engine instance today and have done no other maintenance on it, other than tweaking files. How should you correct this error?

A.

Download the file using scp, change the file, and then upload the modified version

B.

Configure and log in to the Compute Engine instance through SSH, and change the file

C.

Configure and log in to the Compute Engine instance through the serial port, and change the file

D.

Configure and log in to the Compute Engine instance using a remote desktop client, and change the file

Question # 10

You are using the Cloud Client Library to upload an image in your application to Cloud Storage. Users of the application report that occasionally the upload does not complete and the client library reports an HTTP 504 Gateway Timeout error. You want to make the application more resilient to errors. What changes to the application should you make?

A.

Write an exponential backoff process around the client library call.

B.

Write a one-second wait time backoff process around the client library call.

C.

Design a retry button in the application and ask users to click if the error occurs.

D.

Create a queue for the object and inform the users that the application will try again in 10 minutes.

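As an illustration of option A, here is a hedged sketch of wrapping the upload call in exponential backoff with jitter; bucket and object names are placeholders, and in practice you might instead rely on the retry configuration the client library already exposes.

```python
import random
import time

from google.api_core import exceptions
from google.cloud import storage  # pip install google-cloud-storage


def upload_with_backoff(bucket_name, blob_name, source_path, max_attempts=5):
    """Retry the upload on transient errors, doubling the wait each time."""
    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    for attempt in range(max_attempts):
        try:
            blob.upload_from_filename(source_path)
            return
        except (exceptions.GatewayTimeout, exceptions.ServiceUnavailable):
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter: ~1s, 2s, 4s, 8s ... plus noise.
            time.sleep((2 ** attempt) + random.uniform(0, 1))


upload_with_backoff("my-bucket", "images/photo.jpg", "/tmp/photo.jpg")  # illustrative names
```
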
Question # 11

You are deploying a microservices application to Google Kubernetes Engine (GKE). The application will receive daily updates. You expect to deploy a large number of distinct containers that will run on the Linux operating system (OS). You want to be alerted to any known OS vulnerabilities in the new containers. You want to follow Google-recommended best practices. What should you do?

A.

Use the gcloud CLI to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.

B.

Enable Container Analysis, and upload new container images to Artifact Registry. Review the vulnerability results before each deployment.

C.

Enable Container Analysis, and upload new container images to Artifact Registry. Review the critical vulnerability results before each deployment.

D.

Use the Container Analysis REST API to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.

Question # 12

You have decided to migrate your Compute Engine application to Google Kubernetes Engine. You need to build a container image and push it to Artifact Registry using Cloud Build. What should you do? (Choose two.)

A)

Run gcloud builds submit in the directory that contains the application source code.

B)

Run gcloud run deploy app-name --image gcr.io/$PROJECT_ID/app-name in the directory that contains the application source code.

C)

Run gcloud container images add-tag gcr.io/$PROJECT_ID/app-name gcr.io/$PROJECT_ID/app-name:latest in the directory that contains the application source code.

D)

In the application source directory, create a file named cloudbuild.yaml that contains the following contents: [cloudbuild.yaml exhibit not reproduced here]

E)

In the application source directory, create a file named cloudbuild.yaml that contains the following contents: [cloudbuild.yaml exhibit not reproduced here]

A.

Option A

B.

Option B

C.

Option C

D.

Option D

E.

Option E

Question # 13

Your website is deployed on Compute Engine. Your marketing team wants to test conversion rates between 3 different website designs.

Which approach should you use?

A.

Deploy the website on App Engine and use traffic splitting.

B.

Deploy the website on App Engine as three separate services.

C.

Deploy the website on Cloud Functions and use traffic splitting.

D.

Deploy the website on Cloud Functions as three separate functions.

Question # 14

You are designing a resource-sharing policy for applications used by different teams in a Google Kubernetes Engine cluster. You need to ensure that all applications can access the resources needed to run. What should you do? (Choose two.)

A.

Specify the resource limits and requests in the object specifications.

B.

Create a namespace for each team, and attach resource quotas to each namespace.

C.

Create a LimitRange to specify the default compute resource requirements for each namespace.

D.

Create a Kubernetes service account (KSA) for each application, and assign each KSA to the namespace.

E.

Use the Anthos Policy Controller to enforce label annotations on all namespaces. Use taints and tolerations to allow resource sharing for namespaces.

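For reference, a per-namespace ResourceQuota of the kind described in option B could be created with the official Kubernetes Python client as sketched below; the namespace name and limit values are illustrative, and the namespace is assumed to exist already.

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # or config.load_incluster_config() inside a Pod

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota", namespace="team-a"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "4",
            "requests.memory": "8Gi",
            "limits.cpu": "8",
            "limits.memory": "16Gi",
        }
    ),
)
client.CoreV1Api().create_namespaced_resource_quota(namespace="team-a", body=quota)
```
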
Question # 15

Your team is developing an application in Google Cloud that executes with user identities maintained by Cloud Identity. Each of your application’s users will have an associated Pub/Sub topic to which messages are published, and a Pub/Sub subscription where the same user will retrieve published messages. You need to ensure that only authorized users can publish and subscribe to their own specific Pub/Sub topic and subscription. What should you do?

A.

Bind the user identity to the pubsub.publisher and pubsub.subscriber roles at the resource level.

B.

Grant the user identity the pubsub.publisher and pubsub.subscriber roles at the project level.

C.

Grant the user identity a custom role that contains the pubsub.topics.create and pubsub.subscriptions.create permissions.

D.

Configure the application to run as a service account that has the pubsub.publisher and pubsub.subscriber roles.

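For context on resource-level bindings, the sketch below uses the Pub/Sub client library to grant a single user the publisher role on their own topic; project, topic, and user names are illustrative, and the matching subscription binding would be made the same way through the subscriber client.

```python
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "user-alice-topic")  # illustrative names

policy = publisher.get_iam_policy(request={"resource": topic_path})
policy.bindings.add(
    role="roles/pubsub.publisher",
    members=["user:alice@example.com"],  # the Cloud Identity user who owns this topic
)
publisher.set_iam_policy(request={"resource": topic_path, "policy": policy})
```
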
Question # 16

You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to determine which source code is consuming the most CPU and memory resources. What should you do?

A.

Download, install, and start the Snapshot Debugger agent in your VM. Take debug snapshots of the functions that take the longest time. Review the call stack frame, and identify the local variables at that level in the stack.

B.

Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud console to identify time-intensive functions.

C.

Import OpenTelemetry and Trace export packages into your application, and create the trace provider. Review the latency data for your application on the Trace overview page, and identify where bottlenecks are occurring.

D.

Create a Cloud Logging query that gathers the web application's logs. Write a Python script that calculates the difference between the timestamps from the beginning and the end of the application's longest functions to identify time-intensive functions.

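For reference, starting the Cloud Profiler agent looks roughly like the following; the sketch is in Python for illustration (the question's application is Go, which has an analogous profiler package), and the service name and version are placeholders.

```python
import googlecloudprofiler  # pip install google-cloud-profiler


def main():
    try:
        # Start the Profiler agent once, as early as possible during startup.
        googlecloudprofiler.start(
            service="frontend",        # illustrative service name
            service_version="1.0.0",
            verbose=3,                 # log agent activity while testing
        )
    except (ValueError, NotImplementedError) as exc:
        print("Profiler not started:", exc)
    # ... the rest of the application runs as usual ...


if __name__ == "__main__":
    main()
```
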
Question # 17

Your application performs well when tested locally, but it runs significantly slower when you deploy it to App Engine standard environment. You want to diagnose the problem. What should you do?

A.

File a ticket with Cloud Support indicating that the application performs faster locally.

B.

Use Stackdriver Debugger Snapshots to look at a point-in-time execution of the application.

C.

Use Stackdriver Trace to determine which functions within the application have higher latency.

D.

Add logging commands to the application and use Stackdriver Logging to check where the latency problem occurs.

Question # 18

Your analytics system executes queries against a BigQuery dataset. The SQL query is executed in batch and passes the contents of a SQL file to the BigQuery CLI. Then it redirects the BigQuery CLI output to another process. However, you are getting a permission error from the BigQuery CLI when the queries are executed. You want to resolve the issue. What should you do?

A.

Grant the service account BigQuery Data Viewer and BigQuery Job User roles.

B.

Grant the service account BigQuery Data Editor and BigQuery Data Viewer roles.

C.

Create a view in BigQuery from the SQL query and SELECT * from the view in the CLI.

D.

Create a new dataset in BigQuery, and copy the source table to the new dataset. Query the new dataset and table from the CLI.

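For context, running such a batch query with the BigQuery client library requires the credentials to hold a job-creation role on the project plus read access to the data, as sketched below; the project, dataset, and table names are illustrative.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# The credentials running this job need roles/bigquery.jobUser on the project
# (to create the query job) and roles/bigquery.dataViewer on the dataset
# (to read the tables the SQL file references).
client = bigquery.Client()

sql = "SELECT name, total FROM `my-project.analytics.daily_totals` LIMIT 10"  # illustrative
for row in client.query(sql).result():
    print(row["name"], row["total"])
```
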
Question # 19

You are creating and running containers across different projects in Google Cloud. The application you are developing needs to access Google Cloud services from within Google Kubernetes Engine (GKE).

What should you do?

A.

Assign a Google service account to the GKE nodes.

B.

Use a Google service account to run the Pod with Workload Identity.

C.

Store the Google service account credentials as a Kubernetes Secret.

D.

Use a Google service account with GKE role-based access control (RBAC).

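For reference, with Workload Identity the application code simply relies on Application Default Credentials, as in the sketch below; no key file is mounted into the container, and any Cloud Client Library resolves the bound Google service account automatically.

```python
import google.auth
from google.cloud import storage  # any Cloud Client Library behaves the same way

# Under Workload Identity, Application Default Credentials resolve to the
# Google service account bound to the Pod's Kubernetes service account.
credentials, project_id = google.auth.default()
print("Project:", project_id)

# Client libraries pick the same credentials up automatically.
for bucket in storage.Client().list_buckets():
    print(bucket.name)
```
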
Question # 20

You work on an application that relies on Cloud Spanner as its main datastore. New application features have occasionally caused performance regressions. You want to prevent performance issues by running an automated performance test with Cloud Build for each commit made. If multiple commits are made at the same time, the tests might run concurrently. What should you do?

A.

Create a new project with a random name for every build. Load the required data. Delete the project after the test is run.

B.

Create a new Cloud Spanner instance for every build. Load the required data. Delete the Cloud Spanner instance after the test is run.

C.

Create a project with a Cloud Spanner instance and the required data. Adjust the Cloud Build build file to automatically restore the data to its previous state after the test is run.

D.

Start the Cloud Spanner emulator locally. Load the required data. Shut down the emulator after the test is run.

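For context on option D, below is a sketch of pointing the Spanner client library at a local emulator; the host, port, project, and instance IDs are illustrative and assume the emulator was started separately (for example with gcloud emulators spanner start).

```python
import os

from google.cloud import spanner  # pip install google-cloud-spanner

# Point the client library at a locally running emulator (9010 is the
# emulator's default gRPC port).
os.environ["SPANNER_EMULATOR_HOST"] = "localhost:9010"

client = spanner.Client(project="perf-test-project")  # any project ID works against the emulator
instance = client.instance("perf-test-instance")
# Creating the instance/database and loading test data would follow here,
# and the emulator is simply shut down after the Cloud Build step finishes.
```
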
Question # 21

You manage a microservices application on Google Kubernetes Engine (GKE) using Istio. You secure the communication channels between your microservices by implementing an Istio AuthorizationPolicy, a Kubernetes NetworkPolicy, and mTLS on your GKE cluster. You discover that HTTP requests between two Pods to specific URLs fail, while other requests to other URLs succeed. What is the cause of the connection issue?

A.

A Kubernetes NetworkPolicy resource is blocking HTTP traffic between the Pods.

B.

The Pod initiating the HTTP requests is attempting to connect to the target Pod via an incorrect TCP port.

C.

The Authorization Policy of your cluster is blocking HTTP requests for specific paths within your application.

D.

The cluster has mTLS configured in permissive mode, but the Pod's sidecar proxy is sending unencrypted traffic in plain text.

Question # 22

For this question, refer to the HipLocal case study.

How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?

A.

Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node.

B.

Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

C.

Use Memorystore to store session information and Cloud SQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

D.

Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information.

Question # 23

In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?

A.

Cloud Spanner

B.

Cloud Datastore

C.

Cloud Memorystore as a cache

D.

Separate Cloud SQL clusters for each region

Question # 24

HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.

Which configuration should they choose?

A.

Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine.

B.

Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration.

C.

Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.

D.

Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy without further configuration.

Question # 25

For this question, refer to the HipLocal case study.

Which Google Cloud product addresses HipLocal’s business requirements for service level indicators and objectives?

A.

Cloud Profiler

B.

Cloud Monitoring

C.

Cloud Trace

D.

Cloud Logging

Question # 26

Which service should HipLocal use for their public APIs?

A.

Cloud Armor

B.

Cloud Functions

C.

Cloud Endpoints

D.

Shielded Virtual Machines

Question # 27

Which database should HipLocal use for storing user activity?

A.

BigQuery

B.

Cloud SQL

C.

Cloud Spanner

D.

Cloud Datastore

Question # 28

In order to meet their business requirements, how should HipLocal store their application state?

A.

Use local SSDs to store state.

B.

Put a memcache layer in front of MySQL.

C.

Move the state storage to Cloud Spanner.

D.

Replace the MySQL instance with Cloud SQL.

Question # 29

HipLocal is configuring their access controls.

Which firewall configuration should they implement?

A.

Block all traffic on port 443.

B.

Allow all traffic into the network.

C.

Allow traffic on port 443 for a specific tag.

D.

Allow all traffic on port 443 into the network.

Question # 30

HipLocal's .NET-based auth service fails under intermittent load.

What should they do?

A.

Use App Engine for autoscaling.

B.

Use Cloud Functions for autoscaling.

C.

Use a Compute Engine cluster for the service.

D.

Use a dedicated Compute Engine virtual machine instance for the service.

Question # 31

For this question, refer to the HipLocal case study.

HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to serve read traffic from those replicas. How should they further reduce latency for all database interactions with the least amount of effort?

A.

Migrate the database to Bigtable and use it to serve all global user traffic.

B.

Migrate the database to Cloud Spanner and use it to serve all global user traffic.

C.

Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic.

D.

Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.

Question # 32

For this question, refer to the HipLocal case study.

A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?

A.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials.

B.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.

C.

Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate this account, and authenticate using the Cloud SQL Proxy.

D.

Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.

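For reference, the Secret Manager approach described in option D might look like the following sketch; the project, secret ID, and version are illustrative.

```python
from google.cloud import secretmanager  # pip install google-cloud-secret-manager

# The Compute Engine service account needs roles/secretmanager.secretAccessor
# on the secret (or the project) to read it.
client = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/mysql-password/versions/latest"  # illustrative IDs
response = client.access_secret_version(request={"name": name})
db_password = response.payload.data.decode("UTF-8")
```
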
Question # 33

For this question, refer to the HipLocal case study.

HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in the Cloud Client Libraries to implement least privileged access for the application. What should they do?

A.

Create an API key. Use the API key to interact with Google Cloud.

B.

Use the default compute service account to interact with Google Cloud.

C.

Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with Google Cloud.

D.

Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.

Question # 34

HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.

Which two services should they choose? (Choose two.)

A.

Use Google App Engine services.

B.

Use serverless Google Cloud Functions.

C.

Use Knative to build and deploy serverless applications.

D.

Use Google Kubernetes Engine for automated deployments.

E.

Use a large Google Compute Engine cluster for deployments.

Question # 35

HipLocal's APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot.

What should they do?

A.

Take frequent snapshots of all of the VMs.

B.

Install the Stackdriver Logging agent on the VMs.

C.

Install the Stackdriver Monitoring agent on the VMs.

D.

Use Stackdriver Trace to look for performance bottlenecks.

Question # 36

For this question, refer to the HipLocal case study.

How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements?

A.

Include unit tests in their code, and prevent deployments to QA until all tests have a passing status.

B.

Include performance tests in their code, and prevent deployments to QA until all tests have a passing status.

C.

Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy.

D.

Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.

Question # 37

Which service should HipLocal use to enable access to internal apps?

A.

Cloud VPN

B.

Cloud Armor

C.

Virtual Private Cloud

D.

Cloud Identity-Aware Proxy

Question # 38

HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks.

Which IP strategy should they use?

A.

Create manual subnets.

B.

Create an auto mode subnet.

C.

Create multiple peered VPCs.

D.

Provision a single instance for NAT.

Question # 39

HipLocal’s data science team wants to analyze user reviews.

How should they prepare the data?

A.

Use the Cloud Data Loss Prevention API for redaction of the review dataset.

B.

Use the Cloud Data Loss Prevention API for de-identification of the review dataset.

C.

Use the Cloud Natural Language Processing API for redaction of the review dataset.

D.

Use the Cloud Natural Language Processing API for de-identification of the review dataset.

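For reference, de-identification of review text with the Cloud DLP client library might look like the sketch below; the project ID, info types, and sample review are illustrative.

```python
from google.cloud import dlp_v2  # pip install google-cloud-dlp

dlp = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # illustrative project ID

response = dlp.deidentify_content(
    request={
        "parent": parent,
        "inspect_config": {
            "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}]
        },
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    # Replace each finding with its info type, e.g. [EMAIL_ADDRESS].
                    {"primitive_transformation": {"replace_with_info_type_config": {}}}
                ]
            }
        },
        "item": {"value": "Great service! Reach me at jane@example.com."},
    }
)
print(response.item.value)
```
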
Question # 40

For this question, refer to the HipLocal case study.

HipLocal is expanding into new locations. They must capture additional data each time the application is launched in a new European country. This is causing delays in the development process due to constant schema changes and a lack of environments for conducting testing on the application changes. How should they resolve the issue while meeting the business requirements?

A.

Create new Cloud SQL instances in Europe and North America for testing and deployment. Provide developers with local MySQL instances to conduct testing on the application changes.

B.

Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to emulate a local Bigtable development environment.

C.

Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across regions in the Americas and Europe. Provide developers with local MySQL instances to conduct testing on the application changes.

D.

Migrate data to Firestore in Native mode and set up instan
