
Professional-Cloud-DevOps-Engineer: Google Cloud Certified - Professional Cloud DevOps Engineer Exam Questions and Answers

Question # 4

Your company processes IoT data at scale by using Pub/Sub, App Engine standard environment, and an application written in Go. You noticed that the performance inconsistently degrades at peak load. You could not reproduce this issue on your workstation. You need to continuously monitor the application in production to identify slow paths in the code. You want to minimize performance impact and management overhead. What should you do?

A.

Install a continuous profiling tool into Compute Engine. Configure the application to send profiling data to the tool.

B.

Periodically run the go tool pprof command against the application instance. Analyze the results by using flame graphs.

C.

Configure Cloud Profiler, and initialize the cloud.google.com/go/profiler library in the application.

D.

Use Cloud Monitoring to assess the App Engine CPU utilization metric.
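
For context, Cloud Profiler is enabled by starting the profiling agent once at application startup. A minimal sketch is shown below using the Python agent (googlecloudprofiler); the service name and version are placeholders, and the Go library named in option C (cloud.google.com/go/profiler) follows the same start-once pattern.

    import googlecloudprofiler

    # Start the Cloud Profiler agent once at startup; it samples CPU and heap
    # usage in the background with low overhead.
    # "iot-ingest" and "1.0.0" are placeholder values for illustration.
    googlecloudprofiler.start(
        service="iot-ingest",
        service_version="1.0.0",
    )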

Question # 5

Your organization recently adopted a container-based workflow for application development. Your team develops numerous applications that are deployed continuously through an automated build pipeline to a Kubernetes cluster in the production environment. The security auditor is concerned that developers or operators could circumvent automated testing and push code changes to production without approval. What should you do to enforce approvals?

A.

Configure the build system with protected branches that require pull request approval.

B.

Use an Admission Controller to verify that incoming requests originate from approved sources.

C.

Leverage Kubernetes Role-Based Access Control (RBAC) to restrict access to only approved users.

D.

Enable binary authorization inside the Kubernetes cluster and configure the build pipeline as an attestor.

Question # 6

You currently store the virtual machine (VM) utilization logs in Stackdriver. You need to provide an easy-to-share interactive VM utilization dashboard that is updated in real time and contains information aggregated on a quarterly basis. You want to use Google Cloud Platform solutions. What should you do?

A.

1. Export VM utilization logs from Stackdriver to BigQuery.

2. Create a dashboard in Data Studio.

3. Share the dashboard with your stakeholders.

B.

1. Export VM utilization logs from Stackdriver to Cloud Pub/Sub.

2. From Cloud Pub/Sub, send the logs to a Security Information and Event Management (SIEM) system.

3. Build the dashboards in the SIEM system and share with your stakeholders.

C.

1. Export VM utilization logs from Stackdriver to BigQuery.

2. From BigQuery, export the logs to a CSV file.

3. Import the CSV file into Google Sheets.

4. Build a dashboard in Google Sheets and share it with your stakeholders.

D.

1. Export VM utilization logs from Stackdriver to a Cloud Storage bucket.

2. Enable the Cloud Storage API to pull the logs programmatically.

3. Build a custom data visualization application.

4. Display the pulled logs in a custom dashboard.

Question # 7

You are using Terraform to manage infrastructure as code within a CI/CD pipeline. You notice that multiple copies of the entire infrastructure stack exist in your Google Cloud project, and a new copy is created each time a change to the existing infrastructure is made. You need to optimize your cloud spend by ensuring that only a single instance of your infrastructure stack exists at a time. You want to follow Google-recommended practices. What should you do?

A.

Create a new pipeline to delete old infrastructure stacks when they are no longer needed

B.

Confirm that the pipeline is storing and retrieving the terraform.tfstate file from Cloud Storage with the Terraform gcs backend.

C.

Verify that the pipeline is storing and retrieving the terraform.tfstate file from source control.

D.

Update the pipeline to remove any existing infrastructure before you apply the latest configuration

Question # 8

You are creating and assigning action items in a postmortem for an outage. The outage is over, but you need to address the root causes. You want to ensure that your team handles the action items quickly and efficiently. How should you assign owners and collaborators to action items?

A.

Assign one owner for each action item and any necessary collaborators.

B.

Assign multiple owners for each item to guarantee that the team addresses items quickly

C.

Assign collaborators but no individual owners to the items to keep the postmortem blameless.

D.

Assign the team lead as the owner for all action items because they are in charge of the SRE team.

Question # 9

Your organization wants to increase the availability target of an application from 99.9% to 99.99% for an investment of $2,000. The application's current revenue is $1,000,000. You need to determine whether the increase in availability is worth the investment for a single year of usage. What should you do?

A.

Calculate the value of improved availability to be $900, and determine that the increase in availability is not worth the investment

B.

Calculate the value of improved availability to be $1,000, and determine that the increase in availability is not worth the investment.

C.

Calculate the value of improved availability to be $1,000, and determine that the increase in availability is worth the investment.

D.

Calculate the value of improved availability to be $9,000, and determine that the increase in availability is worth the investment.
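
For a quick check of the arithmetic behind these options, the annual value of the improvement is the revenue multiplied by the gain in availability:

    $1,000,000 x (99.99% - 99.9%) = $1,000,000 x 0.0009 = $900

This figure can then be compared against the $2,000 cost of the investment.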

Question # 10

You are designing a deployment technique for your applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for new versions of your applications. You need to test against the full production load before your applications are launched. What should you do?

A.

Use A/B testing with blue/green deployment.

B.

Use shadow testing with continuous deployment.

C.

Use canary testing with continuous deployment.

D.

Use canary testing with a rolling update deployment.

Question # 11

You support a Node.js application running on Google Kubernetes Engine (GKE) in production. The application makes several HTTP requests to dependent applications. You want to anticipate which dependent applications might cause performance issues. What should you do?

A.

Instrument all applications with Stackdriver Profiler.

B.

Instrument all applications with Stackdriver Trace and review inter-service HTTP requests.

C.

Use Stackdriver Debugger to review the execution of logic within each application to instrument all applications.

D.

Modify the Node.js application to log HTTP request and response times to dependent applications. Use Stackdriver Logging to find dependent applications that are performing poorly.

Question # 12

You are the Site Reliability Engineer responsible for managing your company's data services and products. You regularly navigate operational challenges, such as unpredictable data volume and high cost, with your company's data ingestion processes. You recently learned that a new data ingestion product will be developed in Google Cloud. You need to collaborate with the product development team to provide operational input on the new product. What should you do?

A.

Deploy the prototype product in a test environment, run a load test, and share the results with the product development team.

B.

When the initial product version passes the quality assurance phase and compliance assessments, deploy the product to a staging environment. Share error logs and performance metrics with the product development team.

C.

When the new product is used by at least one internal customer in production, share error logs and monitoring metrics with the product development team.

D.

Review the design of the product with the product development team to provide feedback early in the design phase.

Question # 13

You are working with a government agency that requires you to archive application logs for seven years. You need to configure Stackdriver to export and store the logs while minimizing costs of storage. What should you do?

A.

Create a Cloud Storage bucket and develop your application to send logs directly to the bucket.

B.

Develop an App Engine application that pulls the logs from Stackdriver and saves them in BigQuery.

C.

Create an export in Stackdriver and configure Cloud Pub/Sub to store logs in permanent storage for seven years.

D.

Create a sink in Stackdriver, name it, create a bucket on Cloud Storage for storing archived logs, and then select the bucket as the log export destination.
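
As an illustrative sketch of option D, a sink can also be created programmatically with the Cloud Logging client library; the bucket name and filter below are placeholders, and placing the bucket's objects in a colder storage class (for example Coldline or Archive) further reduces cost over a seven-year retention period.

    from google.cloud import logging

    client = logging.Client()

    # Destination format for a Cloud Storage export sink.
    destination = "storage.googleapis.com/my-archived-app-logs"  # placeholder bucket

    # Export only the application logs that must be retained.
    sink = client.sink(
        "archive-app-logs",                 # sink name (placeholder)
        filter_='resource.type="gae_app"',  # placeholder filter
        destination=destination,
    )
    sink.create()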

Question # 14

You are developing the deployment and testing strategies for your CI/CD pipeline in Google Cloud. You must be able to:

• Reduce the complexity of release deployments and minimize the duration of deployment rollbacks

• Test real production traffic with a gradual increase in the number of affected users

You want to select a deployment and testing strategy that meets your requirements. What should you do?

A.

Recreate deployment and canary testing

B.

Blue/green deployment and canary testing

C.

Rolling update deployment and A/B testing

D.

Rolling update deployment and shadow testing

Question # 15

You support a trading application written in Python and hosted on App Engine flexible environment. You want to customize the error information being sent to Stackdriver Error Reporting. What should you do?

A.

Install the Stackdriver Error Reporting library for Python, and then run your code on a Compute Engine VM.

B.

Install the Stackdriver Error Reporting library for Python, and then run your code on Google Kubernetes Engine.

C.

Install the Stackdriver Error Reporting library for Python, and then run your code on App Engine flexible environment.

D.

Use the Stackdriver Error Reporting API to write errors from your application to ReportedErrorEvent, and then generate log entries with properly formatted error messages in Stackdriver Logging.
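
As an illustrative sketch of the Error Reporting client library for Python mentioned in several options, the usage pattern is a one-time client setup plus a reporting call at each error site; the service name and the failing function below are placeholders.

    from google.cloud import error_reporting

    # The client picks up the project and credentials from the environment;
    # the service/version values here are placeholders for illustration.
    client = error_reporting.Client(service="trading-app", version="1.0.0")

    def execute_trade(order_id):
        try:
            raise ValueError(f"order {order_id} rejected")  # placeholder failure
        except Exception:
            # Captures the active exception and stack trace and sends it
            # to Error Reporting under the configured service/version.
            client.report_exception()

    execute_trade("demo-123")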

Question # 16

You have a pool of application servers running on Compute Engine. You need to provide a secure solution that requires the least amount of configuration and allows developers to easily access application logs for troubleshooting. How would you implement the solution on GCP?

A.

• Deploy the Stackdriver logging agent to the application servers.

• Give the developers the IAM Logs Viewer role to access Stackdriver and view logs.

B.

• Deploy the Stackdriver logging agent to the application servers.

• Give the developers the IAM Logs Private Logs Viewer role to access Stackdriver and view logs.

C.

• Deploy the Stackdriver monitoring agent to the application servers.

• Give the developers the IAM Monitoring Viewer role to access Stackdriver and view metrics.

D.

• Install the gsutil command line tool on your application servers.

• Write a script using gsutil to upload your application log to a Cloud Storage bucket, and then schedule it to run via cron every 5 minutes.

• Give the developers IAM Object Viewer access to view the logs in the specified bucket.

Question # 17

Your company runs services by using multiple globally distributed Google Kubernetes Engine (GKE) clusters. Your operations team has set up workload monitoring that uses Prometheus-based tooling for metrics, alerts, and dashboards. This setup does not provide a method to view metrics globally across all clusters. You need to implement a scalable solution to support global Prometheus querying and minimize management overhead. What should you do?

A.

Configure Prometheus cross-service federation for centralized data access

B.

Configure workload metrics within Cloud Operations for GKE

C.

Configure Prometheus hierarchical federation for centralized data access

D.

Configure Google Cloud Managed Service for Prometheus

Question # 18

Your company is developing applications that are deployed on Google Kubernetes Engine (GKE). Each team manages a different application. You need to create the development and production environments for each team while you minimize costs. Different teams should not be able to access other teams' environments. You want to follow Google-recommended practices. What should you do?

A.

Create one Google Cloud project per team. In each project, create a cluster for development and one for production. Grant the teams Identity and Access Management (IAM) access to their respective clusters.

B.

Create one Google Cloud project per team. In each project, create a cluster with a Kubernetes namespace for development and one for production. Grant the teams Identity and Access Management (IAM) access to their respective clusters.

C.

Create a development and a production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Identity-Aware Proxy so that each team can only access its own namespace.

D.

Create a development and a production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Kubernetes role-based access control (RBAC) so that each team can only access its own namespace.

Question # 19

You are building and deploying a microservice on Cloud Run for your organization. Your service is used by many applications internally. You are deploying a new release, and you need to test the new version extensively in the staging and production environments. You must minimize user and developer impact. What should you do?

A.

Deploy the new version of the service to the staging environment. Split the traffic, and allow 1% of traffic through to the latest version. Test the latest version. If the test passes, gradually roll out the latest version to the staging and production environments.

B.

Deploy the new version of the service to the staging environment. Split the traffic, and allow 50% of traffic through to the latest version. Test the latest version. If the test passes, send all traffic to the latest version. Repeat for the production environment.

C.

Deploy the new version of the service to the staging environment with a new-release tag, without serving traffic. Test the new-release version. If the test passes, gradually roll out this tagged version. Repeat for the production environment.

D.

Deploy a new environment with the green tag to use as the staging environment. Deploy the new version of the service to the green environment and test the new version. If the tests pass, send all traffic to the green environment and delete the existing staging environment. Repeat for the production environment.

Question # 20

You support an e-commerce application that runs on a large Google Kubernetes Engine (GKE) cluster deployed on-premises and on Google Cloud Platform. The application consists of microservices that run in containers. You want to identify containers that are using the most CPU and memory. What should you do?

A.

Use Stackdriver Kubernetes Engine Monitoring.

B.

Use Prometheus to collect and aggregate logs per container, and then analyze the results in Grafana.

C.

Use the Stackdriver Monitoring API to create custom metrics, and then organize your containers using groups.

D.

Use Stackdriver Logging to export application logs to BigQuery, aggregate logs per container, and then analyze CPU and memory consumption.

Question # 21

You have a CI/CD pipeline that uses Cloud Build to build new Docker images and push them to Docker Hub. You use Git for code versioning. After making a change in the Cloud Build YAML configuration, you notice that no new artifacts are being built by the pipeline. You need to resolve the issue following Site Reliability Engineering practices. What should you do?

A.

Disable the CI pipeline and revert to manually building and pushing the artifacts.

B.

Change the CI pipeline to push the artifacts to Container Registry instead of Docker Hub.

C.

Upload the configuration YAML file to Cloud Storage and use Error Reporting to identify and fix the issue.

D.

Run a Git compare between the previous and current Cloud Build Configuration files to find and fix the bug.

Question # 22

You need to enforce several constraint templates across your Google Kubernetes Engine (GKE) clusters. The constraints include policy parameters, such as restricting the Kubernetes API. You must ensure that the policy parameters are stored in a GitHub repository and automatically applied when changes occur. What should you do?

A.

Set up a GitHub action to trigger Cloud Build when there is a parameter change. In Cloud Build, run a gcloud CLI command to apply the change.

B.

When there is a change in GitHub, use a web hook to send a request to Anthos Service Mesh, and apply the change.

C.

Configure Anthos Config Management with the GitHub repository. When there is a change in the repository, use Anthos Config Management to apply the change.

D.

Configure Config Connector with the GitHub repository. When there is a change in the repository, use Config Connector to apply the change.

Question # 23

Your application's performance in Google Cloud has degraded since the last release. You suspect that downstream dependencies might be causing some requests to take longer to complete. You need to investigate the issue with your application to determine the cause. What should you do?

A.

Configure Error Reporting in your application

B.

Configure Google Cloud Managed Service for Prometheus in your application

C.

Configure Cloud Profiler in your application

D.

Configure Cloud Trace in your application

Question # 24

Your company follows Site Reliability Engineering practices. You are the Incident Commander for a new, customer-impacting incident. You need to immediately assign two incident management roles to assist you in an effective incident response. What roles should you assign?

Choose 2 answers

A.

Operations Lead

B.

Engineering Lead

C.

Communications Lead

D.

Customer Impact Assessor

E.

External Customer Communications Lead

Question # 25

Your team deploys applications to three Google Kubernetes Engine (GKE) environments: development, staging, and production. You use GitHub repositories as your source of truth. You need to ensure that the three environments are consistent. You want to follow Google-recommended practices to enforce and install network policies and a logging DaemonSet on all the GKE clusters in those environments. What should you do?

A.

Use Google Cloud Deploy to deploy the network policies and the DaemonSet. Use Cloud Monitoring to trigger an alert if the network policies and DaemonSet drift from your source in the repository.

B.

Use Google Cloud Deploy to deploy the DaemonSet, and use Policy Controller to configure the network policies. Use Cloud Monitoring to detect drift from the source in the repository, and use Cloud Functions to correct the drift.

C.

Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Config Sync to sync the configurations for the three environments.

D.

Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Policy Controller to enforce the configurations for the three environments.

Question # 26

You support an application deployed on Compute Engine. The application connects to a Cloud SQL instance to store and retrieve data. After an update to the application, users report errors showing database timeout messages. The number of concurrent active users remained stable. You need to find the most probable cause of the database timeout. What should you do?

A.

Check the serial port logs of the Compute Engine instance.

B.

Use Stackdriver Profiler to visualize the resources utilization throughout the application.

C.

Determine whether there is an increased number of connections to the Cloud SQL instance.

D.

Use Cloud Security Scanner to see whether your Cloud SQL is under a Distributed Denial of Service (DDoS) attack.

Question # 27

You need to reduce the cost of virtual machines (VMs) for your organization. After reviewing different options, you decide to leverage preemptible VM instances. Which application is suitable for preemptible VMs?

A.

A scalable in-memory caching system

B.

The organization's public-facing website

C.

A distributed, eventually consistent NoSQL database cluster with sufficient quorum

D.

A GPU-accelerated video rendering platform that retrieves and stores videos in a storage bucket

Question # 28

You are deploying a Cloud Build job that deploys Terraform code when a Git branch is updated. While testing, you noticed that the job fails. You see the following error in the build logs:

Initializing the backend...

Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403

You need to resolve the issue by following Google-recommended practices. What should you do?

A.

Change the Terraform code to use local state.

B.

Create a storage bucket with the name specified in the Terraform configuration.

C.

Grant the roles/owner Identity and Access Management (IAM) role to the Cloud Build service account on the project.

D.

Grant the roles/storage.objectAdmin Identity and Access Management (IAM) role to the Cloud Build service account on the state file bucket.
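
As an illustrative sketch of option D, the role can be granted on the state bucket with the Cloud Storage client library; the bucket name and service account below are placeholders, and the same grant can be made with gcloud or in the console.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-terraform-state")  # placeholder bucket name

    # Grant the Cloud Build service account objectAdmin on the state bucket only,
    # rather than a broad project-level role.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append(
        {
            "role": "roles/storage.objectAdmin",
            # Placeholder: replace with your project's Cloud Build service account.
            "members": {"serviceAccount:123456789@cloudbuild.gserviceaccount.com"},
        }
    )
    bucket.set_iam_policy(policy)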

Question # 29

You support a high-traffic web application that runs on Google Cloud Platform (GCP). You need to measure application reliability from a user perspective without making any engineering changes to it. What should you do?

Choose 2 answers

A.

Review current application metrics and add new ones as needed.

B.

Modify the code to capture additional information for user interaction.

C.

Analyze the web proxy logs only and capture response time of each request.

D.

Create new synthetic clients to simulate a user journey using the application.

E.

Use current and historic Request Logs to trace customer interaction with the application.

Question # 30

Your organization is starting to containerize with Google Cloud. You need a fully managed storage solution for container images and Helm charts. You need to identify a storage solution that has native integration into existing Google Cloud services, including Google Kubernetes Engine (GKE), Cloud Run, VPC Service Controls, and Identity and Access Management (IAM). What should you do?

A.

Use Docker to configure a Cloud Storage driver pointed at the bucket owned by your organization.

B.

Configure Container Registry as an OCI-based container registry for container images.

C.

Configure Artifact Registry as an OCI-based container registry for both Helm charts and container images.

D.

Configure an open source container registry server to run in GKE with a restrictive role-based access control (RBAC) configuration.

Question # 31

You recently migrated an ecommerce application to Google Cloud. You now need to prepare the application for the upcoming peak traffic season. You want to follow Google-recommended practices. What should you do first to prepare for the busy season?

A.

Migrate the application to Cloud Run, and use autoscaling.

B.

Load test the application to profile its performance for scaling.

C.

Create a Terraform configuration for the application's underlying infrastructure to quickly deploy to additional regions.

D.

Pre-provision the additional compute power that was used last season, and expect growth.

Question # 32

Your team of Infrastructure DevOps Engineers is growing, and you are starting to use Terraform to manage infrastructure. You need a way to implement code versioning and to share code with other team members. What should you do?

A.

Store the Terraform code in a version-control system. Establish procedures for pushing new versions and merging with the master.

B.

Store the Terraform code in a network shared folder with child folders for each version release. Ensure that everyone works on different files.

C.

Store the Terraform code in a Cloud Storage bucket using object versioning. Give access to the bucket to every team member so they can download the files.

D.

Store the Terraform code in a shared Google Drive folder so it syncs automatically to every team member’s computer. Organize files with a naming convention that identifies each new version.

Question # 33

You manage several production systems that run on Compute Engine in the same Google Cloud Platform (GCP) project. Each system has its own set of dedicated Compute Engine instances. You want to know how much it costs to run each of the systems. What should you do?

A.

In the Google Cloud Platform Console, use the Cost Breakdown section to visualize the costs per system.

B.

Assign all instances a label specific to the system they run. Configure BigQuery billing export and query costs per label.

C.

Enrich all instances with metadata specific to the system they run. Configure Stackdriver Logging to export to BigQuery, and query costs based on the metadata.

D.

Name each virtual machine (VM) after the system it runs. Set up a usage report export to a Cloud Storage bucket. Configure the bucket as a source in BigQuery to query costs based on VM name.
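
To illustrate the label-based approach in option B, once billing export to BigQuery is enabled, per-system costs can be queried by unnesting the labels field; the table name and label key below are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Placeholder table: use your project's billing export table.
    query = """
        SELECT l.value AS system, ROUND(SUM(cost), 2) AS total_cost
        FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`, UNNEST(labels) AS l
        WHERE l.key = 'system'
        GROUP BY system
        ORDER BY total_cost DESC
    """
    for row in client.query(query).result():
        print(row.system, row.total_cost)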

Question # 34

You are part of an organization that follows SRE practices and principles. You are taking over the management of a new service from the Development Team, and you conduct a Production Readiness Review (PRR). After the PRR analysis phase, you determine that the service cannot currently meet its Service Level Objectives (SLOs). You want to ensure that the service can meet its SLOs in production. What should you do next?

A.

Adjust the SLO targets to be achievable by the service so you can bring it into production.

B.

Notify the development team that they will have to provide production support for the service.

C.

Identify recommended reliability improvements to the service to be completed before handover.

D.

Bring the service into production with no SLOs and build them when you have collected operational data.

Question # 35

You are running an application in a virtual machine (VM) using a custom Debian image. The image has the Stackdriver Logging agent installed. The VM has the cloud-platform scope. The application is logging information via syslog. You want to use Stackdriver Logging in the Google Cloud Platform Console to visualize the logs. You notice that syslog is not showing up in the "All logs" dropdown list of the Logs Viewer. What is the first thing you should do?

A.

Look for the agent's test log entry in the Logs Viewer.

B.

Install the most recent version of the Stackdriver agent.

C.

Verify the VM service account access scope includes the monitoring.write scope.

D.

SSH to the VM and execute the following command on your VM: ps ax | grep fluentd

Question # 36

You work for a global organization and run a service with an availability target of 99% with limited engineering resources. For the current calendar month, you noticed that the service has 99.5% availability. You must ensure that your service meets the defined availability goals and can react to business changes, including the upcoming launch of new features. You also need to reduce technical debt while minimizing operational costs. You want to follow Google-recommended practices. What should you do?

A.

Add N+1 redundancy to your service by adding additional compute resources to the service

B.

Identify, measure and eliminate toil by automating repetitive tasks

C.

Define an error budget for your service level availability and minimize the remaining error budget

D.

Allocate available engineers to the feature backlog while you ensure that the service remains within the availability target.

Question # 37

Your application services run in Google Kubernetes Engine (GKE). You want to make sure that only images from your centrally-managed Google Container Registry (GCR) image registry in the altostrat-images project can be deployed to the cluster while minimizing development time. What should you do?

A.

Create a custom builder for Cloud Build that will only push images to gcr.io/altostrat-images.

B.

Use a Binary Authorization policy that includes the whitelist name pattern gcr.io/altostrat-images/.

C.

Add logic to the deployment pipeline to check that all manifests contain only images from gcr.io/altostrat-images.

D.

Add a tag to each image in gcr.io/altostrat-images and check that this tag is present when the image is deployed.

Question # 38

You are managing the production deployment to a set of Google Kubernetes Engine (GKE) clusters. You want to make sure only images which are successfully built by your trusted CI/CD pipeline are deployed to production. What should you do?

A.

Enable Cloud Security Scanner on the clusters.

B.

Enable Vulnerability Analysis on the Container Registry.

C.

Set up the Kubernetes Engine clusters as private clusters.

D.

Set up the Kubernetes Engine clusters with Binary Authorization.

Question # 39

Your Cloud Run application writes unstructured logs as text strings to Cloud Logging. You want to convert the unstructured logs to JSON-based structured logs. What should you do?

A.

Install a Fluent Bit sidecar container, and use a JSON parser.

B.

Install the log agent in the Cloud Run container image, and use the log agent to forward logs to Cloud Logging.

C.

Configure the log agent to convert log text payload to JSON payload.

D.

Modify the application to use the Cloud Logging software development kit (SDK), and send log entries with a jsonPayload field.
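
For reference, one common way to produce JSON-based structured logs from a Cloud Run service is to write single-line JSON objects to stdout, which Cloud Logging parses into jsonPayload; this is a minimal sketch, and the field names beyond message and severity are placeholders.

    import json
    import sys

    def log_structured(message, severity="INFO", **fields):
        # Cloud Run forwards stdout to Cloud Logging; a single-line JSON object
        # is parsed into the log entry's jsonPayload, and the "severity" and
        # "message" keys map onto the corresponding LogEntry fields.
        entry = {"message": message, "severity": severity, **fields}
        print(json.dumps(entry), file=sys.stdout, flush=True)

    log_structured("checkout complete", severity="NOTICE", order_id="demo-42")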

Question # 40

You have an application running in Google Kubernetes Engine. The application invokes multiple services per request but responds too slowly. You need to identify which downstream service or services are causing the delay. What should you do?

A.

Analyze VPC flow logs along the path of the request.

B.

Investigate the Liveness and Readiness probes for each service.

C.

Create a Dataflow pipeline to analyze service metrics in real time.

D.

Use a distributed tracing framework such as OpenTelemetry or Stackdriver Trace.
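
As an illustrative sketch of option D, a service can export spans to Cloud Trace through OpenTelemetry; this is shown here in Python with the opentelemetry-exporter-gcp-trace package (the span name is a placeholder, and equivalent SDKs exist for other languages).

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter

    # Register a tracer provider that batches spans and exports them to Cloud Trace.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)

    # Wrap an outbound call to a downstream service in a span so slow
    # dependencies show up in the trace waterfall.
    with tracer.start_as_current_span("call-inventory-service"):
        pass  # placeholder for the actual HTTP request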

Question # 41

Your team is writing a postmortem after an incident on your external-facing application. Your team wants to improve the postmortem policy to include triggers that indicate whether an incident requires a postmortem. Based on Site Reliability Engineering (SRE) practices, what triggers should be defined in the postmortem policy?

Choose 2 answers

A.

An external stakeholder asks for a postmortem

B.

Data is lost due to an incident

C.

An internal stakeholder requests a postmortem

D.

The monitoring system detects that one of the instances for your application has failed

E.

The CD pipeline detects an issue and rolls back a problematic release.

Question # 42

Your company is using HTTPS requests to trigger a public Cloud Run-hosted service accessible at the https://booking-engine-abcdef.a.run.app URL. You need to give developers the ability to test the latest revisions of the service before the service is exposed to customers. What should you do?

A.

Run the gcloud run deploy booking-engine --no-traffic --tag dev command. Use the https://dev---booking-engine-abcdef.a.run.app URL for testing.

B.

Run the gcloud run services update-traffic booking-engine --to-revisions LATEST=1 command. Use the https://booking-engine-abcdef.a.run.app URL for testing.

C.

Pass the curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" auth token. Use the https://booking-engine-abcdef.a.run.app URL to test privately.

D.

Grant the roles/run.invoker role to the developers testing the booking-engine service. Use the https://booking-engine-abcdef.private.run.app URL for testing.

Question # 43

Your company is developing applications that are deployed on Google Kubernetes Engine (GKE). Each team manages a different application. You need to create the development and production environments for each team, while minimizing costs. Different teams should not be able to access other teams’ environments. What should you do?

A.

Create one GCP Project per team. In each project, create a cluster for Development and one for Production. Grant the teams IAM access to their respective clusters.

B.

Create one GCP Project per team. In each project, create a cluster with a Kubernetes namespace for Development and one for Production. Grant the teams IAM access to their respective clusters.

C.

Create a Development and a Production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Identity Aware Proxy so that each team can only access its own namespace.

D.

Create a Development and a Production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Kubernetes Role-based access control (RBAC) so that each team can only access its own namespace.

Question # 44

Your application images are built and pushed to Google Container Registry (GCR). You want to build an automated pipeline that deploys the application when the image is updated while minimizing the development effort. What should you do?

A.

Use Cloud Build to trigger a Spinnaker pipeline.

B.

Use Cloud Pub/Sub to trigger a Spinnaker pipeline.

C.

Use a custom builder in Cloud Build to trigger a Jenkins pipeline.

D.

Use Cloud Pub/Sub to trigger a custom deployment service running in Google Kubernetes Engine (GKE).

Question # 45

You are the Operations Lead for an ongoing incident with one of your services. The service usually runs at around 70% capacity. You notice that one node is returning 5xx errors for all requests. There has also been a noticeable increase in support cases from customers. You need to remove the offending node from the load balancer pool so that you can isolate and investigate the node. You want to follow Google-recommended practices to manage the incident and reduce the impact on users. What should you do?

A.

1. Communicate your intent to the incident team.

2. Perform a load analysis to determine if the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately.

3. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service.

B.

1. Communicate your intent to the incident team.

2. Add a new node to the pool, and wait for the new node to report as healthy.

3. When traffic is being served on the new node, drain traffic from the unhealthy node, and remove the old node from service.

C.

1. Drain traffic from the unhealthy node and remove the node from service.

2. Monitor traffic to ensure that the error is resolved and that the other nodes in the pool are handling the traffic appropriately.

3. Scale the pool as necessary to handle the new load.

4. Communicate your actions to the incident team.

D.

1. Drain traffic from the unhealthy node and remove the old node from service.

2. Add a new node to the pool, wait for the new node to report as healthy, and then serve traffic to the new node.

3. Monitor traffic to ensure that the pool is healthy and is handling traffic appropriately.

4. Communicate your actions to the incident team.
