

Associate-Cloud-Engineer: Google Cloud Certified - Associate Cloud Engineer Questions and Answers

Question # 4

You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the same Deployment Manager deployment, you also want to create a DaemonSet in the kube-system namespace of the cluster. You want a solution that uses the fewest possible services. What should you do?

A.

Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet.

B.

Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition.

C.

With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet.

D.

In the cluster’s definition in Deployment Manager, add a metadata that has kube-system as key and the DaemonSet manifest as value.

Question # 5

During a recent audit of your existing Google Cloud resources, you discovered several users with email addresses outside of your Google Workspace domain.

You want to ensure that your resources are only shared with users whose email addresses match your domain. You need to remove any mismatched users, and you want to avoid having to audit your resources to identify mismatched users. What should you do?

A.

Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.

B.

Create a Cloud Scheduler task to regularly scan your resources and delete mismatched users.

C.

Set an organizational policy constraint to limit identities by domain to automatically remove mismatched users.

D.

Set an organizational policy constraint to limit identities by domain, and then retroactively remove the existing mismatched users.

Question # 6

Your company is moving from an on-premises environment to Google Cloud Platform (GCP). You have multiple development teams that use Cassandra environments as backend databases. They all need a development environment that is isolated from other Cassandra instances. You want to move to GCP quickly and with minimal support effort. What should you do?

A.

1. Build an instruction guide to install Cassandra on GCP.

2. Make the instruction guide accessible to your developers.

B.

1. Advise your developers to go to Cloud Marketplace.

2. Ask the developers to launch a Cassandra image for their development work.

C.

1. Build a Cassandra Compute Engine instance and take a snapshot of it.

2. Use the snapshot to create instances for your developers.

D.

1. Build a Cassandra Compute Engine instance and take a snapshot of it.

2. Upload the snapshot to Cloud Storage and make it accessible to your developers.

3. Build instructions to create a Compute Engine instance from the snapshot so that developers can do it themselves.

Question # 7

You are running out of primary internal IP addresses in a subnet for a custom mode VPC. The subnet has the IP range 10.0.0.0/20, and the IP addresses are primarily used by virtual machines in the project. You need to provide more IP addresses for the virtual machines. What should you do?

A.

Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/22.

B.

Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/18.

C.

Add a secondary IP range 10.1.0.0/20 to the subnet.

D.

Convert the subnet IP range from IPv4 to IPv6

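For reference, a subnet's primary IPv4 range can be widened in place with the gcloud CLI. The sketch below is illustrative only; the subnet name and region are placeholders, and the prefix length matches the /18 mentioned in the options:

# Expand the primary range of the existing subnet (the new prefix must be wider than the current one)
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-central1 \
    --prefix-length=18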
Question # 8

You have a Compute Engine instance hosting a production application. You want to receive an email if the instance consumes more than 90% of its CPU resources for more than 15 minutes. You want to use Google services. What should you do?

A.

1. Create a consumer Gmail account.

2. Write a script that monitors the CPU usage.

3. When the CPU usage exceeds the threshold, have that script send an email using the Gmail account and smtp.gmail.com on port 25 as SMTP server.

B.

1. Create a Stackdriver Workspace, and associate your Google Cloud Platform (GCP) project with it.

2. Create an Alerting Policy in Stackdriver that uses the threshold as a trigger condition.

3. Configure your email address in the notification channel.

C.

1. Create a Stackdriver Workspace, and associate your GCP project with it.

2. Write a script that monitors the CPU usage and sends it as a custom metric to Stackdriver.

3. Create an uptime check for the instance in Stackdriver.

D.

1. In Stackdriver Logging, create a logs-based metric to extract the CPU usage by using this regular expression: CPU Usage: ([0-9] {1,3}) %

2. In Stackdriver Monitoring, create an Alerting Policy based on this metric.

3. Configure your email address in the notification channel.

Question # 9

Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google’s recommended practices. Which storage option should you use?

A.

Multi-Regional Storage

B.

Regional Storage

C.

Nearline Storage

D.

Coldline Storage

Question # 10

A team of data scientists infrequently needs to use a Google Kubernetes Engine (GKE) cluster that you manage. They require GPUs for some long-running, non-restartable jobs. You want to minimize cost. What should you do?

A.

Enable node auto-provisioning on the GKE cluster.

B.

Create a VerticalPodAutoscaler for those workloads.

C.

Create a node pool with preemptible VMs and GPUs attached to those VMs.

D.

Create a node pool of instances with GPUs, and enable autoscaling on this node pool with a minimum size of 1.

Question # 11

Your manager asks you to deploy a workload to a Kubernetes cluster. You are not sure of the workloads resource requirements or how the requirements might vary depending on usage patterns, external dependencies, or other factors. You need a solution that makes cost-effective recommendations regarding CPU and memory requirements, and allows the workload to function consistently in any situation. You want to follow Google-recommended practices. What should you do?

A.

Configure the Horizontal Pod Autoscaler for availability, and configure the cluster autoscaler for suggestions.

B.

Configure the Horizontal Pod Autoscaler for availability, and configure the Vertical Pod Autoscaler recommendations for suggestions.

C.

Configure the Vertical Pod Autoscaler recommendations for availability, and configure the Cluster autoscaler for suggestions.

D.

Configure the Vertical Pod Autoscaler recommendations for availability, and configure the Horizontal Pod Autoscaler for suggestions.

Question # 12

You are working with a user to set up an application in a new VPC behind a firewall. The user is concerned about data egress. You want to configure the fewest open egress ports. What should you do?

A.

Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports.

B.

Set up a high-priority (1000) rule that pairs both ingress and egress ports.

C.

Set up a high-priority (1000) rule that blocks all egress and a low-priority (65534) rule that allows only the appropriate ports.

D.

Set up a high-priority (1000) rule to allow the appropriate ports.

Question # 13

You host a static website on Cloud Storage. Recently, you began to include links to PDF files on this site. Currently, when users click on the links to these PDF files, their browsers prompt them to save the file onto their local system. Instead, you want the clicked PDF files to be displayed within the browser window directly, without prompting the user to save the file locally. What should you do?

A.

Enable Cloud CDN on the website frontend.

B.

Enable ‘Share publicly’ on the PDF file objects.

C.

Set Content-Type metadata to application/pdf on the PDF file objects.

D.

Add a label to the storage bucket with a key of Content-Type and value of application/pdf.

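As background on object metadata, the Content-Type of objects already stored in a bucket can be set with gsutil. This is only a sketch; the bucket and object paths are placeholders:

# Set Content-Type metadata on existing PDF objects so browsers can render them inline
gsutil setmeta -h "Content-Type:application/pdf" gs://my-static-site-bucket/docs/*.pdf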
Question # 14

You are running multiple microservices in a Kubernetes Engine cluster. One microservice is rendering images. The microservice responsible for the image rendering requires a large amount of CPU time compared to the memory it requires. The other microservices are workloads that are optimized for n1-standard machine types. You need to optimize your cluster so that all workloads are using resources as efficiently as possible. What should you do?

A.

Assign the pods of the image rendering microservice a higher pod priority than the other microservices.

B.

Create a node pool with compute-optimized machine type nodes for the image rendering microservice. Use the node pool with general-purpose machine type nodes for the other microservices.

C.

Use the node pool with general-purpose machine type nodes for the image rendering microservice. Create a node pool with compute-optimized machine type nodes for the other microservices.

D.

Configure the required amount of CPU and memory in the resource requests specification of the image rendering microservice deployment. Keep the resource requests for the other microservices at the default.

Question # 15

You need to create a custom IAM role for use with a GCP service. All permissions in the role must be suitable for production use. You also want to clearly share with your organization the status of the custom role. This will be the first version of the custom role. What should you do?

A.

Use permissions in your role that use the ‘supported’ support level for role permissions. Set the role stage to ALPHA while testing the role permissions.

B.

Use permissions in your role that use the ‘supported’ support level for role permissions. Set the role stage to BETA while testing the role permissions.

C.

Use permissions in your role that use the ‘testing’ support level for role permissions. Set the role stage to ALPHA while testing the role permissions.

D.

Use permissions in your role that use the ‘testing’ support level for role permissions. Set the role stage to BETA while testing the role permissions.

Question # 16

You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?

A.

Configure the username and password by using the gcloud config set proxy/username and gcloud config set proxy/password commands.

B.

Encode the username and password in SHA-256 encoding, and save it to a text file. Use the filename as a value in the gcloud config set core/custom_ca_certs_file command.

C.

Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configure file.

D.

Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.

Question # 17

Your company has a Google Cloud Platform project that uses BigQuery for data warehousing. Your data science team changes frequently and has few members. You need to allow members of this team to perform queries. You want to follow Google-recommended practices. What should you do?

A.

1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery jobUser role to the group.

B.

1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery dataViewer user role to the group.

C.

1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery jobUser role to the group.

D.

1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery dataViewer user role to the group.

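For context, granting a role to a Google group at the project level is a single IAM binding. A minimal sketch, assuming a placeholder project ID and group address:

# Bind the BigQuery Job User role to a Cloud Identity group on the project
gcloud projects add-iam-policy-binding my-project \
    --member="group:data-science@example.com" \
    --role="roles/bigquery.jobUser"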
Question # 18

Your application stores files on Cloud Storage by using the Standard Storage class. The application only requires access to files created in the last 30 days. You want to automatically save costs on files that are no longer accessed by the application. What should you do?

A.

Create a retention policy on the storage bucket of 30 days, and lock the bucket by using a retention policy lock.

B.

Enable object versioning on the storage bucket and add lifecycle rules to expire non-current versions after 30 days

C.

Create an object lifecycle on the storage bucket to change the storage class to Archive Storage for objects with an age over 30 days.

D.

Create a cron job in Cloud Scheduler to call a Cloud Functions instance every day to delete files older than 30 days.

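As a sketch of how an Object Lifecycle Management rule is applied with gsutil, assuming a placeholder bucket name and a rule that changes the storage class of objects older than 30 days:

# lifecycle.json describes the rule; gsutil then applies it to the bucket
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
      "condition": {"age": 30}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-app-files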
Question # 19

You have a single binary application that you want to run on Google Cloud Platform. You decided to automatically scale the application based on underlying infrastructure CPU usage. Your organizational policies require you to use virtual machines directly. You need to ensure that the application scaling is operationally efficient and completed as quickly as possible. What should you do?

A.

Create a Google Kubernetes Engine cluster, and use horizontal pod autoscaling to scale the application.

B.

Create an instance template, and use the template in a managed instance group with autoscaling configured.

C.

Create an instance template, and use the template in a managed instance group that scales up and down based on the time of day.

D.

Use a set of third-party tools to build automation around scaling the application up and down, based on Stackdriver CPU usage monitoring.

Question # 20

You need to migrate invoice documents stored on-premises to Cloud Storage. The documents have the following storage requirements:

• Documents must be kept for five years.

• Up to five revisions of the same invoice document must be stored, to allow for corrections.

• Documents older than 365 days should be moved to lower cost storage tiers.

You want to follow Google-recommended practices to minimize your operational and development costs. What should you do?

A.

Enable retention policies on the bucket, and use Cloud Scheduler to invoke a Cloud Function to move or delete your documents based on their metadata.

B.

Enable retention policies on the bucket, use lifecycle rules to change the storage classes of the objects, set the number of versions, and delete old files.

C.

Enable object versioning on the bucket, and use Cloud Scheduler to invoke a Cloud Functions instance to move or delete your documents based on their metadata.

D.

Enable object versioning on the bucket, use lifecycle conditions to change the storage class of the objects, set the number of versions, and delete old files.

Question # 21

A colleague handed over a Google Cloud project for you to maintain. As part of a security checkup, you want to review who has been granted the Project Owner role. What should you do?

A.

In the Google Cloud console, validate which SSH keys have been stored as project-wide keys.

B.

Navigate to Identity-Aware Proxy and check the permissions for these resources.

C.

Enable Audit logs on the IAM & admin page for all resources, and validate the results.

D.

Use the gcloud projects get-iam-policy command to view the current role assignments.

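For reference, the role bindings of a project can be inspected from the CLI. A sketch with a placeholder project ID, filtered down to the Owner role:

# List the members that hold roles/owner on the project
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --format="table(bindings.role, bindings.members)" \
    --filter="bindings.role:roles/owner"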
Question # 22

You need to configure optimal data storage for files stored in Cloud Storage for minimal cost. The files are used in a mission-critical analytics pipeline that is used continually. The users are in Boston, MA (United States). What should you do?

A.

Configure regional storage for the region closest to the users. Configure a Nearline storage class.

B.

Configure regional storage for the region closest to the users. Configure a Standard storage class.

C.

Configure dual-regional storage for the dual region closest to the users. Configure a Nearline storage class.

D.

Configure dual-regional storage for the dual region closest to the users. Configure a Standard storage class.

Question # 23

Your team maintains the infrastructure for your organization. The current infrastructure requires changes. You need to share your proposed changes with the rest of the team. You want to follow Google’s recommended best practices. What should you do?

A.

Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.

B.

Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.

C.

Apply the change in a development environment, run gcloud compute instances list, and then save the output in a shared Storage bucket.

D.

Apply the change in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.

Question # 24

You built an application on Google Cloud Platform that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data. You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices. What should you do?

A.

Add the support team group to the roles/monitoring.viewer role

B.

Add the support team group to the roles/spanner.databaseUser role.

C.

Add the support team group to the roles/spanner.databaseReader role.

D.

Add the support team group to the roles/stackdriver.accounts.viewer role.

Question # 25

You are using Google Kubernetes Engine with autoscaling enabled to host a new application. You want to expose this new application to the public, using HTTPS on a public IP address. What should you do?

A.

Create a Kubernetes Service of type NodePort for your application, and a Kubernetes Ingress to expose this Service via a Cloud Load Balancer.

B.

Create a Kubernetes Service of type ClusterIP for your application. Configure the public DNS name of your application using the IP of this Service.

C.

Create a Kubernetes Service of type NodePort to expose the application on port 443 of each node of the Kubernetes cluster. Configure the public DNS name of your application with the IP of every node of the cluster to achieve load-balancing.

D.

Create an HAProxy pod in the cluster to load-balance the traffic to all the pods of the application. Forward the public traffic to HAProxy with an iptables rule. Configure the DNS name of your application using the public IP of the node HAProxy is running on.

Question # 26

You just installed the Google Cloud CLI on your new corporate laptop. You need to list the existing instances of your company on Google Cloud. What must you do before you run the gcloud compute instances list command?

Choose 2 answers

A.

Run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to gcloud CLI.

B.

Create a Google Cloud service account, and download the service account key. Place the key file in a folder on your machine where gcloud CLI can find it.

C.

Download your Cloud Identity user account key. Place the key file in a folder on your machine where gcloud CLI can find it.

D.

Run gcloud config set compute/zone $my_zone to set the default zone for gcloud CLI.

E.

Run gcloud config set project $my_project to set the default project for gcloud CLI.

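As a minimal sketch of setting up a freshly installed gcloud CLI (the project ID is a placeholder):

# Authenticate the CLI with your user account (opens a browser-based login flow)
gcloud auth login

# Set the default project so subsequent commands know which project to query
gcloud config set project my-project

# List the Compute Engine instances in that project
gcloud compute instances list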
Question # 27

You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the problem. What should you do?

A.

Use the BigQuery interface to review the nightly job and look for any errors.

B.

Review the Error Reporting page in the Cloud Console to find any errors.

C.

In Cloud Logging, create a filter for your Data Studio report.

D.

Use the open-source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.

Question # 28

The core business of your company is to rent out construction equipment at a large scale. All the equipment that is being rented out has been equipped with multiple sensors that send event information every few seconds. These signals can vary from engine status, distance traveled, fuel level, and more. Customers are billed based on the consumption monitored by these sensors. You expect high throughput – up to thousands of events per hour per device – and need to retrieve consistent data based on the time of the event. Storing and retrieving individual signals should be atomic. What should you do?

A.

Create a file in Cloud Storage per device and append new data to that file.

B.

Create a file in Cloud Filestore per device and append new data to that file.

C.

Ingest the data into Datastore. Store data in an entity group based on the device.

D.

Ingest the data into Cloud Bigtable. Create a row key based on the event timestamp.

Question # 29

You need to produce a list of the enabled Google Cloud Platform APIs for a GCP project using the gcloud command line in the Cloud Shell. The project name is my-project. What should you do?

A.

Run gcloud projects list to get the project ID, and then run gcloud services list --project <project ID>.

B.

Run gcloud init to set the current project to my-project, and then run gcloud services list --available.

C.

Run gcloud info to view the account value, and then run gcloud services list --account <account>.

D.

Run gcloud projects describe to verify the project value, and then run gcloud services list --available.

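For reference, listing a project's enabled APIs from Cloud Shell could look like the following sketch; the project name and project ID are placeholders:

# Look up the project ID for the project named my-project
gcloud projects list --filter="name:my-project"

# List only the services that are enabled on that project
gcloud services list --enabled --project my-project-id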
Question # 30

You have a number of compute instances belonging to an unmanaged instances group. You need to SSH to one of the Compute Engine instances to run an ad hoc script. You’ve already authenticated gcloud, however, you don’t have an SSH key deployed yet. In the fewest steps possible, what’s the easiest way to SSH to the instance?

A.

Run gcloud compute instances list to get the IP address of the instance, then use the ssh command.

B.

Use the gcloud compute ssh command.

C.

Create a key with the ssh-keygen command. Then use the gcloud compute ssh command.

D.

Create a key with the ssh-keygen command. Upload the key to the instance. Run gcloud compute instances list to get the IP address of the instance, then use the ssh command.

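As an illustration, gcloud can generate and propagate SSH keys itself; the instance name and zone below are placeholders:

# Connect to the instance; gcloud creates an SSH key and adds it to the project metadata if none exists yet
gcloud compute ssh my-instance --zone=us-central1-a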
Question # 31

Your company has an internal application for managing transactional orders. The application is used exclusively by employees in a single physical location. The application requires strong consistency, fast queries, and ACID guarantees for multi-table transactional updates. The first version of the application is implemented in PostgreSQL, and you want to deploy it to the cloud with minimal code changes. Which database is most appropriate for this application?

A.

BigQuery

B.

Cloud SQL

C.

Cloud Spanner

D.

Cloud Datastore

Question # 32

Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project. What should you do?

A.

Add the group for the finance team to the roles/billing.user role.

B.

Add the group for the finance team to the roles/billing.admin role.

C.

Add the group for the finance team to the roles/billing.viewer role.

D.

Add the group for the finance team to the roles/billing.projectManager role.

Question # 33

Several employees at your company have been creating projects with Cloud Platform and paying for it with their personal credit cards, which the company reimburses. The company wants to centralize all these projects under a single, new billing account. What should you do?

A.

Contact cloud-billing@google.com with your bank account details and request a corporate billing account for your company.

B.

Create a ticket with Google Support and wait for their call to share your credit card details over the phone.

C.

In the Google Cloud Platform Console, go to the Resource Manager and move all projects to the root Organization.

D.

In the Google Cloud Platform Console, create a new billing account and set up a payment method.

Question # 34

You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?

A.

Deploy the monitoring pod in a StatefulSet object.

B.

Deploy the monitoring pod in a DaemonSet object.

C.

Reference the monitoring pod in a Deployment object.

D.

Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.

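For context, a DaemonSet schedules one copy of a pod on every node, including nodes the cluster autoscaler adds later. A minimal sketch, assuming a hypothetical agent image:

cat > monitoring-agent-ds.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:1.0   # placeholder image
EOF

# Create the DaemonSet; one agent pod will run on every current and future node
kubectl apply -f monitoring-agent-ds.yaml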
Question # 35

You are using Container Registry to centrally store your company’s container images in a separate project. In another project, you want to create a Google Kubernetes Engine (GKE) cluster. You want to ensure that Kubernetes can download images from Container Registry. What should you do?

A.

In the project where the images are stored, grant the Storage Object Viewer IAM role to the service account used by the Kubernetes nodes.

B.

When you create the GKE cluster, choose the Allow full access to all Cloud APIs option under ‘Access scopes’.

C.

Create a service account, and give it access to Cloud Storage. Create a P12 key for this service account and use it as an imagePullSecrets in Kubernetes.

D.

Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account.

Question # 36

You want to add a new auditor to a Google Cloud Platform project. The auditor should be allowed to read, but not modify, all project items.

How should you configure the auditor's permissions?

A.

Create a custom role with view-only project permissions. Add the user's account to the custom role.

B.

Create a custom role with view-only service permissions. Add the user's account to the custom role.

C.

Select the built-in IAM project Viewer role. Add the user's account to this role.

D.

Select the built-in IAM service Viewer role. Add the user's account to this role.

Question # 37

You are configuring Cloud DNS. You want to create DNS records to point home.mydomain.com, mydomain.com, and www.mydomain.com to the IP address of your Google Cloud load balancer. What should you do?

A.

Create one CNAME record to point mydomain.com to the load balancer, and create two A records to point WWW and HOME to mydomain.com respectively.

B.

Create one CNAME record to point mydomain.com to the load balancer, and create two AAAA records to point WWW and HOME to mydomain.com respectively.

C.

Create one A record to point mydomain.com to the load balancer, and create two CNAME records to point WWW and HOME to mydomain.com respectively.

D.

Create one A record to point mydomain.com to the load balancer, and create two NS records to point WWW and HOME to mydomain.com respectively.

Question # 38

An external member of your team needs list access to compute images and disks in one of your projects. You want to follow Google-recommended practices when you grant the required permissions to this user. What should you do?

A.

Create a custom role, and add all the required compute.disks.list and compute.images.list permissions as includedPermissions. Grant the custom role to the user at the project level.

B.

Create a custom role based on the Compute Image User role. Add compute.disks.list to the includedPermissions field. Grant the custom role to the user at the project level.

C.

Grant the Compute Storage Admin role at the project level.

D.

Create a custom role based on the Compute Storage Admin role. Exclude unnecessary permissions from the custom role. Grant the custom role to the user at the project level.

Question # 39

You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few requests per day. You want to minimize costs. What should you do?

A.

Deploy the container on Cloud Run.

B.

Deploy the container on Cloud Run on GKE.

C.

Deploy the container on App Engine Flexible.

D.

Deploy the container on Google Kubernetes Engine, with cluster autoscaling and horizontal pod autoscaling enabled.

Question # 40

You are assigned to maintain a Google Kubernetes Engine (GKE) cluster named dev that was deployed on Google Cloud. You want to manage the GKE configuration using the command line interface (CLI). You have just downloaded and installed the Cloud SDK. You want to ensure that future CLI commands by default address this specific cluster. What should you do?

A.

Use the command gcloud config set container/cluster dev.

B.

Use the command gcloud container clusters update dev.

C.

Create a file called gke.default in the ~/.gcloud folder that contains the cluster name.

D.

Create a file called defaults.json in the ~/.gcloud folder that contains the cluster name.

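For reference, gcloud stores per-configuration defaults such as the target cluster. A sketch, assuming a zonal cluster in a placeholder zone:

# Make dev the default cluster for subsequent gcloud container commands
gcloud config set container/cluster dev

# Fetch credentials so kubectl also points at the dev cluster
gcloud container clusters get-credentials dev --zone=us-central1-a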
Question # 41

Your company's security vulnerability management policy wants a member of the security team to have visibility into vulnerabilities and other OS metadata for a specific Compute Engine instance. This Compute Engine instance hosts a critical application in your Google Cloud project. You need to implement your company's security vulnerability management policy. What should you do?

A.

• Ensure that the Ops Agent is installed on the Compute Engine instance.

• Create a custom metric in the Cloud Monitoring dashboard.

• Provide the security team member with access to this dashboard.

B.

• Ensure that the Ops Agent is installed on the Compute Engine instance.

• Provide the security team member the roles/osconfig.inventoryViewer permission.

C.

• Ensure that the OS Config agent is installed on the Compute Engine instance.

• Provide the security team member the roles/osconfig.vulnerabilityViewer permission.

D.

• Ensure that the OS Config agent is installed on the Compute Engine instance.

• Create a log sink to a BigQuery dataset.

• Provide the security team member with access to this dataset.

Question # 42

You need to extract text from audio files by using the Speech-to-Text API. The audio files are pushed to a Cloud Storage bucket. You need to implement a fully managed, serverless compute solution that requires authentication and aligns with Google-recommended practices. You want to automate the call to the API by submitting each file to the API as the audio file arrives in the bucket. What should you do?

A.

Run a Kubernetes job to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.

B.

Create an App Engine standard environment triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.

C.

Run a Python script by using a Linux cron job in Compute Engine to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.

D.

Create a Cloud Function triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.

Question # 43

You need to deploy a single stateless web application with a web interface and multiple endpoints. For security reasons, the web application must be reachable from an internal IP address from your company's private VPC and on-premises network. You also need to update the web application multiple times per day with minimal effort and want to manage a minimal amount of cloud infrastructure. What should you do?

A.

Deploy the web application on Google Kubernetes Engine standard edition with an internal ingress.

B.

Deploy the web application on Cloud Run with Private Google Access configured

C.

Deploy the web application to GKE Autopilot with Private Google Access configured

D.

Deploy the web application on Cloud Run with Private Service Connect configured.

Question # 44

You have one GCP account running in your default region and zone and another account running in a non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface. What should you do?

A.

Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts when running the commands to start the Compute Engine instances.

B.

Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances.

C.

Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances.

D.

Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.

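As a sketch of working with multiple gcloud configurations (accounts, projects, zones, and instance names are all placeholders):

# One configuration per account, each with its own default project and zone
gcloud config configurations create config-first
gcloud config set account user1@example.com
gcloud config set project project-one
gcloud config set compute/zone us-central1-a

gcloud config configurations create config-second
gcloud config set account user2@example.com
gcloud config set project project-two
gcloud config set compute/zone europe-west1-b

# Activate a configuration, then start an instance under that account
gcloud config configurations activate config-first
gcloud compute instances create instance-one

gcloud config configurations activate config-second
gcloud compute instances create instance-two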
Question # 45

You create a Deployment with 2 replicas in a Google Kubernetes Engine cluster that has a single preemptible node pool. After a few minutes, you use kubectl to examine the status of your Pod and observe that one of them is still in Pending status:

What is the most likely cause?

A.

The pending Pod's resource requests are too large to fit on a single node of the cluster.

B.

Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.

C.

The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.

D.

The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods’ status. It is currently being rescheduled on a new node.

Question # 46

Your company is moving its continuous integration and delivery (CI/CD) pipeline to Compute Engine instances. The pipeline will manage the entire cloud infrastructure through code. How can you ensure that the pipeline has appropriate permissions while your system is following security best practices?

A.

• Add a step for human approval to the CI/CD pipeline before the execution of the infrastructure provisioning.

• Use the human approval's IAM account for the provisioning.

B.

• Attach a single service account to the compute instances.

• Add minimal rights to the service account.

• Allow the service account to impersonate a Cloud Identity user with elevated permissions to create, update, or delete resources.

C.

• Attach a single service account to the compute instances.

• Add all required Identity and Access Management (IAM) permissions to this service account to create, update, or delete resources

D.

• Create multiple service accounts, one for each pipeline with the appropriate minimal Identity and Access Management (IAM) permissions.

• Use a secret manager service to store the key files of the service accounts.

• Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.

Question # 47

You have designed a solution on Google Cloud Platform (GCP) that uses multiple GCP products. Your company has asked you to estimate the costs of the solution. You need to provide estimates for the monthly total cost. What should you do?

A.

For each GCP product in the solution, review the pricing details on the product's pricing page. Use the pricing calculator to total the monthly costs for each GCP product.

B.

For each GCP product in the solution, review the pricing details on the product's pricing page. Create a Google Sheet that summarizes the expected monthly costs for each product.

C.

Provision the solution on GCP. Leave the solution provisioned for 1 week. Navigate to the Billing Report page in the Google Cloud Platform Console. Multiply the 1 week cost to determine the monthly costs.

D.

Provision the solution on GCP. Leave the solution provisioned for 1 week. Use Stackdriver to determine the provisioned and used resource amounts. Multiply the 1 week cost to determine the monthly costs.

Question # 48

You are working in a team that has developed a new application that needs to be deployed on Kubernetes. The production application is business critical and should be optimized for reliability. You need to provision a Kubernetes cluster and want to follow Google-recommended practices. What should you do?

A.

Create a GKE Autopilot cluster. Enroll the cluster in the rapid release channel.

B.

Create a GKE Autopilot cluster. Enroll the cluster in the stable release channel.

C.

Create a zonal GKE standard cluster. Enroll the cluster in the stable release channel.

D.

Create a regional GKE standard cluster. Enroll the cluster in the rapid release channel.

Question # 49

Your development team needs a new Jenkins server for their project. You need to deploy the server using the fewest steps possible. What should you do?

A.

Download and deploy the Jenkins Java WAR to App Engine Standard.

B.

Create a new Compute Engine instance and install Jenkins through the command line interface.

C.

Create a Kubernetes cluster on Compute Engine and create a deployment with the Jenkins Docker image.

D.

Use GCP Marketplace to launch the Jenkins solution.

Question # 50

Your company wants to standardize the creation and management of multiple Google Cloud resources using Infrastructure as Code. You want to minimize the amount of repetitive code needed to manage the environment. What should you do?

A.

Create a bash script that contains all required steps as gcloud commands.

B.

Develop templates for the environment using Cloud Deployment Manager

C.

Use curl in a terminal to send a REST request to the relevant Google API for each individual resource.

D.

Use the Cloud Console interface to provision and manage all related resources

Question # 51

You are creating an application that will run on Google Kubernetes Engine. You have identified MongoDB as the most suitable database system for your application and want to deploy a managed MongoDB environment that provides a support SLA. What should you do?

A.

Create a Cloud Bigtable cluster and use the HBase API

B.

Deploy MongoDB Atlas from the Google Cloud Marketplace.

C.

Download a MongoDB installation package and run it on Compute Engine instances

D.

Download a MongoDB installation package, and run it on a Managed Instance Group

Question # 52

The sales team has a project named Sales Data Digest that has the ID acme-data-digest. You need to set up similar Google Cloud resources for the marketing team, but their resources must be organized independently of the sales team. What should you do?

A.

Grant the Project Editor role to the Marketing team for acme-data-digest.

B.

Create a Project Lien on acme-data-digest, and then grant the Project Editor role to the Marketing team.

C.

Create another project with the ID acme-marketing-data-digest for the Marketing team and deploy the resources there.

D.

Create a new project named Marketing Data Digest and use the ID acme-data-digest. Grant the Project Editor role to the Marketing team.

Question # 53

You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?

A.

After the VM has been created, use your Google Account credentials to log in into the VM.

B.

After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.

C.

When creating the VM, add metadata to the instance using ‘windows-password’ as the key and a password as the value.

D.

After the VM has been created, download the JSON private key for the default Compute Engine service account. Use the credentials in the JSON file to log in to the VM.

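For reference, Windows login credentials on Compute Engine are generated on demand. A sketch with placeholder instance, zone, and username values:

# Generate (or reset) a Windows username and password for RDP access
gcloud compute reset-windows-password my-windows-vm \
    --zone=us-central1-a \
    --user=admin_user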
Question # 54

You used the gcloud container clusters command to create two Google Kubernetes Engine (GKE) clusters: prod-cluster and dev-cluster.

• prod-cluster is a standard cluster.

• dev-cluster is an Autopilot cluster.

When you run the kubectl get nodes command, you only see the nodes from prod-cluster. Which commands should you run to check the node status for dev-cluster?

A.

B.

C.

D.

Question # 55

Your company completed the acquisition of a startup and is now merging the IT systems of both companies. The startup had a production Google Cloud project in their organization. You need to move this project into your organization and ensure that the project is billed to your organization. You want to accomplish this task with minimal effort. What should you do?

A.

Use the projects.move method to move the project to your organization. Update the billing account of the project to that of your organization.

B.

Ensure that you have an Organization Administrator Identity and Access Management (IAM) role assigned to you in both organizations. Navigate to the Resource Manager in the startup's Google Cloud organization, and drag the project to your company's organization.

C.

Create a Private Catalog for the Google Cloud Marketplace, and upload the resources of the startup's production project to the Catalog. Share the Catalog with your organization, and deploy the resources in your company's project.

D.

Create an infrastructure-as-code template for all resources in the project by using Terraform, and deploy that template to a new project in your organization. Delete the project from the startup's Google Cloud organization.

Question # 56

Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging for one VM that one of the processes is not responsive. You want to replace this VM in the MIG quickly. What should you do?

A.

Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.

B.

Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.

C.

Use the gcloud compute instances update command with a REFRESH action for the VM.

D.

Update and apply the instance template of the MIG.

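As an illustration, a single VM in a MIG can be recreated from the CLI; the group, instance, and zone names are placeholders:

# Delete and recreate one specific instance that the MIG manages
gcloud compute instance-groups managed recreate-instances my-mig \
    --instances=my-instance \
    --zone=us-central1-a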
Question # 57

You are managing a Data Warehouse on BigQuery. An external auditor will review your company's processes, and multiple external consultants will need view access to the data. You need to provide them with view access while following Google-recommended practices. What should you do?

A.

Grant each individual external consultant the role of BigQuery Editor

B.

Grant each individual external consultant the role of BigQuery Viewer

C.

Create a Google Group that contains the consultants and grant the group the role of BigQuery Editor

D.

Create a Google Group that contains the consultants, and grant the group the role of BigQuery Viewer

Question # 58

You want to select and configure a cost-effective solution for relational data on Google Cloud Platform. You are working with a small set of operational data in one geographic location. You need to support point-in-time recovery. What should you do?

A.

Select Cloud SQL (MySQL). Verify that the enable binary logging option is selected.

B.

Select Cloud SQL (MySQL). Select the create failover replicas option.

C.

Select Cloud Spanner. Set up your instance with 2 nodes.

D.

Select Cloud Spanner. Set up your instance as multi-regional.

Question # 59

You want to configure autohealing for network load balancing for a group of Compute Engine instances that run in multiple zones, using the fewest possible steps. You need to configure re-creation of VMs if they are unresponsive after 3 attempts of 10 seconds each. What should you do?

A.

Create an HTTP load balancer with a backend configuration that references an existing instance group. Set the health check to healthy (HTTP).

B.

Create an HTTP load balancer with a backend configuration that references an existing instance group. Define a balancing mode and set the maximum RPS to 10.

C.

Create a managed instance group. Set the Autohealing health check to healthy (HTTP).

D.

Create a managed instance group. Verify that the autoscaling setting is on.

Question # 60

You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly. How should you upload the file?

A.

Use the GCP Console to transfer the file instead of gsutil.

B.

Enable parallel composite uploads using gsutil on the file transfer.

C.

Decrease the TCP window size on the machine initiating the transfer.

D.

Change the storage class of the bucket from Nearline to Multi-Regional.

Question # 61

Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the solution. What should you do?

A.

Search for the CMS solution in Google Cloud Marketplace. Use gcloud CLI to deploy the solution.

B.

Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from Cloud Marketplace.

C.

Search for the CMS solution in Google Cloud Marketplace. Use Terraform and the Cloud Marketplace ID to deploy the solution with the appropriate parameters.

D.

Use the installation guide of the CMS provider. Perform the installation through your configuration management system.

Question # 62

You need to immediately change the storage class of an existing Google Cloud bucket. You need to reduce service cost for infrequently accessed files stored in that bucket and for all files that will be added to that bucket in the future. What should you do?

A.

Use gsutil to rewrite the storage class for the bucket. Change the default storage class for the bucket.

B.

Use gsutil to rewrite the storage class for the bucket. Set up Object Lifecycle Management on the bucket.

C.

Create a new bucket and change the default storage class for the bucket. Set up Object Lifecycle Management on the bucket.

D.

Create a new bucket and change the default storage class for the bucket. Import the files from the previous bucket into the new bucket.

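For context, changing the class of existing objects and changing the bucket default are two separate gsutil operations. A sketch with a placeholder bucket name:

# Rewrite existing objects into the Nearline storage class
gsutil -m rewrite -s nearline -r gs://my-bucket

# Change the default storage class applied to newly uploaded objects
gsutil defstorageclass set nearline gs://my-bucket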
Question # 63

You have a managed instance group composed of preemptible VMs. All of the VMs keep deleting and recreating themselves every minute. What is a possible cause of this behavior?

A.

Your zonal capacity is limited, causing all preemptible VMs to be shut down to recover capacity. Try deploying your group to another zone.

B.

You have hit your instance quota for the region.

C.

Your managed instance group's VMs are toggled to only last 1 minute in preemptible settings.

D.

Your managed instance group's health check is repeatedly failing, either due to a misconfigured health check or misconfigured firewall rules not allowing the health check to access the instance.

Question # 64

You are migrating a production-critical on-premises application that requires 96 vCPUs to perform its task. You want to make sure the application runs in a similar environment on GCP. What should you do?

A.

When creating the VM, use machine type n1-standard-96.

B.

When creating the VM, use Intel Skylake as the CPU platform.

C.

Create the VM using Compute Engine default settings. Use gcloud to modify the running instance to have 96 vCPUs.

D.

Start the VM using Compute Engine default settings, and adjust as you go based on Rightsizing Recommendations.

Question # 65

You need to manage a third-party application that will run on a Compute Engine instance. Other Compute Engine instances are already running with default configuration. Application installation files are hosted on Cloud Storage. You need to access these files from the new instance without allowing other virtual machines (VMs) to access these files. What should you do?

A.

Create the instance with the default Compute Engine service account. Grant the service account permissions on Cloud Storage.

B.

Create the instance with the default Compute Engine service account. Add metadata to the objects on Cloud Storage that matches the metadata on the new instance.

C.

Create a new service account and assign this service account to the new instance. Grant the service account permissions on Cloud Storage.

D.

Create a new service account and assign this service account to the new instance. Add metadata to the objects on Cloud Storage that matches the metadata on the new instance.

Question # 66

Your VMs are running in a subnet that has a subnet mask of 255.255.255.240. The current subnet has no more free IP addresses and you require an additional 10 IP addresses for new VMs. The existing and new VMs should all be able to reach each other without additional routes. What should you do?

A.

Use gcloud to expand the IP range of the current subnet.

B.

Delete the subnet, and recreate it using a wider range of IP addresses.

C.

Create a new project. Use Shared VPC to share the current network with the new project.

D.

Create a new subnet with the same starting IP but a wider range to overwrite the current subnet.

Question # 67

You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of the VM should run per GCP project. How should you configure the instance group?

A.

Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.

B.

Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.

C.

Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2.

D.

Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.

Question # 68

You are monitoring an application and receive user feedback that a specific error is spiking. You notice that the error is caused by a Service Account having insufficient permissions. You are able to solve the problem but want to be notified if the problem recurs. What should you do?

A.

In the Log Viewer, filter the logs on severity 'Error' and the name of the Service Account.

B.

Create a sink to BigQuery to export all the logs. Create a Data Studio dashboard on the exported logs.

C.

Create a custom log-based metric for the specific error to be used in an Alerting Policy.

D.

Grant Project Owner access to the Service Account.

Question # 69

You have successfully created a development environment in a project for an application. This application uses Compute Engine and Cloud SQL. Now, you need to create a production environment for this application.

The security team has forbidden the existence of network routes between these 2 environments, and asks you to follow Google-recommended practices. What should you do?

A.

Create a new project, enable the Compute Engine and Cloud SQL APIs in that project, and replicate the setup you have created in the development environment.

B.

Create a new production subnet in the existing VPC and a new production Cloud SQL instance in your existing project, and deploy your application using those resources.

C.

Create a new project, modify your existing VPC to be a Shared VPC, share that VPC with your new project, and replicate the setup you have in the development environment in that new project, in the Shared VPC.

D.

Ask the security team to grant you the Project Editor role in an existing production project used by another division of your company. Once they grant you that role, replicate the setup you have in the development environment in that project.

Question # 70

You have a number of applications that have bursty workloads and are heavily dependent on topics to decouple publishing systems from consuming systems. Your company would like to go serverless to enable developers to focus on writing code without worrying about infrastructure. Your solution architect has already identified Cloud Pub/Sub as a suitable alternative for decoupling systems. You have been asked to identify a suitable GCP Serverless service that is easy to use with Cloud Pub/Sub. You want the ability to scale down to zero when there is no traffic in order to minimize costs. You want to follow Google recommended practices. What should you suggest?

A.

Cloud Run for Anthos

B.

Cloud Run

C.

App Engine Standard

D.

Cloud Functions.

Question # 71

You are developing a financial trading application that will be used globally. Data is stored and queried using a relational structure, and clients from all over the world should get the exact identical state of the data. The application will be deployed in multiple regions to provide the lowest latency to end users. You need to select a storage option for the application data while minimizing latency. What should you do?

A.

Use Cloud Bigtable for data storage.

B.

Use Cloud SQL for data storage.

C.

Use Cloud Spanner for data storage.

D.

Use Firestore for data storage.

Question # 72

Your customer wants you to create a secure website with autoscaling based on the compute instance CPU load. You want to enhance performance by storing static content in Cloud Storage. Which resources are needed to distribute the user traffic?

A.

An internal HTTP(S) load balancer together with Identity-Aware Proxy to allow only HTTPS traffic.

B.

An external HTTP(S) load balancer to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend. Install the HTTPS certificates on the instance.

C.

An external HTTP(S) load balancer with a managed SSL certificate to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend.

D.

An external network load balancer pointing to the backend instances to distribute the load evenly. The web servers will forward the request to the Cloud Storage as needed.

Question # 73

For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Stackdriver Logging agent on all the instances. You want to minimize cost. What should you do?

A.

1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances. 2. Update your instances' metadata to add the following value: logs-destination: bq://platform-logs.

B.

1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink. 2. Create a Cloud Function that is triggered by messages in the logs topic. 3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.

C.

1. In Stackdriver Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.

D.

1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset. 2. Configure this Cloud Function to create a BigQuery job that executes this query: INSERT INTO dataset.platform-logs (timestamp, log) SELECT timestamp, log FROM compute.logs WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) 3. Use Cloud Scheduler to trigger this Cloud Function once a day.

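For reference, an export of Compute Engine logs to BigQuery is configured as a log sink. A sketch with placeholder project and sink names; the dataset is written as platform_logs because BigQuery dataset IDs cannot contain hyphens:

# Create a sink that routes only Compute Engine instance logs to a BigQuery dataset
gcloud logging sinks create compute-logs-to-bq \
    bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
    --log-filter='resource.type="gce_instance"'

# Then grant the sink's writer identity (printed by the command above) the BigQuery Data Editor role on the dataset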
Question # 74

You are managing a project for the Business Intelligence (BI) department in your company. A data pipeline ingests data into BigQuery via streaming. You want the users in the BI department to be able to run the custom SQL queries against the latest data in BigQuery. What should you do?

A.

Create a Data Studio dashboard that uses the related BigQuery tables as a source and give the BI team view access to the Data Studio dashboard.

B.

Create a Service Account for the BI team and distribute a new private key to each member of the BI team.

C.

Use Cloud Scheduler to schedule a batch Dataflow job to copy the data from BigQuery to the BI team's internal data warehouse.

D.

Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team.

Question # 75

You have an application that runs on Compute Engine VM instances in a custom Virtual Private Cloud (VPC). Your company's security policies only allow the use of internal IP addresses on VM instances and do not let VM instances connect to the internet. You need to ensure that the application can access a file hosted in a Cloud Storage bucket within your project. What should you do?

A.

Enable Private Service Access on the Cloud Storage Bucket.

B.

Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.

C.

Enable Private Google Access on the subnet within the custom VPC.

D.

Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.
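
For reference, Private Google Access (option C) is enabled per subnet; a minimal sketch assuming a hypothetical subnet app-subnet in us-central1:

# Lets VMs with only internal IPs reach Google APIs such as Cloud Storage
gcloud compute networks subnets update app-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access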

Question # 76

Your company has developed a new application that consists of multiple microservices. You want to deploy the application to Google Kubernetes Engine (GKE), and you want to ensure that the cluster can scale as more applications are deployed in the future. You want to avoid manual intervention when each new application is deployed. What should you do?

A.

Deploy the application on GKE, and add a HorizontalPodAutoscaler to the deployment.

B.

Deploy the application on GKE, and add a VerticalPodAutoscaler to the deployment.

C.

Create a GKE cluster with autoscaling enabled on the node pool. Set a minimum and maximum for the size of the node pool.

D.

Create a separate node pool for each application, and deploy each application to its dedicated node pool.
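
A minimal sketch of node-pool autoscaling as described in option C, assuming hypothetical cluster and zone names:

# The cluster autoscaler adds or removes nodes within the min/max bounds
# as new applications are scheduled
gcloud container clusters create services-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --enable-autoscaling \
    --min-nodes=1 \
    --max-nodes=10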

Question # 77

You are managing several Google Cloud Platform (GCP) projects and need access to all logs for the past 60 days. You want to be able to explore and quickly analyze the log contents. You want to follow Google-recommended practices to obtain the combined logs for all projects. What should you do?

A.

Navigate to Stackdriver Logging and select resource.labels.project_id="*"

B.

Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days.

C.

Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days.

D.

Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days.
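
A rough sketch of an aggregated log export to BigQuery (the approach in option B), assuming a hypothetical organization ID 123456789012, a destination project log-project, and a dataset all_logs:

# Organization-level sink that includes logs from all child projects
gcloud logging sinks create org-logs-to-bq \
    bigquery.googleapis.com/projects/log-project/datasets/all_logs \
    --organization=123456789012 \
    --include-children

# Expire exported tables after 60 days (60 * 24 * 3600 = 5184000 seconds)
bq update --default_table_expiration 5184000 log-project:all_logs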

Question # 78

Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to maintain the number of running instances specified by the template to be able to process expected application traffic. What should you do?

A.

Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.

B.

Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.

C.

Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.

D.

Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.
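
For illustration, the template and any leftover disks can be inspected with gcloud; a sketch assuming a hypothetical template my-template and instance names starting with my-mig:

# Check the template the instance group uses for configuration problems
gcloud compute instance-templates describe my-template

# Look for existing persistent disks whose names collide with new instance names
gcloud compute disks list --filter="name~my-mig"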

Question # 79

You created several resources in multiple Google Cloud projects. All projects are linked to different billing accounts. To better estimate future charges, you want to have a single visual representation of all costs incurred. You want to include new cost data as soon as possible. What should you do?

A.

Configure Billing Data Export to BigQuery and visualize the data in Data Studio.

B.

Visit the Cost Table page to get a CSV export and visualize it using Data Studio.

C.

Enter all resources into the Pricing Calculator to get an estimate of the monthly cost.

D.

Use the Reports view in the Cloud Billing Console to view the desired cost information.
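
If Billing Data Export to BigQuery is configured (option A), the exported table can be queried directly and used as a Data Studio source. A rough sketch, assuming a hypothetical dataset billing and export table gcp_billing_export_v1_XXXXXX:

# Summarize cost per project and service from the exported billing data
bq query --use_legacy_sql=false '
SELECT
  project.id AS project_id,
  service.description AS service,
  SUM(cost) AS total_cost
FROM `billing.gcp_billing_export_v1_XXXXXX`
GROUP BY project_id, service
ORDER BY total_cost DESC'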

Question # 80

You have a project for your App Engine application that serves a development environment. The required testing has succeeded and you want to create a new project to serve as your production environment. What should you do?

A.

Use gcloud to create the new project, and then deploy your application to the new project.

B.

Use gcloud to create the new project and to copy the deployed application to the new project.

C.

Create a Deployment Manager configuration file that copies the current App Engine deployment into a new project.

D.

Deploy your application again using gcloud and specify the project parameter with the new project name to create the new project.
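
A minimal sketch of the create-then-deploy flow in option A, assuming a hypothetical project ID my-app-prod and App Engine region us-central:

# Create the production project (billing must be linked before deploying)
gcloud projects create my-app-prod

# Initialize App Engine in the new project; the region cannot be changed later
gcloud app create --project=my-app-prod --region=us-central

# Deploy the tested application into the production project
gcloud app deploy app.yaml --project=my-app-prod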

Question # 81

Your company has embraced a hybrid cloud strategy where some of the applications are deployed on Google Cloud. A Virtual Private Network (VPN) tunnel connects your Virtual Private Cloud (VPC) in Google Cloud with your company's on-premises network. Multiple applications in Google Cloud need to connect to an on-premises database server, and you want to avoid having to change the IP configuration in all of your applications when the IP of the database changes.

What should you do?

A.

Configure Cloud NAT for all subnets of your VPC to be used when egressing from the VM instances.

B.

Create a private zone on Cloud DNS, and configure the applications with the DNS name.

C.

Configure the IP of the database as custom metadata for each instance, and query the metadata server.

D.

Query the Compute Engine internal DNS from the applications to retrieve the IP of the database.
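
A rough sketch of the private DNS zone approach in option B, assuming a hypothetical VPC my-vpc, the zone corp.example.com., and an on-premises database at 10.10.0.5:

# Private zone that is only resolvable from the given VPC
gcloud dns managed-zones create corp-private \
    --description="Private zone for on-premises services" \
    --dns-name="corp.example.com." \
    --visibility=private \
    --networks=my-vpc

# A record for the database; applications connect to db.corp.example.com,
# and only this record changes when the database IP changes
gcloud dns record-sets create db.corp.example.com. \
    --zone=corp-private \
    --type=A \
    --ttl=300 \
    --rrdatas=10.10.0.5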

Question # 82

You want to select and configure a solution for storing and archiving data on Google Cloud Platform. You need to support compliance objectives for data from one geographic location. This data is archived after 30 days and needs to be accessed annually. What should you do?

A.

Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.

B.

Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.

C.

Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.

D.

Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.
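
For illustration, a lifecycle rule that moves objects to Coldline 30 days after creation could be applied as follows, assuming a hypothetical bucket my-archive-bucket:

# lifecycle.json: change the storage class of objects older than 30 days
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF

# Apply the rule to the bucket
gsutil lifecycle set lifecycle.json gs://my-archive-bucket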

Question # 83

You are using multiple configurations for gcloud. You want to review the configured Kubernetes Engine cluster of an inactive configuration using the fewest possible steps. What should you do?

A.

Use gcloud config configurations describe to review the output.

B.

Use gcloud config configurations activate and gcloud config list to review the output.

C.

Use kubectl config get-contexts to review the output.

D.

Use kubectl config use-context and kubectl config view to review the output.
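
For reference, gcloud configurations can be listed and described without activating them; a minimal sketch assuming a hypothetical configuration named dev:

# Show all configurations and which one is currently active
gcloud config configurations list

# Show the properties stored in the inactive configuration, including any
# container/cluster setting, without switching to it
gcloud config configurations describe dev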

Question # 84

Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a Google Kubernetes Engine cluster while optimizing for cost. What should you do?

A.

Create a cluster with a single node pool by using standard VMs. Label the fault-tolerant Deployments as spot: true.

B.

Create a cluster with a single node pool by using Spot VMs. Label the critical Deployments as spot: false.

C.

Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the Spot VM node pool and the fault-tolerant deployments on the node pool by using standard VMs.

D.

Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.
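
A rough sketch of running a standard node pool alongside a Spot VM node pool in one cluster, assuming a hypothetical cluster my-cluster. GKE labels Spot nodes with cloud.google.com/gke-spot=true, which fault-tolerant Deployments could target with a nodeSelector:

# Standard node pool for the critical workloads (created with the cluster)
gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --num-nodes=3

# Additional node pool that uses lower-cost Spot VMs for fault-tolerant workloads
gcloud container node-pools create spot-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --spot \
    --num-nodes=3

# Fault-tolerant Deployments can then set the nodeSelector
#   cloud.google.com/gke-spot: "true"
# so that only Spot nodes run those pods.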

Question # 85

You deployed a new application inside your Google Kubernetes Engine cluster using the YAML file specified below.

You check the status of the deployed pods and notice that one of them is still in PENDING status:

You want to find out why the pod is stuck in pending status. What should you do?

A.

Review details of the myapp-service Service object and check for error messages.

B.

Review details of the myapp-deployment Deployment object and check for error messages.

C.

Review details of the myapp-deployment-58ddbbb995-lp86m Pod and check for warning messages.

D.

View logs of the container in myapp-deployment-58ddbbb995-lp86m pod and check for warning messages.
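
For context, a Pending pod's scheduling problems usually show up in its events; a minimal sketch using the pod name from the question:

# Show the pod's conditions and recent events (for example, insufficient CPU,
# insufficient memory, or unsatisfiable node selectors)
kubectl describe pod myapp-deployment-58ddbbb995-lp86m

# Alternatively, list only the events related to this pod
kubectl get events --field-selector involvedObject.name=myapp-deployment-58ddbbb995-lp86m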
