
Exam Questions Associate-Cloud-Engineer


Google Cloud Certified - Associate Cloud Engineer


NEW QUESTION 1
Your company has a single sign-on (SSO) identity provider that supports Security Assertion Markup Language (SAML) integration with service providers. Your
company has users in Cloud Identity. You would like users to authenticate using your company’s SSO provider. What should you do?

A. In Cloud Identity, set up SSO with Google as an identity provider to access custom SAML apps.
B. In Cloud Identity, set up SSO with a third-party identity provider with Google as a service provider.
C. Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Mobile & Desktop Apps.
D. Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Web Server Applications.

Answer: B

Explanation:
https://support.google.com/cloudidentity/answer/6262987?hl=en&ref_topic=7558767

NEW QUESTION 2
Your organization uses Active Directory (AD) to manage user identities. Each user uses this identity for federated access to various on-premises systems. Your
security team has adopted a policy that requires users to log into Google Cloud with their AD identity instead of their own login. You want to follow the
Google-recommended practices to implement this policy. What should you do?

A. Sync identities with Cloud Directory Sync, and then enable SAML for single sign-on.
B. Sync identities in the Google Admin console, and then enable OAuth for single sign-on.
C. Sync identities with a third-party LDAP sync tool, and then copy passwords to allow simplified login with the same credentials.
D. Sync identities with Cloud Directory Sync, and then copy passwords to allow simplified login with the same credentials.

Answer: A

NEW QUESTION 3
You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver
Monitoring dashboard. What should you do?

A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.
B. For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.
C. Configure a single Stackdriver account, and link all projects to the same account.
D. Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.

Answer: C

Explanation:
When you initially click on Monitoring (Stackdriver Monitoring), it creates a workspace (a Stackdriver account) linked to the ACTIVE (current) project from which it was clicked.
If you then switch projects and click on Monitoring again, it creates another workspace linked to the newly active project. We don't want this, as it would not consolidate the results into a single dashboard (workspace/Stackdriver account).
If you have accidentally created two different workspaces, merge them under Monitoring > Settings > Merge Workspaces > MERGE.
If you have only one workspace and two projects, you can simply add the other GCP project under Monitoring > Settings > GCP Projects > Add GCP Projects.
https://cloud.google.com/monitoring/settings/multiple-projects
Note that the documentation says nothing about using Groups for this: https://cloud.google.com/monitoring/settings?hl=en

NEW QUESTION 4
Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the solution. What should you do?

A. Search for the CMS solution in Google Cloud Marketplace. Use gcloud CLI to deploy the solution.
B. Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from Cloud Marketplace.
C. Search for the CMS solution in Google Cloud Marketplace. Use Terraform and the Cloud Marketplace ID to deploy the solution with the appropriate parameters.
D. Use the installation guide of the CMS provider. Perform the installation through your configuration management system.

Answer: B

NEW QUESTION 5
You have a developer laptop with the Cloud SDK installed on Ubuntu. The Cloud SDK was installed from the Google Cloud Ubuntu package repository. You want
to test your application locally on your laptop with Cloud Datastore. What should you do?

A. Export Cloud Datastore data using gcloud datastore export.
B. Create a Cloud Datastore index using gcloud datastore indexes create.
C. Install the google-cloud-sdk-datastore-emulator component using the apt get install command.
D. Install the cloud-datastore-emulator component using the gcloud components install command.

Answer: D

Explanation:


The Datastore emulator provides local emulation of the production Datastore environment. You can use the emulator to develop and test your application locally. Ref: https://cloud.google.com/datastore/docs/tools/datastore-emulator
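As a rough sketch of that workflow (commands as documented; note that on apt-managed SDK installations the equivalent install is sudo apt-get install google-cloud-sdk-datastore-emulator):

gcloud components install cloud-datastore-emulator
# Start the local emulator, then point the shell's environment variables at it
gcloud beta emulators datastore start &
$(gcloud beta emulators datastore env-init)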

NEW QUESTION 6
Your company has a large quantity of unstructured data in different file formats. You want to perform ETL transformations on the data. You need to make the data
accessible on Google Cloud so it can be processed by a Dataflow job. What should you do?

A. Upload the data to BigQuery using the bq command line tool.
B. Upload the data to Cloud Storage using the gsutil command line tool.
C. Upload the data into Cloud SQL using the import function in the console.
D. Upload the data into Cloud Spanner using the import function in the console.

Answer: B

Explanation:
"large quantity" : Cloud Storage or BigQuery "files" a file is nothing but an Object

NEW QUESTION 7
You have one project called proj-sa where you manage all your service accounts. You want to be able to use a service account from this project to take snapshots
of VMs running in another project called proj-vm. What should you do?

A. Download the private key from the service account, and add it to each VM's custom metadata.
B. Download the private key from the service account, and add the private key to each VM’s SSH keys.
C. Grant the service account the IAM Role of Compute Storage Admin in the project called proj-vm.
D. When creating the VMs, set the service account’s API scope for Compute Engine to read/write.

Answer: C

Explanation:
https://gtseres.medium.com/using-service-accounts-across-projects-in-gcp-cf9473fef8f0
You create the service account in proj-sa and take note of the service account email, then you go to proj-vm in IAM > ADD and add the service account's email as
new member and give it the Compute Storage Admin role.
https://cloud.google.com/compute/docs/access/iam#compute.storageAdmin
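A sketch of that grant (the service account name is hypothetical; the project IDs come from the question):

gcloud projects add-iam-policy-binding proj-vm \
  --member="serviceAccount:snapshot-sa@proj-sa.iam.gserviceaccount.com" \
  --role="roles/compute.storageAdmin"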

NEW QUESTION 8
You need to create a custom VPC with a single subnet. The subnet’s range must be as large as possible. Which range should you use?

A. 0.0.0.0/0
B. 10.0.0.0/8
C. 172.16.0.0/12
D. 192.168.0.0/16

Answer: B

Explanation:
https://cloud.google.com/vpc/docs/vpc#manually_created_subnet_ip_ranges
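For illustration, a custom mode VPC with a single /8 subnet might be created like this (network, subnet, and region names are hypothetical):

gcloud compute networks create custom-vpc --subnet-mode=custom
gcloud compute networks subnets create wide-subnet \
  --network=custom-vpc --region=us-central1 --range=10.0.0.0/8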

NEW QUESTION 9
You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new
project, using the fewest possible steps. What should you do?

A. Use gcloud iam roles copy and specify the production project as the destination project.
B. Use gcloud iam roles copy and specify your organization as the destination organization.
C. In the Google Cloud Platform Console, use the ‘create role from role’ functionality.
D. In the Google Cloud Platform Console, use the ‘create role’ functionality and select all applicable permissions.

Answer: A
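A sketch of option A (role ID and project IDs are hypothetical):

gcloud iam roles copy \
  --source="projects/dev-project/roles/myCustomRole" \
  --destination="myCustomRole" \
  --dest-project="prod-project"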

NEW QUESTION 10
Your company is moving its entire workload to Compute Engine. Some servers should be accessible through the Internet, and other servers should only be
accessible over the internal network. All servers need to be able to talk to each other over specific ports and protocols. The current on-premises network relies on
a demilitarized zone (DMZ) for the public servers and a Local Area Network (LAN) for the private servers. You need to design the networking infrastructure on
Google Cloud to match these requirements. What should you do?

A. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
B. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
C. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
D. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.

Answer: C

Explanation:
https://cloud.google.com/vpc/docs/vpc-peering


NEW QUESTION 10
You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service
account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub. What should you do?

A. Enable the Cloud Pub/Sub API in the API Library on the GCP Console.
B. Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it.
C. Use Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed.
D. Grant the App Engine Default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub.

Answer: A

Explanation:
Quickstart: using the Google Cloud Console.
This page shows you how to perform basic tasks in Pub/Sub using the Google Cloud Console. Note: if you are new to Pub/Sub, we recommend that you start with the interactive tutorial.
Before you begin, set up a Cloud Console project: create or select a project, and enable the Pub/Sub API for that project. You can view and manage these resources at any time in the Cloud Console. Then install and initialize the Cloud SDK. Note: you can run the gcloud tool in the Cloud Console without installing the Cloud SDK by using Cloud Shell.
https://cloud.google.com/pubsub/docs/quickstart-console
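The console step has a one-line CLI equivalent, shown here for reference:

# Enable the Pub/Sub API for the currently configured project
gcloud services enable pubsub.googleapis.com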

NEW QUESTION 15
You need to set a budget alert for use of Compute Engine services on one of the three Google Cloud Platform projects that you manage. All three projects are
linked to a single billing account. What should you do?

A. Verify that you are the project billing administrator. Select the associated billing account and create a budget and alert for the appropriate project.
B. Verify that you are the project billing administrator. Select the associated billing account and create a budget and a custom alert.
C. Verify that you are the project administrator. Select the associated billing account and create a budget for the appropriate project.
D. Verify that you are the project administrator. Select the associated billing account and create a budget and a custom alert.

Answer: A

Explanation:
https://cloud.google.com/iam/docs/understanding-roles#billing-roles

NEW QUESTION 20
Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project.
What should you do?

A. Add the group for the finance team to the roles/billing.user role.
B. Add the group for the finance team to the roles/billing.admin role.
C. Add the group for the finance team to the roles/billing.viewer role.
D. Add the group for the finance team to the roles/billing.projectManager role.

Answer: C

Explanation:
"Billing Account Viewer access would usually be granted to finance teams, it provides access to spend information, but does not confer the right to link or unlink
projects or otherwise manage the properties of the billing account." https://cloud.google.com/billing/docs/how-to/billing-access

NEW QUESTION 23
Your company wants to standardize the creation and management of multiple Google Cloud resources using Infrastructure as Code. You want to minimize the amount of repetitive code needed to manage the environment. What should you do?

A. Create a bash script that contains all requirement steps as gcloud commands
B. Develop templates for the environment using Cloud Deployment Manager
C. Use curl in a terminal to send a REST request to the relevant Google API for each individual resource.
D. Use the Cloud Console interface to provision and manage all related resources

Answer: B

Explanation:
You can use Google Cloud Deployment Manager to create a set of Google Cloud resources and manage them as a unit, called a deployment. For example, if your
team's development environment needs two virtual machines (VMs) and a BigQuery database, you can define these resources in a configuration file, and use
Deployment Manager to create, change, or delete these resources. You can make the configuration file part of your team's code repository, so that anyone can
create the same environment with consistent results. https://cloud.google.com/deployment-manager/docs/quickstart
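A deployment from such a configuration file is then created with a single command (deployment and file names are hypothetical):

gcloud deployment-manager deployments create my-environment --config=config.yaml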

NEW QUESTION 27
You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are
the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly. How should you upload the file?


A. Use the GCP Console to transfer the file instead of gsutil.
B. Enable parallel composite uploads using gsutil on the file transfer.
C. Decrease the TCP window size on the machine initiating the transfer.
D. Change the storage class of the bucket from Nearline to Multi-Regional.

Answer: B

Explanation:
https://cloud.google.com/storage/docs/parallel-composite-uploads https://cloud.google.com/storage/docs/uploads-downloads#parallel-composite-uploads
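A sketch of option B (bucket and file names are hypothetical; the threshold value is the one suggested in the gsutil documentation):

gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" \
  cp 32gb-file.dat gs://my-nearline-bucket/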

NEW QUESTION 30
You deployed an App Engine application using gcloud app deploy, but it did not deploy to the intended project. You want to find out why this happened and where
the application deployed. What should you do?

A. Check the app.yaml file for your application and check project settings.
B. Check the web-application.xml file for your application and check project settings.
C. Go to Deployment Manager and review settings for deployment of applications.
D. Go to Cloud Shell and run gcloud config list to review the Google Cloud configuration used for deployment.

Answer: D

Explanation:
C:\GCP\appeng> gcloud config list
[core]
account = xxx@gmail.com
disable_usage_reporting = False
project = my-first-demo-xxxx
https://cloud.google.com/endpoints/docs/openapi/troubleshoot-gce-deployment

NEW QUESTION 32
Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging for one VM that one of the processes is not
responsive. You want to replace this VM in the MIG quickly. What should you do?

A. Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.
B. Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.
C. Use the gcloud compute instances update command with a REFRESH action for the VM.
D. Update and apply the instance template of the MIG.

Answer: A
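For reference, option B's command-line equivalent looks like this (MIG, instance, and zone names are hypothetical):

gcloud compute instance-groups managed recreate-instances my-mig \
  --instances=my-vm --zone=us-central1-a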

NEW QUESTION 37
You are building a data lake on Google Cloud for your Internet of Things (IoT) application. The IoT application has millions of sensors that are constantly streaming structured and unstructured data to your backend in the cloud. You want to build a highly available and resilient architecture based on Google-recommended practices. What should you do?

A. Stream data to Pub/Sub, and use Dataflow to send data to Cloud Storage.
B. Stream data to Pub/Sub, and use Storage Transfer Service to send data to BigQuery.
C. Stream data to Dataflow, and use Storage Transfer Service to send data to BigQuery.
D. Stream data to Dataflow, and use Dataprep by Trifacta to send data to Bigtable.

Answer: B

NEW QUESTION 42
You deployed an LDAP server on Compute Engine that is reachable via TLS through port 636 using UDP. You want to make sure it is reachable by clients over
that port. What should you do?

A. Add the network tag allow-udp-636 to the VM instance running the LDAP server.
B. Create a route called allow-udp-636 and set the next hop to be the VM instance running the LDAP server.
C. Add a network tag of your choice to the instance. Create a firewall rule to allow ingress on UDP port 636 for that network tag.
D. Add a network tag of your choice to the instance running the LDAP server. Create a firewall rule to allow egress on UDP port 636 for that network tag.

Answer: C

Explanation:
A tag is simply a character string added to a tags field in a resource, such as Compute Engine virtual machine (VM) instances or instance templates. A tag is not a
separate resource, so you cannot create it separately. All resources with that string are considered to have that tag. Tags enable you to make firewall rules and
routes applicable to specific VM instances.
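A sketch of option C (tag, rule, and instance names are hypothetical):

gcloud compute instances add-tags ldap-server --tags=ldap-udp --zone=us-central1-a
gcloud compute firewall-rules create allow-ldap-udp-636 \
  --network=default --direction=INGRESS --allow=udp:636 --target-tags=ldap-udp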

NEW QUESTION 45
You are managing a Data Warehouse on BigQuery. An external auditor will review your company's processes, and multiple external consultants will need view
access to the data. You need to provide them with view access while following Google-recommended practices. What should you do?

A. Grant each individual external consultant the role of BigQuery Editor.
B. Grant each individual external consultant the role of BigQuery Viewer.
C. Create a Google Group that contains the consultants and grant the group the role of BigQuery Editor.
D. Create a Google Group that contains the consultants, and grant the group the role of BigQuery Viewer.

Answer: D


NEW QUESTION 49
Your company is using Google Workspace to manage employee accounts. Anticipated growth will increase the number of personnel from 100 employees to 1,000 employees within 2 years. Most employees will need access to your company's Google Cloud account. The systems and processes will need to support 10x growth without performance degradation, unnecessary complexity, or security issues. What should you do?

A. Migrate the users to Active Directory. Connect the Human Resources system to Active Directory. Turn on Google Cloud Directory Sync (GCDS) for Cloud Identity. Turn on Identity Federation from Cloud Identity to Active Directory.
B. Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud Identity.
C. Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor authentication for domain-wide delegation.
D. Use a third-party identity provider service through federation. Synchronize the users from Google Workspace to the third-party provider in real time.

Answer: B

NEW QUESTION 54
You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single Compute Engine zone, eliminate downtime, and minimize cost. What should you do?

A. Create Compute Engine resources in us-central1-b. Balance the load across both us-central1-a and us-central1-b.
B. Create a Managed Instance Group and specify us-central1-a as the zone. Configure the Health Check with a short Health Interval.
C. Create an HTTP(S) Load Balancer. Create one or more global forwarding rules to direct traffic to your VMs.
D. Perform regular backups of your application. Create a Cloud Monitoring Alert and be notified if your application becomes unavailable. Restore from backups when notified.

Answer: A

Explanation:
Choosing a region and zone: you choose which region or zone hosts your resources, which controls where your data is stored and used. Choosing a region and zone is important for several reasons.
Handling failures: distribute your resources across multiple zones and regions to tolerate outages. Google designs zones to be independent from each other: a zone usually has power, cooling, networking, and control planes that are isolated from other zones, and most single failure events will affect only a single zone. Thus, if a zone becomes unavailable, you can transfer traffic to another zone in the same region to keep your services running. Similarly, if a region experiences any disturbances, you should have backup services running in a different region. For more information about distributing your resources and designing a robust system, see Designing Robust Systems.
Decreased network latency: to decrease network latency, you might want to choose a region or zone that is close to your point of service.
https://cloud.google.com/compute/docs/regions-zones#choosing_a_region_and_zone

NEW QUESTION 56
You have created an application that is packaged into a Docker image. You want to deploy the Docker image as a workload on Google Kubernetes Engine. What
should you do?

A. Upload the image to Cloud Storage and create a Kubernetes Service referencing the image.
B. Upload the image to Cloud Storage and create a Kubernetes Deployment referencing the image.
C. Upload the image to Container Registry and create a Kubernetes Service referencing the image.
D. Upload the image to Container Registry and create a Kubernetes Deployment referencing the image.

Answer: D

Explanation:
A deployment is responsible for keeping a set of pods running. A service is responsible for enabling network access to a set of pods.
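A minimal sketch of option D (project, image, and deployment names are hypothetical):

# Build and push the image to Container Registry, then create a Deployment from it
gcloud builds submit --tag gcr.io/my-project/web-app:v1
kubectl create deployment web-app --image=gcr.io/my-project/web-app:v1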

NEW QUESTION 59
You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is running out of memory. You want to upgrade the virtual machine to
have 8 GB of memory. What should you do?

A. Rely on live migration to move the workload to a machine with more memory.
B. Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.
C. Stop the VM, change the machine type to n1-standard-8, and start the VM.
D. Stop the VM, increase the memory to 8 GB, and start the VM.

Answer: D

Explanation:
In Google Compute Engine, if predefined machine types don't meet your needs, you can create an instance with custom virtualized hardware settings. Specifically, you can create an instance with a custom number of vCPUs and custom memory, effectively using a custom machine type. Custom machine types are ideal for the following scenarios: 1. Workloads that aren't a good fit for the predefined machine types that are available to you. 2. Workloads that require more processing power or more memory but don't need all of the upgrades that are provided by the next machine type level.
In our scenario, we only need a memory upgrade. Moving to a bigger predefined instance would also bump up the CPU, which we don't need, so we have to use a custom machine type. It is not possible to change memory while the instance is running, so you need to first stop the instance, change the memory, and then start it again. CPU and memory can only be customized for an instance that has been stopped.
Ref: https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type
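A sketch of the stop/resize/start flow using a custom machine type (instance name and zone are hypothetical):

gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-machine-type my-vm --zone=us-central1-a \
  --custom-cpu=2 --custom-memory=8GB
gcloud compute instances start my-vm --zone=us-central1-a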


NEW QUESTION 60
You have been asked to migrate a Docker application from your datacenter to the cloud. Your solution architect has suggested uploading Docker images to GCR in one project and running the application in a GKE cluster in a separate project. You want to store images in the project img-278322 and run the application in the project prod-278986. You want to tag the image as acme_track_n_trace:v1. You want to follow Google-recommended practices. What should you do?

A. Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace
B. Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1
C. Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace
D. Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace:v1

Answer: B

Explanation:
Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1 is the right answer: this command correctly tags the image as acme_track_n_trace:v1 and uploads it to the img-278322 project.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit

NEW QUESTION 63
Your web application has been running successfully on Cloud Run for Anthos. You want to evaluate an updated version of the application with a specific
percentage of your production users (canary deployment). What should you do?

A. Create a new service with the new version of the application. Split traffic between this version and the version that is currently running.
B. Create a new revision with the new version of the application. Split traffic between this version and the version that is currently running.
C. Create a new service with the new version of the application. Add an HTTP Load Balancer in front of both services.
D. Create a new revision with the new version of the application. Add an HTTP Load Balancer in front of both revisions.

Answer: B

Explanation:
https://cloud.google.com/kuberun/docs/rollouts-rollbacks-traffic-migration
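A hedged sketch of a revision-based canary on Cloud Run (service, image, and revision names are hypothetical):

# Deploy the new revision without sending it any traffic yet
gcloud run deploy my-service --image=gcr.io/my-project/app:v2 --no-traffic
# Shift 10% of production traffic to the new revision
gcloud run services update-traffic my-service --to-revisions=my-service-00002-xyz=10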

NEW QUESTION 66
You have a workload running on Compute Engine that is critical to your business. You want to ensure that the data on the boot disk of this workload is backed up
regularly. You need to be able to restore a backup as quickly as possible in case of disaster. You also want older backups to be cleaned automatically to save on
cost. You want to follow Google-recommended practices. What should you do?

A. Create a Cloud Function to create an instance template.
B. Create a snapshot schedule for the disk using the desired interval.
C. Create a cron job to create a new disk from the disk using gcloud.
D. Create a Cloud Task to create an image and export it to Cloud Storage.

Answer: B

Explanation:
Best practices for persistent disk snapshots: you can create persistent disk snapshots at any time, but you can create snapshots more quickly and with greater reliability if you use the following best practices.
Create a snapshot of your data on a regular schedule to minimize data loss due to unexpected failure. Improve performance by eliminating excessive snapshot downloads and by creating an image and reusing it. Set your snapshot schedule to off-peak hours to reduce snapshot time.
Snapshot frequency limits: you can snapshot your disks at most once every 10 minutes. If you want to issue a burst of requests to snapshot your disks, you can issue at most 6 requests in 60 minutes. If the limit is exceeded, the operation fails.
https://cloud.google.com/compute/docs/disks/snapshot-best-practices
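A sketch of option B (policy, disk, region, and zone names are hypothetical):

gcloud compute resource-policies create snapshot-schedule daily-backup \
  --region=us-central1 --daily-schedule --start-time=04:00 --max-retention-days=30
gcloud compute disks add-resource-policies my-boot-disk \
  --resource-policies=daily-backup --zone=us-central1-a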

NEW QUESTION 68
You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow
Google-recommended practices. What should you do?

A. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/devstorage.write_only'.
B. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/cloud-platform'.
C. Create a service account and add it to the IAM role 'storage.objectCreator' for that bucket.
D. Create a service account and add it to the IAM role 'storage.objectAdmin' for that bucket.

Answer: C

Explanation:
https://cloud.google.com/iam/docs/understanding-service-accounts#using_service_accounts_with_compute_eng
https://cloud.google.com/storage/docs/access-control/iam-roles
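A sketch of option C's bucket-level grant (service account and bucket names are hypothetical):

gsutil iam ch \
  serviceAccount:vm-writer@my-project.iam.gserviceaccount.com:roles/storage.objectCreator \
  gs://my-bucket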


NEW QUESTION 71
A colleague handed over a Google Cloud Platform project for you to maintain. As part of a security checkup, you want to review who has been granted the Project
Owner role. What should you do?

A. In the console, validate which SSH keys have been stored as project-wide keys.
B. Navigate to Identity-Aware Proxy and check the permissions for these resources.
C. Enable Audit Logs on the IAM & admin page for all resources, and validate the results.
D. Use the command gcloud projects get-iam-policy to view the current role assignments.

Answer: D

Explanation:
A simple approach would be to use the command flags available when listing the IAM policy for a given project. For instance, the following command:
gcloud projects get-iam-policy $PROJECT_ID --flatten="bindings[].members" --format="table(bindings.members)" --filter="bindings.role:roles/owner"
outputs all the users and service accounts associated with the role 'roles/owner' in the project in question. https://groups.google.com/g/google-cloud-dev/c/Z6sZs7TvygQ?pli=1

NEW QUESTION 75
You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day.
At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the
problem. What should you do?

A. Use the BigQuery interface to review the nightly job and look for any errors.
B. Review the Error Reporting page in the Cloud Console to find any errors.
C. In Cloud Logging, create a filter for your Data Studio reports.
D. Use the open source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.

Answer: D

Explanation:
Cloud Debugger helps inspect the state of an application at any code location without stopping or slowing down the running app. https://cloud.google.com/debugger/docs

NEW QUESTION 78
Your projects incurred more costs than you expected last month. Your research reveals that a development
GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What
should you do?

A. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource.
B. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE Cluster Operations resource.
C. 1. Go to the GKE console, and delete existing clusters.2. Recreate a new cluster.3. Clear the option to enable legacy Stackdriver Logging.
D. 1. Go to the GKE console, and delete existing clusters.2. Recreate a new cluster.3. Clear the option to enable legacy Stackdriver Monitoring.

Answer: A

Explanation:
https://cloud.google.com/logging/docs/api/v2/resource-list
GKE containers have more detailed log sources than GKE cluster operations.
gke_container labels: cluster_name (an immutable name for the cluster the container is running in), namespace_id (immutable ID of the cluster namespace the container is running in), instance_id (immutable ID of the GCE instance the container is running in), pod_id (immutable ID of the pod the container is running in), container_name (immutable name of the container), and zone (the GCE zone in which the instance is running).
gke_cluster labels: project_id (the identifier of the GCP project associated with this resource, such as "my-project"), cluster_name (the name of the GKE cluster), and location (the location in which the GKE cluster is running).

NEW QUESTION 81
You need to manage a Cloud Spanner Instance for best query performance. Your instance in production runs in a single Google Cloud region. You need to
improve performance in the shortest amount of time. You want to follow Google best practices for service configuration. What should you do?

A. Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 45%. If you exceed this threshold, add nodes to your instance.
B. Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 45%. Use database query statistics to identify queries that result in high CPU usage, and then rewrite those queries to optimize their resource usage.
C. Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 65%. If you exceed this threshold, add nodes to your instance.
D. Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 65%. Use database query statistics to identify queries that result in high CPU usage, and then rewrite those queries to optimize their resource usage.

Answer: B

Explanation:
https://cloud.google.com/spanner/docs/cpu-utilization#recommended-max

NEW QUESTION 86
You are using Google Kubernetes Engine with autoscaling enabled to host a new application. You want to expose this new application to the public, using HTTPS
on a public IP address. What should you do?

A. Create a Kubernetes Service of type NodePort for your application, and a Kubernetes Ingress to expose this Service via a Cloud Load Balancer.
B. Create a Kubernetes Service of type ClusterIP for your application. Configure the public DNS name of your application using the IP of this Service.
C. Create a Kubernetes Service of type NodePort to expose the application on port 443 of each node of the Kubernetes cluster. Configure the public DNS name of your application with the IP of every node of the cluster to achieve load-balancing.
D. Create a HAProxy pod in the cluster to load-balance the traffic to all the pods of the application. Forward the public traffic to HAProxy with an iptables rule. Configure the DNS name of your application using the public IP of the node HAProxy is running on.

Answer: A
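A minimal sketch of option A (deployment, service, and hostname are hypothetical; TLS certificate configuration for HTTPS is omitted):

kubectl expose deployment web-app --type=NodePort --port=443
kubectl create ingress web-ingress --rule="www.example.com/*=web-app:443"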

NEW QUESTION 87
You have been asked to create robust Virtual Private Network (VPN) connectivity between a new Virtual Private Cloud (VPC) and a remote site. Key requirements
include dynamic routing, a shared address space of 10.19.0.1/22, and no overprovisioning of tunnels during a failover event. You want to follow
Google-recommended practices to set up a high availability Cloud VPN. What should you do?

A. Use a custom mode VPC network, configure static routes, and use active/passive routing
B. Use an automatic mode VPC network, configure static routes, and use active/active routing
C. Use a custom mode VPC network, use Cloud Router border gateway protocol (BGP) routes, and use active/passive routing.
D. Use an automatic mode VPC network, use Cloud Router border gateway protocol (BGP) routes and configure policy-based routing

Answer: C

Explanation:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/best-practices

NEW QUESTION 91
Your team is running an on-premises ecommerce application. The application contains a complex set of microservices written in Python, and each microservice is running in a Docker container. Configurations are injected by using environment variables. You need to deploy your current application to a serverless Google Cloud solution. What should you do?

A. Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to Cloud Run. Update the configurations and the required endpoints.
B. Use your existing continuous integration and delivery (CI/CD) pipeline. Use the generated Docker images and deploy them to Cloud Functions. Use the same configuration as on-premises.
C. Use the existing codebase and deploy each service as a separate Cloud Function. Update the configurations and the required endpoints.
D. Use your existing codebase and deploy each service as a separate Cloud Run service. Use the same configurations as on-premises.

Answer: A

NEW QUESTION 93
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will
run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?

A. Deploy the monitoring pod in a StatefulSet object.


B. Deploy the monitoring pod in a DaemonSet object.
C. Reference the monitoring pod in a Deployment object.
D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.

Answer: B

Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns
DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets
automatically add Pods to the new nodes as needed.
In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you
add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed. So, this is a perfect fit for our monitoring pod.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples
of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd. For example, you could have
DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use
different configurations for different hardware types and resource needs.

NEW QUESTION 94
You are in charge of provisioning access for all Google Cloud users in your organization. Your company recently acquired a startup company that has their own
Google Cloud organization. You need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the startup company's
organization as in your own organization. What should you do?

A. In the Google Cloud console for your organization, select Create role from selection, and choose destination as the startup company's organization
B. In the Google Cloud console for the startup company, select Create role from selection and choose source as the startup company's Google Cloud organization.
C. Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud Organization as the destination.
D. Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup company s organization as the destination.

Answer: C

Explanation:
https://cloud.google.com/architecture/best-practices-vpc-design#shared-service Cloud VPN is another alternative. Because Cloud VPN establishes reachability
through managed IPsec tunnels, it doesn't have the aggregate limits of VPC Network Peering. Cloud VPN uses a VPN Gateway for connectivity and doesn't
consider the aggregate resource use of the IPsec peer. The drawbacks of Cloud VPN include increased costs (VPN tunnels and traffic egress), management
overhead required to maintain tunnels, and the performance overhead of IPsec.


NEW QUESTION 95
You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?

A. After the VM has been created, use your Google Account credentials to log in to the VM.
B. After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
C. When creating the VM, add metadata to the instance using 'windows-password' as the key and a password as the value.
D. After the VM has been created, download the JSON private key for the default Compute Engine service account. Use the credentials in the JSON file to log in to the VM.

Answer: B

Explanation:
You can generate Windows passwords using either the Google Cloud Console or the gcloud command-line tool. This option uses the right syntax to reset the Windows password:
gcloud compute reset-windows-password windows-instance
Ref: https://cloud.google.com/compute/docs/instances/windows/creating-passwords-for-windows-instances#gc

NEW QUESTION 99
You are building an application that processes data files uploaded from thousands of suppliers. Your primary goals for the application are data security and the
expiration of aged data. You need to design the application to:
• Restrict access so that suppliers can access only their own data.
• Give suppliers write access to data only for 30 minutes.
• Delete data that is over 45 days old.
You have a very short development cycle, and you need to make sure that the application requires minimal maintenance. Which two strategies should you use?
(Choose two.)

A. Build a lifecycle policy to delete Cloud Storage objects after 45 days.
B. Use signed URLs to allow suppliers limited time access to store their objects.
C. Set up an SFTP server for your application, and create a separate user for each supplier.
D. Build a Cloud Function that triggers a timer of 45 days to delete objects that have expired.
E. Develop a script that loops through all Cloud Storage buckets and deletes any buckets that are older than 45 days.

Answer: AB

Explanation:
(A) Object Lifecycle Management Delete
The Delete action deletes an object when the object meets all conditions specified in the lifecycle rule.
Exception: In buckets with Object Versioning enabled, deleting the live version of an object causes it to become a noncurrent version, while deleting a noncurrent
version deletes that version permanently.
https://cloud.google.com/storage/docs/lifecycle#delete
(B) Signed URLs
This page provides an overview of signed URLs, which you use to give time-limited resource access to anyone in possession of the URL, regardless of whether
they have a Google account
https://cloud.google.com/storage/docs/access-control/signed-urls
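A hedged sketch of both strategies (bucket, object, and key file names are hypothetical; lifecycle.json would contain a Delete rule with an age condition of 45 days):

# Give a supplier 30 minutes of write access to a single object
gsutil signurl -m PUT -d 30m service-account-key.json gs://supplier-data/supplier-a/upload.csv
# Apply a lifecycle configuration that deletes objects older than 45 days
gsutil lifecycle set lifecycle.json gs://supplier-data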

NEW QUESTION 101


You have downloaded and installed the gcloud command line interface (CLI) and have authenticated with your Google Account. Most of your Compute Engine
instances in your project run in the europe-west1-d zone. You want to avoid having to specify this zone with each CLI command when managing these instances.
What should you do?

A. Set the europe-west1-d zone as the default zone using the gcloud config subcommand.
B. In the Settings page for Compute Engine under Default location, set the zone to europe-west1-d.
C. In the CLI installation directory, create a file called default.conf containing zone=europe-west1-d.
D. Create a Metadata entry on the Compute Engine page with key compute/zone and value europe-west1-d.

Answer: A

Explanation:
You can also change the default zone and region in your metadata server by making a request to the metadata server, for example:
gcloud compute project-info add-metadata --metadata google-compute-default-region=europe-west1,google-compute-default-zone=europe-west1-b
The gcloud command-line tool only picks up on new default zone and region changes after you rerun the gcloud init command. After updating your default metadata, run gcloud init to reinitialize your default configuration.
https://cloud.google.com/compute/docs/gcloud-compute#change_your_default_zone_and_region_in_the_metad
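Option A's command, for reference:

gcloud config set compute/zone europe-west1-d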

NEW QUESTION 103


You have deployed an application on a Compute Engine instance. An external consultant needs to access the Linux-based instance. The consultant is connected
to your corporate network through a VPN connection, but the consultant has no Google account. What should you do?

A. Instruct the external consultant to use the gcloud compute ssh command line tool by using Identity-Aware Proxy to access the instance.
B. Instruct the external consultant to use the gcloud compute ssh command line tool by using the public IP address of the instance to access it.
C. Instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key.
D. Instruct the external consultant to generate an SSH key pair, and request the private key from the consultant. Add the private key to the instance yourself, and have the consultant access the instance through SSH with their public key.

Answer: C

Explanation:
The best option is to instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Then, add the public key to the
instance yourself, and have the consultant access the instance through SSH with their private key. This way, you can grant the consultant access to the instance


without requiring a Google account or exposing the instance’s public IP address. This option also follows the best practice of using user-managed SSH keys
instead of service account keys for SSH access1.
Option A is not feasible because the external consultant does not have a Google account, and therefore cannot use Identity-Aware Proxy (IAP) to access the
instance. IAP requires the user to authenticate with a Google account and have the appropriate IAM permissions to access the instance2. Option B is not secure
because it exposes the instance’s public IP address, which can increase the risk of unauthorized access or attacks. Option D is not correct because it reverses the
roles of the public and private keys. The public key should be added to the instance, and the private key should be kept by the consultant. Sharing the private key
with anyone else can compromise the security of the SSH connection3.
References:
1: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
2: https://cloud.google.com/iap/docs/using-tcp-forwarding
3: https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances

NEW QUESTION 105


You need to create a custom IAM role for use with a GCP service. All permissions in the role must be suitable for production use. You also want to clearly share
with your organization the status of the custom role. This will be the first version of the custom role. What should you do?

A. Use permissions in your role that use the 'supported' support level for role permissions. Set the role stage to ALPHA while testing the role permissions.
B. Use permissions in your role that use the 'supported' support level for role permissions. Set the role stage to BETA while testing the role permissions.
C. Use permissions in your role that use the 'testing' support level for role permissions. Set the role stage to ALPHA while testing the role permissions.
D. Use permissions in your role that use the 'testing' support level for role permissions. Set the role stage to BETA while testing the role permissions.

Answer: A

Explanation:
When setting support levels for permissions in custom roles, you can set to one of SUPPORTED, TESTING or NOT_SUPPORTED.
Ref: https://cloud.google.com/iam/docs/custom-roles-permissions-support
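As a sketch, a first-version custom role at the ALPHA stage could be created like this (role ID, title, project, and permission list are hypothetical):

gcloud iam roles create customAuditor --project=my-project \
  --title="Custom Auditor" \
  --permissions=compute.instances.get,compute.instances.list \
  --stage=ALPHA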

NEW QUESTION 110


You have an application that runs on Compute Engine VM instances in a custom Virtual Private Cloud (VPC). Your company's security policies only allow the use of internal IP addresses on VM instances and do not let VM instances connect to the internet. You need to ensure that the application can access a file hosted in a Cloud Storage bucket within your project. What should you do?

A. Enable Private Service Access on the Cloud Storage bucket.
B. Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.
C. Enable Private Google Access on the subnet within the custom VPC.
D. Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.

Answer: A

NEW QUESTION 113


You want to deploy an application on Cloud Run that processes messages from a Cloud Pub/Sub topic. You want to follow Google-recommended practices. What
should you do?

A. 1. Create a Cloud Function that uses a Cloud Pub/Sub trigger on that topic.2. Call your application on Cloud Run from the Cloud Function for every message.
B. 1. Grant the Pub/Sub Subscriber role to the service account used by Cloud Run.2. Create a Cloud Pub/Sub subscription for that topic.3. Make your application
pull messages from that subscription.
C. 1. Create a service account.2. Give the Cloud Run Invoker role to that service account for your Cloud Run application.3. Create a Cloud Pub/Sub subscription
that uses that service account and uses your Cloud Run application as the push endpoint.
D. 1. Deploy your application on Cloud Run on GKE with the connectivity set to Internal.2. Create a Cloud Pub/Sub subscription for that topic.3. In the same
Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.

Answer: C

Explanation:
https://cloud.google.com/run/docs/tutorials/pubsub#integrating-pubsub
* 1. Create a service account. 2. Give the Cloud Run Invoker role to that service account for your Cloud Run application. 3. Create a Cloud Pub/Sub subscription
that uses that service account and uses your Cloud Run application as the push endpoint.
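A hedged sketch of those three steps (service account, service, topic, subscription, and endpoint names are hypothetical):

gcloud iam service-accounts create pubsub-invoker
gcloud run services add-iam-policy-binding my-app \
  --member=serviceAccount:pubsub-invoker@my-project.iam.gserviceaccount.com \
  --role=roles/run.invoker
gcloud pubsub subscriptions create my-sub --topic=my-topic \
  --push-endpoint=https://my-app-abc123-uc.a.run.app/ \
  --push-auth-service-account=pubsub-invoker@my-project.iam.gserviceaccount.com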

NEW QUESTION 114


An employee was terminated, but their access to Google Cloud Platform (GCP) was not removed until 2 weeks later. You need to find out whether this employee accessed any sensitive customer information after their termination. What should you do?

A. View System Event Logs in Stackdriver. Search for the user's email as the principal.
B. View System Event Logs in Stackdriver. Search for the service account associated with the user.
C. View Data Access audit logs in Stackdriver. Search for the user's email as the principal.
D. View the Admin Activity log in Stackdriver. Search for the service account associated with the user.

Answer: C

Explanation:


https://cloud.google.com/logging/docs/audit
Data Access audit logs Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create,
modify, or read user-provided resource data.
https://cloud.google.com/logging/docs/audit#data-access
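A sketch of such a search from the CLI (project ID and email address are hypothetical):

gcloud logging read \
  'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.authenticationInfo.principalEmail="former.employee@example.com"' \
  --project=my-project --limit=50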

NEW QUESTION 117


You are using Container Registry to centrally store your company’s container images in a separate project. In another project, you want to create a Google
Kubernetes Engine (GKE) cluster. You want to ensure that Kubernetes can download images from Container Registry. What should you do?

A. In the project where the images are stored, grant the Storage Object Viewer IAM role to the service account used by the Kubernetes nodes.
B. When you create the GKE cluster, choose the Allow full access to all Cloud APIs option under 'Access scopes'.
C. Create a service account, and give it access to Cloud Storage. Create a P12 key for this service account and use it as an imagePullSecrets in Kubernetes.
D. Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account.

Answer: A

Explanation:
Container Registry stores images in a Cloud Storage bucket and ignores permissions set on individual objects within that bucket, so configuring per-image ACLs (option D) does not work. Instead, grant the Storage Object Viewer role in the storage project to the service account used by the Kubernetes nodes.
Ref: https://cloud.google.com/container-registry/docs/access-control
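A sketch of option A's grant (the registry project and node service account names are hypothetical):

gcloud projects add-iam-policy-binding images-project \
  --member=serviceAccount:gke-nodes@cluster-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer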

NEW QUESTION 119


You have a Dockerfile that you need to deploy on Kubernetes Engine. What should you do?

A. Use kubectl app deploy <dockerfilename>.
B. Use gcloud app deploy <dockerfilename>.
C. Create a Docker image from the Dockerfile and upload it to Container Registry. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.
D. Create a Docker image from the Dockerfile and upload it to Cloud Storage. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.

Answer: C

NEW QUESTION 124


You have a Compute Engine instance hosting an application used between 9 AM and 6 PM on weekdays. You want to back up this instance daily for disaster
recovery purposes. You want to keep the backups for 30 days. You want the Google-recommended solution with the least management overhead and the least
number of services. What should you do?

A. 1. Update your instances' metadata to add the following value: snapshot-schedule: 0 1 * * *. 2. Update your instances' metadata to add the following value: snapshot-retention: 30.
B. 1. In the Cloud Console, go to the Compute Engine Disks page and select your instance's disk. 2. In the Snapshot Schedule section, select Create Schedule and configure the following parameters: Schedule frequency: Daily; Start time: 1:00 AM - 2:00 AM; Autodelete snapshots after 30 days.
C. 1. Create a Cloud Function that creates a snapshot of your instance's disk. 2. Create a Cloud Function that deletes snapshots that are older than 30 days. 3. Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.
D. 1. Create a bash script in the instance that copies the content of the disk to Cloud Storage. 2. Create a bash script in the instance that deletes data older than 30 days in the backup Cloud Storage bucket. 3. Configure the instance's crontab to execute these scripts daily at 1:00 AM.

Answer: B

Explanation:
Creating scheduled snapshots for persistent disk This document describes how to create a snapshot schedule to regularly and automatically back up your zonal
and regional persistent disks. Use snapshot schedules as a best practice to back up your Compute Engine workloads. After creating a snapshot schedule, you can
apply it to one or more persistent disks. https://cloud.google.com/compute/docs/disks/scheduled-snapshots
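
The same schedule can also be created from the command line; a minimal sketch, assuming placeholder names daily-backup and my-disk:
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 --daily-schedule --start-time=01:00 \
    --max-retention-days=30
gcloud compute disks add-resource-policies my-disk \
    --zone=us-central1-a --resource-policies=daily-backup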

NEW QUESTION 127


You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to
have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse
proxy?

A. Create a Cloud Memorystore for Redis instance with 32-GB capacity.


B. Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.
C. Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.
D. Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.

Answer: A

Explanation:
What is Google Cloud Memorystore?
Overview. Cloud Memorystore for Redis is a fully managed Redis service for Google Cloud Platform. Applications running on Google Cloud Platform can achieve
extreme performance by leveraging the highly scalable, highly available, and secure Redis service without the burden of managing complex Redis deployments.
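
A minimal sketch of creating such an instance (the name my-cache and the region are placeholders):
gcloud redis instances create my-cache --size=32 --region=us-east1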

NEW QUESTION 128


You have developed a containerized web application that will serve Internal colleagues during business hours. You want to ensure that no costs are incurred
outside of the hours the application is used. You have just created a new Google Cloud project and want to deploy the application. What should you do?


A. Deploy the container on Cloud Run for Anthos, and set the minimum number of instances to zero.
B. Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to zero.
C. Deploy the container on App Engine flexible environment with autoscaling, and set the value min_instances to zero in the app.yaml.
D. Deploy the container on App Engine flexible environment with manual scaling, and set the value instances to zero in the app.yaml.

Answer: B

Explanation:
Cloud Run (fully managed) can scale to zero instances when no requests are being served, so no compute costs are incurred outside business hours. Cloud Run for Anthos runs on a GKE cluster whose nodes keep incurring costs even when the application is idle.
https://cloud.google.com/kuberun/docs/architecture-overview#components_in_the_default_installation
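
A minimal deploy sketch, assuming a hypothetical image gcr.io/my-project/my-app (on Cloud Run, minimum instances already defaults to zero):
gcloud run deploy my-app --image=gcr.io/my-project/my-app \
    --platform=managed --region=us-central1 --min-instances=0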

NEW QUESTION 130


You need to configure optimal data storage for files stored in Cloud Storage for minimal cost. The files are used in a mission-critical analytics pipeline that is used
continually. The users are in Boston, MA (United States). What should you do?

A. Configure regional storage for the region closest to the users. Configure a Nearline storage class.
B. Configure regional storage for the region closest to the users. Configure a Standard storage class.
C. Configure dual-regional storage for the dual region closest to the users. Configure a Nearline storage class.
D. Configure dual-regional storage for the dual region closest to the users. Configure a Standard storage class.

Answer: B

Explanation:
Keywords: the data is used continually, so choose the Standard storage class (Nearline carries minimum storage duration and retrieval costs); the users are in a single location and cost must be minimal, so regional storage in the region closest to the users is sufficient.
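
As an illustrative sketch, the bucket could be created as follows (the bucket name is a placeholder; us-east4 is one plausible region near Boston):
gsutil mb -c standard -l us-east4 gs://my-analytics-bucket/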

NEW QUESTION 131


You manage three Google Cloud projects with the Cloud Monitoring API enabled. You want to follow Google-recommended practices to visualize CPU and
network metrics for all three projects together. What should you do?

A. * 1. Create a Cloud Monitoring Dashboard. * 2. Collect metrics and publish them into the Pub/Sub topics. * 3. Add CPU and network Charts for each of the three projects.
B. * 1. Create a Cloud Monitoring Dashboard. * 2. Select the CPU and Network metrics from the three projects. * 3. Add CPU and network Charts for each of the three projects.
C. * 1. Create a Service Account and apply roles/viewer on the three projects. * 2. Collect metrics and publish them to the Cloud Monitoring API. * 3. Add CPU and network Charts for each of the three projects.
D. * 1. Create a fourth Google Cloud project. * 2. Create a Cloud Workspace from the fourth project and add the other three projects.

Answer: B

NEW QUESTION 134


You have an application running in Google Kubernetes Engine (GKE) with cluster autoscaling enabled. The application exposes a TCP endpoint. There are
several replicas of this application. You have a Compute Engine instance in the same region, but in another Virtual Private Cloud (VPC), called gce-network, that
has no overlapping IP ranges with the first VPC. This instance needs to connect to the application on GKE. You want to minimize effort. What should you do?

A. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Set the service's externalTrafficPolicy to Cluster.3. Configure
the Compute Engine instance to use the address of the load balancer that has been created.
B. 1. In GKE, create a Service of type NodePort that uses the application's Pods as backend.2. Create a Compute Engine instance called proxy with 2 network
interfaces, one in each VPC.3. Use iptables on this instance to forward traffic from gce-network to the GKE nodes.4. Configure the Compute Engine instance to
use the address of proxy in gce-network as endpoint.
C. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Add an annotation to this service: cloud.google.com/load-
balancer-type: Internal3. Peer the two VPCs together.4. Configure the Compute Engine instance to use the address of the load balancer that has been created.
D. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Add a Cloud Armor Security Policy to the load balancer that
whitelists the internal IPs of the MIG's instances.3. Configure the Compute Engine instance to use the address of the load balancer that has been created.

Answer: C

Explanation:
Option C peers the two VPCs (feasible because the question states there is no overlap between the IP ranges of both VPCs), deploys the load balancer as internal with the annotation, and configures the endpoint so that the Compute Engine instance can access the application internally, that is, without the need for a public IP at any time and therefore without leaving the Google network. The traffic never crosses the public internet.
https://medium.com/pablo-perez/k8s-externaltrafficpolicy-local-or-cluster-40b259a19404
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
clients in a VPC network connected to the LoadBalancer network using VPC Network Peering can also access the Service
https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters
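
A minimal sketch of the internal LoadBalancer Service from option C; the service name, app label, and port are placeholders:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app-ilb
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
EOF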

NEW QUESTION 135


You are configuring service accounts for an application that spans multiple projects. Virtual machines (VMs) running in the web-applications project need access to
BigQuery datasets in crm-databases-proj. You want to follow Google-recommended practices to give access to the service account in the web-applications project.
What should you do?

A. Give “project owner” for web-applications appropriate roles to crm-databases-proj


B. Give “project owner” role to crm-databases-proj and the web-applications project.
C. Give “project owner” role to crm-databases-proj and bigquery.dataViewer role to web-applications.
D. Give bigquery.dataViewer role to crm-databases-proj and appropriate roles to web-applications.

Answer: D

Explanation:
Following the principle of least privilege, grant the service account used by the VMs the bigquery.dataViewer role on crm-databases-proj (where the datasets live) and only the roles it needs on web-applications. Granting "project owner" is far broader than required.
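
A minimal sketch of the grant, assuming a hypothetical service account vm-sa@web-applications.iam.gserviceaccount.com used by the VMs:
gcloud projects add-iam-policy-binding crm-databases-proj \
    --member="serviceAccount:vm-sa@web-applications.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"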


NEW QUESTION 137


You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod. You noticed the pod got recreated.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-nqqmt 1/1 Running 0 9m41s
$ kubectl delete pod nginx-84748895c4-nqqmt
pod "nginx-84748895c4-nqqmt" deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-k6bzl 1/1 Running 0 25s
What should you do to delete the deployment and avoid pod getting recreated?

A. kubectl delete deployment nginx


B. kubectl delete --deployment=nginx
C. kubectl delete pod nginx-84748895c4-k6bzl --no-restart 2
D. kubectl delete inginx

Answer: A

Explanation:
This command correctly deletes the deployment. Pods are managed by kubernetes workloads (deployments). When a pod is deleted, the deployment detects the
pod is unavailable and brings up another pod to maintain the replica count. The only way to delete the workload is by deleting the deployment itself using the
kubectl delete deployment command.
$ kubectl delete deployment nginx
deployment.apps "nginx" deleted
Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources

NEW QUESTION 139


You manage an App Engine Service that aggregates and visualizes data from BigQuery. The application is deployed with the default App Engine Service account.
The data that needs to be visualized resides in a different project managed by another team. You do not have access to this project, but you want your application
to be able to read data from the BigQuery dataset. What should you do?

A. Ask the other team to grant your default App Engine Service account the role of BigQuery Job User.
B. Ask the other team to grant your default App Engine Service account the role of BigQuery Data Viewer.
C. In Cloud IAM of your project, ensure that the default App Engine service account has the role of BigQuery Data Viewer.
D. In Cloud IAM of your project, grant a newly created service account from the other team the role of BigQuery Job User in your project.

Answer: B

Explanation:
The resource you need access to is in the other project, so the other team must grant access there. roles/bigquery.dataViewer (BigQuery Data Viewer): when applied to a table or view, this role provides permissions to read data and metadata from the table or view (it cannot be applied to individual models or routines). When applied to a dataset, it provides permissions to read the dataset's metadata, list tables in the dataset, and read data and metadata from the dataset's tables. When applied at the project or organization level, this role can also enumerate all datasets in the project. Additional roles, however, are necessary to allow the running of jobs.
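
For example, the other team could grant access with something like this sketch (the project IDs are placeholders; the App Engine default service account is named PROJECT_ID@appspot.gserviceaccount.com):
gcloud projects add-iam-policy-binding other-team-project \
    --member="serviceAccount:your-project@appspot.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"
A dataset-level grant of the same role is also possible and even narrower.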

NEW QUESTION 140


You recently received a new Google Cloud project with an attached billing account where you will work. You need to create instances, set firewalls, and store data
in Cloud Storage. You want to follow
Google-recommended practices. What should you do?

A. Use the gcloud CLI services enable cloudresourcemanager.googleapis.com command to enable all resources.
B. Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com
command to enable the Cloud Storage APIs.
C. Open the Google Cloud console and enable all Google Cloud APIs from the API dashboard.
D. Open the Google Cloud console and run gcloud init --project <project-id> in a Cloud Shell.

Answer: B

NEW QUESTION 145


You have two subnets (subnet-a and subnet-b) in the default VPC. Your database servers are running in subnet-a. Your application servers and web servers are
running in subnet-b. You want to configure a firewall rule that only allows database traffic from the application servers to the database servers. What should you
do?

A. * Create service accounts sa-app and sa-db. * Associate the service account sa-app with the application servers and the service account sa-db with the database servers. * Create an ingress firewall rule to allow network traffic from source service account sa-app to target service account sa-db.
B. * Create network tags app-server and db-server. * Add the app-server tag to the application servers and the db-server tag to the database servers. * Create an egress firewall rule to allow network traffic from source network tag app-server to target network tag db-server.
C. * Create a service account sa-app and a network tag db-server. * Associate the service account sa-app with the application servers and the network tag db-server with the database servers. * Create an ingress firewall rule to allow network traffic from source VPC IP addresses and target the subnet-a IP addresses.
D. * Create a network tag app-server and service account sa-db. * Add the tag to the application servers and associate the service account with the database servers. * Create an egress firewall rule to allow network traffic from source network tag app-server to target service account sa-db.

Answer: A

Explanation:
Using service accounts to identify the application and database servers, together with an ingress firewall rule from source service account sa-app to target service account sa-db, is the Google-recommended way to restrict traffic between groups of VMs. Egress rules cannot target a network tag as a destination, and rules cannot mix source network tags with target service accounts.
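
A sketch of the firewall rule, assuming MySQL on port 3306 and a placeholder project ID my-project:
gcloud compute firewall-rules create allow-app-to-db \
    --network=default --direction=INGRESS --action=ALLOW --rules=tcp:3306 \
    --source-service-accounts=sa-app@my-project.iam.gserviceaccount.com \
    --target-service-accounts=sa-db@my-project.iam.gserviceaccount.com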


NEW QUESTION 149


You just installed the Google Cloud CLI on your new corporate laptop. You need to list the existing instances of your company on Google Cloud. What must you do
before you run the gcloud compute instances list command?
Choose 2 answers

A. Run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to gcloud CLI.
B. Create a Google Cloud service account, and download the service account key. Place the key file in a folder on your machine where gcloud CLI can find it.
C. Download your Cloud Identity user account key. Place the key file in a folder on your machine where gcloud CLI can find it.
D. Run gcloud config set compute/zone $my_zone to set the default zone for gcloud CLI.
E. Run gcloud config set project $my_project to set the default project for gcloud CLI.

Answer: AE

Explanation:
Before you run the gcloud compute instances list command, you need to do two things: authenticate with your user account and set the default project for gcloud
CLI.
To authenticate with your user account, you need to run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to
gcloud CLI. This will authorize the gcloud CLI to access Google Cloud resources on your behalf1.
To set the default project for gcloud CLI, you need to run gcloud config set project $my_project, where
$my_project is the ID of the project that contains the instances you want to list. This will save you from having to specify the project flag for every gcloud
command2.
Option B is not recommended, because using a service account key increases the risk of credential leakage and misuse. It is also not necessary, because you can
use your user account to authenticate to the gcloud CLI3. Option C is not correct, because there is no such thing as a Cloud Identity user account key. Cloud
Identity is a service that provides identity and access management for Google Cloud users and groups4. Option D is not required, because the gcloud compute
instances list command does not depend on the default zone. You can
list instances from all zones or filter by a specific zone using the --filter flag.
References:
1: https://cloud.google.com/sdk/docs/authorizing
2: https://cloud.google.com/sdk/gcloud/reference/config/set
3: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
4: https://cloud.google.com/identity/docs/overview
: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
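
Putting the two steps together (the project ID is a placeholder):
gcloud auth login
gcloud config set project my-project
gcloud compute instances list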

NEW QUESTION 151


You significantly changed a complex Deployment Manager template and want to confirm that the dependencies of all defined resources are properly met before
committing it to the project. You want the most rapid feedback on your changes. What should you do?

A. Use granular logging statements within a Deployment Manager template authored in Python.
B. Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console.
C. Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures.
D. Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.

Answer: D
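
Explanation:
A minimal sketch of the preview run (the deployment and config names are placeholders):
gcloud deployment-manager deployments create my-deployment \
    --config=config.yaml --preview
Previewing expands the full configuration and shows the state of interdependent resources without actually instantiating them, which gives the most rapid feedback on dependency problems.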

NEW QUESTION 152


Your company’s infrastructure is on-premises, but all machines are running at maximum capacity. You want to burst to Google Cloud. The workloads on Google
Cloud must be able to directly communicate to the workloads on-premises using a private IP range. What should you do?

A. In Google Cloud, configure the VPC as a host for Shared VPC.


B. In Google Cloud, configure the VPC for VPC Network Peering.
C. Create bastion hosts both in your on-premises environment and on Google Cloud. Configure both as proxy servers using their public IP addresses.
D. Set up Cloud VPN between the infrastructure on-premises and Google Cloud.

Answer: D

Explanation:
"Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to
the same project or the same organization."
https://cloud.google.com/vpc/docs/vpc-peering while
"Cloud Interconnect provides low latency, high availability connections that enable you to reliably transfer data between your on-premises and Google Cloud Virtual
Private Cloud (VPC) networks."
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview and
"HA VPN is a high-availability (HA) Cloud VPN solution that lets you securely connect your on-premises network to your VPC network through an IPsec VPN
connection in a single region."
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview

NEW QUESTION 155


You deployed an application on a managed instance group in Compute Engine. The application accepts Transmission Control Protocol (TCP) traffic on port 389
and requires you to preserve the IP address of the client who is making a request. You want to expose the application to the internet by using a load balancer.
What should you do?

A. Expose the application by using an external TCP Network Load Balancer.


B. Expose the application by using a TCP Proxy Load Balancer.
C. Expose the application by using an SSL Proxy Load Balancer.


D. Expose the application by using an internal TCP Network Load Balancer.

Answer: A

Explanation:
A network load balancer is a passthrough load balancer: packets are delivered with the client's source IP address preserved. TCP Proxy and SSL Proxy load balancers terminate the client connection and open a new one to the backends, so the original client IP address is not preserved by default.

NEW QUESTION 159


Your company uses BigQuery for data warehousing. Over time, many different business units in your company have created 1000+ datasets across hundreds of
projects. Your CIO wants you to examine all datasets to find tables that contain an employee_ssn column. You want to minimize effort in performing this task.
What should you do?

A. Go to Data Catalog and search for employee_ssn in the search box.


B. Write a shell script that uses the bq command line tool to loop through all the projects in your organization.
C. Write a script that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find the employee_ssn
column.
D. Write a Cloud Dataflow job that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find
employee_ssn column.

Answer: A

Explanation:
https://cloud.google.com/bigquery/docs/quickstarts/quickstart-web-ui?authuser=4

NEW QUESTION 162


You need to grant access for three users so that they can view and edit table data on a Cloud Spanner instance. What should you do?

A. Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to the role.


B. Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to a new group. Add the group to the role.
C. Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to the role.
D. Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to a new group. Add the group to the role.

Answer: B

Explanation:
https://cloud.google.com/spanner/docs/iam#spanner.databaseUser
Using the gcloud tool, execute the gcloud iam roles describe roles/spanner.databaseUser command on Cloud Shell. Attach the users to a newly created Google
group and add the group to the role.
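
The actual grant could then look like this sketch (the group address and project ID are placeholders):
gcloud iam roles describe roles/spanner.databaseUser
gcloud projects add-iam-policy-binding my-project \
    --member="group:spanner-users@example.com" \
    --role="roles/spanner.databaseUser"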

NEW QUESTION 167


You are working with a user to set up an application in a new VPC behind a firewall. The user is concerned about data egress. You want to configure the fewest
open egress ports. What should you do?

A. Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports.
B. Set up a high-priority (1000) rule that pairs both ingress and egress ports.
C. Set up a high-priority (1000) rule that blocks all egress and a low-priority (65534) rule that allows only the appropriate ports.
D. Set up a high-priority (1000) rule to allow the appropriate ports.

Answer: A

Explanation:
Implied rules: Every VPC network has two implied firewall rules. These rules exist, but are not shown in the Cloud Console.
Implied allow egress rule: an egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to any destination, except for traffic blocked by Google Cloud. A higher priority firewall rule may restrict outbound access. Internet access is allowed if no other firewall rules deny outbound traffic and if the instance has an external IP address or uses a Cloud NAT instance. For more information, see Internet access requirements.
Implied deny ingress rule: an ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by blocking incoming connections to them. A higher priority rule might allow incoming access. The default network includes some additional rules that override this one, allowing certain types of incoming connections.
https://cloud.google.com/vpc/docs/firewalls#default_firewall_rules
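
A sketch of the two rules from option A, assuming a VPC named my-vpc and HTTPS as the only required egress:
gcloud compute firewall-rules create block-all-egress \
    --network=my-vpc --direction=EGRESS --action=DENY --rules=all \
    --destination-ranges=0.0.0.0/0 --priority=65534
gcloud compute firewall-rules create allow-egress-https \
    --network=my-vpc --direction=EGRESS --action=ALLOW --rules=tcp:443 \
    --destination-ranges=0.0.0.0/0 --priority=1000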

NEW QUESTION 169


You are performing a monthly security check of your Google Cloud environment and want to know who has access to view data stored in your Google Cloud
Project. What should you do?

A. Enable Audit Logs for all APIs that are related to data storage.
B. Review the IAM permissions for any role that allows for data access.
C. Review the Identity-Aware Proxy settings for each resource.
D. Create a Data Loss Prevention job.

Answer: B

Explanation:
https://cloud.google.com/logging/docs/audit

NEW QUESTION 171


You are managing several Google Cloud Platform (GCP) projects and need access to all logs for the past 60 days. You want to be able to explore and quickly analyze the log contents. You want to follow Google-recommended practices to obtain the combined logs for all projects. What should you do?

A. Navigate to Stackdriver Logging and select resource.labels.project_id="*"


B. Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days.
C. Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days.
D. Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days.

Answer: B

Explanation:
Navigate to Stackdriver Logging and select resource.labels.project_id=*. is not right.
Log entries are held in Stackdriver Logging for a limited time known as the retention period which is 30 days (default configuration). After that, the entries are
deleted. To keep log entries longer, you need to export them outside of Stackdriver Logging by configuring log sinks.
Ref: https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-cloud-audit-logging Configure a Cloud Scheduler job to read from
Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days. is not right.
While this works, it makes no sense to use Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery when Google provides a feature (export
sinks) that does exactly the same thing and works out of the box.Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days. is not right.
You can export logs by creating one or more sinks that include a logs query and an export destination. Supported destinations for exported log entries are Cloud
Storage, BigQuery, and
Pub/Sub.Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was created: a Google Cloud project, organization, folder, or billing account. If it
makes it easier to exporting from all projects of an organication, you can create an aggregated sink that can export log entries from all the projects, folders, and
billing accounts of a Google Cloud
organization.Ref: https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in Cloud Storage, but querying logs information from Cloud Storage is harder than Querying information from BigQuery dataset.
For this reason, we should prefer Big Query over Cloud Storage.
Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days. is the right answer.
You can export logs by creating one or more sinks that include a logs query and an export destination. Supported destinations for exported log entries are Cloud
Storage, BigQuery, and
Pub/Sub.Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was created: a Google Cloud project, organization, folder, or billing account. If it
makes it easier to exporting from all projects of an organication, you can create an aggregated sink that can export log entries from all the projects, folders, and
billing accounts of a Google Cloud
organization.Ref: https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in a BigQuery Dataset. Querying information from a Big Query dataset is easier and quicker than analyzing contents in Cloud
Storage bucket. As our requirement is to Quickly analyze the log contents, we should prefer Big Query over Cloud Storage.
Also, You can control storage costs and optimize storage usage by setting the default table expiration for newly created tables in a dataset. If you set the property
when the dataset is created, any table created in the dataset is deleted after the expiration period. If you set the property after the dataset is created, only new
tables are deleted after the expiration period.For example, if you set the default table expiration to 7 days, older data is automatically deleted after 1 week.Ref:
https://cloud.google.com/bigquery/docs/best-practices-storage
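
A sketch of an aggregated sink at the organization level (the organization ID, project, and dataset names are placeholders):
gcloud logging sinks create org-logs-sink \
    bigquery.googleapis.com/projects/my-project/datasets/all_logs \
    --organization=123456789012 --include-children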

NEW QUESTION 174


You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to
automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?

A. Create an instance template for the instances. Set the ‘Automatic Restart’ to on. Set the ‘On-host maintenance’ to Migrate VM instance. Add the instance template to an instance group.


B. Create an instance template for the instances. Set ‘Automatic Restart’ to off. Set ‘On-host maintenance’ to Terminate VM instances. Add the instance template to an instance group.
C. Create an instance group for the instances. Set the ‘Autohealing’ health check to healthy (HTTP).
D. Create an instance group for the instances. Verify that the ‘Advanced creation options’ setting for ‘do not retry machine creation’ is set to off.

Answer: A

Explanation:
Create an instance template for the instances so VMs have same specs. Set the "˜Automatic Restart' to on to VM automatically restarts upon crash. Set the "˜On-
host maintenance' to Migrate VM instance. This will take care of VM during maintenance window. It will migrate VM instance making it highly available Add the
instance template to an instance group so instances can be managed.
• onHostMaintenance: Determines the behavior when a maintenance event occurs that might cause your instance to reboot.
• [Default] MIGRATE, which causes Compute Engine to live migrate an instance when there is a maintenance event.
• TERMINATE, which stops an instance instead of migrating it.
• automaticRestart: Determines the behavior when an instance crashes or is stopped by the system.
• [Default] true, so Compute Engine restarts an instance if the instance crashes or is stopped.
• false, so Compute Engine does not restart an instance if the instance crashes or is stopped.
Enabling automatic restart ensures that compute engine instances are automatically restarted when they crash. And Enabling Migrate VM Instance enables live
migrates i.e. compute instances are migrated during system maintenance and remain running during the migration.
Automatic Restart If your instance is set to terminate when there is a maintenance event, or if your instance crashes because of an underlying hardware issue, you
can set up Compute Engine to automatically restart the instance by setting the automaticRestart field to true. This setting does not apply if the instance is taken
offline through a user action, such as calling sudo shutdown, or during a zone outage.Ref: https://cloud.google.com/compute/docs/instances/setting-instance-
scheduling-options#autorestart


Enabling the Migrate VM Instance option migrates your instance away from an infrastructure maintenance event, and your instance remains running during the migration. Your instance might experience a short period of decreased performance, although generally, most instances should not notice any difference. This is ideal for instances that require constant uptime and can tolerate a short period of decreased performance.
Ref: https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#live_
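
A sketch of an instance template with these scheduling options (the template name and machine type are placeholders; MIGRATE and automatic restart are also the defaults):
gcloud compute instance-templates create my-template \
    --machine-type=e2-medium \
    --maintenance-policy=MIGRATE \
    --restart-on-failure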

NEW QUESTION 176


You have successfully created a development environment in a project for an application. This application uses Compute Engine and Cloud SQL. Now, you need
to create a production environment for this application.
The security team has forbidden the existence of network routes between these 2 environments, and asks you to follow Google-recommended practices. What
should you do?

A. Create a new project, enable the Compute Engine and Cloud SQL APIs in that project, and replicate the setup you have created in the development
environment.
B. Create a new production subnet in the existing VPC and a new production Cloud SQL instance in your existing project, and deploy your application using those
resources.
C. Create a new project, modify your existing VPC to be a Shared VPC, share that VPC with your new project, and replicate the setup you have in the
development environment in that new project, in the Shared VPC.
D. Ask the security team to grant you the Project Editor role in an existing production project used by another division of your company. Once they grant you that role, replicate the setup you have in the development environment in that project.

Answer: A

Explanation:
This aligns with Googles recommended practices. By creating a new project, we achieve complete isolation between development and production environments;
as well as isolate this production application from production applications of other departments.
Ref: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#define-hierarchy

NEW QUESTION 180


You created a cluster.YAML file containing
resources:
- name: cluster
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      description: My GCP ACE cluster
      initialNodeCount: 2
You want to use Cloud Deployment Manager to create this cluster in GKE. What should you do?

A. gcloud deployment-manager deployments create my-gcp-ace-cluster --config cluster.yaml


B. gcloud deployment-manager deployments create my-gcp-ace-cluster --type container.v1.cluster --config cluster.yaml
C. gcloud deployment-manager deployments apply my-gcp-ace-cluster --type container.v1.cluster --config cluster.yaml
D. gcloud deployment-manager deployments apply my-gcp-ace-cluster --config cluster.yaml

Answer: A

Explanation:
gcloud deployment-manager deployments create creates deployments based on the configuration file. (Infrastructure as code). All the configuration related to the
artifacts is in the configuration file. This command correctly creates a cluster based on the provided cluster.yaml configuration file.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/create

NEW QUESTION 185


You have one GCP account running in your default region and zone and another account running in a
non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface.
What should you do?

A. Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts
when running the commands to start the Compute Engine instances.
B. Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances.
C. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances.
D. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.

Answer: A

Explanation:
"Run gcloud configurations list to start the Compute Engine instances". How the heck are you expecting to "start" GCE instances doing "configuration list".
Each gcloud configuration has a 1 to 1 relationship with the region (if a region is defined). Since we have two different regions, we would need to create two
separate configurations using gcloud config configurations createRef: https://cloud.google.com/sdk/gcloud/reference/config/configurations/create
Secondly, you can activate each configuration independently by running gcloud config configurations activate [NAME]Ref:
https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate
Finally, while each configuration is active, you can run the gcloud compute instances start [NAME] command to start the instance in the configurations
region.https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
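
A sketch of the flow (configuration, account, project, and zone values are placeholders):
gcloud config configurations create config-account-one
gcloud config set account user1@example.com
gcloud config set project project-one
gcloud config set compute/zone us-central1-a
gcloud config configurations activate config-account-one
gcloud compute instances start my-instance --zone=us-central1-a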

NEW QUESTION 190


You are running multiple microservices in a Kubernetes Engine cluster. One microservice is rendering images. The microservice responsible for the image
rendering requires a large amount of CPU time compared to the memory it requires. The other microservices are workloads that are optimized for n1-standard machine types. You need to optimize your cluster so that all workloads are using resources as efficiently as possible. What should you do?

A. Assign the pods of the image rendering microservice a higher pod priority than the other microservices.
B. Create a node pool with compute-optimized machine type nodes for the image rendering microservice. Use the node pool with general-purpose machine type nodes for the other microservices.
C. Use the node pool with general-purpose machine type nodes for the image rendering microservice. Create a node pool with compute-optimized machine type nodes for the other microservices.
D. Configure the required amount of CPU and memory in the resource requests specification of the image rendering microservice deployment. Keep the resource requests for the other microservices at the default.

Answer: B
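
Explanation:
A sketch of adding the compute-optimized pool (the cluster name, zone, and machine type are placeholders):
gcloud container node-pools create render-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --machine-type=c2-standard-8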

NEW QUESTION 192


You are deploying a production application on Compute Engine. You want to prevent anyone from accidentally destroying the instance by clicking the wrong
button. What should you do?

A. Disable the flag “Delete boot disk when instance is deleted.”


B. Enable delete protection on the instance.
C. Disable Automatic restart on the instance.
D. Enable Preemptibility on the instance.

Answer: B

Explanation:
Preventing Accidental VM Deletion This document describes how to protect specific VM instances from deletion by setting the deletionProtection property on an
Instance resource. To learn more about VM instances, read the Instances documentation. As part of your workload, there might be certain VM instances that are
critical to running your application or services, such as an instance running a SQL server, a server used as a license manager, and so on. These VM instances
might need to stay running indefinitely so you need a way to protect these VMs from being deleted. By setting the deletionProtection flag, a VM instance can be
protected from accidental deletion. If a user attempts to delete a VM instance for which you have set the deletionProtection flag, the request fails. Only a user that
has been granted a role with compute.instances.create permission can reset the flag to allow the resource to be deleted.
https://cloud.google.com/compute/docs/instances/preventing-accidental-vm-deletion
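
Delete protection can be enabled at creation time or later; a minimal sketch with placeholder names:
gcloud compute instances update my-instance \
    --zone=us-central1-a --deletion-protection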

NEW QUESTION 197


You have a Bigtable instance that consists of three nodes that store personally identifiable information (PII) data. You need to log all read or write operations, including any metadata or configuration reads of this database table, in your company's Security Information and Event Management (SIEM) system. What should you do?

A. • Navigate to Cloud Monitoring in the Google Cloud console, and create a custom monitoring job for the Bigtable instance to track all changes. • Create an alert by using webhook endpoints, with the SIEM endpoint as a receiver.
B. • Navigate to the Audit Logs page in the Google Cloud console, and enable Data Read, Data Write and Admin Read logs for the Bigtable instance. • Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a subscriber to the topic.
C. • Install the Ops Agent on the Bigtable instance during configuration. • Create a service account with read permissions for the Bigtable instance. • Create a custom Dataflow job with this service account to export logs to the company's SIEM system.
D. • Navigate to the Audit Logs page in the Google Cloud console, and enable Admin Write logs for the Bigtable instance. • Create a Cloud Functions instance to export logs from Cloud Logging to your SIEM.

Answer: B
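
Explanation:
A sketch of the sink side with placeholder names; the log filter is deliberately broad and illustrative, and you would narrow it to the Bigtable instance's audit logs:
gcloud pubsub topics create siem-export
gcloud logging sinks create bigtable-audit-sink \
    pubsub.googleapis.com/projects/my-project/topics/siem-export \
    --log-filter='logName:"cloudaudit.googleapis.com"'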

NEW QUESTION 199


Your team maintains the infrastructure for your organization. The current infrastructure requires changes. You need to share your proposed changes with the rest
of the team. You want to follow Google’s recommended best practices. What should you do?

A. Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.
B. Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.
C. Apply the change in a development environment, run gcloud compute instances list, and then save the output in a shared Storage bucket.
D. Apply the change in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.

Answer: B

Explanation:
Showing Deployment Manager templates to your team will allow you to define the changes you want to implement in your cloud infrastructure. You can use Cloud
Source Repositories to store Deployment Manager templates and collaborate with your team. Cloud Source Repositories are fully-featured, scalable, and private
Git repositories you can use to store, manage and track changes to your code.
https://cloud.google.com/source-repositories/docs/features
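
For instance (the repository name is a placeholder):
gcloud source repos create infra-templates
gcloud source repos clone infra-templates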

NEW QUESTION 201


Your organization has user identities in Active Directory. Your organization wants to use Active Directory as their source of truth for identities. Your organization
wants to have full control over the Google accounts used by employees for all Google services, including your Google Cloud Platform (GCP) organization. What
should you do?

A. Use Google Cloud Directory Sync (GCDS) to synchronize users into Cloud Identity.
B. Use the cloud Identity APIs and write a script to synchronize users to Cloud Identity.
C. Export users from Active Directory as a CSV and import them to Cloud Identity via the Admin Console.
D. Ask each employee to create a Google account using self signup. Require that each employee use their company email address and password.


Answer: A

NEW QUESTION 202


You have a Linux VM that must connect to Cloud SQL. You created a service account with the appropriate access rights. You want to make sure that the VM uses
this service account instead of the default Compute Engine service account. What should you do?

A. When creating the VM via the web console, specify the service account under the ‘Identity and API Access’ section.
B. Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account.
C. Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account.
D. Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.

Answer: A
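
Explanation:
The same can be done from the CLI at creation time; a sketch with placeholder names (the scope shown is an assumption for Cloud SQL access):
gcloud compute instances create my-vm \
    --service-account=my-sa@my-project.iam.gserviceaccount.com \
    --scopes=https://www.googleapis.com/auth/sqlservice.admin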

NEW QUESTION 205


Your company wants to migrate their on-premises workloads to Google Cloud. The current on-premises workloads consist of:
• A Flask web application
• A backend API
• A scheduled long-running background job for ETL and reporting
You need to keep operational costs low. You want to follow Google-recommended practices to migrate these workloads to serverless solutions on Google Cloud. What should you do?

A. Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
B. Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
C. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
D. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.

Answer: B

NEW QUESTION 206


You have a large 5-TB AVRO file stored in a Cloud Storage bucket. Your analysts are proficient only in SQL and need access to the data stored in this file. You
want to find a cost-effective way to complete their request as soon as possible. What should you do?

A. Load data in Cloud Datastore and run a SQL query against it.
B. Create a BigQuery table and load data in BigQuery. Run a SQL query on this table and drop this table after you complete your request.
C. Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on these external tables to complete your request.
D. Create a Hadoop cluster and copy the AVRO file to HDFS by compressing it. Load the file in a Hive table and provide access to your analysts so that they can run SQL queries.

Answer: C

Explanation:
https://cloud.google.com/bigquery/external-data-sources
An external data source is a data source that you can query directly from BigQuery, even though the data is not stored in BigQuery storage.
BigQuery supports the following external data sources: Amazon S3, Azure Storage, Cloud Bigtable, Cloud Spanner, Cloud SQL, Cloud Storage, and Drive.
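
A sketch of defining the external table with standard SQL DDL (the dataset, table, and bucket names are placeholders):
bq query --nouse_legacy_sql '
CREATE EXTERNAL TABLE mydataset.avro_data
OPTIONS (format = "AVRO", uris = ["gs://my-bucket/large-file.avro"])'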

NEW QUESTION 208


You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning
(ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

A. Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.
B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
D. Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.

Answer: D

Explanation:
This is the most optimal solution. Rather than recreating all nodes, you create a new node pool with GPU enabled. You then modify the pod specification to target
particular GPU types by adding node selector to your workloads Pod specification. YOu still have a single cluster so you pay Kubernetes cluster management fee
for just one cluster thus minimizing the
cost.Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/gpusRef: https://cloud.google.com/kubern
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-pod
spec:
  containers:
  - name: my-gpu-container
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    command: ["/bin/bash"]
    resources:
      limits:
        nvidia.com/gpu: 2
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-k80 # or nvidia-tesla-p100 or nvidia-tesla-p4 or nvidia-tesla-v100 or nvidia-tesla-t4
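
Creating the GPU node pool itself might look like this sketch (the cluster name and zone are placeholders):
gcloud container node-pools create gpu-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --accelerator=type=nvidia-tesla-p100,count=1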

NEW QUESTION 209


You deployed a new application inside your Google Kubernetes Engine cluster using the YAML file specified below.

You check the status of the deployed pods and notice that one of them is still in PENDING status:

You want to find out why the pod is stuck in pending status. What should you do?

A. Review details of the myapp-service Service object and check for error messages.
B. Review details of the myapp-deployment Deployment object and check for error messages.
C. Review details of myapp-deployment-58ddbbb995-lp86m Pod and check for warning messages.
D. View logs of the container in myapp-deployment-58ddbbb995-lp86m pod and check for warning messages.

Answer: C

Explanation:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods
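
For example, using the pod name from the question:
kubectl describe pod myapp-deployment-58ddbbb995-lp86m
The Events section at the bottom of the output typically shows why the scheduler cannot place the pod (for example, insufficient CPU or memory).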

NEW QUESTION 212


Users of your application are complaining of slowness when loading the application. You realize the slowness is because the App Engine deployment serving the
application is deployed in us-central whereas all users of this application are closest to europe-west3. You want to change the region of the App Engine application
to europe-west3 to minimize latency. What’s the best way to change the App Engine region?

A. Create a new project and create an App Engine instance in europe-west3


B. Use the gcloud app region set command and supply the name of the new region.
C. From the console, under the App Engine page, click edit, and change the region drop-down.
D. Contact Google Cloud Support and request the change.

Answer: A

Explanation:
App Engine is a regional service, which means the infrastructure that runs your app(s) is located in a specific region and is managed by Google to be redundantly available across all the zones within that region. Once an App Engine deployment is created in a region, it can't be changed. The only way is to create a new project, create an App Engine instance in europe-west3, send all user traffic to this instance, and delete the App Engine instance in us-central.
Ref: https://cloud.google.com/appengine/docs/locations
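
In the new project, the region is chosen once at creation time; a sketch with a placeholder project ID:
gcloud app create --project=my-new-project --region=europe-west3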


NEW QUESTION 214


You are building an application that stores relational data from users. Users across the globe will use this application. Your CTO is concerned about the scaling
requirements because the size of the user base is unknown. You need to implement a database solution that can scale with your user growth with minimum
configuration changes. Which storage solution should you use?

A. Cloud SQL
B. Cloud Spanner
C. Cloud Firestore
D. Cloud Datastore

Answer: B

Explanation:
Cloud Spanner is a relational database and is highly scalable. It is a highly scalable, enterprise-grade, globally-distributed, and strongly consistent database service built for the cloud, specifically to combine the benefits of relational database structure with non-relational horizontal scale. This combination delivers high-performance transactions and strong consistency across rows, regions, and continents, with an industry-leading 99.999% availability SLA, no planned downtime, and enterprise-grade security.
Ref: https://cloud.google.com/spanner
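
A sketch of provisioning an instance (the names are placeholders; the node count and configuration can be scaled later with minimal changes):
gcloud spanner instances create my-instance \
    --config=regional-us-central1 \
    --description="User data" --nodes=1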

NEW QUESTION 216


You have an application that receives SSL-encrypted TCP traffic on port 443. Clients for this application are located all over the world. You want to minimize
latency for the clients. Which load balancing option should you use?

A. HTTPS Load Balancer


B. Network Load Balancer
C. SSL Proxy Load Balancer
D. Internal TCP/UDP Load Balancer. Add a firewall rule allowing ingress traffic from 0.0.0.0/0 on the target instances.

Answer: C

NEW QUESTION 221


You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated. What should
you do?

A. Create a health check on port 443 and use that when creating the Managed Instance Group.
B. Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.
C. In the Instance Template, add the label ‘health-check’.
D. In the Instance Template, add a startup script that sends a heartbeat to the metadata server.

Answer: A

Explanation:
https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs#setting_up_an_autoheali
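
A sketch of the two steps (the names, template, zone, and sizes are placeholders):
gcloud compute health-checks create https my-https-check --port=443
gcloud compute instance-groups managed create my-mig \
    --template=my-template --size=3 --zone=us-central1-a \
    --health-check=my-https-check --initial-delay=300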

NEW QUESTION 222


You want to select and configure a cost-effective solution for relational data on Google Cloud Platform. You are working with a small set of operational data in one
geographic location. You need to support point-in-time recovery. What should you do?

A. Select Cloud SQL (MySQL). Verify that the enable binary logging option is selected.
B. Select Cloud SQL (MySQL). Select the create failover replicas option.
C. Select Cloud Spanner. Set up your instance with 2 nodes.
D. Select Cloud Spanner. Set up your instance as multi-regional.

Answer: A
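
Explanation:
A sketch of creating such an instance (the name, tier, and times are placeholders; binary logging requires automated backups to be enabled):
gcloud sql instances create my-instance \
    --database-version=MYSQL_8_0 --tier=db-n1-standard-1 \
    --region=us-central1 --backup-start-time=02:00 --enable-bin-log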

NEW QUESTION 224


You want to permanently delete a Pub/Sub topic managed by Config Connector in your Google Cloud project. What should you do?


A. Use kubectl to delete the topic resource.


B. Use gcloud CLI to delete the topic.
C. Use kubectl to create the label deleted-by-cnrm and to change its value to true for the topic resource.
D. Use gcloud CLI to update the topic label managed-by-cnrm to false.

Answer: A
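
Explanation:
A sketch of the deletion, assuming a Config Connector PubSubTopic resource named my-topic in namespace my-namespace:
kubectl delete pubsubtopic my-topic -n my-namespace
Deleting the Kubernetes resource causes Config Connector to delete the underlying Pub/Sub topic in Google Cloud, whereas deleting the topic with gcloud would leave the Config Connector resource in place to recreate it.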

NEW QUESTION 229


You created a Google Cloud Platform project with an App Engine application inside the project. You initially configured the application to be served from the us-
central region. Now you want the application to be served from the asia-northeast1 region. What should you do?

A. Change the default region property setting in the existing GCP project to asia-northeast1.
B. Change the region property setting in the existing App Engine application from us-central to asia-northeast1.
C. Create a second App Engine application in the existing GCP project and specify asia-northeast1 as the region to serve your application.
D. Create a new GCP project and create an App Engine application inside this new project. Specify asia-northeast1 as the region to serve your application.

Answer: D

Explanation:
https://cloud.google.com/appengine/docs/flexible/managing-projects-apps-billing#:~:text=Each%20Cloud%20p Two App engine can't be running on the same
project: you can check this easy diagram for more info:
https://cloud.google.com/appengine/docs/standard/an-overview-of-app-engine#components_of_an_application
And you can't change location after setting it for your app Engine. https://cloud.google.com/appengine/docs/standard/locations
App Engine is regional and you cannot change an apps region after you set it. Therefore, the only way to have an app run in another region is by creating a new
project and targeting the app engine to run in the required region (asia-northeast1 in our case).
Ref: https://cloud.google.com/appengine/docs/locations

NEW QUESTION 232


You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE cluster. For each of your customers, a Pod is running in that
cluster, and your customers can run arbitrary code inside their Pod. You want to maximize the isolation between your customers’ Pods. What should you do?

A. Use Binary Authorization and whitelist only the container images used by your customers’ Pods.
B. Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.
C. Create a GKE node pool with a sandbox type configured to gVisor. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.
D. Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.

Answer: C
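
Explanation:
A sketch of the sandbox node pool (the cluster name and zone are placeholders); note that GKE Sandbox cannot be enabled on a cluster's default node pool:
gcloud container node-pools create sandbox-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --sandbox=type=gvisor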

NEW QUESTION 234


Your company publishes large files on an Apache web server that runs on a Compute Engine instance. The Apache web server is not the only application running
in the project. You want to receive an email when the egress network costs for the server exceed 100 dollars for the current month as measured by Google Cloud
Platform (GCP). What should you do?

A. Set up a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”
B. Set up a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”
C. Export the billing data to BigQuery. Create a Cloud Function that uses BigQuery to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly.
D. Use the Stackdriver Logging Agent to export the Apache web server logs to Stackdriver Logging. Create a Cloud Function that uses BigQuery to parse the HTTP response log data in Stackdriver for the current month and sends an email if the size of all HTTP responses, multiplied by current GCP egress prices, totals over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly.

Answer: C

Explanation:
https://blog.doit-intl.com/the-truth-behind-google-cloud-egress-traffic-6e8f57b5c2f8

NEW QUESTION 239


Your organization uses G Suite for communication and collaboration. All users in your organization have a G Suite account. You want to grant some G Suite users
access to your Cloud Platform project. What should you do?

A. Enable Cloud Identity in the GCP Console for your domain.


B. Grant them the required IAM roles using their G Suite email address.
C. Create a CSV sheet with all users’ email addresses. Use the gcloud command line tool to convert them into Google Cloud Platform accounts.
D. In the G Suite console, add the users to a special group called cloud-console-users@yourdomain.com. Rely on the default behavior of the Cloud Platform to grant users access if they are members of this group.

Answer: B
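G Suite accounts are already valid Google identities, so no conversion step is needed; you grant IAM roles to the users’ email addresses directly. A sketch with a hypothetical project, user, and role:

# Grant a G Suite user a role on the project by email address
gcloud projects add-iam-policy-binding my-project \
    --member="user:alice@yourdomain.com" \
    --role="roles/viewer"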

NEW QUESTION 240


You have experimented with Google Cloud using your own credit card and expensed the costs to your company. Your company wants to streamline the billing
process and charge the costs of your projects to their monthly invoice. What should you do?


A. Grant the financial team the IAM role of “Billing Account User” on the billing account linked to your credit card.
B. Set up BigQuery billing export and grant your financial department IAM access to query the data.
C. Create a ticket with Google Billing Support to ask them to send the invoice to your company.
D. Change the billing account of your projects to the billing account of your company.

Answer: D
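Re-parenting a project to the company’s billing account is a single operation. A sketch with placeholder IDs:

# Link the project to the company billing account
gcloud beta billing projects link my-project \
    --billing-account=0X0X0X-0X0X0X-0X0X0X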

NEW QUESTION 241


You have a website hosted on App Engine standard environment. You want 1% of your users to see a new test version of the website. You want to minimize
complexity. What should you do?

A. Deploy the new version in the same application and use the --migrate option.
B. Deploy the new version in the same application and use the --splits option to give a weight of 99 to the current version and a weight of 1 to the new version.
C. Create a new App Engine application in the same project. Deploy the new version in that application. Use the App Engine library to proxy 1% of the requests to the new version.
D. Create a new App Engine application in the same project. Deploy the new version in that application. Configure your network load balancer to send 1% of the traffic to that new application.

Answer: B

Explanation:
https://cloud.google.com/appengine/docs/standard/python/splitting-traffic#gcloud
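A sketch of the deployment and split, assuming the default service and hypothetical version IDs v1 (current) and v2 (test):

# Deploy the test version without routing traffic to it
gcloud app deploy --no-promote --version=v2

# Send 1% of traffic to the new version
gcloud app services set-traffic default --splits=v1=0.99,v2=0.01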

NEW QUESTION 245


You need to track and verify modifications to a set of Google Compute Engine instances in your Google Cloud project. In particular, you want to verify OS system
patching events on your virtual machines (VMs). What should you do?

A. Review the Compute Engine activity logs. Select and review the Admin Event logs.
B. Review the Compute Engine activity logs. Select and review the System Event logs.
C. Install the Cloud Logging Agent. In Cloud Logging, review the Compute Engine syslog logs.
D. Install the Cloud Logging Agent. In Cloud Logging, review the Compute Engine operation logs.

Answer: A
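Audit log entries can also be inspected from the command line. A sketch that lists recent Admin Activity audit entries for Compute Engine instances:

# Read Admin Activity audit log entries for Compute Engine instances
gcloud logging read \
    'logName:"cloudaudit.googleapis.com%2Factivity" AND resource.type="gce_instance"' \
    --limit=20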

NEW QUESTION 249


You are working with a Cloud SQL MySQL database at your company. You need to retain a month-end copy of the database for three years for audit purposes.
What should you do?

A. Save the automatic first-of-the-month backup for three years. Store the backup file in an Archive class Cloud Storage bucket.
B. Convert the automatic first-of-the-month backup to an export file. Write the export file to a Coldline class Cloud Storage bucket.
C. Set up an export job for the first of the month. Write the export file to an Archive class Cloud Storage bucket.
D. Set up an on-demand backup for the first of the month. Write the backup to an Archive class Cloud Storage bucket.

Answer: C

Explanation:
Cloud SQL backups cannot be exported out of the instance, so a scheduled export written to Cloud Storage is the way to keep a three-year audit copy, and the Archive storage class is the best fit for long-term, rarely accessed data.
https://cloud.google.com/sql/docs/mysql/backup-recovery/backups#can_i_export_a_backup https://cloud.google.com/sql/docs/mysql/import-export#automating_export_operations
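A sketch of the monthly export, with hypothetical instance, bucket, and database names:

# One-time setup: create the long-term bucket with the Archive storage class
gsutil mb -c archive -l us-central1 gs://my-sql-audit-exports

# Run on the first of the month (e.g. from a scheduled job); a .gz suffix compresses the export
gcloud sql export sql my-instance \
    gs://my-sql-audit-exports/export-$(date +%Y%m).sql.gz \
    --database=mydb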

NEW QUESTION 252


You have deployed an application on a single Compute Engine instance. The application writes logs to disk. Users start reporting errors with the application. You
want to diagnose the problem. What should you do?

A. Navigate to Cloud Logging and view the application logs.


B. Connect to the instance’s serial console and read the application logs.
C. Configure a Health Check on the instance and set a Low Healthy Threshold value.
D. Install and configure the Cloud Logging Agent and view the logs from Cloud Logging.

Answer: D
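The application writes its logs to disk on the instance, so they do not reach Cloud Logging until an agent ships them there. A sketch of installing the Cloud Logging agent on a Linux VM, using Google’s documented install script:

# On the instance: download and install the Cloud Logging agent
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install

If the application logs to a custom path, an additional fluentd source config under /etc/google-fluentd/config.d/ tells the agent to tail that file.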

NEW QUESTION 253


You are working for a hospital that stores its medical images in an on-premises data room. The hospital wants to use Cloud Storage for archival storage of these
images. The hospital wants an automated process to upload any new medical images to Cloud Storage. You need to design and implement a solution. What
should you do?

A. Deploy a Dataflow job from the batch template “Datastore to Cloud Storage”. Schedule the batch job on the desired interval.
B. In the Cloud Console, go to Cloud Storage. Upload the relevant images to the appropriate bucket.
C. Create a script that uses the gsutil command line interface to synchronize the on-premises storage with Cloud Storage. Schedule the script as a cron job.
D. Create a Pub/Sub topic, and enable a Cloud Storage trigger for the Pub/Sub topic. Create an application that sends all medical images to the Pub/Sub topic.

Answer: C

Explanation:
they require cloud storage for archival and the want to automate the process to upload new medical image to cloud storage, hence we go for gsutil to copy on-
prem images to cloud storage and automate the process via cron job. whereas Pub/Sub listens to the changes in the Cloud Storage bucket and triggers the
pub/sub topic, which is not required.
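A sketch of the script and schedule, with hypothetical paths and bucket name:

# sync-images.sh: mirror the on-premises image directory into the bucket
gsutil -m rsync -r /data/medical-images gs://hospital-image-archive

# Example crontab entry: run the sync every hour
0 * * * * /usr/local/bin/sync-images.sh >> /var/log/sync-images.log 2>&1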


NEW QUESTION 256


Your company implemented BigQuery as an enterprise data warehouse. Users from multiple business units run queries on this data warehouse. However, you
notice that query costs for BigQuery are very high, and you need to control costs. Which two methods should you use? (Choose two.)

A. Split the users from business units to multiple projects.


B. Apply a user- or project-level custom query quota for BigQuery data warehouse.
C. Create separate copies of your BigQuery data warehouse for each business unit.
D. Split your BigQuery data warehouse into multiple data warehouses for each business unit.
E. Change your BigQuery query model from on-demand to flat rate. Apply the appropriate number of slots to each project.

Answer: BE

Explanation:
Custom query quotas cap how many bytes on-demand queries may process per user or per project each day, and flat-rate pricing replaces per-byte charges with a fixed cost for reserved slots, making spend predictable.
https://cloud.google.com/bigquery/docs/custom-quotas https://cloud.google.com/bigquery/pricing#flat_rate_pricing
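Custom quotas themselves are configured on the quotas page in the Cloud Console; a related per-query safeguard can be applied from the CLI. A sketch with a hypothetical table:

# Cap the bytes a single on-demand query may scan (the query fails instead of overspending)
bq query --maximum_bytes_billed=1000000000 --use_legacy_sql=false \
    'SELECT name FROM `my-project.dataset.table` LIMIT 10'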

NEW QUESTION 261


An application generates daily reports in a Compute Engine virtual machine (VM). The VM is in the project corp-iot-insights. Your team operates only in the project
corp-aggregate-reports and needs a copy of the daily exports in the bucket corp-aggregate-reports-storage. You want to configure access so that the daily reports
from the VM are available in the bucket corp-aggregate-reports-storage and use as few steps as possible while following Google-recommended practices. What
should you do?

A. Move both projects under the same folder.


B. Grant the VM Service Account the role Storage Object Creator on corp-aggregate-reports-storage.
C. Create a Shared VPC network between both projects. Grant the VM Service Account the role Storage Object Creator on corp-iot-insights.
D. Make corp-aggregate-reports-storage public and create a folder with a pseudo-randomized suffix name. Share the folder with the IoT team.

Answer: B

Explanation:
Storage Object Creator (roles/storage.objectCreator) allows users to create objects; it does not give permission to view, delete, or overwrite objects. Identity and Access Management (IAM) roles for Cloud Storage can be applied either to entire projects or to specific buckets, so granting the role on the single bucket is the least-privilege, fewest-steps option.
https://cloud.google.com/storage/docs/access-control/iam-roles#standard-roles
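A one-command sketch of the grant; the service account address is a hypothetical placeholder for the VM’s service account in corp-iot-insights:

# Allow the VM's service account to create objects in the target bucket
# (objectCreator is gsutil shorthand for roles/storage.objectCreator)
gsutil iam ch \
    serviceAccount:daily-reports@corp-iot-insights.iam.gserviceaccount.com:objectCreator \
    gs://corp-aggregate-reports-storage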

NEW QUESTION 263


......
