Associate Cloud Engineer - 5

The document promotes Exambible's PREMIUM Associate-Cloud-Engineer Dumps, offering high-quality IT exam practice materials and a guarantee of a 100% pass rate. It outlines various questions and answers related to Google Cloud Certified - Associate Cloud Engineer exam topics, including service account management, identity synchronization, resource monitoring, and deployment strategies. Exambible emphasizes its unique advantages and commitment to supporting candidates in their exam preparation.


We recommend you to try the PREMIUM Associate-Cloud-Engineer Dumps From Exambible

https://www.exambible.com/Associate-Cloud-Engineer-exam/ (244 Q&As)

Google
Exam Questions Associate-Cloud-Engineer
Google Cloud Certified - Associate Cloud Engineer

About Exambible

Your Partner of IT Exam

Founded in 1998

Exambible is a company specialized in providing high-quality IT exam practice study materials, especially Cisco CCNA, CCDA,
CCNP, CCIE, Checkpoint CCSE, CompTIA A+, Network+ certification practice exams and so on. We guarantee that candidates
will not only pass any IT exam on the first attempt but also gain a profound understanding of the certifications they have
earned. There are many similar companies in this industry; however, Exambible has unique advantages that other companies
cannot match.

Our Advantages

* 99.9% Uptime
All examinations will be up to date.
* 24/7 Quality Support
We will provide service round the clock.
* 100% Pass Rate
Our guarantee that you will pass the exam.
* Unique Guarantee
If you do not pass the exam on the first attempt, we will not only arrange a FULL REFUND for you, but also provide you with another
exam of your choice, ABSOLUTELY FREE!

NEW QUESTION 1
You recently discovered that your developers are using many service account keys during their development process. While you work on a long term
improvement, you need to quickly implement a process to enforce short-lived service account credentials in your company. You have the following requirements:
• All service accounts that require a key should be created in a centralized project called pj-sa.
• Service account keys should only be valid for one day.
You need a Google-recommended solution that minimizes cost. What should you do?

A. Implement a Cloud Run job to rotate all service account keys periodically in pj-sa.
B. Enforce an org policy to deny service account key creation with an exception to pj-sa.
C. Implement a Kubernetes CronJob to rotate all service account keys periodically.
D. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
E. Enforce an org policy constraint allowing the lifetime of service account keys to be 24 hours.
F. Enforce an org policy constraint denying service account key creation with an exception on pj-sa.
G. Enforce a DENY org policy constraint over the lifetime of service account keys for 24 hours.
H. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.

Answer: C

Explanation:
According to the Google Cloud documentation, you can use organization policy constraints to control the creation and expiration of service account keys. The
constraints are:
constraints/iam.allowServiceAccountKeyCreation: This constraint allows you to specify which projects
or folders can create service account keys. You can set the value to true or false, or use a condition to apply the constraint to specific service accounts. By setting
this constraint to false for the organization and adding an exception for the pj-sa project, you can prevent developers from creating service account keys in other
projects.
constraints/iam.serviceAccountKeyMaxLifetime: This constraint allows you to specify the maximum lifetime of service account keys. You can set the value to a
duration in seconds, such as 86400 for one day. By setting this constraint to 86400 for the organization, you can ensure that all service account keys expire after one
day.
These constraints are recommended by Google Cloud as best practices to minimize the risk of service account key misuse or compromise. They also help you
reduce the cost of managing service account keys, as you do not need to implement a custom solution to rotate or delete them.
References:
1: Associate Cloud Engineer Certification Exam Guide | Learn - Google Cloud
5: Create and delete service account keys - Google Cloud
Organization policy constraints for service accounts
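
As a rough sketch (the organization ID 123456789012 is a placeholder), the boolean constraint that blocks key creation could be enforced at the organization level and relaxed only on pj-sa:

gcloud resource-manager org-policies enable-enforce iam.disableServiceAccountKeyCreation --organization=123456789012
gcloud resource-manager org-policies disable-enforce iam.disableServiceAccountKeyCreation --project=pj-sa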

NEW QUESTION 2
Your organization uses Active Directory (AD) to manage user identities. Each user uses this identity for federated access to various on-premises systems. Your
security team has adopted a policy that requires users to log into Google Cloud with their AD identity instead of their own login. You want to follow the
Google-recommended practices to implement this policy. What should you do?

A. Sync Identities with Cloud Directory Sync, and then enable SAML for single sign-on
B. Sync Identities in the Google Admin console, and then enable OAuth for single sign-on
C. Sync identities with 3rd party LDAP sync, and then copy passwords to allow simplified login with the same credentials
D. Sync identities with Cloud Directory Sync, and then copy passwords to allow simplified login with the same credentials.

Answer: A

NEW QUESTION 3
You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver
Monitoring dashboard. What should you do?

A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.
B. For each project, create a Stackdriver account.
C. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.
D. Configure a single Stackdriver account, and link all projects to the same account.
E. Configure a single Stackdriver account for one of the projects.
F. In Stackdriver, create a Group and add the other project names as criteria for that Group.

Answer: C

Explanation:
When you initially click on Monitoring (Stackdriver Monitoring), it creates a workspace (a Stackdriver account) linked to the ACTIVE (CURRENT) project from which it
was clicked.
If you then change the project and click on Monitoring again, it creates another workspace (a Stackdriver account) linked to the new
ACTIVE (CURRENT) project. We don't want this, as it would not consolidate the results into a single dashboard (workspace/Stackdriver account).
If you have accidentally created two different workspaces, merge them under Monitoring > Settings > Merge Workspaces > MERGE.
If we have only one workspace and two projects, we can simply add the other GCP project under Monitoring > Settings > GCP Projects > Add GCP Projects.
https://cloud.google.com/monitoring/settings/multiple-projects
Nothing about groups https://cloud.google.com/monitoring/settings?hl=en

NEW QUESTION 4
Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the
solution. What should you do?

A. Search for the CMS solution in Google Cloud Marketplace.


B. Use gcloud CLI to deploy the solution.

C. Search for the CMS solution in Google Cloud Marketplace.


D. Deploy the solution directly from Cloud Marketplace.
E. Search for the CMS solution in Google Cloud Marketplace.
F. Use Terraform and the Cloud Marketplace ID to deploy the solution with the appropriate parameters.
G. Use the installation guide of the CMS provider.
H. Perform the installation through your configuration management system.

Answer: B

NEW QUESTION 5
You have a developer laptop with the Cloud SDK installed on Ubuntu. The Cloud SDK was installed from the Google Cloud Ubuntu package repository. You want
to test your application locally on your laptop with Cloud Datastore. What should you do?

A. Export Cloud Datastore data using gcloud datastore export.


B. Create a Cloud Datastore index using gcloud datastore indexes create.
C. Install the google-cloud-sdk-datastore-emulator component using the apt get install command.
D. Install the cloud-datastore-emulator component using the gcloud components install command.

Answer: D

Explanation:

The Datastore emulator provides local emulation of the production Datastore environment. You can use the emulator to develop and test your application
locally. Ref: https://cloud.google.com/datastore/docs/tools/datastore-emulator
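
Once the emulator component is installed, a minimal local workflow might look like this (the port is an example):

gcloud beta emulators datastore start --host-port=localhost:8081
$(gcloud beta emulators datastore env-init)   # exports DATASTORE_EMULATOR_HOST so your app talks to the local emulator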

NEW QUESTION 6
Your company has a large quantity of unstructured data in different file formats. You want to perform ETL transformations on the data. You need to make the data
accessible on Google Cloud so it can be processed by a Dataflow job. What should you do?

A. Upload the data to BigQuery using the bq command line tool.


B. Upload the data to Cloud Storage using the gsutil command line tool.
C. Upload the data into Cloud SQL using the import function in the console.
D. Upload the data into Cloud Spanner using the import function in the console.

Answer: B

Explanation:
"large quantity" : Cloud Storage or BigQuery "files" a file is nothing but an Object

NEW QUESTION 7
You need to create a custom VPC with a single subnet. The subnet’s range must be as large as possible. Which range should you use?

A. 0.0.0.0/0
B. 10.0.0.0/8
C. 172.16.0.0/12
D. 192.168.0.0/16

Answer: B

Explanation:
https://cloud.google.com/vpc/docs/vpc#manually_created_subnet_ip_ranges
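
A sketch of creating such a custom-mode VPC with a single subnet using the 10.0.0.0/8 range (names and region are placeholders):

gcloud compute networks create my-vpc --subnet-mode=custom
gcloud compute networks subnets create my-subnet --network=my-vpc --region=us-central1 --range=10.0.0.0/8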

NEW QUESTION 8
You are developing a new application and are looking for a Jenkins installation to build and deploy your source code. You want to automate the installation as
quickly and easily as possible. What should you do?

A. Deploy Jenkins through the Google Cloud Marketplace.


B. Create a new Compute Engine instance.
C. Run the Jenkins executable.
D. Create a new Kubernetes Engine cluster.
E. Create a deployment for the Jenkins image.
F. Create an instance template with the Jenkins executable.
G. Create a managed instance group with this template.

Answer: A

Explanation:

Installing Jenkins
In this section, you use Cloud Marketplace to provision a Jenkins instance. You customize this instance to use the agent image you created in the previous
section.
Go to the Cloud Marketplace solution for Jenkins. Click Launch on Compute Engine.
Change the Machine Type field to 4 vCPUs 15 GB Memory, n1-standard-4.
Machine type selection for Jenkins deployment.
Click Deploy and wait for your Jenkins instance to finish being provisioned. When it is finished, you will see: Jenkins has been deployed.
https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine#installing_jenkins

NEW QUESTION 9
Your company has developed a new application that consists of multiple microservices. You want to deploy the application to Google Kubernetes Engine (GKE),
and you want to ensure that the cluster can scale as more applications are deployed in the future. You want to avoid manual intervention when each new
application is deployed. What should you do?

A. Deploy the application on GKE, and add a HorizontalPodAutoscaler to the deployment.


B. Deploy the application on GKE, and add a VerticalPodAutoscaler to the deployment.
C. Create a GKE cluster with autoscaling enabled on the node pool.
D. Set a minimum and maximum for the size of the node pool.
E. Create a separate node pool for each application, and deploy each application to its dedicated node pool.

Answer: C

Explanation:
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler#adding_a_node_pool_with_autoscaling
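
A minimal sketch of creating a cluster whose node pool autoscales between a minimum and maximum size (names and values are examples):

gcloud container clusters create my-cluster --zone=us-central1-a --num-nodes=3 --enable-autoscaling --min-nodes=1 --max-nodes=10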

NEW QUESTION 10
Your company is moving its entire workload to Compute Engine. Some servers should be accessible through the Internet, and other servers should only be
accessible over the internal network. All servers need to be able to talk to each other over specific ports and protocols. The current on-premises network relies on
a demilitarized zone (DMZ) for the public servers and a Local Area Network (LAN) for the private servers. You need to design the networking infrastructure on
Google Cloud to match these requirements. What should you do?

A. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN.
B. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
C. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN.
D. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
E. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN.
F. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
G. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN.
H. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.

Answer: C

Explanation:
https://cloud.google.com/vpc/docs/vpc-peering

NEW QUESTION 10
Your organization has a dedicated person who creates and manages all service accounts for Google Cloud projects. You need to assign this person the minimum
role for projects. What should you do?

A. Add the user to roles/iam.roleAdmin role.


B. Add the user to roles/iam.securityAdmin role.
C. Add the user to roles/iam.serviceAccountUser role.
D. Add the user to roles/iam.serviceAccountAdmin role.

Answer: D

NEW QUESTION 11
You want to add a new auditor to a Google Cloud Platform project. The auditor should be allowed to read, but not modify, all project items.
How should you configure the auditor's permissions?

A. Create a custom role with view-only project permissions.


B. Add the user's account to the custom role.
C. Create a custom role with view-only service permissions.
D. Add the user's account to the custom role.
E. Select the built-in IAM project Viewer role.
F. Add the user's account to this role.
G. Select the built-in IAM service Viewer role.
H. Add the user's account to this role.

Answer: C

NEW QUESTION 13
Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project.
What should you do?

A. Add the group for the finance team to the roles/billing.user role.
B. Add the group for the finance team to the roles/billing.admin role.
C. Add the group for the finance team to the roles/billing.viewer role.
D. Add the group for the finance team to the roles/billing.projectManager role.

Answer: C

Explanation:
"Billing Account Viewer access would usually be granted to finance teams, it provides access to spend information, but does not confer the right to link or unlink
projects or otherwise manage the properties of the billing account." https://cloud.google.com/billing/docs/how-to/billing-access
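
As an illustrative sketch (the billing account ID and group address are placeholders), the role is granted on the billing account rather than on the project:

gcloud billing accounts add-iam-policy-binding 0A0A0A-1B1B1B-2C2C2C --member=group:finance-team@example.com --role=roles/billing.viewer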

NEW QUESTION 18

You have been asked to set up the billing configuration for a new Google Cloud customer. Your customer wants to group resources that share common IAM
policies. What should you do?

A. Use labels to group resources that share common IAM policies


B. Use folders to group resources that share common IAM policies
C. Set up a proper billing account structure to group IAM policies
D. Set up a proper project naming structure to group IAM policies

Answer: B

Explanation:
Folders are nodes in the Cloud Platform Resource Hierarchy. A folder can contain projects, other folders, or a combination of both. Organizations can use folders
to group projects under the organization node in a hierarchy. For example, your organization might contain multiple departments, each with its own set of Google
Cloud resources. Folders allow you to group these resources on a per-department basis. Folders are used to group resources that share common IAM policies.
While a folder can contain multiple folders or resources, a given folder or resource can have exactly one parent.
https://cloud.google.com/resource-manager/docs/creating-managing-folders
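
A sketch of creating a folder and attaching a common IAM policy to it (organization ID, folder ID, and group are placeholders):

gcloud resource-manager folders create --display-name="finance-dept" --organization=123456789012
gcloud resource-manager folders add-iam-policy-binding 111111111111 --member=group:finance-dev@example.com --role=roles/viewer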

NEW QUESTION 23
You received a JSON file that contained a private key of a Service Account in order to get access to several resources in a Google Cloud project. You downloaded
and installed the Cloud SDK and want to use this private key for authentication and authorization when performing gcloud commands. What should you do?

A. Use the command gcloud auth login and point it to the private key
B. Use the command gcloud auth activate-service-account and point it to the private key
C. Place the private key file in the installation directory of the Cloud SDK and rename it to "credentials.json".
D. Place the private key file in your home directory and rename it to "GOOGLE_APPLICATION_CREDENTIALS".

Answer: B

Explanation:
Authorizing with a service account
gcloud auth activate-service-account authorizes access using a service account. As with gcloud init and gcloud auth login, this command saves the service
account credentials to the local system on successful completion and sets the specified account as the active account in your Cloud SDK configuration.
https://cloud.google.com/sdk/docs/authorizing#authorizing_with_a_service_account
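
For example (the service account email and key file path are placeholders):

gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com --key-file=/path/to/key.json
gcloud auth list   # confirms the service account is now the active credential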

NEW QUESTION 24
Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging for one VM that one of the processes is not
responsive. You want to replace this VM in the MIG quickly. What should you do?

A. Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.
B. Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.
C. Use the gcloud compute instances update command with a REFRESH action for the VM.
D. Update and apply the instance template of the MIG.

Answer: A

NEW QUESTION 25
You are building a data lake on Google Cloud for your Internet of Things (IoT) application. The IoT application has millions of sensors that are constantly streaming
structured and unstructured data to your backend in the cloud. You want to build a highly available and resilient architecture based on
Google-recommended practices. What should you do?

A. Stream data to Pub/Sub, and use Dataflow to send data to Cloud Storage
B. Stream data to Pub/Sub,
C. and use Storage Transfer Service to send data to BigQuery.
D. Stream data to Dataflow, and use Storage Transfer Service to send data to BigQuery.
E. Stream data to Dataflow, and use Dataprep by Trifacta to send data to Bigtable.

Answer: B

NEW QUESTION 27
You deployed an LDAP server on Compute Engine that is reachable via TLS through port 636 using UDP. You want to make sure it is reachable by clients over
that port. What should you do?

A. Add the network tag allow-udp-636 to the VM instance running the LDAP server.
B. Create a route called allow-udp-636 and set the next hop to be the VM instance running the LDAP server.
C. Add a network tag of your choice to the instance.
D. Create a firewall rule to allow ingress on UDP port 636 for that network tag.
E. Add a network tag of your choice to the instance running the LDAP server.
F. Create a firewall rule to allow egress on UDP port 636 for that network tag.

Answer: C

Explanation:
A tag is simply a character string added to a tags field in a resource, such as Compute Engine virtual machine (VM) instances or instance templates. A tag is not a
separate resource, so you cannot create it separately. All resources with that string are considered to have that tag. Tags enable you to make firewall rules and
routes applicable to specific VM instances.
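
A sketch of the two steps with an example tag name:

gcloud compute instances add-tags ldap-server --zone=us-central1-a --tags=ldap-udp-636
gcloud compute firewall-rules create allow-ldap-udp-636 --network=default --direction=INGRESS --allow=udp:636 --target-tags=ldap-udp-636 --source-ranges=0.0.0.0/0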

NEW QUESTION 32
You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single

Compute Engine zone, eliminate downtime, and minimize cost. What should you do?

A. – Create Compute Engine resources in us–central1–b.–Balance the load across both us–central1–a and us–central1–b.
B. – Create a Managed Instance Group and specify us–central1–a as the zone.–Configure the Health Check with a short Health Interval.
C. – Create an HTTP(S) Load Balancer.–Create one or more global forwarding rules to direct traffic to your VMs.
D. – Perform regular backups of your application.–Create a Cloud Monitoring Alert and be notified if your application becomes unavailable.–Restore from backups
when notified.

Answer: A

Explanation:
Choosing a region and zone You choose which region or zone hosts your resources, which controls where your data is stored and used. Choosing a region and
zone is important for several reasons:
Handling failures
Distribute your resources across multiple zones and regions to tolerate outages. Google designs zones to be independent from each other: a zone usually has
power, cooling, networking, and control planes that are isolated from other zones, and most single failure events will affect only a single zone. Thus, if a zone
becomes unavailable, you can transfer traffic to another zone in the same region to keep your services running. Similarly, if a region experiences any disturbances,
you should have backup services running in a different region. For more information about distributing your resources and designing a robust system, see
Designing Robust Systems. Decreased network latency To decrease network latency, you might want to choose a region or zone that is close to your point of
service.
https://cloud.google.com/compute/docs/regions-zones#choosing_a_region_and_zone

NEW QUESTION 35
A team of data scientists infrequently needs to use a Google Kubernetes Engine (GKE) cluster that you manage. They require GPUs for some long-running, non-
restartable jobs. You want to minimize cost. What should you do?

A. Enable node auto-provisioning on the GKE cluster.


B. Create a VerticalPodAutoscaler for those workloads.
C. Create a node pool with preemptible VMs and GPUs attached to those VMs.
D. Create a node pool of instances with GPUs, and enable autoscaling on this node pool with a minimum size of 1.

Answer: A

Explanation:
Node auto-provisioning attaches and deletes node pools in the cluster based on workload requirements, so GKE creates a GPU node pool only when the jobs need it and removes it afterwards, which minimizes cost.
https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning
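
A rough sketch of enabling node auto-provisioning with GPU limits on an existing cluster (cluster name, limits, and accelerator type are example values):

gcloud container clusters update my-cluster --zone=us-central1-a --enable-autoprovisioning --min-cpu=1 --max-cpu=64 --min-memory=1 --max-memory=256 --min-accelerator=type=nvidia-tesla-t4,count=0 --max-accelerator=type=nvidia-tesla-t4,count=4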

NEW QUESTION 40
You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow
Google-recommended practices. What should you do?

A. Create a service account with an access scope.


B. Use the access scope ‘https://www.googleapis.com/auth/devstorage.write_only’.
C. Create a service account with an access scope.
D. Use the access scope ‘https://www.googleapis.com/auth/cloud-platform’.
E. Create a service account and add it to the IAM role ‘storage.objectCreator’ for that bucket.
F. Create a service account and add it to the IAM role ‘storage.objectAdmin’ for that bucket.

Answer: C

Explanation:
https://cloud.google.com/iam/docs/understanding-service-accounts#using_service_accounts_with_compute_engine
https://cloud.google.com/storage/docs/access-control/iam-roles
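
A sketch of granting the role on the bucket to a dedicated service account (names are placeholders):

gcloud iam service-accounts create bucket-writer --display-name="Bucket writer"
gsutil iam ch serviceAccount:bucket-writer@my-project.iam.gserviceaccount.com:roles/storage.objectCreator gs://my-app-bucket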

NEW QUESTION 42
Your development team needs a new Jenkins server for their project. You need to deploy the server using the fewest steps possible. What should you do?

A. Download and deploy the Jenkins Java WAR to App Engine Standard.
B. Create a new Compute Engine instance and install Jenkins through the command line interface.
C. Create a Kubernetes cluster on Compute Engine and create a deployment with the Jenkins Docker image.
D. Use GCP Marketplace to launch the Jenkins solution.

Answer: D

NEW QUESTION 45
A company wants to build an application that stores images in a Cloud Storage bucket and wants to generate thumbnails as well as resize the images. They want
to use a Google-managed service that can scale up and scale down to zero automatically with minimal effort. You have been asked to recommend a service.
Which GCP service would you suggest?

A. Google Compute Engine


B. Google App Engine
C. Cloud Functions
D. Google Kubernetes Engine

Answer: C

Explanation:
Cloud Functions is Google Cloud’s event-driven serverless compute platform. It automatically scales based on the load and requires no additional configuration.
You pay only for the resources used.
Ref: https://cloud.google.com/functions
While the other options (Google Compute Engine, Google Kubernetes Engine, Google App Engine) also support autoscaling, it must be configured explicitly based
on the load and is not as effortless as the automatic scale-up and scale-down to zero offered by Cloud Functions.

NEW QUESTION 48
You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in
the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?

A. Configure username and password by using the gcloud config set proxy/username and gcloud config set proxy/password commands.
B. Encode username and password in sha256 encoding, and save it to a text file.
C. Use filename as a value in the gcloud configure set core/custom_ca_certs_file command.
D. Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configure file.
E. Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.

Answer: D
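
For illustration (the values are placeholders), the proxy properties can be supplied through environment variables so they never land in the gcloud configuration file or logs:

export CLOUDSDK_PROXY_USERNAME=proxy-user
export CLOUDSDK_PROXY_PASSWORD='proxy-secret'
gcloud compute instances list   # gcloud picks the proxy credentials up from the environment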

NEW QUESTION 51
Your company has workloads running on Compute Engine and on-premises. The Google Cloud Virtual Private Cloud (VPC) is connected to your WAN over a
Virtual Private Network (VPN). You need to deploy a new Compute Engine instance and ensure that no public Internet traffic can be routed to it. What should you
do?

A. Create the instance without a public IP address.


B. Create the instance with Private Google Access enabled.
C. Create a deny-all egress firewall rule on the VPC network.
D. Create a route on the VPC to route all traffic to the instance over the VPN tunnel.

Answer: A

Explanation:
VMs cannot communicate over the internet without a public IP address. Private Google Access permits access to Google APIs and services in Google's production
infrastructure.
https://cloud.google.com/vpc/docs/private-google-access
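
A sketch of creating the instance without an external address (names and zone are placeholders):

gcloud compute instances create internal-app-vm --zone=us-central1-a --subnet=my-subnet --no-address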

NEW QUESTION 55
You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to
ensure that the clusters can grow in nodes when needed. What should you do?

A. Create a new subnet in the same region as the subnet being used.
B. Add an alias IP range to the subnet used by the GKE clusters.
C. Create a new VPC, and set up VPC peering with the existing VPC.
D. Expand the CIDR range of the relevant subnet for the cluster.

Answer: D

Explanation:
gcloud compute networks subnets expand-ip-range NAME - expand the IP range of a Compute Engine subnetwork.
https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range
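
For example, widening an existing subnet to a /20 (subnet name and region are placeholders):

gcloud compute networks subnets expand-ip-range gke-subnet --region=us-central1 --prefix-length=20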

NEW QUESTION 58
You’ve deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below:

You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices. What should
you do?

A. Store the database password inside the Docker image of the container, not in the YAML file.
B. Store the database password inside a Secret object.
C. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret.
D. Store the database password inside a ConfigMap object.
E. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap.
F. Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.

Answer: B

Explanation:
https://cloud.google.com/config-connector/docs/how-to/secrets#gcloud
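
As an illustrative sketch (secret and key names are placeholders), the password could be stored in a Secret and injected into the container as the DB_PASSWORD environment variable:

kubectl create secret generic db-credentials --from-literal=password='S3cr3t!'

  # fragment of the container spec in the Deployment YAML
  env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password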

NEW QUESTION 60
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will
run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?

A. Deploy the monitoring pod in a StatefulSet object.


B. Deploy the monitoring pod in a DaemonSet object.
C. Reference the monitoring pod in a Deployment object.
D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.

Answer: B

Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns
DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets
automatically add Pods to the new nodes as needed.
In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you
add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed. So, this is a perfect fit for our monitoring pod.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples
of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd. For example, you could have
DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use
different configurations for different hardware types and resource needs.
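
A minimal DaemonSet sketch for such a monitoring agent (the name and image are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metrics-agent
spec:
  selector:
    matchLabels:
      app: metrics-agent
  template:
    metadata:
      labels:
        app: metrics-agent
    spec:
      containers:
      - name: metrics-agent
        image: example.com/metrics-agent:1.0   # placeholder for the third-party monitoring agent image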

NEW QUESTION 64
You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?

A. After the VM has been created, use your Google Account credentials to log in to the VM.
B. After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
C. When creating the VM, add metadata to the instance using ‘windows-password’ as the key and a password as the value.
D. After the VM has been created, download the JSON private key for the default Compute Engine service account.
E. Use the credentials in the JSON file to log in to the VM.

Answer: B

Explanation:
You can generate Windows passwords using either the Google Cloud Console or the gcloud command-line tool. This option uses the right syntax to reset the
windows password.

gcloud compute reset-windows-password windows-instance


Ref: https://cloud.google.com/compute/docs/instances/windows/creating-passwords-for-windows-instances#gcloud

NEW QUESTION 68
You have an application that runs on Compute Engine VM instances in a custom Virtual Private Cloud (VPC). Your company's security policies only allow the use
of internal IP addresses on VM instances and do not let VM instances connect to the internet. You need to ensure that the application can access a file hosted in a
Cloud Storage bucket within your project. What should you do?

A. Enable Private Service Access on the Cloud Storage Bucket.


B. Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.
C. Enable Private Google Access on the subnet within the custom VPC.
D. Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.

Answer: A

NEW QUESTION 71
You are hosting an application on bare-metal servers in your own data center. The application needs access to Cloud Storage. However, security policies prevent
the servers hosting the application from having public IP addresses or access to the internet. You want to follow Google-recommended practices to provide the
application with access to Cloud Storage. What should you do?

A. 1. Use nslookup to get the IP address for storage.googleapis.com.2. Negotiate with the security team to be able to give a public IP address to the servers.3.
Only allow egress traffic from those servers to the IP addresses for storage.googleapis.com.
B. 1. Using Cloud VPN, create a VPN tunnel to a Virtual Private Cloud (VPC) in Google Cloud Platform (GCP).2. In this VPC, create a Compute Engine instance
and install the Squid proxy server on this instance.3. Configure your servers to use that instance as a proxy to access Cloud Storage.
C. 1. Use Migrate for Compute Engine (formerly known as Velostrata) to migrate those servers to Compute Engine.2. Create an internal load balancer (ILB) that
uses storage.googleapis.com as backend.3. Configure your new instances to use this ILB as proxy.
D. 1. Using Cloud VPN or Interconnect, create a tunnel to a VPC in GCP.2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30.
Announce that network to your on-premises network through the VPN tunnel.3. In your on-premises network, configure your DNS server to
resolve *.googleapis.com as a CNAME to restricted.googleapis.com.

Answer: D

Explanation:
Our requirement is to follow Google recommended practices to achieve the end result. Configuring Private Google Access for On-Premises Hosts is best achieved
by VPN/Interconnect + Advertise Routes + Use restricted Google IP Range.
Using Cloud VPN or Interconnect, create a tunnel to a VPC in GCP
Using Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network through the VPN tunnel.
In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com. This is the right answer, and it
is what Google recommends.
Ref: https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid
You must configure routes so that Google API traffic is forwarded through your Cloud VPN or Cloud Interconnect connection, firewall rules on your on-premises
firewall to allow the outgoing traffic, and DNS so that traffic to Google APIs resolves to the IP range you've added to your routes.
You can use Cloud Router Custom Route Advertisement to announce the Restricted Google APIs IP addresses through Cloud Router to your on-premises
network. The Restricted Google APIs IP range is 199.36.153.4/30. While this is technically a public IP range, Google does not announce it publicly. This IP range
is only accessible to hosts that can reach your Google Cloud projects through internal IP ranges, such as through a Cloud VPN or Cloud Interconnect connection.
Without having a public IP address or access to the internet, the only way you could connect to cloud storage is if you have an internal route to it.
So Negotiate with the security team to be able to give public IP addresses to the servers is not right.
Following Google recommended practices is synonymous with using Google's services (not quite, but it is at least for the exam!).
So In this VPC, create a Compute Engine instance and install the Squid proxy server on this instance is not right.
Migrating the VM to Compute Engine is a bit drastic when Google says it is perfectly fine to have Hybrid Connectivity architectures
https://cloud.google.com/hybrid-connectivity.
So,
Use Migrate for Compute Engine (formerly known as Velostrata) to migrate these servers to Compute Engine is not right.
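
A partial sketch of the Cloud Router advertisement step (router name and region are placeholders):

gcloud compute routers update on-prem-router --region=us-central1 --advertisement-mode=custom --set-advertisement-groups=all_subnets --set-advertisement-ranges=199.36.153.4/30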

NEW QUESTION 73
An employee was terminated, but their access to Google Cloud Platform (GCP) was not removed until 2 weeks later. You need to find out whether this employee accessed
any sensitive customer information after their termination. What should you do?

A. View System Event Logs in Stackdriver.


B. Search for the user’s email as the principal.
C. View System Event Logs in Stackdriver.
D. Search for the service account associated with the user.
E. View Data Access audit logs in Stackdriver.
F. Search for the user’s email as the principal.
G. View the Admin Activity log in Stackdriver.
H. Search for the service account associated with the user.

Answer: C

Explanation:
https://cloud.google.com/logging/docs/audit
Data Access audit logs Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create,
modify, or read user-provided resource data.
https://cloud.google.com/logging/docs/audit#data-access
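
For illustration, the same search could be run from the CLI (project ID and email are placeholders):

gcloud logging read 'logName:"cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.authenticationInfo.principalEmail="former.employee@example.com"' --project=my-project --freshness=30d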

NEW QUESTION 74

You have an application that looks for its licensing server on the IP 10.0.3.21. You need to deploy the licensing server on Compute Engine. You do not want to
change the configuration of the application and want the application to be able to reach the licensing server. What should you do?

A. Reserve the IP 10.0.3.21 as a static internal IP address using gcloud and assign it to the licensing server.
B. Reserve the IP 10.0.3.21 as a static public IP address using gcloud and assign it to the licensing server.
C. Use the IP 10.0.3.21 as a custom ephemeral IP address and assign it to the licensing server.
D. Start the licensing server with an automatic ephemeral IP address, and then promote it to a static internal IP address.

Answer: A

Explanation:
The IP 10.0.3.21 is in an internal (private) range by default, and to ensure that it never changes it should be reserved as a static internal IP address.
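
A sketch of reserving that exact internal address and attaching it to the new VM (subnet, region, and zone are placeholders):

gcloud compute addresses create licensing-server-ip --region=us-central1 --subnet=default --addresses=10.0.3.21
gcloud compute instances create licensing-server --zone=us-central1-a --subnet=default --private-network-ip=10.0.3.21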

NEW QUESTION 78
You have deployed multiple Linux instances on Compute Engine. You plan on adding more instances in the coming weeks. You want to be able to access all of
these instances through your SSH client over the Internet without having to configure specific access on the existing and new instances. You do not want the
Compute Engine instances to have a public IP. What should you do?

A. Configure Cloud Identity-Aware Proxy for HTTPS resources.


B. Configure Cloud Identity-Aware Proxy for SSH and TCP resources.
C. Create an SSH keypair and store the public key as a project-wide SSH Key
D. Create an SSH keypair and store the private key as a project-wide SSH Key

Answer: B

Explanation:
https://cloud.google.com/iap/docs/using-tcp-forwarding
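
A sketch of the pieces involved (the firewall rule name and instance are placeholders; 35.235.240.0/20 is the source range IAP uses):

gcloud compute firewall-rules create allow-iap-ssh --network=default --direction=INGRESS --allow=tcp:22 --source-ranges=35.235.240.0/20
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap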

NEW QUESTION 83
You need to enable traffic between multiple groups of Compute Engine instances that are currently running in two different GCP projects. Each group of Compute
Engine instances is running in its own VPC. What should you do?

A. Verify that both projects are in a GCP Organization.


B. Create a new VPC and add all instances.
C. Verify that both projects are in a GCP Organization.
D. Share the VPC from one project and request that the Compute Engine instances in the other project use this shared VPC.
E. Verify that you are the Project Administrator of both projects.
F. Create two new VPCs and add all instances.
G. Verify that you are the Project Administrator of both projects.
H. Create a new VPC and add all instances.

Answer: B

Explanation:
Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate
with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one
or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use
subnets in the Shared VPC network
https://cloud.google.com/vpc/docs/shared-vpc
"For example, an existing instance in a service project cannot be reconfigured to use a Shared VPC network, but a new instance can be created to use available
subnets in a Shared VPC network."
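
A partial sketch of the Shared VPC setup (project IDs are placeholders):

gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-id --host-project=host-project-id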

NEW QUESTION 86
Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the
application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a
Google Kubernetes Engine cluster while optimizing for cost. What should you do?

A. Create a cluster with a single node pool by using standard VMs.


B. Label the fault-tolerant Deployments as spot-true.
C. Create a cluster with a single node pool by using Spot VMs.
D. Label the critical Deployments as spot-false.
E. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical
F. deployments on the Spot VM node pool and the fault-tolerant deployments on the node pool by using standard VMs.
G. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs.
H. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.

Answer: C
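
As a sketch, the cluster could combine a standard default node pool with an additional Spot VM node pool (names are placeholders); the fault-tolerant Deployments would then be steered onto the Spot pool with a nodeSelector or toleration:

gcloud container clusters create my-cluster --zone=us-central1-a --num-nodes=2
gcloud container node-pools create spot-pool --cluster=my-cluster --zone=us-central1-a --spot --num-nodes=2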

NEW QUESTION 89
You have a Google Cloud Platform account with access to both production and development projects. You need to create an automated process to list all compute
instances in development and production projects on a daily basis. What should you do?

A. Create two configurations using gcloud config.


B. Write a script that sets configurations as active, individually.
C. For each configuration, use gcloud compute instances list to get a list of compute resources.
D. Create two configurations using gsutil config.
E. Write a script that sets configurations as active, individually.
F. For each configuration, use gsutil compute instances list to get a list of compute resources.
G. Go to Cloud Shell and export this information to Cloud Storage on a daily basis.

H. Go to GCP Console and export this information to Cloud SQL on a daily basis.

Answer: A

Explanation:
You can create two configurations – one for the development project and another for the production project – by running the "gcloud config
configurations create" command. https://cloud.google.com/sdk/gcloud/reference/config/configurations/create
In your custom script, you can load these configurations one at a time and execute gcloud compute instances list to list the Compute Engine instances in the project that is active in the gcloud
configuration. Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
Once you have this information, you can export it in a suitable format to a suitable target, e.g. export as CSV or export to Cloud Storage/BigQuery/SQL, etc.
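
A sketch of the script's core (project IDs are placeholders):

gcloud config configurations create dev && gcloud config set project my-dev-project
gcloud config configurations create prod && gcloud config set project my-prod-project
gcloud config configurations activate dev && gcloud compute instances list
gcloud config configurations activate prod && gcloud compute instances list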

NEW QUESTION 91
You have a Compute Engine instance hosting an application used between 9 AM and 6 PM on weekdays. You want to back up this instance daily for disaster
recovery purposes. You want to keep the backups for 30 days. You want the Google-recommended solution with the least management overhead and the least
number of services. What should you do?

A. * 1. Update your instances’ metadata to add the following value: snapshot-schedule: 0 1 * * * * 2. Update your instances’ metadata to add the following value:
snapshot-retention: 30
B. * 1. In the Cloud Console, go to the Compute Engine Disks page and select your instance’s disk.* 2. In the Snapshot Schedule section, select Create Schedule
and configure the following parameters:–Schedule frequency: Daily–Start time: 1:00 AM – 2:00 AM–Autodelete snapshots after 30 days
C. * 1. Create a Cloud Function that creates a snapshot of your instance’s disk.* 2. Create a Cloud Function that deletes snapshots that are older than 30 days.
D. 3.Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.
E. * 1. Create a bash script in the instance that copies the content of the disk to Cloud Storage.* 2. Create a bash script in the instance that deletes data older than
30 days in the backup Cloud Storage bucket.* 3. Configure the instance’s crontab to execute these scripts daily at 1:00 AM.

Answer: B

Explanation:
Creating scheduled snapshots for persistent disk This document describes how to create a snapshot schedule to regularly and automatically back up your zonal
and regional persistent disks. Use snapshot schedules as a best practice to back up your Compute Engine workloads. After creating a snapshot schedule, you can
apply it to one or more persistent disks. https://cloud.google.com/compute/docs/disks/scheduled-snapshots
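
For reference, a gcloud sketch of the same schedule (names, region, and zone are placeholders):

gcloud compute resource-policies create snapshot-schedule daily-backup --region=us-central1 --daily-schedule --start-time=01:00 --max-retention-days=30
gcloud compute disks add-resource-policies my-app-disk --zone=us-central1-a --resource-policies=daily-backup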

NEW QUESTION 93
You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to
have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse
proxy?

A. Create a Cloud Memorystore for Redis instance with 32-GB capacity.


B. Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.
C. Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.
D. Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.

Answer: A

Explanation:
What is Google Cloud Memorystore?
Overview. Cloud Memorystore for Redis is a fully managed Redis service for Google Cloud Platform. Applications running on Google Cloud Platform can achieve
extreme performance by leveraging the highly scalable, highly available, and secure Redis service without the burden of managing complex Redis deployments.
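
A sketch of creating the 32-GB instance (instance name and region are placeholders):

gcloud redis instances create reverse-proxy-cache --size=32 --region=us-central1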

NEW QUESTION 94
You have developed a containerized web application that will serve internal colleagues during business hours. You want to ensure that no costs are incurred
outside of the hours the application is used. You have just created a new Google Cloud project and want to deploy the application. What should you do?

A. Deploy the container on Cloud Run for Anthos, and set the minimum number of instances to zero
B. Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to zero.
C. Deploy the container on App Engine flexible environment with autoscaling,
D. and set the value min_instances to zero in the app.yaml
E. Deploy the container on App Engine flexible environment with manual scaling, and set the value instances to zero in the app.yaml

Answer: B

Explanation:
https://cloud.google.com/kuberun/docs/architecture-overview#components_in_the_default_installation
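
A sketch of the deployment (image and region are placeholders; a minimum of zero instances is the Cloud Run default and is shown only for clarity):

gcloud run deploy internal-web-app --image=gcr.io/my-project/internal-web-app --region=us-central1 --min-instances=0 --no-allow-unauthenticated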

NEW QUESTION 99
You have an application running in Google Kubernetes Engine (GKE) with cluster autoscaling enabled. The application exposes a TCP endpoint. There are
several replicas of this application. You have a Compute Engine instance in the same region, but in another Virtual Private Cloud (VPC), called gce-network, that
has no overlapping IP ranges with the first VPC. This instance needs to connect to the application on GKE. You want to minimize effort. What should you do?

A. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Set the service's externalTrafficPolicy to Cluster.3. Configure
the Compute Engine instance to use the address of the load balancer that has been created.
B. 1. In GKE, create a Service of type NodePort that uses the application's Pods as backend.2. Create a Compute Engine instance called proxy with 2 network
interfaces, one in each VPC.3. Use iptables on this instance to forward traffic from gce-network to the GKE nodes.4. Configure the Compute Engine instance to
use the address of proxy in gce-network as endpoint.
C. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Add an annotation to this service: cloud.google.com/load-
balancer-type: Internal3. Peer the two VPCs together.4. Configure the Compute Engine instance to use the address of the load balancer that has been created.
D. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Add a Cloud Armor Security Policy to the load balancer that
whitelists the internal IPs of the MIG's instances. 3. Configure the Compute Engine instance to use the address of the load balancer that has been created.

Answer: A

Explanation:
This option performs a peering between the two VPCs (the statement makes sure that this option is feasible since it clearly specifies that there is no overlap between the
IP ranges of both VPCs), deploys the LoadBalancer as internal with the annotation, and configures the endpoint so that the Compute Engine instance can access the
application internally, that is, without the need to have a public IP at any time and therefore without the need to go outside the Google network. The traffic,
therefore, never crosses the public internet.
https://medium.com/pablo-perez/k8s-externaltrafficpolicy-local-or-cluster-40b259a19404 https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-
balancing
clients in a VPC network connected to the LoadBalancer network using VPC Network Peering can also access the Service
https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters
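
A minimal Service sketch using the internal load balancer annotation referenced in the answer (name, selector, and port are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: myapp1-internal-lb
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: myapp1
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP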

NEW QUESTION 104


You are building an archival solution for your data warehouse and have selected Cloud Storage to archive your data. Your users need to be able to access this
archived data once a quarter for some regulatory requirements. You want to select a cost-efficient option. Which storage option should you use?

A. Coldline Storage
B. Nearline Storage
C. Regional Storage
D. Multi-Regional Storage

Answer: A

Explanation:
Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is ideal for data you plan to read or
modify at most once a quarter. Since we have a requirement to access data once a quarter and want to go with the most cost-efficient option, we should select
Coldline Storage.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
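
For illustration (bucket name and location are placeholders):

gsutil mb -c coldline -l us-central1 gs://my-archive-bucket/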

NEW QUESTION 107


You created a Kubernetes deployment by running kubectl run nginx --image=nginx --labels=app=prod. Your Kubernetes cluster is also used by a number of other
deployments. How can you find the identifier of the pods for this nginx deployment?

A. kubectl get deployments --output=pods


B. gcloud get pods --selector="app=prod"
C. kubectl get pods -l "app=prod"
D. gcloud list gke-deployments -filter={pod }

Answer: C

Explanation:
This command correctly lists pods that have the label app=prod. When creating the deployment, we used the label app=prod so listing pods that have this label

retrieves the pods belonging to the nginx deployment. You can list pods by using the Kubernetes CLI: kubectl get pods.
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/#list-containe

NEW QUESTION 110


Your company runs one batch process in an on-premises server that takes around 30 hours to complete. The task runs monthly, can be performed offline, and
must be restarted if interrupted. You want to migrate this workload to the cloud while minimizing cost. What should you do?

A. Migrate the workload to a Compute Engine Preemptible VM.


B. Migrate the workload to a Google Kubernetes Engine cluster with Preemptible nodes.
C. Migrate the workload to a Compute Engine VM.
D. Start and stop the instance as needed.
E. Create an Instance Template with Preemptible VMs on.
F. Create a Managed Instance Group from the template and adjust Target CPU Utilization.
G. Migrate the workload.

Answer: D

Explanation:
Install the workload on a Compute Engine VM and start and stop the instance as needed: per the question, the job runs for about 30 hours, can be
performed offline, and must be restarted from the beginning if interrupted. Preemptible VMs are cheaper, but they are never
available beyond 24 hours, so the preemptible VM would be interrupted and the batch process would have to restart.
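
A sketch of the monthly routine (instance name and zone are placeholders):

gcloud compute instances start batch-vm --zone=us-central1-a
# ...the ~30-hour batch job runs...
gcloud compute instances stop batch-vm --zone=us-central1-a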

NEW QUESTION 112


You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment.
You identified the pod and deleted it by running kubectl delete pod. You noticed the pod got recreated.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-nqqmt 1/1 Running 0 9m41s
$ kubectl delete pod nginx-84748895c4-nqqmt
pod nginx-84748895c4-nqqmt deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-k6bzl 1/1 Running 0 25s
What should you do to delete the deployment and avoid the pod getting recreated?

A. kubectl delete deployment nginx


B. kubectl delete --deployment=nginx
C. kubectl delete pod nginx-84748895c4-k6bzl --no-restart 2
D. kubectl delete inginx

Answer: A

Explanation:
This command correctly deletes the deployment. Pods are managed by kubernetes workloads (deployments). When a pod is deleted, the deployment detects the
pod is unavailable and brings up another pod to maintain the replica count. The only way to delete the workload is by deleting the deployment itself using the
kubectl delete deployment command.
$ kubectl delete deployment nginx
deployment.apps nginx deleted
Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources

NEW QUESTION 114


You manage an App Engine Service that aggregates and visualizes data from BigQuery. The application is deployed with the default App Engine Service account.
The data that needs to be visualized resides in a different project managed by another team. You do not have access to this project, but you want your application
to be able to read data from the BigQuery dataset. What should you do?

A. Ask the other team to grant your default App Engine Service account the role of BigQuery Job User.
B. Ask the other team to grant your default App Engine Service account the role of BigQuery Data Viewer.
C. In Cloud IAM of your project, ensure that the default App Engine service account has the role of BigQuery Data Viewer.
D. In Cloud IAM of your project, grant a newly created service account from the other team the role of BigQuery Job User in your project.

Answer: B

Explanation:
The resource that you need to access is in the other project, so the other team must grant the role to your default App Engine service account.
roles/bigquery.dataViewer (BigQuery Data Viewer):
When applied to a table or view, this role provides permissions to read data and metadata from the table or view. This role cannot be applied to individual models or routines.
When applied to a dataset, this role provides permissions to read the dataset's metadata, list tables in the dataset, and read data and metadata from the dataset's tables.
When applied at the project or organization level, this role can also enumerate all datasets in the project. Additional roles, however, are necessary to allow the running of jobs.
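A possible sketch of the grant the other team would run, assuming hypothetical project IDs (their project other-team-project, your project my-project); the default App Engine service account is named <project-id>@appspot.gserviceaccount.com:
$ gcloud projects add-iam-policy-binding other-team-project \
    --member="serviceAccount:my-project@appspot.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"
The role could also be granted more narrowly on the specific BigQuery dataset instead of the whole project.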

NEW QUESTION 115


Your company uses a large number of Google Cloud services centralized in a single project. All teams have specific projects for testing and development. The
DevOps team needs access to all of the production services in order to perform their job. You want to prevent Google Cloud product changes from broadening
their permissions in the future. You want to follow Google-recommended practices. What should you do?


A. Grant all members of the DevOps team the role of Project Editor on the organization level.
B. Grant all members of the DevOps team the role of Project Editor on the production project.
C. Create a custom role that combines the required permissions.
D. Grant the DevOps team the custom role on the production project.
E. Create a custom role that combines the required permissions.
F. Grant the DevOps team the custom role on the organization level.

Answer: C

Explanation:
Understanding IAM custom roles
Key Point: Custom roles enable you to enforce the principle of least privilege, ensuring that the user and service accounts in your organization have only the
permissions essential to performing their intended functions.
Basic concepts
Custom roles are user-defined, and allow you to bundle one or more supported permissions to meet your specific needs. Custom roles are not maintained by
Google; when new permissions, features, or services are added to Google Cloud, your custom roles will not be updated automatically.
When you create a custom role, you must choose an organization or project to create it in. You can then grant the custom role on the organization or project, as
well as any resources within that organization or project.
https://cloud.google.com/iam/docs/understanding-custom-roles#basic_concepts
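A hedged sketch of how such a custom role could be created and granted, with hypothetical role ID, project ID, group address, and permission list:
$ gcloud iam roles create devopsProdAccess --project=prod-project \
    --title="DevOps Prod Access" \
    --permissions=compute.instances.list,compute.instances.start,compute.instances.stop
$ gcloud projects add-iam-policy-binding prod-project \
    --member="group:devops@example.com" \
    --role="projects/prod-project/roles/devopsProdAccess"
Because the custom role contains only the permissions you list, later product changes cannot silently broaden the group's access.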

NEW QUESTION 119


For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already
installed the Stackdriver Logging agent on all the instances. You want to minimize cost. What should you do?

A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances.2. Update your instances’ metadata to add
the following value: logs-destination:bq://platform-logs.
B. 1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink.2.Create a Cloud Function that is triggered by messages in the
logs topic.3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.
C. 1. In Stackdriver Logging, create a filter to view only Compute Engine logs.2. Click Create Export.3.Choose BigQuery as Sink Service, and the platform-logs
dataset as Sink Destination.
D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset.2. Configure this Cloud Function to create a BigQuery Job that
executes this query:INSERT INTOdataset.platform-logs (timestamp, log)SELECT timestamp, log FROM compute.logsWHERE timestamp>
DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)3. Use Cloud Scheduler to trigger this Cloud Function once a day.

Answer: C

Explanation:
* 1. In Stackdriver Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs
dataset as Sink Destination.
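A minimal sketch of the export sink from the command line, using a hypothetical project ID and an underscore in the dataset ID (BigQuery dataset IDs cannot contain hyphens):
$ gcloud logging sinks create compute-to-bq \
    bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
    --log-filter='resource.type="gce_instance"'
After creating the sink, grant its writer identity (shown in the command output) the BigQuery Data Editor role on the dataset so the exported log entries can be written.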

NEW QUESTION 122


You just installed the Google Cloud CLI on your new corporate laptop. You need to list the existing instances of your company on Google Cloud. What must you do
before you run the gcloud compute instances list command?
Choose 2 answers

A. Run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to gcloud CLI.
B. Create a Google Cloud service account, and download the service account key.
C. Place the key file in a folder on your machine where gcloud CLI can find it.
D. Download your Cloud Identity user account key.
E. Place the key file in a folder on your machine where gcloud CLI can find it.
F. Run gcloud config set compute/zone $my_zone to set the default zone for gcloud CLI.
G. Run gcloud config set project $my_project to set the default project for gcloud CLI.

Answer: AE

Explanation:
Before you run the gcloud compute instances list command, you need to do two things: authenticate with your user account and set the default project for gcloud
CLI.
To authenticate with your user account, you need to run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to
gcloud CLI. This will authorize the gcloud CLI to access Google Cloud resources on your behalf1.
To set the default project for gcloud CLI, you need to run gcloud config set project $my_project, where
$my_project is the ID of the project that contains the instances you want to list. This will save you from having to specify the project flag for every gcloud
command2.
Option B is not recommended, because using a service account key increases the risk of credential leakage and misuse. It is also not necessary, because you can
use your user account to authenticate to the gcloud CLI3. Option C is not correct, because there is no such thing as a Cloud Identity user account key. Cloud
Identity is a service that provides identity and access management for Google Cloud users and groups4. Option D is not required, because the gcloud compute
instances list command does not depend on the default zone. You can
list instances from all zones or filter by a specific zone using the --filter flag.
References:
1: https://cloud.google.com/sdk/docs/authorizing
2: https://cloud.google.com/sdk/gcloud/reference/config/set
3: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
4: https://cloud.google.com/identity/docs/overview
5: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
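A short sketch of the two prerequisite commands followed by the listing itself, with a hypothetical project ID:
$ gcloud auth login                      # opens a browser dialog to authenticate your user account
$ gcloud config set project my-project   # sets the default project for subsequent commands
$ gcloud compute instances list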

NEW QUESTION 123


Your company’s infrastructure is on-premises, but all machines are running at maximum capacity. You want to burst to Google Cloud. The workloads on Google
Cloud must be able to directly communicate to the workloads on-premises using a private IP range. What should you do?


A. In Google Cloud, configure the VPC as a host for Shared VPC.


B. In Google Cloud, configure the VPC for VPC Network Peering.
C. Create bastion hosts both in your on-premises environment and on Google Cloud.
D. Configure both as proxy servers using their public IP addresses.
E. Set up Cloud VPN between the infrastructure on-premises and Google Cloud.

Answer: D

Explanation:
"Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to
the same project or the same organization."
https://cloud.google.com/vpc/docs/vpc-peering while
"Cloud Interconnect provides low latency, high availability connections that enable you to reliably transfer data between your on-premises and Google Cloud Virtual
Private Cloud (VPC) networks."
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview and
"HA VPN is a high-availability (HA) Cloud VPN solution that lets you securely connect your on-premises network to your VPC network through an IPsec VPN
connection in a single region."
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview

NEW QUESTION 128


You deployed an application on a managed instance group in Compute Engine. The application accepts Transmission Control Protocol (TCP) traffic on port 389
and requires you to preserve the IP address of the client who is making a request. You want to expose the application to the internet by using a load balancer.
What should you do?

A. Expose the application by using an external TCP Network Load Balancer.


B. Expose the application by using a TCP Proxy Load Balancer.
C. Expose the application by using an SSL Proxy Load Balancer.
D. Expose the application by using an internal TCP Network Load Balancer.

Answer: B

NEW QUESTION 131


You need to grant access for three users so that they can view and edit table data on a Cloud Spanner instance. What should you do?

A. Run gcloud iam roles describe roles/spanner.databaseUser.

B. Add the users to the role.
C. Run gcloud iam roles describe roles/spanner.databaseUser.
D. Add the users to a new group.
E. Add the group to the role.
F. Run gcloud iam roles describe roles/spanner.viewer --project my-project.
G. Add the users to the role.
H. Run gcloud iam roles describe roles/spanner.viewer --project my-project.
I. Add the users to a new group. Add the group to the role.

Answer: B

Explanation:
https://cloud.google.com/spanner/docs/iam#spanner.databaseUser
Using the gcloud tool, execute the gcloud iam roles describe roles/spanner.databaseUser command on Cloud Shell. Attach the users to a newly created Google
group and add the group to the role.
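One way this could look on the command line, with hypothetical instance and group names; the databaseUser role can be granted on the Spanner instance (or database) rather than project-wide:
$ gcloud spanner instances add-iam-policy-binding my-spanner-instance \
    --member="group:spanner-editors@example.com" \
    --role="roles/spanner.databaseUser"
Adding the three users to the spanner-editors group then gives all of them view and edit access to the table data.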

NEW QUESTION 133


You are working with a user to set up an application in a new VPC behind a firewall. The user is concerned about data egress. You want to configure the fewest
open egress ports. What should you do?

A. Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports.
B. Set up a high-priority (1000) rule that pairs both ingress and egress ports.
C. Set up a high-priority (1000) rule that blocks all egress and a low-priority (65534) rule that allows only the appropriate ports.
D. Set up a high-priority (1000) rule to allow the appropriate ports.

Answer: A

Explanation:
Implied rules: every VPC network has two implied firewall rules. These rules exist, but are not shown in the Cloud Console.
Implied allow egress rule: an egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535). It lets any instance send traffic to any destination, except for traffic blocked by Google Cloud. A higher priority firewall rule may restrict outbound access. Internet access is allowed if no other firewall rules deny outbound traffic and if the instance has an external IP address or uses a Cloud NAT instance. For more information, see Internet access requirements.
Implied deny ingress rule: an ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535). It protects all instances by blocking incoming connections to them. A higher priority rule might allow incoming access. The default network includes some additional rules that override this one, allowing certain types of incoming connections.
https://cloud.google.com/vpc/docs/firewalls#default_firewall_rules
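A hedged sketch of the two rules, assuming a hypothetical network name and that TCP 443 is the only port that needs to leave the VPC:
$ gcloud compute firewall-rules create deny-all-egress \
    --network=my-vpc --direction=EGRESS --action=DENY --rules=all \
    --destination-ranges=0.0.0.0/0 --priority=65534
$ gcloud compute firewall-rules create allow-egress-443 \
    --network=my-vpc --direction=EGRESS --action=ALLOW --rules=tcp:443 \
    --destination-ranges=0.0.0.0/0 --priority=1000
The low-priority (65534) deny rule overrides the implied allow-egress rule, and the high-priority (1000) rule re-opens only the required port.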

NEW QUESTION 137


You are storing sensitive information in a Cloud Storage bucket. For legal reasons, you need to be able to record all requests that read any of the stored data. You
want to make sure you comply with these requirements. What should you do?

A. Enable the Identity Aware Proxy API on the project.


B. Scan the bucker using the Data Loss Prevention API.
C. Allow only a single Service Account access to read the data.
D. Enable Data Access audit logs for the Cloud Storage API.


Answer: D

Explanation:
Logged information: within Cloud Audit Logs, there are two types of logs.
Admin Activity logs: entries for operations that modify the configuration or metadata of a project, bucket, or object.
Data Access logs: entries for operations that modify objects or read a project, bucket, or object. There are several sub-types of Data Access logs:
ADMIN_READ: entries for operations that read the configuration or metadata of a project, bucket, or object.
DATA_READ: entries for operations that read an object.
DATA_WRITE: entries for operations that create or modify an object.
https://cloud.google.com/storage/docs/audit-logs#types
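Data Access audit logs for Cloud Storage are enabled through the auditConfigs section of the project's IAM policy. A rough sketch of the flow, assuming a hypothetical project ID:
$ gcloud projects get-iam-policy my-project > policy.yaml
# add a stanza along these lines to policy.yaml:
#   auditConfigs:
#   - service: storage.googleapis.com
#     auditLogConfigs:
#     - logType: DATA_READ
#     - logType: DATA_WRITE
$ gcloud projects set-iam-policy my-project policy.yaml
DATA_READ entries are the ones that record every read of the stored objects, which satisfies the legal requirement in the question.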

NEW QUESTION 139


The DevOps group in your organization needs full control of Compute Engine resources in your development project. However, they should not have permission to
create or update any other resources in the project. You want to follow Google's recommendations for setting permissions for the DevOps group. What should you
do?

A. Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.
B. Create an IAM policy and grant all compute.
C. instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.
D. Create a custom role at the folder level and grant all compute.
E. instanceAdmin
F. .* permissions to the role. Grant the custom role to the DevOps group.
G. Grant the basic role roles/editor to the DevOps group.

Answer: A

NEW QUESTION 144


You are planning to migrate your on-premises data to Google Cloud. The data includes:
• 200 TB of video files in SAN storage
• Data warehouse data stored on Amazon Redshift
• 20 GB of PNG files stored on an S3 bucket
You need to load the video files into a Cloud Storage bucket, transfer the data warehouse data into BigQuery, and load the PNG files into a second Cloud Storage
bucket. You want to follow Google-recommended practices and avoid writing any code for the migration. What should you do?

A. Use gcloud storage for the video files.


B. Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
C. Use Transfer Appliance for the video files.
D. BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
E. Use Storage Transfer Service for the video files, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
F. Use Cloud Data Fusion for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.

Answer: C

NEW QUESTION 149


You have an application that uses Cloud Spanner as a backend database. The application has a very predictable traffic pattern. You want to automatically scale up
or down the number of Spanner nodes depending on traffic. What should you do?

A. Create a cron job that runs on a scheduled basis to review stackdriver monitoring metrics, and then resize the Spanner instance accordingly.
B. Create a Stackdriver alerting policy to send an alert to oncall SRE emails when Cloud Spanner CPU exceeds the threshold.
C. SREs would scale resources up or down accordingly.
D. Create a Stackdriver alerting policy to send an alert to Google Cloud Support email when Cloud Spanner CPU exceeds your threshold.
E. Google support would scale resources up or down accordingly.
F. Create a Stackdriver alerting policy to send an alert to a webhook when Cloud Spanner CPU is over or under your threshold.
G. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.

Answer: D

Explanation:
CPU utilization is a recommended proxy for traffic when it comes to Cloud Spanner. See "Alerts for high CPU utilization": the documentation gives recommendations for maximum CPU usage for both single-region and multi-region instances. These numbers ensure that your instance has enough compute capacity to continue to serve your traffic in the event of the loss of an entire zone (for single-region instances) or an entire region (for multi-region instances). - https://cloud.google.com/spanner/docs/cpu-utilization
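The Cloud Function behind the webhook would call the Spanner Admin API to resize the instance; the gcloud equivalent of that call, with hypothetical names and node count, is:
$ gcloud spanner instances update my-spanner-instance --nodes=5
Scaling up when the alert fires and back down when CPU drops keeps the node count matched to the predictable traffic pattern.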

NEW QUESTION 153


You are running multiple microservices in a Kubernetes Engine cluster. One microservice is rendering images. The microservice responsible for the image
rendering requires a large amount of CPU time compared to the memory it requires. The other microservices are workloads that are optimized for n1-standard
machine types. You need to optimize your cluster so that all workloads are using resources as efficiently as possible. What should you do?

A. Assign the pods of the image rendering microservice a higher pod priority than the other microservices.
B. Create a node pool with compute-optimized machine type nodes for the image rendering microservice. Use the node pool with general-purpose machine type nodes for the other microservices.
C. Use the node pool with general-purpose machine type nodes for the image rendering microservice. Create a node pool with compute-optimized machine type nodes for the other microservices.
D. Configure the required amount of CPU and memory in the resource requests specification of the image rendering microservice deployment. Keep the resource requests for the other microservices at the default.

Answer: B
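A possible sketch of the dedicated node pool, with hypothetical cluster, zone, and machine-type choices (c2-standard-16 is one of the compute-optimized types):
$ gcloud container node-pools create render-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --machine-type=c2-standard-16 --num-nodes=2
The image-rendering Deployment would then be pinned to this pool, for example with a nodeSelector on the pool's node labels, while the other microservices stay on the general-purpose n1-standard pool.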

NEW QUESTION 156


You are migrating a production-critical on-premises application that requires 96 vCPUs to perform its task. You want to make sure the application runs in a similar
environment on GCP. What should you do?


A. When creating the VM, use machine type n1-standard-96.


B. When creating the VM, use Intel Skylake as the CPU platform.
C. Create the VM using Compute Engine default settings.
D. Use gcloud to modify the running instance to have 96 vCPUs.
E. Start the VM using Compute Engine default settings, and adjust as you go based on Rightsizing Recommendations.

Answer: A

Explanation:
Ref: https://cloud.google.com/compute/docs/machine-types#n1_machine_type
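For illustration, a hypothetical instance creation with that machine type:
$ gcloud compute instances create legacy-app-vm \
    --zone=us-central1-a --machine-type=n1-standard-96
The n1-standard-96 machine type provides exactly 96 vCPUs, matching the on-premises requirement.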

NEW QUESTION 157


You want to configure a solution for archiving data in a Cloud Storage bucket. The solution must be
cost-effective. Data with multiple versions should be archived after 30 days. Previous versions are accessed once a month for reporting. This archive data is also
occasionally updated at month-end. What should you do?

A. Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.
B. Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.
C. Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.
D. Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.

Answer: B
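A hedged sketch of such a lifecycle rule, with a hypothetical bucket name; extra conditions (for example on object versions) can be added to the condition block as needed:
$ cat > lifecycle.json <<'EOF'
{ "rule": [ { "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
              "condition": {"age": 30} } ] }
EOF
$ gsutil lifecycle set lifecycle.json gs://my-archive-bucket
Nearline fits here because the archived versions are still read about once a month.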

NEW QUESTION 160


Your company wants to migrate their on-premises workloads to Google Cloud. The current on-premises workloads consist of:
• A Flask web application
• A backend API
• A scheduled long-running background job for ETL and reporting.
You need to keep operational costs low. You want to follow Google-recommended practices to migrate these workloads to serverless solutions on Google Cloud.
What should you do?

A. Migrate the web application to App Engine and the backend API to Cloud Run Use Cloud Tasks to run your background job on Compute Engine
B. Migrate the web application to App Engine and the backend API to Cloud Run.
C. Use Cloud Tasks to run your background job on Cloud Run.
D. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run Use Cloud Tasks to run your background job on Cloud Run.
E. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run.
F. Use Cloud Tasks to run your background job on Compute Engine

Answer: B

NEW QUESTION 164


You need a dynamic way of provisioning VMs on Compute Engine. The exact specifications will be in a dedicated configuration file. You want to follow Google’s
recommended practices. Which method should you use?

A. Deployment Manager
B. Cloud Composer
C. Managed Instance Group
D. Unmanaged Instance Group

Answer: A

Explanation:
https://cloud.google.com/deployment-manager/docs/configuration/create-basic-configuration
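The general shape of the workflow, assuming hypothetical deployment and file names; the YAML configuration file declares the VMs as compute.v1.instance resources with their exact specifications:
$ gcloud deployment-manager deployments create my-vms --config vm-config.yaml
$ gcloud deployment-manager deployments update my-vms --config vm-config.yaml   # apply later changes to the same config
Keeping the specifications in the configuration file is what makes the provisioning repeatable and reviewable.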

NEW QUESTION 165


You have an instance group that you want to load balance. You want the load balancer to terminate the client SSL session. The instance group is used to serve a
public web application over HTTPS. You want to follow Google-recommended practices. What should you do?

A. Configure an HTTP(S) load balancer.


B. Configure an internal TCP load balancer.
C. Configure an external SSL proxy load balancer.
D. Configure an external TCP proxy load balancer.

Answer: A

NEW QUESTION 167


You are about to deploy a new Enterprise Resource Planning (ERP) system on Google Cloud. The application holds the full database in-memory for fast data
access, and you need to configure the most appropriate resources on Google Cloud for this application. What should you do?

A. Provision preemptible Compute Engine instances.


B. Provision Compute Engine instances with GPUs attached.
C. Provision Compute Engine instances with local SSDs attached.
D. Provision Compute Engine instances with M1 machine type.

Answer: D

Explanation:


The M1 machine series is intended for medium in-memory databases such as SAP HANA and for tasks that require intensive use of memory with higher memory-to-vCPU ratios than the general-purpose high-memory machine types: in-memory databases and in-memory analytics, business warehousing (BW) workloads, genomics analysis, and SQL analysis services, as well as Microsoft SQL Server and similar databases.
https://cloud.google.com/compute/docs/machine-types
https://www.sap.com/india/products/hana.html

NEW QUESTION 169


You need to add a group of new users to Cloud Identity. Some of the users already have existing Google accounts. You want to follow one of Google's
recommended practices and avoid conflicting accounts. What should you do?

A. Invite the user to transfer their existing account


B. Invite the user to use an email alias to resolve the conflict
C. Tell the user that they must delete their existing account
D. Tell the user to remove all personal email from the existing account

Answer: A

Explanation:
https://cloud.google.com/architecture/identity/migrating-consumer-accounts

NEW QUESTION 172


You create a Deployment with 2 replicas in a Google Kubernetes Engine cluster that has a single preemptible node pool. After a few minutes, you use kubectl to
examine the status of your Pod and observe that one of them is still in Pending status:

What is the most likely cause?

A. The pending Pod's resource requests are too large to fit on a single node of the cluster.
B. Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.
C. The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.
D. The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods’ status.
E. It is currently being rescheduled on a new node.

Answer: B

Explanation:
The pending Pods resource requests are too large to fit on a single node of the cluster. Too many Pods are already running in the cluster, and there are not
enough resources left to schedule the pending Pod. is the right answer.
When you have a deployment with some pods in running and other pods in the pending state, more often than not it is a problem with resources on the nodes.
Heres a sample output of this use case. We see that the problem is with insufficient CPU on the Kubernetes nodes so we have to either enable auto-scaling or
manually scale up the nodes.
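A quick way to confirm this, using a placeholder pod name:
$ kubectl get pods                          # one pod shows STATUS Pending
$ kubectl describe pod <pending-pod-name>   # Events section typically reports "FailedScheduling ... Insufficient cpu"
The Events section of the describe output is where the scheduler explains why it could not place the pod.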

NEW QUESTION 177


Users of your application are complaining of slowness when loading the application. You realize the slowness is because the App Engine deployment serving the
application is deployed in us-central whereas all users of this application are closest to europe-west3. You want to change the region of the App Engine application
to europe-west3 to minimize latency. What’s the best way to change the App Engine region?

A. Create a new project and create an App Engine instance in europe-west3


B. Use the gcloud app region set command and supply the name of the new region.
C. From the console, under the App Engine page, click edit, and change the region drop-down.
D. Contact Google Cloud Support and request the change.

Answer: A

Explanation:
App engine is a regional service, which means the infrastructure that runs your app(s) is located in a specific region and is managed by Google to be redundantly
available across all the zones within that region. Once an App Engine deployment is created in a region, it can't be changed. The only way is to create a new project
and create an App Engine instance in europe-west3, send all user traffic to this instance and delete the app engine instance in us-central.
Ref: https://cloud.google.com/appengine/docs/locations
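A brief sketch of the new-project path, with hypothetical IDs:
$ gcloud projects create my-app-eu --name="My App EU"
$ gcloud app create --project=my-app-eu --region=europe-west3
The region chosen in gcloud app create is permanent for that project, which is why a new project is required.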

NEW QUESTION 182


You have an application that receives SSL-encrypted TCP traffic on port 443. Clients for this application are located all over the world. You want to minimize
latency for the clients. Which load balancing option should you use?

A. HTTPS Load Balancer


B. Network Load Balancer
C. SSL Proxy Load Balancer
D. Internal TCP/UDP Load Balancer.
E. Add a firewall rule allowing ingress traffic from 0.0.0.0/0 on the target instances.


Answer: C

NEW QUESTION 185


You want to permanently delete a Pub/Sub topic managed by Config Connector in your Google Cloud project. What should you do?

A. Use kubectl to delete the topic resource.


B. Use gcloud CLI to delete the topic.
C. Use kubectl to create the label deleted-by-cnrm and to change its value to true for the topic resource.
D. Use gcloud CLI to update the topic label managed-by-cnrm to false.

Answer: A
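With Config Connector, the Pub/Sub topic is represented as a Kubernetes resource (kind PubSubTopic), so deleting that resource removes the underlying topic. A sketch with hypothetical names:
$ kubectl delete pubsubtopic my-topic -n config-connector-namespace
Deleting the topic with gcloud instead would leave the Kubernetes resource in place, and Config Connector would recreate the topic.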

NEW QUESTION 188


You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE cluster. For each of your customers, a Pod is running in that
cluster, and your customers can run arbitrary code inside their Pod. You want to maximize the isolation between your customers’ Pods. What should you do?

A. Use Binary Authorization and whitelist only the container images used by your customers’ Pods.
B. Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.
C. Create a GKE node pool with a sandbox type configured to gvisor.
D. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.
E. Use the cos_containerd image for your GKE nodes.
F. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.

Answer: C
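A hedged sketch of the sandboxed node pool and the matching Pod setting, with hypothetical cluster and pool names:
$ gcloud container node-pools create gvisor-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --image-type=cos_containerd --sandbox type=gvisor
# and in each customer Pod spec:
#   spec:
#     runtimeClassName: gvisor
GKE Sandbox (gVisor) gives each Pod an extra kernel-level isolation layer, which is what maximizes isolation between customers running arbitrary code.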

NEW QUESTION 189


You are asked to set up application performance monitoring on Google Cloud projects A, B, and C as a single pane of glass. You want to monitor CPU, memory,
and disk. What should you do?

A. Enable API and then share charts from project A, B, and C.


B. Enable API and then give the metrics.reader role to projects A, B, and C.
C. Enable API and then use default dashboards to view all projects in sequence.
D. Enable API, create a workspace under project A, and then add project B and C.

Answer: D

Explanation:
https://cloud.google.com/monitoring/settings/multiple-projects https://cloud.google.com/monitoring/workspaces

NEW QUESTION 192


Your company publishes large files on an Apache web server that runs on a Compute Engine instance. The Apache web server is not the only application running
in the project. You want to receive an email when the egress network costs for the server exceed 100 dollars for the current month as measured by Google Cloud
Platform (GCP). What should you do?

A. Set up a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”
B. Set up a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”
C. Export the billing data to BigQuery.
D. Create a Cloud Function that uses BigQuery to sum the egress network costs of the exported billing data for the Apache web server for the current month and
sends an email if it is over 100 dollars.
E. Schedule the Cloud Function using Cloud Scheduler to run hourly.
F. Use the Stackdriver Logging Agent to export the Apache web server logs to Stackdriver Logging. Create a Cloud Function that uses BigQuery to parse the HTTP
response log data in Stackdriver for the current month and sends an email if the size of all HTTP responses, multiplied by current GCP egress prices, totals over
100 dollars.
G. Schedule the Cloud Function using Cloud Scheduler to run hourly.

Answer: C

Explanation:
https://blog.doit-intl.com/the-truth-behind-google-cloud-egress-traffic-6e8f57b5c2f8

NEW QUESTION 194


You are developing a new web application that will be deployed on Google Cloud Platform. As part of your release cycle, you want to test updates to your
application on a small portion of real user traffic. The majority of the users should still be directed towards a stable version of your application. What should you
do?

A. Deploy the application on App Engine. For each update, create a new version of the same service. Configure traffic splitting to send a small percentage of traffic
to the new version.
B. Deploy the application on App Engine. For each update, create a new service. Configure traffic splitting to send a small percentage of traffic to the new service.
C. Deploy the application on Kubernetes Engine. For a new release, update the deployment to use the new version.
D. Deploy the application on Kubernetes Engine. For a new release, create a new deployment for the new version. Update the service to use the new deployment.

Answer: D

Explanation:
Keyword, Version, traffic splitting, App Engine supports traffic splitting for versions before releasing.


NEW QUESTION 197


You need to update a deployment in Deployment Manager without any resource downtime in the deployment. Which command should you use?

A. gcloud deployment-manager deployments create --config <deployment-config-path>


B. gcloud deployment-manager deployments update --config <deployment-config-path>
C. gcloud deployment-manager resources create --config <deployment-config-path>
D. gcloud deployment-manager resources update --config <deployment-config-path>

Answer: B

NEW QUESTION 200


You have deployed an application on a single Compute Engine instance. The application writes logs to disk. Users start reporting errors with the application. You
want to diagnose the problem. What should you do?

A. Navigate to Cloud Logging and view the application logs.


B. Connect to the instance’s serial console and read the application logs.
C. Configure a Health Check on the instance and set a Low Healthy Threshold value.
D. Install and configure the Cloud Logging Agent and view the logs from Cloud Logging.

Answer: D

NEW QUESTION 205


You want to select and configure a solution for storing and archiving data on Google Cloud Platform. You need to support compliance objectives for data from one
geographic location. This data is archived after 30 days and needs to be accessed annually. What should you do?

A. Select Multi-Regional Storage.

B. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.
C. Select Multi-Regional Storage.
D. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
E. Select Regional Storage.
F. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
G. Select Regional Storage.
H. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.

Answer: D

Explanation:
Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable trade-offs for lowered at-rest storage costs.
Coldline Storage is ideal for data you plan to read or modify at most once a quarter. Note, however, that for data being kept entirely for backup or archiving purposes, Archive Storage is more cost-effective, as it offers the lowest storage costs.
Because the compliance requirement covers a single geographic location and the data is accessed only annually, Regional Storage with a lifecycle rule that moves objects to Coldline after 30 days fits best.
https://cloud.google.com/storage/docs/storage-classes#coldline

NEW QUESTION 209


Your company implemented BigQuery as an enterprise data warehouse. Users from multiple business units run queries on this data warehouse. However, you
notice that query costs for BigQuery are very high, and you need to control costs. Which two methods should you use? (Choose two.)

A. Split the users from business units to multiple projects.


B. Apply a user- or project-level custom query quota for BigQuery data warehouse.
C. Create separate copies of your BigQuery data warehouse for each business unit.
D. Split your BigQuery data warehouse into multiple data warehouses for each business unit.
E. Change your BigQuery query model from on-demand to flat rate.
F. Apply the appropriate number of slots to each Project.

Answer: BE

Explanation:
https://cloud.google.com/bigquery/docs/custom-quotas https://cloud.google.com/bigquery/pricing#flat_rate_pricing

NEW QUESTION 214


You have a number of applications that have bursty workloads and are heavily dependent on topics to decouple publishing systems from consuming systems.
Your company would like to go serverless to enable developers to focus on writing code without worrying about infrastructure. Your solution architect has already
identified Cloud Pub/Sub as a suitable alternative for decoupling systems. You have been asked to identify a suitable GCP Serverless service that is easy to use
with Cloud Pub/Sub. You want the ability to scale down to zero when there is no traffic in order to minimize costs. You want to follow Google recommended
practices. What should you suggest?

A. Cloud Run for Anthos


B. Cloud Run
C. App Engine Standard
D. Cloud Functions.

Answer: D

Explanation:
Cloud Functions is Google Cloud’s event-driven serverless compute platform that lets you run your code locally or in the cloud without having to provision servers.


Cloud Functions scales up or down, so you pay only for the compute resources you use. Cloud Functions has excellent integration with Cloud Pub/Sub, lets you scale down to zero, and is recommended by Google as the ideal serverless platform to use when dependent on Cloud Pub/Sub. "If you're building a simple API (a small set of functions to be accessed via HTTP or Cloud Pub/Sub), we recommend using Cloud Functions." Ref: https://cloud.google.com/serverless-options
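A minimal sketch of a Pub/Sub-triggered function deployment, with hypothetical function, topic, runtime, and entry-point names:
$ gcloud functions deploy process-message \
    --runtime=python310 --region=us-central1 \
    --trigger-topic=my-topic --entry-point=handle_message
The function scales with the message volume and scales to zero when the topic is idle, which keeps costs minimal for bursty workloads.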

NEW QUESTION 215


You have production and test workloads that you want to deploy on Compute Engine. Production VMs need to be in a different subnet than the test VMs. All the
VMs must be able to reach each other over internal IP without creating additional routes. You need to set up VPC and the 2 subnets. Which configuration meets
these requirements?

A. Create a single custom VPC with 2 subnets.

B. Create each subnet in a different region and with a different CIDR range.
C. Create a single custom VPC with 2 subnets.
D. Create each subnet in the same region and with the same CIDR range.
E. Create 2 custom VPCs, each with a single subnet.
F. Create each subnet in a different region and with a different CIDR range.
G. Create 2 custom VPCs, each with a single subnet.
H. Create each subnet in the same region and with the same CIDR range.

Answer: A

Explanation:
When we create subnets in the same VPC with different CIDR ranges, they can communicate automatically within VPC. Resources within a VPC network can
communicate with one another by using internal (private) IPv4 addresses, subject to applicable network firewall rules
Ref: https://cloud.google.com/vpc/docs/vpc
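A short sketch matching answer A, with hypothetical names, regions, and CIDR ranges:
$ gcloud compute networks create prod-test-vpc --subnet-mode=custom
$ gcloud compute networks subnets create prod-subnet \
    --network=prod-test-vpc --region=us-central1 --range=10.0.1.0/24
$ gcloud compute networks subnets create test-subnet \
    --network=prod-test-vpc --region=us-east1 --range=10.0.2.0/24
Because both subnets belong to the same VPC, the production and test VMs can reach each other over internal IP without any additional routes.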

NEW QUESTION 220


You need to assign a Cloud Identity and Access Management (Cloud IAM) role to an external auditor. The auditor needs to have permissions to review your
Google Cloud Platform (GCP) Audit Logs and also to review your Data Access logs. What should you do?

A. Assign the auditor the IAM role roles/logging.privateLogViewer.

B. Perform the export of logs to Cloud Storage.
C. Assign the auditor the IAM role roles/logging.privateLogViewer.
D. Direct the auditor to also review the logs for changes to Cloud IAM policy.
E. Assign the auditor’s IAM user to a custom role that has the logging.privateLogEntries.list permission.
F. Perform the export of logs to Cloud Storage.
G. Assign the auditor’s IAM user to a custom role that has the logging.privateLogEntries.list permission.
H. Direct the auditor to also review the logs for changes to Cloud IAM policy.

Answer: B

Explanation:
Google Cloud provides Cloud Audit Logs, which is an integral part of Cloud Logging. It consists of two log streams for each project: Admin Activity and Data
Access, which are generated by Google Cloud services to help you answer the question of who did what, where, and when? within your Google Cloud projects.
Ref: https://cloud.google.com/iam/docs/job-functions/auditing#scenario_external_auditors

NEW QUESTION 221


After a recent security incident, your startup company wants better insight into what is happening in the Google Cloud environment. You need to monitor
unexpected firewall changes and instance creation. Your company prefers simple solutions. What should you do?

A. Use Cloud Logging filters to create log-based metrics for firewall and instance actions.
B. Monitor the changes and set up reasonable alerts.
C. Install Kibana on a compute instance.
D. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Pub/Sub.
E. Target the Pub/Sub topic to push messages to the Kibana instance.
F. Analyze the logs on Kibana in real time.
G. Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete events.
H. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud Storage. Use BigQuery to periodically analyze log events in
the storage bucket.

Answer: A

Explanation:
This answer is the simplest and most effective way to monitor unexpected firewall changes and instance creation in Google Cloud. Cloud Logging filters allow you
to specify the criteria for the log entries that you want to view or export. You can use the Logging query language to write filters based on the LogEntry fields, such
as resource.type, severity, or protoPayload.methodName. For example, you can filter for firewall-related events by using the following query:
resource.type=“gce_subnetwork” logName=“projects/PROJECT_ID/logs/compute.googleapis.com%2Ffirewall”
You can filter for instance-related events by using the following query: resource.type=“gce_instance”
logName=“projects/PROJECT_ID/logs/compute.googleapis.com%2Factivity_log”
You can create log-based metrics from these filters to measure the rate or count of log entries that match the filter. Log-based metrics can be used to create charts
and dashboards in Cloud Monitoring, or to set up alerts based on the metric values. For example, you can create an alert policy that triggers when the log-based
metric for firewall changes exceeds a certain threshold in a given time interval. This way, you can get notified of any unexpected or malicious changes to your
firewall rules.
Option B is incorrect because it is unnecessarily complex and costly. Installing Kibana on a compute instance requires additional configuration and maintenance.
Creating a log sink to forward Cloud Audit Logs to Pub/Sub also incurs additional charges for the Pub/Sub service. Analyzing the logs on Kibana in real time may
not be feasible or efficient, as it requires constant monitoring and manual intervention.
Option C is incorrect because Google Cloud firewall rules logging is a different feature from Cloud Audit Logs. Firewall rules logging allows you to audit, verify, and
analyze the effects of your firewall rules by creating connection records for each rule that applies to traffic. However, firewall rules logging does not log the insert,
update, or delete events for the firewall rules themselves. Those events are logged by Cloud Audit Logs, which record the administrative activities in your Google
Cloud project.
Option D is incorrect because it is not a real-time solution. Creating a log sink to forward Cloud Audit Logs to Cloud Storage requires additional storage space and
charges. Using BigQuery to periodically analyze log events in the storage bucket also incurs additional costs for the BigQuery service. Moreover, this option does
not provide any alerting mechanism to notify you of any unexpected or malicious changes to your firewall rules or instances.
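A hedged sketch of two such log-based metrics, with hypothetical metric names and deliberately simplified filters (the exact methodName values can be refined as needed):
$ gcloud logging metrics create firewall-changes \
    --description="Firewall rule admin activity" \
    --log-filter='resource.type="gce_firewall_rule"'
$ gcloud logging metrics create instance-creations \
    --description="Compute Engine instance insertions" \
    --log-filter='resource.type="gce_instance" AND protoPayload.methodName:"instances.insert"'
Alerting policies in Cloud Monitoring can then be attached to these metrics to notify the team when unexpected changes occur.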

NEW QUESTION 225


An external member of your team needs list access to compute images and disks in one of your projects. You want to follow Google-recommended practices when
you grant the required permissions to this user. What should you do?

A. Create a custom role, and add all the required compute.disks.list and compute.images.list permissions as includedPermissions.
B. Grant the custom role to the user at the project level.
C. Create a custom role based on the Compute Image User role. Add compute.disks.list to the includedPermissions field. Grant the custom role to the user at
the project level.
D. Grant the Compute Storage Admin role at the project level.
E. Create a custom role based on the Compute Storage Admin role.
F. Exclude unnecessary permissions from the custom role.
G. Grant the custom role to the user at the project level.

Answer: B

NEW QUESTION 226


Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has
hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to
consolidate all costs as of tomorrow. What should you do?

A. Link the acquired company’s projects to your company's billing account.


B. Configure the acquired company's billing account and your company's billing account to export the billing data into the same BigQuery dataset.
C. Migrate the acquired company’s projects into your company’s GCP organization.
D. Link the migrated projects to your company's billing account.
E. Create a new GCP organization and a new billing account.
F. Migrate the acquired company's projects and your company's projects into the new GCP organization and link the projects to the new billing account.

Answer: A

Explanation:
https://cloud.google.com/resource-manager/docs/project-migration#oauth_consent_screen https://cloud.google.com/resource-manager/docs/project-migration

NEW QUESTION 229


Your organization has strict requirements to control access to Google Cloud projects. You need to enable your Site Reliability Engineers (SREs) to approve
requests from the Google Cloud support team when an SRE opens a support case. You want to follow Google-recommended practices. What should you do?

A. Add your SREs to roles/iam.roleAdmin role.


B. Add your SREs to the roles/accessapproval.approver role.
C. Add your SREs to a group and then add this group to the roles/iam.roleAdmin role.
D. Add your SREs to a group and then add this group to the roles/accessapproval.approver role.

Answer: D

NEW QUESTION 230


You need to produce a list of the enabled Google Cloud Platform APIs for a GCP project using the gcloud command line in the Cloud Shell. The project name is
my-project. What should you do?

A. Run gcloud projects list to get the project ID, and then run gcloud services list --project <project ID>.
B. Run gcloud init to set the current project to my-project, and then run gcloud services list --available.
C. Run gcloud info to view the account value, and then run gcloud services list --account <Account>.
D. Run gcloud projects describe <project ID> to verify the project value, and then run gcloud services list--available.

Answer: A

Explanation:
`gcloud services list --available` returns not only the enabled services in the project but also services that CAN be enabled.
https://cloud.google.com/sdk/gcloud/reference/services/list#--available
Run the following command to list the enabled APIs and services in your current project: gcloud services list
whereas, run the following command to list the APIs and services available to you in your current project: gcloud services list --available
https://cloud.google.com/sdk/gcloud/reference/services/list#--available
--available
Return the services available to the project to enable. This list will include any services that the project has already enabled.
To list the services the current project has enabled for consumption, run: gcloud services list --enabled
To list the services the current project can enable for consumption, run: gcloud services list --available

NEW QUESTION 231


Your continuous integration and delivery (CI/CD) server can't execute Google Cloud actions in a specific project because of permission issues. You need to
validate whether the used service account has the appropriate roles in the specific project. What should you do?

A. Open the Google Cloud console, and run a query to determine which resources this service account can access.
B. Open the Google Cloud console, and run a query of the audit logs to find permission denied errors for this service account.
C. Open the Google Cloud console, and check the organization policies.
D. Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from
the folder or organization levels.


Answer: D

Explanation:
This answer is the most effective way to validate whether the service account used by the CI/CD server has the appropriate roles in the specific project. By
checking the IAM roles assigned to the service account, you can see which permissions the service account has and which resources it can access. You can also
check if the service account inherits any roles from the folder or organization levels, which may affect its access to the project. You can use the Google Cloud
console, the gcloud command-line tool, or the IAM API to view the IAM roles of a service account.
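One way to list those roles from the CLI, with a hypothetical project ID and service account address:
$ gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:ci-cd@my-project.iam.gserviceaccount.com" \
    --format="table(bindings.role)"
Remember to repeat the check at the folder and organization levels if roles may be inherited from there.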

NEW QUESTION 235


You have created a new project in Google Cloud through the gcloud command line interface (CLI) and linked a billing account. You need to create a new Compute
Engine instance using the CLI. You need to perform the prerequisite steps. What should you do?

A. Create a Cloud Monitoring Workspace.


B. Create a VPC network in the project.
C. Enable the compute.googleapis.com API.
D. Grant yourself the IAM role of Compute Admin.

Answer: D

NEW QUESTION 236


You need to immediately change the storage class of an existing Google Cloud bucket. You need to reduce service cost for infrequently accessed files stored in
that bucket and for all files that will be added to that bucket in the future. What should you do?

A. Use gsutil to rewrite the storage class for the bucket. Change the default storage class for the bucket.
B. Use gsutil to rewrite the storage class for the bucket. Set up Object Lifecycle Management on the bucket.
C. Create a new bucket and change the default storage class for the bucket. Set up Object Lifecycle Management on the bucket.
D. Create a new bucket and change the default storage class for the bucket. Import the files from the previous bucket into the new bucket.

Answer: B
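For the "rewrite the storage class" part that options A and B share, a sketch with a hypothetical bucket name and Nearline as the assumed target class:
$ gsutil -m rewrite -s nearline gs://my-bucket/**
Handling files added in the future is then done either by changing the bucket's default storage class or by an Object Lifecycle Management rule, depending on which option you follow.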

NEW QUESTION 239


You are the team lead of a group of 10 developers. You provided each developer with an individual Google Cloud Project that they can use as their personal
sandbox to experiment with different Google Cloud solutions. You want to be notified if any of the developers are spending above $500 per month on their sandbox
environment. What should you do?

A. Create a single budget for all projects and configure budget alerts on this budget.
B. Create a separate billing account per sandbox project and enable BigQuery billing export.
C. Create a Data Studio dashboard to plot the spending per billing account.
D. Create a budget per project and configure budget alerts on all of these budgets.
E. Create a single billing account for all sandbox projects and enable BigQuery billing export.
F. Create a Data Studio dashboard to plot the spending per project.

Answer: C

Explanation:
Set budgets and budget alerts. Overview: avoid surprises on your bill by creating Cloud Billing budgets to monitor all of your Google Cloud charges in one place. A budget enables you to track your actual Google Cloud spend against your planned spend. After you've set a budget amount, you set budget alert threshold rules that are used to trigger email notifications. Budget alert emails help you stay informed about how your spend is tracking against your budget.
Set budget scope: set the budget Scope and then click Next. In the Projects field, select one or more projects that you want to apply the budget alert to. To apply the budget alert to all the projects in the Cloud Billing account, choose Select all.
https://cloud.google.com/billing/docs/how-to/budgets#budget-scope

NEW QUESTION 240


Your management has asked an external auditor to review all the resources in a specific project. The security team has enabled the Organization Policy called
Domain Restricted Sharing on the organization node by specifying only your Cloud Identity domain. You want the auditor to only be able to view, but not modify,
the resources in that project. What should you do?

A. Ask the auditor for their Google account, and give them the Viewer role on the project.
B. Ask the auditor for their Google account, and give them the Security Reviewer role on the project.
C. Create a temporary account for the auditor in Cloud Identity, and give that account the Viewer role on the project.
D. Create a temporary account for the auditor in Cloud Identity, and give that account the Security Reviewer role on the project.

Answer: C

Explanation:
Using primitive roles: the documentation lists the primitive roles that you can grant to access a project, the description of what each role does, and the permissions bundled within that role. Avoid using primitive roles except when absolutely necessary. These roles are very powerful, and include a large number of permissions across all Google Cloud services. For more details on when you should use primitive roles, see the Identity and Access Management FAQ. IAM predefined roles are much more granular, and allow you to carefully manage the set of permissions that your users have access to. See Understanding Roles for a list of roles that can be granted at the project level. Creating custom roles can further increase the control you have over user permissions.
https://cloud.google.com/resource-manager/docs/access-control-proj#using_primitive_roles
https://cloud.google.com/iam/docs/understanding-custom-roles

NEW QUESTION 244


You have created a code snippet that should be triggered whenever a new file is uploaded to a Cloud Storage bucket. You want to deploy this code snippet. What
should you do?

A. Use App Engine and configure Cloud Scheduler to trigger the application using Pub/Sub.


B. Use Cloud Functions and configure the bucket as a trigger resource.


C. Use Google Kubernetes Engine and configure a CronJob to trigger the application using Pub/Sub.
D. Use Dataflow as a batch job, and configure the bucket as a data source.

Answer: B

Explanation:
Google Cloud Storage Triggers
Cloud Functions can respond to change notifications emerging from Google Cloud Storage. These notifications can be configured to trigger in response to various
events inside a bucket—object creation, deletion, archiving and metadata updates.
Note: Cloud Functions can only be triggered by Cloud Storage buckets in the same Google Cloud Platform project.
Event types
Cloud Storage events used by Cloud Functions are based on Cloud Pub/Sub Notifications for Google Cloud Storage and can be configured in a similar way.
Supported trigger type values are: google.storage.object.finalize, google.storage.object.delete, google.storage.object.archive, and google.storage.object.metadataUpdate.
Object Finalize
Trigger type value: google.storage.object.finalize
This event is sent when a new object is created (or an existing object is overwritten, and a new generation of that object is created) in the bucket.
https://cloud.google.com/functions/docs/calling/storage#event_types
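A minimal sketch of a deployment with a Cloud Storage trigger, using hypothetical function, bucket, runtime, and entry-point names:
$ gcloud functions deploy on-new-file \
    --runtime=python310 --region=us-central1 \
    --entry-point=handle_upload \
    --trigger-resource=my-upload-bucket \
    --trigger-event=google.storage.object.finalize
The finalize event fires once per newly written object, which is exactly the "new file uploaded" condition in the question.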

NEW QUESTION 248


You have a managed instance group comprised of preemptible VMs. All of the VMs keep deleting and
recreating themselves every minute. What is a possible cause of this behavior?

A. Your zonal capacity is limited, causing all preemptible VMs to be shut down to recover capacity.
B. Try deploying your group to another zone.
C. You have hit your instance quota for the region.
D. Your managed instance group's VMs are toggled to only last 1 minute in preemptible settings.
E. Your managed instance group's health check is repeatedly failing, either due to a misconfigured health check or misconfigured firewall rules not allowing the
health check to access the instances.

Answer: D

Explanation:
The instances (normal or preemptible) would be terminated and relaunched if the health check fails, either because the application is not configured properly or because the
instances' firewall rules do not allow the health check to reach them.
GCP provides health check systems that connect to virtual machine (VM) instances on a configurable, periodic basis. Each connection attempt is called a probe.
GCP records the success or failure of each probe.
Health checks and load balancers work together. Based on a configurable number of sequential successful or failed probes, GCP computes an overall health state
for each VM in the load balancer. VMs that respond successfully for the configured number of times are considered healthy. VMs that fail to respond successfully
for a separate number of times are unhealthy.
GCP uses the overall health state of each VM to determine its eligibility for receiving new requests. In addition to being able to configure probe frequency and
health state thresholds, you can configure the criteria that define a successful probe.
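As a sketch of how the fix might look (the health check name, port, network, and thresholds are placeholder values), the firewall must allow probes from Google's documented health-check source ranges 130.211.0.0/22 and 35.191.0.0/16:

# Create an HTTP health check for the managed instance group's autohealing policy
gcloud compute health-checks create http mig-health-check \
    --port=80 \
    --check-interval=30s \
    --healthy-threshold=2 \
    --unhealthy-threshold=3

# Allow the health-check probe ranges to reach the instances
gcloud compute firewall-rules create allow-health-checks \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16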

NEW QUESTION 250


You need to set up a policy so that videos stored in a specific Cloud Storage Regional bucket are moved to Coldline after 90 days, and then deleted after one year
from their creation. How should you set up the policy?

A. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 – 90).
B. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days.
C. Use gsutil rewrite and set the Delete action to 275 days (365 – 90).
D. Use gsutil rewrite and set the Delete action to 365 days.

Answer: B

Explanation:
Object Lifecycle Management Age conditions are measured from an object's creation time, so the Delete action must be set to 365 days, not 275. The object's time spent at the original storage class counts toward any minimum storage duration that applies for the new storage class.
https://cloud.google.com/storage/docs/lifecycle#setstorageclass-cost
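A minimal lifecycle configuration implementing this policy might look like the following (the bucket name is a placeholder); the JSON is saved as lifecycle.json and applied with gsutil:

# lifecycle.json
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 90}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}

# Apply the configuration to the bucket
gsutil lifecycle set lifecycle.json gs://my-video-bucket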

NEW QUESTION 253


You are migrating a business critical application from your local data center into Google Cloud. As part of your high-availability strategy, you want to ensure that
any data used by the application will be immediately available if a zonal failure occurs. What should you do?

A. Store the application data on a zonal persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
B. Store the application data on a zonal persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
C. Store the application data on a regional persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
D. Store the application data on a regional persistent disk. If an outage occurs, create an instance in another zone with this disk attached.

Answer: D

Explanation:
Regional persistent disks synchronously replicate data between two zones in the same region, so the data is immediately available to a VM in the second zone if a zonal failure occurs. Snapshot-based recovery loses any data written since the last snapshot and takes additional time to restore.
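A sketch of this approach with gcloud (the disk name, size, type, region, zones, and VM name are placeholder values):

# Create a regional persistent disk replicated across two zones in the region
gcloud compute disks create app-data-disk \
    --size=200GB \
    --type=pd-ssd \
    --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b

# After a zonal failure, force-attach the regional disk to a VM in the surviving zone
gcloud compute instances attach-disk standby-vm \
    --zone=us-central1-b \
    --disk=app-data-disk \
    --disk-scope=regional \
    --force-attach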

NEW QUESTION 256


Your organization has three existing Google Cloud projects. You need to bill the Marketing department for only their Google Cloud services for a new initiative
within their group. What should you do?


A. 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud project for the Marketing department. 2. Link the new project to a Marketing billing account.
B. 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud account. 2. Create a new Google Cloud project for the Marketing department. 3. Set the default key-value project labels to department: marketing for all services in this project.
C. 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account. 2. Create a new Google Cloud project for the Marketing department. 3. Link the new project to a Marketing billing account.
D. 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account. 2. Create a new Google Cloud project for the Marketing department. 3. Set the default key-value project labels to department: marketing for all services in this project.

Answer: A
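For reference, creating the Marketing project and linking it to its billing account could be sketched as follows (the project ID, organization ID, and billing account ID are placeholder values, and the commands assume you hold the required billing permissions):

# Create the new project for the Marketing department
gcloud projects create marketing-initiative --organization=123456789012

# Link the new project to the Marketing billing account
gcloud billing projects link marketing-initiative \
    --billing-account=0X0X0X-0X0X0X-0X0X0X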

NEW QUESTION 260


......


Related Links

100% Pass Your Associate-Cloud-Engineer Exam with Exambible Prep Materials

https://www.exambible.com/Associate-Cloud-Engineer-exam/

Contact us

We are proud of our high-quality customer service, which serves you around the clock 24/7.

Visit - https://www.exambible.com/
