Associate Cloud Engineer - 5
Google
Exam Questions Associate-Cloud-Engineer
Google Cloud Certified - Associate Cloud Engineer
About Exambible
Founded in 1998
Exambible is a company specializing in high-quality IT exam practice study materials, especially Cisco CCNA, CCDA,
CCNP, CCIE, Checkpoint CCSE, and CompTIA A+ and Network+ certification practice exams. We guarantee that
candidates will not only pass any IT exam on the first attempt but also gain a profound understanding of the certifications they
earn. There are many similar companies in this industry; however, Exambible has unique advantages that other companies cannot
match.
Our Advantages
* 99.9% Uptime
All examinations will be up to date.
* 24/7 Quality Support
We will provide service round the clock.
* 100% Pass Rate
Our guarantee that you will pass the exam.
* Unique Guarantee
If you do not pass the exam on the first attempt, we will not only arrange a FULL REFUND for you, but also provide you another
exam of your choice, ABSOLUTELY FREE!
NEW QUESTION 1
You recently discovered that your developers are using many service account keys during their development process. While you work on a long-term
improvement, you need to quickly implement a process to enforce short-lived service account credentials in your company. You have the following requirements:
• All service accounts that require a key should be created in a centralized project called pj-sa.
• Service account keys should only be valid for one day.
You need a Google-recommended solution that minimizes cost. What should you do?
A. Implement a Cloud Run job to rotate all service account keys periodically in pj-sa. Enforce an org policy to deny service account key creation with an exception to pj-sa.
B. Implement a Kubernetes Cronjob to rotate all service account keys periodically. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
C. Enforce an org policy constraint allowing the lifetime of service account keys to be 24 hours. Enforce an org policy constraint denying service account key creation with an exception on pj-sa.
D. Enforce a DENY org policy constraint over the lifetime of service account keys for 24 hours. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
Answer: C
Explanation:
According to the Google Cloud documentation, you can use organization policy constraints to control the creation and expiration of service account keys. The
constraints are:
constraints/iam.allowServiceAccountKeyCreation: This constraint allows you to specify which projects
or folders can create service account keys. You can set the value to true or false, or use a condition to apply the constraint to specific service accounts. By setting
this constraint to false for the organization and adding an exception for the pj-sa project, you can prevent developers from creating service account keys in other
projects.
constraints/iam.serviceAccountKeyMaxLifetime: This constraint allows you to specify the maximum lifetime of service account keys. You can set the value to a
duration in seconds, such as 86400 for one day. By setting this constraint to 86400 for the organization, you can ensure that all service account keys expire after one
day.
These constraints are recommended by Google Cloud as best practices to minimize the risk of service account key misuse or compromise. They also help you
reduce the cost of managing service account keys, as you do not need to implement a custom solution to rotate or delete them.
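As a rough sketch only (the exact constraint names should be verified against current documentation; here the commonly used boolean constraint iam.disableServiceAccountKeyCreation and a placeholder ORG_ID are assumed), the key-creation part could be enforced with the gcloud CLI like this:
$ gcloud resource-manager org-policies enable-enforce constraints/iam.disableServiceAccountKeyCreation --organization=ORG_ID
$ gcloud resource-manager org-policies disable-enforce constraints/iam.disableServiceAccountKeyCreation --project=pj-sa
A separate list-type constraint (see the explanation above) would then cap the key lifetime at 24 hours.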
References:
1: Associate Cloud Engineer Certification Exam Guide | Learn - Google Cloud
5: Create and delete service account keys - Google Cloud
Organization policy constraints for service accounts
NEW QUESTION 2
Your organization uses Active Directory (AD) to manage user identities. Each user uses this identity for federated access to various on-premises systems. Your
security team has adopted a policy that requires users to log into Google Cloud with their AD identity instead of their own login. You want to follow the
Google-recommended practices to implement this policy. What should you do?
A. Sync Identities with Cloud Directory Sync, and then enable SAML for single sign-on
B. Sync Identities in the Google Admin console, and then enable Oauth for single sign-on
C. Sync identities with 3rd party LDAP sync, and then copy passwords to allow simplified login with the same credentials.
D. Sync identities with Cloud Directory Sync, and then copy passwords to allow simplified login with the same credentials.
Answer: A
NEW QUESTION 3
You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver
Monitoring dashboard. What should you do?
A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.
B. For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.
C. Configure a single Stackdriver account, and link all projects to the same account.
D. Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.
Answer: C
Explanation:
When you initially click on Monitoring (Stackdriver Monitoring), it creates a workspace (a Stackdriver account) linked to the ACTIVE (CURRENT) project from which it
was clicked.
If you then change the project and click on Monitoring again, it creates another workspace (a Stackdriver account) linked to the new ACTIVE (CURRENT) project. We do not want this, as it would not consolidate our results into a single dashboard (workspace/Stackdriver account).
If you have accidentally created two different workspaces, merge them under Monitoring > Settings > Merge Workspaces > MERGE.
If we have only one workspace and two projects, we can simply add the other GCP project under Monitoring > Settings > GCP Projects > Add GCP Projects.
https://cloud.google.com/monitoring/settings/multiple-projects
Nothing about groups https://cloud.google.com/monitoring/settings?hl=en
NEW QUESTION 4
Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the
solution. What should you do?
Answer: B
NEW QUESTION 5
You have a developer laptop with the Cloud SDK installed on Ubuntu. The Cloud SDK was installed from the Google Cloud Ubuntu package repository. You want
to test your application locally on your laptop with Cloud Datastore. What should you do?
Answer: D
Explanation:
The Datastore emulator provides local emulation of the production Datastore environment. You can use the emulator to develop and test your application
locallyRef: https://cloud.google.com/datastore/docs/tools/datastore-emulator
NEW QUESTION 6
Your company has a large quantity of unstructured data in different file formats. You want to perform ETL transformations on the data. You need to make the data
accessible on Google Cloud so it can be processed by a Dataflow job. What should you do?
Answer: B
Explanation:
"large quantity" : Cloud Storage or BigQuery "files" a file is nothing but an Object
NEW QUESTION 7
You need to create a custom VPC with a single subnet. The subnet’s range must be as large as possible. Which range should you use?
A. 1.00.0.0/0
B. 10.0.0.0/8
C. 172.16.0.0/12
D. 192.168.0.0/16
Answer: B
Explanation:
https://cloud.google.com/vpc/docs/vpc#manually_created_subnet_ip_ranges
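For illustration only (the network and subnet names are hypothetical), a custom-mode VPC with a single subnet using this range could be created like this:
$ gcloud compute networks create my-custom-vpc --subnet-mode=custom
$ gcloud compute networks subnets create my-subnet --network=my-custom-vpc --region=us-central1 --range=10.0.0.0/8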
NEW QUESTION 8
You are developing a new application and are looking for a Jenkins installation to build and deploy your source code. You want to automate the installation as
quickly and easily as possible. What should you do?
Answer: A
Explanation:
Installing Jenkins
In this section, you use Cloud Marketplace to provision a Jenkins instance. You customize this instance to use the agent image you created in the previous
section.
Go to the Cloud Marketplace solution for Jenkins. Click Launch on Compute Engine.
Change the Machine Type field to 4 vCPUs 15 GB Memory, n1-standard-4.
Machine type selection for Jenkins deployment.
Click Deploy and wait for your Jenkins instance to finish being provisioned. When it is finished, you will see: Jenkins has been deployed.
https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine#installing_jenkins
NEW QUESTION 9
Your company has developed a new application that consists of multiple microservices. You want to deploy the application to Google Kubernetes Engine (GKE),
and you want to ensure that the cluster can scale as more applications are deployed in the future. You want to avoid manual intervention when each new
application is deployed. What should you do?
Answer: C
Explanation:
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler#adding_a_node_pool_with_autoscaling
NEW QUESTION 10
Your company is moving its entire workload to Compute Engine. Some servers should be accessible through the Internet, and other servers should only be
accessible over the internal network. All servers need to be able to talk to each other over specific ports and protocols. The current on-premises network relies on
a demilitarized zone (DMZ) for the public servers and a Local Area Network (LAN) for the private servers. You need to design the networking infrastructure on
Google Cloud to match these requirements. What should you do?
A. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LA
B. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
C. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LA
D. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
E. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LA
F. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
G. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LA
H. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
Answer: C
Explanation:
https://cloud.google.com/vpc/docs/vpc-peering
NEW QUESTION 10
Your organization has a dedicated person who creates and manages all service accounts for Google Cloud projects. You need to assign this person the minimum
role for projects. What should you do?
Answer: D
NEW QUESTION 11
You want to add a new auditor to a Google Cloud Platform project. The auditor should be allowed to read, but not modify, all project items.
How should you configure the auditor's permissions?
Answer: C
NEW QUESTION 13
Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project.
What should you do?
A. Add the group for the finance team to the roles/billing.user role.
B. Add the group for the finance team to the roles/billing.admin role.
C. Add the group for the finance team to the roles/billing.viewer role.
D. Add the group for the finance team to the roles/billing.projectManager role.
Answer: C
Explanation:
"Billing Account Viewer access would usually be granted to finance teams, it provides access to spend information, but does not confer the right to link or unlink
projects or otherwise manage the properties of the billing account." https://cloud.google.com/billing/docs/how-to/billing-access
NEW QUESTION 18
You have been asked to set up the billing configuration for a new Google Cloud customer. Your customer wants to group resources that share common IAM
policies. What should you do?
Answer: B
Explanation:
Folders are nodes in the Cloud Platform Resource Hierarchy. A folder can contain projects, other folders, or a combination of both. Organizations can use folders
to group projects under the organization node in a hierarchy. For example, your organization might contain multiple departments, each with its own set of Google
Cloud resources. Folders allow you to group these resources on a per-department basis. Folders are used to group resources that share common IAM policies.
While a folder can contain multiple folders or resources, a given folder or resource can have exactly one parent.
https://cloud.google.com/resource-manager/docs/creating-managing-folders
NEW QUESTION 23
You received a JSON file that contained a private key of a Service Account in order to get access to several resources in a Google Cloud project. You downloaded
and installed the Cloud SDK and want to use this private key for authentication and authorization when performing gcloud commands. What should you do?
A. Use the command gcloud auth login and point it to the private key
B. Use the command gcloud auth activate-service-account and point it to the private key
C. Place the private key file in the installation directory of the Cloud SDK and rename it to "credentials.json".
D. Place the private key file in your home directory and rename it to "GOOGLE_APPLICATION_CREDENTIALS".
Answer: B
Explanation:
Authorizing with a service account
gcloud auth activate-service-account authorizes access using a service account. As with gcloud init and gcloud auth login, this command saves the service
account credentials to the local system on successful completion and sets the specified account as the active account in your Cloud SDK configuration.
https://cloud.google.com/sdk/docs/authorizing#authorizing_with_a_service_account
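A minimal illustration (the service account e-mail and key file path are placeholders):
$ gcloud auth activate-service-account sa-name@my-project.iam.gserviceaccount.com --key-file=/path/to/key.json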
NEW QUESTION 24
Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging for one VM that one of the processes is not
responsive. You want to replace this VM in the MIG quickly. What should you do?
A. Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.
B. Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.
C. Use the gcloud compute instances update command with a REFRESH action for the VM.
D. Update and apply the instance template of the MIG.
Answer: A
NEW QUESTION 25
You are building a data lake on Google Cloud for your Internet of Things (IoT) application. The IoT application has millions of sensors that are constantly streaming
structured and unstructured data to your backend in the cloud. You want to build a highly available and resilient architecture based on
Google-recommended practices. What should you do?
A. Stream data to Pub/Sub, and use Dataflow to send data to Cloud Storage.
B. Stream data to Pub/Sub, and use Storage Transfer Service to send data to BigQuery.
C. Stream data to Dataflow, and use Storage Transfer Service to send data to BigQuery.
D. Stream data to Dataflow, and use Dataprep by Trifacta to send data to Bigtable.
Answer: B
NEW QUESTION 27
You deployed an LDAP server on Compute Engine that is reachable via TLS through port 636 using UDP. You want to make sure it is reachable by clients over
that port. What should you do?
A. Add the network tag allow-udp-636 to the VM instance running the LDAP server.
B. Create a route called allow-udp-636 and set the next hop to be the VM instance running the LDAP server.
C. Add a network tag of your choice to the instance. Create a firewall rule to allow ingress on UDP port 636 for that network tag.
D. Add a network tag of your choice to the instance running the LDAP server. Create a firewall rule to allow egress on UDP port 636 for that network tag.
Answer: C
Explanation:
A tag is simply a character string added to a tags field in a resource, such as Compute Engine virtual machine (VM) instances or instance templates. A tag is not a
separate resource, so you cannot create it separately. All resources with that string are considered to have that tag. Tags enable you to make firewall rules and
routes applicable to specific VM instances.
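A rough sketch of the idea (the instance, zone, tag, and rule names are made up for illustration):
$ gcloud compute instances add-tags ldap-server --zone=us-central1-a --tags=ldap-udp
$ gcloud compute firewall-rules create allow-ldap-udp-636 --network=default --direction=INGRESS --action=ALLOW --rules=udp:636 --target-tags=ldap-udp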
NEW QUESTION 32
You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single
Compute Engine zone, eliminate downtime, and minimize cost. What should you do?
A. - Create Compute Engine resources in us-central1-b. - Balance the load across both us-central1-a and us-central1-b.
B. - Create a Managed Instance Group and specify us-central1-a as the zone. - Configure the Health Check with a short Health Interval.
C. - Create an HTTP(S) Load Balancer. - Create one or more global forwarding rules to direct traffic to your VMs.
D. - Perform regular backups of your application. - Create a Cloud Monitoring Alert and be notified if your application becomes unavailable. - Restore from backups when notified.
Answer: A
Explanation:
Choosing a region and zone: You choose which region or zone hosts your resources, which controls where your data is stored and used. Choosing a region and
zone is important for several reasons:
Handling failures
Distribute your resources across multiple zones and regions to tolerate outages. Google designs zones to be independent from each other: a zone usually has
power, cooling, networking, and control planes that are isolated from other zones, and most single failure events will affect only a single zone. Thus, if a zone
becomes unavailable, you can transfer traffic to another zone in the same region to keep your services running. Similarly, if a region experiences any disturbances,
you should have backup services running in a different region. For more information about distributing your resources and designing a robust system, see
Designing Robust Systems.
Decreased network latency
To decrease network latency, you might want to choose a region or zone that is close to your point of service.
https://cloud.google.com/compute/docs/regions-zones#choosing_a_region_and_zone
NEW QUESTION 35
A team of data scientists infrequently needs to use a Google Kubernetes Engine (GKE) cluster that you manage. They require GPUs for some long-running, non-
restartable jobs. You want to minimize cost. What should you do?
Answer: A
Explanation:
auto-provisioning = Attaches and deletes node pools to cluster based on the requirements. Hence creating a GPU node pool, and auto-scaling would be better
https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning
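One possible sketch, assuming the cluster already exists (the pool name, cluster name, zone, GPU type, and node limits are illustrative):
$ gcloud container node-pools create gpu-pool --cluster=my-cluster --zone=us-central1-a --accelerator=type=nvidia-tesla-t4,count=1 --enable-autoscaling --min-nodes=0 --max-nodes=3
With min-nodes set to 0, the GPU nodes only exist (and only incur cost) while the data scientists' jobs are running.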
NEW QUESTION 40
You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow
Google-recommended practices. What should you do?
Answer: C
Explanation:
https://cloud.google.com/iam/docs/understanding-service-accounts#using_service_accounts_with_compute_engine
https://cloud.google.com/storage/docs/access-control/iam-roles
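A hedged example of the idea (the service account, project, and bucket names are placeholders):
$ gcloud iam service-accounts create bucket-writer --project=my-project
$ gsutil iam ch serviceAccount:bucket-writer@my-project.iam.gserviceaccount.com:roles/storage.objectCreator gs://my-bucket
The instances would then be created with --service-account=bucket-writer@my-project.iam.gserviceaccount.com and appropriate access scopes.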
NEW QUESTION 42
Your development team needs a new Jenkins server for their project. You need to deploy the server using the fewest steps possible. What should you do?
A. Download and deploy the Jenkins Java WAR to App Engine Standard.
B. Create a new Compute Engine instance and install Jenkins through the command line interface.
C. Create a Kubernetes cluster on Compute Engine and create a deployment with the Jenkins Docker image.
D. Use GCP Marketplace to launch the Jenkins solution.
Answer: D
NEW QUESTION 45
A company wants to build an application that stores images in a Cloud Storage bucket and wants to generate thumbnails as well as resize the images. They want
to use a Google-managed service that can scale up and scale down to zero automatically with minimal effort. You have been asked to recommend a service.
Which GCP service would you suggest?
Answer: C
Explanation:
Cloud Functions is Google Cloud’s event-driven serverless compute platform. It automatically scales based on the load and requires no additional configuration.
You pay only for the resources used.
Ref: https://cloud.google.com/functions
While all other options, i.e. Google Compute Engine, Google Kubernetes Engine, and Google App Engine, support autoscaling, it needs to be configured explicitly based
on the load and is not as trivial as the scale up or scale down offered by Google's Cloud Functions.
NEW QUESTION 48
You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in
the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?
A. Configure the username and password by using the gcloud config set proxy/username and gcloud config set proxy/password commands.
B. Encode the username and password in sha256 encoding, and save it to a text file. Use the filename as a value in the gcloud config set core/custom_ca_certs_file command.
C. Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configuration file.
D. Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.
Answer: D
NEW QUESTION 51
Your company has workloads running on Compute Engine and on-premises. The Google Cloud Virtual Private Cloud (VPC) is connected to your WAN over a
Virtual Private Network (VPN). You need to deploy a new Compute Engine instance and ensure that no public Internet traffic can be routed to it. What should you
do?
Answer: A
Explanation:
VMs cannot communicate over the internet without a public IP address. Private Google Access permits access to Google APIs and services in Google's production
infrastructure.
https://cloud.google.com/vpc/docs/private-google-access
NEW QUESTION 55
You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to
ensure that the clusters can grow in nodes when needed. What should you do?
A. Create a new subnet in the same region as the subnet being used.
B. Add an alias IP range to the subnet used by the GKE clusters.
C. Create a new VPC, and set up VPC peering with the existing VPC.
D. Expand the CIDR range of the relevant subnet for the cluster.
Answer: D
Explanation:
gcloud compute networks subnets expand-ip-range NAME gcloud compute networks subnets expand-ip-range
- expand the IP range of a Compute Engine subnetwork https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range
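For example (the subnet name, region, and new prefix length are placeholders):
$ gcloud compute networks subnets expand-ip-range my-subnet --region=us-central1 --prefix-length=20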
NEW QUESTION 58
You’ve deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below:
You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices. What should
you do?
A. Store the database password inside the Docker image of the container, not in the YAML file.
B. Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret.
C. Store the database password inside a ConfigMap object. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap.
D. Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.
Answer: B
Explanation:
https://cloud.google.com/config-connector/docs/how-to/secrets#gcloud
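A minimal sketch of the idea (the secret name, key, and value are hypothetical):
$ kubectl create secret generic myapp1-db --from-literal=password=REDACTED
Then, in the container spec of the Deployment YAML, the environment variable is populated from the Secret:
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: myapp1-db
      key: password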
NEW QUESTION 60
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will
run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?
Answer: B
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns
DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets
automatically add Pods to the new nodes as needed.
In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you
add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed. So, this is a perfect fit for our monitoring pod.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples
of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd. For example, you could have
DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use
different configurations for different hardware types and resource needs.
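A bare-bones sketch of such a DaemonSet (the name and the third-party agent image are placeholders):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metrics-agent
spec:
  selector:
    matchLabels:
      app: metrics-agent
  template:
    metadata:
      labels:
        app: metrics-agent
    spec:
      containers:
      - name: metrics-agent
        image: example.com/metrics-agent:latest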
NEW QUESTION 64
You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?
A. After the VM has been created, use your Google Account credentials to log in into the VM.
B. After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
C. When creating the VM, add metadata to the instance using ‘windows-password’ as the key and a password as the value.
D. After the VM has been created, download the JSON private key for the default Compute Engine service accoun
E. Use the credentials in the JSON file to log in to the VM.
Answer: B
Explanation:
You can generate Windows passwords using either the Google Cloud Console or the gcloud command-line tool. This option uses the right syntax to reset the
windows password.
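For example (the instance name, zone, and username are placeholders):
$ gcloud compute reset-windows-password my-windows-vm --zone=us-central1-a --user=admin
The command prints the username and a newly generated password to use over RDP.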
NEW QUESTION 68
You have an application that runs on Compute Engine VM instances in a custom Virtual Private Cloud (VPC). Your company's security policies only allow the use
of internal IP addresses on VM instances and do not let VM instances connect to the internet. You need to ensure that the application can access a file hosted in a
Cloud Storage bucket within your project. What should you do?
Answer: A
NEW QUESTION 71
You are hosting an application on bare-metal servers in your own data center. The application needs access to Cloud Storage. However, security policies prevent
the servers hosting the application from having public IP addresses or access to the internet. You want to follow Google-recommended practices to provide the
application with access to Cloud Storage. What should you do?
A. 1. Use nslookup to get the IP address for storage.googleapis.com.2. Negotiate with the security team to be able to give a public IP address to the servers.3.
Only allow egress traffic from those servers to the IP addresses for storage.googleapis.com.
B. 1. Using Cloud VPN, create a VPN tunnel to a Virtual Private Cloud (VPC) in Google Cloud Platform (GCP).2. In this VPC, create a Compute Engine instance
and install the Squid proxy server on this instance.3. Configure your servers to use that instance as a proxy to access Cloud Storage.
C. 1. Use Migrate for Compute Engine (formerly known as Velostrata) to migrate those servers to Compute Engine.2. Create an internal load balancer (ILB) that
uses storage.googleapis.com as backend.3. Configure your new instances to use this ILB as proxy.
D. 1. Using Cloud VPN or Interconnect, create a tunnel to a VPC in GCP.2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30.
Announce that network to your on-premises network through the VPN tunnel.3. In your on-premises network, configure your DNS server to
resolve*.googleapis.com as a CNAME to restricted.googleapis.com.
Answer: D
Explanation:
Our requirement is to follow Google recommended practices to achieve the end result. Configuring Private Google Access for On-Premises Hosts is best achieved
by VPN/Interconnect + Advertise Routes + Use restricted Google IP Range.
Using Cloud VPN or Interconnect, create a tunnel to a VPC in GCP
Using Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network through the VPN tunnel.
In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com is the right answer right, and it
is what Google recommends.
Ref: https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid
You must configure routes so that Google API traffic is forwarded through your Cloud VPN or Cloud Interconnect connection, firewall rules on your on-premises
firewall to allow the outgoing traffic, and DNS so that traffic to Google APIs resolves to the IP range you've added to your routes.
You can use Cloud Router Custom Route Advertisement to announce the Restricted Google APIs IP addresses through Cloud Router to your on-premises
network. The Restricted Google APIs IP range is 199.36.153.4/30. While this is technically a public IP range, Google does not announce it publicly. This IP range
is only accessible to hosts that can reach your Google Cloud projects through internal IP ranges, such as through a Cloud VPN or Cloud Interconnect connection.
Without having a public IP address or access to the internet, the only way you could connect to cloud storage is if you have an internal route to it.
So Negotiate with the security team to be able to give public IP addresses to the servers is not right.
Following Google-recommended practices is synonymous with using Google's services (not quite, but it is, at least for the exam!).
So In this VPC, create a Compute Engine instance and install the Squid proxy server on this instance is not right.
Migrating the VM to Compute Engine is a bit drastic when Google says it is perfectly fine to have Hybrid Connectivity architectures
https://cloud.google.com/hybrid-connectivity.
So,
Use Migrate for Compute Engine (formerly known as Velostrata) to migrate these servers to Compute Engine is not right.
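As an illustrative fragment only (the router name and region are placeholders), the custom route advertisement from step 2 might be configured like this:
$ gcloud compute routers update my-router --region=us-central1 --advertisement-mode=CUSTOM --set-advertisement-ranges=199.36.153.4/30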
NEW QUESTION 73
An employee was terminated, but their access to Google Cloud Platform (GCP) was not removed until 2 weeks later. You need to find out whether this employee accessed
any sensitive customer information after their termination. What should you do?
Answer: C
Explanation:
https://cloud.google.com/logging/docs/audit
Data Access audit logs Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create,
modify, or read user-provided resource data.
https://cloud.google.com/logging/docs/audit#data-access
NEW QUESTION 74
You have an application that looks for its licensing server on the IP 10.0.3.21. You need to deploy the licensing server on Compute Engine. You do not want to
change the configuration of the application and want the application to be able to reach the licensing server. What should you do?
A. Reserve the IP 10.0.3.21 as a static internal IP address using gcloud and assign it to the licensing server.
B. Reserve the IP 10.0.3.21 as a static public IP address using gcloud and assign it to the licensing server.
C. Use the IP 10.0.3.21 as a custom ephemeral IP address and assign it to the licensing server.
D. Start the licensing server with an automatic ephemeral IP address, and then promote it to a static internal IP address.
Answer: A
Explanation:
IP 10.0.3.21 is internal by default, and to ensure that it will be static non-changing it should be selected as static internal ip address.
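A rough sketch of the reservation (the address name, region, and subnet are placeholders):
$ gcloud compute addresses create licensing-server-ip --region=us-central1 --subnet=my-subnet --addresses=10.0.3.21
The licensing server instance would then be created with --private-network-ip=10.0.3.21.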
NEW QUESTION 78
You have deployed multiple Linux instances on Compute Engine. You plan on adding more instances in the coming weeks. You want to be able to access all of
these instances through your SSH client over the Internet without having to configure specific access on the existing and new instances. You do not want the
Compute Engine instances to have a public IP. What should you do?
Answer: B
Explanation:
https://cloud.google.com/iap/docs/using-tcp-forwarding
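For example, once the IAP TCP forwarding firewall rule is in place, SSH can be tunneled through IAP (the instance name and zone are placeholders):
$ gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap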
NEW QUESTION 83
You need to enable traffic between multiple groups of Compute Engine instances that are currently running in two different GCP projects. Each group of Compute
Engine instances is running in its own VPC. What should you do?
Answer: B
Explanation:
Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate
with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one
or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use
subnets in the Shared VPC network
https://cloud.google.com/vpc/docs/shared-vpc
"For example, an existing instance in a service project cannot be reconfigured to use a Shared VPC network, but a new instance can be created to use available
subnets in a Shared VPC network."
NEW QUESTION 86
Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the
application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a
Google Kubernetes Engine cluster while optimizing for cost. What should you do?
Answer: C
NEW QUESTION 89
You have a Google Cloud Platform account with access to both production and development projects. You need to create an automated process to list all compute
instances in development and production projects on a daily basis. What should you do?
H. Go to GCP Console and export this information to Cloud SQL on a daily basis.
Answer: A
Explanation:
You can create two configurations, one for the development project and another for the production project, by running the "gcloud config
configurations create" command (https://cloud.google.com/sdk/gcloud/reference/config/configurations/create). In your custom script, you can load these
configurations one at a time and execute gcloud compute instances list to list the Compute Engine instances in the project that is active in the gcloud
configuration (https://cloud.google.com/sdk/gcloud/reference/compute/instances/list). Once you have this information, you can export it in a suitable format to a
suitable target, e.g. export as CSV or export to Cloud Storage/BigQuery/SQL, etc.
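An illustrative sketch of that flow (the configuration and project names are hypothetical):
$ gcloud config configurations create dev
$ gcloud config set project my-dev-project
$ gcloud compute instances list
Repeating the same steps with a second configuration pointing at the production project produces the combined daily listing.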
NEW QUESTION 91
You have a Compute Engine instance hosting an application used between 9 AM and 6 PM on weekdays. You want to back up this instance daily for disaster
recovery purposes. You want to keep the backups for 30 days. You want the Google-recommended solution with the least management overhead and the least
number of services. What should you do?
A. * 1. Update your instances' metadata to add the following value: snapshot-schedule: 0 1 * * *. * 2. Update your instances' metadata to add the following value: snapshot-retention: 30.
B. * 1. In the Cloud Console, go to the Compute Engine Disks page and select your instance's disk. * 2. In the Snapshot Schedule section, select Create Schedule and configure the following parameters: Schedule frequency: Daily; Start time: 1:00 AM - 2:00 AM; Autodelete snapshots after 30 days.
C. * 1. Create a Cloud Function that creates a snapshot of your instance's disk. * 2. Create a Cloud Function that deletes snapshots that are older than 30 days. * 3. Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.
D. * 1. Create a bash script in the instance that copies the content of the disk to Cloud Storage. * 2. Create a bash script in the instance that deletes data older than 30 days in the backup Cloud Storage bucket. * 3. Configure the instance's crontab to execute these scripts daily at 1:00 AM.
Answer: B
Explanation:
Creating scheduled snapshots for persistent disk This document describes how to create a snapshot schedule to regularly and automatically back up your zonal
and regional persistent disks. Use snapshot schedules as a best practice to back up your Compute Engine workloads. After creating a snapshot schedule, you can
apply it to one or more persistent disks. https://cloud.google.com/compute/docs/disks/scheduled-snapshots
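A hedged sketch of the equivalent gcloud setup (the policy name, disk name, region, and zone are placeholders):
$ gcloud compute resource-policies create snapshot-schedule daily-backup --region=us-central1 --start-time=01:00 --daily-schedule --max-retention-days=30
$ gcloud compute disks add-resource-policies my-disk --zone=us-central1-a --resource-policies=daily-backup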
NEW QUESTION 93
You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to
have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse
proxy?
Answer: A
Explanation:
What is Google Cloud Memorystore?
Overview. Cloud Memorystore for Redis is a fully managed Redis service for Google Cloud Platform. Applications running on Google Cloud Platform can achieve
extreme performance by leveraging the highly scalable, highly available, and secure Redis service without the burden of managing complex Redis deployments.
NEW QUESTION 94
You have developed a containerized web application that will serve internal colleagues during business hours. You want to ensure that no costs are incurred
outside of the hours the application is used. You have just created a new Google Cloud project and want to deploy the application. What should you do?
A. Deploy the container on Cloud Run for Anthos, and set the minimum number of instances to zero.
B. Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to zero.
C. Deploy the container on App Engine flexible environment with autoscaling, and set the value min_instances to zero in the app.yaml.
D. Deploy the container on App Engine flexible environment with manual scaling, and set the value instances to zero in the app.yaml.
Answer: B
Explanation:
https://cloud.google.com/kuberun/docs/architecture-overview#components_in_the_default_installation
NEW QUESTION 99
You have an application running in Google Kubernetes Engine (GKE) with cluster autoscaling enabled. The application exposes a TCP endpoint. There are
several replicas of this application. You have a Compute Engine instance in the same region, but in another Virtual Private Cloud (VPC), called gce-network, that
has no overlapping IP ranges with the first VPC. This instance needs to connect to the application on GKE. You want to minimize effort. What should you do?
A. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Set the service's externalTrafficPolicy to Cluster.3. Configure
the Compute Engine instance to use the address of the load balancer that has been created.
B. 1. In GKE, create a Service of type NodePort that uses the application's Pods as backend.2. Create a Compute Engine instance called proxy with 2 network
interfaces, one in each VPC.3. Use iptables on this instance to forward traffic from gce-network to the GKE nodes.4. Configure the Compute Engine instance to
use the address of proxy in gce-network as endpoint.
C. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend. 2. Add an annotation to this service: cloud.google.com/load-balancer-type: Internal. 3. Peer the two VPCs together. 4. Configure the Compute Engine instance to use the address of the load balancer that has been created.
D. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend. 2. Add a Cloud Armor Security Policy to the load balancer that whitelists the internal IPs of the MIG's instances. 3. Configure the Compute Engine instance to use the address of the load balancer that has been created.
Answer: C
Explanation:
This solution peers the two VPCs (the statement makes sure that this option is feasible, since it clearly specifies that there is no overlap between the
IP ranges of both VPCs), deploys the LoadBalancer as internal with the annotation, and configures the endpoint so that the Compute Engine instance can access the
application internally, that is, without the need to have a public IP at any time and therefore without the need to go outside the Google network. The traffic,
therefore, never crosses the public internet.
https://medium.com/pablo-perez/k8s-externaltrafficpolicy-local-or-cluster-40b259a19404
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
clients in a VPC network connected to the LoadBalancer network using VPC Network Peering can also access the Service
https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters
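A minimal sketch of such an internal LoadBalancer Service (names and ports are placeholders; on newer GKE versions the annotation key is networking.gke.io/load-balancer-type):
apiVersion: v1
kind: Service
metadata:
  name: myapp-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 8080
    targetPort: 8080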
A. Coldline Storage
B. Nearline Storage
C. Regional Storage
D. Multi-Regional Storage
Answer: A
Explanation:
Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is ideal for data you plan to read or
modify at most once a quarter. Since we have a requirement to access data once a quarter and want to go with the most cost-efficient option, we should select
Coldline Storage.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
Answer: C
Explanation:
This command correctly lists pods that have the label app=prod. When creating the deployment, we used the label app=prod, so listing pods that have this label
retrieves the pods belonging to the nginx deployment. You can list pods by using the Kubernetes CLI command kubectl get pods.
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/#list-containe
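For example, assuming the label mentioned in the explanation above, the listing would look like:
$ kubectl get pods -l app=prod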
Answer: D
Explanation:
Install the workload on a Compute Engine VM, and start and stop the instance as needed. As per the question, the process runs for 30 hours, can be
performed offline, and should not be interrupted; if it is interrupted, the batch process needs to be restarted. Preemptible VMs are cheaper, but they are not
available beyond 24 hours, and if the process gets interrupted the preemptible VM will restart.
Answer: A
Explanation:
This command correctly deletes the deployment. Pods are managed by kubernetes workloads (deployments). When a pod is deleted, the deployment detects the
pod is unavailable and brings up another pod to maintain the replica count. The only way to delete the workload is by deleting the deployment itself using the
kubectl delete deployment command.
$ kubectl delete deployment nginx
deployment.apps nginx deleted
Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources
A. Ask the other team to grant your default App Engine Service account the role of BigQuery Job User.
B. Ask the other team to grant your default App Engine Service account the role of BigQuery Data Viewer.
C. In Cloud IAM of your project, ensure that the default App Engine service account has the role of BigQuery Data Viewer.
D. In Cloud IAM of your project, grant a newly created service account from the other team the role of BigQuery Job User in your project.
Answer: B
Explanation:
The resource that you need to get access is in the other project. roles/bigquery.dataViewer BigQuery Data Viewer
When applied to a table or view, this role provides permissions to: Read data and metadata from the table or view.
This role cannot be applied to individual models or routines. When applied to a dataset, this role provides permissions to:
Read the dataset's metadata and list tables in the dataset. Read data and metadata from the dataset's tables.
When applied at the project or organization level, this role can also enumerate all datasets in the project. Additional roles, however, are necessary to allow the
running of jobs.
A. Grant all members of the DevOps team the role of Project Editor on the organization level.
B. Grant all members of the DevOps team the role of Project Editor on the production project.
C. Create a custom role that combines the required permissions. Grant the DevOps team the custom role on the production project.
D. Create a custom role that combines the required permissions. Grant the DevOps team the custom role on the organization level.
Answer: C
Explanation:
Understanding IAM custom roles
Key Point: Custom roles enable you to enforce the principle of least privilege, ensuring that the user and service accounts in your organization have only the
permissions essential to performing their intended functions.
Basic concepts
Custom roles are user-defined, and allow you to bundle one or more supported permissions to meet your specific needs. Custom roles are not maintained by
Google; when new permissions, features, or services are added to Google Cloud, your custom roles will not be updated automatically.
When you create a custom role, you must choose an organization or project to create it in. You can then grant the custom role on the organization or project, as
well as any resources within that organization or project.
https://cloud.google.com/iam/docs/understanding-custom-roles#basic_concepts
A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances.2. Update your instances’ metadata to add
the following value: logs-destination:bq://platform-logs.
B. 1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink.2.Create a Cloud Function that is triggered by messages in the
logs topic.3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.
C. 1. In Stackdriver Logging, create a filter to view only Compute Engine logs.2. Click Create Export.3.Choose BigQuery as Sink Service, and the platform-logs
dataset as Sink Destination.
D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset.2. Configure this Cloud Function to create a BigQuery Job that
executes this query:INSERT INTOdataset.platform-logs (timestamp, log)SELECT timestamp, log FROM compute.logsWHERE timestamp>
DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)3. Use Cloud Scheduler to trigger this Cloud Function once a day.
Answer: C
Explanation:
* 1. In Stackdriver Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs
dataset as Sink Destination.
A. Run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to gcloud CLI.
B. Create a Google Cloud service account, and download the service account key. Place the key file in a folder on your machine where gcloud CLI can find it.
C. Download your Cloud Identity user account key. Place the key file in a folder on your machine where gcloud CLI can find it.
D. Run gcloud config set compute/zone $my_zone to set the default zone for gcloud CLI.
E. Run gcloud config set project $my_project to set the default project for gcloud CLI.
Answer: AE
Explanation:
Before you run the gcloud compute instances list command, you need to do two things: authenticate with your user account and set the default project for gcloud
CLI.
To authenticate with your user account, you need to run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to
gcloud CLI. This will authorize the gcloud CLI to access Google Cloud resources on your behalf1.
To set the default project for gcloud CLI, you need to run gcloud config set project $my_project, where
$my_project is the ID of the project that contains the instances you want to list. This will save you from having to specify the project flag for every gcloud
command2.
Option B is not recommended, because using a service account key increases the risk of credential leakage and misuse. It is also not necessary, because you can
use your user account to authenticate to the gcloud CLI3. Option C is not correct, because there is no such thing as a Cloud Identity user account key. Cloud
Identity is a service that provides identity and access management for Google Cloud users and groups4. Option D is not required, because the gcloud compute
instances list command does not depend on the default zone. You can
list instances from all zones or filter by a specific zone using the --filter flag.
References:
1: https://cloud.google.com/sdk/docs/authorizing
2: https://cloud.google.com/sdk/gcloud/reference/config/set
3: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
4: https://cloud.google.com/identity/docs/overview
: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
Answer: D
Explanation:
"Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to
the same project or the same organization."
https://cloud.google.com/vpc/docs/vpc-peering while
"Cloud Interconnect provides low latency, high availability connections that enable you to reliably transfer data between your on-premises and Google Cloud Virtual
Private Cloud (VPC) networks."
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview and
"HA VPN is a high-availability (HA) Cloud VPN solution that lets you securely connect your on-premises network to your VPC network through an IPsec VPN
connection in a single region."
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Answer: B
Answer: B
Explanation:
https://cloud.google.com/spanner/docs/iam#spanner.databaseUser
Using the gcloud tool, execute the gcloud iam roles describe roles/spanner.databaseUser command on Cloud Shell. Attach the users to a newly created Google
group and add the group to the role.
A. Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports.
B. Set up a high-priority (1000) rule that pairs both ingress and egress ports.
C. Set up a high-priority (1000) rule that blocks all egress and a low-priority (65534) rule that allows only the appropriate ports.
D. Set up a high-priority (1000) rule to allow the appropriate ports.
Answer: A
Explanation:
Implied rules: Every VPC network has two implied firewall rules. These rules exist, but are not shown in the Cloud Console.
Implied allow egress rule: An egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to any destination, except for traffic blocked by Google Cloud. A higher priority firewall rule may restrict outbound access. Internet access is allowed if no other firewall rules deny outbound traffic and if the instance has an external IP address or uses a Cloud NAT instance. For more information, see Internet access requirements.
Implied deny ingress rule: An ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by blocking incoming connections to them. A higher priority rule might allow incoming access. The default network includes some additional rules that override this one, allowing certain types of incoming connections.
https://cloud.google.com/vpc/docs/firewalls#default_firewall_rules
Answer: D
Explanation:
Logged information: Within Cloud Audit Logs, there are two types of logs:
Admin Activity logs: Entries for operations that modify the configuration or metadata of a project, bucket, or object.
Data Access logs: Entries for operations that modify objects or read a project, bucket, or object. There are several sub-types of data access logs:
ADMIN_READ: Entries for operations that read the configuration or metadata of a project, bucket, or object.
DATA_READ: Entries for operations that read an object.
DATA_WRITE: Entries for operations that create or modify an object.
https://cloud.google.com/storage/docs/audit-logs#types
A. Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.
B. Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.
C. Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.
D. Grant the basic role roles/editor to the DevOps group.
Answer: A
Answer: C
A. Create a cron job that runs on a scheduled basis to review Stackdriver Monitoring metrics, and then resize the Spanner instance accordingly.
B. Create a Stackdriver alerting policy to send an alert to on-call SRE emails when Cloud Spanner CPU exceeds the threshold. SREs would scale resources up or down accordingly.
C. Create a Stackdriver alerting policy to send an alert to Google Cloud Support email when Cloud Spanner CPU exceeds your threshold. Google Support would scale resources up or down accordingly.
D. Create a Stackdriver alerting policy to send an alert to a webhook when Cloud Spanner CPU is over or under your threshold. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.
Answer: D
Explanation:
CPU utilization is a recommended proxy for traffic when it comes to Cloud Spanner. See "Alerts for high CPU utilization": the documentation
specifies recommendations for maximum CPU usage for both single-region and multi-region instances. These numbers are to ensure that your instance
has enough compute capacity to continue to serve your traffic in the event of the loss of an entire zone (for single-region instances) or an entire region (for multi-
region instances). - https://cloud.google.com/spanner/docs/cpu-utilization
A. Assign the pods of the image rendering microservice a higher pod priority than the older microservices.
B. Create a node pool with compute-optimized machine type nodes for the image rendering microservice. Use the node pool with general-purpose machine type nodes for the other microservices.
C. Use the node pool with general-purpose machine type nodes for the image rendering microservice. Create a node pool with compute-optimized machine type nodes for the other microservices.
D. Configure the required amount of CPU and memory in the resource requests specification of the image rendering microservice deployment. Keep the resource requests for the other microservices at the default.
Answer: B
Answer: A
Explanation:
Ref: https://cloud.google.com/compute/docs/machine-types#n1_machine_type
A. Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.
B. Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.
C. Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.
D. Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.
Answer: B
A. Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
B. Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
C. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
D. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
Answer: B
A. Deployment Manager
B. Cloud Composer
C. Managed Instance Group
D. Unmanaged Instance Group
Answer: A
Explanation:
https://cloud.google.com/deployment-manager/docs/configuration/create-basic-configuration
Answer: A
Answer: D
Explanation:
M1 machine series: medium in-memory databases such as SAP HANA; tasks that require intensive use of memory with higher memory-to-vCPU ratios than the
general-purpose high-memory machine types.
In-memory databases and in-memory analytics, business warehousing (BW) workloads, genomics analysis, SQL analysis services; Microsoft SQL Server and
similar databases.
https://cloud.google.com/compute/docs/machine-types
https://cloud.google.com/compute/docs/machine-types#:~:text=databases%20such%20as-,SAP%20HANA,-In%
https://www.sap.com/india/products/hana.html#:~:text=is%20SAP%20HANA-,in%2Dmemory,-database%3F
Answer: A
Explanation:
https://cloud.google.com/architecture/identity/migrating-consumer-accounts
A. The pending Pod's resource requests are too large to fit on a single node of the cluster.
B. Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.
C. The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.
D. The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods' status. It is currently being rescheduled on a new node.
Answer: B
Explanation:
The combination of "The pending Pod's resource requests are too large to fit on a single node of the cluster" and "Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod" is the right answer.
When you have a Deployment with some Pods running and other Pods in the Pending state, more often than not it is a problem with resources on the nodes. In a typical case, the scheduler reports insufficient CPU on the Kubernetes nodes, so you have to either enable autoscaling or manually scale up the nodes.
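To confirm this, you would typically inspect the pending Pod and the node-pool capacity. A minimal diagnostic sketch (Pod, cluster, and node-pool names are placeholders):
# Show which Pods are Pending and why the scheduler cannot place them
kubectl get pods
kubectl describe pod PENDING_POD_NAME   # look for "Insufficient cpu" or "Insufficient memory" events
# If the nodes are out of capacity, enable cluster autoscaling on the node pool
gcloud container clusters update my-cluster --zone=us-central1-a \
    --node-pool=default-pool --enable-autoscaling --min-nodes=1 --max-nodes=5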
Answer: A
Explanation:
App Engine is a regional service, which means the infrastructure that runs your apps is located in a specific region and is managed by Google to be redundantly available across all the zones within that region. Once an App Engine deployment is created in a region, it can't be changed. The only way is to create a new project, create an App Engine instance in europe-west3, send all user traffic to this instance, and delete the App Engine instance in us-central.
Ref: https://cloud.google.com/appengine/docs/locations
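A rough sketch of that workflow with gcloud (project IDs are placeholders):
# App Engine's region is fixed per project, so create a new project and a new app in europe-west3
gcloud projects create my-app-eu --name="My App EU"
gcloud app create --project=my-app-eu --region=europe-west3
# Deploy into the new project, then redirect user traffic and retire the old us-central app
gcloud app deploy --project=my-app-eu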
Answer: C
Answer: A
A. Use Binary Authorization and whitelist only the container images used by your customers’ Pods.
B. Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.
C. Create a GKE node pool with a sandbox type configured to gvisor.
D. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.
E. Use the cos_containerd image for your GKE nodes.
F. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.
Answer: C
Answer: D
Explanation:
https://cloud.google.com/monitoring/settings/multiple-projects https://cloud.google.com/monitoring/workspaces
A. Set up a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”
B. Set up a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”
C. Export the billing data to BigQuery.
D. Create a Cloud Function that uses BigQuery to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars.
E. Schedule the Cloud Function using Cloud Scheduler to run hourly.
F. Use the Stackdriver Logging Agent to export the Apache web server logs to Stackdriver Logging. Create a Cloud Function that uses BigQuery to parse the HTTP response log data in Stackdriver for the current month and sends an email if the size of all HTTP responses, multiplied by current GCP egress prices, totals over 100 dollars.
G. Schedule the Cloud Function using Cloud Scheduler to run hourly.
Answer: C
Explanation:
https://blog.doit-intl.com/the-truth-behind-google-cloud-egress-traffic-6e8f57b5c2f8
A. Deploy the application on App Engine. For each update, create a new version of the same service. Configure traffic splitting to send a small percentage of traffic to the new version.
B. Deploy the application on App Engine. For each update, create a new service. Configure traffic splitting to send a small percentage of traffic to the new service.
C. Deploy the application on Kubernetes Engine. For a new release, update the deployment to use the new version.
D. Deploy the application on Kubernetes Engine. For a new release, create a new deployment for the new version. Update the service to use the new deployment.
Answer: D
Explanation:
Keywords: version, traffic splitting. App Engine supports traffic splitting between versions before fully releasing an update.
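As an illustrative sketch (service and version IDs are placeholders), deploying a new version without promoting it and splitting a small share of traffic to it could look like:
# Deploy the update as a new version of the same service without sending traffic to it yet
gcloud app deploy --version=v2 --no-promote
# Send 5% of traffic to the new version and keep 95% on the current one
gcloud app services set-traffic default --splits=v1=0.95,v2=0.05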
Answer: B
Answer: D
Answer: D
Explanation:
Google Cloud Coldline is a cold-tier storage class for archival data with an access frequency of less than once per year. Unlike other cold storage options, Coldline has no delays prior to data access.
The current description of the Coldline storage class:
Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard
Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable
trade-offs for lowered at-rest storage costs.
Coldline Storage is ideal for data you plan to read or modify at most once a quarter. Note, however, that for data being kept entirely for backup or archiving
purposes, Archive Storage is more cost-effective, as it offers the lowest storage costs.
https://cloud.google.com/storage/docs/storage-classes#coldline
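For example, changing a bucket's default storage class to Coldline and rewriting the existing objects could be done roughly like this (the bucket name is a placeholder):
# New objects will be written as Coldline by default
gsutil defstorageclass set coldline gs://my-archive-bucket
# Rewrite existing objects so they are stored in Coldline as well
gsutil -m rewrite -s coldline gs://my-archive-bucket/**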
Answer: BE
Explanation:
https://cloud.google.com/bigquery/docs/custom-quotas https://cloud.google.com/bigquery/pricing#flat_rate_pricing
Answer: D
Explanation:
Cloud Functions is Google Cloud's event-driven serverless compute platform that lets you run your code locally or in the cloud without having to provision servers. Cloud Functions scales up or down, so you pay only for the compute resources you use. Cloud Functions has excellent integration with Cloud Pub/Sub, lets you scale down to zero, and is recommended by Google as the ideal serverless platform to use when your workload depends on Cloud Pub/Sub. "If you're building a simple API (a small set of functions to be accessed via HTTP or Cloud Pub/Sub), we recommend using Cloud Functions." Ref: https://cloud.google.com/serverless-options
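A minimal sketch of wiring a function to a Pub/Sub topic (function name, topic, runtime, and entry point are placeholders):
# Deploy a function that is invoked for every message published to the topic
gcloud functions deploy process-message \
    --runtime=python310 \
    --trigger-topic=my-topic \
    --entry-point=handle_message \
    --region=us-central1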
Answer: A
Explanation:
When we create subnets in the same VPC with different CIDR ranges, they can communicate automatically within the VPC. Resources within a VPC network can communicate with one another by using internal (private) IPv4 addresses, subject to applicable network firewall rules.
Ref: https://cloud.google.com/vpc/docs/vpc
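To illustrate, two subnets with different CIDR ranges in the same custom-mode VPC (names, regions, and ranges are placeholders) can be created as follows; instances in them can reach each other over internal IPs once a firewall rule allows it:
gcloud compute networks create my-vpc --subnet-mode=custom
gcloud compute networks subnets create subnet-a \
    --network=my-vpc --region=us-central1 --range=10.0.1.0/24
gcloud compute networks subnets create subnet-b \
    --network=my-vpc --region=us-east1 --range=10.0.2.0/24
# Allow internal traffic between the two ranges
gcloud compute firewall-rules create allow-internal \
    --network=my-vpc --allow=tcp,udp,icmp --source-ranges=10.0.1.0/24,10.0.2.0/24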
Answer: B
Explanation:
Google Cloud provides Cloud Audit Logs, which is an integral part of Cloud Logging. It consists of two log streams for each project, Admin Activity and Data Access, which are generated by Google Cloud services to help you answer the question of "who did what, where, and when?" within your Google Cloud projects.
Ref: https://cloud.google.com/iam/docs/job-functions/auditing#scenario_external_auditors
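For example, an auditor-style query of the Admin Activity audit log stream with gcloud could look like this sketch (the project ID is a placeholder):
# List recent Admin Activity audit log entries for the project
gcloud logging read \
    'logName:"cloudaudit.googleapis.com%2Factivity"' \
    --project=my-project --limit=20 --format=json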
A. Use Cloud Logging filters to create log-based metrics for firewall and instance actions.
B. Monitor the changes and set up reasonable alerts.
C. Install Kibana on a Compute Engine instance.
D. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Pub/Sub.
E. Target the Pub/Sub topic to push messages to the Kibana instance.
F. Analyze the logs on Kibana in real time.
G. Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete events.
H. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud Storage.Use BigQuery to periodically analyze log events in
the storage bucket.
Answer: A
Explanation:
This answer is the simplest and most effective way to monitor unexpected firewall changes and instance creation in Google Cloud. Cloud Logging filters allow you
to specify the criteria for the log entries that you want to view or export. You can use the Logging query language to write filters based on the LogEntry fields, such
as resource.type, severity, or protoPayload.methodName. For example, you can filter for firewall-related events by using the following query:
resource.type="gce_subnetwork" logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Ffirewall"
You can filter for instance-related events by using the following query:
resource.type="gce_instance" logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Factivity_log"
You can create log-based metrics from these filters to measure the rate or count of log entries that match the filter. Log-based metrics can be used to create charts
and dashboards in Cloud Monitoring, or to set up alerts based on the metric values. For example, you can create an alert policy that triggers when the log-based
metric for firewall changes exceeds a certain threshold in a given time interval. This way, you can get notified of any unexpected or malicious changes to your
firewall rules.
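As a sketch, a log-based metric for firewall-rule changes could be created from such a filter with gcloud (the metric name and the exact filter shown here are illustrative; your filter may differ), and an alerting policy can then be attached to it in Cloud Monitoring:
# Count audit-log entries that insert, patch, or delete firewall rules
gcloud logging metrics create firewall-changes \
    --description="Firewall rule insert/patch/delete events" \
    --log-filter='resource.type="gce_firewall_rule" AND protoPayload.methodName:("insert" OR "patch" OR "delete")'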
Option B is incorrect because it is unnecessarily complex and costly. Installing Kibana on a compute instance requires additional configuration and maintenance.
Creating a log sink to forward Cloud Audit Logs to Pub/Sub also incurs additional charges for the Pub/Sub service. Analyzing the logs on Kibana in real time may
not be feasible or efficient, as it requires constant monitoring and manual intervention.
Option C is incorrect because Google Cloud firewall rules logging is a different feature from Cloud Audit Logs. Firewall rules logging allows you to audit, verify, and
analyze the effects of your firewall rules by creating connection records for each rule that applies to traffic. However, firewall rules logging does not log the insert,
update, or delete events for the firewall rules themselves. Those events are logged by Cloud Audit Logs, which record the administrative activities in your Google
Cloud project.
Option D is incorrect because it is not a real-time solution. Creating a log sink to forward Cloud Audit Logs to Cloud Storage requires additional storage space and
charges. Using BigQuery to periodically analyze log events in the storage bucket also incurs additional costs for the BigQuery service. Moreover, this option does
not provide any alerting mechanism to notify you of any unexpected or malicious changes to your firewall rules or instances.
A. Create a custom role, and add all the required compute.disks.list and compute.images.list permissions as includedPermissions.
B. Grant the custom role to the user at the project level.
C. Create a custom role based on the Compute Image User role. Add compute.disks.list to the includedPermissions field. Grant the custom role to the user at the project level.
D. Grant the Compute Storage Admin role at the project level.
E. Create a custom role based on the Compute Storage Admin role.
F. Exclude unnecessary permissions from the custom role.
G. Grant the custom role to the user at the project level.
Answer: B
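A hedged sketch of that approach with gcloud (role ID, project ID, and user are placeholders):
# Copy the predefined Compute Image User role into a project-level custom role
gcloud iam roles copy --source="roles/compute.imageUser" \
    --destination=customImageUser --dest-project=my-project
# Add the extra permission the user needs
gcloud iam roles update customImageUser --project=my-project \
    --add-permissions=compute.disks.list
# Grant the custom role to the user on the project
gcloud projects add-iam-policy-binding my-project \
    --member="user:jane@example.com" \
    --role="projects/my-project/roles/customImageUser"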
Answer: A
Explanation:
https://cloud.google.com/resource-manager/docs/project-migration#oauth_consent_screen https://cloud.google.com/resource-manager/docs/project-migration
Answer: D
A. Run gcloud projects list to get the project ID, and then run gcloud services list --project <project ID>.
B. Run gcloud init to set the current project to my-project, and then run gcloud services list --available.
C. Run gcloud info to view the account value, and then run gcloud services list --account <Account>.
D. Run gcloud projects describe <project ID> to verify the project value, and then run gcloud services list --available.
Answer: A
Explanation:
`gcloud services list --available` returns not only the enabled services in the project but also services that CAN be enabled.
https://cloud.google.com/sdk/gcloud/reference/services/list#--available
Run the following command to list the enabled APIs and services in your current project: gcloud services list
whereas, run the following command to list the APIs and services available to you in your current project: gcloud services list --available
https://cloud.google.com/sdk/gcloud/reference/services/list#--available
--available
Return the services available to the project to enable. This list will include any services that the project has already enabled.
To list the services the current project has enabled for consumption, run: gcloud services list --enabled
To list the services the current project can enable for consumption, run: gcloud services list --available
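Putting answer A together as a sketch (the project name filter is a placeholder):
# Look up the project ID, then list only the services enabled in that project
PROJECT_ID=$(gcloud projects list --filter="name:my-project" --format="value(projectId)")
gcloud services list --project "$PROJECT_ID"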
A. Open the Google Cloud console, and run a query to determine which resources this service account can access.
B. Open the Google Cloud console, and run a query of the audit logs to find permission denied errors for this service account.
C. Open the Google Cloud console, and check the organization policies.
D. Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from
the folder or organization levels.
Answer: D
Explanation:
This answer is the most effective way to validate whether the service account used by the CI/CD server has the appropriate roles in the specific project. By
checking the IAM roles assigned to the service account, you can see which permissions the service account has and which resources it can access. You can also
check if the service account inherits any roles from the folder or organization levels, which may affect its access to the project. You can use the Google Cloud
console, the gcloud command-line tool, or the IAM API to view the IAM roles of a service account.
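For example, the roles bound to a given service account in a project can be listed like this (project ID and service account email are placeholders):
# Show every role granted to the CI/CD service account at the project level
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:ci-cd@my-project.iam.gserviceaccount.com" \
    --format="table(bindings.role)"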
Answer: D
A. Use gsutil to rewrite the storage class for the bucket. Change the default storage class for the bucket.
B. Use gsutil to rewrite the storage class for the bucket. Set up Object Lifecycle Management on the bucket.
C. Create a new bucket and change the default storage class for the bucket. Set up Object Lifecycle Management on the bucket.
D. Create a new bucket and change the default storage class for the bucket. Import the files from the previous bucket into the new bucket.
Answer: B
A. Create a single budget for all projects and configure budget alerts on this budget.
B. Create a separate billing account per sandbox project and enable BigQuery billing export
C. Create a Data Studio dashboard to plot the spending per billing account.
D. Create a budget per project and configure budget alerts on all of these budgets.
E. Create a single billing account for all sandbox projects and enable BigQuery billing export
F. Create a Data Studio dashboard to plot the spending per project.
Answer: C
Explanation:
Set budgets and budget alerts. Avoid surprises on your bill by creating Cloud Billing budgets to monitor all of your Google Cloud charges in one place. A budget enables you to track your actual Google Cloud spend against your planned spend. After you've set a budget amount, you set budget alert threshold rules that are used to trigger email notifications. Budget alert emails help you stay informed about how your spend is tracking against your budget. To set the budget scope, select the Scope and then click Next. In the Projects field, select one or more projects that you want to apply the budget alert to. To apply the budget alert to all the projects in the Cloud Billing account, choose Select all.
https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
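As an illustrative sketch (the billing account ID and amount are placeholders; on older gcloud versions this command may live under the beta component), a budget with an alert at 100% of the amount can also be created from the command line:
# Create a 100 USD budget on the billing account with an email alert at 100% of budget
gcloud billing budgets create \
    --billing-account=0X0X0X-0X0X0X-0X0X0X \
    --display-name="Sandbox budget" \
    --budget-amount=100USD \
    --threshold-rule=percent=1.0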
A. Ask the auditor for their Google account, and give them the Viewer role on the project.
B. Ask the auditor for their Google account, and give them the Security Reviewer role on the project.
C. Create a temporary account for the auditor in Cloud Identity, and give that account the Viewer role on the project.
D. Create a temporary account for the auditor in Cloud Identity, and give that account the Security Reviewer role on the project.
Answer: C
Explanation:
Using primitive roles The following table lists the primitive roles that you can grant to access a project, the description of what the role does, and the permissions
bundled within that role. Avoid using primitive roles except when absolutely necessary. These roles are very powerful, and include a large number of permissions
across all Google Cloud services. For more details on when you should use primitive roles, see the Identity and Access Management FAQ. IAM predefined roles
are much more granular, and allow you to carefully manage the set of permissions that your users have access to. See Understanding Roles for a list of roles that
can be granted at the project level. Creating custom roles can further increase the control you have over user permissions.
https://cloud.google.com/resource-manager/docs/access-control-proj#using_primitive_roles
https://cloud.google.com/iam/docs/understanding-custom-roles
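Once the temporary Cloud Identity account exists, granting it the Viewer role on the project is a single binding; a sketch with placeholder project ID and auditor account:
gcloud projects add-iam-policy-binding my-project \
    --member="user:auditor@example.com" \
    --role="roles/viewer"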
A. Use App Engine and configure Cloud Scheduler to trigger the application using Pub/Sub.
Answer: B
Explanation:
Google Cloud Storage Triggers
Cloud Functions can respond to change notifications emerging from Google Cloud Storage. These notifications can be configured to trigger in response to various
events inside a bucket—object creation, deletion, archiving and metadata updates.
Note: Cloud Functions can only be triggered by Cloud Storage buckets in the same Google Cloud Platform project.
Event types
Cloud Storage events used by Cloud Functions are based on Cloud Pub/Sub Notifications for Google Cloud Storage and can be configured in a similar way.
Supported trigger type values are: google.storage.object.finalize, google.storage.object.delete, google.storage.object.archive, google.storage.object.metadataUpdate
Object Finalize
Trigger type value: google.storage.object.finalize
This event is sent when a new object is created (or an existing object is overwritten, and a new generation of that object is created) in the bucket.
https://cloud.google.com/functions/docs/calling/storage#event_types
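A minimal sketch of deploying a 1st gen function on the object finalize event (function name, bucket, runtime, and entry point are placeholders):
# Invoke the function whenever a new object is created (or overwritten) in the bucket
gcloud functions deploy process-upload \
    --runtime=python310 \
    --trigger-resource=my-upload-bucket \
    --trigger-event=google.storage.object.finalize \
    --entry-point=handle_upload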
A. Your zonal capacity is limited, causing all preemptible VMs to be shut down to recover capacity.
B. Try deploying your group to another zone.
C. You have hit your instance quota for the region.
D. Your managed instance group's VMs are toggled to only last 1 minute in preemptible settings.
E. Your managed instance group's health check is repeatedly failing, either due to a misconfigured health check or misconfigured firewall rules not allowing the health check to access the instances.
Answer: D
Explanation:
The instances (normal or preemptible) would be terminated and relaunched if the health check fails, either because the application is not configured properly or because the instance's firewall does not allow the health check to reach it.
GCP provides health check systems that connect to virtual machine (VM) instances on a configurable, periodic basis. Each connection attempt is called a probe.
GCP records the success or failure of each probe.
Health checks and load balancers work together. Based on a configurable number of sequential successful or failed probes, GCP computes an overall health state
for each VM in the load balancer. VMs that respond successfully for the configured number of times are considered healthy. VMs that fail to respond successfully
for a separate number of times are unhealthy.
GCP uses the overall health state of each VM to determine its eligibility for receiving new requests. In addition to being able to configure probe frequency and
health state thresholds, you can configure the criteria that define a successful probe.
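For reference, a basic HTTP health check and the firewall rule that lets Google's health-check probes reach the instances might look like this sketch (names, port, and request path are placeholders; the probe source ranges are the documented ones):
gcloud compute health-checks create http my-health-check \
    --port=80 --request-path=/healthz
# Allow Google Cloud health-check probe ranges to reach the backends
gcloud compute firewall-rules create allow-health-checks \
    --network=default --direction=INGRESS --action=ALLOW \
    --rules=tcp:80 --source-ranges=130.211.0.0/22,35.191.0.0/16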
A. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions.
B. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 - 90).
C. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions.
D. Set the SetStorageClass action to 90 days and the Delete action to 365 days.
E. Use gsutil rewrite and set the Delete action to 275 days (365 - 90).
F. Use gsutil rewrite and set the Delete action to 365 days.
Answer: A
Explanation:
https://cloud.google.com/storage/docs/lifecycle#setstorageclass-cost
The object's time spent set at the original storage class counts towards any minimum storage duration that applies for the new storage class.
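The general shape of such a lifecycle configuration, applied with gsutil, is sketched below. The bucket name, the target storage class, and the ages are illustrative placeholders; note that the Age condition is always measured from the object's creation time:
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
     "condition": {"age": 90}},
    {"action": {"type": "Delete"},
     "condition": {"age": 365}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-bucket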
Answer: A
A. * 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud account. * 2. Create a new Google Cloud Project for the Marketing department. * 3. Link the new project to a Marketing Billing Account.
B. * 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud account. * 2. Create a new Google Cloud Project for the Marketing department. * 3. Set the default key-value project labels to department: marketing for all services in this project.
C. * 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account. * 2. Create a new Google Cloud Project for the Marketing department. * 3. Link the new project to a Marketing Billing Account.
D. * 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account. * 2. Create a new Google Cloud Project for the Marketing department. * 3. Set the default key-value project labels to department: marketing for all services in this project.
Answer: A
Related Links
https://www.exambible.com/Associate-Cloud-Engineer-exam/
Contact us
We are proud of our high-quality customer service, which serves you around the clock 24/7.
Visit - https://www.exambible.com/