Cloud Digital Leader - 0
Google
Exam Questions Cloud-Digital-Leader
Google Cloud Digital Leader exam
NEW QUESTION 1
- (Topic 1)
Your organization wants an economical solution to store data such as files, graphical images, and videos and to access and share them securely.
Which Google Cloud product or service should your organization use?
A. Cloud Storage
B. Cloud SQL
C. Cloud Spanner
D. BigQuery
Answer: A
Explanation:
- Cloud Storage is Google Cloud's object storage service, the counterpart of AWS Simple Storage Service (S3); a Cloud Storage bucket is the equivalent of an S3 bucket. It is an economical choice for storing and securely sharing unstructured data such as files, images, and videos.
NEW QUESTION 2
- (Topic 1)
You are leading projects in an IT services company. Your customer's project requires analyzing images. They have many tens of thousands of raw images that they
have made available to you. Your small technology team needs to build a machine learning model. The images are unlabeled. You don't have the people or the
capacity to label the images. What is your approach?
A. Look for open-source labeled images that closely resemble the given images.
B. Request data labeling service from Google.
C. Tell the customer it is their duty to label the images.
D. Hire temporary workers who can quickly label the images.
Answer: B
Explanation:
Google's Data Labeling Service lets you work with human labelers to generate highly accurate labels for a collection of data that you can use in machine learning
models.
References:
-> https://cloud.google.com/vertex-ai/docs/datasets/data-labeling-job
-> https://cloud.google.com/ai-platform/data-labeling/docs
NEW QUESTION 3
- (Topic 1)
Your organization needs to allow a production job to have access to a BigQuery dataset. The production job is running on a Compute Engine instance that is part
of an instance group.
What should be included in the IAM Policy on the BigQuery dataset?
Answer: C
Explanation:
When an identity calls a Google Cloud API, BigQuery requires that the identity has the appropriate permissions to use the resource. You can grant permissions by
granting roles to a user, a group, or a service account.
Reference link- https://cloud.google.com/bigquery/docs/access-control
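The shape of such an IAM policy can be sketched as plain data; the project, dataset, and service account names below are hypothetical placeholders, and this is a toy illustration of the binding structure rather than real client-library code:

```python
# Sketch of an IAM policy binding that grants a Compute Engine
# service account read access to a BigQuery dataset. The member
# and project names are hypothetical.
policy = {
    "bindings": [
        {
            # Predefined BigQuery role granting read access to data
            "role": "roles/bigquery.dataViewer",
            "members": [
                # The service account the instance group's VMs run as
                "serviceAccount:prod-job@my-project.iam.gserviceaccount.com"
            ],
        }
    ]
}

def members_with_role(policy, role):
    """Return the members bound to a given role in the policy."""
    for binding in policy["bindings"]:
        if binding["role"] == role:
            return binding["members"]
    return []

print(members_with_role(policy, "roles/bigquery.dataViewer"))
```

Because the production job runs as the instance's service account, granting the role to that service account (rather than to individual users) is what lets the job call the BigQuery API.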
NEW QUESTION 4
- (Topic 1)
Your organization needs to ensure that the Google Cloud resources of each of your departments are segregated from one another. Each department has several
environments of its own: development, testing, and production. Which strategy should your organization choose?
A. Create a project per department, and create a folder per environment in each project.
B. Create a folder per department, and create a project per environment in each folder.
C. Create a Cloud Identity domain per department, and create a project per environment in each domain.
D. Create a Cloud Identity domain per environment, and create a project per department in each domain.
Answer: B
Explanation:
NEW QUESTION 5
- (Topic 1)
Your organization is releasing its first publicly available application in Google Cloud. The application is critical to your business and customers and requires a
2-hour SLA.
How should your organization set up support to minimize costs?
Answer: B
Explanation:
Reference: https://cloud.google.com/support
Enhanced Support provides faster response times (including a one-hour response-time objective for critical-impact cases, 24/7) at a lower cost than Premium Support, so it can meet a 2-hour SLA while minimizing costs.
NEW QUESTION 6
- (Topic 1)
Your organization needs to minimize how much it pays for data traffic from the Google network to the internet. What should your organization do?
Answer: A
Explanation:
Choose the Standard network service tier. Premium tier is the default for all egress traffic and offers the highest performance, but when cost is a consideration,
Standard tier is the more economical choice.
https://cloud.google.com/blog/products/networking/networking-cost-optimization-best-practices
NEW QUESTION 7
- (Topic 1)
Your organization is building an application running in Google Cloud. Currently, software builds, tests, and regular deployments are done manually, but you want to
reduce work for the team. Your organization wants to use Google Cloud managed solutions to automate your build, testing, and deployment process.
Which Google Cloud product or feature should your organization use?
A. Cloud Scheduler
B. Cloud Code
C. Cloud Build
D. Cloud Deployment Manager
Answer: C
Explanation:
Deploy your application to App Engine using the gcloud app deploy command. This command automatically builds a container image by using the Cloud Build
service and then deploys that image to the App Engine flexible environment.
Reference: https://cloud.google.com/appengine/docs/flexible/nodejs/testing-and-deploying-your-app
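Beyond `gcloud app deploy`, Cloud Build pipelines are typically defined in a `cloudbuild.yaml` file checked into the repository; a minimal sketch (the image name is a placeholder):

```yaml
# Minimal Cloud Build config: build a container image from the
# repo's Dockerfile, then push it to the project's registry.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
images:
- 'gcr.io/$PROJECT_ID/my-app'
```

Triggers can run this config automatically on each commit, which is how builds, tests, and deployments stop being manual work.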
NEW QUESTION 8
- (Topic 1)
An IoT platform is providing services to home security systems. They have more than a million customers, each with many home devices. Burglaries and child safety
issues are major concerns for the clients' customers. Therefore, the platform has to respond very quickly, in near real time. What could be a typical data pipeline used to
support this platform on Google Cloud?
Answer: A
Explanation:
=> Cloud Pub/Sub- Cloud Pub/Sub is the best to be the end-point for ingesting large amounts of data. It will grow as required, can stream data to downstream
systems, and can also work with intermittently available backends.
=> Cloud Dataflow- supports streaming data and therefore is an appropriate option for processing the data that is ingested.
=> BigQuery- BigQuery also supports streaming data, and it's possible to do real-time analytics on it.
=> DataStudio- DataStudio and Looker are for visualization. They don't have any in-built analysis.
=> Cloud Functions- Cloud Functions is a useful serverless endpoint. However, Pub/Sub is better in this case because it can also retain messages for a set period
if it was not possible to deliver them the first time.
=>Cloud Dataproc- Cloud Dataproc is used for Hadoop/Spark workloads and won't be a good fit here.
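The ingest → process → store flow described above can be sketched with a toy in-process analogue, purely to illustrate the pipeline shape (the device names and fields are made up; this is not real Pub/Sub or Dataflow client code):

```python
from collections import deque

# Toy stand-ins for the pipeline stages: Pub/Sub buffers events,
# a Dataflow-like step transforms them, and a BigQuery-like store
# accumulates rows for analytics.
pubsub_topic = deque()   # ingestion buffer (Pub/Sub analogue)
warehouse = []           # analytics store (BigQuery analogue)

def publish(event):
    """Devices publish raw sensor events to the topic."""
    pubsub_topic.append(event)

def process_stream():
    """Dataflow analogue: enrich each event and write it downstream."""
    while pubsub_topic:
        event = pubsub_topic.popleft()
        # Flag motion detected while the owner is away
        event["alert"] = event["motion"] and not event["owner_home"]
        warehouse.append(event)

publish({"device": "door-1", "motion": True, "owner_home": False})
publish({"device": "cam-2", "motion": True, "owner_home": True})
process_stream()
alerts = [e["device"] for e in warehouse if e["alert"]]
print(alerts)  # → ['door-1']
```

The real services add what the toy cannot: Pub/Sub retains undelivered messages and scales to millions of devices, and Dataflow runs this transform continuously over an unbounded stream.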
NEW QUESTION 9
- (Topic 1)
What is the difference between Standard and Coldline storage?
A. Coldline storage is for data for which a slow transfer rate is acceptable.
B. Standard and Coldline storage have different durability guarantees.
C. Standard and Coldline storage use different APIs.
D. Coldline storage is for infrequently accessed data.
Answer: D
Explanation:
Reference: https://www.msp360.com/resources/blog/google-cloud-nearline-storage-vs-coldline-vs-standard/
Google Cloud Coldline is a cold-tier storage class for infrequently accessed data (intended for access roughly less than once per quarter; Archive targets less than
once per year). Unlike many other cold storage options, Coldline has no delay prior to data access, which makes it a leading solution among competitors.
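The cost trade-off can be shown with back-of-the-envelope math; the per-GB prices below are illustrative assumptions for the sketch, not current list prices, and Coldline additionally bills retrieval:

```python
# Illustrative monthly prices per GB (assumed, not quoted figures):
STANDARD_PER_GB = 0.020
COLDLINE_PER_GB = 0.004
COLDLINE_RETRIEVAL_PER_GB = 0.02  # assumed retrieval surcharge

def monthly_cost(gb_stored, gb_read, storage_class):
    """Rough monthly cost for a bucket under the assumed prices."""
    if storage_class == "STANDARD":
        return gb_stored * STANDARD_PER_GB
    # Coldline: cheap at rest, but every read is billed extra
    return gb_stored * COLDLINE_PER_GB + gb_read * COLDLINE_RETRIEVAL_PER_GB

# 1 TB that is rarely read: Coldline is far cheaper at rest
print(monthly_cost(1000, 0, "COLDLINE") < monthly_cost(1000, 0, "STANDARD"))  # → True
```

The crossover point is access frequency: read the data often enough and the retrieval surcharge erases Coldline's at-rest savings, which is why the class is defined by how infrequently data is accessed.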
NEW QUESTION 10
- (Topic 1)
Your organization is running all its workloads in a private cloud on top of a hypervisor. Your
organization has decided it wants to move to Google Cloud as quickly as possible. Your organization wants minimal changes to the current environment, while
using the maximum amount of managed services Google offers.
What should your organization do?
Answer: B
Explanation:
Migrate for Compute Engine enables you to lift and shift workloads at scale to Google Cloud Compute Engine with minimal changes and risk.
Reference: https://dataintegration.info/simplify-vm-migrations-with-migrate-for-compute-engine-as-a-service
NEW QUESTION 10
- (Topic 1)
Your company has recently acquired three growing startups in three different countries. You want to reduce overhead in infrastructure management and keep your
costs low without sacrificing security and quality of service to your customers.
How should you meet these requirements?
A. Host all your subsidiaries' services on-premises together with your existing services.
B. Host all your subsidiaries' services together with your existing services on the public cloud.
C. Build a homogenous infrastructure at each subsidiary, and invest in training their engineers.
D. Build a homogenous infrastructure at each subsidiary, and invest in hiring more engineers.
Answer: B
Explanation:
Host all your subsidiaries' services together with your existing services on the public cloud.
NEW QUESTION 14
- (Topic 1)
Your organization is developing an application that will capture a large amount of data from millions of different sensor devices spread all around the world. Your
organization needs a database that is suitable for worldwide, high-speed data storage of a large amount of unstructured data.
Which Google Cloud product should your organization choose?
A. Firestore
B. Cloud Data Fusion
C. Cloud SQL
D. Cloud Bigtable
Answer: D
Explanation:
Reference: https://cloud.google.com/bigtable
Cloud Bigtable is a sparsely populated table that can scale to billions of rows and thousands of columns, enabling you to store terabytes or even petabytes of data.
A single value in each row is indexed; this value is known as the row key. Bigtable is ideal for storing very large amounts of single-keyed data with very low
latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations.
Bigtable is exposed to applications through multiple client libraries, including a supported extension to the Apache HBase library for Java. As a result, it integrates
with the existing Apache ecosystem of open-source Big Data software.
Bigtable's powerful back-end servers offer several key advantages over a self-managed HBase installation:
Incredible scalability. Bigtable scales in direct proportion to the number of machines in
your cluster. A self-managed HBase installation has a design bottleneck that limits the performance after a certain threshold is reached. Bigtable does not have
this bottleneck, so you can scale your cluster up to handle more reads and writes.
Simple administration. Bigtable handles upgrades and restarts transparently, and it automatically maintains high data durability. To replicate your data, simply add
a second cluster to your instance, and replication starts automatically. No more managing replicas or regions; just design your table schemas, and Bigtable will
handle the rest for you.
Cluster resizing without downtime. You can increase the size of a Bigtable cluster for a few hours to handle a large load, then reduce the cluster's size again—all
without any downtime. After you change a cluster's size, it typically takes just a few minutes under load for Bigtable to balance performance across all of the nodes
in your cluster.
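Bigtable's single indexed row key and lexicographic sort order drive schema design: related rows are kept adjacent so reads become prefix scans. A toy sketch of the common IoT key pattern `<device-id>#<timestamp>` (the key layout here is an illustrative convention, not required by Bigtable):

```python
# Bigtable sorts rows lexicographically by a single row key, so
# queries are row-range (prefix) scans. Keying sensor readings as
# "<device-id>#<timestamp>" keeps one device's readings contiguous.
rows = {}

def write(device, ts, value):
    # Zero-pad the timestamp so lexicographic order matches time order
    rows[f"{device}#{ts:010d}"] = value

def scan_prefix(prefix):
    """Prefix scan, analogous to reading a row range in Bigtable."""
    return [(k, rows[k]) for k in sorted(rows) if k.startswith(prefix)]

write("sensor-42", 1700000000, 21.5)
write("sensor-42", 1700000060, 21.7)
write("sensor-07", 1700000030, 19.0)

readings = scan_prefix("sensor-42#")
print([v for _, v in readings])  # → [21.5, 21.7]
```

This is why Bigtable suits the worldwide sensor workload in the question: millions of single-keyed writes land at low latency, and each device's history is read back with one cheap range scan.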
NEW QUESTION 18
- (Topic 1)
Your organization wants to migrate its data management solutions to Google Cloud because it needs to dynamically scale up or down and to run transactional
SQL queries against historical data at scale. Which Google Cloud product or service should your organization use?
A. BigQuery
B. Cloud Bigtable
C. Pub/Sub
D. Cloud Spanner
Answer: D
Explanation:
Reference: https://cloud.google.com/spanner
Cloud Spanner is a fully-managed, mission-critical relational database service. It is designed to provide a scalable online transaction processing (OLTP) database
with high availability and strong consistency at global scale
NEW QUESTION 21
- (Topic 1)
Your organization is moving an application to Google Cloud. As part of that effort, it needs to migrate the application’s working database from another cloud
provider to Cloud SQL. The database runs on the MySQL engine. The migration must cause minimal disruption to users. Data must be secured while in transit.
Which should your organization use?
Answer: C
Explanation:
Reference: https://cloud.google.com/database-migration
NEW QUESTION 23
- (Topic 1)
Your large and frequently changing organization’s user information is stored in an on-premises LDAP database. The database includes user passwords and
group and organization membership.
How should your organization provision Google accounts and groups to access Google Cloud resources?
Answer: C
Explanation:
You can run a single instance of Google Cloud Directory Sync to synchronize user accounts and groups to Google Cloud.
Reference: https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-introduction
https://support.google.com/a/answer/106368?hl=en
NEW QUESTION 26
- (Topic 1)
As your organization increases its release velocity, the VM-based application upgrades take a long time to perform rolling updates due to OS boot times. You need
to make the application deployments faster.
What should your organization do?
A. Migrate your VMs to the cloud, and add more resources to them
B. Convert your applications into containers
C. Increase the resources of your VMs
D. Automate your upgrade rollouts
Answer: B
NEW QUESTION 27
- (Topic 1)
An organization wants to dynamically adjust its application to serve different user needs. What are the benefits of storing their data in the cloud for this use case?
Answer: C
Explanation:
By storing their application data in the cloud the organization will be able to gather and analyze user behavior data in real-time. This will enable them to
dynamically adjust their application for different user needs.
NEW QUESTION 29
- (Topic 1)
A company with its own private data center has called you in for help with their disaster recovery planning. News of multiple ransomware attacks has made them
very anxious. They want to make sure they are well prepared for such an eventuality. Which of these would be good recommendations?
A. It is better to have redundancy; so, set up another private data center nearby so that you can quickly go over in case of an emergency.
B. It is better to have redundancy; use one or many of the Google Cloud datacenters as a backup location.
C. The one data center is enough, as long as the data is encrypted; attackers won't be able to read the data.
D. The one data center is enough as long as you regularly back up data and save it in another place in the same DC.
Answer: B
Explanation:
A single data center is vulnerable, so any option that relies on just one is not a good choice. Reference: https://www.coresite.com/blog/data-center-redundancy
NEW QUESTION 32
- (Topic 1)
Your organization recently migrated its compute workloads to Google Cloud. You want these workloads in Google Cloud to privately and securely access your
large volume of on-premises data, and you also want to minimize latency.
What should your organization do?
A. Use Storage Transfer Service to securely make your data available to Google Cloud
B. Create a VPC between your on-premises data center and your Google resources
C. Peer your on-premises data center to Google’s Edge Network
D. Use Transfer Appliance to securely make your data available to Google Cloud
Answer: C
Explanation:
NEW QUESTION 35
- (Topic 1)
What are the key features of Google Cloud Identity?
Answer: D
Explanation:
Cloud Identity:
A unified identity, access, app, and endpoint management (IAM/EMM) platform.
- Give users easy access to apps with single sign-on.
- Multi-factor authentication protects user and company data.
- Endpoint management enforces policies for personal and corporate devices
KEY FEATURES :
Modernize IT and strengthen security
Multi-factor authentication (MFA)
Help protect your user accounts and company data with a wide variety of MFA verification methods such as push notifications, Google Authenticator, phishing-
resistant Titan Security Keys, and using your Android or iOS device as a security key.
Endpoint management
Improve your company’s device security posture on Android, iOS, and Windows devices using a unified console. Set up devices in minutes and keep your
company data more secure with endpoint management. Enforce security policies, wipe company data, deploy apps, view reports, and export details.
Single sign-on (SSO)
Enable employees to work from virtually anywhere, on any device, with single sign-on to thousands of pre-integrated apps, both in the cloud and on-premises.
Works with your favorite apps
Cloud Identity integrates with hundreds of cloud applications out of the box—and we’re constantly adding more to the list so you can count on us to be your single
identity platform today and in the future.
NEW QUESTION 38
- (Topic 1)
Your organization runs many workloads in different Google Cloud projects, each linked to the same billing account. Each project's workload costs can vary from
month to month, but the overall combined cost of all projects is relatively stable. Your organization needs to
optimize its cost.
What should your organization do?
Answer: C
Explanation:
Turn on committed use discount sharing, and create a commitment for the combined usage
Sharing your committed use discounts across all your projects reduces the overhead of managing discounts on a per-project basis, and maximizes your savings by
pooling all your discounts across your projects' resource usage. If you have multiple projects that share the same Cloud Billing account, you can enable committed
use discount sharing so all of your projects within that Cloud Billing account share all of your committed use discount contracts. Your sustained use discounts are
also pooled at the same time. That is, sustained use discounts are calculated using the total resources across these projects, rather than just the resources within
a single project.
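Why pooling commitments helps can be shown with simple arithmetic; the discount mechanics are simplified and the usage numbers are illustrative assumptions:

```python
# Three projects whose individual usage fluctuates month to month,
# while the combined total stays stable (the scenario in the question).
projects = {"proj-a": 40, "proj-b": 25, "proj-c": 35}  # vCPUs this month

# Without sharing, each project's commitment must be sized to that
# project's low point (assume 20 vCPUs each) to avoid paying for
# committed capacity it doesn't use; usage above that is undiscounted.
per_project_committed = 3 * 20

# With committed use discount sharing, one commitment can cover the
# stable combined total, so more usage earns the discount.
shared_committed = sum(projects.values())

print(per_project_committed, shared_committed)  # 60 vs 100 vCPUs discounted
```

The stable aggregate is the key: individually volatile projects force conservative per-project commitments, while the pooled commitment can safely match the steady combined usage.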
NEW QUESTION 41
- (Topic 1)
A retail store has discovered a cost-effective solution for creating self-service kiosks. They can use existing check-out hardware and purchase a virtual customer
service application. Why do they also need an API?
Answer: B
Explanation:
APIs can create new business value by connecting legacy systems (the checkout hardware) with new software (the virtual customer service application).
NEW QUESTION 42
- (Topic 1)
Your organization wants to be sure that its expenditures on cloud services are in line with the budget. Which two Google Cloud cost management features help
your organization gain greater visibility into its cloud resource costs? (Choose two.)
A. Billing dashboards
B. Resource labels
C. Sustained use discounts
D. Financial governance policies
E. Payments profile
Answer: AB
Explanation:
Billing dashboards give visibility into spend across projects and services, and resources can be annotated with labels. Information about labels is forwarded to the billing system, so you can break down your billed charges by label.
Reference link- https://cloud.google.com/cost-management
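Because billing export rows carry each resource's labels, costs can be grouped per label; a sketch over made-up export rows (the label keys, values, and amounts are hypothetical):

```python
from collections import defaultdict

# Hypothetical rows from a billing export, each carrying the
# resource's labels (keys, values, and costs are made up).
billing_rows = [
    {"cost": 120.0, "labels": {"team": "data"}},
    {"cost": 80.0,  "labels": {"team": "web"}},
    {"cost": 40.0,  "labels": {"team": "data"}},
]

def cost_by_label(rows, key):
    """Total billed cost per value of the given label key."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["labels"].get(key, "unlabeled")] += row["cost"]
    return dict(totals)

print(cost_by_label(billing_rows, "team"))  # → {'data': 160.0, 'web': 80.0}
```

This per-label breakdown is exactly what the billing dashboards visualize, which is why labels plus dashboards together improve cost visibility.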
NEW QUESTION 45
- (Topic 1)
Your organization needs to process large amounts of data from an online application that operates continuously. You do not want to be required to provision
infrastructure or create server clusters. What should your organization choose?
Answer: D
Explanation:
You do not want to be required to provision infrastructure or create server clusters, and Dataflow provides unified stream and batch data processing that is
serverless, fast, and cost-effective.
Reference link- https://cloud.google.com/dataflow
NEW QUESTION 47
- (Topic 1)
Your company has been using a shared facility for data storage and will be migrating to Google Cloud. One of the internal applications uses Linux custom images
that need to be migrated.
Which Google Cloud product should you use to maintain the custom images?
Answer: B
Explanation:
NEW QUESTION 50
- (Topic 1)
Your customer currently has a hybrid cloud setup including their on-premises data center and AWS. They are consolidating all their services on Google Cloud as
part of a modernization plan and want to spend less IT effort in the future. There are about 10 MySQL and 25 PostgreSQL databases across the two DCs. What is
the best option for them?
A. Use the Data Catalog Service to manage the metadata of the databases
B. Use Cloud Dataflow service and setup Google's Cloud SQL as the sink and the others as the source, which will cause the data to flow in as expected.
C. Use the Database Migration Service
D. Use the Bare Metal Solution and copy the databases directly as they are on-premises and on AWS.
Answer: C
Explanation:
Database Migration is the right one to use: "Simplifying migrations to Cloud SQL. Now available for MySQL and PostgreSQL migrations, with SQL Server coming
soon." Since the customer also doesn't want to manage their own database installations in the future, Cloud SQL is the best option.
https://cloud.google.com/database-migration
NEW QUESTION 51
- (Topic 1)
Your company security team manages access control to production systems using an LDAP directory group.
How is this access control managed in the Google Cloud production project?
A. Assign the proper role to the Service Account in the project's IAM Policy
B. Grant each user the roles/iam.serviceAccountUser role on a service account that exists in the Google Group.
C. Assign the proper role to the Google Group in the project's IAM Policy.
D. Create the project in a folder with the same name as the LDAP directory group.
Answer: C
Explanation:
Reference: https://cloud.google.com/blog/products/identity-security/achieving-identity-and-access-governance-on-google-cloud
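Binding the role to the group means access follows group membership, so adding or removing an engineer in the LDAP-synced group changes their access without editing the IAM policy. A toy sketch of that indirection (the group name and role are hypothetical):

```python
# Role bound to a Google Group rather than to individual users:
# access tracks the group's (LDAP-synced) membership.
binding = {
    "role": "roles/compute.admin",           # hypothetical production role
    "members": ["group:prod-ops@example.com"],
}
group_membership = {"prod-ops@example.com": {"alice", "bob"}}

def has_access(user, binding, membership):
    """Does the user get the binding's role via any group member entry?"""
    for member in binding["members"]:
        kind, name = member.split(":", 1)
        if kind == "group" and user in membership.get(name, set()):
            return True
    return False

print(has_access("alice", binding, group_membership))  # → True
# Removing alice from the group revokes access; the policy is untouched.
group_membership["prod-ops@example.com"].remove("alice")
print(has_access("alice", binding, group_membership))  # → False
```

This is why option C is preferred: the security team keeps managing one directory group, and IAM simply mirrors it.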
NEW QUESTION 52
- (Topic 1)
An organization wants to move from a strategic cloud adoption maturity level to a transformational one. How should the organization change the way they scale?
A. None of these
B. Deploy changes when problems arise.
C. Deploy changes programmatically.
D. Review changes manually.
Answer: C
Explanation:
Because automation is a transformational approach which ensures changes are constant and low-risk.
NEW QUESTION 55
- (Topic 1)
A video game organization has invested in cloud technology to generate insights from user behaviors. They want to ensure recommendations of games are
aligned to players' interests. What may have prompted this business decision?
Answer: C
Explanation:
Because in the cloud era, users expect more personalization and customization.
NEW QUESTION 57
- (Topic 1)
Which Google Cloud service or feature lets you build machine learning models using Standard SQL and data in a data warehouse?
A. BigQuery ML
B. TensorFlow
C. AutoML Tables
D. Cloud Bigtable ML
Answer: A
Explanation:
BigQuery ML lets you create and execute machine learning models in BigQuery using standard SQL queries.
Reference: https://cloud.google.com/bigquery-ml/docs/introduction
NEW QUESTION 62
- (Topic 1)
Your team has developed a machine learning model for your customer. The test results indicate very strong predictive capability. The model is then deployed in
production. Evaluation of the predictions in production shows that they are off by a pronounced margin. What is the issue and how can you solve for it?
Answer: D
Explanation:
If our ML model does better on the training set than on the production set, then we're likely overfitting. Training with more data would be one solution.
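An extreme toy illustration of the train/production gap: a "model" that simply memorizes its training examples scores perfectly on that data but fails on anything unseen (the data here is fabricated for the sketch):

```python
# A "model" that memorizes training examples: extreme overfitting.
train = {(1, 2): 3, (2, 3): 5, (4, 4): 8}      # inputs -> target (x + y)
production = {(5, 1): 6, (3, 3): 6}            # unseen inputs

def memorizing_model(x, y, seen=train):
    # Perfect recall on seen inputs, a blind guess of 0 otherwise
    return seen.get((x, y), 0)

def accuracy(model, data):
    return sum(model(x, y) == t for (x, y), t in data.items()) / len(data)

print(accuracy(memorizing_model, train))       # → 1.0
print(accuracy(memorizing_model, production))  # → 0.0
```

Real overfitting is rarely this stark, but the mechanism is the same: the model captures the training set's particulars instead of the underlying pattern, and more (or more varied) training data narrows the gap.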
NEW QUESTION 64
- (Topic 3)
How would an organization benefit from using Looker?
Answer: D
Explanation:
Looker is a business intelligence software and big data analytics platform that helps you explore, analyze and share real-time business analytics easily.
NEW QUESTION 66
- (Topic 3)
An organization needs frequent access to only a subset of their data. They want to reduce costs by depositing the rest of their data across Nearline, Coldline, and
Archive repositories.
Which Google Cloud product should the organization use?
A. Filestore
B. Cloud Spanner
C. Data Catalog
D. Cloud Storage
Answer: D
Explanation:
Per Google docs, Cloud Storage offers four storage classes, one of which, Standard storage, is described as "storage for data
that is frequently accessed ("hot" data) and/or stored for only brief periods of time." https://cloud.google.com/storage
NEW QUESTION 69
- (Topic 3)
An organization needs to categorize a large group of photographs using pre-trained machine learning.
Which Google Cloud product or service should the organization use?
A. Vision API
B. BigQuery ML
C. AutoML Vision
D. Looker
Answer: A
Explanation:
https://cloud.google.com/vision
NEW QUESTION 70
- (Topic 3)
An organization is looking for a storage solution that will help them serve content to users worldwide. They need a solution that offers a high level of availability.
What feature of Cloud Storage would they benefit from?
A. Global metadata
B. Object versioning
C. Data encryption
D. Multi-regional storage
Answer: D
NEW QUESTION 71
- (Topic 3)
A large retail organization uses traditional technology for their ecommerce website. During peaks in traffic, resources are often underutilized or overprovisioned.
They have decided to migrate to cloud technology.
What aspect of cloud technology will benefit their ecommerce business?
A. Agile infrastructure means that they only pay for what they need, when they need it
B. Shared responsibility means that the cloud provider brings increased visibility during peaks in traffic
C. Operational expenditure means that their total cost of ownership is more predictable
D. Unlimited storage means that their website will never experience downtime
Answer: A
NEW QUESTION 73
- (Topic 3)
What is monitoring within the context of cloud operations?
A. Observing cloud expenditure in real time to ensure that budgets are not exceeded
B. Collecting predefined and custom metrics from applications and infrastructure
C. Tracking user activities to guarantee compliance with privacy regulations
D. Tracing user location to document regional access and utilization
Answer: B
NEW QUESTION 75
- (Topic 3)
An organization needs to search an application's source code to identify a potential issue. The application is distributed across multiple containers.
Which Google Cloud product should the organization use?
Answer: B
Explanation:
Cloud Trace is an application performance management tool and Google's solution for monitoring application performance. It is a distributed tracing system that
helps developers debug and optimize their code.
NEW QUESTION 80
- (Topic 3)
An organization is making a strategic change to customer support in response to feedback. They plan to extend their helpline availability hours.
Why is the organization making this change?
Answer: C
NEW QUESTION 82
- (Topic 3)
An organization has servers running mission-critical workloads on-premises around the world. They want to modernize their infrastructure with a multi-cloud
architecture.
What benefit could the organization experience?
Answer: D
NEW QUESTION 85
- (Topic 3)
How can a streaming service meet global compliance requirements using the cloud?
Answer: A
NEW QUESTION 90
- (Topic 3)
An organization wants to build autoscaling web applications without having to manage application infrastructure.
Which Google Cloud product should they use?
A. App Engine
B. AutoML
C. Anthos
D. Apigee
Answer: A
Explanation:
Per Google docs, App Engine, allows for "freeing up your developers with zero server management and zero configuration deployments".
https://cloud.google.com/appengine
NEW QUESTION 92
- (Topic 3)
An organization is altering their gaming product so that it is compatible with cloud technology.
What can they expect when moving from traditional technology to cloud technology?
Answer: B
NEW QUESTION 97
- (Topic 3)
An organization's developers are growing increasingly frustrated by the limitations of their on-premises infrastructure.
How would they benefit from leveraging cloud technology?
Answer: C
Explanation:
Google Cloud has a vast array of products and tools that you can use to innovate. Additionally, there are products in Google Cloud that scale automatically based on
usage (e.g., App Engine, Cloud Run, etc.).
Answer: C
Explanation:
https://cloud.google.com/docs/security/encryption/default-encryption#:~:text=Google%20uses%20the%20Advanced%20Encryption,to%202015%20that%20use%20AES128
Answer: A
A. Compute Engine
B. Bare Metal Solution
C. Cloud Run
D. Cloud Functions
Answer: B
Explanation:
“This solution provides a path to modernize your application infrastructure landscape, while maintaining your existing investments and architecture. With Bare
Metal Solution, you can bring your specialized workloads to Google Cloud, allowing you access and integration with GCP services with minimal latency.”
Answer: CD
Answer: D
Answer: C
Answer: B
Answer: C
Answer: C
Answer: C
Explanation:
A Greenfield approach is a brand-new implementation, where companies then add their needed configurations and customizations. This approach provides a
clean slate to start from, does not carry over needless customizations and technical debt, and provides a solid foundation for business process re-engineering.
A greenfield deployment is the design, installation and configuration of computer infrastructure where none existed before, for example, in a new office. In contrast,
a brownfield deployment is an upgrade or addition to existing infrastructure using legacy components.
Answer: B
Explanation:
Moving legacy applications to the cloud can help organizations satisfy user expectations by enabling them to push out updates more quickly to repair bugs.
A. Dataproc
B. Compute Engine
C. Recommendations AI
D. Vertex AI
Answer: D
Explanation:
Recommendations AI enables you to build an end-to-end personalized recommendation system based on state-of-the-art deep learning ML models, without a
need for expertise in ML or recommendation systems. With Vertex AI, both AutoML training and custom training are available options. Whichever option you
choose for training, you can save models, deploy models, and request predictions with Vertex AI. https://cloud.google.com/vertex-ai
Answer: B
Answer: A
A. Cloud Storage
B. BigQuery
C. Cloud SQL
D. Dataflow
Answer: C
Answer: B
Explanation:
Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database.
Answer: D
A. Data automation
B. Trends analysis
C. Machine learning
D. Multiple regression
Answer: C
Answer: B
What DevOps practice should an organization use when developing their application to help minimize disruption caused by bugs?
Answer: C
Explanation:
One of the key principles of DevOps is to release changes frequently and in small batches. This helps to reduce the risk of disruption caused by bugs. If a bug is
introduced in a small change, it is easier to identify and fix the bug without affecting a large number of users.
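One concrete reason small batches minimize disruption: when a release misbehaves, fewer changes need to be inspected to find the culprit. Even with an efficient binary search over the changes (as in `git bisect`), isolation cost grows with batch size; a sketch of that arithmetic:

```python
import math

def bisect_steps(batch_size):
    """Worst-case steps to isolate one bad change via binary search,
    as git bisect does over a range of commits."""
    return math.ceil(math.log2(batch_size)) if batch_size > 1 else 0

# Releasing 64 changes at once vs 4 changes at a time:
print(bisect_steps(64))  # → 6 steps to find the bad change
print(bisect_steps(4))   # → 2 steps
```

Smaller, more frequent releases also shrink the blast radius: a rollback undoes a handful of changes instead of weeks of work.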
Answer: B
A. Product inventory
B. Product photographs
C. Instructional videos
D. Customer chat history
Answer: A
A. Prioritize training current employees instead of hiring new recruits with cloud experience.
B. Prioritize giving privileged access to third-party partners and contractors to fill IT knowledge gaps.
C. Create a culture of self-motivated, isolated learning with official training materials.
D. Create a culture of continuous peer-to-peer learning with official training materials.
Answer: D
Answer: B
Explanation:
Apigee's API Monitoring enables you to track your APIs to make sure they are up and running correctly. API Monitoring provides near real-time insights into API
traffic and performance, to help you quickly diagnose and solve issues as they arise.
Apigee works with APIs not necessarily applications. It allows organizations to gain actionable insights across the entire API value chain and monetize API
products and maximize the business value of digital assets. https://cloud.google.com/apigee#section-11
A. Data field
B. Data lake
C. Database
D. Data warehouse
Answer: B
Explanation:
A data lake can store all types of data with no fixed limitation on account size or file and with no specific purpose defined yet. The data comes from disparate
sources and can be structured, semi-structured, or even unstructured. Data-lake data can be queried as needed.
https://cloud.google.com/learn/what-is-a-data-lake
A data lake is a centralized repository designed to store, process, and secure large amounts of structured, semistructured, and unstructured data. It can store data
in its native
format and process any variety of it, ignoring size limits.
Answer: A
Explanation:
Cloud Logging is a fully managed service that allows you to store, search, analyze, monitor, and alert on logging data and events from Google Cloud and Amazon
Web Services
Answer: B
Explanation:
https://cloud.google.com/architecture/migrating-a-monolithic-app-to-microservices-gke
A. TensorFlow
B. BigQuery ML
C. Vision API
D. AutoML Vision
Answer: A
Explanation:
https://en.wikipedia.org/wiki/TensorFlow TensorFlow is a free and open-source software library for machine learning and artificial intelligence, developed by the Google Brain team.
Answer: A
Answer: C
- (Topic 3)
An organization wants to move from a tactical cloud adoption approach to a transformational approach.
How should they adapt the way they lead the organization?
Answer: A
Answer: B
Explanation:
A relational database offers the functionality of storing transactional data, which can then be accessed electronically. Relational databases store structured data
that can be organized in tables with defined relationships between them. This makes them well- suited for transactional data, such as inventory data, that needs to
be accessed and updated frequently.
A. Hypervisor
B. Containers
C. Serverless computing
D. Open source
Answer: A
Answer: B
Answer: D
Answer: C
B. Roll out the new system to all employees to collect as much data as possible
C. Avoid rolling out the new system because it may have security flaws
D. Avoid rolling out the new system because it may violate privacy policy
Answer: A
Answer: D
Explanation:
This approach carries over as many custom components as possible from the source system and minimizes initial reengineering effort.
A. 5 minutes
B. 500 minutes
C. 5 hours
D. 5 days
Answer: A
A. Compute Engine
B. Anthos
C. An application programming interface
D. Google Kubernetes Engine
Answer: C
A. Implement code updates in real time without affecting the service level objective (SLO).
B. Inspect source code in real time without affecting user downtime.
C. Manage code and accelerate application development.
D. Analyze live source code during user downtime.
Answer: B
Explanation:
Cloud Debugger is a feature of Google Cloud Platform that lets you inspect the state of an application, at any code location, without stopping or slowing down the
running app. Cloud Debugger makes it easier to view the application state without adding logging statements.
A. False
B. None of the above
C. True
D. Not Defined by Google Cloud Platform
Answer: A
Explanation:
You can dynamically increase the size of a subnet in a custom network by expanding the range of IP addresses allocated to it. Doing that doesn’t affect already
configured VMs.
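The point above, that an expanded subnet range must still contain the already-configured VMs, can be illustrated with Python's standard ipaddress module (the CIDR values are made-up examples, not values from the question):

```python
# Sketch: expanding a subnet's range, as with
# `gcloud compute networks subnets expand-ip-range` (illustrative values).
import ipaddress

original = ipaddress.ip_network("10.0.0.0/24")   # 256 addresses
expanded = ipaddress.ip_network("10.0.0.0/20")   # 4096 addresses

# The expanded range fully contains the original one, so
# already-configured VMs keep their addresses.
assert original.subnet_of(expanded)
print(expanded.num_addresses)  # 4096
```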
A. Retain the data in use in a single region bucket with nearline storage.
B. Retain the data in use in a dual-region bucket.
C. Retain the data in use in a single region bucket with standard storage.
D. Retain the data in use in a multi-region bucket.
E. Retain the data in use in a dual-region bucket.
Answer: B
Explanation:
Integrated repository for analytics and ML: The highest level of availability
and performance within a single region is ideal for compute, analytics, and machine learning workloads in a particular region. Cloud Storage is also strongly
consistent, giving you confidence and accuracy in analytics workloads.
A. Set up a high-priority (1000) rule that blocks all egress and a low-priority (65534) rule that allows only the appropriate ports.
B. Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports.
C. Set up a high-priority (1000) rule to allow the appropriate ports.
D. Set up a high-priority (1000) rule that pairs both ingress and egress ports.
Answer: B
Explanation:
Implied rules: every VPC network has two implied firewall rules. These rules exist, but are not shown in the Cloud Console:
Implied allow egress rule. An egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to
any destination, except for traffic blocked by Google Cloud. A higher priority firewall rule may restrict outbound access. Internet access is allowed if no other
firewall rules deny outbound traffic and if the instance has an external IP address or uses a Cloud NAT instance. For more information, see Internet access
requirements.
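A minimal sketch of the priority logic described above: the lowest priority number wins, and an implied allow-egress rule sits at the lowest possible priority (65535). The port numbers are hypothetical, chosen only for illustration:

```python
# Rule set modeling answer B: a low-priority (65534) deny-all egress rule
# plus a high-priority (1000) allow rule for specific ports.
rules = [
    {"priority": 65534, "action": "deny",  "ports": None},      # blocks everything else
    {"priority": 1000,  "action": "allow", "ports": {443, 8443}},
]

def egress_decision(port, rules):
    """Return the action of the highest-priority (lowest number) matching rule."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["ports"] is None or port in rule["ports"]:
            return rule["action"]
    return "allow"  # implied allow-egress rule at priority 65535

print(egress_decision(443, rules))   # allow
print(egress_decision(3306, rules))  # deny
```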
Answer: BC
Answer: B
Explanation:
Cloud Armor provides DDoS protection for applications. It can also "Filter your incoming traffic based on IPv4 and IPv6 addresses or CIDRs. Enforce geography-
based access controls to allow or deny traffic based on source geo using Google’s geoIP
mapping."
A. Use VPC Peering with the Google Cloud organization so that you can directly use services using only private IPs.
B. Use private addresses only.
C. No additional configuration is required.
D. All Google services will be accessible within Google Cloud on private addresses.
E. Use Shared VPCs with the Google Cloud organization so that you can directly use services using only private IPs.
F. Enable Private Google Access so that they can remove public IP addresses.
Answer: D
Explanation:
"VM instances that only have internal IP addresses (no external IP addresses) can use Private Google Access. They can reach the external IP addresses of Google APIs and services. If you disable Private Google Access, the VM instances can no longer reach Google APIs and services; they can only send traffic within the VPC network."
https://cloud.google.com/vpc/docs/private-google-access
A. Cloud DataStore and Cloud SQL have Terabytes+ and Terabytes capacity respectively.
B. Cloud Bigtable and Cloud Storage both have Petabytes + capacity.
C. Cloud Bigtable and Cloud Storage both have not Petabytes + capacity.
D. None of the above.
Answer: AB
Answer: A
Explanation:
Because cross-team partnerships are part of the visibility cost management strategy.
https://wa.aws.amazon.com/wat.question.COST_1.en.html
Answer: C
Explanation:
Firebase/Firestore is easy to build with and is suitable for user information that could vary in nature.
Answer: ACD
Explanation:
GCP provides three main compliance resource webpages: Compliance Reports Manager, the GDPR home page, and Compliance Offerings.
Compliance Reports Manager
– https://cloud.google.com/security/compliance/compliance-reports-manager
A. A/B testing
B. Notification Composer
C. Firebase Remote config.
D. None of the above
Answer: B
Explanation:
You can send notification messages using the Notifications composer in the Firebase console. Though this does not provide the same flexibility or scalability as
sending messages with the Admin SDK or the HTTP and XMPP protocols, it can be very useful for testing or for highly targeted marketing and user engagement.
The Firebase console provides analytics-based A/B testing to help refine and improve marketing messages.
After you have developed logic in your app to receive messages, you can allow non- technical users to send messages per the instructions on the Notifications
page in the Firebase Help Center.
A. Use a serverless option like Cloud Functions that will automatically scale as much as required.
B. Instead of using a "general purpose" machine family, use "compute-optimized" machine family.
C. Since processing could also be dependent on reading and writing data to the disk, use a fast Local SSD.
D. Attach GPUs to the virtual machine for number crunching.
Answer: D
Explanation:
Compute Engine provides graphics processing units (GPUs) that you can add to your virtual machines (VMs). You can use these GPUs to accelerate specific
workloads on your VMs such as machine learning and data processing. https://cloud.google.com/compute/docs/gpus
A. Premier
B. Standard
C. Enhanced
D. Role
E. Premium
Answer: BCE
Explanation:
GCP provides three options for paid support: Standard, Enhanced, and Premium.
Basic Support is included with your Google Cloud subscription and covers case, phone, and chat support for billing issues only.
Reference link- https://cloud.google.com/support
A. Performance
B. App Distribution
C. Crashlytics
D. Test Lab
Answer: C
Explanation:
Firebase Crashlytics:
Get clear, actionable insight into app issues with this powerful crash reporting solution for iOS, Android, and Unity.
Firebase Crashlytics is a lightweight, real-time crash reporter that helps you track, prioritize, and fix stability issues that erode your app quality. Crashlytics saves
you troubleshooting time by intelligently grouping crashes and highlighting the circumstances that lead up to them.
Find out if a particular crash is impacting a lot of users. Get alerts when an issue suddenly increases in severity. Figure out which lines of code are causing
crashes.
A. Performance Agreement
B. Interconnection Agreement
C. Warranty
D. Service Level Agreement
Answer: D
Explanation:
Service Level Agreement (SLA)
A service level agreement (SLA) is a contract between a service provider (either internal or external) and the end user that defines the level of service expected
from the service provider. Common SLA metrics include uptime and response time.
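A common way to make an uptime SLA concrete is to translate the percentage into an allowed-downtime budget. A small sketch, with assumed target values:

```python
# Illustrative error-budget math for an uptime SLA (targets are assumptions,
# not figures from any particular provider's SLA).

def allowed_downtime_minutes(sla_percent, days=30):
    """Minutes of downtime permitted over `days` at the given uptime target."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(round(allowed_downtime_minutes(99.9), 1))   # 43.2 minutes per 30 days
print(round(allowed_downtime_minutes(99.99), 2))  # 4.32 minutes per 30 days
```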
A. Use the popular open-source libraries SciPy and NumPy to create machine learn-ing models.
B. Use the Unified AI Platform to create a custom TensorFlow model.
C. Use BigQuery ML to create machine learning models using SQL queries.
D. Integrate the Cloud Vision API and the Cloud Speech API to create a custom mod-el that will suit the retail sector.
Answer: C
Explanation:
BigQuery ML allows you to create ML models using standard SQL queries. Those familiar with BigQuery and ML will be able to create ML models with just a basic
understanding of machine learning.
https://cloud.google.com/bigquery-ml/docs/
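A sketch of what "ML models using standard SQL queries" looks like in BigQuery ML. The dataset, table, and column names below are hypothetical placeholders:

```python
# Hypothetical BigQuery ML statement: train a logistic regression model
# entirely in SQL. Running it requires a real BigQuery project/dataset.
create_model_sql = """
CREATE OR REPLACE MODEL `mydataset.purchase_model`
OPTIONS(model_type='logistic_reg') AS
SELECT
  user_age,
  page_views,
  did_purchase AS label
FROM `mydataset.visits`
"""

# With the google-cloud-bigquery client this would be submitted as an
# ordinary query job, e.g. client.query(create_model_sql).result()
print("logistic_reg" in create_model_sql)
```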
Answer: D
Explanation:
Answer: A
Explanation:
AutoML Vision Edge model can be deployed to one of several types of edge devices, such as mobile phones, ARM-based devices, and the Coral Edge TPU
https://cloud.google.com/vision/automl/docs/edge-quickstart
Answer: C
Explanation:
With Lift and Shift migrations, the customer could move workloads from a source environment to a target environment with few or no modifications or refactoring
https://cloud.google.com/architecture/migration-to-gcp-getting-started
A. Create a Pub/Sub topic, and enable a Cloud Storage trigger for the Pub/Sub topic.
B. Create an application that sends all medical images to the Pub/Sub topic.
C. Create a script that uses the gsutil command line interface to synchronize the on-premises storage with Cloud Storage.
D. Schedule the script as a cron job.
E. In the Cloud Console, go to Cloud Storage.
F. Upload the relevant images to the appropriate bucket.
G. Deploy a Dataflow job from the batch template, “Datastore to Cloud Storage”. Schedule the batch job on the desired interval.
Answer: B
Explanation:
Syncing new images would imply keeping the on-premises storage and synchronizing it forever. Instead, sync just once for the old images; new images go directly to Google Cloud via Pub/Sub, and the on-premises storage can eventually be retired.
Answer: C
Explanation:
Storage Transfer Service provides options that make data transfers and synchronization easier. We can also schedule one-time transfer operations or recurring
transfer operations.
Answer: A
Explanation:
https://cloud.google.com/blog/products/devops-sre/sre-fundamentals-slis-slas-and-slos
Answer: CDE
Explanation:
An instance lives for at most 24 hours, can be preempted with a 30-second notice (via API), and is discounted significantly.
A preemptible VM is an instance that you can create and run at a lower cost than normal instances. However, Compute Engine might stop (preempt) these instances if it requires access to those resources for other tasks. Preemptible instances are excess Compute Engine capacity, so their availability varies with usage.
Reference link- https://cloud.google.com/compute/docs/instances/preemptible
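A back-of-the-envelope illustration of the "discounted significantly" point above. The on-demand hourly rate is a made-up figure, not a real Compute Engine price:

```python
# Hypothetical cost comparison for a preemptible VM at its maximum lifetime.
on_demand_hourly = 0.10          # assumed $/hour, illustrative only
discount = 0.80                  # preemptible VMs are discounted up to ~80%
preemptible_hourly = on_demand_hourly * (1 - discount)

hours = 24                       # a preemptible VM lives at most 24 hours
print(round(on_demand_hourly * hours, 2))    # 2.4  (on-demand cost)
print(round(preemptible_hourly * hours, 2))  # 0.48 (preemptible cost)
```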
A. Cloud Storage with a single region that is known to be within the European Union.
B. Cloud Filestore connected to virtual machines which are guaranteed to be within the European Union.
C. Cloud Storage with the multi-region option of the European Union.
D. Cloud Storage with the dual-region option of the European Union.
Answer: C
Explanation:
The multi-region option will use multiple datacenters that are within the European Union. Multiple regions also help lower latency, since users are spread across the European Union.
https://cloud.google.com/storage/docs/locations#considerations
Answer: C
Explanation:
BigQuery is the data warehousing option on Google Cloud. Since the source data has already been used for analysis, it should easily fit the BigQuery structure
too.
A. Spanner
B. Bare Metal
C. BigQuery
D. Cloud SQL
Answer: D
Explanation:
Cloud SQL supports SQL Server. Since the IT team's attention is focused on other activities, they will have less time for existing admin tasks, so it would be best to take a managed/hosted version.
A. Dynamic content
B. Static content.
C. Microservices.
D. All of the Above.
Answer: D
Explanation:
Firebase Hosting- Firebase Hosting provides fast and secure hosting for your web app, static and dynamic content, and microservices.
Firebase Hosting is production-grade web content hosting for developers. With a single command, you can quickly deploy web apps and serve both static and
dynamic content to a global CDN (content delivery network). You can also pair Firebase Hosting with Cloud Functions or Cloud Run to build and host
microservices on Firebase.
Key capabilities of Firebase Hosting:
Serve content over a secure connection:- The modern web is secure. Zero-configuration SSL is built into Firebase Hosting, so content is always delivered
securely.
Host static and dynamic content plus microservices:- Firebase Hosting supports all kinds of content for hosting, from your CSS and HTML files to your Express.js
microservices or APIs.
Deliver content fast: Each file that you upload is cached on SSDs at CDN edges around the world and served as gzip or Brotli. We auto-select the best
compression method for your content. No matter where your users are, the content is delivered fast.
A. IP ranges, for example the client IP address range used for communication between your Google Cloud and Bare Metal Solution environments.
B. Google Cloud project ID that you are using with your Bare Metal Solution environment.
C. Total number of VLANs you need in your Bare Metal Solution Environment.
D. All of the above
Answer: D
Explanation:
What Bare Metal Solution provides
Bare Metal Solution is a managed solution that provides purpose-built HPE or Atos bare-metal servers in regional extensions that are connected to Google Cloud
by a managed, high-performance connection with a low-latency network fabric.
With Bare Metal Solution, Google Cloud provides and manages the core infrastructure, the network, the physical and network security, and hardware monitoring
capabilities in an environment from which you can access all of the Google Cloud services. The core infrastructure includes secure, controlled-environment
facilities, and power.
The Bare Metal Solution also includes the provisioning and maintenance of custom, sole-tenancy servers with local SAN, and smart hands support.
The network, which is managed by Google Cloud, includes a low-latency Partner Interconnect connection into the customer Bare Metal Solution environment.
The available Google Cloud services include private API access, management tools, support, and billing.
A. Database as a Service.
B. Platform as a Service.
C. Infrastructure as a Service.
D. Software as a Service.
Answer: C
Explanation:
What is Infrastructure as a service :
IaaS (infrastructure as a service) is a computing model that offers resources on-demand to businesses and individuals via the cloud.
IaaS is attractive because acquiring computing resources to run applications or store data the traditional way requires time and capital. Enterprises must purchase
equipment through procurement processes that can take months. They must invest in physical spaces: typically specialized rooms with power and cooling. And
after deploying the systems, enterprises need IT professionals to manage them.
All this is challenging to scale when demand spikes or the business grows. Enterprises risk running out of capacity or overbuilding and ending up with
infrastructure that suffers from low utilization.
These challenges are why IaaS use is steadily growing. Learn more about Compute Engine, Cloud Storage, etc.
Answer: B
Explanation:
Since they have already paid for the data center for another year, they have the time and resources to improve their workloads locally/on-premises first, and then migrate ("improve and migrate") to Google Cloud later on.
Answer: B
Explanation:
Preemptible VMs have all these features:
- Simple configuration: create a preemptible instance simply by flipping a bit via command, API, or developer console.
- Easy extensibility: attach GPUs and local SSDs to preemptible instances for additional performance and savings.
- Graceful shutdown: Compute Engine gives you 30 seconds to shut down when you're preempted, letting you save your work in progress for later.
- Large scale computing: spin up as many instances as you need and turn them off when you're done. You only pay for what you use.
- Quickly reclaim capacity: managed instance groups automatically recreate your instances when they're preempted (if capacity is available).
- Fixed pricing: preemptible VMs have fixed pricing up to 80% off regular instances. They show up on your bill separately so you'll see just how much you're saving.
Answer: B
Explanation:
The question is about a dynamic way to provision VMs, which can be achieved with Deployment Manager or Terraform. A managed instance group (MIG) creates multiple machines from templates behind load balancing.
A. A hacker group has hired a bunch of people to create accounts and manually use the system.
B. Use Cloud Asset Inventory to see if there have been changes in the inventory.
C. Bots are creating accounts and then using them.
D. Use Google Cloud's Web App and API Protection (WAAP).
E. Bots are creating accounts and then using them.
F. Use Identity-Aware Proxy to restrict the users to known users.
G. Automated testing tools might still be running and creating accounts.
H. Use Identity-Aware Proxy to restrict the users to known users.
Answer: B
Explanation:
Bots attacking the application is the most likely scenario in this case. Using WAAP is the right protection plan: Anti-DDoS, anti-bot, WAF, and API protection help
you protect against new and existing threats while helping you keep your apps and APIs compliant and continuously available.
https://cloud.google.com/solutions/web-app-and-api-protection
A. IaaS offers virtually infinite flexibility and scalability, enterprises can get their work done more efficiently, ensuring faster development life cycles.
B. IaaS resources are regularly available to businesses when they need them.
C. As a result, enterprises reduce delays when expanding infrastructure and, alternatively, don’t waste resources by overbuilding capacity.
D. IaaS resources are used on demand and enterprises only have to pay for the compute, storage, and networking resources that are actually used, IaaS costs are
fairly predictable and can be easily contained and budgeted for.
E. All of the Above
Answer: D
Explanation:
These are the features of Infrastructure as a Service (IaaS).
It’s economical
Because IaaS resources are used on demand and enterprises only have to pay for the compute, storage, and networking resources that are actually used, IaaS costs are fairly predictable and can be easily contained and budgeted for.
It’s efficient
IaaS resources are regularly available to businesses when they need them. As a result, enterprises reduce delays when expanding infrastructure and, alternatively, don’t waste resources by overbuilding capacity.
Answer: D
Explanation:
Sole-tenancy lets you have exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project's
VMs. Use sole-tenant nodes to keep your VMs physically separated from VMs in other projects, or to group your VMs together on the same host hardware.
https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes
Answer: D
Explanation:
Cloud Spanner:
Fully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability.
- Get all the benefits of relational semantics and SQL with unlimited scale
- Start at any size and scale with no limits as your needs grow
- Enjoy high availability with zero scheduled downtime and online schema changes
- Deliver high-performance transactions with strong consistency across regions and continents
- Focus on innovation, eliminating manual tasks with capabilities like automatic sharding.
Answer: B
Explanation:
The Bare Metal solution is the recommended approach. You can deploy Oracle capabilities like clustered databases, replication, and all performance features at
licensing costs that are similar to on-premise systems
https://cloud.google.com/architecture/migrating-bare-metal-workloads
Answer: B
Explanation:
Cloud SQL for MySQL:
Features
- Fully managed MySQL Community Edition databases in the cloud.
- Cloud SQL instances support MySQL 8.0, 5.7 (default), and 5.6, and provide up to 624 GB of RAM and 64 TB of data storage, with the option to automatically
increase the storage size, as needed.
- Create and manage instances in the Google Cloud Console.
- Instances are available in the Americas, EU, Asia, and Australia.
- Customer data is encrypted on Google's internal networks and in database tables, temporary files, and backups.
- Support for secure external connections with the Cloud SQL Auth proxy or with the SSL/TLS protocol.
- Support for private IP (private services access).
- Data replication between multiple zones with automatic failover.
- Import and export databases using mysqldump, or import and export CSV files.
- Support for MySQL wire protocol and standard MySQL connectors.
- Automated and on-demand backups and point-in-time recovery.
- Instance cloning.
- Integration with Google Cloud's operations suite logging and monitoring.
A. Use Cloud Identity-Aware Proxy to allow only specific users to access the data.
B. Use Security Command Center to have a centralized view of assets and get noti-fied on misconfigurations and vulnerabilities.
C. Use Cloud Data Loss Prevention to prevent the loss of any data.
D. Use Cloud Armor to block any DDoS attacks that could be a threat.
Answer: B
Explanation:
Security Command Center is the right tool for this use case. It can check resources for security issues and notify you when issues are found.
https://cloud.google.com/security-command-center
Answer: D
Explanation:
Answer: B
Explanation:
Cloud Bigtable
A fully managed, scalable NoSQL database service for large analytical and operational workloads with up to 99.999% availability.
- Consistent sub-10ms latency—handle millions of requests per second
- Ideal for use cases such as personalization, ad tech, fintech, digital media, and IoT
- Seamlessly scale to match your storage needs; no downtime during reconfiguration
- Designed with a storage engine for machine learning applications leading to better predictions
- Easily connect to Google Cloud services such as BigQuery or the Apache ecosystem
A. Cloud SQL
B. Cloud Bigtable
C. Cloud Spanner
D. Google Cloud BigQuery
Answer: C
Explanation:
- Cloud Spanner is the online transaction processing solution that is relational and offers petabyte scalability. Cloud SQL is not designed for petabyte-scale data.
A. Use BigQuery, which has very fast data access and analysis
B. Use Cloud Storage which can be central, scalable storage
C. Use local SSDs with the VMs
D. Use Persistent Disk with the VMs
Answer: C
Explanation:
Local SSDs are attached to the VM and have very high throughput. However, when the VM shuts down, the local SSD data is lost. Since our workload here is fault tolerant, that is not an issue.
A. Containerize the services and orchestrate them with Google Kubernetes Engine.
B. Retain the original application in Compute Engine and scale it as needed using Managed Instance Groups.
C. Retain the original application as a backup and also for separately scaling the services, create new application binaries.
D. Retain the original application in Compute Engine and scale it as needed using Unmanaged Instance Groups.
Answer: A
Explanation:
Containers and Kubernetes are ideal for the kind of requirement mentioned here - separate microservices that need to scale independently.
Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications using Google
infrastructure. The GKE environment consists of multiple machines (specifically, Compute Engine instances) grouped together to form a cluster.
Reference link- https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview
A. Compliance Hub
B. Google Cloud Platform Status
C. Support Hub
D. Pricing Page
Answer: C
Explanation:
Google provides a page that brings together everything needed around support. It's called the Support Hub.
Reference link- https://cloud.google.com/support-hub
A. Integrate the Cloud Vision API to create a custom model to handle the documents.
B. Create a model using TensorFlow and integrated it into the process workflow.
C. Integrate Lending DocAI and Document AI into their process workflow for processing loan requests.
D. Integrate the Natural Language API to read the request sent in by clients and to process the forms.
Answer: C
Explanation:
Lending DocAI is a pre-packaged AI solution that speeds "up the mortgage workflow processes to easily process loans and automate document data capture,
while ensuring the accuracy and breadth of different documents (e.g., tax statements and asset documents)."
https://cloud.google.com/solutions/lending-doc-ai
A. Cloud SQL
B. Cloud BigTable
C. Cloud Spanner
D. Cloud Firestore
Answer: B
Explanation:
Cloud BigTable
Key features
High throughput at low latency
Bigtable is ideal for storing very large amounts of data in a key-value store and supports high read and write throughput at low latency for fast access to large
amounts of data. Throughput scales linearly—you can increase QPS (queries per second) by adding Bigtable nodes. Bigtable is built with proven infrastructure that
powers Google products used by billions such as Search and Maps.
Cluster resizing without downtime
Scale seamlessly from thousands to millions of reads/writes per second. Bigtable throughput can be dynamically adjusted by adding or removing cluster nodes
without restarting, meaning you can increase the size of a Bigtable cluster for a few hours to handle a large load, then reduce the cluster's size again—all without
any downtime.
Flexible, automated replication to optimize any workload
Write data once and automatically replicate where needed with eventual consistency—giving you control for high availability and isolation of read and write workloads. No manual steps are needed to ensure consistency, repair data, or synchronize writes and deletes. Benefit from a high-availability SLA of 99.999% for instances with multi-cluster routing across 3 or more regions (99.9% for single-cluster instances).
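Because Bigtable throughput scales roughly linearly with node count, sizing a cluster for a QPS target is simple division. The per-node figure below is an assumed planning number for illustration, not an official quota:

```python
# Hypothetical Bigtable cluster sizing: linear scaling means
# nodes_needed = ceil(target_qps / per_node_qps).
import math

per_node_qps = 10_000          # assumed reads/sec per node (illustrative)
target_qps = 250_000

nodes_needed = math.ceil(target_qps / per_node_qps)
print(nodes_needed)  # 25
```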
Answer: A
Explanation:
Compute Engine allows you to allocate VMs with different OSs - Windows and Linux, included.
A. CaaS
B. SaaS
C. PaaS
D. IaaS
Answer: B
Explanation:
SaaS – Software as a Service (SaaS) provides you a complete product that is run and managed by the service provider. You worry only about using the software
and not about infrastructure.
SaaS provides the lowest level of flexibility and management control over the infrastructure. (Example: Google Gsuite and MS O365)
A. Grant the financial team the IAM role of “Billing Account User” on the billing account linked to your credit card.
B. Change the billing account of your projects to the billing account of your company.
C. Create a ticket with Google Billing Support to ask them to send the invoice to your company.
D. Set up BigQuery billing export and grant your financial department IAM access to query the data.
Answer: B
Explanation:
To change the Cloud Billing account for a project, you need to be able to move a project from one Cloud Billing account to another. To accomplish this task, you
need permissions adequate to unlink the project from the existing Cloud Billing account AND to link the project to the target Cloud Billing account. Roles with
adequate permissions to perform this task: Project Owner or Project Billing Manager on the project, AND Billing Account Administrator or Billing Account User for
the target Cloud Billing account
Reference link- https://cloud.google.com/billing/docs/how-to/modify-
Answer: B
Explanation:
The recommended approach is to have folders corresponding to teams/departments and they manage the projects within that.
-> Sharing a single project will cause a conflict of resources, billing, concerns, etc.
-> One folder per project is unnecessary overuse of abstraction/grouping.
-> Teams and projects in a company should ideally be centrally managed in a single Organization.
A. Java Spring Boot Framework, to solve the problem of API management.
B. Cloud Functions with Firestore and payment gateways integration development.
C. Apigee API Management
D. Frontend & Backend Development with NodeJs and angular etc.
Answer: C
Explanation:
A top-level idea about Apigee API Management and its offered features can help you solve all questions related to Apigee in Cloud Digital Leader Practice Exam.
Apigee is a platform for developing and managing APIs. By fronting services with a proxy
layer, Apigee provides an abstraction or facade for your backend service APIs and provides security, rate limiting, quotas, analytics, and more.
Apigee services: The APIs that you use to create, manage, and deploy your API proxies. Apigee runtime: A set of containerized runtime services in a Kubernetes
cluster that Google maintains. All API traffic passes through and is processed by these services.
A. Pub/Sub
B. Dataflow
C. Data Catalog
D. Dataprep by Trifacta
Answer: B
Explanation:
Reference: https://cloud.google.com/dataflow/docs/guides/deploying-a-pipeline
A. Select a public cloud provider that is only active in the required geographic area
B. Select a private cloud provider that globally replicates data storage for fast data access
C. Select a public cloud provider that guarantees data location in the required geographic area
D. Select a private cloud provider that is only active in the required geographic area
Answer: C
Explanation:
The goal of the migration is to serve customers worldwide as quickly as possible. According to local regulations, certain data is required to be stored in a specific geographic area, while still being served worldwide. These characteristics are inherent to a public cloud provider that guarantees data location.
A. View the Security Command Center to identify virtual machines running vulnerable disk images
B. View the Compliance Reports Manager to identify and download a recent PCI audit
C. View the Security Command Center to identify virtual machines started more than 2 weeks ago
D. View the Compliance Reports Manager to identify and download a recent SOC 1 audit
Answer: A
Explanation:
Security Health Analytics and Web Security Scanner detectors generate vulnerability findings that are available in Security Command Center. Your ability to view and edit findings is determined by the Identity and Access Management (IAM) roles and permissions you are assigned; see the Security Command Center documentation for more information about its IAM roles.
Reference link:-
https://cloud.google.com/security-command-center/docs/concepts-vulnerabilities-findings
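Findings can also be retrieved programmatically. A sketch using the `google-cloud-securitycenter` Python client is shown below; the organization ID is a placeholder, and the call requires the `securitycenter.findings.list` IAM permission.

```python
def source_parent(organization_id, source_id="-"):
    # Pure helper: "-" is the API's wildcard for all sources in the org.
    return f"organizations/{organization_id}/sources/{source_id}"

def list_finding_categories(organization_id):
    # Requires: pip install google-cloud-securitycenter, plus ADC credentials
    # and the securitycenter.findings.list permission on the organization.
    from google.cloud import securitycenter
    client = securitycenter.SecurityCenterClient()
    findings = client.list_findings(
        request={"parent": source_parent(organization_id)}
    )
    return [result.finding.category for result in findings]
```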
Answer: B
Explanation:
A multi-zone deployment is redundant within the region and provides the lowest latency.
Reference link:-
https://cloud.google.com/solutions/best-practices-compute-engine-region-selection
A. Share the information in a GitHub repository and grant access to the repo in IAM as required.
B. Store the information in Secret Manager and give IAM read permissions as required.
C. Store the information in Kubernetes Secrets and only grant read permissions to users as required.
D. Encrypt the information and store it in Cloud Storage for centralized access. Give the decrypt key only to the users who need to access it.
Answer: B
Explanation:
Secret Manager is a secure and convenient storage system for API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud.
https://cloud.google.com/secret-manager
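Access to secrets is granted through IAM (for example, `roles/secretmanager.secretAccessor`) rather than by sharing the values themselves. A minimal sketch using the `google-cloud-secret-manager` Python client, with placeholder project and secret IDs:

```python
def secret_resource_name(project_id, secret_id, version="latest"):
    # Pure helper: build the fully qualified secret version resource name.
    return f"projects/{project_id}/secrets/{secret_id}/versions/{version}"

def read_secret(project_id, secret_id, version="latest"):
    # Requires: pip install google-cloud-secret-manager, ADC credentials,
    # and the secretmanager.versions.access permission on the secret.
    from google.cloud import secretmanager
    client = secretmanager.SecretManagerServiceClient()
    name = secret_resource_name(project_id, secret_id, version)
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("utf-8")
```

Every access is recorded in Cloud Audit Logs, which is what makes this approach auditable in a way that a shared repository or encrypted bucket is not.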
A. Developers want cloud providers to take full control of their application performance.
B. IT managers want cloud providers to automatically deploy their infrastructure.
C. IT managers want to stop making gradual changes.
D. Developers want to test ideas and experiment with more ease.
Answer: D
Explanation:
Modernizing applications means they can make alterations and innovate more easily.
A. BigQuery ML
B. AutoML Video Intelligence
C. Cloud Vision API
D. AutoML Tables
Answer: C
Explanation:
Reference: https://cloud.google.com/vision
Derive insights from your images in the cloud or at the edge with AutoML Vision, or use pre-trained Vision API models to detect emotion, understand text, and more.
Vision API offers powerful pre-trained machine learning models through REST and RPC APIs. Assign labels to images and quickly classify them into millions of
predefined categories. Detect objects and faces, read printed and handwritten text, and build valuable metadata into your image catalog.
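Because the Vision API's models are pre-trained, no labeled training data is needed. A sketch of label detection with the `google-cloud-vision` Python client follows; the image path and the confidence threshold are placeholders for illustration.

```python
def high_confidence_labels(labels, min_score=0.8):
    # Pure helper: keep label descriptions whose confidence score
    # meets the threshold; labels is a list of (description, score).
    return [desc for desc, score in labels if score >= min_score]

def detect_labels(image_path):
    # Requires: pip install google-cloud-vision, and ADC credentials.
    from google.cloud import vision
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, label.score)
            for label in response.label_annotations]
```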
A. They can rely on the cloud provider for all website source code.
B. Agile storage scalability.
C. 100% service availability.
D. They can shift from heavy operational expenditure to a capital expenditure model.
Answer: B
Explanation:
Organizations can scale in the cloud by paying for what they use, when they use it.
Answer:
Explanation:
Network requirements for Private Google Access:
- Because Private Google Access is enabled on a per-subnet basis, you must use a VPC network. Legacy networks are not supported because they don't support
subnets.
- Private Google Access does not automatically enable any API. You must separately enable the Google APIs you need to use via the APIs & services page in the
Google Cloud Console.
- If you use the private.googleapis.com or the restricted.googleapis.com domain names, you'll need to create DNS records to direct traffic to the IP addresses associated with those domains.
- Your network must have appropriate routes for the destination IP ranges used by Google APIs and services. These routes must use the default internet gateway next hop. If you use the private.googleapis.com or the restricted.googleapis.com domain names, you only need one route (per domain). Otherwise, you'll need to create multiple routes.
- Egress firewalls must permit traffic to the IP address ranges used by Google APIs and services. The implied allow egress firewall rule satisfies this requirement; stricter firewall configurations can meet it in other ways.
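The special domains resolve to small, fixed VIP ranges, which is what makes the firewall and routing requirements tractable. The ranges below are the ones documented for these domains at the time of writing (verify against current Google documentation before relying on them); a small stdlib-only sketch checks whether a destination falls inside them:

```python
import ipaddress

# Documented VIP ranges for the special domains (verify against
# current Google Cloud documentation):
PRIVATE_GOOGLEAPIS = ipaddress.ip_network("199.36.153.8/30")    # private.googleapis.com
RESTRICTED_GOOGLEAPIS = ipaddress.ip_network("199.36.153.4/30")  # restricted.googleapis.com

def is_google_api_vip(dest_ip):
    # True if an egress firewall rule and a route via the default
    # internet gateway must cover this destination for Private
    # Google Access through the special domains.
    ip = ipaddress.ip_address(dest_ip)
    return ip in PRIVATE_GOOGLEAPIS or ip in RESTRICTED_GOOGLEAPIS
```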
A. Migrate to BigQuery
B. Migrate to Cloud Spanner
C. Migrate to Cloud SQL
D. Migrate to Bigtable
Answer: B
Explanation:
Cloud Spanner is a globally distributed SQL database that scales horizontally while preserving strong consistency, making it the best choice here.
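Because Spanner speaks SQL, application queries carry over largely unchanged after migration. A sketch with the `google-cloud-spanner` Python client is shown below; the instance, database, table, and column names are hypothetical.

```python
def format_row(row):
    # Pure helper: render a result row (a tuple of columns) as CSV.
    return ",".join(str(col) for col in row)

def query_users(instance_id, database_id):
    # Requires: pip install google-cloud-spanner, and ADC credentials.
    # Table and column names are hypothetical.
    from google.cloud import spanner
    client = spanner.Client()
    database = client.instance(instance_id).database(database_id)
    with database.snapshot() as snapshot:
        rows = snapshot.execute_sql("SELECT UserId, Name FROM Users")
        return [format_row(row) for row in rows]
```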
Answer: C
Explanation:
When you purchase a committed use contract, you purchase Compute Engine resources such as vCPUs, memory, GPUs, local SSDs, and sole-tenant nodes at a discounted price, in return for committing to pay for those resources for a 1-year or 3-year term.
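The trade-off can be made concrete with a small calculation. The discount rates below are assumptions for illustration; actual rates vary by machine type and region (resource-based commitments have historically offered up to roughly 37% for 1-year and 55% for 3-year terms).

```python
# Hypothetical on-demand cost per month for the committed resources.
ON_DEMAND_MONTHLY = 100.0
# Illustrative discount rates; real rates depend on machine type/region.
DISCOUNT = {"1yr": 0.37, "3yr": 0.55}

def committed_monthly_cost(term):
    # Cost per month under a committed use contract. Note this is owed
    # every month of the term, whether or not the resources are used.
    return ON_DEMAND_MONTHLY * (1 - DISCOUNT[term])
```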
Answer: D
Explanation:
The pay-as-you-go (PAYG) model is the better fit because you pay only for actual usage, and the scenario states that the workloads run only on certain days.
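A rough break-even check makes the reasoning explicit. Assuming a hypothetical 37% committed-use discount, a commitment costs 63% of the on-demand price for every hour of the term, so PAYG wins whenever actual utilization falls below that fraction:

```python
DISCOUNT_1YR = 0.37  # assumed discount rate, for illustration only

def payg_is_cheaper(hours_used, hours_in_period, discount=DISCOUNT_1YR):
    # PAYG cost scales with utilization; a commitment costs
    # (1 - discount) of on-demand for the full period regardless.
    utilization = hours_used / hours_in_period
    return utilization < (1 - discount)
```

For a batch job running 2 days out of every 7, utilization is about 0.29, well under the 0.63 break-even, so pay-as-you-go is cheaper.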