ACE Module 4 v2.0
Associate Cloud
Engineer Journey
What areas do you need to develop your skills in order to manage the different
aspects of a Google Cloud solution? This is another important area for an Associate
Cloud Engineer, and where you’ll likely spend much of your time on the job. Let’s
review the diagnostic questions to help you target your study time to focus on the
areas where you need to develop your skills.
Your study plan:
Ensuring successful operation of a cloud solution
4.1 Managing Compute Engine resources
 We’ll approach this review by looking at the objectives of this exam section and the
 questions you just answered about each one. We’ll introduce an objective, briefly
 review the answers to the related questions, then talk about where you can find out
 more in the learning resources and/or in Google Cloud documentation. As we go
 through each section objective, use the page in your workbook to mark the specific
 documentation, courses (and modules!), and skill badges you’ll want to emphasize in
 your study plan.
 Just like with the previous section, there are multiple objectives in this section that
 have many related tasks - so you will probably need to plan for more study time.
4.1 Managing Compute Engine resources
Tasks include:
● Managing a single VM instance (e.g., start, stop, edit configuration, or delete an instance)
● Remotely connecting to the instance
● Attaching a GPU to a new instance and installing necessary dependencies
● Viewing current running VM inventory (instance IDs, details)
● Working with snapshots (e.g., create a snapshot from a VM, view snapshots, delete a snapshot)
● Working with images (e.g., create an image from a VM or a snapshot, view images, delete an image)
● Working with instance groups (e.g., set autoscaling parameters, assign instance template, create an
  instance template, remove instance group)
● Working with management interfaces (e.g., Cloud Console, Cloud Shell, Cloud SDK)
 These are the diagnostic questions you answered that relate to this area:
 Question 1: Identify commands required to list and describe Compute Engine disk
 snapshots
 Question 2: Describe the incremental nature of Compute Engine disk snapshots
 Question 3: Implement an Instance Group based on an instance template
4.1 Diagnostic Question 01 Discussion
 Question:
 You want to view a description of your available snapshots using the command line
 interface (CLI). What gcloud command should you use?
4.1 Diagnostic Question 01 Discussion
 Feedback:
 *A. gcloud compute snapshots list
 Feedback: Correct! gcloud commands are built from groups and subgroups, followed
 by a command, which is a verb. In this example, compute is the group, snapshots is
 the subgroup, and list is the command.
 Where to look:
 https://cloud.google.com/compute/docs/disks/create-snapshots#listing-snapshots
 https://cloud.google.com/compute/docs/disks/create-snapshots#viewing-snapshot
 Content mapping:
   ●   Google Cloud Fundamentals: Core Infrastructure (ILT and On-demand)
         ○   M3 Virtual Machines and Networks in the Cloud
  ●    Architecting with Google Compute Engine (ILT)
         ○    M3 Virtual Machines
Summary:
Explanation/summary on the following slide.
To list Compute Engine disk snapshots, run:

       gcloud compute snapshots list

To describe a snapshot, which returns its creation time, size, and source disk, run the
following command:

       gcloud compute snapshots describe SNAPSHOT_NAME

       Flags available:
             --format: the output format to print (for example, json, yaml, or text)
             --project: the project to use for the command
             --quiet: disables interactive prompts
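As a quick illustrative example using those flags (the project and snapshot names here are
hypothetical placeholders):

       # List snapshots in a hypothetical project, printed as a table
       gcloud compute snapshots list \
           --project=my-project \
           --format="table(name, diskSizeGb, sourceDisk, creationTimestamp)"

       # Describe a single (hypothetical) snapshot without interactive prompts
       gcloud compute snapshots describe my-snapshot-001 \
           --project=my-project \
           --quiet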
4.1 Diagnostic Question 02 Discussion
 Question:
 You have a scheduled snapshot you are trying to delete, but the operation returns an
 error. What should you do to resolve this problem?
4.1 Diagnostic Question 02 Discussion
 Feedback:
 A. Delete the downstream incremental snapshots before deleting the main reference.
 Feedback: Incorrect. This is not required to delete a scheduled snapshot and would
 be a lot of manual work.
 Where to look:
 https://cloud.google.com/compute/docs/disks/snapshots#incremental-snapshots
 Content mapping:
   ●   Google Cloud Fundamentals: Core Infrastructure (ILT and On-demand)
         ○   M3 Virtual Machines and Networks in the Cloud
Summary:
Explanation/summary on the following slide.
Creating a Snapshot
Diagram: Persistent Disk A produces Snapshot 3, which only contains blocks that are
different since Snapshot 2.
Snapshots are incremental in nature and less expensive than creating full images of a
disk. You can only create them for persistent disks. They are stored in a Cloud
Storage bucket managed by the snapshot service and are automatically compressed.
You can choose regional or multi-regional storage, which affects cost. Snapshots are
stored across multiple locations with automatic checksums. You can set up a
snapshot schedule in the Google Cloud console or with the gcloud command line, or
schedule snapshot creation yourself with gcloud and cron. A snapshot schedule and
its source persistent disk have to be in the same region. You can restore a snapshot
to a new persistent disk. The new disk can be in a different zone or region, so you can
use snapshots to move VMs. You can create snapshots while the VM is running.
Snapshots of a single disk have to be at least 10 minutes apart.
  ●    The first snapshot is full and contains all the data on the persistent disk it was
       taken from.
  ●    Each subsequent snapshot only contains new or modified data since the
       previous snapshot.
  ●    However, to manage storage use and cost, a subsequent snapshot is
       occasionally created as a full copy rather than an incremental one.
Deleting a snapshot copies its data into the downstream incremental snapshots that
depend on it, increasing the size of those downstream snapshots. You can’t delete a
snapshot that still has a snapshot schedule associated with it; you need to delete the
schedule first.
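As a rough sketch of how that cleanup could look from the CLI (the disk, schedule, and
snapshot names, zone, and region are hypothetical placeholders), you might detach and
delete the snapshot schedule before deleting the snapshot:

       # Detach the (hypothetical) snapshot schedule from the source disk
       gcloud compute disks remove-resource-policies my-disk \
           --resource-policies=my-snapshot-schedule \
           --zone=us-central1-a

       # Delete the schedule itself
       gcloud compute resource-policies delete my-snapshot-schedule \
           --region=us-central1

       # Now the snapshot can be deleted
       gcloud compute snapshots delete my-snapshot-001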
4.1 Diagnostic Question 03 Discussion
 Question:
 Which of the following tasks are part of the process when configuring a managed
 instance group? (Pick two.)
4.1 Diagnostic Question 03 Discussion
 Feedback:
 *A. Defining Health checks
 Feedback: Correct! Health checks are part of your managed instance group
 configuration.
 Where to look:
 https://cloud.google.com/compute/docs/instance-templates
 https://cloud.google.com/compute/docs/instance-groups
 Content mapping:
  ●    Architecting with Google Compute Engine (ILT)
         ○    M9 Load Balancing and Autoscaling
Summary:
Explanation/summary on the following slide.
Implementing an instance group

Step 01: Create an instance template (e.g., identify machine type and boot disk).
Step 02: Configure your instance group (e.g., number of instances and autoscaling
settings).
Managed instance groups help you create and manage groups of identical VM
instances. They are based on an instance template that defines how new VMs added
to the instance group should be configured. You can specify the size of an instance
group and change it at any time. The managed instance group will make sure the
number of instances matches what you request, and monitors instances via health
checks. If an instance goes down, the managed instance group will start another
instance to replace it. Managed instance groups can be zonal or regional. Regional
instance groups create instances across multiple zones in a region so your
application can still run in case of a zonal outage.
The first step to creating a managed instance group is to create an instance template.
An instance template contains information about how to create instances in the group
by specifying machine type, boot disk, connectivity, disks, and other details pertinent
to your needs. This information is similar to what you would provide if you were
configuring an individual instance.
After you create an instance template you need to configure your managed instance
group. Here is where you specify location settings, describe port mappings, and
reference the instance template. You also specify the number of instances in your
group, configure autoscaling, and create health checks for your instances to
determine which instances should receive traffic.
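Here is a hedged command-line sketch of those two steps; the template, group, and health
check names, machine type, image, and region are all hypothetical placeholders:

       # Step 1: create an instance template
       gcloud compute instance-templates create web-template \
           --machine-type=e2-medium \
           --image-family=debian-12 \
           --image-project=debian-cloud

       # Step 2: create a health check and a regional managed instance group from the template
       gcloud compute health-checks create http web-health-check --port=80

       gcloud compute instance-groups managed create web-mig \
           --template=web-template \
           --size=3 \
           --region=us-central1 \
           --health-check=web-health-check \
           --initial-delay=300

       # Configure autoscaling on the group
       gcloud compute instance-groups managed set-autoscaling web-mig \
           --region=us-central1 \
           --min-num-replicas=3 \
           --max-num-replicas=10 \
           --target-cpu-utilization=0.6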
4.1 Managing Compute Engine resources
Documentation
 ● M2 Load Balancing and Autoscaling
 Let’s take a moment to consider resources that can help you build your knowledge
 and skills in this area.
 The concepts in the diagnostic questions we just reviewed are covered in these
 modules and in this documentation. You’ll find this list in your workbook so you can
 take a note of what you want to include later when you build your study plan. Based
 on your experience with the diagnostic questions, you may want to include some or all
 of these.
 https://cloud.google.com/compute/docs/disks/create-snapshots#listing-snapshots
 https://cloud.google.com/compute/docs/disks/create-snapshots#viewing-snapshot
 https://cloud.google.com/compute/docs/disks/snapshots#incremental-snapshots
 https://cloud.google.com/compute/docs/instance-templates
 https://cloud.google.com/compute/docs/instance-groups
4.2 Managing Google Kubernetes Engine resources
Tasks include:
● Viewing current running cluster inventory (nodes, pods, services)
● Browsing Docker images and viewing their details in the Artifact Registry
● Working with node pools (e.g., add, edit, or remove a node pool)
● Working with pods (e.g., add, edit, or remove pods)
● Working with services (e.g., add, edit, or remove a service)
● Working with stateful applications (e.g. persistent volumes, stateful sets)
● Managing horizontal and vertical autoscaling configurations
● Working with management interfaces (e.g., Cloud Console, Cloud Shell, Cloud SDK, kubectl)
Cymbal Superstore’s GKE cluster requires an internal http(s) load balancer. You are
creating the configuration files required for this resource. What is the proper setting
for this scenario?
A. Annotate your ingress object with an ingress.class of “gce.”
B. Configure your service object with a type: LoadBalancer.
C. Annotate your service object with a “neg” reference.
D. Implement custom static routes in your VPC.
 Question:
 Cymbal Superstore’s GKE cluster requires an internal http(s) load balancer. You are
 creating the configuration files required for this resource. What is the proper setting
 for this scenario?
4.2 Diagnostic Question 04 Discussion
Cymbal Superstore’s GKE cluster requires an internal http(s) load balancer. You are
creating the configuration files required for this resource. What is the proper setting
for this scenario?
A. Annotate your ingress object with an ingress.class of “gce.”
B. Configure your service object with a type: LoadBalancer.
C. Annotate your service object with a “neg” reference.
D. Implement custom static routes in your VPC.
 Feedback:
 *C. Annotate your service object with a “neg” reference.
 Feedback: Correct! Internal HTTP(S) load balancing on GKE requires container-native
 load balancing with network endpoint groups (NEGs), so your service metadata needs
 the annotation cloud.google.com/neg: '{"ingress": true}'.
 Where to look:
 https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-ilb
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb
https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress
Content mapping:
  ●   Google Cloud Fundamentals: Core Infrastructure (ILT and On-demand)
        ○   M5 Containers in the Cloud
Summary:
Explanation/summary on the following slide.
Internal vs. external load balancing in GKE
Diagram: an ingress-managed load balancer routes traffic to pods in the cluster.
To implement network load balancing, you create a service object with these settings:
  ●    type: LoadBalancer
  ●    externalTrafficPolicy set to Cluster or Local
Cluster: traffic is load balanced to any healthy GKE node, and kube-proxy then
forwards it to a node running the pod.
Local: nodes that are not running the pod are reported as unhealthy, so traffic is only
sent to nodes running the pod and is delivered directly to the pod with the source IP
preserved in the header.
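As an illustrative sketch (the service and app names, ports, and label are hypothetical), a
network load balancer for a GKE workload could be declared like this and applied with
kubectl:

# Hypothetical Service exposing pods labeled app=web via a network load balancer
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-nlb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # or Cluster
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
EOF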
To implement external HTTP(S) load balancing, create an ingress object with the
following settings:
  ●    Routing depends on URL path, session affinity, and the balancing mode of the
       backend network endpoint groups (NEGs).
  ●    The object kind is Ingress.
  ●    Adding the ingress.class: “gce” annotation in the metadata deploys an
       external load balancer.
  ●    The external load balancer is deployed at Google points of presence.
  ●    The static IP for the ingress lasts as long as the object.
To implement an internal HTTP(S) load balancer, create an ingress object with the
following settings:
●   Routing depends on URL path, session affinity, and the balancing mode of the
    backend NEGs.
●   The object kind is Ingress.
●   The metadata requires an ingress.class: “gce-internal” annotation to spawn an
    internal load balancer.
●   Proxies are deployed in a proxy-only subnet in a specific region in your VPC.
●   Only NEGs are supported. Use the following annotation in your service
    metadata:
       ○    cloud.google.com/neg: '{"ingress": true}'
●   The forwarding rule is assigned an address from the GKE node address range.
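To make those internal ingress settings concrete, here is a hedged sketch (the names and
ports are hypothetical; the annotation keys follow the GKE documentation linked above):

# Hypothetical NEG-annotated backend Service plus an internal Ingress
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: checkout-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: checkout
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: checkout-internal-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: checkout-svc
      port:
        number: 80
EOF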
4.2 Diagnostic Question 05 Discussion
What Kubernetes object provides access to logic running in your cluster via endpoints
that you define?
A. Pod templates
B. Pods
C. Services
D. Deployments
 Question:
 What Kubernetes object provides access to logic running in your cluster via endpoints
 that you define?
4.2 Diagnostic Question 05 Discussion
What Kubernetes object provides access to logic running in your cluster via endpoints
that you define?
A. Pod templates
B. Pods
C. Services
D. Deployments
 Feedback:
 A. Pod templates
 Feedback: Incorrect. Pod templates define how pods will be configured as part of a
 deployment.
 B. Pods
 Feedback: Incorrect. Pods provide the executable resources your containers run in.
 *C. Services
 Feedback: Correct! Service endpoints are defined by pods with labels that match
 those specified in the service configuration file. Services then specify how those pods
 are exposed.
 D. Deployments
 Feedback: Incorrect. Deployments help you with availability and the health of a set of
 pod replicas. They do not help you configure external access.
 Where to look:
 https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview
 https://cloud.google.com/kubernetes-engine/docs/concepts/pod
 https://cloud.google.com/kubernetes-engine/docs/concepts/deployment
 https://cloud.google.com/kubernetes-engine/docs/concepts/service
Content mapping:
  ●   Google Cloud Fundamentals: Core Infrastructure (ILT and On-demand)
        ○   M5 Containers in the Cloud
Summary:
Explanation/summary on the following slide.
Kubernetes objects
Diagram: a Deployment manages and monitors a set of Pods; a Service exposes them.
What is a service? A service is a group of pod endpoints that you can configure
access to. You use selectors to define which pods are included in a service.
A service gives you a stable IP address that belongs to the service. Pods have internal
IP addresses, but those can change as pods are restarted and replaced.
A service can be configured to implement load balancing.
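A minimal sketch of that relationship, with hypothetical names: the selector below picks up
any pods labeled app=orders, and you can inspect the endpoints behind the service
afterwards.

# Hypothetical ClusterIP Service selecting pods labeled app=orders
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: orders-svc
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080
EOF

# The stable service IP, and the pod endpoints currently backing it
kubectl get service orders-svc
kubectl get endpoints orders-svc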
4.2 Diagnostic Question 06 Discussion
What is the declarative way to initialize and update Kubernetes objects?
A. kubectl apply
B. kubectl create
C. kubectl replace
D. kubectl run
 Question:
 What is the declarative way to initialize and update Kubernetes objects?
4.2 Diagnostic Question 06 Discussion
What is the declarative way to initialize and update Kubernetes objects?
A. kubectl apply
B. kubectl create
C. kubectl replace
D. kubectl run
 Feedback:
 *A. kubectl apply
 Feedback: Correct! kubectl apply creates and updates Kubernetes objects in a
 declarative way from manifest files.
 B. kubectl create
 Feedback: Incorrect. kubectl create creates objects in an imperative way. You can
 build an object from a manifest, but if you run the command again to change an
 existing object, you will get an error.
 C. kubectl replace
 Feedback: Incorrect. kubectl replace downloads the current copy of the spec and
 lets you change it. The command replaces the object with a new one based on the
 spec you provide.
 D. kubectl run
 Feedback: Incorrect. kubectl run creates a Kubernetes object in an imperative way
 using arguments you specify on the command line.
 Where to look:
 https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-workloads-overview#imperative_commands
 https://kubernetes.io/docs/concepts/overview/working-with-objects/object-management/
Content mapping:
  ●   Getting Started with Google Kubernetes Engine (ILT and On-demand)
        ○    M4 Kubernetes Operations
Summary:
Explanation/summary on the following slide.
Types of kubectl commands

Imperative: run, create, replace, delete
Declarative: apply
       ●   Works on a directory of config files
       ●   Specifies what you want, not how to achieve it
You execute kubectl commands to manage objects such as pods, deployments, and
services.
Imperative commands such as run, create, replace, and delete act on a live object or a
single config file and overwrite any state changes that have occurred on an existing
object.
Declarative commands use config files stored in a directory to deploy and apply changes
to your app objects.
   ●    Use kubectl apply on a directory of config files.
   ●    You don’t specify create, replace, or delete commands.
Example commands:
Pods are not usually created by themselves; they are based on a pod template made
available in a deployment.
With kubectl expose, you can create a service for an existing object by name or from a
config file. The object you create a service for can be a deployment, service, replica set,
replication controller, or pod.
If you need to change a deployment, you change the config file and run kubectl apply
again.
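For illustration only (the deployment name, image, ports, and directory are hypothetical),
the imperative and declarative styles look like this:

       # Imperative: act directly on live objects with command-line arguments
       kubectl create deployment web --image=nginx:1.25
       kubectl expose deployment web --port=80 --target-port=8080
       kubectl delete deployment web

       # Declarative: keep manifests in a directory and let Kubernetes reconcile desired state
       kubectl apply -f k8s-manifests/

       # To change a deployment, edit its manifest in that directory and re-apply
       kubectl apply -f k8s-manifests/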
 Let’s take a moment to consider resources that can help you build your knowledge
 and skills in this area.
 The concepts in the diagnostic questions we just reviewed are covered in these
 modules, skill badges, and documentation. You’ll find this list in your workbook so you
 can take a note of what you want to include later when you build your study plan.
 Based on your experience with the diagnostic questions, you may want to include
 some or all of these.
 https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-ilb
 https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb
 https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
 https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress
 https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview
 https://cloud.google.com/kubernetes-engine/docs/concepts/pod
 https://cloud.google.com/kubernetes-engine/docs/concepts/deployment
https://cloud.google.com/kubernetes-engine/docs/concepts/service
https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-workloads-overview#imperative_commands
https://kubernetes.io/docs/concepts/overview/working-with-objects/object-management/
4.3 Managing Cloud Run resources
Tasks include:
● Adjusting application traffic splitting parameters
● Setting scaling parameters for autoscaling instances
● Determining whether to run Cloud Run (fully managed) or Cloud Run for Anthos
 Cloud Run and Cloud Functions are Google’s serverless approach to handling
 containers and functional code. In both of these technologies, you pay for resources
 based on how many requests are coming in. One big difference between the two is
 that Cloud Run is optimized for multiple concurrent connections to each instance,
 while Cloud Functions handles only one connection per function instance.
 For Cymbal Superstore, Cloud Run could be used to quickly test updates to
 containers. As an Associate Cloud Engineer, you could be tasked to implement traffic
 splitting to test changes or roll back updates that didn’t work well. There are also
 settings you need to know for autoscaling, such as minimum and maximum instances,
 that let you make tradeoffs between latency and cost. You also have the choice of
 using a fully managed version in Google Cloud or a hybrid version available as part of
 Anthos. The hybrid version runs on abstracted GKE resources allocated by your
 Anthos cluster.
 Question:
 You have a Cloud Run service with a database backend. You want to limit the number
 of connections to your database. What should you do?
4.3 Diagnostic Question 07 Discussion
 Feedback:
 A. Set Min instances.
 Feedback: Incorrect. Min instances reduce latency when you start getting requests
 after a period of no activity. It keeps you from scaling down to zero.
 Where to look:
 https://cloud.google.com/run/docs/about-instance-autoscaling
 Content mapping:
   ●   Google Cloud Fundamentals: Core Infrastructure (ILT and On-demand)
         ○   M6 Applications in the Cloud
Summary:
Explanation/summary on the following slide.
Cloud Run autoscaling
Diagram: you set min instances, max instances, and concurrency; Cloud Run scales the
number of instances based on the number of incoming events.
Cloud Run automatically scales the number of container instances required for each
deployed revision. When no traffic is received, the deployment automatically scales to
zero.
Instances that are started might remain idle for up to 15 minutes to reduce latency
associated with cold starts. You don’t get charged for these idle instances. You set a
min and max on the container tab in the advanced settings dialog.
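As a rough sketch of how those settings map to the CLI (the service name, image path,
region, revision names, and values are hypothetical; max instances is the setting that caps
connections to a backing database, as in the question above):

       # Deploy a hypothetical Cloud Run service with autoscaling and concurrency limits
       gcloud run deploy inventory-api \
           --image=us-docker.pkg.dev/my-project/my-repo/inventory-api:latest \
           --region=us-central1 \
           --min-instances=1 \
           --max-instances=10 \
           --concurrency=80

       # Adjust traffic splitting between two existing revisions (revision names are placeholders)
       gcloud run services update-traffic inventory-api \
           --region=us-central1 \
           --to-revisions=inventory-api-00002-abc=10,inventory-api-00001-xyz=90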
4.3 Managing Cloud Run resources
Courses Documentation
 Let’s take a moment to consider resources that can help you build your knowledge
 and skills in this area.
 The concepts in the diagnostic question we just reviewed are covered in this module
 and in this documentation. You’ll find this list in your workbook so you can take a note
 of what you want to include later when you build your study plan. Based on your
 experience with the diagnostic questions, you may want to include some or all of
 these.
 https://cloud.google.com/run/docs/about-instance-autoscaling
4.4 Managing storage and database solutions
Tasks include:
● Managing and securing objects in and between Cloud Storage buckets
● Setting object life cycle management policies for Cloud Storage buckets
● Executing queries to retrieve data from data instances (e.g., Cloud SQL, BigQuery, Cloud Spanner,
  Cloud Datastore, Cloud Bigtable)
● Estimating costs of data storage resources
● Backing up and restoring database instances (e.g., Cloud SQL, Cloud Datastore)
● Reviewing job status in Cloud Dataproc, Cloud Dataflow, or BigQuery
 Consider this example. You store static images of products for Cymbal Superstore’s
 ecommerce app in Cloud Storage. As an Associate Cloud Engineer, you would be
 expected to know how to secure access to these images from the application through
 IAM roles assigned to a service account. When you upgrade product images you
 would like to keep the previous images, but move them to a different storage type
 based on object versioning. You could do this using the object lifecycle management
 feature of Cloud Storage.
 Question:
 You want to implement a lifecycle rule that changes your storage type from Standard
 to Nearline after a specific date. What conditions should you use? (Pick two.)
4.4 Diagnostic Question 08 Discussion
 Feedback:
 A. Age
 Feedback: Incorrect. Age is specified by number of days, not a specific date.
 *B. CreatedBefore
 Feedback: Correct! CreatedBefore lets you specify a date.
 *C. MatchesStorageClass
 Feedback: Correct! MatchesStorageClass is required to look for objects with a
 Standard storage type.
 D. IsLive
 Feedback: Incorrect. IsLive has to do with whether or not the object you are looking at
 is the latest version. It is not date-based.
 E. NumberofNewerVersions
 Feedback: Incorrect. NumberofNewerVersions is based on object versioning and you
 don’t specify a date.
 Where to look:
 https://cloud.google.com/storage/docs/lifecycle
 Content mapping:
  ●    Google Cloud Fundamentals: Core Infrastructure (ILT and On-demand)
        ○    M4 Storage in the Cloud
Summary:
Explanation/summary on the following slide.
Cloud Storage Lifecycle
Conditions (e.g., Age, CreatedBefore, CustomTimeBefore, ...) apply to objects. When the
conditions are met, actions (e.g., Delete, SetStorageClass, ...) are applied to those
objects.
Examples:
  ●   Downgrade the storage class of objects older than 365 days to Coldline
      storage.
  ●   Delete objects created before a certain date.
  ●   Keep only the 3 most recent versions of each object in a bucket with versioning
      enabled.
An object’s metadata has to match all of the conditions in a rule for the action to fire. If an
object matches the conditions of more than one rule, Delete takes precedence, followed
by the SetStorageClass action that switches to the storage class with the lowest price.
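A hedged sketch of the rule from the diagnostic question (the bucket name and cutoff date
are hypothetical): combine CreatedBefore with MatchesStorageClass to move Standard
objects created before a given date to Nearline.

# Hypothetical lifecycle configuration: Standard objects created before 2024-01-01 move to Nearline
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {
        "createdBefore": "2024-01-01",
        "matchesStorageClass": ["STANDARD"]
      }
    }
  ]
}
EOF

# Apply it to a hypothetical bucket
gcloud storage buckets update gs://my-product-images --lifecycle-file=lifecycle.json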
Courses Documentation
 Let’s take a moment to consider resources that can help you build your knowledge
 and skills in this area.
 The concepts in the diagnostic question we just reviewed are covered in this module
 and in this documentation. You’ll find this list in your workbook so you can take a note
 of what you want to include later when you build your study plan. Based on your
 experience with the diagnostic questions, you may want to include some or all of
 these.
 https://cloud.google.com/storage/docs/lifecycle
4.5 Managing networking resources
Tasks include:
 As an Associate Cloud Engineer, any app you deploy is going to have connectivity
 requirements. Google’s software-defined networking stack is based on the idea of a
 Virtual Private Cloud (VPC). VPCs group regional resources into internal IP address
 ranges called subnets. As you manage network resources, you might have to add
 subnets or expand a subnet so it can support more devices. Both the internal and
 external IP addresses assigned to virtual machines are ephemeral by default, meaning
 that as resources come and go your IP addresses might change. To get around this
 problem, you can reserve static IP addresses and attach them so they persist across
 individual resources.
 How do these tasks apply to Cymbal Superstore? The ecommerce app requires
 global external connectivity for users to access it. You can manage this through an
 ingress object in GKE. The ecommerce application middleware is going to need
 private, regional, internal access to a Spanner backend that stores order data.
 Cymbal’s supply chain app is going to need external regional connectivity with
 regional internal connectivity as well, only implemented with Google Cloud load
 balancers instead of GKE ingress objects.
 Question:
 Cymbal Superstore has a subnetwork called mysubnet with an IP range of
 10.1.2.0/24. You need to expand this subnet to include enough IP addresses for at
 most 2000 new users or devices. What should you do?
4.5 Diagnostic Question 09 Discussion
 Feedback:
 A. gcloud compute networks subnets expand-ip-range mysubnet --region
 us-central1 --prefix-length 20
 Feedback: Incorrect. A prefix length of 20 would expand the IP range to 4094, which is
 far too many for the scenario.
 Where to look:
 https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range
 https://cloud.google.com/vpc/docs/using-vpc#expand-subnet
Content mapping:
  ●   Architecting with Google Compute Engine (ILT)
        ○    M2 Virtual Networks
Summary:
Explanation/summary on the following slide.
Expand IP addresses in a subnet
To expand the current IP address range, reduce your subnet mask (for example, from /24
to /20).
Command syntax:
       gcloud compute networks subnets expand-ip-range SUBNET_NAME --region=REGION --prefix-length=PREFIX_LENGTH
For example, if the range is 10.0.128.0/24, you can supply a prefix length of 20 to reduce
your mask, which increases the number of available IP addresses.
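Applied to the diagnostic scenario (the region is assumed to be us-central1, as in the
feedback above, and the prefix length is whatever your address math requires), a run might
look like this:

       # Check the current range of the subnet
       gcloud compute networks subnets describe mysubnet \
           --region=us-central1 \
           --format="value(ipCidrRange)"

       # Expand it by supplying a shorter prefix length
       # (/24 = 256 total addresses, /21 = 2048, /20 = 4096, before Google Cloud's reserved addresses)
       gcloud compute networks subnets expand-ip-range mysubnet \
           --region=us-central1 \
           --prefix-length=21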
4.5 Managing networking resources
Courses Documentation
Let’s take a moment to consider resources that can help you build your knowledge
and skills in this area.
The concepts in the diagnostic question we just reviewed are covered in this module
and in this documentation. You’ll find this list in your workbook so you can take a note
of what you want to include later when you build your study plan. Based on your
experience with the diagnostic questions, you may want to include some or all of
these.
https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range
https://cloud.google.com/vpc/docs/using-vpc#expand-subnet
4.6 Monitoring and logging
Tasks include:
● Creating Cloud Monitoring alerts based on resource metrics
● Creating and ingesting Cloud Monitoring custom metrics (e.g., from applications or logs)
● Configuring log sinks to export logs to external systems (e.g., on-premises or BigQuery)
● Configuring logs routers
● Viewing and filtering logs in Cloud Logging
● Viewing specific log message details in Cloud Logging
● Using cloud diagnostics to research an application issue (e.g., viewing Cloud Trace data, using
  Cloud Debug to view an application point-in-time)
● Viewing Google Cloud Platform status
 For example, Cymbal Superstore’s supply chain app might require you to monitor
 CPU utilization for all the instances in your managed instance group.
 Cloud Monitoring allows you to build charts based on metrics you specify. You can
 also look at logs associated with resources from the dashboard. You could use this to
 monitor messages being posted to your Pub/Sub topic for the transportation
 management app.
 Custom metrics allow you to define metric descriptors for things you want to keep
 track of that aren’t included in the standard metrics. For example, in Cymbal
 Superstore’s ecommerce app you might want to track the number of requests going to
 sales and the number going to support.
 Cloud Logging allows you to log any timestamped data in logs you define and
 manage. There are a myriad of options for where you can save your logs and how to
 route them. There is also an interface provided to query your logs.
 Cloud Ops has several tools that will help you with debugging app performance
 issues. Cloud Trace, Cloud Debugger, and Cloud Profiler all help you figure out what
 might be causing latency and performance issues in your apps.
You explored these types of tasks in this question:
Question 10: Configure a Google Cloud Operations custom alert: specify conditions,
send optional notifications, and reference documentation.
4.6 Diagnostic Question 10 Discussion
 Question:
 Cymbal Superstore’s supply chain management system has been deployed and is
 working well. You are tasked with monitoring the system’s resources so you can react
 quickly to any problems. You want to ensure the CPU usage of each of your Compute
 Engine instances in us-central1 remains below 60%. You want an incident created if it
 exceeds this value for 5 minutes. You need to configure the proper alerting policy for
 this scenario. What should you do?
4.6 Diagnostic Question 10 Discussion
 Feedback:
 A. Choose resource type of VM instance and metric of CPU load, condition trigger if
 any time series violates, condition is below, threshold is .60, for 5 minutes.
 Feedback: Incorrect. CPU load is not a percentage, it is a number of processes.
 Where to look:
 https://cloud.google.com/monitoring/alerts/using-alerting-ui
 https://cloud.google.com/monitoring/alerts
Content mapping:
  ●   Architecting with Google Compute Engine (ILT)
        ○    M7 Resource Monitoring
  ●    Skill Badges
         ○     Perform Foundational Infrastructure Tasks in Google Cloud
               (https://www.cloudskillsboost.google/course_templates/637)
         ○     Set Up and Configure a Cloud Environment in Google Cloud
               (https://www.cloudskillsboost.google/course_templates/625)
Summary:
Explanation/summary on the following slide.
Cloud operations custom alerts
In Cloud Monitoring you implement alerts by defining alerting policies. An alerting policy
specifies what you want to be alerted on and how you want to be notified. The “what”
is made up of conditions that describe the state of a resource, or group of resources,
that should cause you to take action. The “how” is provided by notification channels,
where you specify who is notified when the condition of the alerting policy is met. Each
notification channel configures a different type of output, such as email, a Slack channel,
or a message posted to a Pub/Sub topic. You can also specify documentation you want
included in your notification.
Conditions are made up of a monitored resource, a metric for that resource, and the
threshold at which the condition is met. An alerting policy can have up to 6 conditions. In
an alerting policy with one condition, an incident is created when that condition is met.
For an alerting policy with multiple conditions, you specify how those conditions are
combined.
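As a hedged sketch of the scenario’s policy expressed as a configuration file (the display
names and filter are assumptions, and the gcloud alpha command group shown may vary
by SDK version), the condition below watches Compute Engine CPU utilization above 60%
for 5 minutes:

# Hypothetical alerting policy: CPU utilization > 60% for 5 minutes on Compute Engine instances
cat > cpu-policy.json <<'EOF'
{
  "displayName": "CPU utilization above 60%",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "VM CPU > 60% for 5 minutes",
      "conditionThreshold": {
        "filter": "resource.type = \"gce_instance\" AND metric.type = \"compute.googleapis.com/instance/cpu/utilization\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0.6,
        "duration": "300s"
      }
    }
  ]
}
EOF

gcloud alpha monitoring policies create --policy-from-file=cpu-policy.json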
4.6 Monitoring and logging
             Courses                    Skill Badges             Documentation
 Let’s take a moment to consider resources that can help you build your knowledge
 and skills in this area.
 The concepts in the diagnostic question we just reviewed are covered in this module
 and in this documentation. You’ll find this list in your workbook so you can take a note
 of what you want to include later when you build your study plan. Based on your
 experience with the diagnostic questions, you may want to include some or all of
 these.
 https://cloud.google.com/monitoring/alerts/using-alerting-ui
 https://cloud.google.com/monitoring/alerts