Kubernetes Tutorial
Dr. Muhammad Imran
Agenda
• Historical Context
• Kubernetes
• Overview of Basic Components of Kubernetes
• Kubernetes Tutorial in 6 Steps
1. Cluster Creation
2. App Deployment
3. App exploration
4. Expose applications
5. Scale applications
6. Update applications
Historical Context
Kubernetes?
• What is Kubernetes?
• Open-source container orchestration platform
• Automates deployment, scaling, and management of containerized applications
• Key Features:
• Container Orchestration: Simplifies container management
• Scaling & Load Balancing: Automatic scaling based on demand
• Service Discovery: DNS-based service discovery mechanism
• Self-Healing: Restarts or replaces failed containers
• Declarative Configuration: Describes the desired state using YAML/JSON (see the example after this list)
• Benefits:
• Improved Scalability: Seamlessly scale applications
• Increased Reliability: Self-healing capabilities ensure high availability
• Portability: Works across various environments (cloud, on-premises)
• Use Cases:
• Microservices Architecture
• Continuous Integration/Continuous Deployment (CI/CD)
• Hybrid/Multi-cloud Deployments
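To make the declarative-configuration idea concrete, here is a minimal sketch of a Deployment described as desired state and applied in one step (the name bootcamp-declarative is illustrative and not from the slides; the image is the one used later in this tutorial):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bootcamp-declarative            # illustrative name, not from the slides
spec:
  replicas: 2                           # desired state: two Pods at all times
  selector:
    matchLabels:
      app: bootcamp-declarative
  template:
    metadata:
      labels:
        app: bootcamp-declarative
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080
EOF

Kubernetes continuously reconciles the cluster toward this declared state; editing the manifest and re-running kubectl apply updates the Deployment in place.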
Kubernetes Components
Let’s start with an analogy…
A Cargo Ship…
Carries containers across the sea
A Cargo Ship…
Hosts applications as containers ~ Worker Nodes
Control Ships…
Manage and monitor the cargo ships
Control Ships…
Manage, Plan, Schedule, Monitor ~ Master
Let’s talk about Master Components…
Ship Cranes
Identify the placement of containers
Ship Cranes
Identify the right node to place a container ~ Kube-Scheduler
Cargo Ship Profiles
HA database ~ Which containers are on which ships? When were they loaded?
Cargo Ship Profiles
HA database ~ Which containers are on which ships? When were they loaded? ~ The ETCD Cluster
Offices in Dock
- Operations Team Office ~ Ship handling and control
- Cargo Team Office ~ Verifies whether containers are damaged and ensures that new containers are rebuilt
- IT & Communication Office ~ Communication between the various ships
Controllers
- Node Controller – Takes care of Nodes | Responsible for onboarding new nodes to the cluster | Monitors the availability of Nodes
- Replication Controller – Ensures that the desired number of containers is running at all times
- Controller Manager – Manages all of these controllers
How do all of these components communicate with each other?
Kube API Server
- The primary management component of Kubernetes
- Responsible for orchestrating all operations within the cluster
- Exposes the Kubernetes API, which is used by external users to perform management operations on the cluster and by the various controllers to monitor the state of the cluster
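Every kubectl command in this tutorial goes through the Kube API Server. A quick way to see it answering requests directly (the exact output depends on your cluster):

kubectl cluster-info          # prints the address of the control plane / API server
kubectl get --raw /version    # raw GET against the API server's /version endpoint
kubectl get --raw /healthz    # health endpoint; should simply return "ok"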
Let’s talk about Worker Components…
Captain of the Ship
- Manages all sorts of activities on the ship
- Lets the master ship know it is interested in joining
- Sends reports back to the master about the status of the ship
- Sends reports about the status of the containers
Captain of the Ship ~ Kubelet
An agent which runs on each node of the cluster
Communication between Cargo Ships
How do two cargo ships communicate with each other?
Kube-proxy Service
How will a web server running on one worker node reach a DB server on another worker node?
Communication between worker nodes ~ Kube-proxy
Overview
[Architecture diagram: the Internet, a Master node running the API Server, Scheduler, Controller Manager, and ETCD, and Worker Node-1 running the Kubelet and Kube-proxy.]
Let’s Deep Dive into Pods…
A Pod is the atomic unit of scheduling in Kubernetes, just as the VM is in virtualization and the container is in Docker.
How are Pods deployed?
[Diagram: the API Server and Scheduler on the Master place a Pod, wrapping a container, onto a node in the cluster.]
Scaling the Pods to accommodate increasing traffic
[Diagram: the Scheduler places additional Pods onto the Worker Node as traffic increases.]
What if node resources become insufficient?
[Diagram, built up over three slides: when Worker-1 runs low on resources, the Scheduler places new Pods onto Worker-2 in the same cluster.]
Two containers in the same Pod
[Diagram: a single Pod on a worker node running two containers side by side.]
Pod Networking
[Diagram: Pod 1 (10.0.30.50) runs a main container on :8080 and a supporting container on :3000; Pod 2 (10.0.30.60) runs a supporting container on :7777.]
How do the containers inside Pods communicate with the external world?
[Diagram: each Pod gets its own network namespace and IP address; the containers in Pod 1 are reached from outside at <PodIP>:<containerPort>, e.g. 10.0.30.50:8080 and 10.0.30.50:3000.]
How does one Pod talk to another Pod?
Welcome to Inter-Pod Communication…
[Diagram: Pod 1 (10.0.30.50) and Pod 2 (10.0.30.60) reach each other over the cluster-wide Pod Network using their Pod IP addresses.]
How does Intra-Pod communication take place?
[Diagram: the main container (:8080) and the supporting container (:3000) inside Pod 1 (10.0.30.50) share the Pod’s network namespace and reach each other over localhost.]
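A minimal sketch of intra-Pod communication over localhost, assuming illustrative names and image tags that are not part of the slides: a sidecar container polls the main web container through the Pod's shared network namespace.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: intra-pod-demo                  # illustrative name
spec:
  containers:
  - name: web                           # main container listening on :80
    image: nginx:1.25                   # assumed image tag
    ports:
    - containerPort: 80
  - name: sidecar                       # supporting container in the same Pod
    image: busybox:1.36                 # assumed image tag
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null && echo reached web via localhost; sleep 5; done"]
EOF

# The sidecar's log shows it reaching the web container without ever using the Pod IP
kubectl logs intra-pod-demo -c sidecar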
Kubernetes Basic Modules in 6 Steps
1. Create Kubernetes Cluster
• The abstractions in Kubernetes allow you to deploy containerised
applications to a cluster without tying them specifically to individual
machines.
• Kubernetes automates the distribution and scheduling of
application containers across a cluster in a more efficient way.
• Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node.
• The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.
Minikube
• minikube is local Kubernetes, focusing on making it easy to learn and develop for
Kubernetes.
• All you need is a Docker (or similarly compatible) container or a virtual machine environment, and Kubernetes is a single command away.
• Installation on Linux
• curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
• sudo install minikube-linux-amd64 /usr/local/bin/minikube
• Start Cluster
• minikube start
• minikube start --nodes=2 (for multiple nodes)
• Dashboard
• minikube dashboard
• Minikube docs
• https://minikube.sigs.k8s.io/docs/start/
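Once minikube start has finished, a quick sanity check that the cluster is really up (node names and versions will differ on your machine):

minikube status       # host, kubelet and apiserver should all report Running
kubectl get nodes     # the single minikube node should be Ready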
Kubectl
• The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters.
• You can use kubectl to deploy applications, inspect and manage cluster
resources, and view logs.
• Installation on Linux
• curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
• sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
• kubectl version --client
• Installation Instructions
• https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
2. Deploy an App
Once the Kubernetes cluster is running, we can deploy containerised apps on top of it.
To do so, we create a Kubernetes Deployment configuration.
The Deployment instructs Kubernetes how
to create and update instances of your
application.
Once you've created a Deployment, the
Kubernetes control plane schedules the
application instances included in that
Deployment to run on individual Nodes in
the cluster.
Source: https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro
2. Deploy an App
• Let’s deploy our first app on Kubernetes with the kubectl
create deployment command. We need to provide the
deployment name and app image location (include the full
repository url for images hosted outside Docker Hub).
• kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
• This performed a few things for us:
• searched for a suitable node where an instance of the application could be run (we have only one available node)
• scheduled the application to run on that Node
• configured the cluster to reschedule the instance on a new Node when needed
2. Deploy an App
• List deployments
• kubectl get deployments
• Pods that are running inside Kubernetes are running on a private,
isolated network.
• By default they are visible from other pods and services within the same Kubernetes cluster, but not outside that network.
• The kubectl command can create a proxy that will forward
communications into the cluster-wide, private network.
• Open a second terminal and run
• kubectl proxy
• Access
• curl http://localhost:8001/version
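With kubectl proxy running in the second terminal, individual Pods can also be reached through the API. A short sketch, assuming a single Pod in the default namespace whose name is captured into POD_NAME:

export POD_NAME="$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')"
echo "Name of the Pod: $POD_NAME"
# View the Pod object through the proxy endpoint
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/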
Kubernetes Basics: Pods
A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker), and some shared resources for those containers. Those resources include:
• Shared storage, as Volumes
• Networking, as a unique cluster IP address
• Information about how to run each container, such as the container image version or specific ports to use
Kubernetes Basics: Nodes
• A Pod always runs on a Node. A Node is a
worker machine in Kubernetes and may be
either a virtual or a physical machine,
depending on the cluster. Each Node is
managed by the control plane.
• A Node can have multiple pods, and the
Kubernetes control plane automatically
handles scheduling the pods across the
Nodes in the cluster. The control plane's
automatic scheduling takes into account the
available resources on each Node.
• Every Node runs at least:
• a Kubelet, a process responsible for communication between the Kubernetes control plane and the Node; it manages the Pods and the containers running on a machine.
• a container runtime (like Docker) responsible for pulling the container image from a registry, unpacking the container, and running the application.
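To see what the control plane knows about a Node, including its kubelet version, capacity, and the Pods it is running (the node name minikube assumes the single-node cluster from Step 1):

kubectl get nodes -o wide        # roles, versions, internal IPs
kubectl describe node minikube   # capacity, allocated resources, running Pods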
3. Explore App
• kubectl get
• list resources
• kubectl describe
• show detailed information about a resource
• kubectl logs
• print the logs from a container in a pod
• kubectl exec
• execute a command on a container in a pod
• Get pods list
• kubectl get pods
• Describe pods
• kubectl describe pods <podname>
• Login inside the pod
• kubectl exec -it <podname> -- bash
• See logs of the pod
• kubectl logs <podname>
Note: anything that the application would normally send to STDOUT becomes the logs for the container within the Pod.
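A short worked sequence using these commands, assuming the Pod name has been captured into the POD_NAME variable as in the proxy step:

kubectl describe pods "$POD_NAME"      # node, IP, container image, recent events
kubectl logs "$POD_NAME"               # whatever the app wrote to STDOUT
kubectl exec "$POD_NAME" -- env        # run a single command inside the container
kubectl exec -it "$POD_NAME" -- bash   # open an interactive shell in the container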
Kubernetes Basics: ReplicaSet
• When a worker node dies, the Pods running on the Node are
also lost.
• A ReplicaSet might then dynamically drive the cluster back to
the desired state via the creation of new Pods to keep your
application running.
• Example
• consider an image-processing backend with 3 replicas.
• Those replicas are exchangeable;
• the front-end system should not care about backend replicas or even if a
Pod is lost and recreated.
A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time.
It is often used to guarantee the availability of a specified number of identical Pods.
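The Deployment created earlier manages its Pods through a ReplicaSet, which can be inspected directly (the ReplicaSet name ends in a generated hash, so yours will differ):

kubectl get replicasets        # DESIRED vs CURRENT vs READY replica counts
kubectl describe replicasets   # selector, Pod template, and recent events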
4. Expose App
• A Service in Kubernetes is an abstraction which defines a
logical set of Pods and a policy by which to access them.
• Services enable a loose coupling between dependent Pods.
• The set of Pods targeted by a Service is usually determined
by a label selector
• Although each Pod has a unique IP address, those IPs are not
exposed outside the cluster without a Service.
• Services allow your applications to receive traffic.
4. Expose App
• Services can be exposed in different ways by specifying
a type in the spec of the Service:
• ClusterIP (default) - Exposes the Service on an internal IP in
the cluster.
This type makes the Service only reachable from within the cluster.
• NodePort - Exposes the Service on the same port of each
selected Node in the cluster using NAT.
Makes a Service accessible from outside the cluster
using <NodeIP>:<NodePort>. Superset of ClusterIP.
• LoadBalancer - Creates an external load balancer in the
current cloud (if supported) and assigns a fixed, external IP
to the Service.
Superset of NodePort.
4. Expose App
• Services match a set of Pods using labels and selectors, a grouping primitive that allows logical operations on objects in Kubernetes.
• Labels are key/value pairs
attached to objects.
• Get services
• kubectl get services
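kubectl create deployment labels the Pods it creates with app=<deployment name>, and kubectl expose reuses that label as the Service's selector. A short sketch of querying and adding labels (version=v1 is purely illustrative):

# List the Pods and the Service that carry the Deployment's label
kubectl get pods -l app=kubernetes-bootcamp
kubectl get services -l app=kubernetes-bootcamp

# Attach an extra label to a Pod and query by it
kubectl label pods "$POD_NAME" version=v1   # POD_NAME as captured in Step 3
kubectl get pods -l version=v1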
4. Expose App
• Expose deployment
• kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
• Get services list
• kubectl get services
• Describe service
• kubectl describe services/kubernetes-bootcamp
• NODE_PORT environment variable
• export NODE_PORT="$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')"
• echo "NODE_PORT=$NODE_PORT"
• Access application using service.
• curl http://"$(minikube ip):$NODE_PORT"
5. Scale App
• Until now… the Deployment created only one Pod for running our
application.
• However, when traffic increases, we will need to scale the application to
keep up with user demand.
• Scaling is accomplished by changing the number of replicas in a
Deployment.
• Scaling out a Deployment will ensure new Pods are created and
scheduled to Nodes with available resources
• Scale up deployment
• kubectl scale deployments/kubernetes-bootcamp --replicas=4
• Scale down deployment
• kubectl scale deployments/kubernetes-bootcamp --replicas=2
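After scaling, the change can be confirmed; the -o wide output also shows which node and IP each Pod ended up with (values will vary):

kubectl get deployments                            # READY should report 4/4, then 2/2
kubectl get pods -o wide                           # one line per Pod, with node and IP
kubectl describe deployments/kubernetes-bootcamp   # events record the scaling operations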
6. Update App
• Users expect applications to be available all the time, and developers are
expected to deploy new versions of them several times a day.
• In Kubernetes this is done with rolling updates.
• A rolling update allows a Deployment update to take place with zero
downtime.
• It does this by incrementally replacing the current Pods with new ones.
• The new Pods are scheduled on Nodes with available resources, and
Kubernetes waits for those new Pods to start before removing the old Pods.
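The pace of a rolling update is controlled by the Deployment's update strategy. A hedged sketch, with illustrative maxSurge/maxUnavailable values that are not part of the original tutorial, allowing one extra Pod during the update while keeping every desired replica available:

# Allow at most one surge Pod and zero unavailable Pods during a rollout
kubectl patch deployment kubernetes-bootcamp --type merge \
  -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'

# Inspect the resulting strategy (output wording may vary by kubectl version)
kubectl describe deployments/kubernetes-bootcamp | grep -i strategy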
6. Update App
• Update app using new image with different tag
• kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
• List pods
• kubectl get pods
• Verify update
• kubectl rollout status deployments/kubernetes-bootcamp
• NODE_PORT environment variable
• export NODE_PORT="$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')"
• echo "NODE_PORT=$NODE_PORT"
• Access application using service.
• curl http://"$(minikube ip):$NODE_PORT"
6. Update App
• Roll back an update. First, try to deploy an image that does not exist.
• kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=gcr.io/google-samples/kubernetes-bootcamp:v10
• List pods
• Describe pods
• Rollback update
• kubectl rollout undo deployments/kubernetes-bootcamp
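After the failed v10 rollout, kubectl get pods typically shows the new Pods stuck in an image-pull error; after the undo, the revision history and Pod list confirm the Deployment is back on a working image (revision numbers will vary):

kubectl rollout history deployments/kubernetes-bootcamp   # list of recorded revisions
kubectl get pods                                          # all Pods Running again on v2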
Questions?