Week 13 Lecture

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications, ensuring high availability and efficient resource utilization. The architecture consists of master node components like the API Server and Scheduler, and worker node components such as Kubelet and Container Runtime. The document also outlines steps to set up a Kubernetes cluster using Minikube and explains key Kubernetes objects like Pods, Deployments, and Services.


Container Orchestration with Kubernetes

2. Introduction to Kubernetes
What is Kubernetes?

• Open-source container orchestration platform developed by Google.
• Automates deployment, scaling, and management of containerized applications.
• Used for high availability, fault tolerance, and load balancing.

Why Use Kubernetes?

• Manages multiple containers across a cluster.
• Handles failures automatically (self-healing).
• Load balancing and auto-scaling for performance optimization.
• Efficient resource utilization compared to manually managing Docker containers.

3. Kubernetes Architecture
1. Master Node Components

• API Server – Entry point for Kubernetes commands.
• Scheduler – Assigns workloads to worker nodes.
• Controller Manager – Handles replicas, scaling, and updates.
• etcd – Stores cluster configuration data.

2. Worker Node Components

• Kubelet – Communicates with the master node and manages the Pods on its worker node.
• Container Runtime – Runs the containers themselves (e.g., Docker or containerd).
• Kube Proxy – Manages network connectivity inside the cluster.
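
These components can be inspected on a running cluster. The commands below list the nodes and the control-plane Pods; on Minikube the master and worker components share a single node, and the exact Pod names (e.g. kube-apiserver-minikube) depend on the Minikube version.

# List nodes and their roles
kubectl get nodes -o wide

# API Server, Scheduler, Controller Manager, etcd and kube-proxy run as Pods
# in the kube-system namespace
kubectl get pods -n kube-system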

4. Setting Up a Kubernetes Cluster


Step 1: Install Minikube (Single-Node Kubernetes)

Install Minikube:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

Start Minikube:

minikube start

Verify Installation:

kubectl version --client
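
kubectl version --client only checks the client binary. As an extra sanity check that the cluster itself is up and reachable:

# Cluster and connectivity checks
minikube status
kubectl cluster-info
kubectl get nodes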

Step 2: Deploy a Sample Application

Create a Deployment:

kubectl create deployment my-app --image=nginx
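
To confirm that the Deployment and its Pod are running, list the new objects; kubectl create deployment labels the Pods app=my-app by default, which the selector below relies on:

# The Deployment and its ready replicas
kubectl get deployments

# The Pods created by the Deployment
kubectl get pods -l app=my-app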

Expose the Deployment:

kubectl expose deployment my-app --type=NodePort --port=80
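
Kubernetes assigns the NodePort automatically (by default from the 30000–32767 range); it can be read back from the Service object:

# Shows the ClusterIP and the assigned NodePort, e.g. 80:3XXXX/TCP
kubectl get service my-app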

Get the Service URL:

minikube service my-app --url

Access the Application:

curl $(minikube service my-app --url)

5. Key Kubernetes Objects


1. Pods

The smallest deployable unit in Kubernetes.
Can run one or more containers inside it.

List all running pods:

kubectl get pods
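
Two further commands are useful when debugging Pods; replace <pod-name> with a name from the kubectl get pods output:

# Detailed Pod information: containers, node, IP, recent events
kubectl describe pod <pod-name>

# Logs of the container(s) running inside the Pod
kubectl logs <pod-name>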

2. Deployments
Ensure high availability by maintaining multiple replicas of an application.
Used for rolling updates and rollbacks to previous versions.

Scale a deployment:

kubectl scale deployment my-app --replicas=3
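
The rolling updates mentioned above can be driven directly from kubectl. The commands below assume the container in the Deployment is named nginx (the default when the Deployment was created from the nginx image) and use nginx:1.25 purely as an example tag:

# Roll out a new image version without downtime
kubectl set image deployment/my-app nginx=nginx:1.25

# Watch the rollout and inspect its history
kubectl rollout status deployment/my-app
kubectl rollout history deployment/my-app

# Roll back to the previous version if needed
kubectl rollout undo deployment/my-app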

3. Services

Exposes a set of Pods so that other Pods and external users can reach them.
Types: ClusterIP (internal), NodePort (external), LoadBalancer (cloud-based).

List all services:

kubectl get services
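
A Service finds its Pods through labels; the Endpoints object shows which Pod IPs are currently behind it. The second command is an optional illustration of a ClusterIP Service for the same Deployment; the my-app-internal name is just an example.

# Pod IPs currently behind the my-app Service
kubectl get endpoints my-app

# Optional: an internal-only (ClusterIP) Service for the same Deployment
kubectl expose deployment my-app --name=my-app-internal --port=80 --type=ClusterIP
kubectl get services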

6. Recap & Discussion


What did we learn today?

• Introduction to Kubernetes and its architecture.
• Setting up Minikube and running applications.
• Understanding Pods, Deployments, and Services.
