Kubernetes for Developers

Kubernetes (K8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications, built on the concept of containerization. It utilizes a master-worker node architecture, where the master node manages the cluster and the worker nodes execute workloads, with Pods being the smallest deployable units. MicroK8s is a lightweight version of Kubernetes for local development, requiring specific system resources and enabling users to manage containerized applications efficiently.


Kubernetes (K8s) and Containerization

What is Kubernetes?
Kubernetes is an open-source platform designed to automate the
deployment, scaling, and management of containerized applications.
Containerization
Kubernetes is an extension of containerization, which focuses on
encapsulating an application and its dependencies into a container.
Container Runtime Examples:
• Docker
• CRI-O
• containerd
Containers
A container is a standard unit of software that packages an application with everything
it needs to run: code, runtime, system tools, and libraries.
Example of a Virtual Machine (VM) setup, by contrast:
• VM Setup: Operating System, Java Runtime, Libraries, and Application.
Microservices
Microservices is an architectural style where applications are built as a
collection of small, independent modules (services).
Advantages of Microservices:
1. Modules are independent and can be managed separately.
2. Communication between modules is facilitated through APIs or
inter-process communication.
Why Kubernetes?
Kubernetes serves as a container orchestration platform that manages:
1. Failed (dying) containers, recovering them automatically.
2. Replication of containers to ensure high availability and scalability.
Minimum Requirements for Kubernetes Installation
• 8 GB RAM (minimum)
• 2 Nodes:
  o Master Node: Handles the control plane.
  o Worker Node: Executes the workloads.
Example Setup:
• Total resources: 24 GB RAM divided as follows:
  o 8 GB for VM 1
  o 8 GB for VM 2
  o 8 GB for VM 3
Minikube (Single-Node Kubernetes)
Minikube provides a single-node Kubernetes cluster for testing and
development purposes.
Minikube Requirements:
• 4 Cores
• 4 GB RAM
MicroK8s is the equivalent lightweight option used here on Windows Professional.
Steps to Install MicroK8s on Windows:
1. Check System Requirements:
  o Open Task Manager and verify your system specifications.
2. Install WSL (Windows Subsystem for Linux):
  o Open PowerShell as Administrator.
  o Run the command:
    wsl --install
3. Install MicroK8s:
  o Use WSL to proceed with the MicroK8s installation.
MicroK8s provides a lightweight Kubernetes for local development and
testing environments.
For Windows (Pro), see the website: https://microk8s.io/
• Open-source container orchestration tool
• Developed by Google
• Trend from monolith to microservices
• Increased usage of containers
• Features orchestration tools offer: High Availability, Scalability,
Disaster recovery
• Pod: (packaging of a container)
• Smallest unit of K8s
• Abstraction over the container
• Usually 1 application per Pod
• Each Pod gets its own IP address
• New IP address on re-creation
Storage can be on the local machine or remote.
• Master Node (the brain of Kubernetes) – contains one component
called etcd (it decides where a Pod is created)
• A Pod is created on a Worker Node; the placement is decided by the Master
Node (etcd)
• Worker machine in a K8s cluster
• Kubelet is one of the components (a SERVICE) that communicates from the
master node to the worker node to create Pods
• If you have replication (the application running in different places), kube-proxy
comes into the picture for communication
Service – manual (static) IP
Volume
etcd – master node
Kubelet
On Jan 24

Minikube
MicroK8s – single-node K8s cluster
Go to WSL:
PS C:\Windows\system32> wsl
To run a command as administrator (user "root"), use "sudo
<command>".
See "man sudo_root" for details.

jyothi@JYOTHISI:/mnt/c/Windows/system32$

If the prompt is $, prefix commands with sudo; if it is #, there is no need.

$ wsl -u root (switch to the root user)
# sudo snap install microk8s --classic (to download MicroK8s; sudo is needed at a $ prompt)
# microk8s start
# microk8s kubectl get nodes (it will show one node on which the master
and worker roles are combined)
# alias kubectl="microk8s kubectl"
# kubectl get nodes
# "get nodes" retrieves the node information
# kubectl get nodes -o wide
# microk8s enable dashboard (so we can see the cluster graphically)
# microk8s dashboard-proxy (the dashboard address with its port number is printed;
copy it completely, open the browser, click Advanced to proceed to the
page, then copy the token shown in the WSL command output
completely, paste the token into the browser and sign in there; it will
open. Check the IP address, select Nodes on the left side, and you can see a graphical
representation.)

We create a Deployment first.

The Deployment manages the ReplicaSet,
the ReplicaSet manages the Pods (replication),
and the Pods run the Containers.
#kubectl create deployment <nameofdeployment> --image=nginx
(deployment is created)
#kubectl get deployments
#kubectl get deployments -o wide (for more information)
# kubectl get pods (status will be running)
# kubectl get pods -o wide (can get Ip address)
#kubectl get replicaset
#kubectl get replicaset -o wide
#kubectl get pods
#kubectl describe pod <podname (it looks like an ID)>
#kubectl describe deployment <deployname>
# kubectl edit deployment <thub (deployment name)>
You can see the nginx image (it is "latest" by default).
Press i to insert, change the image tag to :1.26 (the version), press Esc, then type :wq! to save and quit.
#kubectl get deployment <thub (deployment name)>
If we create a Pod directly, editing it later is difficult; so instead we
create a Deployment, which creates the Pod, and then everything is easy to edit.
The ReplicaSet also adjusts itself according to the Deployment.
#kubectl get replicaset
(Change replicas to 2; keep the container image the same as before,
then save it.)
#kubectl get pods
The configuration change applies to the replication as well.
#kubectl get replicaset
#kubectl exec -it <podname> -- /bin/bash
# kubectl exec -it thub-94bb49d78-8kd8m -- /bin/bash
#kubectl edit
When you start working with multiple Pods and replicas, YAML files act as the
blueprint for the K8s configuration.
Every Pod has services; there are two kinds: 1. Internal 2. External.
The external service is exposed through an "Ingress".
deploy – file
service (internal Service) – file
ingress (external Service) – file
To deploy a sample application you would otherwise need roughly 10-15 commands;
instead a YAML file is recommended, since editing it changes the configuration
within seconds.
Without the YAML file, deployment-related changes are hard to make.
Go to the C drive.
#kubectl apply -f my-app-deplo.yaml
# kubectl get deployments
#kubectl apply -f <file.yaml>
#kubectl get service
Kubernetes assigns NodePort services random port numbers in the range 30000-32767.
<file-download> put in the C drive
/mnt/c# kubectl get service
The files differ for different environments.
For the Ingress in the MicroK8s environment there are three pieces: 1. Deployment 2. Service 3.
Ingress; the path is laptop – microk8s – pod – container.
#kubectl describe node (see the IP address)
Access the page with the IP address.
Copy the file and put your application in it.

KUBERNETES (K8s):
Kubernetes is an extension of containerization.
Kubernetes (often abbreviated as K8s) is an open-source platform
for managing and orchestrating containerized applications at scale. It
automates the deployment, scaling, and management of containers,
making it a powerful tool for modern cloud-native applications.
Container runtimes (run on the worker machines):
1. Docker
2. CRI-O
3. containerd
Kubernetes here uses "containerd" as its runtime.
Containers are an open-source technology.
The main reason to choose containers is flexibility.
Microservices:
The application is split into small, separate modules.
Modules are independent yet interconnected.
Instead of putting the whole web application into one container, it is
divided into modules and each module is stored in its own container: if one
module crashes, the other modules are not affected, and a single
container should not hold a large amount of the application, so it is stored as
modules.
Kubernetes is container orchestration/management.
When we deploy containers into Kubernetes, it manages the
containers; if a container dies, it is replaced.
Kubernetes needs a minimum of about 24 GB RAM for a full installation
(e.g., 8 GB per VM across three VMs, as in the earlier example).
Minikube – single-node Kubernetes.
Requirements for installing Minikube: 4 cores and 4 GB RAM
(check in your laptop's Task Manager) and Windows 11 Professional
(check in Settings > About).
Requirements for MicroK8s: 2 cores, 4 GB RAM, Windows Professional.
MicroK8s can also run via WSL on Windows 11 Home.
ARCHITECTURE OF KUBERNETES:
• Open-source container orchestration tool
• Developed by Google
• Helps you manage containerized applications in different deployment environments
• Trend from monolith (the entire website relies on a single codebase)
to microservices, with increased usage of containers.
• It also supports local and remote modes
Features:
• High Availability
• Scalability
• Load balancing and Disaster recovery
• Extensibility
Pod:
Kubernetes uses the Pod concept: the container is placed inside a Pod
(it's better to put 1 container in 1 pod). A minimal Pod manifest is sketched after this list.
• Smallest unit of K8s
• Abstraction over the container
• Usually 1 application per pod
• Each pod gets its own IP
• New IP address on re-creation
• The Pod IP is assigned automatically
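As a rough sketch of what such a Pod looks like in YAML (the names here are illustrative, not taken from the class files):

# Illustrative Pod manifest; apply with: kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx          # one application (container) per Pod
      image: nginx:1.26
      ports:
        - containerPort: 80

A Pod created this way still gets a new IP on re-creation, which is why a Service is placed in front of it.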
Service:
• A manual/static IP address in Kubernetes is provided by a Service.
• This manual IP address doesn't change when Pods are re-created.
• The Pod IP is automatic, while the Service IP is manual (stable); a sketch of a
NodePort Service follows.
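A minimal NodePort Service sketch (illustrative names; the nodePort value is just an example from the 30000-32767 range mentioned later):

# Illustrative NodePort Service giving Pods a stable entry point
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx             # sends traffic to Pods carrying this label
  ports:
    - port: 80             # the Service's own stable port
      targetPort: 80       # the container port inside the Pod
      nodePort: 31613      # optional; if omitted, one is picked from 30000-32767

The selector is what links the Service to the Pods, so the Service IP stays the same even when Pods are re-created.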
Volume:
It can be local or remote storage.
NODE:
A node can be a physical machine (bare metal) or a virtual machine
(e.g., AWS EC2, Google Compute Engine, or Azure VM).
Each node runs the essential services required to host and manage
containers.
Nodes communicate with the control plane to report their status and
receive instructions.
Node Types
Nodes in Kubernetes are classified based on their role:
a. Master Node
• Manages the Kubernetes cluster.
• Responsible for scheduling, orchestration, and state
management.
• Runs control plane components like API Server, Scheduler,
Controller Manager, and etcd.
• Typically, workloads (pods) are not run on the master node
unless explicitly allowed.
b. Worker Node
• Executes the actual workloads (containers in pods).
• Hosts applications and services that make up the Kubernetes-
managed system.
• Communicates with the master node to receive and run tasks.
Node Components:
1) Kubelet: handles communication between the master and worker node;
it is a service that ensures the containers in a pod are running according
to the specifications provided by the control plane.
2) Kube Proxy: implements network rules for communication
between workers (pods and services).
3) Container runtime: actually runs the containers (e.g., Docker, CRI-O, containerd).

In Kubernetes, Master Nodes and Worker Nodes are the two primary
roles in a cluster. Together, they form the architecture of Kubernetes
and ensure the efficient management of containerized workloads.
The master node is the brain of Kubernetes because etcd lives on the
master node.
Pods are created on a worker node.

Random (NodePort) port numbers range from 30000 to 32767.


Service (internal service)
Ingress (external service) – a manifest sketch appears below
Label, app name, service name
Laptop -> microk8s -> pod -> container
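Roughly, an Ingress that routes external traffic to the internal Service might look like this (the hostname and names are assumptions for illustration; the actual my-app-ingress.yaml from class is not reproduced here):

# Illustrative Ingress forwarding external traffic to the internal Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: myapp.local             # example hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service # the internal Service in front of the Pods
                port:
                  number: 80

In MicroK8s the ingress add-on usually has to be enabled first (microk8s enable ingress) for such a resource to be served.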

Directly creating Pods and then managing or editing them is difficult, so
we follow the steps below (an example Deployment follows this list):
• First, we have to create a Deployment
• It automatically creates a ReplicaSet, which creates the Pods
• The Pods run the containers
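An illustrative Deployment sketch (names are examples; it mirrors the kubectl create deployment ... --image=nginx command used later, plus the replicas and image-version edits):

# Illustrative Deployment: 2 replicas of a pinned nginx version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                  # the value changed via "kubectl edit" in these notes
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.26    # pinned version instead of "latest"
          ports:
            - containerPort: 80

The Deployment owns a ReplicaSet, and the ReplicaSet keeps two such Pods running.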

ETCD:
Etcd is a distributed, consistent, and highly available key-value store
used by Kubernetes to store all its cluster state and configuration
data. It is an essential part of the Kubernetes control plane, as it acts
as the single source of truth for the cluster.

STEPS:
Go to Windows -> type "PowerShell" -> right-click -> Run as
Administrator -> type the commands:
wsl --install
# wsl (takes you into Linux)
# sudo snap install microk8s --classic (enter the password of the laptop,
not the PIN)
// $ -> use sudo in front of any command in $ mode.
# wsl -u root
# microk8s start
# microk8s kubectl get nodes (status shows as ready)
// in the above command, "microk8s" is not part of Kubernetes itself; it is just a
wrapper layer used to run Kubernetes locally.
// we can also shorten the above command with this alias ->
# alias kubectl="microk8s kubectl"
// after setting this alias, from the next command onward we don't need to
type "microk8s" before the kubectl commands.
# kubectl get nodes
# kubectl get nodes -o wide // -o wide is only used with the get
command (it gives more information in the output)

# microk8s enable dashboard


# microk8s dashboard-proxy (copy the address shown in the output into
Edge)
(The website asks for a token; go back to the output where it
shows a token of 4-5 lines, copy it, and paste it into the website.)
(The website will open; then go to Nodes on the left side, where we
can see the statistics.)
(Don't press Ctrl+C after the dashboard-proxy command, or the website
will be closed; leave it open and open another PowerShell.)
# kubectl create deployment durga --image=nginx //deployment names must be lowercase
# kubectl get deployments
# kubectl get deployments -o wide
# kubectl get pods //status should show as Running (it
takes time and depends on the internet)
// command for delete (but don't use it now) -> kubectl delete
deployment durga
# kubectl get pods -o wide
# kubectl get replicaset -o wide
(Go to the website now, click on Replica Sets on the left side, and
it shows the durga deployment which we created before.)
# kubectl describe deployment durga
# kubectl describe pod <podname>
# kubectl describe deployment <deployname>
//editing the image version of the pod:
# kubectl edit deployment durga
// go to containers -> image -> append :1.26 to nginx so it looks like
nginx:1.26; to save -> :wq!
# kubectl get pods //version will be changed
# kubectl edit deployment durga
// now at replicas change number as 2
# kubectl get pods
# kubectl get replicaset //it shows as 2 replicas because we
changed before
# kubectl get pods
#kubectl exec -it <podname> -- /bin/bash
# kubectl exec -it jyothi-7766bb4988-v6299 -- /bin/bash //it will go into the pod
# env
// we will use yaml files for deployment for Kubernetes
configuration.
# exit //leave the pod
// YAML file shared by sir; I stored it in the location c/users/pallas
// so now I have to give commands to change to the location where the
YAML file is saved.
# cd /mnt/c/Users/pallas (the Windows path "C:\Users\jyoth\Downloads\my-app-
deplo.yaml" corresponds to /mnt/c/Users/jyoth/Downloads in WSL)
# ls
# ls -l my*
# kubectl apply -f <yaml file> //name of deployment yaml file I
saved as-> my-app-deplo.yaml
# kubectl get deployments
# kubectl get pods
# kubectl get service
# kubectl apply -f <yaml file name> //service yaml
file name was: my-app-service.yaml
# kubectl get service
# kubectl apply -f <yaml file name> //ingress yaml
file name was: my-app-ingress.yaml
# kubectl get ingress
# kubectl apply -f <yaml file name> //file name is
my-app.yaml
# kubectl get ingress
# kubectl describe node
//in the output, copy the internal IP address into Edge, and now type the
command:
# kubectl get service //in this output the port is shown like
80:31613/TCP; copy only the 31613
//now go to Edge where we already entered the IP address and append
the port, so it appears like
<ip-address>:31613
Now the Nginx website will be opened.

Shell command history from the session:
1 microk8s start
2 microk8s kubectl get nodes
3 alias kubectl="microk8s kubectl"
4 kubectl get nodes
5 kubectl get nodes -o wides
6 kubectl get nodes -o wide
7 microk8s enable dashboard
8 microk8s dashboard-proxy
9 microk8s start
10 micok8s kubectl get nodes
11 microk8s kubectl get nodes
12 alias kubectl="microk8s kubectl"
13 kubectl get nodes
14 kubectl get nodes -o wide
15 microk8s enable dashboard
16 microk8s dashboard-proxy
17 microk8s kubectl -n kube-system get deployments
18 microk8s kubectl -n kube-system describe deployment kubernetes-dashboard
19 microk8s kubectl -n kube-system get pods
20 microk8s dashboard-proxy
21 microk8s start
22 alias kubectl="microk8s kubectl"
23 kubectl get nodes
24 kubectl get all
25 kubectl get nodes -o wide
26 microk8s enable dashboard
27 microk8s dashboard-proxy
28 microk8s inspect
29 microk8s disable dashboard
30 microk8s enable dashboard
31 microk8s kubectl -n kube-system get pods
32 microk8s kubectl -n kube-system delete deployment kubernetes-dashboard
33 microk8s enable dashboard
34 microk8s kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp
35 microk8s stop
36 microk8s start
37 microk8s disable dashboard
38 microk8s kubectl delete ns kube-system
39 microk8s enable dashboard
40 clear
41 microk8s enable dashboard
42 microk8s dashboard-proxy
43 microk8s reset
44 snap remove microk8s
45 snap install microk8s --classic
46 microk8s start
47 microk8s kubectl get nodes
48 alias kubectl="microk8s kubectl"
49 kubectl get nodes
50 kubectl get nodes -o wide
51 microk8s enable dashboard
52 microk8s dashboard-proxy
53 microk8s enable dashboard
54 microk8s dashboard-proxy
55 microk8s start
56 microk8s kubectl get nodes
57 kubectl get nodes -o wide
58 microk8s kubectl get nodes -o wide
59 alias kubectl="microk8s kubectl"
60 kubectl get nodes
61 microk8s enable dashboard
62 microk8s dashboard-proxy
63 kubectl create deployement thub
64 kubectl create deployment thub
65 kubectl create deployment thub image=nginx
66 kubectl create deployment thub --image=nginx
67 kubectl get deployments
68 kubectl get deployments -o wide
69 kubectl get pods
70 kubectl get pods -o wide
71 kubectl get replicaset
72 kubectl get replicaset -o wide
73 kubectl get pods
74 kubectl describe pod
75 kubectl descirbe pod thub-94bb49d78-8kd8m
76 kubectl describe pod thub-94bb49d78-8kd8m
78 kubectl describe deployment thub
79 systemctl kubectl create
80 kubectl version --client
81 kubectl create
82 systemctl kubectl create
83 kubectl edit deployment
84 kubectl get deployment thub
85 kubectl get replicaset
86 kubectl get pods
87 kubectl exec -it thub-94bb49d78-8kd8m --it -- /bin/bash
88 kubectl exec -it thub-94bb49d78-8kd8m -- it -- /bin/bash
89 kubectl exec -it thub-94bb49d78-8kd8m /bin/bash
90 kubectl exec -it thub-94bb49d78-8kd8m -- /bin/bash
91 kubectl edit deployment my-app
92 kubectl apply -f my-app-deplo.yaml
93 kubectl edir
94 kubectl edit
95 ls
96 systemctl kubectl create
97 cat my-app-deplo.yaml
98 kubectl get pods
99 kubectl edit deployment thub
100 clear
101 kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml
102 kubectl get pods
103 kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
104 kubectl get service
105 kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml
106 kubectl get pods
107 kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml
108 kubectl get servic
109 kubectl get service
110 kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
111 kubectl get pods -l app-guestbook -l tier-frontend
112 kubectl get pods -l app-guestbook -l tier=frontend
113 kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
114 kubectl get service
115 kubectl port-forward svc/frontend 8080:80
116 kubectl get pods
117 kubectl port-forward svc/frontend 8080:80
118 kubectl get pods
119 kubectl port-forward svc/frontend 8080:80
120 microk8s start
121 microk8s kubectl get nodes
122 alias kubectl="microk8s kubectl"
123 kubectl get nodes
124 kubectl get nodes -o wide
125 microk8s enable dashboard
126 microk8s dashboard-proxy
127 kubectl create deployment jyothi --image=nginx
128 kubectl get deployment
129 kubectl get deployment -o wide
130 kubectl get pods
131 kubectl get pods -o wide
132 kubectl get replicaset -o wide
133 kubectl describe deployment jyothi
134 kubectl get pod
135 kubectl describe pod jyothi-7766bb4988-v6299
136 kubectl get deployments
137 kubectl describe deployment jyothi
138 kubectl edit deployment jyothi
139 nano /tmp/kubectl-edit-2349766263.yaml
140 kubectl get pods
141 nano /tmp/kubectl-edit-2349766263.yaml
142 kubectl get pods
143 kubectl get replicaset
144 kubectl get pods
145 kubectl exec -it jyothi-7766bb4988-v6299 --/bin/bash
146 kubectl exec -it jyothi-7766bb4988-v6299 -- /bin/bash
147 ls
148 ld -l my*
149 cd /mnt/c/Users/jyoth/Downloads
150 ls
151 kubectl apply -f my-app-deplo.yaml
152 kubectl apply -f /mnt/c/Users/jyoth/Downloads/my-app-deplo.yaml
153 kubectl get deployments
154 kubectl describe deployment my-app
155 kubectl get pods
156 kubectl get service
157 kubectl apply -f my-app-service.yaml
158 kubectl get service
159 kubectl apply -f my-app-ingress.yaml
160 kubectl get ingress
161 kubectl apply -f my-app.yaml
162 kubectl get ingress
163 kubectl describe node
164 kubectl get service
165 kubectl get pods
166 kubectl describe pod jyothi-7766bb4988-v6299
167 kubectl get deployment
168 kubectl get deployment s
169 kubectl get deployments
170 kubectl describe deployment jyothi
171 kubectl get services
172 kubectl describe service my-app-service
173 kubectl describe node
174 kubectl get pods
175 kubectl get service

# MicroK8s Nginx Deployment Guide

Today, I successfully deployed an Nginx page using Kubernetes on
MicroK8s with the following commands. This experience allowed
me to deepen my understanding of Kubernetes deployment, scaling,
and using YAML files for configuration. Below are the detailed steps
I followed to set up and manage the deployment.

---

## Prerequisites

- **Windows 10/11** with WSL installed
- **MicroK8s** installed
- **Kubernetes** knowledge (basic)
- **kubectl** installed
---

## Steps

### Step 1: Install WSL

1. Go to Windows, search for **PowerShell**.
2. Right-click on PowerShell and select **Run as Administrator**.
3. In PowerShell, type the following command:

```bash
wsl --install
```

4. Once installed, launch WSL (which brings you into the Linux shell).
5. Type the following command to install MicroK8s:

   sudo snap install microk8s --classic

   (Enter the password of your laptop, not the PIN)

### Step 2: Set Up MicroK8s


1. Start MicroK8s:
microk8s start
2. Check node status:
microk8s kubectl get nodes
   - The status should show "Ready".
3. Alias kubectl:
Use the following alias to simplify commands:
alias kubectl="microk8s kubectl"
   - Now, you can use kubectl directly instead of microk8s kubectl.
4. Verify node details:
kubectl get nodes
5. To see more node details:
kubectl get nodes -o wide

### Step 3: Enable Kubernetes Dashboard


1. Enable the dashboard:
microk8s enable dashboard
2. Start the dashboard proxy:
microk8s dashboard-proxy
   - Copy the IP address shown in the output and paste it into the
     browser (Edge or Chrome).
   - The website will ask for a token; copy the token from the output and
     paste it into the website.

### Step 4: Deploy Nginx Application


1. Create a deployment with the Nginx image:
kubectl create deployment jyothi --image=nginx
2. Check the deployment status:
kubectl get deployments
3. To get more detailed information:
kubectl get deployments -o wide
4. Get pod status (it may take some time for the pods to start):
kubectl get pods
   - The status should be "Running" (depending on your internet speed).
5. If needed, you can delete the deployment (don't use this now):
kubectl delete deployment jyothi

### Step 5: Inspect and Edit the Deployment


1. Check the status of pods:
kubectl get pods -o wide
2. Check the ReplicaSets:
kubectl get replicaset -o wide
   - Go to the dashboard and click on ReplicaSets on the left side to see
     the details of the jyothi ReplicaSet.
3. Describe the deployment:
kubectl describe deployment jyothi
4. Describe a specific pod:
kubectl describe pod <pod-name>
5. Describe the deployment:
kubectl describe deployment <deploy-name>
### Step 6: Update Deployment Version
1. Edit the deployment to change the image version:
kubectl edit deployment jyothi
   - In the editor, find the nginx image and change it to nginx:1.26 (see the spec fragment after this step).
   - Save and exit (:wq).
2. Check the pods again:
kubectl get pods
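For reference, the part of the Deployment spec being edited corresponds roughly to this fragment (a sketch, not the exact manifest on the cluster):

```yaml
# Fragment of the Deployment spec as seen in "kubectl edit" (illustrative)
spec:
  template:
    spec:
      containers:
        - name: nginx
          image: nginx:1.26   # changed from "nginx" (latest) to a pinned version
```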

### Step 7: Scale the Deployment


1. Edit the deployment again to change the replica count:
kubectl edit deployment jyothi
   - Change the replica count to 2 (see the spec fragment after this step).
   - Save and exit.
2. Check the status of the pods and replicaset:
kubectl get pods
kubectl get replicaset
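The scaling change corresponds roughly to this field in the Deployment spec (again just a sketch):

```yaml
# Fragment of the Deployment spec controlling scaling (illustrative)
spec:
  replicas: 2   # raised from 1 to 2; the ReplicaSet then keeps two Pods running
```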

### Step 8: Interact with the Pod


1. Exec into the pod:
kubectl exec -it <pod-name> -- /bin/bash
   - Example:
kubectl exec -it jyothi-7766bb4988-v6299 -- /bin/bash
2. To view environment variables:
env
3. To exit the pod:
exit

### Step 9: Use YAML Files for Configuration


1. Change directory to the location where the YAML file is stored:
cd /mnt/c/Users/pallas
2. List the files:
ls
3. Apply the deployment configuration (an illustrative sketch of these YAML files appears at the end of this step):
kubectl apply -f my-app-deplo.yaml
4. Check the deployments and pods:
kubectl get deployments
kubectl get pods
5. Apply the service configuration:
kubectl apply -f my-app-service.yaml
6. Check the services:
kubectl get service
7. Apply the ingress configuration:
kubectl apply -f my-app-ingress.yaml
8. Check the ingress:
kubectl get ingress
9. Apply the final configuration:
kubectl apply -f my-app.yaml
10. Check the ingress again:
kubectl get ingress
11. Describe the node:
kubectl describe node
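The class YAML files themselves are not reproduced here, but a minimal my-app-deplo.yaml plus my-app-service.yaml pair might look roughly like this (all labels and ports are assumptions for illustration):

```yaml
# Illustrative sketch only; the actual my-app-*.yaml files were provided separately
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.26
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80   # the nodePort is auto-assigned from 30000-32767 (e.g. 31613)
```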

### Step 10: Expose the Application


1. Copy the internal IP address from the node description.
2. Go to the browser and paste the IP address.
3. Open a new PowerShell window and run:
kubectl get service
4. Find the external port (e.g., 80:31613/TCP) and copy the port
number (31613).
5. In the browser, enter the IP address followed by the port number:
<ip-address>:31613
   - This will open the Nginx website.

## Conclusion
This guide showcases my recent accomplishment in deploying
Nginx using MicroK8s and Kubernetes. By following the above
steps, I was able to deploy, scale, and manage the Nginx application
while also utilizing various Kubernetes commands and YAML files
for configuration management.