
Commands

The document records cloning a repository of Kubernetes manifests, applying a Deployment configuration, and scaling the Deployment with kubectl. It also captures the initialization of a Kubernetes control plane with kubeadm, including the warnings and configuration steps encountered during setup. Kubernetes and Docker version information is included, along with the status of the Deployment and the Pods it created.


[node2 k8s]$ git clone https://github.com/chhavijerath/k8s
Cloning into 'k8s'...
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 11 (delta 2), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (11/11), done.
[node2 k8s]$ ls -al
total 4
drwxr-xr-x 4 root root 45 Apr 3 17:04 .
drwxr-xr-x 4 root root 45 Apr 3 17:03 ..
drwxr-xr-x 8 root root 163 Apr 3 17:03 .git
drwxr-xr-x 3 root root 34 Apr 3 17:04 k8s
-rw-r--r-- 1 root root 681 Apr 3 17:03 pod1.yml
[node2 k8s]$ cd k8s
[node2 k8s]$ ls -al
total 4
drwxr-xr-x 3 root root 34 Apr 3 17:04 .
drwxr-xr-x 4 root root 45 Apr 3 17:04 ..
drwxr-xr-x 8 root root 163 Apr 3 17:04 .git
-rw-r--r-- 1 root root 514 Apr 3 17:04 pod2.yml
[node2 k8s]$ kubectl apply -f pod2.yml
deployment.apps/hello-deploy created
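
The contents of pod2.yml are not shown in the transcript. A minimal Deployment sketch consistent with the output that follows (name hello-deploy, 10 desired replicas) might look like the following; the selector labels, container name, image, and port are assumptions added purely for illustration.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy                 # matches the Deployment created above
spec:
  replicas: 10                       # matches DESIRED=10 in the ReplicaSet output below
  selector:
    matchLabels:
      app: hello-world               # assumed label, not visible in the transcript
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr              # assumed container name
        image: nginxdemos/hello      # assumed image, for illustration only
        ports:
        - containerPort: 80
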
[node2 k8s]$ kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
hello-deploy-6768d9889f   10        10        0       26s
my-nginx-cbdccf466        3         3         0       91m
[node2 k8s]$ kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
hello-deploy-6768d9889f   10        10        0       44s
my-nginx-cbdccf466        3         3         0       91m
[node2 k8s]$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
hello-deploy-6768d9889f-2kcv8   0/1     Pending   0          105s
hello-deploy-6768d9889f-64n87   0/1     Pending   0          105s
hello-deploy-6768d9889f-8l5q5   0/1     Pending   0          105s
hello-deploy-6768d9889f-bljt2   0/1     Pending   0          105s
hello-deploy-6768d9889f-hlf98   0/1     Pending   0          105s
hello-deploy-6768d9889f-mdf2d   0/1     Pending   0          105s
hello-deploy-6768d9889f-pgsz8   0/1     Pending   0          105s
hello-deploy-6768d9889f-qm2tb   0/1     Pending   0          105s
hello-deploy-6768d9889f-wjt5w   0/1     Pending   0          105s
hello-deploy-6768d9889f-xdl85   0/1     Pending   0          105s
hello-pod                       0/1     Pending   0          29m
my-nginx-cbdccf466-5tvl7        0/1     Pending   0          92m
my-nginx-cbdccf466-9phx5        0/1     Pending   0          92m
my-nginx-cbdccf466-sk8bc        0/1     Pending   0          92m
[node2 k8s]$
[node2 k8s]$
[node2 k8s]$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
hello-deploy-6768d9889f-2kcv8   0/1     Pending   0          2m39s
hello-deploy-6768d9889f-64n87   0/1     Pending   0          2m39s
hello-deploy-6768d9889f-8l5q5   0/1     Pending   0          2m39s
hello-deploy-6768d9889f-bljt2   0/1     Pending   0          2m39s
hello-deploy-6768d9889f-hlf98   0/1     Pending   0          2m39s
hello-deploy-6768d9889f-mdf2d   0/1     Pending   0          2m39s
hello-deploy-6768d9889f-pgsz8   0/1     Pending   0          2m39s
hello-deploy-6768d9889f-qm2tb   0/1     Pending   0          2m39s
hello-deploy-6768d9889f-wjt5w   0/1     Pending   0          2m39s
hello-deploy-6768d9889f-xdl85   0/1     Pending   0          2m39s
hello-pod                       0/1     Pending   0          30m
my-nginx-cbdccf466-5tvl7        0/1     Pending   0          93m
my-nginx-cbdccf466-9phx5        0/1     Pending   0          93m
my-nginx-cbdccf466-sk8bc        0/1     Pending   0          93m
[node2 k8s]$ kubectl get deploy hello-deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   0/10    10           0           3m16s
[node2 k8s]$ kubectl scale deploy hello-deploy --replicas 5
deployment.apps/hello-deploy scaled
[node2 k8s]$ kubectl get deploy hello-deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   0/5     5            0           4m1s
[node2 k8s]$ kubectl scale deploy hello-deploy --replicas 12
deployment.apps/hello-deploy scaled
[node2 k8s]$ kubectl get deploy hello-deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   0/12    12           0           5m14s
[node2 k8s]$ kubectl get deploy hello-deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   0/12    12           0           5m24s
[node2 k8s]$ kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
hello-deploy-6768d9889f   12        12        0       5m33s
my-nginx-cbdccf466        3         3         0       96m
[node2 k8s]$
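
As an alternative to the imperative kubectl scale commands above, the replica count can also be changed declaratively by editing the manifest and re-applying it with kubectl apply -f pod2.yml. A sketch of the relevant fragment, assuming the illustrative manifest shown earlier:

# pod2.yml (fragment) -- only the desired replica count changes
spec:
  replicas: 12    # previously 10; re-apply the file for the change to take effect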

==============================================================================
[node1 ~]$ docker version
Client: Docker Engine - Community
 Version:           24.0.2
 API version:       1.43
 Go version:        go1.20.4
 Git commit:        cb74dfc
 Built:             Thu May 25 21:55:21 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.2
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.4
  Git commit:       659604f
  Built:            Thu May 25 21:54:24 2023
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          1.6.21
  GitCommit:        3dce8eb055cbb6872793272b4f20ed16117344f8
 runc:
  Version:          1.1.7
  GitCommit:        v1.1.7-0-g860f061
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
==============================================================================

[node1 ~]$ kubectl version --output yaml


clientVersion:
  buildDate: "2023-05-17T14:20:07Z"
  compiler: gc
  gitCommit: 7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647
  gitTreeState: clean
  gitVersion: v1.27.2
  goVersion: go1.20.4
  major: "1"
  minor: "27"
  platform: linux/amd64
kustomizeVersion: v5.0.1
==============================================================================
[node1 ~]$ kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
W0403 15:18:52.880232 3154 initconfiguration.go:120] Usage of CRI endpoints
without URL scheme is deprecated and can cause kubelet errors in the future.
Automatically prepending scheme "unix" to the "criSocket" with value
"/run/docker/containerd/containerd.sock". Please update your configuration!
I0403 15:18:53.183137 3154 version.go:256] remote version is much newer:
v1.29.3; falling back to: stable-1.27
[init] Using Kubernetes version: v1.27.12
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the
verification:
KERNEL_VERSION: 4.4.0-210-generic
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[WARNING SystemVerification]: failed to parse kernel config: unable to load
kernel module: "configs", output: "", err: exit status 1
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]:
/proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your
internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config
images pull'
W0403 15:18:53.589585 3154 images.go:80] could not find officially supported
version of etcd for Kubernetes v1.27.12, falling back to the nearest etcd version
(3.5.7-0)
W0403 15:19:01.039500 3154 checks.go:835] detected that the sandbox image
"registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used
by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI
sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes
kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local
node1] and IPs [10.96.0.1 192.168.0.8]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs
[192.168.0.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs
[192.168.0.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file
"/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file
"/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0403 15:19:11.380949 3154 images.go:80] could not find officially supported
version of etcd for Kubernetes v1.27.12, falling back to the nearest etcd version
(3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static
Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502345 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the
"kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the
configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels:
[node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-
load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints
[node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 5y3h0p.o587crg8wjnhweqb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs
in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller
automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node
client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public"
namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.


Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as
root:

kubeadm join 192.168.0.8:6443 --token 5y3h0p.o587crg8wjnhweqb \
        --discovery-token-ca-cert-hash sha256:07d48e24411b12de85aab36ce306ab9bf567374f923d68e7aaf12d98af62d819
Waiting for api server to startup
Warning: resource daemonsets/kube-proxy is missing the kubectl.kubernetes.io/last-
applied-configuration annotation which is required by kubectl apply. kubectl apply
should only be used on resources created declaratively by either kubectl create --
save-config or kubectl apply. The missing annotation will be patched automatically.
daemonset.apps/kube-proxy configured
No resources found
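
For reference, the flags passed to kubeadm init above can also be expressed as a kubeadm configuration file and used with 'kubeadm init --config kubeadm-config.yaml'. A sketch assuming kubeadm's v1beta3 config API; the file name is hypothetical, and the advertise address and pod CIDR are taken from the run above.

# kubeadm-config.yaml (hypothetical file name)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.8      # equivalent of --apiserver-advertise-address $(hostname -i)
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.5.0.0/16             # equivalent of --pod-network-cidr 10.5.0.0/16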

==============================================================================

[node2 ~]$ kubectl get nodes


NAME    STATUS   ROLES           AGE     VERSION
node2   Ready    control-plane   5m46s   v1.27.2

==============================================================
