Docker
•   Docker is the world’s leading software container platform. It was launched in 2013 by a
    company called dotCloud, Inc., which was later renamed Docker, Inc. It is written in the
    Go language.
•   Developers can write code without worrying about differences between the testing and
    production environments. System admins need not worry about infrastructure, as Docker can
    easily scale the number of running containers up and down. Docker comes into play at the
    deployment stage of the software development cycle.
                               Containerization
•   Containerization is OS-level virtualization that creates multiple virtual units in
    user space, known as containers.
•   Containers share the same host kernel but are isolated from each other through
    private namespaces and resource control mechanisms at the OS level.
•   These containers run on top of the same shared operating system kernel of the
    underlying host machine and one or more processes can be run within each
    container.
•   With containers you don’t have to pre-allocate any RAM; memory is allocated dynamically
    as the containers run.
•    Containers virtualize CPU, memory, storage, and network resources at the OS
    level, providing developers with a sandboxed view of the OS logically isolated
    from other applications.
•   Docker is the most popular open-source container format available and is
    supported on Google Cloud Platform and by Google Kubernetes Engine.
                              Docker Architecture
•   Docker architecture consists of Docker client, Docker Daemon running on Docker Host, and Docker Hub
    repository.
•   Docker has a client-server architecture in which the client communicates with the Docker Daemon running
    on the Docker Host through a REST API, over UNIX sockets or a network interface (TCP).
•   To build a Docker image, we use the client to send the build command to the Docker Daemon; the daemon
    then builds the image from the given inputs and stores it on the Docker host, from where it can be pushed
    to a Docker registry.
•   To obtain an existing image, we simply execute the pull command from the client, and the Docker Daemon
    will pull the image from Docker Hub (or another configured registry).
•   Finally, to run an image, we execute the run command from the client, which creates and starts a
    container (see the sketch below).
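A minimal sketch of this build / pull / run workflow with the Docker CLI; the image name myapp and its tag are hypothetical examples, not from the original slides:

    # Build an image from the Dockerfile in the current directory (tag is illustrative)
    docker build -t myapp:1.0 .

    # Pull an existing image from Docker Hub
    docker pull ubuntu:22.04

    # Run an image; the daemon creates and starts a container from it
    docker run -d --name myapp-container myapp:1.0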
•   Docker Daemon: The Docker daemon manages all the services, communicating with other daemons when
    necessary. It manages Docker objects such as images, containers, networks, and volumes by handling
    Docker API requests.
•   Docker Client: The Docker client is how users interact with Docker. The docker command uses the Docker
    API, and a single client can communicate with multiple daemons. When a user runs a docker command in the
    terminal, the client sends the instruction to the daemon in the form of a Docker REST API request.
•   The main job of the Docker client is to provide a way to pull images from the Docker registry and run
    them on the Docker host. The most common client commands are docker build, docker pull, and docker run.
•   Docker Host: A Docker host is the machine that runs the Docker daemon and is responsible
    for running one or more containers. It comprises the Docker daemon, images, containers,
    networks, and storage.
•   Docker Registry: All Docker images are stored in a Docker registry. There is a public
    registry known as Docker Hub that anyone can use, and we can also run our own private
    registry. With the docker run or docker pull commands we can pull the required images from
    our configured registry; images are pushed into the configured registry with the docker
    push command (see the sketch below).
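As an illustrative sketch (the registry address localhost:5000 and the image name are assumptions), pushing to and pulling from a configured registry looks like this:

    # Run a throwaway private registry locally using the official registry image
    docker run -d -p 5000:5000 --name registry registry:2

    # Tag a local image for that registry, then push it
    docker tag myapp:1.0 localhost:5000/myapp:1.0
    docker push localhost:5000/myapp:1.0

    # Any host configured to reach the registry can now pull the image
    docker pull localhost:5000/myapp:1.0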
                            Docker objects
•   Docker Images: An image contains the instructions for creating a Docker container. It is
    a read-only template used to store and ship applications. Images are an important part of
    the Docker experience because they enable collaboration between developers in a way that
    was not possible before.
•   Docker Containers: Containers are created from Docker images; they are the ready-to-run
    applications. With the Docker API or CLI we can start, stop, delete, or move a container
    (see the sketch below). A container can access only the resources defined in its image,
    unless additional access is configured when the container is created from the image.
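A sketch of those lifecycle operations with the CLI; the container name web and the nginx image are hypothetical examples:

    # Create and start a container from an image, then stop, restart, and remove it
    docker run -d --name web nginx:latest
    docker stop web
    docker start web
    docker rm -f web     # force-remove the container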
•   Docker Storage: We can store data in the writable layer of a container, but this requires
    a storage driver. The storage driver controls how images and containers are stored and
    managed on the Docker host.
                       Types of Docker Storage
•   Data Volumes: Data Volumes can be mounted directly into the filesystem of the container
    and are essentially directories or files on the Docker Host filesystem.
•   Volume Container: To preserve the data (state) produced by a running container, Docker volume
    file systems are mounted on Docker containers. Because volumes have a life cycle independent of
    the container and are stored on the host, it is simple for users to share file systems among
    containers and to back up data.
•   Directory Mounts: A host directory can be specified and mounted as a volume in your container
    (a bind mount).
•   Storage Plugins: Docker volume plugins let us connect Docker containers to external storage
    platforms such as Amazon EBS, so the container’s state can be maintained outside the host
    (see the sketch below).
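A sketch of the first three storage options using the CLI; the volume, container, and directory names are assumptions for illustration:

    # Data volume: create a named volume and mount it into a container
    docker volume create app-data
    docker run -d --name db -v app-data:/var/lib/data ubuntu:22.04 sleep infinity

    # Volume container pattern: reuse another container's volumes
    docker run --rm --volumes-from db ubuntu:22.04 ls /var/lib/data

    # Directory (bind) mount: mount a host directory into the container
    docker run -d --name web -v "$(pwd)/site":/usr/share/nginx/html nginx:latest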
                           Docker Networking
•   Docker networking provides network isolation for Docker containers, and a user can attach a
    container to as many networks as needed. It requires far fewer OS instances to run a workload
    than VM-based approaches.
•   Types of Docker Network
•   Bridge: The default network driver. Use it when containers running on the same Docker host
    need to communicate with each other (see the sketch after this list).
•   Host: Used when you don’t need network isolation between the container and the host; the
    container shares the host’s network stack.
•   Overlay: Enables swarm services and containers running on different Docker hosts to
    communicate with each other.
•   None: It disables all networking.
•   macvlan: Assigns a MAC (Media Access Control) address to each container, making it appear
    as a physical device on the network.
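A sketch of the bridge driver, the case most users hit first; the network and container names are assumptions:

    # Create a user-defined bridge network and attach two containers to it
    docker network create --driver bridge app-net
    docker run -d --name api --network app-net nginx:latest
    docker run --rm --network app-net busybox wget -qO- http://api

    # Host and none networks are referenced directly at run time
    docker run -d --network host nginx:latest
    docker run -d --network none busybox sleep 3600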
                  Components of Docker
The main components of Docker include – Docker clients and servers, Docker
images, Dockerfile, Docker Registries, and Docker containers.
•   Docker Clients and Servers– Docker has a client-server architecture. The Docker
    Daemon/Server hosts and manages all containers. It receives requests from the Docker
    client through the CLI or REST API and processes them accordingly. The Docker client and
    daemon can run on the same host or on different hosts.
•   Docker Images– Docker images are read-only templates used to build Docker containers. The
    foundation of every image is a base image, e.g. ubuntu:14.04 LTS or Fedora 20. Base images can
    also be created from scratch; required applications are then added by modifying the base image,
    and this process of creating a new image is called “committing the change”.
•   Dockerfile– A Dockerfile is a text file that contains a series of instructions on how to build
    your Docker image. The resulting image contains the project code and its dependencies. The same
    Docker image can be used to spin up any number of containers, each getting its own writable layer
    on top of the underlying image. The final image can be uploaded to Docker Hub and shared among
    collaborators for testing and deployment. Common instructions used in a Dockerfile include FROM,
    CMD, ENTRYPOINT, VOLUME, ENV, and many more (see the sketch below).
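A minimal sketch of a Dockerfile using a few of the instructions named above, written out and built from a shell; the contents are illustrative assumptions, not a real project:

    # Write a minimal Dockerfile, then build and run the resulting image
    cat > Dockerfile <<'EOF'
    FROM ubuntu:22.04
    ENV APP_ENV=production
    VOLUME /data
    ENTRYPOINT ["/bin/echo"]
    CMD ["hello from the image"]
    EOF

    docker build -t demo:1.0 .
    docker run --rm demo:1.0     # prints "hello from the image"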
•   Docker Registries– A Docker registry is a storage component for Docker images. We can
    store images in public or private repositories so that multiple users can collaborate on
    building an application. Docker Hub is Docker’s cloud-hosted public registry, where anyone
    can pull available images and push their own instead of building every image from scratch.
•   Docker Containers– Docker containers are runtime instances of Docker images. A container
    holds the whole kit required by an application, so the application can run in an isolated
    way. For example, given an image of Ubuntu with an NGINX server, running it with the
    docker run command creates a container in which NGINX runs on Ubuntu (see the sketch below).
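A sketch of that example, with the official nginx image standing in for “Ubuntu with NGINX”; the image tag and port mapping are assumptions:

    # Run the image; a container is created with NGINX listening on port 80
    docker run -d --name web -p 8080:80 nginx:latest

    # Verify the runtime instance
    docker ps --filter name=web
    curl http://localhost:8080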
                        Kubernetes
Kubernetes, or K8s, was a project spun out of Google as an open-source
next-gen container scheduler, designed with the lessons learned from
developing and managing Borg and Omega.
Kubernetes was designed from the ground-up as a loosely coupled collection
of components centered around deploying, maintaining, and scaling
applications.
Kubernetes is often described as the Linux kernel of distributed systems.
It abstracts away the underlying hardware of the nodes and provides a
uniform interface through which applications can both be deployed and
consume the shared pool of resources.
Kubernetes Architecture
                     Architecture Overview
•   Masters - Act as the primary control plane for Kubernetes. Masters are
    responsible, at a minimum, for running the API server, scheduler, and cluster
    controller. They commonly also manage storing cluster state, cloud-provider-specific
    components, and other cluster-essential services.
•   Nodes - Are the ‘workers’ of a Kubernetes cluster. They run a minimal agent that
    manages the node itself, and are tasked with executing workloads as designated by
    the master.
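A quick way to see this split on a running cluster, assuming kubectl is already configured against it:

    # Nodes with the control-plane (master) role are listed alongside worker nodes
    kubectl get nodes -o wide

    # Inspect what a single node is running and reporting (<node-name> is a placeholder)
    kubectl describe node <node-name>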
Master Components
          ●   Kube-apiserver
          ●   Etcd
          ●   Kube-controller-manager
          ●   Cloud-controller-manager
          ●   Kube-scheduler
                        kube-apiserver
•   The API server provides a forward-facing REST interface into the
    Kubernetes control plane and datastore. All clients, including nodes,
    users, and other applications, interact with Kubernetes strictly through
    the API server.
•   It is the true core of Kubernetes, acting as the gatekeeper to the cluster by
    handling authentication and authorization, request validation, mutation,
    and admission control, in addition to being the front end to the backing
    datastore.
•   Etcd: etcd acts as the cluster datastore, providing a strongly
    consistent and highly available key-value store used for persisting
    cluster state.
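Since every client goes through the API server, even simple inspection commands are really REST calls against it; a small sketch (assumes kubectl is configured for the cluster):

    # kubectl translates these into REST requests against the kube-apiserver
    kubectl get --raw /healthz        # raw GET on an API server endpoint
    kubectl api-resources             # resource types served by the API server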
•   Cloud-controller-manager: A daemon that provides cloud-provider-specific
    knowledge and integration capability into the core control loop of
    Kubernetes. Its controllers include the Node, Route, and Service controllers,
    plus an additional controller to handle PersistentVolumeLabels.
•   Kube-scheduler: A policy-rich engine that evaluates workload
    requirements and attempts to place each workload on a matching resource. These
    requirements can include general hardware requirements, affinity and
    anti-affinity rules, and other custom resource requirements (see the sketch below).
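One way to observe the scheduler's placement decisions on a running cluster; the pod name is a placeholder:

    # The NODE column shows where the scheduler placed each pod
    kubectl get pods -o wide

    # A pod's events include the 'Scheduled' decision made by kube-scheduler
    kubectl describe pod <pod-name>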
Node Components
    ● Kubelet
    ● Kube-proxy
    ● Container runtime engine
                                kubelet
•   Acts as the node agent responsible for managing pod lifecycle on its host. The kubelet
    understands YAML container manifests, which it can read from several sources:
•   File path
•   HTTP endpoint
•   Etcd watch, acting on any changes
•   HTTP server mode, accepting container manifests over a simple API (see the sketch below)
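For the file-path source, the kubelet watches a static-pod directory; on kubeadm-provisioned control-plane nodes this is typically /etc/kubernetes/manifests (an assumption about the setup):

    # Static pod manifests the kubelet reads from disk on a control-plane node
    ls /etc/kubernetes/manifests
    # typically: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml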
                               kube-proxy
•   Manages the network rules on each node and performs connection forwarding
    or load balancing for Kubernetes cluster services.
•   Available Proxy Modes:
     •   Userspace
     •   iptables
     •   ipvs (alpha in 1.8)
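A sketch for checking which proxy mode a node is running; the kube-proxy ConfigMap location assumes a kubeadm-style deployment, and the local metrics endpoint assumes the default bind address:

    # kube-proxy configuration is commonly stored in a ConfigMap in kube-system
    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -i mode

    # kube-proxy also reports its active mode on its local metrics port
    curl -s http://localhost:10249/proxyMode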
            Container Runtime
With respect to Kubernetes, a container runtime is a CRI
(Container Runtime Interface) compatible application that
executes and manages containers.
 ●   containerd (Docker)
 ●   CRI-O
 ●   rkt
 ●   Kata (formerly Clear Containers and Hyper)
 ●   Virtlet (a VM-based CRI-compatible runtime)
          Additional Services
Kube-dns - Provides cluster-wide DNS services. Services are resolvable at
<service>.<namespace>.svc.cluster.local (see the sketch below).
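A sketch of checking that resolution from inside the cluster; the busybox image and the default kubernetes service are used as the example:

    # Start a throwaway pod and resolve a service name through cluster DNS
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
      nslookup kubernetes.default.svc.cluster.local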
Heapster - Metrics collector for the Kubernetes cluster, used by some
resources such as the Horizontal Pod Autoscaler (required for
kube-dashboard metrics).
Kube-dashboard - A general-purpose, web-based UI for Kubernetes.