Docker Essentials
Dan Pomohaci
Introduction
1. Course Goal
2. Course Target
3. Prerequisites
4. Course Structure
Course Goal
This course covers the essential information about Docker:
- installation,
- basic usage,
- image creation,
- swarm.
Course Target
This course is addressed to those who want to learn to use Docker (devs, admins, testers, devops, etc.).
Prerequisites
      Basic knowledge about shell scripting
Course Structure
- 4 days, 4 hours per day
- 4 parts:
1. General Knowledge
2. Creating Images
3. Docker Compose
4. Swarm
Containers vs VMs
- Containers: Docker
Virtual Machines
Pros/Cons
Pros:
- Flexibility (different OS)
- Full separation of users
- Security

Cons:
- Big hypervisor overhead
- Need extra software
- Need more resources
- Complex configuration
Containers
Multiple containers can run on the same machine and share the OS kernel, each running as an isolated process in user space.
Pros/Cons
Pros:
- No overhead

Cons:
- Not as secure as we might expect
Docker Dictionary
When you use Docker, you are creating and using images, containers, networks,
volumes, plugins, and other objects. This section is a brief overview of some of
those concepts.
Image
An image is a read-only template with instructions for creating a Docker container.
An image typically contains a union of layered filesystems stacked on top of each other.
An image does not have state and it never changes.
Often, an image is based on another image, with some additional customization.
An image is identified by its repository, a tag, and an ID.
Container
A container is a standardized, encapsulated environment that runs applications.
A container is defined by its image as well as any configuration options you provide to it when you create or start it.
When a container is removed, any changes to its state that are not stored in persistent storage disappear.
A container has the following attributes:
- ID: unique ID of the container
- image: source image from which the container has been created
- command: command executed at container launch
- created: creation date
- status: current status of the container
- ports: ports assigned to the container
- name: name set at start, or a randomly generated human-friendly name
Volume
A volume is a specially designated directory within one or more containers that bypasses the Union File System.
Volumes are designed to persist data, independent of the container's life cycle.
Docker therefore never automatically deletes volumes when you remove a container.
Docker Engine
Docker Engine is the underlying client-server technology that builds and runs containers using Docker's components and services.
- A server, which is a type of long-running program called a daemon process (the dockerd command).
Service
Services are really just containers in production.
A service only runs one image, but it codifies the way that image runs: what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on.
Docker Compose
Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.
Swarm
A   swarm is a cluster of one or more Docker Engines running in swarm mode.
Node
A   node is an instance of the Docker engine participating in the swarm.
     You can run one or more nodes on a single physical computer or cloud server,
but production swarm deployments typically include Docker nodes distributed
across multiple physical and cloud machines.
Task
A   task carries a Docker container and the commands to run inside the container.
     It is the atomic scheduling unit of swarm.
     Manager nodes assign tasks to worker nodes according to the number of
replicas set in the service scale. Once a task is assigned to a node, it cannot
move to another node. It can only run on the assigned node or fail.
Registry
The registry is a stateless, highly scalable server-side application that stores and lets you distribute Docker images.
Besides private registries, the most used registry is Docker Hub.
Exercise: Use Docker Hub
1. Create an account on Docker Hub
3. Other repositories
Docker Installation
Linux
Docker is available in all main distributions: Arch Linux, Debian, Fedora, Ubuntu, Mint, etc.
CentOS
To install Docker Engine - Community, you need a maintained version of CentOS 7.
Archived versions aren't supported or tested.
The centos-extras repository must be enabled.
Uninstall old versions
Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them, along with associated dependencies.
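The removal command itself is not shown here; on CentOS it might look like this (package list taken from the Docker documentation at the time of writing):

```
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine
```

It is fine if yum reports that none of these packages are installed.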
Install Docker CE
sudo yum install docker-ce docker-ce-cli containerd.io
Fedora
To install Docker, you need the 64-bit version of one of these Fedora versions:
 28
 29
Uninstall old versions
Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them, along with associated dependencies:

sudo dnf remove docker \
         docker-client \
         docker-client-latest \
         docker-common \
         docker-latest \
         docker-latest-logrotate \
         docker-logrotate \
         docker-selinux \
         docker-engine-selinux \
         docker-engine
Install Docker CE
sudo dnf install docker-ce docker-ce-cli containerd.io
Verify that Docker CE is installed correctly by running the hello-world image:

sudo docker run hello-world

Debian / Ubuntu
To install Docker CE, you need the 64-bit version of one of these versions:
- Buster 10
- Stretch 9 (stable)
- Cosmic 18.10
Install packages to allow apt to use a repository over HTTPS
sudo apt-get update
sudo apt-get install \
     apt-transport-https \
     ca-certificates \
     curl \
     gnupg2 \
     software-properties-common
Install Docker CE
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli \
     containerd.io docker-compose
Arch Linux
Install docker
pacman -S docker
Log out and log back in so that your group membership is re-evaluated.
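The group-membership step referred to above is not shown; a minimal sketch, assuming the default group name docker:

```
# create the docker group if it does not already exist,
# then add the current user to it
sudo groupadd -f docker
sudo usermod -aG docker $USER
```

After logging back in, docker commands should work without sudo.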
Windows
System Requirements
- Windows 10 64-bit: Pro, Enterprise or Education (Build 15063 or later).
- CPU with SLAT capability (see the Hyper-V list of SLAT-capable CPUs for more info).
Installation
Download the installer from download.docker.com and run it.
Mac
System Requirements
- Mac hardware must be a 2010 or newer model, with Intel's hardware support for memory management unit (MMU) virtualization, including Extended Page Tables (EPT) and Unrestricted Mode. You can check whether your machine has this support by running the following command in a terminal:
sysctl kern.hv_support
Installation:
Download the installer from Docker Hub and run it.
Check your distribution and kernel version:
lsb_release -a
uname -a
Docker Basics
docker help
In this section we will explore the most commonly used commands.
List Images
docker images [OPTIONS] [REPOSITORY[:TAG]]
The default command will show all top-level images, their repository and tags, and their size.
Get an Image
docker pull [OPTIONS] NAME[:TAG]
Run an Image
docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
Run an Image - main options
Option          Short   Description
--detach        -d      run container in background and print container ID
--interactive   -i      keep STDIN open even if not attached
--publish       -p      publish a container's port(s) to the host
--tty           -t      allocate a pseudo-TTY
--rm                    automatically remove the container when it exits
--env           -e      set an environment variable in the container
--name                  define a name for the container
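As a sketch, several of these options combine like this (the image, name, and ports are arbitrary examples):

```
# run nginx in the background, remove the container on exit,
# name it "web", and publish host port 8080 to container port 80
docker run -d --rm --name web -p 8080:80 nginx:latest
```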
List Containers
docker ps [OPTIONS]
Logs
docker logs [OPTIONS] CONTAINER
- --tail N: number of lines to show from the end of the logs (default: all)
Stop
docker stop CONTAINER [CONTAINER ...]
   The main process inside the container will receive   SIGTERM, and after a grace
period,   SIGKILL.
Restart
docker restart [OPTIONS] CONTAINER [CONTAINER ...]
Kill
docker kill [OPTIONS] CONTAINER [CONTAINER...]
Exec
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Option          Short   Description
--detach        -d      run command in the background
--interactive   -i      keep STDIN open even if not attached
--tty           -t      allocate a pseudo-TTY
--user          -u      username or UID (format: <name|uid>[:<group|gid>])
--env           -e      set an environment variable in the container
--workdir       -w      working directory inside the container
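A typical use is opening an interactive shell in a running container (the container name "web" is an arbitrary example):

```
# start an interactive shell inside the running container "web"
docker exec -it web /bin/sh
```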
Recap and cheat sheet (1)
## List Docker CLI commands
docker
docker container --help
Sharing Data
By default all files created inside a container are stored on a writable container layer.
This means that the data doesn't persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.
Docker has two options for containers to store files on the host machine, so that the files are persisted even after the container stops:
- bind mounts,
- volumes.
Bind Mounts
When you use a bind mount, a file or directory on the host machine is mounted into a container.
The file or directory is referenced by its full or relative path on the host machine.
The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist.
Volumes
Volumes are the preferred mechanism for persisting data generated by and used
by Docker containers.
Volumes are created and managed by Docker. You can create a volume explicitly using the docker volume create command, or Docker can create a volume during container or service creation.
   When you create a volume, it is stored within a directory on the Docker
host. When you mount the volume into a container, this directory is what is
mounted into the container. This is similar to the way that bind mounts work,
except that volumes are managed by Docker and are isolated from the core
functionality of the host machine.
   A given volume can be mounted into multiple containers simultaneously.
When no running container is using a volume, the volume is still available to
Docker and is not removed automatically.
Create a volume
docker volume create my-vol
List volumes
docker volume ls
Inspect a volume
docker volume inspect my-vol
[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
        "Name": "my-vol",
        "Options": {},
        "Scope": "local"
    }
]
Remove a volume
docker volume rm my-vol

Volumes are a good choice:
- When the Docker host is not guaranteed to have a given directory or file structure. Volumes help you decouple the configuration of the Docker host from the container runtime.
- When you want to store your container's data on a remote host or a cloud provider, rather than locally.
- When you need to back up, restore, or migrate data from one Docker host to another.
Flags
Originally, the -v or --volume flag was used for standalone containers and the --mount flag was used for swarm services.
However, starting with Docker 17.06, you can also use --mount with standalone containers, and it is now the recommended syntax.
volume flag
Consists of three fields, separated by colon characters (:). The fields must be in the correct order:
- In the case of bind mounts, the first field is the path to the file or directory on the host machine.
- The second field is the path where the file or directory is mounted in the container.
- The third field is optional, and is a comma-separated list of options, such as ro.
Example:
docker run -d \
       -it \
       --name devtest \
       -v "$(pwd)"/target:/app \
       nginx:latest
mount flag
Consists of multiple key-value pairs, separated by commas, each consisting of a <key>=<value> tuple.
The --mount syntax is more verbose than --volume, but the order of the keys is not significant:
- type of the mount, which can be bind, volume, or tmpfs.
- source of the mount (also src); for bind mounts, this is the path to the file or directory on the host machine.
- target (or dst) takes as its value the path where the file or directory is mounted in the container.
Example:
docker run -d \
       -it \
       --name devtest \
       --mount type=bind,source="$(pwd)"/target,target=/app \
       nginx:latest
Networking
Network Commands
All network commands have the syntax:

docker network COMMAND
Bridge
The default network driver. If you don't specify a driver, this is the type of network you are creating.
Bridge networks are usually used when your applications run in standalone containers that need to communicate.
Default Bridge
When you start Docker, a default bridge network (also called bridge) is created automatically, and newly-started containers connect to it unless otherwise specified.

docker network ls

If you do not specify a network using the --network flag, your container is connected to the default bridge network. Containers connected to the default bridge network can communicate, but only by IP address.
User-dened Bridge
Command to create a user-dened bridge network:
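A minimal sketch (the network name my-net is an arbitrary example):

```
# create a user-defined bridge network named "my-net"
docker network create my-net
# start a container attached to it
docker run -d --name web --network my-net nginx:latest
```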
advantages of user-defined bridge
- User-defined bridges provide better isolation and interoperability between containerized applications.
host
For standalone containers, the host driver removes network isolation between the container and the Docker host, and uses the host's networking directly.
For swarm services, host networking is only available on Docker 17.06 and higher.
Creating Images
Docker can build images automatically by reading the instructions from a Dockerfile.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
Using docker build, users can create an automated build that executes several command-line instructions in succession.
Build Image
docker build [OPTIONS] PATH
The docker build command builds Docker images from a Dockerfile and a context.
A build's context is the set of files located in the specified PATH.
Build Options
Option    Short   Description
--file    -f      Name of the Dockerfile (default is 'PATH/Dockerfile')
--tag     -t      Name and optionally a tag in the 'name:tag' format
--rm              Remove intermediate containers after a successful build
Build Examples
Build the image using the current directory. A file named Dockerfile must exist in it:

docker build .

Build the image using the current directory and a specific Dockerfile:

docker build -f <path/to/Dockerfile> .
Dockerfile
Dockerfile is a plain text document with a simple syntax:

# Comment
INSTRUCTION arguments

The instruction is not case-sensitive. However, convention is for instructions to be UPPERCASE to distinguish them from arguments more easily.
Docker runs instructions in a Dockerfile in order.
Docker treats lines that begin with # as comments. A # marker anywhere else in a line is treated as an argument.
Next we will go through the most common commands.
For a complete list and more details see: Dockerfile reference
FROM
FROM <image>[:<tag>] [AS <name>]
   The FROM instruction initializes a new build stage and sets the Base Image
for subsequent instructions.
   As such, a valid Dockerle must start with a FROM instruction.
ENV
ENV <key>=<value>
The ENV instruction sets the environment variable <key> to the value <value>.
This value will be in the environment for all subsequent instructions in the build stage and can be replaced inline in many as well.
WORKDIR
WORKDIR /path/to/workdir
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction.
The WORKDIR instruction can be used multiple times in a Dockerfile. If a relative path is provided, it will be relative to the path of the previous WORKDIR instruction.
The WORKDIR instruction can resolve environment variables previously set using ENV. You can only use environment variables explicitly set in the Dockerfile.
USER
USER <user>[:<group>] or
USER <UID>[:<GID>]
The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.
This instruction doesn't create the user; the user must already exist before the instruction is used.
RUN
# shell form
RUN <command>
# exec form
RUN ["executable", "param1", "param2"]
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
In the shell form you can use a \ (backslash) to continue a single RUN instruction onto the next line.
COPY
COPY [--chown=<user>:<group>] <src> <dest>
The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
There is also the ADD instruction, which does the same thing but also supports two other sources (URL and archive). The Docker documentation recommends always using COPY because it is more explicit.
EXPOSE
EXPOSE <port> [<port>/<protocol>...]
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime.
You can specify whether the port listens on TCP or UDP; the default is TCP if the protocol is not specified.
This instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run.
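A minimal sketch pulling several of these instructions together (the base image, paths, port, and file names are illustrative assumptions, not part of the course material):

```
FROM python:3.8-slim
ENV APP_ENV=production
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "server.py"]
```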
LABEL
LABEL <key>=<value> <key>=<value> <key>=<value> ...
LABEL version="1.0" \
      description="This image is used for ..."
ENTRYPOINT
# exec form, preferred
ENTRYPOINT ["executable", "param1", "param2"]
# shell form
ENTRYPOINT command param1 param2
CMD
# exec form, this is the preferred form
CMD ["executable","param1","param2"]
# as default parameters to ENTRYPOINT
CMD ["param1","param2"]
# shell form
CMD command param1 param2
3. CMD will be overridden when running the container with alternative arguments.
Some combinations:

A. If ENTRYPOINT is in shell form:

ENTRYPOINT cmd1 p1

CMD is ignored and Docker executes:

/bin/sh -c cmd1 p1

B. If ENTRYPOINT is in exec form:

ENTRYPOINT ["cmd1", "p1"]

without CMD, Docker executes:

cmd1 p1

and with CMD ["p2", "p3"], Docker executes:

cmd1 p1 p2 p3

C. If there is no ENTRYPOINT, CMD is executed depending on its form.
 Wrappers
For example, suppose your service was written to read its configuration from a file instead of from environment variables. In such a situation, you might include a wrapper script that generates the app's config file from the environment variables, then launches the app by calling exec /path/to/app at the very end.
 Single-Purpose Images
If your image is built to do only one thing (for example, run a web server), use ENTRYPOINT to specify the path to the server binary and any mandatory arguments.
Then you can append program arguments naturally on the command line, just like you would if you were running nginx without Docker.
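For instance, if the image's ENTRYPOINT were set to the nginx binary (my-nginx is an assumed example image), extra arguments pass straight through to the server:

```
# assuming the image defines ENTRYPOINT ["nginx"],
# the -g option is appended and passed to nginx itself
docker run my-nginx -g "daemon off;"
```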
 Multi-Mode Images
It's also a common pattern for images that support multiple "modes" to use the first argument to docker run <image> to specify a verb that maps to the mode, such as shell, migrate, or debug.
ENTRYPOINT ["/bin/parse_container_args"]
Use multi-stage builds
Individual study:
   https://docs.docker.com/develop/develop-images/multistage-build/
Decouple applications
Each container should have only one concern.
   Decoupling applications into multiple containers makes it easier to scale
horizontally and reuse containers.
For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner.
Publishing Images
By publishing an image you save it in a safe place and make it available for future use.
Push
docker push [OPTIONS] NAME[:TAG]
Exercise 1: Push an image to Docker Hub
1. Log in on Docker Hub,
2. Create a repository:
   - name it "test",
   - select Private,
3. Open a terminal and sign in to Docker Hub on your computer by running docker login,
4. Create an image:
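The tag-and-push steps might look like this (the username your-id, the source image, and the tag v1 are placeholder examples):

```
# tag an existing local image for your private "test" repository
docker tag hello-world your-id/test:v1
# push it to Docker Hub
docker push your-id/test:v1
```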
Docker Compose
Command General Syntax
docker-compose [OPTIONS] [COMMAND] [ARGS]
Options
Option                 Short   Description
--file                 -f      Specify the Compose file (default: docker-compose.yml)
--project-name         -p      Project name (default: directory name)
--project-directory            Working directory (default: the path of the Compose file)
Commands
- build: build or rebuild services
- ps: list containers
Compose file
The Compose file is a YAML file defining services, networks and volumes.
The default path for a Compose file is ./docker-compose.yml.
A service definition contains configuration that is applied to each container started for that service, much like passing command-line parameters to docker container create. Likewise, network and volume definitions are analogous to docker network create and docker volume create.
Next we will go through the most common options.
For a complete list and more details see: Compose file reference.
Structure of a compose-le
version: "3.7"
services:
   <service_name1>:
       <options>
   ...
volumes:
   <volume_name1>:
       <options>
   ...
networks:
   <network_name1>:
       <options>
   ...
configs:
   <config_name1>:
       <options>
secrets:
   <secrets_name1>:
        <options>
version: "3.7"
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
version: "3.7"
services:
  web:
    image: ubuntu:14.04
  db:
    image: postgres:9.5.4
version: "3.7"
services:
  app:
    image: nginx:alpine
    networks:
       app_net:
         ipv4_address: 172.16.238.10
...
version: "3.7"
services:
  web:
    image: ubuntu:14.04
    ports:
       - "80:8080"
       - "90:8090"
  db:
    image: postgres:9.5.4
    ports:
       - "5432:5432"
version: "3.7"
services:
  web:
    image: nginx:alpine
    volumes:
       - type: volume
         source: mydata
         target: /data
  db:
    image: postgres:9.5.4
    volumes:
       - "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
       - "dbdata:/var/lib/postgresql/data"
volumes:
  mydata:
  dbdata:
Volumes Section
This section allows you to create named volumes that can be reused across multiple services, and are easily retrieved and inspected using the docker command line or API.
version: "3.7"
services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data:
    external: true
Network Section
The top-level networks key lets you specify networks to be created.
version: "3.7"
services:
  app:
    image: nginx:alpine
    networks:
       app_net:
         ipv4_address: 172.16.238.10
networks:
  app_net:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
Swarm
A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services).
One of the key advantages of swarm services over standalone containers is that you can modify a service's configuration, including the networks and volumes it is connected to, without the need to manually restart the service. Docker will update the configuration, stop the service tasks with the out-of-date configuration, and create new ones matching the desired configuration.
Swarm mode
Current versions of Docker include swarm mode for natively managing a cluster of Docker Engines called a swarm.
We will use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behavior.
Init
docker swarm init [OPTIONS]
   Initialize a swarm.
   It generates two random tokens, a worker token and a manager token. When
you join a new node to the swarm, the node joins as a worker or manager node
based upon the token you pass to   swarm join.
Join-token
docker swarm join-token [OPTIONS] (worker|manager)
Join
docker swarm join [OPTIONS] HOST:PORT
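Put together, a minimal two-node setup might look like this (the address and token are placeholder examples):

```
# on the first node: initialize the swarm and become a manager
docker swarm init --advertise-addr 192.168.1.10
# on the manager: print the join command (with token) for workers
docker swarm join-token worker
# on a second machine: join as a worker using the printed token
docker swarm join --token SWMTKN-1-xxxx 192.168.1.10:2377
```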
Leave
docker swarm leave [--force]
When you run this command on a worker, that worker leaves the swarm.
You can use the --force option on a manager to remove it from the swarm. However, this does not reconfigure the swarm to ensure that there are enough managers to maintain a quorum.
The safe way to remove a manager from a swarm is to demote it to a worker and then direct it to leave, without using --force.
Only use --force in situations where the swarm will no longer be used after the manager leaves, such as in a single-node swarm.
   https://docs.docker.com/engine/reference/commandline/swarm_join/
docker swarm unlock        # Unlock a manager after a daemon restart when autolock is on
docker swarm unlock-key    # Print the key needed for 'unlock'
Custom User
https://medium.com/faun/set-current-host-user-for-docker-container-4e521cef9ffc
backup-vol
#!/usr/bin/env bash
script_name=$(basename "$0")
usage() {
  >&2 echo "Usage: $script_name volume_name"
  exit 1
}
if [ $# -ne 1 ]; then
   usage
fi
set -e
volume=$1
today="$(date '+%Y%m%d%H%M')"
bin_dir=$(dirname "$(readlink -f "$0")")
project_dir=$(dirname "$bin_dir")
bkp_dir=$project_dir/backups
bkp_file="$volume-$today.tar.bz2"
mkdir -p "$bkp_dir"
cd "$bkp_dir"
echo "backup $volume"
docker run -v "$volume":/volume --rm loomchild/volume-backup backup - > "$bkp_file"
echo "$bkp_file" > "$volume.last"
cd "$project_dir"
restore-vol
#!/usr/bin/env bash
script_name=$(basename "$0")
usage() {
   >&2 echo "Usage: $script_name volume_name backup_tar"
   exit 1
}
if [ $# -ne 2 ]; then
   usage
fi
set -e
volume=$1
bkp_file=$2
echo "restore $bkp_file in $volume"
docker run -i -v "$volume":/volume --rm loomchild/volume-backup restore - < "$bkp_file"
Global Resources
1. Docker Documentation
2. Awesome-docker