Introduction to Docker
An Open Platform to Build, Ship, and Run Distributed Applications
Table of Contents
Objective
After completing this course, you will be able to deploy Docker and Docker containers
using best practices.
Overview
You will:
Learn what Docker is and what it is for.
Learn the terminology, and define images, containers, etc.
Learn how to install Docker.
Use Docker to run containers.
Use Docker to build containers.
Learn how to orchestrate Docker containers.
Interact with the Docker Hub website.
Acquire tips and good practice.
Know what's next: the future of Docker, and how to get help.
Agenda
Part 1
Part 2
About Docker
Instant Gratification
Local Development Workflow with Docker
Install Docker
Building Images with Dockerfiles
Initial Images
Working with Images and the Docker Hub website.
Our First Container
Connecting Containers
Building on a Container
Working with Volumes
Daemonizing Containers
Continuous Integration Workflow with Docker
More Container Daemonization
The Docker API
The Birth of Docker
Docker was first introduced to the world, with no pre-announcement and little fanfare, by Solomon Hykes, founder and CEO of dotCloud, in a five-minute lightning talk
at the Python Developers Conference in Santa Clara, California, on March 15, 2013.
At the time of this announcement, only about 40 people outside dotCloud had been
given the opportunity to play with Docker.
Within a few weeks of this announcement, there was a surprising amount of press.
The project was quickly open-sourced and made publicly available on GitHub, where
anyone could download and contribute to the project. Over the next few months,
more and more people in the industry started hearing about Docker and how it was
going to revolutionize the way software was built, delivered, and run. And within a year,
almost no one in the industry was unaware of Docker, but many were still unsure what
it was exactly, and why people were so excited about it.
Overview: About Docker
Objectives
At the end of this lesson, you will know:
About Docker Inc.
About Docker.
How to create an account on the Docker Hub.
About Docker Inc.
Focused on Docker and growing the Docker ecosystem:
Founded in 2009.
Formerly dotCloud Inc.
Released Docker in 2013.
What does Docker Inc. do?
Docker Engine - open source container management.
Docker Hub - online home and hub for managing your Docker containers.
Docker Enterprise Support - commercial support for Docker.
Docker Services & Training - professional services and training to help you get
the best out of Docker.
The problem
The Matrix from Hell
An inspiration and some ancient history!
Intermodal shipping containers
This spawned a Shipping Container Ecosystem!
A shipping container system for applications
Eliminate the matrix from Hell
Step 1: Create a lightweight container
Step 2: Make containers easy to use
Container technology has been around for a while (cf. LXC, Solaris Zones, BSD
Jails).
But their use was largely limited to specialized organizations, with special tools
and training. Containers were not portable.
Analogy: Shipping containers are not just steel boxes. They are steel boxes that
are a standard size, and have hooks and holes in all the same places
With Docker, Containers get the following:
Ease of use, tooling
Re-usable components
Ability to run on any Linux server today: physical, virtual machine, cloud,
OpenStack, +++
Ability to move between any of the above in a matter of seconds, with no
modification or delay
Ability to share containerized components
Self contained environment - no dependency hell
Tools for how containers work together: linking, nesting, discovery,
orchestration, ++
Technical & cultural revolution: separation of concerns
Step 3: Make containers super lightweight
Step 4: Build a System for creating, managing,
deploying code
Process Simplification
Docker can simplify both workflows and communication, and that usually starts with
the deployment story. Traditionally, the cycle of getting an application to production
often looks something like the following.
As a company, Docker preaches an approach of "batteries included but removable,"
which means that they want their tools to come with everything most people need to
get the job done, while still being built from interchangeable parts that can easily be
swapped in and out to support custom solutions.
By using an image repository as the hand-off point, Docker allows the responsibility
of building the application image to be separated from the deployment and operation
of the container.
What this means in practice is that development teams can build their application
with all of its dependencies, run it in development and test environments, and then
just ship the exact same bundle of application and dependencies to production.
Because those bundles all look the same from the outside, operations engineers can
then build or install standard tooling to deploy and run the applications.
The Docker Workflow features
Docker strongly encourages a particular workflow. It's a very enabling workflow
that maps well to how many companies are organized.
Revision Control
The first thing that Docker gives you out of the box is two forms of revision control.
One is used to track the filesystem layers that images are made up of, and the other is
a tagging system for built containers.
Filesystem layers
Docker containers are made up of stacked filesystem layers, each identified by a
unique hash, where each new set of changes made during the build process is laid on
top of the previous changes. That's great because it means that when you do a new
build, you only have to rebuild the layers that include and build upon the change
you're deploying. This saves time and bandwidth because containers are shipped
around as layers and you don't have to ship layers that a server already has stored.
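You can inspect those layers yourself: the docker history command lists the layers of an image, newest first, along with the instruction that created each one and its size. For example:
$ docker history ubuntu:14.04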
Image tags
Docker has a built-in mechanism for versioning: it provides image tagging at
deployment time. You can leave multiple revisions of your application on the
server and just tag them at release.
Building
Docker doesn't solve all build problems, but it does provide a standardized
tool configuration and tool set for builds, which makes builds much easier to manage.
The Docker command-line tool includes a build command that consumes a Dockerfile
and produces a Docker image. Each command in a Dockerfile generates a new layer
in the image. The great part of all of this standardization is that any engineer who
has worked with a Dockerfile can dive right in and modify the build of any other
application.
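As a minimal sketch of that standardization (the package and tag here are illustrative, not taken from the examples above), a Dockerfile is just a text file of instructions, and one command turns it into an image:
# Dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
CMD [ "curl", "--version" ]
$ docker build -t my_docker_hub_login/curl-demo:1.0 .
Each instruction in the file becomes one layer of the resulting image.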
What Docker Isn't
-Deployment Framework (Capistrano, Fabric, etc.)
-Enterprise Virtualization Platform (VMware, KVM, etc.)
-Cloud Platform (OpenStack, CloudStack, etc.)
-Configuration Management (Puppet, Chef, etc.)
-Workload Management Tool (Mesos, Fleet, etc.)
-Development Environment (Vagrant, etc.)
Containers Are Not Virtual Machines
Start shaping your understanding of how to leverage Docker by thinking of containers
not as virtual machines, but as very lightweight wrappers around a single Linux process.
During actual implementation, that process might spawn other processes, but on
the other hand, one statically compiled binary could be all that's inside the container.
Containers Are Lightweight
Creating a container takes very little space. A quick test on Docker reveals that a newly
created container from an existing image adds only a trivial amount of additional
disk space; that's pretty lightweight. On the other hand, a new virtual machine
created from a golden image might require thousands of megabytes.
The new container is so small because it is just a reference to a layered filesystem
image and some metadata about the configuration.
Isolation
Containers are isolated from each other. You can put limits on their resources, but
the default container configuration just has them all sharing CPU and memory
on the host system.
Stateless Applications
A good example of the kind of application that containerizes well is a web application.
If you think about your web application, though, it probably has local state that you rely on,
like configuration files. That might not seem like a lot of state, but it means that you've limited
the reusability of your container, and made it more challenging to deploy into different
environments, without maintaining configuration data in your codebase.
In many cases, the process of containerizing your application means that you move
configuration state into environment variables that can be passed to your application
from the container. This allows you to easily do things like use the same container to
run in either production or staging environments. In most companies, those environments
would require many different configuration settings, from the names of databases to
the hostnames of other service dependencies.
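For example (the variable name and hostnames here are illustrative), the same image can be re-pointed per environment without rebuilding it:
$ docker run -e DB_HOST=db.staging.example.com my_docker_hub_login/webapp
$ docker run -e DB_HOST=db.prod.example.com my_docker_hub_login/webapp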
With containers, you might also find that you are always decreasing the size of your
containerized application as you optimize it down to the bare essentials required to run.
Installing Docker
Objectives
At the end of this lesson, you will be able to:
Install Docker.
Run Docker without sudo.
Installing Docker
Docker is easy to install.
It runs on:
A variety of Linux distributions.
OS X via a virtual machine.
Microsoft Windows via a virtual machine.
Requirements for installing Docker on Linux
Be running a 64-bit architecture (currently x86_64/amd64 only). 32-bit is NOT currently
supported.
Be running a Linux 3.8 or later kernel.
The kernel must support an appropriate storage driver, for example:
-Device Mapper
-AUFS
-vfs
-btrfs
The cgroups and namespaces kernel features must be supported and enabled.
Installing Docker on Linux
It can be installed via:
Distribution supplied packages on virtually all distros.
(Includes at least: Arch Linux, CentOS, Debian, Fedora, Gentoo, openSUSE,
RHEL, Ubuntu.)
Packages supplied by Docker.
Installation script from Docker.
Binary download from Docker (it's a single file).
Installing Docker on your Linux distribution
On Debian and derivatives.
$ sudo apt-get install docker.io
This command installs docker version 1.6.x
Installation script from Docker
You can use the curl command to install on several platforms.
$ curl -s https://get.docker.io/ubuntu/ | sudo sh
This currently works on:
Ubuntu;
Debian;
Fedora;
Gentoo.
Installing Docker's latest version on Ubuntu 14.04 (LTS)
1. Open a terminal window.
2. Add the new GPG key.
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
3. Open the /etc/apt/sources.list.d/docker.list file. (Remove any existing entries.)
If it doesn't exist, create it.
4. Add the following entry.
deb https://apt.dockerproject.org/repo ubuntu-trusty main
5. Save and close the /etc/apt/sources.list.d/docker.list file.
6. Update your package manager.
$ sudo apt-get update
7. Install the recommended package.
$ sudo apt-get install linux-image-extra-$(uname -r)
8. Install Docker.
$ sudo apt-get install docker-engine
9. Start the Docker daemon.
$ sudo service docker start
10. Verify Docker is installed correctly.
$ sudo docker run hello-world
To enable memory and swap accounting on systems using GNU GRUB (GNU GRand Unified Bootloader),
do the following:
1. Edit the /etc/default/grub file.
2. Set the GRUB_CMDLINE_LINUX value as follows:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
3. Save the file, run sudo update-grub, and reboot.
Overview: Instant Gratification
Objectives
At the end of this lesson, you will have:
Seen Docker in action
Hello World
$ sudo docker run busybox echo datreon
datreon
That was our first container!
Busybox (small, lean image)
Ran a single process and echo'ed datreon
Download the ubuntu image
The Busybox image doesn't do much.
We will use a beefier Ubuntu image later.
It will take a few minutes to download, so let's do that now.
$ sudo docker pull ubuntu14.04
Pulling repository ubuntu
9f676bd305a4: Download complete
9cd978db300e: Download complete
bac448df371d: Downloading
[============>
] 10.04 MB/39.93 MB 23s
e7d62a8128cf: Downloading
[======>
] 8.982 MB/68.32 MB
1m21s
f323cf34fd77: Download complete
321f7f4200f4: Download complete
Docker architecture
Docker is a client-server application.
The Docker daemon
The Docker server. Receives and processes incoming Docker API requests.
The Docker client
Command line tool - the docker binary. Talks to the Docker daemon via the Docker API.
Docker Hub Registry
Public image registry.
The Docker daemon talks to it via the registry API.
Test Docker is working
Using the docker client:
$ sudo docker version
Client version: 1.1.1
Client API version: 1.13
Go version (client): go1.2.1
Git commit (client): bd609d2
Server version: 1.1.1
Server API version: 1.13
Go version (server): go1.2.1
Git commit (server): bd609d2
The docker group
Warning!
The docker group is root-equivalent.
Membership provides root-level access to the host.
You should restrict access to it like you would protect root.
Add the Docker group
$ sudo groupadd docker
Add ourselves to the group
$ sudo gpasswd -a $USER docker
Restart the Docker daemon
$ sudo service docker restart
Log out
$ exit
Hello World again without sudo
$ docker run ubuntu:14.04 echo datreon
Install Docker
Section summary
We've learned how to:
Install Docker.
Run Docker without sudo.
Introducing Docker Hub
Introducing Docker Hub
Objectives
At the end of this lesson, you will be able to:
Register for an account on Docker Hub.
Login to your account from the command line.
Sign up for a Docker Hub account
Having a Docker Hub account will allow us to store our images in the registry.
To sign up, you'll go to hub.docker.com and fill out the form.
Note: if you have an existing Index/Hub account, this step is not needed.
Activate your account through e-mail.
Check your e-mail and click the confirmation link.
Introducing Docker Hub
Login
Let's use our new account to login to the Docker Hub!
$ docker login
Username: my_docker_hub_login
Password:
Email: my@email.com
Login Succeeded
Our credentials will be stored in ~/.dockercfg (newer versions of Docker use ~/.docker/config.json).
The .dockercfg configuration file
The ~/.dockercfg configuration file holds our Docker registry authentication
credentials.
{
  "https://index.docker.io/v1/": {
    "auth":"amFtdHVyMDE6aTliMUw5ckE=",
    "email":"education@docker.com"
  }
}
The auth entry is a Base64 encoding of your user name and password.
It should be owned by your user with permissions of 0600.
You should protect this file!
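Note that Base64 is an encoding, not encryption: anyone who can read the file can recover your credentials. For example, the following prints the username:password pair in clear text:
$ echo "amFtdHVyMDE6aTliMUw5ckE=" | base64 --decode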
Introducing Docker Hub
Section summary
We've learned how to:
Register for an account on Docker Hub.
Login to your account from the command line.
Getting started with Images
Getting started with Images
Objectives
At the end of this lesson, you will be able to:
Understand images and image tags.
Search for images.
Download an image.
Understand Docker image namespacing.
Images
What are they?
An image is a collection of files.
Base images (ubuntu, busybox, fedora, etc.) are what you build your own custom
images on top of.
Images are layered, and each layer represents a diff (what changed) from the
previous layer. For instance, you could add Python 3 on top of a base image, then
add Node on top of that.
So what's the difference between Containers and
Images?
Containers represent an encapsulated set of processes based on an image.
You spawn them with the docker run command.
In our "Instant Gratification" example, you created a shiny new container by
executing docker run. It was based on the busybox image, and we ran the
echo command.
Images are like templates or stencils that you can create containers from.
How do you store and manage images?
Images can be stored:
On your Docker host.
In a Docker registry.
You can use the Docker client to manage images.
Searching for images
Searches the Docker Hub registry for images:
$ docker search datreon
NAME                   DESCRIPTION   STARS   OFFICIAL   AUTOMATED
datreon/ssh_image                    0                  [OK]
datreon/node_image                   0                  [OK]
datreon/mongo                        0                  [OK]
datreon/git                          0                  [OK]
datreon/ubuntu:14.04                 0                  [OK]
Images belong to a namespace
There are three namespaces:
Root-like:
ubuntu
User:
datreon/docker-fundamentals-image
Self-hosted:
registry.example.com:5000/my-private-image
Root namespace
The root namespace is for official images. They are put there by Docker Inc., but they
are generally authored and maintained by third parties.
Those images include some barebones distro images, for instance:
ubuntu
fedora
centos
Those are ready to be used as bases for your own images.
User namespace
The user namespace holds images for Docker Hub users and organizations.
For example:
datreon/docker-fundamentals-image
The Docker Hub user is:
datreon
The image name is:
docker-fundamentals-image
Self-Hosted namespace
This namespace holds images which are not hosted on Docker Hub, but on third party
registries.
They contain the hostname (or IP address), and optionally the port, of the registry
server.
For example:
localhost:5000/ubuntu
The remote host and port is:
localhost:5000
The image name is:
ubuntu
Note: self-hosted registries used to be called private registries, but this is misleading!
A self-hosted registry can be public or private.
A registry in the User namespace on Docker Hub can be public or private.
Downloading images
We already downloaded two root images earlier:
The busybox image, implicitly, when we did docker run busybox.
The ubuntu image, explicitly, when we did docker pull ubuntu.
Download a user image.
$ docker pull datreon/node_image
Pulling repository datreon/node_image
8144a5b2bc0c: Download complete
511136ea3c5a: Download complete
8abc22fbb042: Download complete
58394af37342: Download complete
6ea7713376aa: Download complete
71ef82f6ed3c: Download complete
Showing current images
Let's look at what images are on our host now.
$ docker images
REPOSITORY                           TAG       IMAGE ID       CREATED       VIRTUAL SIZE
training/docker-fundamentals-image   latest    8144a5b2bc0c   5 days ago    835 MB
ubuntu                               13.10     9f676bd305a4   7 weeks ago   178 MB
ubuntu                               saucy     9f676bd305a4   7 weeks ago   178 MB
ubuntu                               raring    eb601b8965b8   7 weeks ago   166.5 MB
ubuntu                               13.04     eb601b8965b8   7 weeks ago   166.5 MB
ubuntu                               12.10     5ac751e8d623   7 weeks ago   161 MB
ubuntu                               quantal   5ac751e8d623   7 weeks ago   161 MB
ubuntu                               10.04     9cc9ea5ea540   7 weeks ago   180.8 MB
ubuntu                               lucid     9cc9ea5ea540   7 weeks ago   180.8 MB
ubuntu                               12.04     9cd978db300e   7 weeks ago   204.4 MB
ubuntu                               latest    9cd978db300e   7 weeks ago   204.4 MB
ubuntu                               precise   9cd978db300e   7 weeks ago   204.4 MB
$ docker images [options] [name]
Flag              Explanation
-a, --all         This shows all images, including intermediate layers.
-f, --filter=[]   This provides filter values.
--no-trunc        This doesn't truncate output (shows complete IDs).
-q, --quiet       This shows only the image IDs.
Images and tags
Images can have tags.
Tags define image variants.
When using images, it is always best to be specific.
$ docker tag IMAGE [REGISTRYHOST/][USERNAME/]NAME[:TAG]
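For example, to give a local node image a versioned tag under your own Docker Hub username (the login is a placeholder):
$ docker tag node my_docker_hub_login/node:1.0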
The rmi command
The rmi command removes images. Removing an image also removes all the
underlying layers that it depends on and that were downloaded when it was pulled:
$ docker rmi [OPTIONS] IMAGE [IMAGE...]
Flag          Explanation
-f, --force   This forcibly removes the image (or images).
--no-prune    This does not delete untagged parents.
For example, to remove an image named test from your machine:
$ docker rmi test
The save command
The save command saves an image or repository in a tarball and streams it to
stdout, preserving the parent layers and metadata about the image:
$ docker save -o codeit.tar code.it
The -o flag allows us to specify a file instead of streaming to stdout. It is
used to create a backup that can then be used with the docker load command.
The load command
The load command loads an image from a tarball, restoring the filesystem layers
and the metadata associated with the image:
$ docker load -i codeit.tar
The -i flag allows us to specify a file instead of trying to read a stream from
stdin.
The export command
The export command saves the filesystem of a container as a tarball and streams it
to stdout. It flattens filesystem layers; in other words, it merges all the
filesystem layers. All of the metadata of the image history is lost in this process.
$ sudo docker export red_panda > latest.tar
Here, red_panda is the name of one of my containers.
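To round-trip a flattened container back into an image, the export stream can be fed to docker import (the target image name here is illustrative):
$ cat latest.tar | docker import - red_panda:flat
Unlike save/load, this produces a single-layer image with no history.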
Building Docker Images
Objectives
At the end of this lesson, you will be able to:
Understand the instructions for a Dockerfile.
Create your own Dockerfiles.
Build an image from a Dockerfile.
Override the CMD when you docker run.
Introduction to building images
Docker relies heavily on its storage backend, which communicates with the
underlying Linux filesystem to build and manage the multiple layers that combine
into a single usable image. The primary storage backends that are supported include:
AUFS, BTRFS, Device-mapper, and overlayfs. Each storage backend provides a
fast copy-on-write (COW) system for image management.
Dockerfile and docker build
Let's see how to build our own images using:
A Dockerfile which holds Docker image definitions.
You can think of it as the "build recipe" for a Docker image.
It contains a series of instructions telling Docker how an image is constructed.
The docker build command which builds an image from a Dockerfile.
Our first Dockerfile
FROM specifies a source image for our new image. It's mandatory.
MAINTAINER tells us who maintains this image.
Each RUN instruction executes a command to build our image.
CMD defines the default command to run when a container is launched from this
image.
EXPOSE lists the network ports to open when a container is launched from this
image.
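Putting those instructions together, a first Dockerfile might look like the following minimal sketch (the package, port, and maintainer are illustrative):
FROM ubuntu:14.04
MAINTAINER datreon <datreon@docker.com>
RUN apt-get update
RUN apt-get install -y nodejs
EXPOSE 4000
CMD [ "nodejs" ]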
Building Docker images
Dockerfile usage summary
Dockerfile instructions are executed in order.
Each instruction creates a new layer in the image.
Instructions are cached. If no changes are detected, then the instruction is
skipped and the cached layer is used.
The FROM instruction MUST be the first non-comment instruction.
Lines starting with # are treated as comments.
You can only have one CMD and one ENTRYPOINT instruction in a
Dockerfile.
Building our Dockerfile
We use the docker build command to build images.
$ docker build -t node:1.0 .
The -t flag tags an image.
The . indicates the location of the Dockerfile being built.
Building Docker images
The FROM instruction
Specifies the source image to build this image.
Must be the first instruction in the Dockerfile.
(Except for comments: it's OK to have comments before FROM.)
Can specify a base image:
FROM ubuntu
An image tagged with a specific version:
FROM ubuntu:14.04
A user image:
FROM datreon/ubuntu
Or self-hosted image:
FROM localhost:5000/node
More about FROM
The FROM instruction can be specified more than once to build multiple images.
FROM ubuntu:14.04
. . .
FROM fedora:20
. . .
Each FROM instruction marks the beginning of the build of a new image.
The -t command-line parameter will only apply to the last image.
If the build fails, existing tags are left unchanged.
An optional version tag can be added after the name of the image.
E.g.: ubuntu:14.04.
Building Docker images
The MAINTAINER instruction
The MAINTAINER instruction tells you who wrote the Dockerfile.
MAINTAINER datreon <datreon@docker.com>
It's optional but recommended.
The RUN instruction
The RUN instruction can be specified in two ways.
With shell wrapping, which runs the specified command inside a shell, with /bin/sh
-c:
RUN apt-get update
Or using the exec method, which avoids shell string expansion, and allows execution in
images that don't have /bin/sh:
RUN [ "apt-get", "update" ]
More about the RUN instruction
RUN will do the following:
Execute a command.
Record changes made to the filesystem.
Work great to install libraries, packages, and various files.
RUN will NOT do the following:
Record state of processes.
Automatically start daemons.
If you want to start something automatically when the container runs, you should use
CMD and/or ENTRYPOINT.
Building Docker images
The EXPOSE instruction
The EXPOSE instruction tells Docker what ports are to be published in this image.
EXPOSE 4000
All ports are private by default.
The Dockerfile doesn't control if a port is publicly available.
When you docker run -p <port> ..., that port becomes public.
(Even if it was not declared with EXPOSE.)
When you docker run -P ... (without port number), all ports declared
with EXPOSE become public.
A public port is reachable from other containers and from outside the host.
A private port is not reachable from outside.
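For example, given an image built with EXPOSE 4000 (the image name is illustrative):
$ docker run -d -P my_docker_hub_login/webapp
# port 4000 is published on a random high port
$ docker run -d -p 8080:4000 my_docker_hub_login/webapp
# port 4000 is published as host port 8080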
The ADD instruction
The ADD instruction adds files and content from your host into the image.
ADD /src/node /opt/node
This will add the contents of the /src/node directory to the /opt/node
directory in the image.
Note: /src/node is not relative to the host filesystem, but to the directory
containing the Dockerfile.
Otherwise, a Dockerfile could succeed on host A, but fail on host B.
The ADD instruction can also be used to get remote files.
ADD http://www.example.com/node /opt/
This would download the node file and place it in the /opt directory.
More about the ADD instruction
ADD is cached. If you recreate the image and no files have changed then a cache
is used.
If the local source is a zip file or a tarball it'll be unpacked to the destination.
Sources that are URLs and zipped will not be unpacked.
Any files created by the ADD instruction are owned by root with permissions of
0755.
The VOLUME instruction
The VOLUME instruction will create a data volume mount point at the specified path.
VOLUME [ "/opt/mongo/data" ]
Data volumes bypass the union file system.
In other words, they are not captured by docker commit.
Data volumes can be shared and reused between containers.
We'll see how this works in a subsequent lesson.
It is possible to share a volume with a stopped container.
Data volumes persist until all containers referencing them are destroyed.
The WORKDIR instruction
The WORKDIR instruction sets the working directory for subsequent instructions.
It also affects CMD and ENTRYPOINT, since it sets the working directory used when
starting the container.
WORKDIR /opt/node
You can specify WORKDIR again to change the working directory for further operations.
The ENV instruction
The ENV instruction specifies environment variables that should be set in any container
launched from the image.
ENV node_PORT 4000
This will result in an environment variable being created in any container created from
this image:
node_PORT=4000
You can also specify environment variables when you use docker run.
$ docker run -e node_PORT=8000 -e node_HOST=www.example.com ...
The USER instruction
The USER instruction sets the user name or UID to use when running the image.
It can be used multiple times to change back to root or to another user.
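A minimal sketch (the user name is illustrative):
RUN useradd -m node
USER node
# subsequent RUN/CMD/ENTRYPOINT instructions now execute as node
USER root
# and now we are back to root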
The CMD instruction
The CMD instruction is the default command run when a container is launched from the
image.
CMD [ "node_container", "-d" ]
Means we don't need to specify node_container -d when running the
container.
Instead of:
$ docker run node:1.0 node_container -d
We can just do:
$ docker run node:1.0
More about the CMD instruction
Just like RUN, the CMD instruction comes in two forms. The first executes in a shell:
CMD node_container -d
The second executes directly, without shell processing:
CMD [ "node_container" ,"-d" ]
Overriding the CMD instruction
The CMD can be overridden when you run a container.
$ docker run -t -i node:1.0 /bin/bash
Will run /bin/bash instead of node_container -d, in a container with a random name.
The ENTRYPOINT instruction
The ENTRYPOINT instruction is like the CMD instruction, but arguments given on the
command line are appended to the entry point.
Note: you have to use the "exec" syntax ([ "..." ]).
ENTRYPOINT [ "/bin/ls" ]
If we were to run:
$ docker run datreon/ls -l
Instead of trying to run -l, the container will run /bin/ls -l.
Overriding the ENTRYPOINT instruction
The entry point can be overridden as well.
$ docker run --entrypoint /bin/bash -t -i datreon/ls
How CMD and ENTRYPOINT interact
The CMD and ENTRYPOINT instructions work best when used together.
The ENTRYPOINT specifies the command to be run and the CMD specifies its options.
On the command line we can then potentially override the options when needed.
$ docker run -d node:1.0 -t
This will override the options CMD provided with new flags.
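A sketch of the pattern, following the node examples above:
ENTRYPOINT [ "node_container" ]
CMD [ "-d" ]
With this Dockerfile, docker run node:1.0 executes node_container -d, while docker run node:1.0 -t executes node_container -t.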
The ONBUILD instruction
The ONBUILD instruction is a trigger. It sets instructions that will be executed when
another image is built from the image being built.
This is useful for building images which will be used as a base to build other images.
ONBUILD ADD . /app/src
You can't chain ONBUILD instructions with ONBUILD.
ONBUILD can't be used to trigger FROM and MAINTAINER instructions.
Working with Images
Working with Images
Objectives
At the end of this lesson, you will be able to:
Pull and push images to the Docker Hub.
Explore the Docker Hub.
Understand and create Automated Builds.
Working with images
In the last section we created a new image for our web application.
This image would be useful to the whole team but how do we share it?
Using the Docker Hub!
Pulling images
Earlier in this training we saw how to pull images down from the Docker Hub.
$ docker pull ubuntu:14.04
This will connect to the Docker Hub and download the ubuntu:14.04 image to allow
us to build containers from it.
We can also do the reverse and push an image to the Docker Hub so that others can use
it.
Before pushing a Docker image ...
We push images using the docker push command.
Images are uploaded via HTTP and authenticated.
You can only push images to the user namespace, and with your own username.
This means that you cannot push an image called node.
It has to be called my_docker_hub_login/node.
Name your image properly
Here are different ways to ensure that your image has the right name.
Of course, in the examples below, replace my_docker_hub_login with your actual
login on the Docker Hub.
If you have previously built the node image, you can re-tag it:
$ docker tag node my_docker_hub_login/node
Pushing a Docker image to the Docker Hub
Now that the image is named properly, we can push it:
$ docker push my_docker_hub_login/node
You will be prompted for a user name and password.
(Unless you already did docker login earlier.)
Please login prior to push:
Username: my_docker_hub_login
Password: *********
Email: ...
Login Succeeded
You will log in using the Docker Hub username, password, and email address you created
earlier in the training.
Your authentication credentials
Docker stores your credentials in the file ~/.dockercfg.
$ cat ~/.dockercfg
Will show you the authentication details.
{
  "https://index.docker.io/v1/": {
    "auth":"amFtdHVyMDE6aTliMUw5ckE=",
    "email":"datreon@gmail.com"
  }
}
More about pushing an image
If the image doesn't exist on the Docker Hub, a new repository will be created.
You can push an updated image on top of an existing image. Only the layers
which have changed will be updated.
When you pull down the resulting image, only the updates will need to be
downloaded.
Viewing our uploaded image
Let's sign onto the Docker Hub and review our uploaded image.
Browse to:
https://hub.docker.com/
Logging in to the Docker Hub
Now click the Login link and fill in the Username and Password fields.
And clicking the Log In button.
Your account screen
This is the master account screen. Here you can see your repositories and recent
activity.
Review your webapp repository
Click on the link to your my_docker_hub_login/node repository.
You can see the basic information about your image.
You can also browse to the Tags tab to see image tags, or navigate to a link in
the "Settings" sidebar to configure the repo.
Automated Builds
In addition to pushing images to Docker Hub you can also create special images called
Automated Builds. An Automated Build is created from a Dockerfile in a GitHub
repository.
This provides a guarantee that the image came from a specific source and allows you to
ensure that any downloaded image is built from a Dockerfile you can review.
Creating an Automated build
To create an Automated Build click on the Add Repository button on your main
account screen and select Automated Build.
Connecting your GitHub account
If this is your first Automated Build you will be prompted to connect your GitHub
account to the Docker Hub.
Working with Images
Select specific GitHub repository
You can then select a specific GitHub repository.
It must contain a Dockerfile.
If you don't have a repository with a Dockerfile, you can fork
https://github.com/docker-training/staticweb, for instance.
Configuring Automated Build
You can then configure the specifics of your Automated Build and click the Create
Repository button.
Automated Building
Once configured, your Automated Build will automatically start building an image from
the Dockerfile contained in your Git repository.
Every time you make a commit to that repository a new version of the image will be
built.
Section summary
We've learned how to:
Pull and push images to the Docker Hub.
Explore the Docker Hub.
Understand and create Automated Builds.
A Container to call your own
Containers
Containers are created with the docker run command.
Containers have two modes they run in:
Daemonized.
Interactive.
Daemonized containers
Run in the background.
The docker run command is launched with the -d command line flag.
The container runs until it is stopped or killed.
Interactive containers
Run in the foreground.
Attach a pseudo-terminal, i.e. let you get input and output from the container.
The container also runs until its controlling process stops, or until it is stopped or
killed.
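For example (sleep here is just a stand-in for a long-running process):
$ docker run -d ubuntu:14.04 sleep 300
# daemonized: prints a container ID and returns immediately
$ docker run -i -t ubuntu:14.04 /bin/bash
# interactive: attaches a shell in the foreground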
The daemon command
While starting the daemon, you can run it with arguments that control
the Domain Name System (DNS) configuration, storage drivers, and execution
drivers for the containers:
$ docker -d -D -e lxc -s btrfs --dns 8.8.8.8 --dns-search example.com
The following table describes the various flags:
Flag                       Explanation
-d                         This runs Docker as a daemon.
-D                         This runs Docker in debug mode.
-e [option]                This is the execution driver to be used. The default execution driver is native, which uses libcontainer.
-s [option]                This forces Docker to use a different storage driver. The default value is "", for which Docker uses AUFS.
--dns [option(s)]          This sets the DNS server (or servers) for all Docker containers.
--dns-search [option(s)]   This sets the DNS search domain (or domains) for all Docker containers.
-H [option(s)]             This is the socket (or sockets) to bind to. It can be one or more of tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.
The run command
The run command is the command that we will be using most frequently.
It is used to run Docker containers:
$ docker run [options] IMAGE [command] [args]
The following table describes the various flags:
Flags               Explanation
-a, --attach=[]     Attach to the stdin, stdout, or stderr files (standard input, output, and error files).
-d, --detach        This runs the container in the background.
-i, --interactive   This runs the container in interactive mode (keeps the stdin file open).
-t, --tty           This allocates a pseudo-tty (which is required if you want to attach to the container's terminal).
-p, --publish=[]    This publishes a container's port to the host (ip:hostport:containerport).
--rm                This automatically removes the container when it exits (it cannot be used with the -d flag).
--privileged        This gives additional privileges to this container.
-v, --volume=[]     This bind mounts a volume (from host => /host:/container; from docker => /container).
--volumes-from=[]   This mounts volumes from the specified containers.
-w, --workdir=""    This is the working directory inside the container.
--name=""           This assigns a name to the container.
-h, --hostname=""   This assigns a hostname to the container.
-u, --user=""       This is the username or UID the container should run as.
-e, --env=[]        This sets environment variables.
--env-file=[]       This reads environment variables from a newline-delimited file.
--dns=[]            This sets custom DNS servers.
--dns-search=[]     This sets custom DNS search domains.
--link=[]           This adds a link to another container (name:alias).
-c, --cpu-shares=0  This is the relative CPU share for this container.
--cpuset=""         These are the CPUs on which to allow execution; numbering starts at 0 (for example, 0-3).
-m, --memory=""     This is the memory limit for this container (<number><b|k|m|g>).
--restart=""        (v1.2+) This specifies a restart policy in case the container crashes.
--cap-add=""        (v1.2+) This grants a capability to a container (refer to Chapter 4, Security Best Practices).
--cap-drop=""       (v1.2+) This drops a capability from a container (refer to Chapter 4, Security Best Practices).
--device=""         (v1.2+) This mounts a device on a container.
Launching a container
Let's create a new container from the ubuntu image:
$ docker run -i -t ubuntu:14.04 /bin/bash
root@268e59b5754c:/#
The -i flag sets Docker's mode to interactive.
The -t flag creates a pseudo terminal (or PTY) in the container.
We've specified the ubuntu:14.04 image from which to create our container.
We passed a command to run inside the container, /bin/bash.
That command has launched a Bash shell inside our container.
The hexadecimal number after root@ is the container's identifier.
(The actual ID is longer than that. Docker truncates it for convenience, just like
git or hg will show shorter IDs instead of full hashes.)
Inside our container
root@268e59b5754c:/#
Let's run a command.
root@268e59b5754c:/# uname -rn
268e59b5754c 3.10.40-50.136.amzn1.x86_64
Now let's exit the container.
root@268e59b5754c:/# exit
After we run exit the container stops.
Check the kernel version and hostname again, outside the container:
[docker@ip-172-31-47-238 ~]$ uname -rn
ip-172-31-47-238.ec2.internal 3.10.40-50.136.amzn1.x86_64
The kernel version is the same. Hostname is different.
Container status
You can see container status using the docker ps command.
e.g.:
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
What? What happened to our container?
The docker ps command only shows running containers.
Last container status
Since the container has stopped we can show it by adding the -l flag. This shows the
last run container, running or stopped.
$ docker ps -l
CONTAINER ID   IMAGE          COMMAND     CREATED         STATUS   PORTS   NAMES
a2d4b003d7b6   ubuntu:12.04   /bin/bash   5 minutes ago   Exit 0           sad_pare
The status of all containers
We can also use the docker ps command with the -a flag. The -a flag tells Docker
to list all containers both running and stopped.
$ docker ps -a
CONTAINER ID   IMAGE                          COMMAND                CREATED         STATUS          PORTS                               NAMES
a2d4b003d7b6   ubuntu:12.04                   /bin/bash              5 minutes ago   Exit 0                                              sad_pare
c870de4523bf   nathanleclaire/devbox:latest   /bin/bash -l           16 hours ago    Up 32 minutes                                       b2d
acc65c24dceb   training/webapp:latest         python -m SimpleHTTP   16 hours ago    Up 41 minutes   5000/tcp, 0.0.0.0:49154->8000/tcp   furious_perlman
833daa3d9708   training/webapp:latest         python -m SimpleHTTP   16 hours ago    Up 44 minutes                                       sharp_lovelace
6fb68e7b7451   nathanleclaire/devbox:latest   /bin/bash -l           16 hours ago    Up 22 minutes                                       bar
What does docker ps tell us?
We can see a lot of data returned by the docker ps command.
CONTAINER ID   IMAGE          COMMAND     CREATED         STATUS   PORTS   NAMES
a2d4b003d7b6   ubuntu:12.04   /bin/bash   5 minutes ago   Exit 0           sad_pare
Let's focus on some items:
CONTAINER ID is a unique identifier generated by Docker for our container.
You can use it to manage the container (e.g. stop it, examine it...)
IMAGE is the image used to create that container.
We did docker run ubuntu, and Docker selected ubuntu:12.04.
COMMAND is the exact command that we asked Docker to run: /bin/bash.
You can name your containers (with the --name option).
If you don't, Docker will generate a random name for you, like sad_pare.
That name shows up in the NAMES column.
Get the ID of the last container
In the next slides, we will use a very convenient command:
$ docker ps -l -q
ee9165307acc
-l means "show only the last container started".
-q means "show only the short ID of the container".
This shows the ID (and only the ID!) of the last container we started.
Flag           Explanation
-a, --all      This shows all containers, including stopped ones.
-q, --quiet    This shows only container ID parameters.
-s, --size     This prints the sizes of the containers.
-l, --latest   This shows only the latest container (including stopped containers).
-n=""          This shows the last n containers (including stopped containers). Its default value is -1.
--before=""    This shows the containers created before the specified ID or name. It includes stopped containers.
--after=""     This shows the containers created after the specified ID or name. It includes stopped containers.
Inspecting our container
You can also get a lot more information about our container by using the docker
inspect command.
$ docker inspect $(docker ps -l -q) | less
[{
"ID": "<yourContainerID>",
"Created": "2014-03-15T22:05:42.73203576Z",
"Path": "/bin/bash",
"Args": [],
"Config": {
Inspecting something specific
We can also use the docker inspect command to find specific things about our
container, for example:
$ docker inspect --format='{{.State.Running}}' $(docker ps -l -q)
false
Here we've used the --format flag and specified a single value from our inspect hash
result. This will return its value, in this case a Boolean value for the container's status.
(Fun fact: the argument to --format is a template using the Go text/template
package.)
Being lazy
We used $(docker ps -q -l) in the previous examples.
That's a lot to type!
From now on, we will use the ID of the container.
In the following examples, you should use the ID of your container.
$ docker inspect <yourContainerID>
Wait, that's still a lot to type!
Docker lets us type just the first characters of the ID.
$ docker inspect a2d4
The start command
With docker run, the container's state is preserved on exit unless the container is
explicitly removed. The docker start command starts a stopped container:
$ docker start [-i] [-a] <container(s)>
The stop command
The stop command stops a running container by sending the SIGTERM signal and
then the SIGKILL signal after a grace period:
$ docker stop 'container_id'
SIGTERM and SIGKILL are Unix signals. A signal is a form of
interprocess communication used in Unix, Unix-like, and other
POSIX-compliant operating systems. SIGTERM signals the
process to terminate. The SIGKILL signal is used to forcibly
kill a process.
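The grace period defaults to 10 seconds and can be changed with the -t flag:
$ docker stop -t 30 'container_id'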
Restarting our container
You can (re-)start a stopped container using its ID.
$ docker restart <yourContainerID>
<yourContainerID>
Or using its name.
$ docker restart sad_pare
sad_pare
The container will be restarted using the same options you launched it with.
Attaching to our restarted container
Once the container is started you can attach to it. In our case this will attach us to the
Bash shell we launched when we ran the container initially.
$ docker attach <yourContainerID>
root@<yourContainerID>:/#
Note: if the prompt is not displayed after running docker attach, just press "Enter"
one more time. The prompt should then appear.
You can also attach to the container using its name.
$ docker attach sad_pare
root@<yourContainerID>:/#
If we ran exit here, the container would stop again because the /bin/bash process
would end.
You can detach from the running container using <CTRL+p><CTRL+q>.
Starting and attaching shortcut
There's also a shortcut we can use that combines the docker start and docker
attach commands.
$ docker start -a <yourContainerID>
The -a flag adds the function of docker attach to the docker
start command.
The rm command
The rm command removes a stopped or killed Docker container:
$ docker rm 'container_id'
The logs command
The logs command shows the logs of a container:
$ docker logs 'container_id'
The top command
The top command shows the running processes in a container and their statistics,
mimicking the Unix top command:
$ docker top 'container_id'
The kill command
The kill command kills a container, sending the SIGKILL signal to the process
running in the container:
$ docker kill 'container_id'
The diff command
The diff command shows the differences between the container and the image it is
based on:
$ docker diff 'container_id'
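Each line of the output is prefixed with A (added), C (changed), or D (deleted). For example (paths are illustrative):
$ docker diff 'container_id'
C /var
C /var/log
A /var/log/now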
A Container to Call your own : Configuring Containers
Container name
By default, Docker randomly names your container by combining an adjective with
the name of a famous person. This results in names like ecstatic-babbage and
serene-albattani. If you want to give your container a specific name, you can do
so using the --name argument.
$ docker run -it --name="awesome-service" ubuntu:latest
CPU shares
Docker thinks of CPU in terms of CPU shares. The computing power of all the CPU
cores in a system is considered to be the full pool of shares. 1024 is the number that
Docker assigns to represent the full pool. By configuring a container's CPU shares,
you can dictate how much time the container gets to use the CPU. If you want the
container to be able to use at most half of the computing power of the system, then
you would allocate it 512 shares. Note that these are not exclusive shares, meaning
that assigning all 1024 shares to a container does not prevent all other containers
from running.
$ docker run -ti -c 1024 ubuntu:14.04 /bin/bash
Memory
We can control how much memory a container can access in a manner similar to
constraining the CPU. There is, however, one fundamental difference: while constraining
the CPU only impacts the application's priority for CPU time, the memory
limit is a hard limit. Even on an unconstrained system with 96 GB of free memory, if
we tell a container that it may only have access to 24 GB, then it will only ever get to
use 24 GB regardless of the free memory on the system.
$ docker run -ti -m 1G ubuntu:14.04 /bin/bash
Working with Volumes
Working with Volumes
Objectives
At the end of this lesson, you will be able to:
Create containers holding volumes.
Share volumes across containers.
Share a host directory with one or many containers.
Working with Volumes
Docker volumes can be used to achieve many things, including:
Bypassing the copy-on-write system to obtain native disk I/O performance.
Bypassing copy-on-write to leave some files out of docker commit.
Sharing a directory between multiple containers.
Sharing a directory between the host and a container.
Sharing a single file between the host and a container.
Volumes are special directories in a container
Volumes can be declared in two different ways.
Within a Dockerfile, with a VOLUME instruction.
VOLUME /src/node
On the command line, with the -v flag for docker run.
$ docker run -d -v /src/node datreon/node
In both cases, /src/node (inside the container) will be a volume.
Volumes bypass the copy-on-write system
Volumes act as passthroughs to the host filesystem.
The I/O performance on a volume is exactly the same as I/O performance on
the Docker host.
When you docker commit, the content of volumes is not brought into the
resulting image.
If a RUN instruction in a Dockerfile changes the content of a volume, those
changes are not recorded either.
Volumes can be shared across containers
You can start a container with exactly the same volumes as another one.
The new container will have the same volumes, in the same directories.
They will contain exactly the same thing, and remain in sync.
Under the hood, they are actually the same directories on the host anyway.
$ docker run --name=alpha -t -i -v /var/log ubuntu bash
root@99020f87e695:/# date >/var/log/now
In another terminal, let's start another container with the same volume.
$ docker run --volumes-from alpha ubuntu cat /var/log/now
Fri May 30 05:06:27 UTC 2014
Volumes exist independently of containers
If a container is stopped, its volumes still exist and are available.
In the last example, it doesn't matter if container alpha is running or not.
Managing volumes yourself (instead of letting Docker
do it)
In some cases, you want a specific directory on the host to be mapped inside the
container:
You want to manage storage and snapshots yourself.
(With LVM, or a SAN, or ZFS, or anything else!)
You have a separate disk with better performance (SSD) or resiliency (EBS) than
the system disk, and you want to put important data on that disk.
You want to share your source directory between your host (where the source
gets edited) and the container (where it is compiled or executed).
Sharing a directory between the host and a container
$ docker run -t -i -v /src/node/code:/src/node ubuntu:14.04 /bin/bash
This will mount the /src/node/code directory from the host into the container at /src/node.
It defaults to mounting read-write, but we can also mount read-only.
$ docker run -t -i -v /src/node/code:/src/node:ro ubuntu:14.04 /bin/bash
Checking volumes defined by an image
Wondering if an image has volumes? Just use docker inspect:
$ docker inspect datreon/node
[{
"config": {
. . .
"Volumes": {
"/src/node": {}
},
. . .
}]
Checking volumes used by a container
To see which paths are actually volumes, and what they are bound to, use docker
inspect (again):
$ docker inspect cadb250b18be
[{
"ID": "<yourContainerID>3d483d3e1a647edbd5683fd6aeae0dd87810d5f523e2d7cb781c",
. . .
"Volumes": {
"/src/node": "/var/lib/docker/aufs/dir/
f4280c5b6207ed531efd4cc673ff620cef2a7980f747dbbcca001db61de04468"
},
"VolumesRW": {
"/var/webapp": true
},
}]
We can see that our volume is present on the file system of the Docker host.
Sharing a single file between the host and a container
The same -v flag can be used to share a single file.
$ echo 4815162342 > /tmp/numbers
$ docker run -t -i -v /tmp/numbers:/numbers ubuntu bash
root@274514a6e2eb:/# cat /numbers
4815162342
All modifications done to /numbers in the container will also change /tmp/numbers
on the host!
It can also be used to share a socket.
$ docker run -t -i -v /var/run/docker.sock:/docker.sock ubuntu bash
This pattern is frequently used to give access to the Docker socket to a given
container.
Warning: when using such mounts, the container gains access to the host. It can
potentially do bad things.
Section summary
We've learned how to:
Create containers holding volumes.
Share volumes across containers.
Share a host directory with one or many containers.
Container Networking Basics
Container Networking Basics
Objectives
At the end of this lesson, you will be able to:
Expose a network port.
Manipulate container networking basics.
Find a container's IP address.
Stop and remove a container.
Bridge network in docker
Container Networking Basics
Now that we've seen the basics of a daemonized container, let's look at a more complex (and
useful!) example.
To do this we're going to build a web server in a daemonized container.
Running our web server container
Let's start by creating our new container.
$ docker run -it -p 8080 --name node_container datreon/node
72bbff4d768c52d6ce56fae5d45681c62d38bc46300fc6cc28a7642385b99eb5
We've used the -it flag to run the container.
The -p flag exposes port 8080 in the container.
We've used the datreon/node image, which happens to have node & npm installed.
Checking the container is running
Let's look at our running container.
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS                     NAMES
72bbff4d768c   ...     ...       ...       ...      0.0.0.0:49153->8080/tcp   ...
The -p flag maps a random high port, here 49153, to port 8080 inside the
container. We can see it in the PORTS column.
The other columns have been scrubbed out for legibility here.
The docker port shortcut
We can also use the docker port command to find our mapped port.
$ docker port <yourContainerID> 8080
0.0.0.0:49153
We specify the container ID and the port number for which we wish to return a mapped
port.
We can also find the mapped network port using the docker inspect command.
$ docker inspect -f "{{ .HostConfig.PortBindings }}" <yourContainerID>
{"8080/tcp":[{"HostIp":"0.0.0.0","HostPort":"49153"}]}
Viewing our web server
We open a web browser and view our web server on <yourHostIP>:49153.
Container networking basics
You can map ports automatically.
You can also manually map ports.
$ docker run -it -p 8080:8080 --name node_container datreon/node
The -p flag takes the form:
host_port:container_port
This maps port 8080 on the host to port 8080 inside the container.
Note that this style prevents you from spinning up multiple instances of the
same image (the ports will conflict).
Containers also have their own private IP address.
Finding the container's IP address
We can use the docker inspect command to find the IP address of the container.
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <yourContainerID>
172.17.0.3
We've again used the --format flag which selects a portion of the output
returned by the docker inspect command.
The default network used for Docker containers is 172.17.0.0/16.
If it is already in use on your system, Docker will pick another one.
Pinging our container
We can test connectivity to the container using the IP address we've just discovered.
Let's see this now by using the ping tool.
$ ping 172.17.0.3
64 bytes from 172.17.0.3: icmp_req=1 ttl=64 time=0.085 ms
64 bytes from 172.17.0.3: icmp_req=2 ttl=64 time=0.085 ms
64 bytes from 172.17.0.3: icmp_req=3 ttl=64 time=0.085 ms
Stopping the container
Now let's stop the running container.
$ docker stop <yourContainerID>
Removing the container
Let's be neat and remove our container too.
$ docker rm <yourContainerID>
Bridge Networking in docker
When Docker starts, it creates a virtual
bridge interface named docker0 on the host system.
This interface allows Docker to allocate a
virtual subnet for use among the
containers it will run.
We can check this with:
$ sudo apt-get install bridge-utils
# install the required tool
$ brctl show docker0
# shows all the running containers' interfaces
$ sudo iptables -L -n
# check the rules affecting the running containers
Check the existing networks:
$ docker network ls
Connecting Containers
Connecting containers
Objectives
At the end of this lesson, you will be able to:
Launch named containers.
Create links between containers.
Use names and links to communicate across containers.
Use these features to decouple app dependencies and reduce complexity.
Connecting containers
We will learn how to use names and links to expose one container's port(s) to
another.
Why? So each component of your app (e.g., DB vs. web app) can run
independently with its own dependencies.
What we've got planned
We're going to get two images: a node image and a MongoDB image.
We're going to start containers from each image.
We're going to link the containers running our node backend and MongoDB
using Docker's link primitive.
The datreon/mongodb Dockerfile
FROM ubuntu:14.04
MAINTAINER datreon <datreon@gmail.com>
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb.list
RUN apt-get update
RUN apt-get install -y mongodb-org
# Create the MongoDB data directory
RUN mkdir -p /data/db
RUN mkdir -p /mongo/log
# Expose port 27017 from the container to the host
EXPOSE 27017
# Set usr/bin/mongod as the dockerized entry-point application
ENTRYPOINT ["/usr/bin/mongod"]
The datreon/mongodb Dockerfile in detail
Based on the ubuntu:14.04 base image.
Adds the required key and the repository.
Updates ubuntu:14.04 and installs the required packages.
Makes directories for data and logs.
Exposes port 27017.
Sets an entry point.
More about our EXPOSE instruction
We saw EXPOSE earlier, when we exposed port 4000 for our node application. Here we
see it again.
EXPOSE 27017
This time we will expose a port to our node application in another container, not to the
host. But the Dockerfile syntax is identical.
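As a sketch of the workflow we have planned (the link alias db is illustrative):
$ docker run -d --name mongodb datreon/mongodb
$ docker run -it --link mongodb:db datreon/node
Inside the node container, Docker then injects environment variables (such as DB_PORT_27017_TCP_ADDR) and an /etc/hosts entry for the alias, describing the linked MongoDB container's exposed ports.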
Connecting Containers
Section summary
We've learned how to:
Launch named containers.
Create links between containers.
Use names and links to communicate across containers.
Use these features to decouple app dependencies and reduce complexity.