
DOCKER: Real-time Approach
[Docker Machine, Docker Engine, Docker Client, Docker Hub,
Docker Compose, Docker Swarm, Docker for AWS, Kitematic]

K.RAMESH

ASPIRE Technologies
#501, 5th Floor, Mahindra Residency, Maithrivanam Road, Ameerpet, Hyderabad

Ph: 07799 10 8899, 07799 20 8899

E-Mail: ramesh@java2aspire.com Website: www.java2aspire.com


1. INTRODUCTION
Docker is software for creating containers that can run on any machine, i.e., they are portable across
environments such as development, testing, production, and the cloud.

Dedicated Hosts vs Virtual Machines vs Containers


In the diagram below, the Dedicated Machine runs three applications, all sharing the same software stack
(same Host OS, JVM, Tomcat). Running multiple applications on a single machine can lead to the "noisy
neighbor" problem: there is no isolation among the applications, so one application may consume
resources needed by the others, degrading their performance.

Virtual Machines allow us to run the three applications in three different software stacks (different Guest
OS, JVM, Tomcat). Although VMs are better than a Dedicated Machine, their drawback is that each Guest
OS runs on top of the Host OS, and a Guest OS is still heavyweight.

In the diagram below, three applications are deployed into three containers with different software stacks
(different bootstrap OS, JVM, Tomcat). A Docker container does not need a Guest OS; it needs only a
bootstrap OS, which is much lighter. Since a complete operating system is not required every time we
bring up a new container, the overall size of containers stays small. Hence the advantages of containers:
they are lightweight, small, compact, and easy to ship and run on any machine.

Installation
We can install Docker on Windows, MacOS and Linux.
Docker runs natively on Linux and MacOS, so on those systems the installation is straightforward.
However, on Windows 10 it operates a little differently, since Docker does not run natively on Windows.
A virtualization platform such as Hyper-V (from Microsoft), VMware (from Dell EMC), or VirtualBox
(from Oracle) is therefore needed to install a Linux virtual machine.
Windows 10 comes in Enterprise, Professional, and Home editions. The Professional and Enterprise
editions provide Hyper-V by default, a virtualization platform on which multiple virtual machines can
run.
If we install Docker on Windows 10 Enterprise or Professional editions then Docker installation uses
Hyper-V virtualization platform to install lightweight Linux VM and then installs Docker Engine inside
Linux VM.
If we install Docker on Windows 10 Home edition or Windows 8 which doesn’t have Hyper-V, then
Docker installation automatically installs Oracle VirtualBox virtualization platform to install lightweight
Linux VM and subsequently installs Docker Engine into Linux VM.
So, we need to download appropriate Docker software depending on Windows edition and version:
1) In case of Windows 10 Enterprise or Professional, we need to download Docker for Windows
installer (Docker for Windows Installer.exe) from the Docker Store
at https://store.docker.com/editions/community/docker-ce-desktop-windows/ .


2) In case of Windows 10 Home edition or Windows 8 or Windows 7, we need to download the Docker
Toolbox installer (DockerToolbox.exe) from
https://docs.docker.com/toolbox/toolbox_install_windows/
Note: VirtualBox is part of Docker Toolbox, so there is no need to download and install
VirtualBox explicitly.
Docker Toolbox includes these Docker tools:
1. Docker Command Line Interface (Docker CLI / Docker Client / Docker Terminal)
2. Docker Engine for running the docker commands
3. Docker Machine tool for running docker-machine commands
4. Docker Compose for running the docker-compose commands
5. Docker Swarm for creating clusters
6. Kitematic, the Docker GUI
7. Oracle VirtualBox

Docker Client
Once the Docker installation is done, double-click the Docker Quickstart Terminal icon on the desktop
to start the Docker Client.
The terminal connects to the docker-machine named "default", which runs at 192.168.99.100 on my
laptop.

The below command is used to check installation:


$ docker version


The $ docker version command prints two sections: client and server. On Windows, the client section
shows Windows and the server section shows Linux. This is because the Windows version of Docker
downloads and runs a virtual machine with a small, lightweight operating system based on Alpine Linux.
This VM is installed using the Hyper-V virtualization platform on Windows 10 Professional or Enterprise,
and using the Oracle VirtualBox virtualization platform on Windows 10 Home, Windows 8, or Windows 7.

$ docker --help
Usage: docker COMMAND
Management Commands:
container Manage containers
image Manage images
node Manage Swarm nodes
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
volume Manage volumes

Commands:
build Build an image from a Dockerfile
commit Create a new image from a container's changes
create Create a new container
exec Run a command in a running container
images List images

info Display system-wide information
inspect Return low-level information on Docker objects
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rm Remove one or more containers
run Run a command in a new container
start Start one or more stopped containers
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
version Show the Docker version information

The below command is used to know system-wide information:


$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.06.0-ce

For further help with a particular command, we can run the following:
$ docker <command> --help
$ docker image --help
Usage: docker image COMMAND
Commands:
build Build an image from a Dockerfile
history Show the history of an image
inspect Display detailed information on one or more images
ls List images
prune Remove unused images
pull Pull an image or a repository from a registry

push Push an image or a repository to a registry
rm Remove one or more images
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

The below command is used to pull nginx image from docker hub (https://hub.docker.com/):
$ docker pull nginx:alpine
alpine: Pulling from library/nginx
b1f00a6a160c: Pull complete
ec6f7dec8de2: Pull complete
a803070bff46: Pull complete
3871f3a05be4: Pull complete
Digest: sha256:dc5f67a48da730d67bf4bfb8824ea8a51be26711de090d6d5a1ffff2723168a3
Status: Downloaded newer image for nginx:alpine

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx alpine 5c6da346e3d6 4 days ago 15.5MB

The following command is used to inspect image:


$ docker image inspect nginx:alpine
[
    {
        "Id": "sha256:5c6da346e3d6c5839a0cefcf6ec6c276d0103e0f6fbb17c28bd7ccd6915bbd08",
        "RepoTags": [
            "nginx:alpine"
        ],
        ...
        "ExposedPorts": {
            "80/tcp": {}
        },
        ...
    }
]

We can run a container using either the $ docker container run or $ docker run command:
$ docker container --help
Usage: docker container COMMAND
Commands:
commit Create a new image from a container's changes
exec Run a command in a running container

inspect Display detailed information on one or more containers
logs Fetch the logs of a container
ls List containers
port List port mappings or a specific mapping for the container
prune Remove all stopped containers
rm Remove one or more containers
run Run a command in a new container
start Start one or more stopped containers
stop Stop one or more running containers
top Display the running processes of a container

The below command is used to run a container in the default docker-machine:


$ docker container run -p 80:80 nginx:alpine
Press Ctrl+C to return to the prompt.
We can check containers list using:
$ docker container ls

$ docker-machine ip default
192.168.99.100
Try http://192.168.99.100:80 in web browser.

The following command is used to interact with container:


$ docker container exec -it fervent_wiles /bin/ash [in case of alpine]


Note: Containers have their own filesystem, IP address, network interfaces, internal processes,
namespaces, OS libraries, application binaries, dependencies, and other application configurations.
Note: A Docker container is a running instance of a Docker image.

The following command is used to inspect container:


$ docker container inspect fervent_wiles
[
{
"Id": "2f5304a85952a65013e945fe8fef824e5cf0955bc309b70814a3e088e9671511",
"Created": "2017-10-24T07:16:33.840850663Z",
"Path": "nginx",
"Args": [
"-g",
"daemon off;"
],
"State": {
"Status": "running",

}

The following command is used to force remove i.e., stop and remove running container(s):
$ docker container rm -f fervent_wiles
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

The following is used to pull the latest tag of the nginx image:


$ docker pull nginx:latest
Using default tag: latest
latest: Pulling from library/nginx
bc95e04b23c0: Pull complete
110767c6efff: Pull complete
f081e0c4df75: Pull complete
Digest: sha256:004ac1d5e791e705f12a17c80d7bb1e8f7f01aa7dca7deee6e65a03465392072
Status: Downloaded newer image for nginx:latest

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 40960efd7b8f 3 days ago 108MB
nginx alpine 5c6da346e3d6 4 days ago 15.5MB

$ docker container run -p 80:80 nginx:latest


Press Ctrl+C to return to the prompt.

$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ac649ce37b0 nginx:latest "nginx -g 'daemon ..." 2 minutes ago Up 18 seconds 0.0.0.0:80->80/tcp relaxed_davinci

$ docker-machine ip default
192.168.99.100
Try http://192.168.99.100:80 in web browser.

The following command is used to interact with container:


$ docker container exec -it relaxed_davinci /bin/bash [In case of Debian]
root@26055d0c317b:/#


The below command is used to remove container:


$ docker container rm -f relaxed_davinci

The below command is used to remove the image:


$ docker image rm -f nginx:latest

Docker Ecosystem
There are many tools supplied and supported by Docker.
Docker Engine
This is the core of Docker; when people say "Docker" they typically mean Docker Engine. The Docker
Engine is a client-server application made up of the Docker daemon, a REST API, and the Command Line
Interface (CLI). The CLI talks to the Docker daemon through the REST API. The daemon accepts docker
commands from the CLI, such as docker image ls, docker container ls, docker ps, and so on, runs them,
and sends the response back to the CLI.

There are currently two editions of Docker Engine: Docker Enterprise Edition (Docker EE) and Docker
Community Edition (Docker CE).
Note: Docker daemon (means Docker Engine) runs inside Docker Machine (means Docker Host).
Note: Docker Client is a CLI used to interact with daemon.

Docker Machine
Docker Machine is also called Docker Host.
A Docker Machine is simply a Linux VM with Docker Engine installed inside it.
We can create Docker machine by using Docker Machine tool.


Docker Machine tool is installed along with other Docker products when we install the Docker for Mac,
Docker for Windows, or Docker Toolbox.
Docker Machine tool creates Docker Host with Docker Engine.
We can use Docker Machine tool to install Docker Host on one or more virtual systems. These virtual
systems can be local (using VirtualBox on Windows) or remote (cloud providers).
By default, the Docker installation comes with a docker-machine named "default".
We can create docker machines using $ docker-machine command.
$ docker-machine --help
Usage: docker-machine.exe [OPTIONS] COMMAND [arg...]
Commands:
config Print the connection config for machine
create Create a machine
env Display the commands to set up the environment for the Docker client
inspect Inspect information about a machine
ip Get the IP address of a machine
ls List machines
regenerate-certs Regenerate TLS Certificates for a machine
restart Restart a machine
rm Remove a machine
ssh Log into or run a command on a machine with SSH.
start Start a machine
status Get the status of a machine
stop Stop a machine

$ docker-machine create --driver virtualbox aspire-machine-practice


$ docker-machine create --driver hyperv aspire-machine-practice


Note: The driver determines where the virtual machine is created. For example, on a local Windows
system, the driver is typically Oracle VirtualBox.

$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
aspire-machine-practice - virtualbox Running tcp://192.168.99.101:2376 v17.10.0-ce
default * virtualbox Running tcp://192.168.99.100:2376 v17.06.0-ce

Note: The Docker Host IP address is the IP address of the VM. Port 2376 is the default port on which the
Docker daemon listens for TLS connections.

The following command is used to get the docker-machine environment variables used by the CLI:
$ docker-machine env aspire-machine-practice


The following command is used to connect with aspire-machine-practice:


$ eval $(docker-machine env aspire-machine-practice)
$ docker info | grep Name
Name: aspire-machine-practice

The following command is used to connect to docker-machine:


$ docker-machine ssh aspire-machine-practice
docker@aspire-machine-practice:~$
docker@aspire-machine-practice:~$ pwd
/home/docker
docker@aspire-machine-practice:~$ exit

Execute below docker commands to pull image and run container:


$ docker pull nginx:latest
$ docker images
The following command is used to run a container in detached mode, assign a container name, and map
a port number:
$ docker container run -d --name NGINX -p 9090:80 nginx:latest
$ docker container ls

Check IP address of aspire-machine-practice using below command:


$ docker-machine ip aspire-machine-practice
192.168.99.101
Try http://192.168.99.101:9090 in a web browser. It should display the nginx home page.

The following command is used to get container logs:


$ docker container logs NGINX


The following command is used to stop container:


$ docker container stop NGINX

The following command is used to print all containers:


$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ac649ce37b0 nginx:latest "nginx -g 'daemon ..." 2 minutes ago Exited (255) 1 minute ago 0.0.0.0:80->80/tcp NGINX

The following command is used to start same container:


$ docker container start NGINX

The $ docker container prune is used to remove all stopped containers.


$ docker container stop NGINX
$ docker container prune

The following command is used to force remove i.e., stop and remove running container(s):
$ docker container rm -f NGINX

The following command is used to stop docker-machine:


$ docker-machine stop aspire-machine-practice


Stopping "aspire-machine-practice"...
Machine "aspire-machine-practice" was stopped.

The following command is used to start docker-machine:


$ docker-machine start aspire-machine-practice
Starting "aspire-machine-practice"...
(aspire-machine-practice) Check network to re-create if needed...
(aspire-machine-practice) Waiting for an IP...
Machine "aspire-machine-practice" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. We may need to re-run the `docker-machine env`
command.

If IP address is changed then $docker-machine ls command shows “Unable to query docker version: Get
https://192.168.99.102:2376/v1.15/version: x509: certificate is valid for 192.168.99.101, not
192.168.99.102”. Hence, we have to re-generate certificates using below command:
$ docker-machine regenerate-certs aspire-machine-practice
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

Connect to docker-machine using docker-machine eval command:


$ eval $(docker-machine env aspire-machine-practice)

Docker Compose
Docker Compose is used to define and create multiple containers at a time.
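As a sketch, the containers are described in a docker-compose.yml file; the service names and the redis image below are illustrative examples, not taken from this document:

```yaml
# docker-compose.yml (illustrative sketch)
version: "3"
services:
  web:
    # nginx container, publishing host port 8080 to container port 80
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    # a second container started alongside web
    image: redis:alpine
```

Running $ docker-compose up -d in the folder containing this file creates and starts both containers at once.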

Docker Hub
A repository for Docker images.

Docker Registry
An open-source application for storing and distributing Docker images; Docker Hub is the public
registry hosted by Docker.

Docker Store
A storefront for official Docker images and plugins as well as licensed products.

Docker Swarm
A multi-host-aware orchestration tool.

Docker Cloud
A hosted service from Docker for building, testing, and managing containerized applications.

Developers vs Operations vs Enterprises

Developers use Docker to eliminate the "works on my machine" problem when collaborating on code
with co-workers. The Java version on a developer's Windows machine may differ from the one on the
Linux server that hosts the production code. Even if the versions match, file permissions may differ
between the operating systems. All of this comes to a head when it is time for the developer to deploy
the code to the host and it doesn't work. Should the production environment be configured to match
the developer's machine, or vice versa? In an ideal world, everything should be consistent from the
developer's machine all the way through to the production servers; however, this is difficult to achieve.
The Docker solution: a developer can wrap their code in a container and run it anywhere, in any
environment.

Operations teams use Docker to run and manage applications side by side in isolated containers to get
better compute density. For example, suppose an operations team needs to deploy Application2 on the
same server on which Application1 is already running, but Application2 needs a higher version of some
software than Application1. The Docker solution is to create a separate container for each application,
i.e., two containers for the two applications.

Enterprises use Docker to build agile (i.e., quick) software delivery pipelines to ship new features
faster, more securely, and with confidence, for both Linux and Windows Server apps. Enterprises need
to test every deployment before it is released. This means that new features and fixes are stuck while:
1. Test environments are spun up and configured.
2. Applications are deployed across the newly launched environments.
3. Requests for change are written, submitted, and discussed to get the updated application
deployed to production.
This process can take days, weeks, or even months.
The Docker solution: the testing tools can use the same containers to run the tests. Once the containers
have been used, they can be removed to free up resources for the next round of tests. This means we
can reuse the same environment rather than redeploy or reimage servers for the next set of testing.


2. BUILDING IMAGES
In this chapter we will build our own image. In order to build images, we need to write a Dockerfile.

Dockerfile
A Dockerfile is simply a plain text file containing a set of user-defined instructions, which are executed
by the docker image build command.
Let's take a look at the instructions used in a Dockerfile in the order in which they appear:
 FROM
 LABEL
 RUN
 COPY vs ADD
 WORKDIR
 EXPOSE
 CMD vs ENTRYPOINT
 ENV

FROM
The FROM instruction specifies the base image on which to build our image, such as Alpine Linux,
Debian, or Ubuntu. Search for the image name on Docker Hub ( https://hub.docker.com/ ); it is always
recommended to use official images. Specify the name of the image and the release tag we wish to use.
FROM must be the first instruction of a Dockerfile.
#Dockerfile
FROM alpine:latest

Alpine Linux is a small, independently developed, non-commercial Linux distribution designed for
security, efficiency, and ease of use. Due to its small size and capability, Alpine Linux has become the
default base for the official container images supplied by Docker; its official image is far smaller than
those of most other distributions.

$ docker image build --file=Dockerfile --tag=nginxaspire:latest .

Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM alpine:latest
latest: Pulling from library/alpine
88286f41530e: Pull complete
Digest: sha256:f006ecbb824d87947d0b51ab8488634bf69fe4094959d935c0c103f4820a417d
Status: Downloaded newer image for alpine:latest
---> 76da55c8019d
Successfully built 76da55c8019d
Successfully tagged nginxaspire:latest

$ docker images
$ docker image inspect nginxaspire:latest

LABEL
The LABEL command can be used to add extra information to the image. This information can be
anything from a version number to a description. It's recommended to limit the number of labels in
Dockerfile.
Example:
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="This example Dockerfile installs NGINX."

We can view the image's labels with the docker image inspect command.
$ docker image build --file=Dockerfile --tag nginxaspire:latest .
$ docker image inspect nginxaspire:latest

RUN
The RUN instruction is used to execute commands while the image is being built, for example to install
software, run scripts, and perform other tasks.
RUN echo "Hello Image!"
RUN apk add --update nginx && \
rm -rf /var/cache/apk/* && \
mkdir -p /tmp/nginx/

The first of three commands, apk add --update nginx, installs NGINX using Alpine Linux's package
manager; we are using the && operator to move on to the next command if the previous command was
successful. The \ is used to split the command over multiple lines, making it easy to read. The next
command in our chain is rm -rf /var/cache/apk/*; this removes any temporary files and so on to keep
the size of our image to a minimum. The final command in our chain, mkdir -p /tmp/nginx/, creates a
folder called /tmp/nginx/ so that NGINX starts correctly.

We could have also used the following in our Dockerfile:

RUN apk add --update nginx
RUN rm -rf /var/cache/apk/*
RUN mkdir -p /tmp/nginx/
This would create an individual layer for each RUN command, which for the most part we should try to
avoid.

Most instructions in a Dockerfile create a separate layer; the layers in an image can be compared to the
layers of an onion:

$ docker image build --file Dockerfile --tag nginxaspire:latest .


$ docker image inspect nginxaspire:latest

COPY vs ADD
The COPY instruction is used to copy files and directories from a specified source into a destination in
the filesystem of the image.
Example:
COPY files/nginx.conf /etc/nginx/nginx.conf
COPY files/default.conf /etc/nginx/conf.d/default.conf

At first glance, COPY and ADD look like they are doing the same task; however, there are some
important differences.
Example:
ADD files/html.tar.gz /usr/share/nginx/


We are adding a file called html.tar.gz, but we are not doing anything in our Dockerfile to uncompress
the archive. This is because ADD automatically copies the archive, uncompresses it, and puts the
resulting folders and files at the path we specify, which in our case is /usr/share/nginx/

$ docker image build --file=Dockerfile --tag=nginxaspire:latest .


$ docker image inspect nginxaspire:latest

WORKDIR
The WORKDIR instruction sets the working directory for the instructions that follow it and for the
command defined with CMD.
Example:
WORKDIR /opt/aspire/
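As a sketch of WORKDIR in context (the app.sh script and its path are made-up examples, not from this document):

```dockerfile
#Dockerfile (illustrative sketch)
FROM alpine:latest
# all following instructions run relative to /opt/aspire/
WORKDIR /opt/aspire/
# app.sh lands in /opt/aspire/app.sh because WORKDIR is already set
COPY app.sh .
# the command defined with CMD executes with /opt/aspire/ as its working directory
CMD ["./app.sh"]
```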

EXPOSE
The EXPOSE instruction documents the port(s) on which the process inside the container listens, to
enable networking between that process and the outside world (i.e., anything outside Docker).
EXPOSE informs Docker what port the container will be listening on. However, we can always map
(i.e., publish) the container port via the -p parameter as needed.
Note: EXPOSE is an optional instruction.

Basically, there are four options:


1. Neither specify EXPOSE nor -p
If we do not specify either of these, the server in the container will not be accessible from
anywhere except from within the container itself.

2. Only specify EXPOSE


If we add only the EXPOSE instruction in the Dockerfile, without the -p option in the "docker
container run" command, then the container port is the EXPOSE port. Such a container is not
accessible from outside the Docker Host, but it is accessible from other containers within the
same Docker host. So, this is good for inter-container communication.
#Dockerfile
EXPOSE 81

3. Specify only -p
If only the -p option of "docker container run" is set, then the container port is the one set with
the -p option. The container port is mapped directly to the server port running inside the container.
$ docker container run -d --name=NginxAspire -p 82:80 nginxaspire:latest
Try this url: http://192.168.99.100:82/ from web browser.
Note: The EXPOSE instruction in Dockerfile is optional.
Note: If we use option -p, then the service in the container is accessible from anywhere, even
from outside Docker host.


4. Specify both EXPOSE and -p

If both the EXPOSE instruction in the Dockerfile and the -p option of "docker container run" are
set, then the container port is the one set by the -p option; the EXPOSE instruction in the
Dockerfile is ignored. The container port is mapped directly to the server port running inside
the container.
#Dockerfile
EXPOSE 81

$ docker container run -d --name=NginxAspire -p 82:80 nginxaspire:latest


Try this url: http://192.168.99.100:82/ from web browser.

$ docker image build --file=Dockerfile --tag nginxaspire:latest .


$ docker image inspect nginxaspire:latest

CMD
The main purpose of CMD is to provide defaults for an executing container.
These defaults can include an executable, or they can omit the executable, in which case we must
specify the executable using an ENTRYPOINT instruction.
The CMD instruction has three forms:
1. CMD ["executable", "param1", "param2"] (exec form, this is the preferred form)
Example #1:
CMD ["nginx", "-g", "daemon off;"]
Example #2:
CMD ["java", "Welcome"]
Note: The exec form is parsed as a JSON array, which means that we must use double quotes (")
around words, not single quotes (').

2. CMD ["param1", "param2"] (parameters to ENTRYPOINT)


In this case the executable is missing, so we must specify it with the ENTRYPOINT
instruction.
If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and
ENTRYPOINT instructions should be specified with the JSON array format.
Example #1:
CMD ["-g", "daemon off;"]
ENTRYPOINT ["nginx"]
Example #2:
CMD ["Welcome"]
ENTRYPOINT ["java"]

3. CMD command param1 param2 (shell form)


Example:


CMD echo "Hello Container!"

$ docker image build --file=Dockerfile --tag nginxaspire:latest .


$ docker image inspect nginxaspire:latest

Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell
processing does not happen. For example, CMD ["echo", "$HOME"] will not do variable substitution on
$HOME.
If we want shell processing, then either use the shell form or execute a shell directly, for example:
CMD echo "$HOME"
CMD ["sh", "-c", "echo $HOME"]

There can only be one CMD instruction in a Dockerfile. If we list more than one CMD, then only the last
CMD takes effect.
https://docs.docker.com/engine/reference/builder/#cmd
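A small sketch of this last-one-wins behavior, assuming an alpine base image:

```dockerfile
#Dockerfile (illustrative sketch)
FROM alpine:latest
CMD echo "first"
# only this last CMD takes effect; the container prints "second", not "first"
CMD echo "second"
```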

Note: Unlike RUN, which executes while the image is being built, CMD specifies the command to be
executed when a container is started from the image.
#Dockerfile
FROM alpine
RUN echo "Hello Image!"
CMD echo "Hello Container!"

ENTRYPOINT
If we would like our container to run the same executable every time, then we should consider using
ENTRYPOINT.
ENTRYPOINT has two forms:
1. ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
Example #1:
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Example #2:
CMD ["-g", "daemon off;"]
ENTRYPOINT ["nginx"]
Example #3:
ENTRYPOINT ["java", "Welcome"]
Example #4:
CMD ["Welcome"]
ENTRYPOINT ["java"]

2. ENTRYPOINT command param1 param2 (shell form)


Example:
ENTRYPOINT echo "Hello Container!"

$ docker image build --file=Dockerfile --tag nginxaspire:latest .


$ docker image inspect nginxaspire:latest

Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell
processing does not happen. For example, ENTRYPOINT ["echo", "$HOME"] will not do variable
substitution on $HOME.
If we want shell processing, then either use the shell form or execute a shell directly, for example:
ENTRYPOINT echo "$HOME"
ENTRYPOINT ["sh", "-c", "echo $HOME"]

ENV
To use environment variables in our Dockerfile, we can use the ENV instruction. The structure of the
ENV instruction is as follows:
ENV <key> <value>
Example:
ENV PHPVERSION 7
ENV username admin
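A sketch showing the variable being referenced later in the same Dockerfile (the echo messages are illustrative):

```dockerfile
#Dockerfile (illustrative sketch)
FROM alpine:latest
ENV username admin
# ENV values are available to later build instructions...
RUN echo "building image for $username"
# ...and to the running container; this prints "admin"
CMD echo "$username"
```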

.dockerignore
The .dockerignore file is used to exclude files or folders we don't want to include in the docker build,
since by default all files in the Dockerfile's folder (the build context) are sent to the Docker daemon.
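A typical .dockerignore might look like the following; the entries are illustrative:

```
# .dockerignore (illustrative entries)
# exclude version control metadata
.git
# exclude local log files
*.log
# exclude scratch folders
tmp/
```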

Putting it all together, below is the final Dockerfile:


#Dockerfile
FROM alpine:latest

LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"


LABEL description="This example Dockerfile installs NGINX."

RUN apk add --update nginx && \


rm -rf /var/cache/apk/* && \
mkdir -p /tmp/nginx/

#RUN apk add --update nginx
#RUN rm -rf /var/cache/apk/*
#RUN mkdir -p /tmp/nginx/

COPY files/nginx.conf /etc/nginx/nginx.conf


COPY files/default.conf /etc/nginx/conf.d/default.conf


ADD files/html.tar.gz /usr/share/nginx/

EXPOSE 80/tcp

#CMD ["nginx", "-g", "daemon off;"]

#CMD ["-g", "daemon off;"]
#ENTRYPOINT ["nginx"]

ENTRYPOINT ["nginx", "-g", "daemon off;"]

$ docker-machine start aspire-machine-practice


$ eval $(docker-machine.exe env aspire-machine-practice)
Navigate to /d/Dockers/Practice/NginxDocker
$ docker image build --file=Dockerfile --tag nginxaspire:latest .
$ docker container run --name NginxAspire -p 82:80 nginxaspire:latest
ctrl +c
$ docker container ls
$ docker container exec -it NginxAspire /bin/ash


3. MANAGING CONTAINERS
Now we are going to look at how we can create containers and also how we can use the Docker CLI to
manage and interact with them.
$ docker container --help

The following command automatically pulls the ‘hello-world’ image from Docker Hub (https://hub.docker.com), then creates and starts a docker container.
$ docker container run hello-world

As the process exits, our container also stops. Now run docker container ls without and with the -a flag:
$ docker container ls
$ docker container ls -a

Note: -a, --all Show all containers (default shows just running)

We can remove the container with an exited status by running the following:
$ docker container rm reverent_pasteur
Note: We can also use docker container prune, which removes all exited containers.


The following command is used to pull the image (if it doesn’t exist locally), create and start a container, map host port 9090 to container port 80 (on which the nginx server listens), assign a custom name to the container, and run it in detached mode (-d flag):
$ docker container run -d --name NGINX -p 9090:80 nginx:latest

Try http://192.168.99.101:9090

# docker container logs --help


Usage: docker container logs [OPTIONS] CONTAINER
Options:
--tail string Number of lines to show from the end of the logs
-t, --timestamps Show timestamps
-f, --follow Follow log output

The following command is used to see logger statements:
$ docker container logs NGINX

The following command is used to show 5 lines from the end of the logs.
$ docker container logs --tail 5 NGINX

To view the logs in real time, we simply need to run the following:
$ docker container logs -f NGINX


The -f flag is shorthand for --follow.

The following command is used to connect with container:


$ docker container exec -it NGINX /bin/bash
root@64b2100a063e:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var

The flag ‘-i’ is shorthand for ‘--interactive’ and the flag ‘-t’ is shorthand for --tty.

Observe the difference between the following two commands:


$ date
Fri, Oct 27, 2017 12:49:05 PM

$ docker container exec NGINX date


Fri Oct 27 07:19:23 UTC 2017
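The two timestamps describe the same instant: the host prints local time (IST here), while containers default to the UTC timezone. The effect can be reproduced on any Linux shell with the TZ variable (Asia/Kolkata is used as the example local zone):

```shell
# Same instant rendered in two timezones
TZ=UTC date
TZ=Asia/Kolkata date

# Print just the timezone abbreviation to confirm
TZ=UTC date +%Z
```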


4. APPLICATIONS
In this chapter we will build images for Standalone Java, JDBC, JSP, HIBERNATE, SPRING, Microservices,
etc applications and run them in containers.

Application #1: This application is used to create image for Standalone Java application using Docker.
D:/dockers/practice/HelloDocker
//Welcome.java
public class Welcome {
public static void main(String[] args) {
System.out.println("Welcome to Docker Training!");
}
}

//Sample.java
public class Sample {
public static void main(String[] args) {
System.out.println("Hello World!");
}
}
Note: Compile above two files from console.

#Dockerfile
FROM frolvlad/alpine-oraclejdk8

#FROM ubuntu:latest

LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"


LABEL description="Standalone Java Application"

#RUN apt-get update && apt-get install -y openjdk-8-jre

COPY Welcome.class ./
COPY Sample.class ./

#COPY *.class ./

#COPY *.class /opt/aspire/


#WORKDIR /opt/aspire/


CMD ["java", "Welcome"]


#ENTRYPOINT ["java", "Welcome"]

#CMD ["Welcome"]
#ENTRYPOINT ["java"]

$ eval $(docker-machine env aspire-machine-practice)


$ docker info | grep Name
aspire-machine-practice

Navigate to /d/Dockers/Practice/HelloDocker folder


$ docker images

$ docker image build --file=Dockerfile --tag=helloaspire:latest .

$ docker images


$ docker image inspect helloaspire:latest

$ docker container run --name HelloAspire helloaspire:latest


Welcome to Docker Training!

$ docker container ls -a

Since the container status was Exited, we can use prune to remove the container.
$ docker container prune

Add sleep() method in Welcome.java file.


//Welcome.java
public class Welcome {
public static void main(String[] args) {
System.out.println("Welcome to Docker Training!");
try{
Thread.sleep(1000*1000);
}catch(Exception e){
e.printStackTrace();
}
}
}

$ docker container run --name HelloAspire helloaspire:latest


Welcome to Docker Training!

ctrl+c

Interact with container by using exec command:


$ docker container exec -it HelloAspire /bin/ash
/opt/aspire # ls
Sample.class Welcome.class


/opt/aspire # exit
$ docker container inspect HelloAspire

If we use the CMD instruction in the Dockerfile, then we can override the executable and arguments from the CLI.
$ docker container run --name HelloAspire helloaspire:latest java Welcome
Welcome to Docker Training!

$ docker container run --name HelloAspire helloaspire:latest java Sample


Hello World!

Application #2: This application is used to create image for JDBC application using Docker.
#connection.properties
jdbc.driverClass=oracle.jdbc.driver.OracleDriver
#My Laptop IP address
jdbc.url=jdbc:oracle:thin:@169.254.50.84:1521:xe
jdbc.username=system
jdbc.password=manager

// DBConnectionUtils.java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;
public class DBConnectionUtils {
private static Properties props = null;
static{
try {
props = new Properties();
props.load(new FileInputStream("connection.properties"));
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}

public static Connection createConnection(){


Connection con = null;


try{
Class.forName(props.getProperty("jdbc.driverClass"));
con = DriverManager.getConnection(props.getProperty("jdbc.url"),
props.getProperty("jdbc.username"), props.getProperty("jdbc.password"));
}catch(SQLException e){ //Exception handling
e.printStackTrace();
}catch(Exception e){
e.printStackTrace();
}
return con;
}
}

// DatabaseMetaDataEx.java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
public class DatabaseMetaDataEx {
public static void main(String[] args)throws Exception {
Connection con = DBConnectionUtils.createConnection();

DatabaseMetaData dbmd = con.getMetaData();

System.out.println("Database Name:"+dbmd.getDatabaseProductName());
System.out.println("Database Product version:"+dbmd.getDatabaseProductVersion());
System.out.println("Driver Name:"+dbmd.getDriverName());
System.out.println("Driver Version:"+dbmd.getDriverVersion());

con.close();
}
}

Export as a runnable jar:


Right click on JdbcDocker project → Export → Expand Java → Select ‘Runnable JAR file’ → Give Export Destination location [D:\Dockers\Practice\JdbcDocker\JdbcDocker.jar] → Select ‘Package required libraries into generated JAR’ → Click on Finish.

Copy connection.properties file into D:\Dockers\Practice\JdbcDocker

Run application from console.


D:\Dockers\Practice\JdbcDocker>java -jar JdbcDocker.jar


Database Name:Oracle
Database Product version:Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Driver Name:Oracle JDBC driver
Driver Version:11.1.0.7.0-Production

#Dockerfile
FROM frolvlad/alpine-oraclejdk8
#FROM java:8-jre
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Jdbc Application"
COPY JdbcDocker.jar /opt/aspire/
COPY connection.properties /opt/aspire/
WORKDIR /opt/aspire/
ENTRYPOINT ["java", "-jar", "JdbcDocker.jar"]

Navigate to /d/Dockers/Practice/JdbcDocker in Docker CLI and run below commands.


$ docker image build --file=Dockerfile --tag jdbcaspire:latest .
$ docker images
$ docker image inspect jdbcaspire
$ docker container run --name JdbcAspire jdbcaspire:latest
Database Name:Oracle
Database Product version:Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Driver Name:Oracle JDBC driver
Driver Version:11.1.0.7.0-Production

Oracle DB inside Container


docker pull christophesurmont/oracle-xe-11g:latest
docker run -d -p 1521:1521 -e ORACLE_ALLOW_REMOTE=true christophesurmont/oracle-xe-11g:latest
host: 192.168.99.101
port: 1521
SID: xe
username: system
password: oracle

MySql DB inside Container


docker pull mysql:latest
docker container run --name AspireMysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=aspire1234 -d
mysql:latest
jdbc.driverClass=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://192.168.99.101:3306/mysql


jdbc.username=root
jdbc.password=aspire1234

Application #3: This application is used to create image for JSP application using Docker.
Export JspDocker.war file to D:\Dockers\Practice\JspDocker.

#Dockerfile
FROM tomcat:latest
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="JSP Web Application"
COPY JspDocker.war /usr/local/tomcat/webapps/
EXPOSE 8080
ENTRYPOINT ["/usr/local/tomcat/bin/catalina.sh", "run"]

Navigate to /d/Dockers/Practice/JspDocker in Docker CLI and run below commands.


$ docker image build --file=Dockerfile --tag=jspaspire:latest .
$ docker container run --name JspAspire -p 9090:8080 jspaspire:latest
29-Oct-2017 07:54:01.764 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server
version: Apache Tomcat/8.5.23
29-Oct-2017 07:54:01.766 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server
built: Sep 28 2017 10:30:11 UTC
29-Oct-2017 07:54:01.766 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server
number: 8.5.23.0
29-Oct-2017 07:54:01.766 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:
Linux
29-Oct-2017 07:54:01.766 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS
Version: 4.4.93-boot2docker

29-Oct-2017 07:54:04.042 INFO [main] org.apache.coyote.AbstractProtocol.start Starting
ProtocolHandler ["http-nio-8080"]
29-Oct-2017 07:54:04.089 INFO [main] org.apache.coyote.AbstractProtocol.start Starting
ProtocolHandler ["ajp-nio-8009"]
29-Oct-2017 07:54:04.106 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 2116
ms

ctrl +c
$ docker container ls


$ docker container exec -it JspAspire /bin/bash


Try http://192.168.99.101:9090/index.jsp and http://192.168.99.101:9090/JspDocker/index.jsp

$ docker container logs JspAspire


Application #4: This application is used to create image for Hibernate application using Docker.
Create Runnable jar file and place it in D:\Dockers\Practice\HibernateDocker folder.
#Dockerfile
FROM frolvlad/alpine-oraclejre8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Hibernate Application"
COPY HibernateDocker.jar /opt/aspire/
WORKDIR /opt/aspire/
ENTRYPOINT ["java", "-jar", "HibernateDocker.jar"]

Navigate to /d/Dockers/Practice/HibernateDocker in Docker CLI and run below commands:


$ docker image build --file=Dockerfile --tag=hibernateaspire:latest .
$ docker images
$ docker image inspect hibernateaspire:latest
$ docker container run --name=HibernateAspire hibernateaspire:latest
Oct 30, 2017 5:37:06 AM org.hibernate.cfg.Environment <clinit>
INFO: Hibernate 3.5.6-Final
Oct 30, 2017 5:37:06 AM org.hibernate.cfg.Environment <clinit>
INFO: hibernate.properties not found
Oct 30, 2017 5:37:06 AM org.hibernate.cfg.Environment buildBytecodeProvider
INFO: Bytecode provider name : javassist

INFO: Running hbm2ddl schema export
Oct 30, 2017 5:37:47 AM org.hibernate.tool.hbm2ddl.SchemaExport execute
INFO: exporting generated schema to database
Oct 30, 2017 5:37:47 AM org.hibernate.tool.hbm2ddl.SchemaExport execute
INFO: schema export complete
Hibernate:
/* insert edu.aspire.domain.Student
*/ insert
into
STUDENT
(SNAME, EMAIL, MOBILE, SNO)
values
(?, ?, ?, ?)


5. MICROSERVICES WITH DOCKER


Microservice is an architecture style which says decompose big applications into smaller services that
work together to form larger business services. Microservices are autonomous, self-contained, and
independently deployable.
Dedicated machines, as in traditional monolithic application deployments, are not the best solution for
deploying microservices. Automation such as automatic provisioning, the ability to scale on demand,
self-service, and payment based on usage are essential capabilities required to manage large-scale
microservice deployments efficiently. In general, a cloud infrastructure provides all these essential
capabilities.
Running one microservice instance per bare metal is not cost effective. Therefore, in most cases,
enterprises end up deploying multiple microservices on a single bare metal server. Running multiple
microservices on a single bare metal could lead to a "noisy neighbor" problem. There is no isolation
among the microservice instances running on the same machine. As a result, services deployed on a single machine may consume resources needed by the others, thus impacting their performance.
An alternate approach is to run the microservices on VMs. However, VMs are heavyweight in nature.
Therefore, running many smaller VMs on a physical machine is not resource efficient. This generally
results in resource wastage. In case of sharing a VM to deploy multiple services, we would end up facing
the same issues of sharing the bare metal, as explained earlier.
In case of Java-based microservices, sharing a VM or bare metal to deploy multiple microservices also
results in sharing JRE among microservices. This is because the fat JARs created using Spring Boot
abstract only application code, its dependencies (libs) and embedded tomcat server but not JREs and
OS. Any update on JRE installed on the machine will have an impact on all the microservices deployed on
this machine. Similarly, if there are OS-level parameters, libraries, or tunings that are required for
specific microservices, then it will be hard to manage them on a shared environment.
The microservice principle insists that it should be self-contained and autonomous by fully
encapsulating its end-to-end runtime environment. In order to align with this principle, all
components, such as the OS, JRE, and microservice binaries, have to be self-contained and isolated.
The one option to achieve this is deploying one microservice per VM. However, this will result in
underutilized virtual machines, and in many cases, extra cost due to this can nullify the benefits of
microservices. The other option is deploying one microservice per container. Since containers are lightweight, this is the preferred option.


Create a separate schema for every microservice by following the ‘Documents/misc/Airline_PSS_Schema.doc’ guide.

FaresFlight MicroService
Step1: Edit application.properties file in src/main/resources folder. Configure DB IP address (In this case
my laptop IP address).
#application.properties
server.port=8081

spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@169.254.50.84:1521:xe
#spring.datasource.url=jdbc:oracle:thin:@aws.c3clzczg3bmy.us-east-2.rds.amazonaws.com:1521:ORCL
spring.datasource.username=fareuser
spring.datasource.password=aspire123
spring.jpa.properties.hibernate.default_schema=FAREUSER

#tomcat-connection settings
spring.datasource.tomcat.initialSize=20
spring.datasource.tomcat.max-active=25

spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=true


management.security.enabled=false

Step2: Create Fat jar file using Spring Boot. Copy fat jar file into
D:\Dockers\Practice\MicroServicesDocker\FaresFlightTickets folder.

Step3: Add instructions in Dockerfile.


#Dockerfile
FROM frolvlad/alpine-oraclejre8:latest
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Fares Microservice"
COPY FaresFlightTickets.jar /opt/aspire/
WORKDIR /opt/aspire/
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "FaresFlightTickets.jar"]

Step4: Navigate to /d/Dockers/Practice/MicroServicesDocker/FaresFlightTickets in CLI and run below docker commands.
$ eval $(docker-machine.exe env aspire-machine-practice)
$ docker image build --file=Dockerfile --tag=faresflighttickets:1.0 .
$ docker images
$ docker image inspect faresflighttickets:1.0
$ docker container run --name=FaresFlightTickets -p=8081:8081 faresflighttickets:1.0

Ctrl + c
$ docker container ls


$ docker container inspect FaresFlightTickets


$ docker container exec -it FaresFlightTickets /bin/ash
/opt/aspire # ls
FaresFlightTickets.jar
/opt/aspire # exit

$ docker container logs -f FaresFlightTickets

Step5: Try http://192.168.99.101:8081/health in web browser.

RabbitMQ Server
$ docker pull rabbitmq:latest
latest: Pulling from library/rabbitmq
bc95e04b23c0: Already exists
2e65f0b00e4c: Pull complete
f2bd80317989: Pull complete
7b05ca830283: Pull complete
0bb5a4bbcce5: Pull complete
cf840d8999f6: Pull complete
be339ca44883: Pull complete
ce35cd9f9b5b: Pull complete
a4fe32a0a00d: Pull complete
77408ca9e94e: Pull complete
db03407a1aba: Pull complete
Digest: sha256:9a0de56d27909c518f448314d430f8eda3ad479fc459d908ff8b281c4dfc1c00
Status: Downloaded newer image for rabbitmq:latest

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
rabbitmq latest 8e186865bff8 2 weeks ago 124MB

$ docker container run --name=AspireRabbit -p=5672:5672 rabbitmq:latest


Wait and press ctrl + c
$ docker container ls


SearchFlightTickets Microservice
Step1: Edit application.properties file in src/main/resources folder. Configure DB IP address (In this case
my machine IP address) and RabbitMQ Server IP address (docker machine IP address).
# application.properties
server.port=8090

spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@169.254.50.84:1521:xe
#spring.datasource.url=jdbc:oracle:thin:@aws.c3clzczg3bmy.us-east-2.rds.amazonaws.com:1521:ORCL
spring.datasource.username=searchuser
spring.datasource.password=aspire123
spring.jpa.properties.hibernate.default_schema=SEARCHUSER

#tomcat-connection settings
spring.datasource.tomcat.initialSize=2
spring.datasource.tomcat.max-active=3

spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=true

spring.rabbitmq.host=192.168.99.101
#spring.rabbitmq.host=18.221.215.108
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

management.security.enabled=false

Step2: Create Fat jar file using Spring Boot. Copy fat jar file into
D:\Dockers\Practice\MicroServicesDocker\SearchFlightTickets folder.

Step3: Add instructions to Dockerfile.


#Dockerfile
FROM frolvlad/alpine-oraclejdk8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Search Microservice"
COPY SearchFlightTickets.jar /opt/aspire/
WORKDIR /opt/aspire/
EXPOSE 8090
ENTRYPOINT ["java", "-jar", "SearchFlightTickets.jar"]


Step4: Navigate to /d/Dockers/Practice/MicroServicesDocker/SearchFlightTickets in CLI and run below docker commands.
$ docker image build --file=Dockerfile --tag=searchflighttickets:1.0 .
$ docker images
$ docker container run --name=SearchFlightTickets -p=8090:8090 searchflighttickets:1.0
ctrl + c
$ docker container ls

Step5: Try http://192.168.99.101:8090/health web browser.

BookingFlightTickets
Step1: Edit application.properties file in src/main/resources folder. Configure DB IP address (In this case
my machine IP address) and RabbitMQ Server IP address (Docker Host IP address).
# application.properties
server.port=8060

spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@169.254.50.84:1521:xe
spring.datasource.username=bookinguser
spring.datasource.password=aspire123
spring.jpa.properties.hibernate.default_schema=BOOKINGUSER

#tomcat-connection settings
spring.datasource.tomcat.initialSize=2
spring.datasource.tomcat.max-active=3

spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=true

spring.rabbitmq.host=192.168.99.101
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

#debug=true

management.security.enabled=false

Step2: Set fare container’s IP address in BookingComponent.java file.


private static final String FareURL = "http://192.168.99.101:8081/fares";


Step3: Create Fat jar file using Spring Boot. Copy fat jar file into
D:\Dockers\Practice\MicroServicesDocker\BookingFlightTickets folder.

Step4: Add instructions in Dockerfile.


#Dockerfile
FROM frolvlad/alpine-oraclejdk8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Booking Microservice"
COPY BookingFlightTickets.jar /opt/aspire/
WORKDIR /opt/aspire/
EXPOSE 8060
ENTRYPOINT ["java", "-jar", "BookingFlightTickets.jar"]

Step5: Navigate to /d/Dockers/Practice/MicroServicesDocker/BookingFlightTickets in CLI and run below docker commands:
$ docker image build --file=Dockerfile --tag=bookingflighttickets:1.0 .
$ docker images
$ docker container run --name=BookingFlightTickets -p=8060:8060 bookingflighttickets:1.0
ctrl + c
$ docker container ls

Step6: Try http://192.168.99.101:8060/health in web browser.

CheckinFlightTickets
Step1: Edit application.properties file in src/main/resources folder. Configure DB IP address (In this case
my laptop IP address) and RabbitMQ Server IP address (rabbitmq container IP address).
#application.properties
server.port=8070

spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@169.254.50.84:1521:xe
spring.datasource.username=checkinuser
spring.datasource.password=aspire123
spring.jpa.properties.hibernate.default_schema=CHECKINUSER

#tomcat-connection settings
spring.datasource.tomcat.initialSize=20
spring.datasource.tomcat.max-active=25

spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=true


spring.rabbitmq.host=192.168.99.101
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

management.security.enabled=false

Step2: Set booking container’s IP address in CheckinComponent.java file.


private static final String bookingURL = "http://192.168.99.101:8060/booking";

Step3: Create Fat jar file using Spring Boot. Copy fat jar file into
D:\Dockers\Practice\MicroServicesDocker\CheckinCustomers folder.

Step4: Add instructions in Dockerfile.


#Dockerfile
FROM frolvlad/alpine-oraclejdk8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Checkin Microservice"
COPY CheckInCustomers.jar /opt/aspire/
WORKDIR /opt/aspire/
EXPOSE 8070
ENTRYPOINT ["java", "-jar", "CheckInCustomers.jar"]

Step5: Navigate to /d/Dockers/Practice/MicroServicesDocker/CheckInCustomers in CLI and run below docker commands.
$ docker image build --file=Dockerfile --tag=checkincustomers:1.0 .
$ docker images
$ docker container run --name=CheckinCustomers -p=8070:8070 checkincustomers:1.0
Ctrl + c
$ docker container ls

Step6: Try http://192.168.99.101:8070/health url in web browser.

FlightsWebSite
Step1: No changes in application.properties file in src/main/resources folder.
#application.properties
server.port=8001
security.user.name=guest
security.user.password=guest123
management.security.enabled=false


Step2: Set search, booking and checkin container’s IP addresses in Application.java and
BrownFieldSiteController.java files.

Step3: Create Fat jar file using Spring Boot. Copy fat jar file into
D:\Dockers\Practice\MicroServicesDocker\ FlightsWebSite folder.

Step4: Add instructions in Dockerfile.


#Dockerfile
FROM frolvlad/alpine-oraclejdk8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Flight Website"
COPY FlightsWebSite.jar /opt/aspire/
WORKDIR /opt/aspire/
EXPOSE 8001
ENTRYPOINT ["java", "-jar", "FlightsWebSite.jar"]

Step5: Navigate to /d/Dockers/Practice/MicroServicesDocker/FlightsWebSite in CLI and run below docker commands.
$ docker image build --file=Dockerfile --tag=flightswebsite:1.0 .
$ docker images
$ docker container run --name=FlightsWebSite -p=8001:8001 flightswebsite:1.0
ctrl + c
$ docker container ls

Step6: Try http://192.168.99.101:8001/health in web browser. Make some bookings through website
using http://192.168.99.101:8001/ and check logs in docker containers using ‘docker container logs
<container-name> ’.

Step7: Finally check all containers.


6. DOCKER COMPOSE
Compose is a tool for creating and running multi-container Docker applications. This tool uses a YAML file named docker-compose.yml (or docker-compose.yaml) to configure our application’s services (both
image and container information). Then, with a single command, we can create and start all the services
from our configuration.
The Compose file provides a way to document and configure all of the application’s service
dependencies. Using the Compose command line tool we can create and start one or more containers
for each dependency with a single command (docker-compose up).

Install Compose
Docker Compose relies on Docker Engine. The Docker Toolbox installation automatically comes with
Docker compose.
We can check version using following command:
$ docker-compose version
docker-compose version 1.13.0, build 1719ceb8
docker-py version: 2.2.1
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.2j 26 Sep 2016

Compose File
The compose file is represented in YAML format and its default name is docker-compose.yml. There are
several versions of the Compose file format – 1, 2.x, and 3.x.
#docker-compose.yml
version : '3'
services:
welcome:
container_name: HelloAspire
image: helloaspire:latest
build:
context: .
dockerfile: Dockerfile

container_name: Specify a custom container name, rather than a generated default name.
image: Specify an image name to start the container from.
build: Configuration options that are applied at build time.
context: Either a path to a directory containing a Dockerfile or a URL to a git repository.
dockerfile: Specify an alternate Dockerfile name.


deploy: Specify configuration related to the deployment and running of services. This only takes effect when deploying to a swarm.
replicas: Specify the number of containers that should be running at any given time.
restart_policy: Configures if and how to restart containers when they exit.
  condition: none, on-failure, or any (default: any).
  delay: How long to wait between restart attempts, specified as a duration (default: 0).
  max_attempts: How many times to attempt to restart a container before giving up (default: never give up).
  window: How long to wait before deciding if a restart has succeeded, specified as a duration (default: decide immediately).
depends_on: Express dependency between services. depends_on will not wait for dependent service(s) to be “ready”; it waits only until the dependent service has been started.
  Note: depends_on maintains start order but not ready order. If we need to wait for a service to be ready, we have to write extra code (using ping, curl, etc.). Version 3 no longer supports the condition form of depends_on, and depends_on is ignored when deploying a stack in swarm mode with a version 3 Compose file.
environment: Add environment variables.
  - spring.cloud.config.server.git.uri=https://github.com/dockerramesh/config-repo
ports: Expose ports.
  ports:
    - "8888:8888"
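Putting several of these options together, a hypothetical two-service compose file might look like this (the service and image names are examples only; remember that the deploy block takes effect only in swarm mode):

```yaml
version: '3'
services:
  configserver:
    image: configserver:1.0
    environment:
      - spring.cloud.config.server.git.uri=https://github.com/dockerramesh/config-repo
    ports:
      - "8888:8888"
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
  website:
    image: flightswebsite:1.0
    depends_on:
      - configserver
    ports:
      - "8001:8001"
```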

The below command is used to validate compose file:


$ docker-compose config
If there are no errors, then above command will print a rendered copy of our Docker Compose YAML file
on screen.

$ docker-compose --help
Define and run multi-container applications with Docker.
Usage:
docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]

Options:
-f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
--verbose Show more output
-v, --version Print version and exit

Commands:


build Build or rebuild services


config Validate and view the compose file
create Create services
down Stop and remove containers, networks, images, and volumes
images List images
logs View output from containers
ps List containers
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
up Create and start containers
version Show the Docker-Compose version information

$ docker-compose up --help
Builds, (re)creates, starts, and attaches to containers for a service.
Usage: up [options] [--scale SERVICE=NUM...] [SERVICE...]
Options:
-d Detached mode: Run containers in the background, print new container names.
--force-recreate Recreate containers even if their configuration and image haven't changed.
--build Build images before starting containers.

The below command is used to remove images which were built in previous chapters:
$ docker image rm -f helloaspire jdbcaspire jspaspire hibernateaspire

Application #1: Execute Hello Docker example using docker compose file
Step1: Prepare Dockerfile file
#Dockerfile
FROM frolvlad/alpine-oraclejdk8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Standalone Java Application"
COPY Welcome.class ./
COPY Sample.class ./
CMD ["java", "Sample"]

Step2: Write below compose file:


# docker-compose.yml
version : '3'
services:
welcome:


container_name: HelloAspire
image: helloaspire:latest
build:
context: .
dockerfile: Dockerfile

Step3: Navigate to /d/Dockers/Practice/HelloDocker in CLI and run below docker commands.


$ eval $(docker-machine.exe env aspire-machine-practice)

Validate compose file using below command:


$ docker-compose config

Step4: Run below command to build image (re-build if Dockerfile is modified) and run container
(recreate if exists):
$ docker-compose up --build --force-recreate
Building welcome
Step 1/6 : FROM frolvlad/alpine-oraclejdk8
---> b3d49f9bbdff
Step 2/6 : LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
---> Using cache
---> f4010e7530b0
Step 3/6 : LABEL description="Standalone Java Application"
---> Using cache
---> 933ed63d7143
Step 4/6 : COPY Welcome.class ./
---> Using cache
---> 974d5ebdebfa
Step 5/6 : COPY Sample.class ./
---> Using cache
---> 9f3209fa4660
Step 6/6 : CMD ["java", "Sample"]
---> Running in e8bd7cf874ba
---> 55b2c5861c6f
Removing intermediate container e8bd7cf874ba
Successfully built 55b2c5861c6f
Successfully tagged helloaspire:latest
Recreating HelloAspire ...
Recreating HelloAspire ... done
Attaching to HelloAspire
HelloAspire | Hello World!
HelloAspire exited with code 0


Step5: List images and containers.


$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
helloaspire latest 55b2c5861c6f 5 minutes ago 170MB

$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ee44d9f349d helloaspire:latest "java Sample" 5 minutes ago Exited (0) 5 minutes ago
HelloAspire

Application #2: Execute Jdbc Docker example using docker compose file
Step1: Prepare Dockerfile file
#Dockerfile
FROM frolvlad/alpine-oraclejdk8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Jdbc Application"
COPY JdbcDocker.jar /opt/aspire/
COPY connection.properties /opt/aspire/
WORKDIR /opt/aspire/
ENTRYPOINT ["java", "-jar", "JdbcDocker.jar"]

Step2: Write below compose file:


# docker-compose.yml
version : '3'
services:
jdbc:
container_name: JdbcAspire
image: jdbcaspire:latest
build:
context: .
dockerfile: Dockerfile

Step3: Navigate to /d/Dockers/Practice/JdbcDocker in CLI and run below docker commands.


Validate compose file using below command:
$ docker-compose config

Step4: Run below command to build image (re-build if dockerfile is modified) and run container (re-run
if exists):
$ docker-compose up --build --force-recreate
Building jdbc
Step 1/7 : FROM frolvlad/alpine-oraclejdk8


---> b3d49f9bbdff
Step 2/7 : LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
---> Using cache
---> f4010e7530b0
Step 3/7 : LABEL description="Jdbc Application"
---> Using cache
---> 653c69bfa51c
Step 4/7 : COPY JdbcDocker.jar /opt/aspire/
---> Using cache
---> 537f1949af4e
Step 5/7 : COPY connection.properties /opt/aspire/
---> Using cache
---> 2ed96808ad0e
Step 6/7 : WORKDIR /opt/aspire/
---> Using cache
---> 943bc3744c90
Step 7/7 : ENTRYPOINT ["java", "-jar", "JdbcDocker.jar"]
---> Using cache
---> 06e4a3f0ae11
Successfully built 06e4a3f0ae11
Successfully tagged jdbcaspire:latest
Recreating JdbcAspire ...
Recreating JdbcAspire ... done
Attaching to JdbcAspire
JdbcAspire | Database Name:Oracle
JdbcAspire | Database Product version:Oracle Database 10g Express Edition Release 10.2.0.1.0 -
Production
JdbcAspire | Driver Name:Oracle JDBC driver
JdbcAspire | Driver Version:11.1.0.7.0-Production
JdbcAspire exited with code 0

Step5: List images and containers.


$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
jdbcaspire latest 06e4a3f0ae11 3 hours ago 172MB

$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3c9230628264 jdbcaspire:latest "java -jar JdbcDoc..." About a minute ago Exited (0) About a minute ago
JdbcAspire

Application #3: Execute Jsp Docker example using docker compose file


Step1: Prepare Dockerfile file


#Dockerfile
FROM tomcat:latest
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="JSP Web Application"
COPY JspDocker.war /usr/local/tomcat/webapps/
EXPOSE 9090
ENTRYPOINT ["/usr/local/tomcat/bin/catalina.sh", "run"]

Step2: Write below compose file:


# docker-compose.yml
version: '3'
services:
  jsp:
    container_name: JspAspire
    image: jspaspire:latest
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "9090:8080"

Step3: Navigate to /d/Dockers/Practice/JspDocker in CLI and run below docker commands.


Validate compose file using below command:
$ docker-compose config

Step4: Run below command to build image (re-build if dockerfile is modified) and run container (re-run
if exists):
$ docker-compose up --build --force-recreate
Step 1/6 : FROM tomcat:latest
---> 1269f3761db5

JspAspire | 30-Oct-2017 14:54:10.635 INFO [main] org.apache.coyote.AbstractProtocol.start Starting
ProtocolHandler ["http-nio-8080"]
JspAspire | 30-Oct-2017 14:54:10.661 INFO [main] org.apache.coyote.AbstractProtocol.start Starting
ProtocolHandler ["ajp-nio-8009"]

Step5: List images and containers.


Press Ctrl+C to stop the container, then:
$ docker images
$ docker container ls -a


Step6: Try http://192.168.99.101:9090/JspDocker/index.jsp in web browser.

Application #4: Execute Hibernate Docker example using docker compose file
Step1: Prepare Dockerfile file
#Dockerfile
FROM frolvlad/alpine-oraclejdk8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Hibernate Application"
COPY HibernateDocker.jar /opt/aspire/
WORKDIR /opt/aspire/
ENTRYPOINT ["java", "-jar", "HibernateDocker.jar"]

Step2: Write below compose file:


# docker-compose.yml
version: '3'
services:
  hibernate:
    container_name: HibernateAspire
    image: hibernateaspire:latest
    build:
      context: .
      dockerfile: Dockerfile

Step3: Navigate to /d/Dockers/Practice/HibernateDocker in CLI and run below docker commands.


Validate compose file using below command:
$ docker-compose config

Step4: Run below command to build image (re-build if dockerfile is modified) and run container (re-run
if exists):
$ docker-compose up --build --force-recreate
Building hibernate
Step 1/6 : FROM frolvlad/alpine-oraclejdk8
---> b3d49f9bbdff

HibernateAspire | INFO: schema export complete
HibernateAspire | Hibernate:
HibernateAspire | /* insert edu.aspire.domain.Student
HibernateAspire | */ insert
HibernateAspire | into
HibernateAspire | STUDENT


HibernateAspire | (SNAME, EMAIL, MOBILE, SNO)


HibernateAspire | values
HibernateAspire | (?, ?, ?, ?)
HibernateAspire exited with code 0

Step5: List images and containers.


$ docker images
$ docker container ls -a

Microservices with Docker Compose


Prepare a single compose file in the D:\Dockers\Practice\MicroServicesDocker folder for all microservices, including the website and the RabbitMQ server.
# docker-compose.yml
version: '2.1'
services:
  rabbit:
    container_name: AspireRabbit
    image: rabbitmq:alpine
    ports:
      - "5672:5672"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:5672"]
      interval: 10s
      timeout: 30s
      retries: 2

  fare:
    container_name: FaresFlightTickets
    image: faresflighttickets:2.0
    build:
      context: ./FaresFlightTickets
      dockerfile: Dockerfile
    ports:
      - "8081:8081"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8081"]
      interval: 10s
      timeout: 30s
      retries: 2

  search:
    container_name: SearchFlightTickets
    image: searchflighttickets:2.0
    build:
      context: ./SearchFlightTickets
      dockerfile: Dockerfile
    depends_on:
      rabbit:
        condition: service_healthy
    ports:
      - "8090:8090"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8090"]
      interval: 10s
      timeout: 30s
      retries: 2

  booking:
    container_name: BookingFlightTickets
    image: bookingflighttickets:2.0
    build:
      context: ./BookingFlightTickets
      dockerfile: Dockerfile
    depends_on:
      rabbit:
        condition: service_healthy
      fare:
        condition: service_healthy
      search:
        condition: service_healthy
    ports:
      - "8060:8060"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8060"]
      interval: 10s
      timeout: 30s
      retries: 2

  checkin:
    container_name: CheckInCustomers
    image: checkincustomers:2.0
    build:
      context: ./CheckInCustomers
      dockerfile: Dockerfile
    depends_on:
      rabbit:
        condition: service_healthy
      booking:
        condition: service_healthy
    ports:
      - "8070:8070"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8070"]
      interval: 10s
      timeout: 30s
      retries: 2

  website:
    container_name: FlightsWebSite
    image: flightswebsite:2.0
    build:
      context: ./FlightsWebSite
      dockerfile: Dockerfile
    depends_on:
      search:
        condition: service_healthy
      booking:
        condition: service_healthy
      checkin:
        condition: service_healthy
    ports:
      - "8001:8001"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8001"]
      interval: 10s
      timeout: 30s
      retries: 2
Validate the YAML file using https://codebeautify.org/yaml-validator

Open a new Docker CLI and connect with the aspire-machine-practice machine by using the below command:
$ eval $(docker-machine.exe env aspire-machine-practice)
$ docker-compose up --build --force-recreate


Open a new Docker CLI and connect with the aspire-machine-practice machine by using the below command:
$ eval $(docker-machine.exe env aspire-machine-practice)
$ docker container ls
Try booking tickets using the http://192.168.99.101:8001 URL.
Username: aspire
Password: aspire123


7. DOCKER HUB
Docker Hub has a free option, where we can host only publicly accessible images, and a subscription option that allows us to host our own private images.
Create an account on Docker Hub (https://hub.docker.com/). During account creation, provide a Docker ID/username, email ID, and password. Sign in using the Docker ID.
Note: Docker Hub is free to use, and if we do not need to upload or manage our own images, then we do not need an account to pull images.
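For example, public images can be searched for and pulled without logging in at all (nginx is used here purely as an illustration):

```shell
# Search Docker Hub for public images (no account needed)
docker search nginx
# Pull a public image anonymously; the :latest tag is assumed by default
docker image pull nginx:latest
```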

Pushing our own image


Step1: We need to connect our Docker CLI with Docker Hub by running the following command:
$ docker login
Login with our Docker ID to push and pull images from Docker Hub. If we don't have a Docker ID, head
over to https://hub.docker.com to create one.
Username: aspiredockerhub
Password: xxxxx
Login Succeeded
Our Docker CLI is now authenticated to interact with Docker Hub.

$ eval $(docker-machine.exe env aspire-machine-practice)


Step2: We need to tag image before pushing to Docker Hub.
Navigate to /d/Dockers/Practice/HelloDocker and run below command:
$ docker image build --file=Dockerfile --tag aspiredockerhub/helloaspire:latest .


$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
aspiredockerhub/helloaspire latest 819c36c5ea44 About a minute ago 170MB

Step3: Push image using below command


$ docker image push aspiredockerhub/helloaspire:latest

Step4: Refresh Dashboard (https://hub.docker.com/)

Step5: Remove helloaspire images (optional)


$ docker image rm -f helloaspire aspiredockerhub/helloaspire

Step6: Pull image from docker hub


$ docker pull aspiredockerhub/helloaspire
Using default tag: latest
latest: Pulling from aspiredockerhub/helloaspire

Digest: sha256:62d0ceca52d1993c2215db79b03fa0d99e2b70bfd81263b37a15afb2efbddf96
Status: Downloaded newer image for aspiredockerhub/helloaspire:latest

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
aspiredockerhub/helloaspire latest 819c36c5ea44 14 minutes ago 170MB

Step7: Run container


$ docker container run --name=HelloAspire aspiredockerhub/helloaspire:latest
Hello World!


Step8: We can delete either a tag or the entire repository from Docker Hub.


Go to the ‘Tags’ tab and click on the delete icon, which deletes only that particular tag, not the entire repository.
Go to the ‘Settings’ tab, click on the Delete button, enter the repository name ‘helloaspire’, and click on Delete.

Pushing Microservices Images to Docker Hub


Step1: Build and push microservice images to docker hub.
Navigate to /d/Dockers/Practice/MicroServicesDocker/FaresFlightTickets and run below commands:
$ docker image build --file=Dockerfile --tag=aspiredockerhub/faresflighttickets:1.0 .
$ docker image push aspiredockerhub/faresflighttickets:1.0

Navigate to /d/Dockers/Practice/MicroServicesDocker/SearchFlightTickets and run below commands:


$ docker build --tag aspiredockerhub/searchflighttickets:1.0 .
$ docker image push aspiredockerhub/searchflighttickets:1.0

Navigate to /d/Dockers/Practice/MicroServicesDocker/BookingFlightTickets and run below commands:


$ docker build --tag aspiredockerhub/bookingflighttickets:1.0 .
$ docker image push aspiredockerhub/bookingflighttickets:1.0

Navigate to /d/Dockers/Practice/MicroServicesDocker/CheckInCustomers and run below commands:


$ docker build --tag aspiredockerhub/checkincustomers:1.0 .
$ docker image push aspiredockerhub/checkincustomers:1.0

Navigate to /d/Dockers/Practice/MicroServicesDocker/FlightsWebSite and run below commands:


$ docker build --tag aspiredockerhub/flightswebsite:1.0 .
$ docker image push aspiredockerhub/flightswebsite:1.0

Step2: Remove microservices images (optional).


$ docker image rm -f aspiredockerhub/faresflighttickets:1.0 aspiredockerhub/searchflighttickets:1.0
aspiredockerhub/bookingflighttickets:1.0 aspiredockerhub/checkincustomers:1.0
aspiredockerhub/flightswebsite:1.0

Step3: Prepare docker compose file and change image tags


version: '2.1'
services:
  rabbit:
    container_name: AspireRabbit
    image: rabbitmq:alpine
    ports:
      - "5672:5672"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:5672"]
      interval: 10s
      timeout: 30s
      retries: 2

  fare:
    container_name: FaresFlightTickets
    image: aspiredockerhub/faresflighttickets:3.0
#    build:
#      context: ./FaresFlightTickets
#      dockerfile: Dockerfile
    ports:
      - "8081:8081"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8081"]
      interval: 10s
      timeout: 30s
      retries: 2

  search:
    container_name: SearchFlightTickets
    image: aspiredockerhub/searchflighttickets:3.0
    depends_on:
      rabbit:
        condition: service_healthy
    ports:
      - "8090:8090"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8090"]
      interval: 10s
      timeout: 30s
      retries: 2

  booking:
    container_name: BookingFlightTickets
    image: aspiredockerhub/bookingflighttickets:3.0
    depends_on:
      rabbit:
        condition: service_healthy
      fare:
        condition: service_healthy
      search:
        condition: service_healthy
    ports:
      - "8060:8060"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8060"]
      interval: 10s
      timeout: 30s
      retries: 2

  checkin:
    container_name: CheckInCustomers
    image: aspiredockerhub/checkincustomers:3.0
    depends_on:
      rabbit:
        condition: service_healthy
      booking:
        condition: service_healthy
    ports:
      - "8070:8070"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8070"]
      interval: 10s
      timeout: 30s
      retries: 2

  website:
    container_name: FlightsWebSite
    image: aspiredockerhub/flightswebsite:3.0
    depends_on:
      search:
        condition: service_healthy
      booking:
        condition: service_healthy
      checkin:
        condition: service_healthy
    ports:
      - "8001:8001"
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8001"]
      interval: 10s
      timeout: 30s
      retries: 2

$ docker-compose up --build --force-recreate


8. DOCKER SWARM
With Docker Swarm, we can create and manage Docker clusters. Swarm can be used to distribute
containers across multiple nodes (hosts) in a cluster. Swarm also has the ability to scale containers.
The following topics will be discussed in this chapter:
 Installing Swarm
 Swarm roles
 Swarm usage
 Swarm commands: service and stack
 Swarm load balancing

Once we install Docker Toolbox, we automatically get Docker Swarm too. We can verify Docker Swarm
by running the following command:
$ docker swarm --help

There are two roles in Docker Swarm:


1. Swarm Manager
2. Swarm Worker

Swarm Manager
The Swarm manager is the host that is the central management point for all Swarm hosts (nodes). The
Swarm manager is where we issue all our commands to control nodes in the cluster. We can switch
between the nodes, join nodes, leave nodes, and manipulate those hosts.

Swarm Worker
Swarm workers (referred to earlier as Docker hosts) are the nodes that run the Docker containers. Swarm workers are managed by the Swarm manager.


We see that the Docker Swarm manager talks to each Swarm worker, which runs the containers.

Docker Swarm usage


Docker swarm is used for:
1) Creating a cluster
2) Joining workers
3) Listing nodes
4) Managing a cluster
5) Swarm commands (such as service and stack)

Creating a cluster
Let's start by creating a cluster, which contains a single swarm manager and multiple swarm workers.
$ docker-machine create -d virtualbox swarm-manager
$ docker-machine create -d virtualbox swarm-worker01
$ docker-machine create -d virtualbox swarm-worker02

The swarm-manager and swarm worker nodes are now up and running using VirtualBox. We can
confirm this by running:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v17.06.0-ce
swarm-manager - virtualbox Running tcp://192.168.99.101:2376 v17.09.0-ce


swarm-worker01 - virtualbox Running tcp://192.168.99.102:2376 v17.09.0-ce


swarm-worker02 - virtualbox Running tcp://192.168.99.103:2376 v17.09.0-ce

So far, we have not done anything to create a swarm cluster; we have only launched the hosts.
We may have noticed that the SWARM column contains no information. This column is only populated if we
launch our Docker hosts using the standalone Docker Swarm command, which is built into Docker
Machine.

Now let's point Docker Client to the swarm manager.


$ eval $(docker-machine env swarm-manager)

Run below command to confirm that we are connected to swarm-manager:


$ docker info | grep Name
Name: swarm-manager

Let's bootstrap our Swarm manager. To do this, we will pass the results of a few Docker Machine
commands to our host. The command to create the manager is:
$ docker $(docker-machine config swarm-manager) swarm init \
--advertise-addr $(docker-machine ip swarm-manager):2377
Swarm initialized: current node (xabo6v5e7j7klauxvjg4ev5we) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-30lcqv1cjxaozeypthw0j7o9rains5dh9n1u6m4em53fy8rfy5-by5xundtj3zz9u5xq4lail352 \
192.168.99.101:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Note: This token will be needed by the worker nodes to authenticate themselves and join to cluster.
Note: The default docker swarm port is 2377.
Note: We may get an error saying Error response from daemon: This node is already part of a swarm.
Use "docker swarm leave" to leave this swarm and join another one.
$ docker swarm leave --force
Node left the swarm.
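If the join token shown above is ever lost, it can be re-printed from the manager at any time; a quick sketch:

```shell
# Re-print the full 'docker swarm join' command for workers (run on the manager)
docker swarm join-token worker
# Print only the token itself
docker swarm join-token -q worker
```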

$ docker-machine config swarm-manager


--tlsverify
--tlscacert="C:\\Users\\Aspire-Ramesh\\.docker\\machine\\machines\\swarm-manager\\ca.pem"
--tlscert="C:\\Users\\Aspire-Ramesh\\.docker\\machine\\machines\\swarm-manager\\cert.pem"
--tlskey="C:\\Users\\Aspire-Ramesh\\.docker\\machine\\machines\\swarm-manager\\key.pem"
-H=tcp://192.168.99.101:2376


$ docker-machine ip swarm-manager
192.168.99.101

The ‘init’ subcommand is used to initialize a swarm, that is, a cluster.


--advertise-addr
It is the address which will be used by other nodes (workers) in the Docker swarm to connect with
swarm manager i.e., the other nodes in the swarm must be able to access the manager at this IP
address.
Format: <ip|interface>[:port])
Specifying port is optional. If the value is a bare IP address (192.168.99.101) or interface name (eth0),
the default port 2377 will be used.
Usage: docker swarm init --advertise-addr <MANAGER-IP>:[port]
Example: docker swarm init --advertise-addr 192.168.99.101:2377

Run below command from swarm manager node:


$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
xabo6v5e7j7klauxvjg4ev5we * swarm-manager Ready Active Leader
The above command will print all nodes that form cluster.

Joining workers
To add two workers to the cluster, run the following commands, making sure to replace the token
with the one that was generated above when initializing the swarm manager:
$ eval $(docker-machine env swarm-worker01)
$ docker swarm join \
--token SWMTKN-1-30lcqv1cjxaozeypthw0j7o9rains5dh9n1u6m4em53fy8rfy5-by5xundtj3zz9u5xq4lail352 \
192.168.99.101:2377
This node joined a swarm as a worker.

$ eval $(docker-machine env swarm-worker02)


$ docker swarm join \
--token SWMTKN-1-30lcqv1cjxaozeypthw0j7o9rains5dh9n1u6m4em53fy8rfy5-by5xundtj3zz9u5xq4lail352 \
192.168.99.101:2377
This node joined a swarm as a worker.

Note: In case of ‘Error response from daemon’: if the nodes had already joined a cluster, then run the below
commands:
$ eval $(docker-machine env swarm-worker01)
$ docker swarm leave


$ eval $(docker-machine env swarm-worker02)


$ docker swarm leave

Now join workers.


$ eval $(docker-machine env swarm-worker01)
$ docker swarm join \
--token SWMTKN-1-30lcqv1cjxaozeypthw0j7o9rains5dh9n1u6m4em53fy8rfy5-by5xundtj3zz9u5xq4lail352 \
192.168.99.101:2377
This node joined a swarm as a worker.

$ eval $(docker-machine env swarm-worker02)


$ docker swarm join \
--token SWMTKN-1-30lcqv1cjxaozeypthw0j7o9rains5dh9n1u6m4em53fy8rfy5-by5xundtj3zz9u5xq4lail352 \
192.168.99.101:2377
This node joined a swarm as a worker.

Run below command from swarm manager node:


$ eval $(docker-machine env swarm-manager)
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
mlkxmbb2ljecin60vtzz75iyh swarm-worker02 Ready Active
xabo6v5e7j7klauxvjg4ev5we * swarm-manager Ready Active Leader
zokmc85zil3gn9d926f5s1nsj swarm-worker01 Ready Active
The above command prints all nodes that form cluster.

Managing a cluster
The docker node command is cluster aware, so we can use that to get information on each node within
our cluster, like this, for example:
$ docker node inspect swarm-manager --pretty
ID: xabo6v5e7j7klauxvjg4ev5we
Hostname: swarm-manager
Joined at: 2017-10-17 06:29:49.558173442 +0000 utc
Status:
State: Ready
Availability: Active
Address: 192.168.99.101
Manager Status:
Address: 192.168.99.101:2377
Raft Status: Reachable
Leader: Yes
Platform:
Operating System: linux


Architecture: x86_64
Resources:
CPUs: 1
Memory: 995.8MiB
Plugins:
Log: awslogs, fluentd, gcplogs, gelf, journald, json-file, logentries, splunk, syslog
Network: bridge, host, macvlan, null, overlay
Volume: local
Engine Version: 17.09.0-ce
Engine Labels:
- provider=virtualbox

$ docker node inspect swarm-worker01 --pretty


$ docker node inspect swarm-worker02 --pretty

Passing the --pretty flag with the docker node inspect command renders the output in the easy-to-read
format we have seen above. If --pretty is left out, Docker returns the raw JSON object containing the
results of the query that the inspect command runs against the cluster.
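When only a single field is needed, the raw JSON can also be filtered with a Go template via the --format flag; a sketch:

```shell
# Print just the node's address instead of the full inspect output
docker node inspect swarm-manager --format '{{ .Status.Addr }}'
```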

Swarm provides the service and stack commands to execute tasks that, in turn, launch, scale, and manage
containers within our Swarm cluster.

Service
The service command is an alternative way of creating containers, one that takes advantage of the swarm
cluster.
Let's look at creating a really basic single-container service on our Swarm cluster using the following
command:
$ docker service create --name nginxservice --detach=false \
    --constraint "node.role == worker" -p 80:80 nginx:latest
This will create a service called nginxservice and it will only be running on nodes that have the role of
worker.
We can list the services again by running this command:
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
dyf991fap41s nginxservice replicated 1/1 nginx *:80->80/tcp

$ docker node ps
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS

$ docker node ps swarm-worker01


ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
umwsmvad868n nginxservice.1 nginx swarm-worker01 Running Running 57 seconds ago


$ docker node ps swarm-worker02


ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS

The worker nodes are swarm-worker01 (192.168.99.102) and swarm-worker02 (192.168.99.103). We


can use the swarm manager node's IP address (http://192.168.99.101) or either of the worker nodes'
addresses (http://192.168.99.102/ or http://192.168.99.103) to display the web page.

Let's look at scaling our service to six instances of our application container.
$ docker service scale nginxservice=6
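As a side note, the same result can be achieved with docker service update; a sketch:

```shell
# Equivalent to 'docker service scale nginxservice=6'
docker service update --replicas 6 --detach=false nginxservice
```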

Run the following commands to scale and check our service:


$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
dyf991fap41s nginxservice replicated 6/6 nginx *:80->80/tcp

$ docker node ps swarm-worker01


ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
umwsmvad868n nginxservice.1 nginx swarm-worker01 Running Running 7 hours ago
jfl9e41s2pys nginxservice.4 nginx swarm-worker01 Running Running 17 seconds ago
tb5ashmfjr7k nginxservice.6 nginx swarm-worker01 Running Running 17 seconds ago

$ docker node ps swarm-worker02


ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
sfmulaa4rx6t nginxservice.2 nginx swarm-worker02 Running Running 22 seconds ago
pj1059ohrs2s nginxservice.3 nginx swarm-worker02 Running Running 22 seconds ago
6cbtq42zwlt9 nginxservice.5 nginx swarm-worker02 Running Running 22 seconds ago

As we can see from the above Terminal output, we now have three containers running on each of our
worker nodes.
The following command is used to remove service:
$ docker service rm nginxservice
The above command removes all containers while leaving the downloaded image on the worker nodes.
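If we also want to reclaim the images left behind on the workers, we can point the Docker client at each worker in turn and prune unused images; a sketch (the -a flag removes all images not used by at least one container, and -f skips the confirmation prompt):

```shell
# Prune unused images on each worker, then point the client back at the manager
eval $(docker-machine env swarm-worker01)
docker image prune -a -f
eval $(docker-machine env swarm-worker02)
docker image prune -a -f
eval $(docker-machine env swarm-manager)
```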

Stack
In a non-swarm cluster, manually launching each set of containers for a part of an application can become
laborious and is also difficult to share.
Note: Compose does not use swarm mode to deploy services to multiple workers in a swarm; with Docker
Compose, all containers are scheduled on the current node. Docker Compose also ignores the ‘deploy’
key, i.e., Compose does not support the deploy configuration. Hence, use `docker stack deploy` to deploy
to a swarm.


The following Docker Compose file will create the same service we launched in the previous section:
# D:\Dockers\Online\NginxSwarm\docker-compose.yml
version: "3"
services:
  nginxservice:
    image: nginx:latest
    ports:
      - "80:80"
    deploy:
      replicas: 6
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker

The stack can be made up of multiple services, each defined under the services section of the Docker
Compose file. In addition to the normal Docker Compose keys, we can add a deploy section; this is where
we define everything relating to the Swarm element of our stack. With the on-failure restart policy shown
above, if a container becomes unresponsive, it is always restarted.
To launch our stack, then run the following command:
$ eval $(docker-machine env swarm-manager)
$ docker info | grep Name
Name: swarm-manager

Navigate to /d/Dockers/Practice/NginxSwarm and run below commands:


$ docker stack deploy --compose-file=docker-compose.yml nginxstack
Creating network nginxstack_default
Creating service nginxstack_nginxservice

The following command is used to get stack list:


$ docker stack ls
NAME SERVICES
nginxstack 1

We can get details of the service created by the stack by running this command:
$ docker stack services nginxstack
ID NAME MODE REPLICAS IMAGE PORTS
9j6umhupczkp nginxstack_nginxservice replicated 6/6 nginx *:80->80/tcp

Finally, running the following command will show where the containers within the stack are running:


$ docker stack ps nginxstack


ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
563rvfo53juh nginxstack_nginxservice.1 nginx swarm-worker02 Running Running 9 minutes ago
e8sjna9xim2v nginxstack_nginxservice.2 nginx swarm-worker01 Running Running 9 minutes ago
t0eui5k1hrd7 nginxstack_nginxservice.3 nginx swarm-worker02 Running Running 9 minutes ago
q318x6sndwa7 nginxstack_nginxservice.4 nginx swarm-worker01 Running Running 9 minutes ago
d79bbv2jay3i nginxstack_nginxservice.5 nginx swarm-worker02 Running Running 9 minutes ago
hmy23wcy5vh8 nginxstack_nginxservice.6 nginx swarm-worker01 Running Running 9 minutes ago

Again, we will be able to access the stack using the IP addresses of our nodes, and we will be routed to
one of the running containers.

To remove a stack, simply run this command:


$ docker stack rm nginxstack
This will remove all services and networks created by the stack when it was launched.

Ingress load balancing


In the last few sections, we looked at launching services and stacks. To access the applications we
launched, we were able to use any of the host IP addresses in our cluster; how was this possible?

Docker Swarm has an ingress load balancer built in, making it easy to distribute traffic to our public
facing containers.

This means that we can expose applications within our Swarm cluster to external services, for example, an
external load balancer such as Amazon Elastic Load Balancer, knowing that a request will be routed to
the correct container(s) no matter which host happens to be currently hosting them, as demonstrated by
the following diagram:


This means that our application can be scaled up or down, fail, or be updated, all without the need to
have the external load balancer reconfigured.
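The routing mesh can be observed directly: with the nginxservice from the previous section still running, every node answers on the published port, regardless of where the containers are scheduled. A hedged sketch, assuming curl is available in the CLI:

```shell
# Each node listens on port 80 and routes the request to one of the
# service's containers via the ingress load balancer
for node_ip in 192.168.99.101 192.168.99.102 192.168.99.103; do
  curl -s -o /dev/null -w "%{http_code} via ${node_ip}\n" "http://${node_ip}/"
done
```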

Microservices with Swarm Cluster


D:\Dockers\Practice\MicroServicesDocker\RabbitMQ
# docker-compose.yml
version: '3'
services:
  rabbitmq:
    image: rabbitmq:alpine
    ports:
      - "5672:5672"
#    healthcheck:
#      test: ["CMD", "ping", "-c", "1", "192.168.99.101:5672"]
#      interval: 10s
#      timeout: 30s
#      retries: 2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker

$ eval $(docker-machine env swarm-manager)


$ docker info | grep Name
Name: swarm-manager

Navigate to D:\Dockers\Practice\MicroServicesDocker\RabbitMQ and run below command:


$ docker stack deploy --compose-file=docker-compose.yml microservicesstack
$ docker stack ps microservicesstack

Note:
1) The Compose tool does not support the deploy configuration. Hence, use ‘docker stack deploy’ to
deploy to a swarm.
2) Compose file version 2.1 does not support the ‘deploy’ key; the ‘deploy’ key is supported by
Compose file version 3.
3) The ‘depends_on’ option is ignored when deploying a stack in swarm mode with a version 3
Compose file.

D:\Dockers\Practice\MicroServicesDocker\FaresFlightTickets
#docker-compose.yml
version: '3'
services:
  fare:
    image: aspiredockerhub/faresflighttickets:1.0
    ports:
      - "8081:8081"
#    healthcheck:
#      test: ["CMD", "ping", "-c", "1", "192.168.99.101:8081"]
#      interval: 10s
#      timeout: 30s
#      retries: 2
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker

Navigate to /d/Dockers/Practice/MicroServicesDocker/FaresFlightTickets and run:


$ docker build --tag aspiredockerhub/faresflighttickets:1.0 .
$ docker image push aspiredockerhub/faresflighttickets:1.0
$ docker stack deploy --compose-file=docker-compose.yml microservicesstack
$ docker stack ps microservicesstack
Try http://192.168.99.101:8081/health in web browser

D:\Dockers\Practice\MicroServicesDocker\SearchFlightTickets
#docker-compose.yml
version: '3'
services:
  search:
    image: aspiredockerhub/searchflighttickets:1.0
    ports:
      - "8090:8090"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker

Navigate to /d/Dockers/Practice/MicroServicesDocker/SearchFlightTickets and run:


$ docker build --tag aspiredockerhub/searchflighttickets:1.0 .
$ docker image push aspiredockerhub/searchflighttickets:1.0
$ docker stack deploy --compose-file=docker-compose.yml microservicesstack
$ docker stack ps microservicesstack
Try http://192.168.99.101:8090/health in web browser

D:\Dockers\Practice\MicroServicesDocker\BookingFlightTickets
#docker-compose.yml
version: '3'
services:
  booking:
    image: aspiredockerhub/bookingflighttickets:1.0
    ports:
      - "8060:8060"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker

Navigate to /d/Dockers/Practice/MicroServicesDocker/BookingFlightTickets and run:


$ docker build --tag aspiredockerhub/bookingflighttickets:1.0 .
$ docker image push aspiredockerhub/bookingflighttickets:1.0
$ docker stack deploy --compose-file=docker-compose.yml microservicesstack
$ docker stack ps microservicesstack
Try http://192.168.99.101:8060/health in web browser

D:\Dockers\Practice\MicroServicesDocker\CheckInCustomers
#docker-compose.yml
version: '3'
services:
  checkin:
    image: aspiredockerhub/checkincustomers:1.0
    ports:
      - "8070:8070"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker

Navigate to /d/Dockers/Practice/MicroServicesDocker/CheckInCustomers and run:


$ docker build --tag aspiredockerhub/checkincustomers:1.0 .
$ docker image push aspiredockerhub/checkincustomers:1.0
$ docker stack deploy --compose-file=docker-compose.yml microservicesstack
$ docker stack ps microservicesstack
Try http://192.168.99.101:8070/health in web browser

D:\Dockers\Practice\MicroServicesDocker\FlightsWebSite
#docker-compose.yml
version: '3'
services:
  website:
    image: aspiredockerhub/flightswebsite:1.0
    ports:
      - "8001:8001"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker

Navigate to /d/Dockers/Practice/MicroServicesDocker/FlightsWebSite and run:


$ docker build --tag aspiredockerhub/flightswebsite:1.0 .
$ docker image push aspiredockerhub/flightswebsite:1.0
$ docker stack deploy --compose-file=docker-compose.yml microservicesstack
$ docker stack ps microservicesstack
Try http://192.168.99.101:8001/health in web browser

Try http://192.168.99.101:8001 and make a couple of bookings. We can observe the load balancing in the
logs of the two fare instances running on the swarm-worker01 and swarm-worker02 nodes.
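One way to watch that load balancing from the manager is to follow the aggregated logs of the fare service; a sketch, assuming the service name follows the usual <stack>_<service> convention (here, microservicesstack_fare):

```shell
# Stream logs from all replicas of the fare service; each line is
# prefixed with the task/replica (and node) that produced it
docker service logs --follow microservicesstack_fare
```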


Remove the stack from the swarm using the below command:


$ docker stack rm microservicesstack

Deleting a Swarm cluster


If we no longer require the Swarm cluster, we can delete it by running the following
command:
$ docker-machine rm swarm-manager swarm-worker01 swarm-worker02

Differences between Docker Compose and Docker Swarm:


Docker Compose                                   Docker Swarm
--------------------------------------------------------------------------------------------
Runs multiple containers on a single machine.    Runs multiple containers across multiple
                                                 worker nodes in a cluster.
No load balancing.                               Provides ingress load balancing.
Supports the 'depends_on' option but ignores     Supports the 'deploy' option but ignores
'deploy' in the docker-compose.yml file.         'depends_on' in the docker-compose.yml file.
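The depends_on/deploy difference can be seen in a single compose file: the same file behaves differently under the two tools (a cut-down sketch reusing the images from this chapter):

```yaml
# One file, two behaviors: 'docker-compose up' honors depends_on and
# ignores deploy; 'docker stack deploy' honors deploy and ignores depends_on.
version: '3'
services:
  website:
    image: aspiredockerhub/flightswebsite:1.0
    depends_on:
      - booking        # respected only by docker-compose
    deploy:
      replicas: 2      # respected only by docker stack deploy
  booking:
    image: aspiredockerhub/bookingflighttickets:1.0
```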


9. DOCKER IN AWS
Run below command from Docker CLI to create docker machine in AWS:
$ docker-machine create --driver amazonec2 \
    --engine-install-url=https://web.archive.org/web/20170623081500/https://get.docker.com \
    --amazonec2-region us-east-1 --amazonec2-zone a \
    --amazonec2-vpc-id vpc-53d69329 aspire-machine-aws
[OR]
$ docker-machine create --driver amazonec2 \
    --engine-install-url=https://web.archive.org/web/20170623081500/https://get.docker.com \
    --amazonec2-access-key AKIAJURYSCPLVSMZFIKQ \
    --amazonec2-secret-key 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 \
    --amazonec2-region us-east-1 --amazonec2-zone a \
    --amazonec2-vpc-id vpc-53d69329 aspire-machine-aws
Running pre-create checks...
Creating machine...
(aspire-machine-aws) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!

$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
aspire-machine-aws - amazonec2 Running tcp://18.221.215.108:2376 v17.05.0-ce

$ eval $(docker-machine env aspire-machine-aws)


$ docker info | grep Name
Name: aspire-machine-aws
$ docker images
$ docker container ls -a

The below command is used to pull nginx image from docker hub (https://hub.docker.com/):


$ docker pull nginx:alpine


$ docker images
$ docker container run -d --name NGINX -p 80:80 nginx:alpine
$ docker-machine ip aspire-machine-aws
18.221.215.108

Note: Go to Security Groups in AWS → select 'docker-machine' → click the 'Inbound' tab → Edit → Add Rule
Type [All ICMP - IPv4], Protocol [ICMP], Port Range [0-65535], Source [Anywhere] → Save
Type [All TCP], Protocol [TCP], Port Range [0-65535], Source [Anywhere] → Save

Instead of adding multiple rules as above, we can add a single rule as given below:
Type [All Traffic], Protocol [All], Port Range [0-65535], Source [Anywhere] → Save

Try http://18.221.215.108:80 in web browser.

Application #1: HelloDocker


Navigate to /d/Dockers/Practice/HelloDocker and run below commands:
$ docker image build --file=Dockerfile --tag helloaspire:latest .
$ docker images
$ docker container run --name HelloAspire helloaspire:latest
Hello World!

$ docker image rm -f helloaspire:latest


$ docker container prune
Alternatively, we can run docker compose file as given below:
$ docker-compose --file=docker-compose.yml up
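The docker-compose.yml referenced here is not listed; a minimal sketch matching the image and container names used above (the service key 'hello' is an assumption) could be:

```yaml
# Hypothetical docker-compose.yml for HelloDocker.
version: '3'
services:
  hello:
    build: .
    image: helloaspire:latest
    container_name: HelloAspire
```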

Application #2: JdbcDocker


Create DB instance in RDS in aws cloud and provide below info during configuration:
DB Instance Identifier: aspireorcl
Master Username: awsuser
Master Password: aspire1234
Database Name: ORCL
Database port: 1521
aspireorcl.co6tmhd9bey4.us-east-1.rds.amazonaws.com

#connection.properties
jdbc.driverClass=oracle.jdbc.driver.OracleDriver
jdbc.url=jdbc:oracle:thin:@aspireorcl.co6tmhd9bey4.us-east-1.rds.amazonaws.com:1521:ORCL
jdbc.username=awsuser


jdbc.password=aspire1234

Navigate to /d/Dockers/Practice/JdbcDocker in Docker CLI and run below commands:


$ docker image build --file=Dockerfile --tag jdbcaspire:latest .
$ docker images
$ docker container run --name JdbcAspire jdbcaspire:latest
Database Name:Oracle
Database Product version:Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options
Driver Name:Oracle JDBC driver
Driver Version:11.1.0.7.0-Production

$ docker image rm -f jdbcaspire:latest


$ docker container prune
Alternatively, we can run docker compose file as given below:
$ docker-compose --file=docker-compose.yml up
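The Dockerfile used by the build above is not reproduced here; a plausible sketch (jar name and the location of connection.properties are assumptions) is:

```dockerfile
# Hypothetical Dockerfile for JdbcDocker; the executable jar is assumed
# to read connection.properties from its classpath or working directory.
FROM frolvlad/alpine-oraclejdk8
COPY JdbcDocker.jar /opt/aspire/
WORKDIR /opt/aspire/
ENTRYPOINT ["java", "-jar", "JdbcDocker.jar"]
```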

Application #3: JspDocker


Navigate to /d/Dockers/Practice/JspDocker in Docker CLI and run below commands:
$ docker image build --file=Dockerfile --tag=jspaspire:latest .
$ docker container run --name JspAspire -p 9090:8080 jspaspire
ctrl+c
$ docker container ls -a
Try http://18.221.215.108:9090/JspDocker/index.jsp

$ docker image rm -f jspaspire:latest


$ docker container rm -f JspAspire
Alternatively, we can run docker compose file as given below:
$ docker-compose --file=docker-compose.yml up
Try http://18.221.215.108:9090/JspDocker/index.jsp

Application #4: HibernateDocker


# hibernate.cfg.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.connection.driver_class">oracle.jdbc.driver.OracleDriver</property>
    <property name="hibernate.connection.url">jdbc:oracle:thin:@aws.cqssuvm1qmtx.us-east-2.rds.amazonaws.com:1521:ORCL</property>
    <property name="hibernate.connection.username">awsuser</property>
    <property name="hibernate.connection.password">aspire1234</property>
    <property name="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</property>
  </session-factory>
</hibernate-configuration>

Create an executable jar file and place it in the D:\Dockers\Practice\aws\HibernateDocker folder.


Navigate to /d/Dockers/Practice/HibernateDocker in Docker CLI and run below commands:
$ docker image build --file=Dockerfile --tag=hibernateaspire:latest .
$ docker container run --name=HibernateAspire hibernateaspire:latest

Microservices with AWS


Run below commands from SYSTEM schema:
CREATE TABLESPACE tbs_fareuser DATAFILE 'tbs_fareuser.dat' SIZE 10M AUTOEXTEND ON;
CREATE USER fareuser IDENTIFIED BY aspire123 DEFAULT TABLESPACE tbs_fareuser QUOTA unlimited
on tbs_fareuser;
GRANT create session TO fareuser;
GRANT create table TO fareuser;
GRANT create sequence TO fareuser;

CREATE TABLESPACE tbs_searchuser DATAFILE 'tbs_searchuser.dat' SIZE 10M AUTOEXTEND ON;


CREATE USER searchuser IDENTIFIED BY aspire123 DEFAULT TABLESPACE tbs_searchuser QUOTA
unlimited on tbs_searchuser;
GRANT create session TO searchuser;
GRANT create table TO searchuser;
GRANT create sequence TO searchuser;

CREATE TABLESPACE tbs_bookinguser DATAFILE 'tbs_bookinguser.dat' SIZE 10M AUTOEXTEND ON;


CREATE USER bookinguser IDENTIFIED BY aspire123 DEFAULT TABLESPACE tbs_bookinguser QUOTA
unlimited on tbs_bookinguser;
GRANT create session TO bookinguser;
GRANT create table TO bookinguser;
GRANT create sequence TO bookinguser;

CREATE TABLESPACE tbs_checkinuser DATAFILE 'tbs_checkinuser.dat' SIZE 10M AUTOEXTEND ON;


CREATE USER checkinuser IDENTIFIED BY aspire123 DEFAULT TABLESPACE tbs_checkinuser QUOTA
unlimited on tbs_checkinuser;
GRANT create session TO checkinuser;
GRANT create table TO checkinuser;


GRANT create sequence TO checkinuser;

$ eval $(docker-machine env aspire-machine-aws)

FaresFlight MicroService
Edit the application.properties file in the src/main/resources folder and configure the DB address (in this case, the AWS RDS Oracle DB).
#application.properties
server.port=8081

spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@aws.c3clzczg3bmy.us-east-2.rds.amazonaws.com:1521:ORCL
spring.datasource.username=fareuser
spring.datasource.password=aspire123
spring.jpa.properties.hibernate.default_schema=FAREUSER

#tomcat-connection settings
spring.datasource.tomcat.initialSize=2
spring.datasource.tomcat.max-active=3
spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=true
management.security.enabled=false

Create Fat jar.

Navigate to /d/Dockers/Practice/aws/MicroServicesDocker/FaresFlightTickets in CLI and run the below docker commands:
$ docker image build --file=Dockerfile --tag=faresflighttickets:1.0 .
$ docker images
$ docker container run --name=FaresFlightTickets -p=8081:8081 faresflighttickets:1.0
ctrl + c
$ docker container ls
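The Dockerfile for this service is not shown; following the Checkin Dockerfile given later in this chapter, a sketch (jar name assumed) would be:

```dockerfile
# Hypothetical Dockerfile for the fares microservice.
FROM frolvlad/alpine-oraclejdk8
COPY FaresFlightTickets.jar /opt/aspire/
WORKDIR /opt/aspire/
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "FaresFlightTickets.jar"]
```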

RabbitMQ Server
$ docker pull rabbitmq:alpine
$ docker container run --name=AspireRabbit -p=5672:5672 rabbitmq:alpine
Wait until server startup completes and then press ctrl + c
$ docker container ls -a
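To watch queues while the services exchange messages, the official image's management variant can be run instead; it exposes a web UI on port 15672 (guest/guest by default, reachable only if that port is also opened in the AWS security group). A compose sketch:

```yaml
# Optional alternative: RabbitMQ with the management web UI enabled.
services:
  rabbit:
    image: rabbitmq:management-alpine
    ports:
      - "5672:5672"    # AMQP, used by the microservices
      - "15672:15672"  # management UI
```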


SearchFlightTickets Microservice
Edit the application.properties file in the src/main/resources folder. Configure the DB address (in this case, the AWS RDS Oracle DB) and the RabbitMQ server address (the Docker machine IP on which the rabbitmq container's port is published).
# application.properties
server.port=8090

spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
#spring.datasource.url=jdbc:oracle:thin:@192.168.56.1:1521:xe
spring.datasource.url=jdbc:oracle:thin:@aws.c3clzczg3bmy.us-east-2.rds.amazonaws.com:1521:ORCL
spring.datasource.username=searchuser
spring.datasource.password=aspire123
spring.jpa.properties.hibernate.default_schema=SEARCHUSER

#tomcat-connection settings
spring.datasource.tomcat.initialSize=2
spring.datasource.tomcat.max-active=3

spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=true

#spring.rabbitmq.host=192.168.99.101
spring.rabbitmq.host=18.221.215.108
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

management.security.enabled=false

Create Fat jar.

Navigate to /d/Dockers/Practice/aws/MicroServicesDocker/SearchFlightTickets in CLI and run the below docker commands:
$ docker image build --file=Dockerfile --tag=searchflighttickets:1.0 .
$ docker images
$ docker container run --name=SearchFlightTickets -p=8090:8090 searchflighttickets:1.0
Ctrl + c
$ docker container ls

Try http://18.221.215.108:8090/env in web browser.


$ docker-machine create --driver amazonec2 \
    --engine-install-url=https://web.archive.org/web/20170623081500/https://get.docker.com \
    --amazonec2-access-key AKIAJURYSCPLVSMZFIKQ \
    --amazonec2-secret-key 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 \
    --amazonec2-region us-east-1 --amazonec2-zone a \
    --amazonec2-vpc-id vpc-53d69329 aspire-machine-aws2

$ eval $(docker-machine.exe env aspire-machine-aws2)


$ docker info | grep Name
Name: aspire-machine-aws2

BookingFlightTickets
Edit the application.properties file in the src/main/resources folder. Configure the DB address (in this case, the AWS RDS Oracle DB) and the RabbitMQ server address (the Docker machine IP on which the rabbitmq container's port is published).
# application.properties
server.port=8060

spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@aws.c3clzczg3bmy.us-east-2.rds.amazonaws.com:1521:ORCL
spring.datasource.username=bookinguser
spring.datasource.password=aspire123
spring.jpa.properties.hibernate.default_schema=BOOKINGUSER

#tomcat-connection settings
spring.datasource.tomcat.initialSize=2
spring.datasource.tomcat.max-active=3

spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=true

spring.rabbitmq.host=18.221.215.108
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
#debug=true
management.security.enabled=false

Set fares container’s IP address in BookingComponent.java file.


private static final String FareURL = "http://18.221.215.108:8081/fares";
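Hard-coding the fare endpoint means rebuilding the jar whenever the address changes; a common alternative (the property name here is hypothetical) is to move it into application.properties and inject it with Spring's @Value:

```properties
# Hypothetical externalized endpoint; inject it in BookingComponent with
# @Value("${fares.url}") instead of the hard-coded constant.
fares.url=http://18.221.215.108:8081/fares
```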

Create Fat jar


Navigate to /d/Dockers/Practice/aws/MicroServicesDocker/BookingFlightTickets in CLI and run the below docker commands:
$ docker image build --file=Dockerfile --tag=bookingflighttickets:1.0 .
$ docker images
$ docker container run --name=BookingFlightTickets -p=8060:8060 bookingflighttickets:1.0
ctrl + c
$ docker container ls
$ docker-machine ip aspire-machine-aws2
18.220.105.175

Try http://18.220.105.175:8060/health in web browser.

CheckinFlightTickets
Step1: Edit the application.properties file in the src/main/resources folder. Configure the DB address (in this case, the AWS RDS Oracle DB) and the RabbitMQ server address (the Docker machine IP on which the rabbitmq container's port is published).
#application.properties
server.port=8070

spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@aws.c3clzczg3bmy.us-east-2.rds.amazonaws.com:1521:ORCL
spring.datasource.username=checkinuser
spring.datasource.password=aspire123
spring.jpa.properties.hibernate.default_schema=CHECKINUSER

#tomcat-connection settings
spring.datasource.tomcat.initialSize=2
spring.datasource.tomcat.max-active=3

spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=true

spring.rabbitmq.host=18.221.215.108
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

management.security.enabled=false

Step2: Set booking container’s IP address in CheckinComponent.java file.


private static final String bookingURL = "http://18.220.105.175:8060/booking";


Step3: Create Fat jar file using Spring Boot.

Step4: Add instructions in Dockerfile.


#Dockerfile
FROM frolvlad/alpine-oraclejdk8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Checkin Microservice"
COPY CheckInCustomers.jar /opt/aspire/
WORKDIR /opt/aspire/
EXPOSE 8070
ENTRYPOINT ["java", "-jar", "CheckInCustomers.jar"]

Step5: Navigate to /d/Dockers/Practice/aws/MicroServicesDocker/CheckInCustomers in CLI and run the below docker commands.
$ docker image build --file=Dockerfile --tag=checkincustomers:1.0 .
$ docker images
$ docker container run --name=CheckinCustomers -p=8070:8070 checkincustomers:1.0
ctrl + c
$ docker container ls

Step6: Try http://18.220.105.175:8070/health url in web browser.

FlightsWebSite
Step1: No changes in application.properties file in src/main/resources folder.
#application.properties
server.port=8001
security.user.name=guest
security.user.password=guest123
management.security.enabled=false

Step2: Set search, booking and checkin container’s IP addresses in Application.java and
BrownFieldSiteController.java files.

Step3: Create Fat jar file using Spring Boot.

Step4: Add instructions in Dockerfile.


#Dockerfile
FROM frolvlad/alpine-oraclejdk8
LABEL maintainer="Kandepu Ramesh <ramesh@java2aspire.com>"
LABEL description="Flight Website"

COPY FlightsWebSite.jar /opt/aspire/
WORKDIR /opt/aspire/
EXPOSE 8001
ENTRYPOINT ["java", "-jar", "FlightsWebSite.jar"]

Step5: Navigate to /d/Dockers/Practice/aws/MicroServicesDocker/FlightsWebSite in CLI and run the below docker commands.
$ docker image build --file=Dockerfile --tag=flightswebsite:1.0 .
$ docker images
$ docker container run --name=FlightsWebSite -p=8001:8001 flightswebsite:1.0
ctrl + c
$ docker container ls

Step6: Check all containers.

Step7: Try http://18.220.105.175:8001/ in web browser. Make some bookings through the website and check the logs in the docker containers using 'docker container logs <container-name>'.
