Docker port expose
We know that a container does not have a public IP of its own. So if an application is running inside the container, how can we access it?
In such cases we use port mapping (port exposing), where we map a port of the container to a port of the host machine. Now if we open publicip:hostport, we will be able to access the application running inside the container.
## lets say we have an ubuntu container with a specific port exposed
docker run -it --name 29juncon -p 8902:80 ubuntu /bin/bash
#in this way we create a container named 29juncon, mapping port 8902 of the machine (EC2/virtual machine/instance) to port 80 of the container
apt update -y
apt install apache2 -y
service apache2 start
cd /var/www/html
#default Apache document root: whatever files we upload here will be accessible from the internet
rm index.html
apt install git -y
git clone https://github.com/akshu20791/apachewebsite .
Press Ctrl+P then Ctrl+Q to come out of the container without stopping it
You can see your container is running:
docker ps
### THE FIREWALL (SECURITY GROUP) OF THE MACHINE WILL NOT ALLOW US TO ACCESS PORT 8902 FROM OUTSIDE THE NETWORK
We need to add an inbound rule for port 8902 on the machine:
Edit inbound rule
Add rule
Save
Go to your instance
Copy the public ip of the machine
# Dockerfiles
A Dockerfile is a text file consisting of a set of instructions; it automates Docker image creation.
Case study:
Suppose you are working for Netflix as a DevOps engineer, and there is an application you need to deploy. The problem you face is that the app requires more than 100 dependencies, and a lot of environment variables need to be passed when the container is created. You went to the developer and asked how to configure so many things inside the container.
The developer said: let me write a Dockerfile, which will contain the app, the base image, the dependencies, and everything required by the application to run inside the container. After the developer shared that Dockerfile, you simply built it to create a custom image, and from the custom image you created any number of containers.
How to write a Dockerfile?
A Dockerfile is made up of key instructions:
FROM -> Sets the base image; this instruction needs to be at the top of the Dockerfile.
E.g FROM ubuntu
FROM alpine
FROM mysql
RUN -> Executes a command during the image build. Each RUN creates a new image layer.
E.g
RUN apt update
RUN apt install apache2 -y
RUN touch file1
RUN echo "hello world" > myfile.txt
MAINTAINER -> Author/owner of the Dockerfile. This helps for documentation purposes. (Note: MAINTAINER is deprecated in newer Docker versions in favour of a LABEL, but it still works.)
E.g
MAINTAINER akshat<akshu20791@gmail.com>
MAINTAINER akshatgupta
COPY -> Copies files from the local system (Docker host machine/EC2) into the image. For example, say there is an index.html file on your machine, and when the container is created you want that file inside the container; in this case you use COPY.
COPY sourcelocation destinationincontainer
E.g COPY /home/ubuntu/index.html /var/www/html/index.html
(it will copy index.html from /home/ubuntu on the machine to /var/www/html in the image)
ADD -> Similar to COPY; it can copy a file from your machine into the container, but it can also download a file from the internet (via a URL) into your container.
E.g
ADD /home/ubuntu/index.html /var/www/html/index.html
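As a sketch of the URL form, the snippet below uses a placeholder URL (example.com is not a real dependency of this lab):

```dockerfile
FROM ubuntu
# ADD with a URL downloads the file into the image at build time
# (the URL here is only an illustrative placeholder)
ADD https://example.com/sample.tar.gz /tmp/sample.tar.gz
# Note: ADD auto-extracts a *local* tar archive into the destination,
# but a URL-sourced archive is copied as-is, not extracted
```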
EXPOSE -> Documents the port the container listens on, like port 80 if your app serves the web, port 8080 if you are creating a Jenkins container, or port 3306 for MySQL.
Even if you don't put EXPOSE you will still be able to publish ports with -p, but the DevOps engineer might not be aware which port the developer configured the application on; if EXPOSE is there, it makes it easier for the DevOps engineer to deploy the application.
E.g EXPOSE 3306
EXPOSE 80
WORKDIR -> Sets the working directory for subsequent instructions and for the container. For example, if you set:
WORKDIR /akshat
RUN touch file1
then file1 will be created in the /akshat directory inside the container.
ENV -> Sets environment variables that can be used by the application.
E.g
ENV name=akshat
CMD -> Specifies the default command executed when the container starts. If we want to start a piece of software automatically, we use CMD.
ENTRYPOINT -> Similar to CMD, but has higher priority than CMD: the ENTRYPOINT stays fixed, while CMD (or arguments to docker run) supply its default arguments.
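A minimal sketch of how ENTRYPOINT and CMD interact (echo is used here just for illustration):

```dockerfile
FROM ubuntu
# ENTRYPOINT is the fixed executable
ENTRYPOINT ["echo"]
# CMD provides default arguments, overridden by args passed to docker run
CMD ["hello from CMD"]
# docker run <image>          -> runs: echo "hello from CMD"
# docker run <image> custom   -> runs: echo custom
```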
### Practicals
vi Dockerfile
(always remember the Dockerfile is named exactly Dockerfile, with a capital D)
#base image is ubuntu (Dockerfile comments must be on their own lines, not after an instruction like FROM)
FROM ubuntu
#create a file named testfile with content hello world
RUN echo "hello world" > testfile
#create a blank file named myfile
RUN touch myfile
#update all the packages in ubuntu
RUN apt update
#install apache inside the image
RUN apt install apache2 -y
Press Esc, then :wq to save and quit
docker build -t mynewimg .
# this command builds the Dockerfile present in the current directory (.) into a custom image named mynewimg
From this custom image we want to create a container:
docker run -it --name mycon mynewimg /bin/bash
ls
### lets suppose we have installed apache; we also need apache to start automatically after the container is created
What would be our approach to start apache during creation of the container?
vi Dockerfile
FROM ubuntu
RUN echo "hello world" >testfile.txt
RUN touch myfile.txt
RUN apt update
RUN apt install apache2 -y
CMD ["apache2ctl", "-D", "FOREGROUND"]
docker build -t mysecondimg .
docker images
docker run -dt --name webapp3 mysecondimg
(Here -dt means detached terminal: you will not enter inside the container, and you do not pass /bin/bash either, because we want the container to execute apache2ctl (the CMD) rather than /bin/bash)
Lets enter inside the container:
docker exec -it webapp3 /bin/bash
You will see apache2 is running:
service apache2 status
## we can even expose the port and see if the apache2 container is accessible from the internet
docker run -dt -p 8000:80 --name webapp4 mysecondimg
Go to aws ec2 machine - security group
Edit inbound rule
Add rule
Save
We can use ENTRYPOINT in the same way:
vi Dockerfile
FROM ubuntu
RUN echo "hello world" >testfile.txt
RUN touch myfile.txt
RUN apt update
RUN apt install apache2 -y
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
docker build -t mythirdimg .
docker images
docker run -dt -p 8999:80 --name webapp5 mythirdimg
docker exec -it webapp5 /bin/bash
## Lets say you are working for a Dmart client, where a config file needs to be shared inside the container when the container is created
OR
suppose the developer has shared an index.html file and we need to copy that file into the container whenever the container is created
How can such situations be handled?
vi index.html
Press i to start inserting, then type:
Hello this is the file shared by developer
Press Esc, then :wq to save and quit
vi Dockerfile
FROM ubuntu
RUN echo "hello world" >testfile.txt
RUN touch myfile.txt
RUN apt update
RUN apt install apache2 -y
COPY index.html /var/www/html
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
Press esc :wq
docker build -t fourthimg .
docker run -dt -p 8099:80 --name webserver7 fourthimg
Select the machine -> security group -> click on security group ->
edit inbound rule -> add new rule -> add port 8099
Save
(if you get an error here, it might be that you are accessing the application via https instead of http
E.g use
http://13.126.192.54:8099/
and not
https://13.126.192.54:8099/
because we have not configured an SSL certificate for this server)
## SEE HOW WORKDIR ,MAINTAINER AND ENV WORKS IN DOCKERFILE
vi Dockerfile
FROM ubuntu
MAINTAINER akshat<akshu20791@gmail.com>
WORKDIR mydirectory
ENV name=akshat
RUN echo "hello world" >testfile.txt
RUN touch myfile.txt
RUN apt update
RUN apt install apache2 -y
COPY index.html /var/www/html
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
# above, MAINTAINER helps us identify who last maintained this Dockerfile; if you are working on a multi-tier app with multiple Dockerfiles, it helps the DevOps engineer reach the correct developer
# WORKDIR sets the working directory, so whatever commands we execute afterwards happen in that directory by default
# ENV sets a key-value pair for the container
docker build -t myfifthimg .
docker run -dt --name webapp8 myfifthimg
docker exec -it webapp8 /bin/bash
Since our WORKDIR was mydirectory, you will see the two files were created in mydirectory
grep is a command to find matching text in output; e.g. env | grep name will show the ENV variable we set
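A quick sketch of the grep pattern (simulated here with printf so it runs anywhere, not just inside the container):

```shell
# env prints all environment variables, one KEY=value per line;
# grep keeps only the lines matching the given pattern
printf 'name=akshat\nshell=/bin/bash\n' | grep name
# prints: name=akshat
```

Inside the webapp8 container, `env | grep name` works the same way against the real environment.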
###################
PRUNE IN DOCKER
Docker prune commands remove all unused images, containers, volumes, or networks from local storage.
It uses a simple syntax:
docker system prune [OPTIONS]
The options could be:
--all , -a : remove all unused resources (stopped containers, images not associated with any container, networks)
-f : force removal without a confirmation prompt
--volumes: prune volumes
docker system prune --all
docker image prune
docker container prune
## DOCKER COMPOSE
In cases where we are working with an application with a lot of functionality, we have separate teams managing each piece, each with their own Dockerfiles. Suppose in the Flipkart case study we have 20+ Dockerfiles; now if we want to deploy Flipkart, we need to build and run all 20+ Dockerfiles.
Suppose later on the company wants us to migrate the application to another cloud: again we need to run all the Dockerfiles, and if by chance we forget to run any one of them, we will get errors.
In such cases we use Docker Compose, where we can deploy the multi-tier application by writing a docker-compose file that references all the Dockerfiles (services), their ports, and their dependencies.
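As a rough sketch (the service names, paths, and ports below are placeholders, not the actual lab repository's file), a docker-compose file ties the services together like this:

```yaml
version: "3"
services:
  frontend:
    build: ./frontend      # directory containing the frontend Dockerfile
    ports:
      - "3000:80"          # hostport:containerport
  api:
    build: ./api
    ports:
      - "5000:80"
    depends_on:
      - db                 # start the database before the API
  db:
    image: mysql:8         # a prebuilt image instead of a Dockerfile
    environment:
      MYSQL_ROOT_PASSWORD: password
```

A single `docker-compose up -d` then builds and starts every service, instead of running each Dockerfile by hand.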
READING MATERIAL :
https://drive.google.com/file/d/1YW1OpRPGUsh27brU8ixZKzraOmiccZsU/view?usp=drive_link
FOR DOCKER COMPOSE WE WILL CREATE A NEW EC2
MACHINE WITH UBUNTU 22.04 AMI
GO TO AWS CLICK ON LAUNCH INSTANCE
Now simply launch machine in normal way
Connect to the machine
(select the machine -> connect -> in ec2 instance connect -> click
on connect)
sudo su
apt update
apt install docker.io -y
apt install docker-compose -y
apt install git -y
git clone https://github.com/akshu20791/docker-compose-lab-01
(this is a multi-tier application with a frontend, an API, and a backend)
cd docker-compose-lab-01
cd api
vi index.php
(update the line 4 with the public ip of your machine)
cd ..
cd frontend
vi index.html
(update line 12 with the public ip of the machine)
docker-compose up -d
(This will bring the docker-compose stack up in detached mode)
Now go to your machine …select the machine → security → click
on security group → enable inbound rule for port 8080, 5000, 3000
Now copy the public ip
http://publicip:3000 (show the frontend)
http://publicip:5000 (show the api)
http://publicip:8080 (phpMyAdmin - database)
Username: todo_admin
Pass: password
Go
Click on browse
Now lets check if it is visible on application or not
################
# DOCKER NETWORKING : THE WAY CONTAINERS TALK TO EACH OTHER
There are several drivers available by default, providing core network functionality:
1) BRIDGE : the default network driver
2) HOST : for standalone containers; removes the network isolation between the container and the Docker host, and uses the host's networking directly (no port mapping needed)
3) NONE : with this network driver the container cannot talk to anyone; it is isolated
# LAB
docker network ls
## IN THE LAB WE WILL CREATE CONTAINERS AND WE WILL
SEE IF WE ARE ABLE TO ACCESS EACH ONE OF THEM FROM
ONE ANOTHER AND IN WHICH NETWORK THESE
CONTAINERS ARE CREATED
docker run -it --name server1 ubuntu /bin/bash
hostname -i
#this would be the ip address of the container
Press ctrl p and ctrl q
docker inspect bridge
(this command will show all the containers present in the bridge
network )
Lets create one more container, and from that container we will check whether we can access the server1 container:
docker run -it --name akshatcon ubuntu /bin/bash
Now we will try to ping the server1 container from akshatcon. ping is not installed by default, so install iputils first:
apt update
apt install iputils-ping -y
ping ipaddressofserver1container
Press ctrl c
Press ctrl p and ctrl q
We get a response because both containers use the same default bridge network
##Now lets create our own custom network with the name
akshatothernet
docker network ls
docker network create akshatothernet
docker network ls
Lets create a container in the akshatothernet network
docker run -it --network akshatothernet --name myserver ubuntu
/bin/bash
apt update -y
apt install iputils-ping -y
Lets try to ping 172.17.0.2 (the IP address of the server1 container):
ping 172.17.0.2
No response comes, because the two containers are in two different networks
Press ctrl c
Press ctrl p and ctrl q to come out of container
We can connect the server1 container (currently in the bridge network) to akshatothernet:
docker network connect akshatothernet server1
docker inspect akshatothernet
(you will see that the server1 container has also become part of akshatothernet, and that a new network interface is attached to server1)
docker inspect bridge
(See server1 is also present in the bridge network as well)
docker exec -it myserver /bin/bash
ping server1
Ctrl c to stop this execution
Ctrl p and ctrl q to come out of container
## NONE NETWORK
When you create containers in the none network, the container does not have any network connectivity.
USE CASES:
1) Isolation : in some cases you want to run a Docker container completely isolated from the network
2) Security : you might want to perform tasks in an isolated environment, like malware analysis or testing
#lab on none networking
docker run -it --name nonecon --network none alpine sh
hostname -i
ping google.com
See, we are not able to see an IP for the container in the none network, and we are also not able to ping google.com
###############
DOCKER LOGIN (WORKING WITH DOCKERHUB)
Docker Hub simplifies the storage, management, and sharing of Docker images, making it easy to organize and access container images from anywhere. Enhanced security: it runs security checks on images and provides detailed reports on potential vulnerabilities, ensuring safer deployments.
Lets say we want to deploy the containerized app in client machine
PROCESS:
>We will first create a custom image of our container
>We will then push the image to hub.docker.com (it could be our own DockerHub account, and we can later give the client collaborator access; or we can ask the client for an authentication token for their DockerHub account)
>the client will now pull the image and then deploy the container
# YOU CAN LAUNCH TWO UBUNTU MACHINE AND INSTALL
DOCKER IN THAT
IN YOUR MACHINE (SERVICE PROVIDER MACHINE)
docker run -it --name mywebapp ubuntu /bin/bash
touch file1 file2 file3 file4 file5 file6 file7 file8
exit
Now we want to create a custom image of mywebapp container
docker commit mywebapp 30webappimg
docker images
Go to hub.docker.com …if you dont have account create the
account
Account settings
Security
New access token
We will copy these commands and run them on the machine
(when you type the password, nothing will be displayed, so copy-paste carefully; the password is the access token generated previously)
docker push 30webappimg
(we will get an error because we need to prepend our DockerHub username to the image name)
Here we need to tag 30webappimg with
yourdockerhubusername/imagename
My dockerhub username is akshu20791
docker tag 30webappimg akshu20791/30webappimg
docker push akshu20791/30webappimg
Lets see if image is pushed in dockerhub or not
## we will go to a different machine (the client machine) and pull this image
docker login