DOCKER
1. What is Docker and why is it used?
Answer:
Docker is an open-source platform that automates the deployment, scaling, and
management of applications using containerization. It enables developers to package
applications and dependencies into containers that run reliably across different
environments.
✅ Why use Docker?
Consistent environment across development, testing, and production.
Lightweight and faster than virtual machines.
Simplifies CI/CD pipelines.
Resource efficiency, since containers share the host OS kernel.
2. What is a Docker Container?
Answer:
A Docker container is a lightweight, standalone, and executable software package
that includes everything needed to run an application — code, runtime, system
tools, libraries, and settings.
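For example, a container can be started from a public image in a single command (image and names below are illustrative):
docker run -d --name web -p 8080:80 nginx   # start a container in the background
docker ps                                   # list running containers
docker stop web && docker rm web            # stop and remove it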
3. What is a Docker Image? How is it different from a Container?
Answer:
Docker Image: A template with instructions to create a Docker container. It is a
read-only blueprint.
Docker Container: A runtime instance of a Docker image.
Example:
Image: Recipe
Container: Cooked Dish
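Following the analogy, one image (the recipe) can produce many containers (dishes); names below are illustrative:
docker build -t my-app:1.0 .            # build the image once
docker run -d --name app1 my-app:1.0    # first container from the image
docker run -d --name app2 my-app:1.0    # second container from the same image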
4. Explain Docker Architecture.
Answer:
Docker architecture consists of:
Docker Client: Interacts with Docker Daemon.
Docker Daemon (dockerd): Builds, runs, and manages Docker objects.
Docker Images: Read-only templates to create containers.
Docker Registries: Stores Docker images (e.g., Docker Hub).
Docker Containers: Isolated running instances.
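The client/daemon split is visible directly:
docker version   # prints a Client section and a Server (daemon) section
docker info      # daemon-level details: containers, images, storage driver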
5. What is Dockerfile? How is it used?
Answer:
A Dockerfile is a script containing a series of commands and instructions to build
a Docker image automatically.
Example:
FROM python:3.9
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
✅ Usage:
Defines the environment.
Automates image creation.
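The image from the Dockerfile above is then built and run with (tag is illustrative; the published port depends on what app.py listens on):
docker build -t my-python-app .
docker run -d -p 5000:5000 my-python-app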
6. Explain Docker Compose and its use cases.
Answer:
Docker Compose is a tool to define and run multi-container Docker applications
using a YAML file.
Example docker-compose.yml:
version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
✅ Use Cases:
Multi-container applications.
Simplified configuration.
Easy environment replication.
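Typical commands for the file above (newer `docker compose` plugin syntax; the standalone binary uses `docker-compose`):
docker compose up -d      # create and start all services in the background
docker compose ps         # list service containers
docker compose logs web   # view logs of one service
docker compose down       # stop and remove containers and the default network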
7. What is the difference between Docker and Virtual Machines (VMs)?
Answer:
Feature        | Docker                   | Virtual Machine
Startup Time   | Seconds                  | Minutes
Resource Usage | Shares host OS resources | Full OS with dedicated resources
Performance    | Near-native performance  | Overhead due to hypervisor
Portability    | Highly portable          | Less portable
8. What is Docker Swarm? How is it different from Kubernetes?
Answer:
Docker Swarm: Native clustering and orchestration tool for Docker.
Kubernetes: Open-source orchestration tool with broader functionality.
Feature        | Docker Swarm             | Kubernetes
Setup          | Easier                   | Complex
Scalability    | Good for small apps      | Excellent for large-scale apps
Community      | Smaller community        | Larger ecosystem
Load Balancing | Automatic load balancing | External load balancers
9. How do you manage persistent data in Docker?
Answer:
Docker uses Volumes and Bind Mounts for persistent data.
Volume: Managed by Docker.
docker volume create my-volume
docker run -v my-volume:/data my-image
Bind Mounts: Uses host file system.
docker run -v /host/path:/container/path my-image
✅ Use Volumes for portability; Bind Mounts for debugging.
The key differences between Bind Mounts and Volumes in Docker are:
Feature            | Bind Mounts                                  | Volumes
Location           | Any directory on the host system             | Managed by Docker in /var/lib/docker/volumes
Performance        | Can be slower due to filesystem overhead     | Optimized for Docker; better performance
Portability        | Not portable (depends on host path)          | Portable (Docker manages the path)
Use Case           | When you need to access specific host files  | When you want data persistence and isolation
Security           | Less secure; full host access                | More secure; isolated from the host
Backup/Ease of Use | Manual management needed                     | Easier to back up and manage via Docker
10. How do you optimize Docker images?
Answer:
Use multi-stage builds to reduce image size.
Use .dockerignore file to exclude unnecessary files.
Use alpine-based minimal base images.
Avoid using latest tag; specify exact versions.
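A minimal sketch combining these ideas for a Python app (file names, versions, and the /install prefix are assumptions):
# .dockerignore
.git
__pycache__
*.log

# Dockerfile (multi-stage, slim base image, pinned version)
FROM python:3.9-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

FROM python:3.9-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
CMD ["python", "app.py"]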
11. What is Docker Networking? How many network types are there?
Answer:
Docker networking enables containers to communicate with each other or external
systems.
Network Types:
Bridge: Default, isolated network.
Host: Shares host network namespace.
Overlay: For multi-host communication.
None: No network.
Macvlan: Assigns MAC address to containers.
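The networks available on a host can be listed and inspected with:
docker network ls               # bridge, host, and none exist by default
docker network inspect bridge   # subnet, gateway, and attached containers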
12. How do you troubleshoot Docker issues?
Answer:
Check Container Logs:
docker logs <container_id>
Inspect Containers:
docker inspect <container_id>
Monitor Container Stats:
docker stats
Access Running Container:
docker exec -it <container_id> /bin/bash
13. What is Docker Hub?
Answer:
Docker Hub is a cloud-based registry for Docker images. It allows you to push,
pull, and store Docker images.
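A typical push/pull workflow (username and repository names are illustrative):
docker login                              # authenticate to Docker Hub
docker pull nginx:1.25                    # pull a public image
docker tag my-app:1.0 myuser/my-app:1.0   # tag into your Hub namespace
docker push myuser/my-app:1.0             # push to Docker Hub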
14. How do you secure Docker containers?
Answer:
Run containers with least privilege (non-root user).
Use read-only file systems where possible.
Implement Docker Bench for Security.
Limit container resource usage (CPU, Memory).
Enable image scanning (Docker Hub, Trivy).
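Several of these can be applied at run time; a minimal sketch (image name, UID, and limits are assumptions):
# non-root user, read-only root filesystem, resource limits, capabilities dropped
docker run -d --user 1000:1000 --read-only --tmpfs /tmp \
  --memory 512m --cpus 1 --cap-drop ALL my-app:1.0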
15. Explain how you implement CI/CD using Docker.
Answer:
Build: Create Docker image after code commit.
Test: Run automated tests inside containers.
Push: Push the image to a Docker registry.
Deploy: Deploy containers using tools like Jenkins, GitLab CI, or GitHub Actions.
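A hedged sketch of the shell steps such a pipeline stage might run (registry URL, image name, and test command are assumptions):
docker build -t registry.example.com/my-app:${GIT_COMMIT} .           # Build
docker run --rm registry.example.com/my-app:${GIT_COMMIT} npm test    # Test
docker push registry.example.com/my-app:${GIT_COMMIT}                 # Push
# Deploy: pull and restart on the target host, or hand off to an orchestrator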
16. What are Docker Tags, and how do you version your images?
Answer:
Docker tags label images for versioning and easy identification.
docker build -t my-app:1.0 .
docker push my-app:1.0
✅ Best Practice:
Semantic versioning (e.g., v1.0.0).
Avoid using the latest tag in production.
17. How do you handle container orchestration in production?
Answer:
Use Kubernetes or Docker Swarm for high availability.
Implement auto-scaling, load balancing, and health checks.
Use monitoring tools like Prometheus and Grafana.
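As a small Docker Swarm illustration (service name, image, and replica counts are illustrative):
docker swarm init                                              # make this host a manager
docker service create --name web --replicas 3 -p 80:80 nginx  # replicated service
docker service scale web=5                                     # scale out under load
docker service ps web                                          # check task state and placement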
18. What is the difference between COPY and ADD in Dockerfile?
Answer:
COPY: Copies files or directories.
ADD: Copies files and can also extract tar archives or download files.
✅ Use COPY unless you need the additional features of ADD.
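A short Dockerfile illustration (file names and URL are placeholders):
COPY requirements.txt /app/             # plain copy, no extra processing
ADD vendor.tar.gz /app/vendor/          # local tar archive is auto-extracted
ADD https://example.com/file.txt /app/  # ADD can also fetch a remote URL (COPY cannot)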
19. How do you clean up unused Docker resources?
Answer:
docker system prune -f # Removes unused data
docker image prune -a # Removes all unused images (dangling and unreferenced)
docker volume prune # Removes unused volumes
20. How do you handle Docker image versioning in production?
Answer:
Use Immutable Tags (e.g., app:v1.2.0).
Implement Git commit hash for unique versioning.
Keep latest and stable versions separated.
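For example, tagging with both an immutable semantic version and the Git commit hash (image name is illustrative):
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t my-app:v1.2.0 -t my-app:${GIT_SHA} .
docker push my-app:v1.2.0
docker push my-app:${GIT_SHA}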
Example: a multi-stage build for a Java (Maven) application:
FROM maven:3.9.9 AS builder
WORKDIR /usr/src/app
COPY pom.xml .
COPY ./src ./src
RUN mvn clean package

FROM eclipse-temurin:17-jre
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/target/my-app.jar .
ENTRYPOINT ["java", "-jar", "my-app.jar"]
Q2: What is the purpose of a Docker image and a Docker container?
A2: A Docker image is a snapshot of the application and its dependencies, which can
be used to create containers. A Docker container is a running instance of an image.
Q3: How do you build a Docker image from a Dockerfile?
A3: You can build a Docker image from a Dockerfile by running:
docker build -t <image_name>:<tag> .
The . refers to the current directory, which should contain the Dockerfile.
To create a custom bridge network:
docker network create --driver bridge <network_name>
Q11: How would you connect a running container to a network?
A11: To connect a container to an existing network:
docker network connect <network_name> <container_id_or_name>
Q15: How do you mount a volume to a Docker container?
A15: You can mount a volume to a container using the -v option:
docker run -v <volume_name>:/path/in/container <image_name>
Q20: How would you define a multi-container application with Docker Compose?
A20: Here's an example of a docker-compose.yml for a simple web application with a
backend and a database:
version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
If you're running a web application in a Docker container, how would you ensure
that the application persists its data even after the container is removed?
A30: I would use Docker volumes to persist data. For example:
docker run -v /path/on/host:/path/in/container <image_name>
This ensures data is stored outside the container's filesystem.
The difference between the ADD and COPY instructions in a Dockerfile is as follows:
COPY: It is a simpler instruction that copies files or directories from the host
system into the container’s filesystem. It does not perform any additional
processing.
ADD: It works similarly to COPY but with additional features. In addition to
copying files, it can:
Extract tar archives (e.g., .tar, .tar.gz) automatically when added.
Fetch files from remote URLs.
However, this makes ADD less predictable and potentially insecure in certain cases.
CMD vs ENTRYPOINT:
CMD: Provides default arguments or a default command; it can be overridden simply by passing a different command to docker run.
ENTRYPOINT: Defines the main command that always runs when the container starts; it can only be overridden with the --entrypoint flag.
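A quick way to see the interplay (image name is illustrative):
# Dockerfile
ENTRYPOINT ["ping"]
CMD ["localhost"]

# docker run my-ping             -> runs: ping localhost   (CMD supplies default args)
# docker run my-ping google.com  -> runs: ping google.com  (CMD overridden by run args)
# docker run --entrypoint echo my-ping hi  -> ENTRYPOINT replaced only via --entrypoint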
Bridge Network:
Default network for containers when no other network is specified.
Containers on the same bridge network can communicate with each other; traffic to the host or external networks goes through NAT on the host.
Use case: Isolated environments for single-host communication.
Command: docker network create --driver bridge <network_name>
Host Network:
The container shares the host's network namespace, meaning it can directly access
the host’s network interfaces.
Use case: High-performance scenarios where network isolation is not required.
Command: docker run --network host <image_name>
None Network:
No networking is provided to the container, effectively isolating it from any
network.
Use case: For containers that don’t need networking (e.g., for isolated tasks or
security).
Command: docker run --network none <image_name>
Overlay Network:
Allows containers on different Docker hosts to communicate securely over a virtual
network.
Use case: For multi-host communication, such as in Docker Swarm or Kubernetes.
Command: docker network create --driver overlay <network_name>
Macvlan Network:
Assigns a MAC address to each container, making it appear as a physical device on
the network.
Use case: When containers need to be part of the same network as physical devices
or have direct access to the physical network.
Command: docker network create --driver macvlan <network_name>
Distroless vs Alpine:
Alpine: a minimal Linux distribution (a few MB) that still includes a shell and the apk package manager; small images that remain easy to debug and extend.
Distroless: images containing only the application and its runtime dependencies, with no shell or package manager; smaller attack surface, but harder to exec into for debugging.
Your Docker container exits immediately after starting. How would you troubleshoot
and fix the issue?
First, check the logs: docker logs <container_id>
Then inspect the container to see its state, exit code, and any error: docker inspect <container_id>
Common causes:
The container ran a command that completed and exited (common for base images like
alpine, or one-shot scripts).
A script failed due to a missing dependency.
Memory limits exceeded (OOMKilled: true)
Entry point/command syntax is incorrect
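The exit code and OOM flag can be read straight from inspect:
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}} {{.State.Error}}' <container_id>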
2) Follow-up Scenario: What if the app inside the container works fine locally,
but crashes in Docker — what would you check next?
Port Binding Issues
Is the port exposed in the Dockerfile?
Did you map it correctly when running the container?
docker run -p 8080:8080 <image_name>
Environment Variables
Are they set via ENV in the Dockerfile or passed with -e at runtime?
docker run -e NODE_ENV=production ...
File Paths or Volume Mounts
Maybe your app expects a file or folder that isn’t copied into the container
Check COPY instructions and .dockerignore
Permission Issues
Some files might not be executable or accessible inside the container
Might need to chmod in your Dockerfile
Network Connectivity
DNS or localhost behavior may differ in Docker
Use host.docker.internal for host access on some systems
Missing Dependencies
Local has them, Docker image might not (forgot a RUN apt install?)
Imagine your containerized backend service can’t connect to your PostgreSQL
container — both are running via Docker Compose. How would you troubleshoot it?
Docker Compose automatically connects services to the same bridge network, so they can talk to each other using service names as hostnames.
services:
  backend:
    depends_on:
      - db
    environment:
      DB_HOST: db
Make sure you're using the service name, not localhost.
Don't use localhost inside containers; use db if your service is named db.
Check Ports
You don’t need to expose DB ports unless accessing from outside
Containers in the same network can talk via internal ports
3)
You built a Docker image for your Node.js app. When you run the container, it says:
Error: Cannot find module 'express'
even though express is in your package.json. What went wrong, and how would you fix it?
You included express in package.json, but the dependencies were never actually installed, most likely because npm install was never run inside the image. It can also happen if you copied only part of your project (like forgetting to COPY package.json first) or if you accidentally excluded node_modules or package.json via .dockerignore.
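A minimal Dockerfile that avoids this, assuming a standard Node.js layout (base image tag and entry file are assumptions):
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install          # dependencies are installed inside the image
COPY . .
CMD ["node", "app.js"]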
4)
How would you optimize Dockerfile build times during active development?
Docker caches layers. So order matters!
Do this:
COPY package*.json ./
RUN npm install
COPY . .
Avoid COPY’ing everything too early
If you COPY . . too soon, Docker will re-run every layer after it every time any
file changes. Big waste!
Use .dockerignore file 📁
Exclude node_modules, .git, logs, temp files, etc.
5)
How do you remove all stopped containers in one go?
docker container prune
6)
How do you remove all unused images (dangling + unreferenced)?
docker image prune -a
7)
How do you tag a Docker image before pushing it to a registry like Docker Hub?
docker tag <image_name> <username>/<repo_name>:<tag>
8)
How do you remove a Docker image?
docker rmi <image_name_or_id>
9)
You build a Docker image and try to run it, but you get: permission denied: ./start.sh
You’ve confirmed start.sh exists and looks fine. What do you check next? How would
you fix this?
Set proper file permissions in your Dockerfile:
COPY start.sh .
RUN chmod +x start.sh
ENTRYPOINT ["./start.sh"]
10)
You pushed a Docker image to Docker Hub. Your teammate pulls and runs it, but the
app crashes saying missing environment variables.
It worked fine on your machine.
What went wrong? How do you fix it?
Environment variables you had locally on your machine (maybe from .env or shell)
were never passed into the container when running it.
Docker containers run in an isolated environment — so unless you explicitly pass
environment variables, your app won’t get them.
Pass env vars during docker run:
docker run -e DB_HOST=localhost -e DB_USER=admin myimage
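Alternatively, keep the variables in a file and pass them all at once (file contents are illustrative):
# .env
DB_HOST=db
DB_USER=admin

docker run --env-file .env myimage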
11)
How do you see all layers that were used to build a Docker image?
docker history <image_name>
12)
You make a change in your app code — how do you rebuild your Docker image and tag
it as v2?
docker build -t <image_name>:v2 .
13) How do you SSH into the Jenkins server and check whether the jenkins user has been added to the docker group?
sudo usermod -aG docker jenkins   # add the jenkins user to the docker group
id -nG jenkins                    # verify the group membership
If Jenkins is on a Linux VM (like on AWS EC2, GCP Compute Engine, etc.):
You’d use:
ssh -i /path/to/private-key.pem username@jenkins-server-ip
For example, on AWS EC2:
ssh -i ~/keys/devops-key.pem ec2-user@13.235.10.100
-i: Path to your private key file
username: Typically ec2-user, ubuntu, or centos depending on OS
jenkins-server-ip: The public IP or hostname of the Jenkins server
☁️ If Jenkins is inside a GCP VM:
Same idea:
gcloud compute ssh jenkins-vm --zone us-central1-a
(Assuming you're using Cloud SDK and have access to that instance)
🐳 If Jenkins is running in a Docker container:
You’d first SSH into the host machine, then:
docker exec -it jenkins-container-name /bin/bash
🧠 Pro Tip (Interview angle):
You can add:
"I make sure to have SSH access enabled and security group/firewall rules
configured to allow SSH (port 22). I also always avoid hardcoding keys in pipelines
and use a secure method like a secrets manager."