Docker Material

🐳 What is Docker?

"Docker is an open-source platform that enables you to build, package, and run
applications in lightweight, portable containers.

A Docker container includes everything the app needs — code, runtime,


libraries, and dependencies — so it runs the same everywhere."

🧠 Why is Docker Important?


"Docker solves the classic problem of 'It works on my machine' by making apps
portable

and consistent across development, testing, and production environments."

| Feature | Why It Matters |
| ----------------------- | ------------------------------------------------------------ |
| **Portability** | Containers run the same on any system — laptop, cloud, CI/CD |
| **Isolation** | Each app runs in its own isolated container (no conflicts) |
| **Speed** | Containers are lightweight and fast to start |
| **Resource efficiency** | Uses fewer resources than full virtual machines |
| **DevOps-friendly** | Works well with CI/CD, microservices, Kubernetes, etc. |

🔧 Real-World Use Case Example:

“In my projects, we use Docker to containerize our backend, frontend, and database — making local development easy and ensuring consistent deployments across dev, staging, and prod using CI/CD pipelines.”

✅ Summary (1-liner):

"Docker is a platform that lets you package and run applications as portable,
isolated containers —

ensuring consistency, speed, and flexibility across environments."


📦 What is a Docker Container?
"A Docker container is a lightweight, standalone, and executable unit that
includes

everything needed to run an application — such as the code, runtime, libraries,


environment variables,

and config files — isolated from the host system."

🧠 How it Works (Simple View):

A container is created from a Docker image and runs as a process on the host
OS using the Linux kernel.

It shares the OS but stays isolated using:

Namespaces (for isolation)

cgroups (for resource limits)

| Feature | Description |
| ---------------- | ------------------------------------------------- |
| **Lightweight** | Starts in milliseconds, minimal resource overhead |
| **Portable** | Runs the same across all environments |
| **Isolated** | No conflict with other containers or the host |
| **Reproducible** | Consistent results regardless of where it runs |
| **Ephemeral** | Can be created, destroyed, and recreated quickly |

docker run nginx

➡️ This starts an nginx web server inside a container — with its own config, file system, and ports.

✅ Summary (1-liner):

"A Docker container is a lightweight, isolated environment that runs your


application with all
its dependencies, ensuring consistency across systems."

🆚 Docker vs Virtual Machines (VMs)


"Docker containers and virtual machines both provide isolated environments,
but Docker is lightweight and

shares the host OS kernel,

while VMs run a full guest OS, making them heavier and slower to start."

| Feature | **Docker Containers** | **Virtual Machines** |
| ------------------- | ----------------------------------- | -------------------------------------------------- |
| **Architecture** | Shares host OS kernel | Has full OS (guest OS + kernel) |
| **Startup Time** | Starts in **seconds** | Takes **minutes** |
| **Size** | Few **MBs** (lightweight) | **GBs** (heavy) |
| **Resource Usage** | Low (no OS overhead) | High (runs full OS per VM) |
| **Isolation Level** | Process-level (less overhead) | Hardware-level (stronger isolation) |
| **Use Cases** | Microservices, CI/CD, dev workflows | Full OS testing, legacy apps, secure multi-tenancy |

Visual Analogy:

🐳 Docker: Apartment units in one building — share the structure (kernel), isolated by design.

VM: Separate houses — each with its own foundation (OS), more isolated but more expensive.

🔧 Real-World Insight:
“In my experience, we use Docker for microservices, CI/CD pipelines, and rapid
development.

VMs are used where strong isolation or a full OS is required — like running
Windows inside a Linux host.”

✅ Summary (1-liner):

"Docker containers are lightweight, faster, and share the host OS, while VMs are
heavier, slower,

and provide stronger isolation by running their own OS."

🌟 Benefits of Using Docker


"Docker brings consistency, portability, and efficiency to application
development and deployment.

It enables teams to build once and run anywhere — across dev, staging, and
production."

🚀 1. Portability

"Docker containers run the same way across any environment —

local machine, cloud, or CI/CD pipeline — eliminating the classic 'works on my


machine' issue."

⚡ 2. Lightweight and Fast

Starts in seconds (vs minutes for VMs)

Uses less memory and CPU

No full OS overhead

🎯 3. Consistency and Reproducibility

Same Docker image can be used in all environments

No environment drift between dev, test, and prod

🧱 4. Isolation

Each container runs in its own process space

No conflict between app versions, dependencies, or libraries


🔁 5. Easier CI/CD Integration

Containers work great with Jenkins, GitHub Actions, GitLab CI

Fast and repeatable deployments

Simplifies rollback and testing

📦 6. Simplified Dependency Management

All dependencies are defined in a Dockerfile

No need to install tools/libraries on host

🔁 7. Scalability & Microservices Support

Docker makes it easy to containerize and scale microservices

Works seamlessly with Kubernetes for orchestration

🔐 8. Improved Security

Containers can run with minimal access

Images can be scanned and signed

Supports isolation via namespaces, cgroups

✅ Summary (1-liner):

"Docker improves speed, portability, and consistency across environments, m

aking it ideal for modern DevOps, microservices, and cloud-native workflows."

📦 What is Docker Hub?


"Docker Hub is a cloud-based registry where Docker users can store, share, and
manage container images.

It’s the default registry used by Docker when you run

docker pull or docker push without specifying a custom registry."


# Log in to Docker Hub

docker login

# Pull image from Docker Hub

docker pull nginx

# Tag and push custom image

docker tag my-app myusername/my-app:v1

docker push myusername/my-app:v1

🧠 Real-World Use Case:

“In my projects, we build custom images and push them to Docker Hub from our
CI/CD pipeline.

Then Kubernetes pulls them during deployment.”

✅ Summary (1-liner):

"Docker Hub is the default image registry for Docker,

used to store and share container images, both publicly and privately."

What is a Docker Image?


"A Docker image is a lightweight, standalone, and read-only template that
contains the application code, runtime, libraries, and everything needed to run a
container."

🧠 Key Points:

Immutable: Once built, the image doesn't change.

Layered: Each instruction in a Dockerfile creates a new image layer.

Reusable: One image can spin up multiple identical containers.


# Dockerfile
FROM node:18
COPY . /app
WORKDIR /app
RUN npm install
CMD ["npm", "start"]

# Build image
docker build -t my-app-image .

# Run container from image
docker run -d -p 3000:3000 my-app-image

🧠 Real-World Example:

“We package our backend service as a Docker image using a multi-stage build. The CI pipeline pushes it to Docker Hub, and Kubernetes pulls the same image into staging and prod environments.”

✅ Summary (1-liner):

"A Docker image is a snapshot of everything needed to run an app —

it's the foundation from which Docker containers are created."


📄 Basic Dockerfile for NGINX
# Use the official NGINX base image from Docker Hub
FROM nginx:latest

# Copy the static site into NGINX's web root
COPY ./html /usr/share/nginx/html

# Document the port the server listens on
EXPOSE 80
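To try it out (a quick sketch; assumes an ./html folder next to the Dockerfile, and the tag my-nginx is illustrative):

# Build the image and serve the static site on host port 8080
docker build -t my-nginx .
docker run -d -p 8080:80 my-nginx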

🆚 Docker Image vs Docker Container


"A Docker image is the blueprint for an application, while a Docker container is a
running instance of that image."

📦 Docker Image

Read-only template with everything needed to run an app:

App code

Runtime

Dependencies

Config

Built using a Dockerfile

Stored in registries like Docker Hub, ECR, or GCR

You can create many containers from a single image

🚀 Docker Container

A running, isolated process created from an image

It’s writable (unlike the image)

Has its own:

Filesystem

Network

PID namespace

Runs until stopped or deleted

# Build image from Dockerfile
docker build -t my-app .
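Running the image then produces a container (a sketch; the container names my-app-1 and my-app-2 are illustrative):

# Run a container from the image built above
docker run -d --name my-app-1 my-app

# The same image can back many containers
docker run -d --name my-app-2 my-app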

What is a Dockerfile?
"A Dockerfile is a script containing a set of instructions that Docker uses to build
an image.

It defines what goes inside the image — like the base OS, application code,
environment variables, and commands to run."

📦 Why is a Dockerfile important?


Automates image creation

Ensures consistent builds

Easily versioned with code

Supports multi-stage builds for optimized images

| Instruction | Purpose |
| ------------ | ----------------------------------------- |
| `FROM` | Base image (e.g. `nginx`, `ubuntu`) |
| `COPY` | Copy files from host to image |
| `RUN` | Execute commands (e.g., install packages) |
| `CMD` | Default command when container starts |
| `ENTRYPOINT` | Preferred startup script for containers |
| `ENV` | Set environment variables |
| `WORKDIR` | Set working directory inside container |
| `EXPOSE` | Document port the container listens on |


# Use official Node image
FROM node:18

# Set working directory
WORKDIR /app

# Copy app files
COPY . .

# Install dependencies
RUN npm install

# Expose port
EXPOSE 3000

# Run app
CMD ["npm", "start"]

✅ Summary (1-liner):

"A Dockerfile is a step-by-step recipe for building a Docker image — it defines


everything your container needs to run."

🧩 What is Docker Compose?


"Docker Compose is a tool that lets you define and run multi-container Docker
applications using a YAML file.

It’s used to manage services, networks, and volumes for complex apps with just
a single command."

🧠 Why Use Docker Compose?

Simplifies running multi-container apps (e.g., web + database)

Eliminates the need for long docker run commands

Great for local development, testing, and CI/CD


version: '3'

services:
  web:
    image: nginx
    ports:
      - "8080:80"

  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db

  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: password

This setup includes an nginx web server, a custom app container, and a MySQL database — all started together with one command:

docker-compose up            # Start all services
docker-compose down          # Stop and remove containers and networks (add --volumes for volumes)
docker-compose logs          # View logs from all services
docker-compose exec app sh   # Open a shell inside the app service

🧠 Real-World Use Case:


“We use Docker Compose during development to spin up our backend, frontend, and database in one go — it helps mimic production locally without Kubernetes.”

✅ Summary (1-liner):

"Docker Compose lets you define and run multi-container apps using a YAML file

making local dev and service orchestration fast and easy."

🧱 Main Components of Docker


"Docker is made up of several core components that work together to build,
ship, and run containers efficiently."

🔧 1. Docker Engine

The heart of Docker — a client-server application that runs and manages containers.

Docker Daemon (dockerd)

Runs in the background and handles building, running, and managing containers.

Docker Client (docker)

The CLI tool you use to interact with the Docker Daemon.

🧊 2. Docker Images

Read-only templates used to create containers.

Built using Dockerfiles and stored in registries.

📦 3. Docker Containers

Running instances of Docker images.

Lightweight, isolated environments for applications.


📄 4. Dockerfile

A text file with instructions to build Docker images.

It defines the base image, app code, dependencies, and startup commands.

📦 5. Docker Registries

Where Docker images are stored and shared.

Docker Hub (default public registry)

Private Registries (like AWS ECR, GitHub Container Registry)

🔁 6. Docker Compose

Tool to manage multi-container applications using a YAML file.

Useful for local dev and microservices.

✅ Summary (1-liner):

"Docker's main components include the Docker Engine (client & daemon),
Docker images, containers,

Dockerfile, registries, and Docker Compose for multi-container setups."

⚙️ What is the Docker Daemon?

"The Docker Daemon (dockerd) is the background service in Docker that manages images, containers, networks, and volumes. It listens for Docker API requests from the client and handles the container lifecycle."

🔁 How it works:

When you run:

docker run nginx


The Docker client sends the command to the Docker Daemon

The Daemon:

Pulls the nginx image (if needed)

Creates the container

Runs it

🔐 Notes:

It requires root privileges or must run as part of the Docker group.

Runs on the host system (Linux, Windows, or Mac).

✅ Summary (1-liner):

"The Docker Daemon is the core service that handles all Docker operations —

from running containers to managing images, volumes, and networks."

🐳 Docker CLI

Docker CLI (Command-Line Interface) is the main tool used to interact with the
Docker engine.

It allows users and automation scripts to build, run, manage, and monitor
Docker containers and images directly from the terminal.

🐳 Essential Docker CLI Commands

Here are the most commonly used Docker CLI commands you should know for a
DevOps role:

1. Image Management

docker build -t <image_name>:<tag> .

Build an image from a Dockerfile in the current directory.

docker pull <image_name>:<tag>


Download an image from a Docker registry.

docker push <image_name>:<tag>

Upload an image to a Docker registry.

docker images

List all images stored locally.

docker rmi <image_id_or_name>

Remove an image from local storage.

2. Container Management

docker run -d -p <host_port>:<container_port> --name <container_name> <image_name>

Run a container in detached mode, mapping ports and naming the container.

docker ps

List running containers.

docker ps -a

List all containers (running and stopped).

docker stop <container_id_or_name>

Stop a running container gracefully.

docker kill <container_id_or_name>

Force stop a container immediately.


docker rm <container_id_or_name>

Remove a stopped container.

docker exec -it <container_id_or_name> /bin/bash

Open an interactive shell inside a running container.

docker logs <container_id_or_name>

View logs of a container.

docker inspect <container_id_or_name>

Get detailed information about a container or image.

docker restart <container_id_or_name>

Restart a stopped or running container.

3. Networking

docker network ls

List all Docker networks.

docker network create <network_name>

Create a new Docker network.

docker network inspect <network_name>

View details of a Docker network.

docker network connect <network_name> <container_name>

Connect a container to a network.

docker network disconnect <network_name> <container_name>


Disconnect a container from a network.

4. Volumes & Storage

docker volume ls

List all volumes.

docker volume create <volume_name>

Create a new volume.

docker volume inspect <volume_name>

Get details of a volume.

docker volume rm <volume_name>

Remove a volume.

5. System & Utilities

docker system df

Show Docker disk usage.

docker system prune

Clean up unused data (containers, networks, images, volumes).

docker stats

Show real-time container resource usage (CPU, memory, etc.).

docker login

Authenticate to a Docker registry.


docker logout

Log out from a Docker registry.

6. Docker Compose (Multi-container management)

docker-compose up

Start containers defined in a docker-compose.yml.

docker-compose up -d

Run containers in detached mode.

docker-compose down

Stop and remove containers, networks created by Compose.

docker-compose logs

View logs for all services.

docker-compose exec <service_name> <command>

Run a command inside a running service container.

7. Tagging and Image Versions

docker tag <source_image>:<tag> <target_image>:<tag>

Create a new tag for an existing image.

8. Dockerfile & Build Options

docker build --no-cache -t <image_name> .

Build image without using cache.


docker build --target <stage_name> -t <image_name> .

Build a specific stage in a multi-stage Dockerfile.

9. Container Resource Limits

docker run -m 512m --cpus="1.5" <image>

Run a container with memory limit of 512MB and 1.5 CPU cores.

10. Inspect & Debugging

docker diff <container_id>

Show changes to files/directories on a container’s filesystem.

docker cp <container_id>:<path> <host_path>

Copy files/folders from container to host.

docker cp <host_path> <container_id>:<path>

Copy files/folders from host to container.

11. Events & History

docker events

Stream real-time events from the Docker daemon (container start, stop, etc.).

docker history <image_name>

Show the layer history of an image.

12. Swarm Mode (Docker Orchestration)

docker swarm init

Initialize a Docker Swarm cluster.

docker swarm join <manager_ip>:<port>


Join a node to a Swarm cluster.

docker node ls

List nodes in a Swarm cluster.

docker service create

Deploy a service in Swarm mode.

docker service ls

List running services.

13. Security & User Management

docker login / docker logout

Authenticate and log out from registries.

docker trust sign <image>

Sign an image for Docker Content Trust (security).

⚙️ Docker REST API

The Docker REST API is an HTTP-based API that allows clients to interact programmatically with the Docker daemon. It exposes Docker’s functionality (like managing containers, images, networks, and volumes) via standard RESTful endpoints.

🔍 How it Works:

Client-Server Model:

The Docker daemon (dockerd) acts as a server listening on a Unix socket (/var/run/docker.sock) or a TCP port.

Clients (like Docker CLI, GUIs, or custom applications) send HTTP requests to this API.
RESTful Endpoints:

Each Docker resource (containers, images, volumes, etc.) is accessible via REST
endpoints such as:

GET /containers/json — list containers

POST /containers/create — create a container

POST /containers/{id}/start — start a container

DELETE /containers/{id} — remove a container

JSON Payloads:

Requests and responses use JSON format for data exchange.

Authentication & Security:

When exposed over TCP, TLS can be enabled for secure communication.
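For example (a sketch; assumes curl is available and the daemon listens on the default Unix socket, and the TCP host/port are illustrative):

# List running containers via the REST API (roughly what `docker ps` does)
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# Same query against a TCP-exposed daemon (TLS options omitted for brevity)
curl http://<daemon_host>:2375/containers/json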

🧩 Why it Matters in DevOps:


Enables automation and integration with custom tools, dashboards, or
orchestration systems.

The Docker CLI itself is just a client that uses the REST API under the hood.

Useful for remote Docker daemon management and advanced workflows beyond CLI capabilities.

✅ Summary:

Docker REST API exposes Docker daemon functionality over HTTP, enabling programmatic and remote management of containers and Docker resources.

📦 Docker Objects

Docker Objects are the fundamental building blocks that Docker uses to create, run, and manage containerized applications. Understanding these objects is key to effectively working with Docker.

🔑 Main Docker Objects:

Images

Immutable templates that contain your application code, runtime, libraries, and
dependencies.

Images are used to create containers.

Built from Dockerfiles.

Stored in registries (Docker Hub, ECR).

Containers

Running instances of Docker images. Containers are lightweight and isolated environments where your applications run.

Created from images using docker run.

Can be started, stopped, restarted, or removed.

Volumes

Persistent storage objects used to save data outside the container’s writable
layer.

Volumes help retain data even when containers are removed or recreated.

Managed by Docker and can be shared between containers.

Networks

Docker objects that enable communication between containers and with the
outside world.

Docker supports multiple network drivers like bridge, host, and overlay.

Containers on the same network can communicate via container name or IP.

Dockerfile (Not exactly an object, but important)

A text file with instructions to build Docker images.

✅ Summary:

Docker objects include images, containers, volumes, and networks — the core components to build, run, store, and connect containerized applications.

⚙️ Docker Daemon and Docker CLI

Docker Daemon (dockerd):

The Docker daemon is the background service that runs on the host machine.

It manages Docker objects such as images, containers, networks, and volumes.


The daemon listens for Docker API requests and handles container lifecycle operations like building, running, and distributing containers.

It also manages storage and networking for containers.

Docker CLI (docker):

The Docker CLI is the command-line interface tool that users interact with to
communicate with the Docker daemon.

When you run commands like docker run or docker build, the CLI sends REST API requests to the daemon, which performs the actual work.

How They Work Together:

The Docker CLI acts as a client, sending commands to the Docker daemon via REST API calls (usually over a Unix socket or TCP).

The daemon processes these requests and performs tasks like creating containers, pulling images, or managing networks.

This client-server model separates the user interface (CLI) from the core Docker engine (daemon), allowing remote management and automation.


✅ Summary:

Docker CLI is the user-facing tool sending commands, while Docker Daemon is the background service executing those commands and managing Docker resources.

🐳 How Docker Achieves Lightweight Virtualization

Docker achieves lightweight virtualization by using OS-level virtualization instead of traditional hardware-level virtualization. It leverages Linux kernel features to isolate applications in containers, making them much more efficient and lightweight compared to Virtual Machines (VMs).

🔧 Key Technologies Behind Docker's Lightweight Virtualization:

Namespaces:

Provide isolation for container processes, network, file systems, and users.

Each container has its own PID, mount, network, and IPC space.

Control Groups (cgroups):

Limit and allocate system resources like CPU, memory, and disk I/O to
containers.

Prevent one container from consuming all system resources.

Union File Systems (e.g., OverlayFS):

Allow Docker to use layered filesystems, making containers fast and space-efficient.

Only changes are stored in a new layer; base layers are reused.

No Guest OS:
Unlike VMs, Docker containers share the host OS kernel, avoiding the overhead
of booting full operating systems.

This leads to faster startup times and smaller footprints.

Container Runtime (containerd / runc):

Executes container processes and handles image management and low-level container operations.

✅ Summary:

Docker uses OS-level virtualization with namespaces, cgroups, and layered filesystems to run isolated, lightweight containers that share the host OS kernel — making them faster and more efficient than VMs.

🧱 Role of Namespaces in Docker


Namespaces in Docker are a fundamental Linux kernel feature used to isolate containers from each other and from the host system. They ensure that each container has its own independent view of system resources like processes, networking, file systems, and more — creating a secure and isolated environment.

🎯 Why Namespaces Matter in Docker:


They enable multi-tenant container environments with strong process and
resource isolation.

Critical for security and containment, preventing one container from interfering
with another.

Help simulate a full operating system experience within containers without running an entire OS.

✅ Summary:

Namespaces isolate containers at the kernel level, giving each container its own separate view of system resources like processes, network, and file systems.
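A quick way to see namespace isolation in action (a sketch; alpine is just a convenient small image):

# PID namespace: ps inside the container sees only the container's own processes
docker run --rm alpine ps aux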

🧩 Purpose of cgroups in Docker


Control Groups (cgroups) are a key Linux kernel feature used by Docker to limit, prioritize, and isolate resource usage (like CPU, memory, disk I/O, and network) for containers. While namespaces provide isolation, cgroups enforce resource control.

✅ Summary:

Cgroups let Docker control and limit container resource usage like CPU, memory, and I/O, ensuring fair distribution and system stability.
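For example (a sketch; nginx and the name capped are illustrative):

# Cap the container at 256 MB of RAM and half a CPU core via cgroups
docker run -d --name capped -m 256m --cpus 0.5 nginx

# Confirm the limits in the live resource view
docker stats capped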

🚀 docker run vs docker start


Both docker run and docker start are used to launch containers, but they serve different purposes depending on whether the container already exists.

🆕 docker run – Create & Start

docker run -d -p 80:80 nginx

🔁 docker start – Restart Existing

Purpose: Starts an already created (stopped) container.

Usage: Used when a container exists but is currently stopped.

It does not allow changing container configuration.

docker start my_nginx


✅ Summary:

docker run creates and starts a new container, while docker start restarts an existing one without changing its configuration.
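Putting the two together (a sketch; my_nginx is an illustrative container name):

docker run -d --name my_nginx -p 80:80 nginx   # first time: create and start
docker stop my_nginx                           # container still exists, but is stopped
docker start my_nginx                          # later: restart the same container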

docker stop <container_id_or_name>   # Stop a running container gracefully

The docker exec command is used to run a new command inside a running container without restarting it. It's commonly used for debugging, administration, or performing tasks like checking logs, installing packages, or exploring the container’s environment.

docker exec [OPTIONS] <container_name_or_id> <command>

🔧 Examples:

Access container shell:

docker exec -it my_container /bin/bash

-it gives you an interactive terminal session inside the container.

Run a one-time command:

docker exec my_container ls /app

Lists the contents of the /app directory inside the container.

Check environment variables:

docker exec my_container printenv

Common Flags:

| Flag | Description |
| -------- | ---------------------------------- |
| `-i` | Keep STDIN open |
| `-t` | Allocate a pseudo-TTY |
| `--user` | Run the command as a specific user |
| `-e` | Pass environment variables |

✅ Summary:

docker exec lets you run commands inside an already running container — useful for debugging, inspections, or administrative tasks without restarting the container.

🔍 docker inspect works on any Docker object:

docker inspect my_container                                        # Full low-level JSON for a container
docker inspect -f '{{ .NetworkSettings.IPAddress }}' my_container  # Extract a single field with a Go template
docker inspect my_image
docker inspect my_network
docker inspect my_volume

✅ Summary:

docker inspect provides comprehensive, low-level details about Docker objects, aiding in debugging, auditing, and automation by returning structured JSON data.

Remove Dangling Images (Untagged Layers):

docker image prune

Remove All Unused Images:

docker image prune -a

The -a or --all flag extends the pruning to remove all images not referenced by
any container, not just dangling ones.

docker system prune -a --volumes

This command removes:

All stopped containers

All unused networks

All dangling and unused images

All unused volumes (because of the --volumes flag)

⚠️ Caution:

Before running these commands, ensure that you do not need the images,
containers, or volumes that will be removed.

To bypass confirmation prompts, add the -f or --force flag

docker image prune -a -f

✅ Summary:

Use docker image prune to remove dangling images, docker image prune -a to remove all unused images, and docker system prune -a --volumes for a comprehensive cleanup of Docker resources. Always ensure that the resources being removed are no longer needed.

🧑‍💻 How to Run a Docker Container Interactively


Running a Docker container interactively allows you to access its shell and execute commands directly, which is essential for tasks like debugging, testing, or manual configuration.

🔧 Command to Run a Container Interactively:

docker run -it <image_name> <shell>

docker run -it ubuntu /bin/bash

This command starts an Ubuntu container and opens an interactive Bash shell.

✅ Summary:

Use docker run -it <image> <shell> to start a container in interactive mode, providing direct access to its shell for real-time command execution.

Can we use Docker Compose for production?

Yes, Docker Compose can be used in production environments, particularly for small to medium-scale applications running on a single host. However, for larger, more complex deployments requiring high availability, scalability, and advanced orchestration features, tools like Kubernetes are generally more suitable.

✅ When to Use Docker Compose in Production

Suitable Scenarios:

Single-Host Deployments: Applications running on a single server without the need for distributed orchestration.

Modest Resource Requirements: Applications with limited resource demands that don't require dynamic scaling.

Simplified Management: Environments where ease of setup and maintenance is prioritized over advanced features.

Advantages:

Consistent Environments: Ensures parity between development, staging, and production environments.

Simplified Configuration: Uses a single YAML file to define and manage multi-container applications.

Quick Deployment: Facilitates rapid deployment and iteration cycles.


⚠️ Limitations of Docker Compose in Production

Challenges:

Lack of High Availability: No built-in support for automatic failover or redundancy.

Manual Scaling: Scaling services requires manual intervention and is limited to a single host.

Limited Monitoring and Logging: Requires additional tools for comprehensive monitoring, logging, and alerting.

No Built-in Load Balancing: Does not provide native load balancing across multiple containers or hosts.

Considerations:

Single Point of Failure: Running all services on a single host can lead to a
complete outage if the host fails.

Security Concerns: Requires careful configuration to ensure secure communication between services and proper isolation.

🔄 Alternatives for Production Environments

For applications requiring advanced features, consider the following alternatives:

Kubernetes: Offers robust orchestration capabilities, including automatic scaling, self-healing, and rolling updates.

Docker Swarm: Provides native clustering and orchestration features, though it's less feature-rich compared to Kubernetes.

Bunnyshell: Allows importing Docker Compose configurations and deploying them to Kubernetes clusters, bridging the gap between development and production environments.

📝 Summary

Docker Compose is suitable for production use in specific scenarios, particularly for simple, single-host applications with modest resource requirements. However, for larger, more complex applications that demand high availability, scalability, and advanced orchestration features, transitioning to platforms like Kubernetes is advisable.

Docker Compose and Docker Swarm are both tools for managing multi-container applications, but they serve different purposes and are suited for different environments.

🧩 Docker Compose

Purpose: Define and run multi-container applications on a single host.

Key Features:

Single-Host Deployment: Ideal for local development and testing environments.

YAML Configuration: Uses docker-compose.yml to define services, networks, and volumes.

Simplified Management: Easily start, stop, and rebuild services with simple commands.

No Built-In Orchestration: Lacks features like automatic scaling, load balancing, and high availability.

Use Cases:

Local development environments.

Small-scale applications without the need for clustering or high availability.

Docker Swarm

Purpose: Orchestrate and manage containerized applications across a cluster of machines.

Key Features:

Multi-Host Deployment: Manages a cluster of Docker nodes as a single virtual system.

Built-In Orchestration: Supports service discovery, load balancing, scaling, and rolling updates.

High Availability: Automatically reschedules failed containers on healthy nodes.

Secure Communication: Provides encrypted communication between nodes.

Use Cases:

Production environments requiring scalability and high availability.

Applications that need to run across multiple servers or data centers.

✅ Summary

Docker Compose is best suited for defining and running multi-container applications on a single host, making it ideal for development and testing environments. Docker Swarm, on the other hand, is designed for orchestrating and managing containers across multiple hosts, providing features like load balancing, scaling, and high availability, which are essential for production environments.

docker-compose down

This command stops and removes all containers defined in your docker-compose.yml file, along with the default network.

docker-compose down --volumes --rmi all

--volumes: Removes named volumes declared in the volumes section of the Compose file and anonymous volumes attached to containers.

--rmi all: Removes all images used by any service.


docker-compose down --volumes --rmi all --remove-orphans

The docker-compose rm command is used to remove stopped service containers. By default, it does not remove volumes.

To remove anonymous volumes attached to containers, you can use the -v flag:

docker-compose rm -v

If you have containers created with docker-compose run, they are considered one-off containers and are not removed by docker-compose down.

To remove them, use the docker-compose rm command:

docker-compose rm

This command will prompt you to confirm the removal of the one-off containers.

By using these commands, you can effectively remove all services and containers managed by Docker Compose, along with associated networks, volumes, and images as needed.

Docker networking enables containers to communicate with each other, the host system, and external networks. Docker provides several built-in network drivers, each designed for specific use cases.

🔌 Docker Network Drivers

Bridge (Default)

Use Case: Isolated communication between containers on the same host.

Description: Creates a private internal network on the host. Containers connected to this network can communicate with each other using private IP addresses.

Example: docker run --network=bridge nginx


Host

Use Case: High-performance networking where containers use the host's network stack.

Description: Removes network isolation between the container and the host. The container shares the host's IP address and ports.

Example: docker run --network=host nginx

None

Use Case: Completely isolated containers with no network access.

Description: Disables all networking for the container. Useful for security or
custom networking setups.

Example: docker run --network=none nginx

Overlay

Use Case: Communication between containers across multiple Docker hosts.

Description: Creates a distributed network among multiple Docker daemons, enabling swarm services to communicate securely.

Example: Used in Docker Swarm setups.

Macvlan

Use Case: Assigning a MAC address to a container, making it appear as a physical device on the network.

Description: Allows containers to have their own MAC addresses and appear directly on the physical network. Useful for legacy applications or network monitoring.

Example: docker network create -d macvlan ...

IPvlan

Use Case: Assigning IP addresses to containers without MAC addresses.

Description: Provides full control over IPv4 and IPv6 addressing. Useful in
environments with strict network policies.

docker network ls

docker network inspect <network_name>

docker network create --driver <driver_name> <network_name>

docker network connect <network_name> <container_name>

docker network rm <network_name>

docker network prune   # Remove all unused networks
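A common pattern is a user-defined bridge so containers can reach each other by name (a sketch; app-net, db, web, and the password are illustrative):

docker network create app-net
docker run -d --name db --network app-net -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name web --network app-net nginx
# "web" can now reach the database at the hostname "db"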

In Docker, the host network driver allows a container to share the host's network stack directly, bypassing Docker's network isolation. This means the container uses the host's IP address and network interfaces, enabling it to communicate with external networks as if it were running directly on the host.

📦 What Are Docker Volumes?


Docker volumes are storage mechanisms managed by Docker that enable data to persist independently of container lifecycles. By default, Docker stores volume data in a specific directory on the host machine, such as /var/lib/docker/volumes/ on Linux systems.

Types of Docker Volumes

Named Volumes

Description: User-defined volumes that can be easily referenced by name and reused across multiple containers.

Use Case: Ideal for persisting data that needs to be shared or retained across container restarts.

Example:

docker volume create my_volume

docker run -v my_volume:/app/data my_container

Anonymous Volumes

Description: Volumes created without a specific name, often used for temporary
data.

Use Case: Suitable for scenarios where data persistence isn't required beyond
the container's lifecycle.

Example:

docker run -v /app/data my_container

Bind Mounts

Description: Mounts a file or directory from the host machine into the container.

Use Case: Useful when you need direct access to host files or directories from
within the container.

Example:

docker run -v /host/path:/container/path my_container

tmpfs Mounts

Description: Mounts a temporary filesystem in the host system’s memory.

Use Case: Ideal for storing non-persistent data that shouldn't be written to disk,
enhancing performance.

Example:

docker run --tmpfs /app/tmp my_container

docker volume create my_volume

docker volume ls
docker volume inspect my_volume

docker volume rm my_volume

docker volume prune

Best Practices

Use Named Volumes for Persistent Data: They are easier to manage and can be
shared across containers.

Avoid Storing Data in Containers: Data stored in a container's writable layer is lost when the container is removed.

Regularly Prune Unused Volumes: Helps in freeing up disk space.

Use Bind Mounts for Development: Allows real-time code changes without
rebuilding the image.

In Docker, bind mounts allow a container to access and interact with files or directories from the host machine. This is achieved by mounting a specific path from the host into the container, enabling real-time synchronization between the two environments.

Usage Syntax:

docker run --mount type=bind,source=/host/path,target=/container/path my_image

docker run -v /host/path:/container/path my_image

Common Use Cases:

Development Environments: Facilitates real-time code changes without rebuilding the image.

Configuration Management: Allows containers to access configuration files stored on the host.

Data Sharing: Enables sharing of logs or other data between the host and
container.

Considerations:

Security Risks: Since containers can access host files, there's potential for unintentional modifications. It's advisable to use read-only mounts when appropriate (see the example after this list).

Portability: Bind mounts are host-specific, which can lead to issues when moving
containers between different environments.

Overriding Container Data: Mounting over existing directories in the container can obscure pre-existing data.
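A read-only bind mount looks like this (a sketch; the paths and image name are illustrative):

# The :ro suffix makes the mounted path read-only inside the container
docker run -v /host/config:/etc/app/config:ro my_image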

Comparison with Volumes:

While bind mounts are excellent for development due to their simplicity and real-time synchronization, Docker volumes are preferred in production environments. Volumes are managed by Docker, offer better isolation, and are more portable across different systems.

📁 What are Bind Mounts in Docker?


Bind mounts let you link a specific file or folder from your local machine (host)
into a Docker container.

Any changes made on the host or inside the container reflect immediately on
both sides.

This is useful for sharing source code during development or persisting logs.

docker run -v /host/path:/container/path image-name

Key points:

The host path must already exist.

It depends on the host’s file system and permissions.

Great for development, but less portable than Docker volumes.

Summary: Bind mounts connect a folder or file from your computer directly into the container for easy sharing and editing.


⚖️ Difference Between Volumes and Bind Mounts in Docker

Volumes are managed by Docker and stored in a part of the host filesystem Docker controls. They are designed for persistent data, easy backups, and sharing data between containers. Docker handles permissions and portability.

Bind Mounts directly link any file or folder from your host's filesystem into a container. You manage the host path and permissions yourself. They are great for development but less portable and more dependent on the host environment.

In short:

Volumes = Docker-managed, portable, best for production data.

Bind mounts = Host-managed, flexible, best for development.

Summary: Volumes are Docker’s way to manage data safely and portably; bind mounts link your host files directly to containers for flexible sharing.

# Create a new Docker volume

docker volume create my-volume


# List all Docker volumes

docker volume ls

# Inspect details of a specific volume

docker volume inspect my-volume

# Remove a specific Docker volume

docker volume rm my-volume

# Remove all unused Docker volumes (dangling)

docker volume prune

# Run a container with a volume mounted

docker run -v my-volume:/container/path image-name

✅ Advantages of Volumes Over Bind Mounts

Managed by Docker: Volumes are created and controlled by Docker, which makes them easier to manage and less error-prone.

Portability: Volumes are stored in Docker's storage area, making them portable across different hosts and easier to back up or migrate.

Security: Docker handles permissions and isolation better with volumes, reducing risks of accidental host file changes.

Performance: Volumes often offer better performance, especially on Docker Desktop (Mac/Windows), compared to bind mounts.

Sharing Data Between Containers: Volumes can be easily shared among multiple containers, simplifying multi-container apps.

Summary: Volumes are safer, portable, better performing, and easier to manage than bind mounts.

⚠️ Common Challenges with Docker Storage

Data Persistence: By default, container data is ephemeral and lost when the container stops unless volumes or bind mounts are used.

Performance Issues: Bind mounts can cause slower performance on some platforms (like Docker Desktop on Mac/Windows).

Data Consistency: Sharing volumes between containers requires careful handling to avoid conflicts or data corruption.

Security Risks: Bind mounts expose host files directly to containers, which can lead to accidental or malicious changes.

Storage Management: Over time, unused volumes and images can accumulate, consuming disk space if not cleaned regularly.

Backup Complexity: Backing up data inside containers or volumes needs additional tools or manual steps.

Summary: Docker storage challenges include keeping data persistent, managing performance, ensuring security, and handling disk space efficiently.

🔐 How to Ensure Container Security


Use Minimal Base Images: Start with lightweight, secure images to reduce attack surface.

Run Containers as Non-Root: Avoid running containers with root privileges to limit damage if compromised.

Apply Least Privilege Principle: Only grant containers the permissions and
capabilities they absolutely need.

Keep Images Updated: Regularly update images to include security patches.

Scan Images for Vulnerabilities: Use tools like Clair or Trivy to detect security
issues before deployment.

Use Docker Content Trust: Sign images to ensure authenticity and integrity.

Isolate Containers: Use namespaces, cgroups, and network segmentation to isolate containers.

Limit Resource Usage: Set CPU and memory limits to prevent DoS attacks.

Secure Secrets: Don’t hardcode secrets in images; use Docker secrets or external vaults.

Monitor and Log: Continuously monitor container behavior and logs for
suspicious activity.

Summary: Container security comes from minimal images, least privilege, updates, vulnerability scanning, and strong runtime protections.

🔏 What is Docker Content Trust (DCT)?


Docker Content Trust (DCT) is a security feature that uses digital signatures to verify the authenticity and integrity of Docker images. When enabled, it ensures that you only pull and run images that are signed and trusted, protecting you from running tampered or untrusted images.

It uses Notary to sign images at push time.

When pulling images, Docker verifies the signature before downloading.

Helps enforce supply chain security by validating image provenance.

Enable DCT:

export DOCKER_CONTENT_TRUST=1

Summary: Docker Content Trust ensures you only use verified and trusted Docker images by checking their digital signatures.


👤 What Are User Namespaces in Docker?

User namespaces in Docker provide security isolation by mapping container user IDs (UIDs) and group IDs (GIDs) to different IDs on the host system. This means a user inside the container can run as root, but is mapped to a non-root user on the host, reducing risk if the container is compromised.

Helps prevent privilege escalation attacks from containers to the host.

Improves container security by isolating user permissions.

Needs to be enabled and configured in Docker daemon settings.
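Enabling it might look like this (a sketch for a systemd-managed Linux host; "default" makes Docker create and use the dockremap user, and this overwrites any existing daemon.json, so merge by hand if you already have one):

# Turn on user-namespace remapping in the daemon config, then restart
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker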

Summary: User namespaces isolate container users from host users, making containers more secure by limiting root access on the host.

🔍 How to Scan Docker Images for Vulnerabilities

Use specialized security scanning tools like Trivy, Clair, Anchore, or Docker’s
built-in scanning (Docker Scan).

These tools analyze the image layers for known vulnerabilities in OS packages
and software libraries.

Run scans before pushing images to registries or deploying to production.

Integrate scanning into CI/CD pipelines to catch issues early.

Regularly update the scanning tools and your images to catch new
vulnerabilities.

trivy image your-image-name

Summary: Scan Docker images with tools like Trivy or Docker Scan to find and fix security vulnerabilities before deployment.

🔐 How to Secure Docker Secrets

Use the Docker Secrets feature (available in Docker Swarm) to store sensitive data like passwords, API keys, or certificates securely.

Secrets are encrypted in transit and at rest, and only accessible to containers that need them.

Avoid hardcoding secrets in Dockerfiles, environment variables, or source code.

Grant access to secrets on a need-to-know basis by attaching secrets only to specific services.

For Kubernetes or other orchestration, use dedicated secret management tools like HashiCorp Vault, AWS Secrets Manager, or Kubernetes Secrets.

Regularly rotate secrets to minimize exposure risk.
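In Swarm mode that workflow looks like this (a sketch; db_password, app, and my-image are illustrative):

# Create a secret from stdin, then attach it to a service
echo "s3cr3t" | docker secret create db_password -
docker service create --name app --secret db_password my-image
# Inside the container the secret is mounted at /run/secrets/db_password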

Summary: Use Docker Secrets or dedicated secret managers to safely store and manage sensitive data, avoiding exposure in code or environment variables.

What Are Multi-Stage Builds in Docker?


Multi-stage builds let you use multiple FROM statements in a single Dockerfile to create smaller, efficient images. You can compile or build your app in one stage and then copy only the necessary artifacts into the final stage, leaving behind all build tools and dependencies.

Benefits:

Reduces final image size.

Keeps images clean and secure by excluding unnecessary files.

Simplifies build process by combining steps in one Dockerfile.

# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build

# Final stage
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html

Summary: Multi-stage builds help create smaller, cleaner Docker images by separating build and runtime environments.

🧪 Example of a Multi-Stage Build in Docker

Here’s a simple multi-stage Dockerfile for a React app:

# Stage 1: Build the app
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# 🚀 Stage 2: Serve the app with Nginx
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

☕ Example: Multi-Stage Docker Build with Maven (Java App)

Here’s a multi-stage Dockerfile for a Java application using Maven:

# Stage 1: Build the Java application with Maven
FROM maven:3.8.5-openjdk-17 AS builder
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn clean package -DskipTests

# 🚀 Stage 2: Run the application using a lightweight JDK image
FROM openjdk:17-jdk-slim
WORKDIR /app
COPY --from=builder /app/target/myapp.jar ./myapp.jar
EXPOSE 8080
CMD ["java", "-jar", "myapp.jar"]

Key Points:

The first stage compiles the app and creates a .jar file using Maven.

The second stage uses a slim JDK image and only copies the .jar — making the final image much smaller.

📝 Make sure to replace myapp.jar with your actual JAR file name.

Summary: This multi-stage Dockerfile compiles a Java app with Maven, then copies only the final JAR into a lightweight runtime image for efficient deployment.


What is a Docker Overlay Network?

A Docker Overlay Network allows containers running on different Docker hosts to communicate securely as if they were on the same local network. It works by creating a virtual network over the physical network, using VXLAN tunneling.

docker network create -d overlay my-overlay-net

Summary: Docker Overlay Network enables secure communication between containers across multiple hosts in a Docker Swarm.

⚙️ How Can You Optimize Docker Images?

Use Minimal Base Images: Start with lightweight images like alpine or slim to reduce size.

Multi-Stage Builds: Separate build and runtime stages to exclude unnecessary tools and files.

Clean Up in Layers: Remove temporary files and caches (like apt-get clean, rm -rf /var/lib/apt/lists/*) in the same layer.

Use .dockerignore: Exclude files and folders (like .git, node_modules) that don’t need to be in the image.

Minimize Layers: Combine related commands into a single RUN statement to reduce layers.

Avoid Installing Unused Packages: Only install what your app actually needs.

Tag and Version Your Images: Avoid using latest in production; use versioned tags for consistency.

Summary: Optimize Docker images by using smaller base images, cleaning up files, and only including what's needed to reduce size and improve performance.

🚫 What Is the Role of the .dockerignore File?


The .dockerignore file tells Docker which files and directories to exclude when
building an image.

It works just like .gitignore and helps make your build context smaller, faster,
and more secure.

Why it's important:

Speeds up the build by excluding unnecessary files (e.g., .git, node_modules,


logs).

Reduces image size by keeping junk out of the final image.

Prevents sensitive files (like .env, credentials) from accidentally being added.

Example .dockerignore:

node_modules

.git

.env

*.log

Dockerfile.dev

Summary: The .dockerignore file improves Docker build performance and security by excluding unwanted files from the image.


📦 How Do You Minimize the Number of Layers in a Docker Image?

Combine Commands: Use && to chain multiple operations in a single RUN statement.

RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

Limit the Number of Instructions: Each RUN, COPY, and ADD creates a new layer — combine them where possible.

Use Multi-Stage Builds: Keep only the final artifacts in the last stage to avoid unnecessary build layers.

Clean Temporary Files Inline: Do cleanup (like removing cache or build files) within the same RUN to avoid leftover layers.

Summary: Minimize layers by combining commands, reducing RUN/COPY instructions, and cleaning up in the same layer.

🧼 What Is the Docker scratch Image?

The scratch image is a special, empty base image in Docker — literally nothing inside it. It’s used to build minimal, ultra-lightweight containers by starting from zero and adding only what you need.

Key Use Cases:

Commonly used for static binaries (like Go programs).

No package manager, shell, or utilities—just your app and its dependencies.

Great for security, performance, and small image sizes.

Example:

FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]

Summary: The scratch image is an empty base used to build the smallest and most secure Docker images, ideal for static binaries.

📂 Difference Between ADD and COPY in Docker

| Feature | `COPY` | `ADD` |
| ------------------ | ------------------------------- | -------------------------------------------- |
| Purpose | Copies files/folders to image | Does the same, **plus extra features** |
| URL Support | ❌ Not supported | ✅ Can download from URLs |
| Archive Extraction | ❌ No | ✅ Automatically extracts local `.tar` files |
| Clarity | ✅ Simple and explicit | ❌ Can be confusing due to added behavior |
| Best Practice | ✅ Preferred for most use cases | 🚫 Use only when additional features needed |

🔹 Use COPY when you just need to move files/folders into the image.

🔹 Use ADD only if you need to extract .tar files or pull files from a URL.

Summary: COPY is simpler and preferred; ADD offers extras like archive
extraction and URL support.

How Do You Troubleshoot Docker Performance Issues?


🔍 Check Container Resource Usage:

Use docker stats to monitor CPU, memory, and network usage in real time.

docker stats

📦 Analyze Image Size:

Use docker images and docker history to find bloated layers and reduce image
size.

📁 Monitor Disk Space:

Run docker system df to see storage used by containers, images, volumes, and
build cache.

📝 Review Logs:

Use docker logs <container> to check for errors, slowdowns, or application issues.

⚙️ Inspect Container Details:

Run docker inspect <container> for deep config info like limits, mounts, and
networks.

🧹 Clean Up Resources:

Remove unused containers, volumes, and images to free up space:

docker system prune

🌐 Network Troubleshooting:

Use docker network ls and docker network inspect to debug connectivity issues.

📈 Use Monitoring Tools:

Integrate tools like Prometheus, Grafana, or cAdvisor for deeper insight.


Summary: Troubleshoot Docker by checking resource usage, reviewing logs, cleaning unused resources, and using tools to monitor system performance.

What are common Docker performance bottlenecks?

Overloaded host resources (CPU, memory, disk).

Network misconfigurations or congestion.

Inefficient Dockerfiles or large images.

Running too many containers on a single host.

What are common challenges faced in Docker implementation?

Security: Ensuring secure images and managing secrets.

Networking: Configuring container communication across hosts.

Storage: Managing persistent data.

Compatibility: Ensuring consistent behavior across environments.

Learning curve: For teams new to containerization.

# 🔄 Docker Version & Info

docker --version # Show Docker version installed

docker info # Display detailed Docker system info

# 📦 Image Commands

docker build -t my-image .    # Build Docker image from Dockerfile with a tag
docker pull ubuntu            # Download image from Docker Hub
docker push myrepo/my-image   # Upload image to a Docker registry
docker images                 # List all local Docker images
docker rmi my-image-name      # Remove a Docker image by name or ID
docker image prune -a         # Remove all unused images to free space

🚢 Container Commands

docker run -d --name my-container my-image   # Run container in detached mode with a name
docker run -it ubuntu /bin/bash              # Run interactive container with a bash shell
docker exec -it my-container /bin/bash       # Open shell inside a running container
docker start my-container                    # Start a stopped container
docker stop my-container                     # Stop a running container gracefully
docker restart my-container                  # Restart a container
docker rm my-container                       # Remove a stopped container
docker ps                                    # List running containers
docker ps -a                                 # List all containers (running and stopped)
docker logs my-container                     # Show logs of a container

# 📁 Volume Commands

docker volume create my-volume    # Create a new Docker volume
docker volume ls                  # List all Docker volumes
docker volume inspect my-volume   # Show detailed info about a volume
docker volume rm my-volume        # Remove a Docker volume

# 🔌 Network Commands

docker network ls                   # List Docker networks
docker network create my-network    # Create a new Docker network
docker network inspect my-network   # Inspect details of a network
docker network rm my-network        # Remove a Docker network

# 🧼 Cleanup Commands

docker system prune      # Remove unused containers, images, networks, and cache
docker container prune   # Remove all stopped containers
docker volume prune      # Remove all unused volumes
docker image prune       # Remove unused images

# 🔍 Monitoring & Debugging

docker stats                  # Real-time resource usage of running containers
docker top my-container       # Show running processes inside a container
docker inspect my-container   # Detailed info about container configuration
docker diff my-container      # Show changes made to the container filesystem

# 📂 Copy Files Between Host and Container

docker cp my-container:/path/in/container /local/path   # Copy files from container to host
docker cp /local/file my-container:/path/in/container   # Copy files from host to container

# 🔏 Docker Content Trust (for image signing & verification)

export DOCKER_CONTENT_TRUST=1   # Enable Docker Content Trust (image signing)

# 🧪 Docker Scan (security vulnerability scanning)

docker scan my-image   # Scan Docker image for vulnerabilities

# 🧱 Build Multi-Stage Image (example usage)

docker build -t myapp:latest -f Dockerfile .   # Build image using the specified Dockerfile

# 📄 Dockerfile Utilities

# Use a .dockerignore file to exclude files/folders from the build context,
# speeding up builds and reducing image size.
# Common .dockerignore entries:
#   node_modules
#   .git
#   *.log

docker kill <container_name_or_id>   # Immediately stop a running container

# docker kill sends a SIGKILL signal, which forces the container to terminate
# without waiting for cleanup or a graceful shutdown.

🧱 Image Layers and Layer Caching in Docker

Image Layers:

A Docker image is made up of multiple layers, each representing a step in the Dockerfile (like a RUN or COPY command).

Each layer stores filesystem changes (added, modified, or deleted files).

Layers are stacked to form the final image and shared between images to save space.

Layer Caching:

Docker caches each layer after building it the first time.

When you rebuild an image, Docker reuses cached layers if the Dockerfile instructions and context haven’t changed.

This speeds up builds significantly by skipping unchanged steps.

Cache invalidation happens if a layer’s command or its context changes (e.g., modified source files).

Best Practices:

Put the least frequently changing instructions at the top of the Dockerfile (e.g., installing dependencies).

Put frequently changing steps (like copying source code) towards the bottom to leverage caching effectively; the sketch below shows caching in action.

Summary: Docker images consist of layers, and Docker caches these layers to speed up rebuilds by reusing unchanged parts.
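A quick way to watch layers and caching (a sketch; my-image is illustrative):

# List an image's layers and their sizes
docker history my-image

# Rebuild after touching only source files: unchanged early steps print CACHED
docker build -t my-image .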
