A comprehensive guide to 100 basic Docker interview questions covering essential concepts, commands, and best practices for Docker beginners and interviewers.
Docker is a powerful tool for containerization, allowing developers to package applications and their dependencies into lightweight, portable containers. This post compiles 100 basic Docker interview questions to help you brush up on the fundamentals, from installation and basic commands to advanced concepts like networking and optimization.
## What is Docker?
Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. It packages an application and its dependencies into a lightweight, portable container that runs consistently across different environments, from development laptops to production servers.
Key benefits include:
- Isolation: Containers share the host OS kernel but run in isolated user spaces, improving security and efficiency.
- Portability: “Build once, run anywhere” – no compatibility issues between dev, test, and prod.
- Efficiency: Faster than VMs as they don’t need a full guest OS.
Docker uses images (read-only templates) to create containers (runnable instances). Core components: Docker Engine (CLI, daemon), images, containers, registries (like Docker Hub). It’s widely used in CI/CD pipelines for microservices.
## What is a Docker container?
A Docker container is a lightweight, standalone, executable package that includes an application and its dependencies, such as libraries, configuration files, and runtime. Built from a Docker image, it runs in an isolated environment on the host OS, sharing the kernel but with its own filesystem and resources.
### Key Features
- Isolation: Containers operate independently, preventing conflicts between apps.
- Portability: They run consistently across different environments (dev, test, prod).
- Efficiency: Unlike VMs, containers don’t require a full guest OS, making them faster and less resource-intensive.
Containers are created using Docker commands like `docker run` and are managed via Docker Engine. They’re ideal for microservices and CI/CD pipelines due to their scalability and ease of deployment.
## What is the difference between a Docker container and a virtual machine?
Docker containers and virtual machines (VMs) both enable application isolation but differ in architecture and performance. Containers share the host OS kernel, packaging only the app and its dependencies, making them lightweight and fast. They start in seconds and use minimal resources (MBs). VMs, however, run a full guest OS on a hypervisor, emulating hardware, which makes them heavier (GBs) and slower to start (minutes).
### Key Distinctions
- Isolation: Containers use process-level isolation, less secure but efficient. VMs offer stronger, hardware-level isolation.
- Portability: Containers run consistently anywhere Docker is installed. VMs depend on the hypervisor and guest OS.
- Use Cases: Containers suit microservices and CI/CD due to speed and density. VMs are better for legacy apps or multi-OS needs.
Containers optimize resource use, while VMs prioritize robust isolation.
## What is Docker Engine?
Docker Engine is the core client-server technology that builds and runs containers. It consists of a long-running daemon (`dockerd`), a REST API, and the `docker` command-line client that sends instructions to the daemon through that API.

### Key Parts
- Daemon (`dockerd`): Manages Docker objects such as images, containers, networks, and volumes, delegating low-level container execution to `containerd` and `runc`.
- REST API: Lets the CLI and other tools control the daemon programmatically.
- CLI (`docker`): The user-facing client; commands like `docker run` or `docker build` are forwarded to the daemon.

Docker Engine is the foundation of the Docker platform, turning images into running, isolated containers on the host.
## What are the main components of Docker?
Docker’s main components work together to enable containerization, providing a platform for building, running, and managing containers efficiently.
### Core Components
- Docker Engine: The runtime with a daemon (dockerd) and CLI. The daemon manages containers, images, networks, and volumes, while the CLI sends commands via a REST API.
- Docker Images: Read-only templates containing an application, dependencies, and configurations, built from Dockerfiles and stored in registries like Docker Hub.
- Docker Containers: Runnable instances of images, providing isolated environments that share the host OS kernel for lightweight execution.
- Docker Registry: A storage and distribution system for images, such as Docker Hub or private registries, enabling sharing and deployment.
- Dockerfile: A script defining steps to create an image, specifying the base image, dependencies, and commands.
These components streamline application deployment, ensuring portability and scalability across environments.
## How do you install Docker on Linux?
To install Docker on Linux, follow a streamlined process that varies slightly by distribution but generally involves adding Docker’s official repository and installing the engine.
### Installation Steps
- Update the package index: `sudo apt-get update` (Debian/Ubuntu) or `sudo yum update` (CentOS/RHEL).
- Install prerequisites: For Ubuntu, run `sudo apt-get install ca-certificates curl`.
- Add Docker’s GPG key: `curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -` (note that `apt-key` is deprecated on newer releases; Docker’s documentation describes a keyring-based alternative).
- Add Docker’s repository: For Ubuntu, use `sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"`.
- Install Docker: Run `sudo apt-get install docker-ce docker-ce-cli containerd.io` (Ubuntu) or `sudo yum install docker-ce docker-ce-cli containerd.io` (CentOS).
- Start and enable Docker: `sudo systemctl start docker` and `sudo systemctl enable docker`.
- Verify installation: Run `docker --version` to check the installed version.

### Optional
- Add your user to the `docker` group to run commands without sudo: `sudo usermod -aG docker $USER` (log out and back in for the change to take effect).
This ensures Docker runs smoothly on major Linux distributions like Ubuntu or CentOS. Always check Docker’s official documentation for distro-specific instructions.
## How do you install Docker on Windows?
To install Docker on Windows, use Docker Desktop, which provides a user-friendly environment for running containers.
### Installation Steps
- Download Docker Desktop from the official Docker website (docker.com).
- Ensure your system meets requirements: 64-bit Windows 10/11 with WSL 2 enabled (required on Home editions), or Pro, Enterprise, or Education with Hyper-V.
- Run the installer and follow the prompts, selecting the WSL 2 backend for better performance.
- Enable WSL 2 if needed: Run `wsl --install` in PowerShell as admin and set WSL 2 as the default (`wsl --set-default-version 2`).
- Restart your computer if prompted during installation.
- Open Docker Desktop; it starts the Docker daemon automatically.
- Verify installation: Open a terminal (PowerShell or Command Prompt) and run `docker --version` to confirm Docker is installed.
- Optionally, sign in to Docker Hub via Docker Desktop to access or push images.

### Post-Installation
- Ensure Docker Desktop is running before executing Docker commands.
- Check WSL 2 integration in Docker Desktop settings for seamless Linux container support.
This setup enables efficient container management on Windows.
## What is a Docker image?
A Docker image is a lightweight, read-only template that serves as a blueprint for creating containers. It packages an application, its dependencies, libraries, configurations, and runtime environment into a layered filesystem, ensuring consistency across deployments.

Images are built from a Dockerfile (a script defining build steps) using the `docker build` command. Each instruction in the Dockerfile creates a layer, cached for efficiency during rebuilds.

### Key Characteristics
- Immutability: Once built, images can’t be changed; modifications create new images.
- Portability: Tagged with names/versions (e.g., `nginx:latest`) and stored in registries like Docker Hub for sharing.
- Base Images: Start from scratch or from official bases like `ubuntu` or `alpine` for minimal size.
- Layers: A union filesystem (e.g., OverlayFS) allows sharing layers between images, reducing storage and speeding pulls/pushes.

Images are pulled with `docker pull` and used to run containers via `docker run`. They’re essential for reproducible, isolated app environments in DevOps workflows.
## How do you pull a Docker image from Docker Hub?
To pull a Docker image from Docker Hub, use the `docker pull` command, which retrieves images from the Docker Hub registry to your local machine for running containers.

### Steps to Pull
- Identify the image name and tag (e.g., `nginx:latest` for the latest NGINX version).
- Run the command: `docker pull nginx:latest` in your terminal.
- Docker Engine downloads the image layers from Docker Hub to your local repository.
- Verify the pull: Use `docker images` to list local images and confirm the image is present.

### Additional Notes
- If no tag is specified (e.g., `docker pull nginx`), Docker defaults to the `latest` tag.
- For private repositories, log in first with `docker login` to authenticate.
- Ensure internet connectivity and sufficient disk space for the image.
This process enables quick access to pre-built images for development or production.
## What is Docker Hub?
Docker Hub is Docker’s official cloud-based registry for storing, sharing, and distributing Docker images. It hosts millions of images, including official ones for software like Ubuntu and NGINX, and supports public and private repositories.
### Key Features
- Public repositories allow free image sharing and pulls.
- Private repositories (paid) offer secure storage for proprietary images.
- Automated builds integrate with GitHub for seamless image updates.
- Users can search, version, and collaborate on images easily.
Commands like `docker pull` fetch images, while `docker push` uploads them. It’s critical for CI/CD, enabling teams to access pre-built images without local builds. Free accounts provide basic access, with pro plans adding features like vulnerability scanning.
## What is a Dockerfile?
A Dockerfile is a text file containing a script of instructions to build a Docker image. It defines the environment, dependencies, and configuration needed for an application to run in a container.
### Key Aspects
- Specifies a base image (e.g., `FROM ubuntu:20.04`) as the starting point.
- Includes commands like `COPY`, `RUN`, and `ADD` to add files, install packages, or execute scripts.
- Sets runtime behavior with `CMD` or `ENTRYPOINT` for container execution.
- Each instruction creates a cached layer, optimizing rebuilds.

Using `docker build -t image-name .`, Docker Engine processes the Dockerfile to create a portable, reproducible image, stored locally or pushed to registries like Docker Hub. It’s essential for automating and standardizing container creation in DevOps workflows.
## How do you build a Docker image using a Dockerfile?
To build a Docker image from a Dockerfile, use the `docker build` command, which processes the Dockerfile’s instructions to create a reproducible image.

### Build Process
- Create a Dockerfile in your project directory with instructions like `FROM`, `COPY`, and `CMD`.
- Run `docker build -t image-name:tag .` in the terminal, where `-t` assigns a name and optional tag (e.g., `myapp:1.0`), and `.` specifies the build context (current directory).
- Docker Engine executes each Dockerfile instruction, creating cached layers for efficiency.
- Verify the image with `docker images` to list locally stored images.

### Additional Notes
- Ensure the Dockerfile is named `Dockerfile` (case-sensitive) or specify a custom file with `-f`.
- Use `.dockerignore` to exclude unnecessary files from the build context.
- Images can be pushed to registries like Docker Hub using `docker push`.
This process creates a portable image ready for container deployment.
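A `.dockerignore` file, placed next to the Dockerfile, keeps the build context small and the cache effective. A typical sketch (the entries are illustrative, not prescriptive):

```
# Exclude version control, dependencies, and local artifacts from the build context
.git
node_modules
*.log
.env
```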
## What is the difference between COPY and ADD in a Dockerfile?
In a Dockerfile, COPY and ADD transfer files from the build context to the image, but their capabilities differ.
### Core Distinctions
- COPY: Copies files or directories from the local build context to the image’s filesystem. It’s straightforward and explicit, ideal for most use cases. Syntax: `COPY <src> <dest>`.
- ADD: Extends COPY by auto-extracting local tar archives (e.g., .tar.gz) and supporting URL-based file downloads. Syntax: `ADD <src> <dest>`. Its extra features can reduce predictability, making it less preferred unless tar extraction or remote fetches are needed.
Stick with COPY for clarity and reproducibility in builds. Use ADD only for specific cases like unpacking archives. Both create image layers, so optimize their order for efficient caching in DevOps workflows.
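The difference shows up side by side in a small Dockerfile fragment (the file and archive names are hypothetical):

```dockerfile
FROM ubuntu:20.04

# COPY transfers files from the build context as-is
COPY config.yml /etc/myapp/config.yml

# ADD auto-extracts a local tar archive into the destination directory
ADD assets.tar.gz /opt/myapp/assets/
```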
## What is the CMD instruction in a Dockerfile?
The CMD instruction in a Dockerfile specifies the default command to execute when a container starts from the image. It defines the runtime behavior of the container.
### Key Points
- Syntax: `CMD ["executable", "param1", "param2"]` (exec form, preferred) or `CMD command param1` (shell form).
- Exec form (`["command", "arg"]`) runs the command directly, improving performance and signal handling.
- Shell form (`command arg`) runs in a shell (e.g., `/bin/sh -c`), adding overhead.
- Only one CMD is effective; later ones override earlier ones.
- Users can override CMD at runtime with `docker run image custom_command`.

CMD is often used with `ENTRYPOINT` for flexible configurations, where CMD provides default arguments. It’s critical for defining the container’s primary process, like running a web server or script.
## What is the ENTRYPOINT instruction in a Dockerfile?
The ENTRYPOINT instruction in a Dockerfile defines the primary executable that runs when a container starts. It specifies the command that the container is built to execute, making it less likely to be overridden compared to CMD.
### Key Points
- Syntax: `ENTRYPOINT ["executable", "param1", "param2"]` (exec form, preferred) or `ENTRYPOINT command param1` (shell form).
- Exec form runs the command directly, optimizing performance and signal handling.
- Shell form runs in a shell (e.g., `/bin/sh -c`), adding overhead.
- Unlike CMD, ENTRYPOINT is not easily overridden; `docker run` arguments append to it unless `--entrypoint` is used.
- Often paired with CMD to provide default arguments, enabling flexible container configurations.
ENTRYPOINT ensures the container runs a specific process, like a web server or script, ideal for defining the container’s core purpose in DevOps workflows.
## What is the difference between CMD and ENTRYPOINT?
CMD and ENTRYPOINT in a Dockerfile both define commands to run when a container starts, but they serve different purposes and behave differently.
### Key Differences
- CMD: Specifies the default command or parameters for a container. It’s easily overridden by arguments in `docker run`. Syntax: `CMD ["command", "arg"]` (exec) or `CMD command arg` (shell). Only one CMD is effective; later ones override.
- ENTRYPOINT: Defines the primary executable, making it harder to override. Arguments in `docker run` append to ENTRYPOINT unless `--entrypoint` is used. Syntax: `ENTRYPOINT ["command", "arg"]` (exec) or `ENTRYPOINT command arg` (shell).
- Use Case: Use ENTRYPOINT for fixed commands (e.g., a specific app process) and CMD for default arguments. Together, they allow flexible configurations (e.g., ENTRYPOINT sets the app, CMD sets options).
- Behavior: Exec form (`["command"]`) is preferred for both to avoid shell overhead and ensure proper signal handling.
ENTRYPOINT ensures a consistent executable, while CMD provides customizable defaults.
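The common pairing can be sketched in a short Dockerfile, where ENTRYPOINT fixes the executable and CMD supplies an overridable default argument:

```dockerfile
FROM ubuntu:20.04

# Fixed executable: every container created from this image runs ping
ENTRYPOINT ["ping", "-c", "3"]

# Default argument; replaced by anything passed to docker run
CMD ["localhost"]
```

With this image, `docker run myimage` pings localhost, while `docker run myimage example.com` keeps the ENTRYPOINT and swaps only the CMD argument.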
## How do you run a Docker container?
To run a Docker container, use the `docker run` command, which creates and starts a container from a specified image.

### Steps to Run
- Pull or ensure the image exists locally (e.g., `docker pull nginx` or use a custom image).
- Run the container: `docker run [options] image:tag [command]`. Example: `docker run -d -p 8080:80 nginx` starts NGINX, mapping host port 8080 to container port 80.
- Common options: `-d` (detached mode), `-p` (port mapping), `--name` (custom container name), `-v` (volume mounting).
- Verify it’s running: Use `docker ps` to list active containers.

### Additional Notes
- If no tag is specified, Docker uses `latest`.
- Override CMD with a custom command (e.g., `docker run nginx bash`).
- Use `-it` for interactive shells (e.g., `docker run -it ubuntu bash`).
This launches a container for tasks like running apps or testing environments efficiently.
## How do you list all running Docker containers?
To list all running Docker containers, use the `docker ps` command, which displays details about active containers.

### Key Details
- Run `docker ps` in the terminal to see running containers, showing container ID, image, command, status, ports, and names.
- To also include stopped containers, use `docker ps -a`.
- Use `docker ps -q` to display only container IDs, useful for scripting.
This command helps monitor and manage active containers in real-time during development or production.
## How do you stop a Docker container?
To stop a Docker container, use the `docker stop` command, which gracefully halts a running container.

### Steps to Stop
- Identify the container using `docker ps` to get its ID or name.
- Run `docker stop container_id_or_name` (e.g., `docker stop myapp`).
- Docker sends a SIGTERM signal, allowing the container to shut down cleanly.
- If it doesn’t stop within a timeout (default 10 seconds), Docker escalates to SIGKILL; you can also force-stop immediately with `docker kill container_id_or_name`.

### Additional Notes
- Verify the container stopped with `docker ps -a` (shows stopped containers).
- To stop multiple containers, list their IDs/names: `docker stop id1 id2`.
This ensures containers are stopped safely, preserving data and state for later use.
## How do you remove a Docker container?
To remove a Docker container, use the `docker rm` command, which deletes a container from the system.

### Steps to Remove
- Ensure the container is stopped by running `docker stop container_id_or_name` (check with `docker ps -a`).
- Run `docker rm container_id_or_name` to delete the container (e.g., `docker rm myapp`).
- For multiple containers, list IDs/names: `docker rm id1 id2`.
- To force-remove a running container, use `docker rm -f container_id_or_name`, which stops and deletes it.

### Additional Notes
- Verify removal with `docker ps -a` (the container should no longer appear).
- Use `docker container prune` to remove all stopped containers at once.
This frees up system resources by permanently deleting containers, ensuring a clean environment.
## How do you list all Docker images?
To list all Docker images stored locally, use the `docker images` command, which displays details about available images.

### Key Details
- Run `docker images` in the terminal to see image names, tags, IDs, creation dates, and sizes.
- Use `docker images -a` to include intermediate images used in builds.
- For a compact output, use `docker images -q` to show only image IDs, useful for scripting.
This command helps manage and verify images before running containers or cleaning up storage.
## How do you remove a Docker image?
To remove a Docker image, use the `docker rmi` command, which deletes the specified image from your local system.

### Steps to Remove
- List images with `docker images` to find the image name or ID (e.g., `nginx:latest`).
- Ensure no containers are using the image (check with `docker ps -a` and stop/remove them with `docker stop` and `docker rm`).
- Run `docker rmi image_name:tag` (e.g., `docker rmi nginx:latest`) to delete the image.
- For multiple images, list them: `docker rmi image1 image2`.
- To force-remove an image still referenced by containers, use `docker rmi -f image_name`.

### Additional Notes
- Verify removal with `docker images`.
- Use `docker image prune` to remove all unused images, freeing up space.
This ensures efficient cleanup of unused images, optimizing disk usage.
## What is Docker Compose?
Docker Compose is a tool for defining and managing multi-container Docker applications using a YAML file to configure services, networks, and volumes.
### Key Features
- Simplifies orchestration by specifying multiple containers in a single `docker-compose.yml` file.
- Defines service settings like images, ports, environment variables, and dependencies.
- Runs with `docker-compose up` to start all services and `docker-compose down` to stop and remove them.
- Supports development, testing, and CI/CD by enabling consistent, reproducible environments.

### Example Use
- A `docker-compose.yml` might define a web app and database, linking them via a network.
- Commands like `docker-compose ps` list running services, and `docker-compose logs` shows logs.
Compose is ideal for local development and small-scale deployments, streamlining complex application setups.
## How do you create a simple Docker Compose file?
To create a simple Docker Compose file, define a `docker-compose.yml` with services, networks, and volumes for a multi-container application.

### Steps to Create
- Create a file named `docker-compose.yml`.
- Define the version (e.g., `version: '3.8'` for recent Docker Compose).
- List services under `services`, specifying image, ports, and configurations. Example:

```yaml
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
```

- Save the file and run `docker-compose up` to start the containers.

### Key Points
- Services (e.g., `web`, `db`) define containers with images or build instructions.
- Ports map host to container (e.g., `8080:80`).
- Environment variables configure settings like passwords.
- Use `docker-compose down` to stop and remove containers.
This creates a basic setup, like a web server and database, for local development or testing.
## What is a Docker volume?
A Docker volume is a mechanism for persisting data generated by and used by Docker containers, stored outside the container’s filesystem to ensure data survives container removal.
### Key Features
- Volumes are managed by Docker and stored in a host directory (e.g., `/var/lib/docker/volumes`).
- They enable data sharing between containers and between containers and the host.
- Created with `docker volume create` or automatically during `docker run` with `-v` or `--mount`.
- Unlike bind mounts, volumes are portable and managed by Docker, making them preferred for persistent storage.

### Example Use
- Run `docker run -v my_volume:/app/data image_name` to mount a volume at `/app/data` in the container.
- Use `docker volume ls` to list volumes and `docker volume rm` to delete them.
Volumes are ideal for databases, logs, or stateful applications, ensuring data persistence in DevOps workflows.
## How do you create a Docker volume?
To create a Docker volume, use the `docker volume create` command or let Docker create one automatically during container startup.

### Creation Methods
- Explicitly create a named volume: Run `docker volume create my_volume` to create a volume named `my_volume`, stored in Docker’s managed storage (e.g., `/var/lib/docker/volumes`).
- Automatically create during container run: Use `docker run -v my_volume:/app/data image_name`, where `my_volume` is created if it doesn’t exist, mounting it to `/app/data` in the container.
- Verify creation with `docker volume ls` to list all volumes.

### Key Points
- Named volumes are preferred for portability and management over bind mounts.
- Use `docker volume inspect my_volume` to check volume details.
- Remove with `docker volume rm my_volume` when no longer needed.
This ensures persistent, reusable storage for containers, ideal for databases or stateful apps.
## How do you mount a volume in a Docker container?
To mount a volume in a Docker container, use the `-v` or `--mount` flag with the `docker run` command to attach a volume to a container’s filesystem.

### Mounting Methods
- For a named volume: Run `docker run -v my_volume:/app/data image_name`, mounting the volume `my_volume` to `/app/data` in the container. If `my_volume` doesn’t exist, Docker creates it.
- For a bind mount (host directory): Use `docker run -v /host/path:/app/data image_name` to map a host directory to the container’s `/app/data`.
- Using `--mount`: Example, `docker run --mount source=my_volume,target=/app/data image_name` for clearer syntax and more options.
- Verify with `docker inspect container_name` to check mounted volumes.

### Key Notes
- Named volumes are managed by Docker, ideal for persistence.
- Bind mounts depend on host paths, less portable but useful for development.
- Ensure the target path in the container is appropriate for the application.
This enables data persistence or sharing for containers, crucial for stateful apps like databases.
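In Docker Compose, the same two mounting styles can be sketched like this (the service and host paths are illustrative assumptions):

```yaml
version: '3.8'
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql          # named volume, managed by Docker
      - ./init:/docker-entrypoint-initdb.d  # bind mount from the host

volumes:
  db_data:   # declares the named volume so Compose creates it
```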
## What is Docker networking?
Docker networking enables communication between containers, the host, and external networks by creating virtual networks managed by Docker.
### Key Concepts
- Default Networks: Docker provides three default network types: bridge (default for single-host container communication), host (uses the host’s network stack), and none (isolates containers from networking).
- Bridge Network: Containers on the same bridge network communicate via private IP addresses. Create a custom bridge with `docker network create my_network`.
- Port Mapping: Expose container ports to the host using `docker run -p host_port:container_port` (e.g., `-p 8080:80`).
- Container-to-Container: Containers on the same user-defined network can communicate using container names as hostnames.
- Commands: Use `docker network ls` to list networks, `docker network inspect` for details, and `docker network connect` to join containers to networks.
Docker networking simplifies microservices communication, ensuring isolation and scalability in DevOps environments.
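A hedged Compose sketch of two services on a custom network, reachable by service name (the `myapi` image and service names are hypothetical):

```yaml
version: '3.8'
services:
  web:
    image: nginx:latest
    networks:
      - app_net
  api:
    image: myapi:latest   # hypothetical application image
    networks:
      - app_net

networks:
  app_net:
    driver: bridge   # user-defined bridge with built-in DNS
```

Inside `web`, the other service is reachable as `api`, because Docker provides name-based DNS on user-defined networks.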
## What are the default Docker network types?
Docker provides three default network types for containers to communicate with each other, the host, or external networks.
### Network Types
- Bridge: The default network for single-host container communication. Containers on the same bridge network get private IP addresses and can communicate directly. Created automatically as `bridge` or customized with `docker network create`.
- Host: Containers use the host’s network stack directly, removing network isolation for better performance but less security. No port mapping is needed.
- None: Containers are fully isolated with no network access, useful for high-security or offline tasks.

### Key Notes
- View networks with `docker network ls`.
- Bridge is ideal for most apps, host for low-latency needs, and none for maximum isolation.
These networks enable flexible, secure communication setups for containerized applications.
## How do you create a custom Docker network?
To create a custom Docker network, use the `docker network create` command to define a network for container communication, typically a bridge network for single-host setups.

### Creation Steps
- Run `docker network create my_network` to create a bridge network named `my_network`.
- Specify the driver if needed: `docker network create --driver bridge my_network` (bridge is the default).
- Connect containers to the network during launch: `docker run --network my_network image_name`.
- Verify with `docker network ls` to list networks or `docker network inspect my_network` for details.

### Key Benefits
- Containers on the same custom network communicate using container names as hostnames.
- Provides better isolation than the default bridge network.
- Use `docker network rm my_network` to delete it when no longer needed.
Custom networks enhance microservices communication and simplify service discovery in DevOps workflows.
## How do you inspect a Docker container?
To inspect a Docker container, use the `docker inspect` command, which provides detailed information about a container’s configuration and state.

### Inspection Steps
- Identify the container using `docker ps -a` to get its ID or name.
- Run `docker inspect container_id_or_name` (e.g., `docker inspect myapp`).
- The command outputs a JSON object with details like container status, image, ports, volumes, network settings, and environment variables.
- Filter specific fields with `--format`: e.g., `docker inspect --format='{{.State.Status}}' myapp` to get the container’s status.
### Key Uses
- Troubleshoot issues by checking mounts, IP addresses, or logs.
- Verify configurations like port mappings or volume bindings.
This command is essential for debugging and understanding container behavior in DevOps workflows.
## How do you view logs of a Docker container?
To view logs of a Docker container, use the `docker logs` command, which retrieves the output (stdout/stderr) generated by the container’s main process.

### Steps to View Logs
- Identify the container using `docker ps -a` to get its ID or name.
- Run `docker logs container_id_or_name` (e.g., `docker logs myapp`) to display logs.
- Use options like `--follow` (`docker logs -f`) for real-time log streaming or `--tail N` (e.g., `--tail 10`) to show the last N lines.
- Include timestamps with `--timestamps` to see log entry times.

### Key Notes
- Logs are useful for debugging application issues or monitoring behavior.
- For multi-container setups (e.g., Docker Compose), use `docker-compose logs service_name`.
This command helps troubleshoot and monitor containers efficiently in DevOps workflows.
## What is the docker ps command used for?
The `docker ps` command lists Docker containers, providing details about their status and configuration.

### Key Uses
- Run `docker ps` to display running containers, showing container ID, image, command, status, ports, and names.
- Use `docker ps -a` to include stopped or exited containers.
- Apply `docker ps -q` to show only container IDs, useful for scripting.
- Filter with `--filter` (e.g., `docker ps --filter "name=myapp"`) to narrow down results.
This command is essential for monitoring and managing containers in real-time during development or production.
## What is the docker run command?
The `docker run` command creates and starts a Docker container from a specified image, launching it with defined configurations.

### Key Features
- Syntax: `docker run [options] image:tag [command]`. Example: `docker run -d -p 8080:80 nginx` runs NGINX, mapping host port 8080 to container port 80.
- Common options: `-d` (detached mode), `-p` (port mapping), `--name` (custom name), `-v` (volume mounting), `-it` (interactive terminal).
- Overrides the image’s default CMD if a command is provided (e.g., `docker run nginx bash`).
- Automatically pulls the image from Docker Hub if it is not available locally.
This command is fundamental for deploying applications in containers, enabling quick setup for development or production environments.
## How do you expose ports in a Docker container?
To expose ports in a Docker container, use the `-p` or `--publish` option with the `docker run` command to map container ports to the host, allowing external access to services.

### Exposing Ports
- Run `docker run -p host_port:container_port image_name` (e.g., `docker run -p 8080:80 nginx` maps host port 8080 to container port 80).
- Use `-p` for specific ports or `--publish-all` (`-P`) to map all ports exposed in the image’s Dockerfile (via `EXPOSE`) to random host ports.
- Verify mappings with `docker ps` or `docker inspect container_name` to check port bindings.

### Key Notes
- The `EXPOSE` instruction in a Dockerfile documents intended ports but doesn’t publish them; `-p` is required for external access.
- Specify protocols explicitly like `-p 8080:80/tcp` for clarity (TCP is the default).
- For multiple ports, add multiple `-p` flags (e.g., `-p 8080:80 -p 8443:443`).
This enables access to containerized services like web servers or databases from the host or network.
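The relationship between `EXPOSE` and `-p` can be illustrated with a small fragment:

```dockerfile
FROM nginx:latest

# Documents that the container listens on port 80; this does NOT publish the port
EXPOSE 80
```

Publishing still happens at run time: `docker run -p 8080:80 myimage` binds host port 8080 explicitly, while `docker run -P myimage` maps the exposed port 80 to a random host port.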
## What is the -p flag in docker run?
The `-p` flag in the `docker run` command maps a container’s port to a host port, enabling external access to services running inside the container.

### Key Details
- Syntax: `-p host_port:container_port` (e.g., `docker run -p 8080:80 nginx` maps host port 8080 to container port 80).
- Allows specifying protocols: `-p 8080:80/tcp` or `-p 8080:80/udp` (TCP is the default).
- Multiple ports can be mapped with multiple `-p` flags (e.g., `-p 8080:80 -p 8443:443`).
- Use `-p host_port:container_port/protocol` for explicit control, or `--publish-all` (`-P`) to map all `EXPOSE`-defined ports in the Dockerfile to random host ports.
- Verify mappings with `docker ps` or `docker inspect container_name`.
This flag is crucial for exposing containerized services like web servers or APIs to the host or external networks.
## How do you name a Docker container?
To assign a custom name to a Docker container, use the `--name` flag with the `docker run` command during container creation.
Naming Process
- Run `docker run --name custom_name image:tag` (e.g., `docker run --name webapp nginx:latest` names the container “webapp”).
- Names must be unique; Docker errors if the name is already in use—stop and remove the existing container with `docker stop` and `docker rm`.
- Rename an existing container with `docker rename old_name new_name`.
- Check the name with `docker ps`, or `docker ps -a` for all containers.
Key Advantages
- Simplifies commands like `docker start webapp` or `docker logs webapp` by avoiding random IDs.
- Enhances clarity in multi-container setups for development or production.
This makes container management more intuitive and efficient.
## What is the difference between docker stop and docker kill?
Both `docker stop` and `docker kill` terminate Docker containers, but they differ in how they handle the process.
Key Differences
- docker stop: Gracefully stops a container by sending a SIGTERM signal, allowing the container’s process to shut down cleanly within a timeout (default 10 seconds). Example: `docker stop myapp`. If the timeout expires, it forces termination with SIGKILL.
- docker kill: Immediately terminates a container by sending a SIGKILL signal, forcibly stopping it without cleanup. Example: `docker kill myapp`. It’s faster but may leave processes or data in an inconsistent state.
Use Cases
- Use `docker stop` for clean shutdowns, like stopping a database to preserve data integrity.
- Use `docker kill` for quick termination when a container is unresponsive or for scripting.
This distinction ensures proper container management based on the need for graceful or immediate stops.
## How do you restart a Docker container?
To restart a Docker container, use the `docker restart` command, which gracefully stops and then starts the container, preserving its configuration and data.
Restart Process
- Find the container’s ID or name with `docker ps -a`.
- Run `docker restart container_id_or_name` (e.g., `docker restart myapp`).
- For multiple containers, list them: `docker restart id1 id2`.
- Check status with `docker ps` to confirm it’s running.
Key Points
- Works on running or stopped containers; stopped ones are simply started.
- Customize the stop timeout with `--time` (e.g., `docker restart --time 20 myapp`).
- For Docker Compose, use `docker-compose restart service_name` for multi-container setups.
This is ideal for refreshing containers after updates or resolving issues efficiently.
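For the Docker Compose case, a minimal `docker-compose.yml` sketch (service names and images here are illustrative) shows what `docker-compose restart service_name` acts on:

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:15
    restart: unless-stopped  # restart policy applied by the daemon, distinct from the restart command
```

With this file, `docker-compose restart web` restarts only the `web` service, while `docker-compose restart` with no argument restarts every service in the file.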
## What is Docker registry?
A Docker registry is a storage and distribution system for Docker images, allowing users to store, share, and retrieve images for container deployment.
Key Features
- Hosts images in repositories, tagged with versions (e.g., `nginx:latest`).
- Public registries like Docker Hub offer community and official images.
- Private registries (e.g., self-hosted or cloud-based) secure proprietary images.
- Commands: `docker pull` retrieves images, `docker push` uploads them, and `docker login` authenticates for private registries.
Use Cases
- Simplifies image sharing in teams or CI/CD pipelines.
- Ensures consistent deployments across environments.
Registries are essential for managing and distributing container images efficiently.
## How do you push a Docker image to Docker Hub?
To push a Docker image to Docker Hub, use the `docker push` command after tagging the image with your Docker Hub repository name.
Push Process
- Log in to Docker Hub: Run `docker login` and enter your credentials.
- Tag the image: Use `docker tag local_image:tag username/repository:tag` (e.g., `docker tag myapp:1.0 myuser/myapp:1.0`).
- Push the image: Run `docker push username/repository:tag` (e.g., `docker push myuser/myapp:1.0`).
- Verify the push on Docker Hub’s website or with `docker pull myuser/myapp:1.0`.
Key Notes
- Ensure the repository exists on Docker Hub (create it via the web interface if needed).
- Use `docker images` to confirm the tagged image exists locally.
- For private repositories, ensure proper access permissions.
This process enables sharing images for collaboration or deployment in CI/CD pipelines.
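Putting the steps together, a typical session might look like the following transcript (the `myapp` image and `myuser/myapp` repository are placeholders for your own):

```shell
$ docker login
Login Succeeded
$ docker tag myapp:1.0 myuser/myapp:1.0
$ docker push myuser/myapp:1.0
```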
## How do you login to Docker Hub?
To log in to Docker Hub, use the `docker login` command to authenticate with your Docker Hub account, enabling access to push or pull images.
Login Steps
- Run `docker login` in the terminal.
- Enter your Docker Hub username and password when prompted.
- Optionally, specify a registry: `docker login registry.hub.docker.com` (the default is Docker Hub).
- Verify login success with a “Login Succeeded” message, or check credentials in `~/.docker/config.json`.
Key Notes
- Use `docker logout` to end the session.
- For private registries, ensure the correct registry URL and credentials.
- Store credentials securely with a credential helper for automation.
This allows secure interaction with Docker Hub for image management in DevOps workflows.
## What is a Docker tag?
A Docker tag is a label assigned to a Docker image to identify a specific version or variant, allowing differentiation within a repository.
Key Details
- Tags are appended to image names in the format `image:tag` (e.g., `nginx:latest` or `myapp:1.0`).
- `latest` is the default tag if none is specified, but it’s not guaranteed to be the newest version.
- Create tags with `docker tag source_image:tag new_image:tag` (e.g., `docker tag myapp:1.0 myapp:prod`).
- Tags are used in `docker pull`, `docker push`, or `docker run` to specify exact images.
Use Cases
- Versioning (e.g., `app:1.0`, `app:1.1`) for tracking releases.
- Environment-specific labels (e.g., `app:dev`, `app:prod`) for clarity.
Tags ensure precise image selection for deployment and collaboration in DevOps workflows.
## How do you tag a Docker image?
To tag a Docker image, use the `docker tag` command to assign a new name or version to an existing image, enabling versioning or repository-specific labeling.
Tagging Process
- Identify the source image with `docker images` to get its name and tag (e.g., `myapp:1.0`).
- Run `docker tag source_image:tag new_image:tag` (e.g., `docker tag myapp:1.0 myuser/myapp:prod`).
- Verify the new tag with `docker images`, which lists both the source and tagged images.
- For Docker Hub, tag with your repository: `docker tag myapp:1.0 myuser/myapp:1.0` before pushing.
Key Notes
- Tags don’t create new images; they reference the same image ID.
- Use meaningful tags like version numbers (`1.0`) or environments (`prod`, `dev`) for clarity.
This simplifies image management and deployment in CI/CD pipelines.
## What is the FROM instruction in Dockerfile?
The FROM instruction in a Dockerfile specifies the base image from which a new Docker image is built, serving as the starting point for all subsequent instructions.
Key Details
- Syntax: `FROM image:tag` (e.g., `FROM ubuntu:20.04` or `FROM nginx:latest`).
- Must be the first non-comment instruction in a Dockerfile (only `ARG` may precede it).
- Can include a registry (e.g., `FROM docker.io/library/alpine:3.14`).
- Supports `AS` for multi-stage builds (e.g., `FROM node:16 AS builder`).
Use Cases
- Choose lightweight base images like `alpine` for smaller images, or specific ones like `python:3.9` for app requirements.
- Enables layering of custom configurations, dependencies, or applications.
FROM sets the foundation for consistent, reproducible container images in DevOps workflows.
## What is the RUN instruction in Dockerfile?
The RUN instruction in a Dockerfile executes commands during the image build process, creating a new layer in the image with the command’s output.
Key Details
- Syntax: `RUN command` (shell form, runs in `/bin/sh -c`) or `RUN ["executable", "param1", "param2"]` (exec form).
- Example: `RUN apt-get update && apt-get install -y curl` installs curl in the image.
- Each RUN creates a new layer, cached for faster rebuilds.
- Use the shell form for simple commands, or the exec form for direct execution without a shell, reducing overhead.
Use Cases
- Install packages, update the system, or configure the environment.
- Combine commands with `&&` to minimize layers (e.g., `RUN apt-get update && apt-get install -y python`).
RUN is essential for customizing the image’s filesystem and dependencies during the build process.
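The two RUN forms can be contrasted in a short Dockerfile sketch (the package choices are just examples):

```dockerfile
FROM ubuntu:20.04
# Shell form: runs via /bin/sh -c, so && chaining and variable expansion work
RUN apt-get update && apt-get install -y curl && apt-get clean
# Exec form: runs the executable directly, with no shell in between
RUN ["curl", "--version"]
```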
## What is the WORKDIR instruction?
The WORKDIR instruction in a Dockerfile sets the working directory for subsequent instructions (e.g., RUN, CMD, COPY) and for the container’s runtime environment.
Key Details
- Syntax: `WORKDIR /path/to/directory` (e.g., `WORKDIR /app`).
- Creates the directory if it doesn’t exist and switches to it.
- Can be used multiple times; paths can be absolute or relative.
- Affects commands like `COPY` (e.g., `COPY . .` copies to the WORKDIR) and `CMD` execution.
- Example: `WORKDIR /app` followed by `COPY . .` places files in `/app`.
Use Cases
- Organizes file operations in a specific directory, improving clarity.
- Sets the default directory for container startup, like `/app` for application code.
WORKDIR simplifies Dockerfile structure and ensures consistent paths for builds and runtime.
## What is the ENV instruction?
The ENV instruction in a Dockerfile sets environment variables in the image, which are available during the build process and in the running container.
Key Details
- Syntax: `ENV key=value` or `ENV key1=value1 key2=value2` (e.g., `ENV APP_PORT=8080`).
- Variables can be used in subsequent instructions like RUN or CMD (e.g., `$APP_PORT`).
- Persists in containers, accessible by the application or scripts.
- Example: `ENV NODE_ENV=production` sets the Node.js environment to production mode.
Use Cases
- Configures app settings, like database URLs or API keys.
- Simplifies parameterization for consistent builds and runtime behavior.
ENV ensures environment-specific configurations are embedded in the image for DevOps workflows.
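A short sketch of ENV in use, with illustrative values (the `server.js` entrypoint is hypothetical) — the variables are available both to later build steps and to the running container:

```dockerfile
FROM node:16
ENV NODE_ENV=production APP_PORT=8080
# Available at build time...
RUN echo "Building for $NODE_ENV"
# ...and at run time (shell-form CMD expands the variable)
CMD node server.js --port $APP_PORT
```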
## What is the LABEL instruction?
The LABEL instruction in a Dockerfile adds metadata to a Docker image, providing key-value pairs for descriptive information.
Key Details
- Syntax: `LABEL key="value"` or `LABEL key1="value1" key2="value2"` (e.g., `LABEL version="1.0" maintainer="user@example.com"`).
- Metadata is stored in the image and viewable with `docker inspect image_name`.
- Commonly used for versioning, authorship, or documentation (e.g., `LABEL description="My web app"`).
- Multiple LABELs can be combined in one instruction for efficiency.
Use Cases
- Document image purpose, like project details or usage instructions.
- Enables filtering images with `docker images --filter "label=key=value"`.
LABEL enhances image organization and discoverability in DevOps workflows without affecting runtime behavior.
## How do you check Docker version?
To check the Docker version, use the `docker --version` or `docker version` command to display the installed Docker Engine version and related details.
Key Details
- Run `docker --version` for a concise output showing the Docker Engine version (e.g., `Docker version 24.0.5, build ced0996`).
- Use `docker version` for detailed information, including client and server versions, API version, and OS/architecture.
- Run `docker info` for additional system details, like container and image counts.
This ensures you confirm the installed version for compatibility or troubleshooting in DevOps workflows.
## What is Docker daemon?
The Docker daemon, also known as `dockerd`, is the core background service in Docker Engine that manages the lifecycle of Docker objects, including images, containers, networks, and volumes. It runs as a persistent process on the host OS, listening for API requests from the Docker CLI or other clients.
Key Functions
- Builds and runs containers, and handles resource allocation (CPU, memory).
- Manages storage, networking, and security for Docker operations.
- Exposes a REST API for interaction (e.g., via `docker` commands).
Key Details
- Starts automatically on Linux; on Windows/macOS, it’s managed by Docker Desktop.
- View status with `docker info` or `systemctl status docker` (Linux).
- Essential for Docker’s client-server architecture, enabling remote management.
The daemon ensures efficient, isolated container execution in DevOps environments.
## How do you start Docker service?
To start the Docker service, use system commands to launch the Docker daemon, which runs in the background to manage containers.
Starting the Service
- On Linux, run `sudo systemctl start docker` to start the Docker daemon.
- Enable auto-start on boot with `sudo systemctl enable docker`.
- Verify it’s running with `systemctl status docker` or `docker info`.
- On Windows/macOS, Docker Desktop manages the daemon; start it by launching the Docker Desktop application.
Key Notes
- Ensure Docker is installed (`docker --version` to check).
- Use `sudo` if your user is not in the `docker` group (Linux).
- Restart with `sudo systemctl restart docker` if needed.
This ensures the Docker daemon is active for container operations in DevOps workflows.
## What is containerization?
Containerization is a lightweight virtualization technology that packages an application and its dependencies into a portable, isolated unit called a container. Containers run consistently across different environments, from development to production, without requiring a full guest OS.
Key Aspects
- Containers share the host OS kernel, making them faster and smaller than VMs (MBs vs. GBs).
- They include only the app, libraries, and configs needed, ensuring portability.
- Docker is a leading platform for containerization, using images to create containers.
- Benefits include scalability, isolation, and simplified CI/CD workflows.
Containerization enables efficient deployment and management of microservices in DevOps.
## Why use Docker?
Docker simplifies application development, deployment, and management by leveraging containerization for consistent, efficient, and scalable workflows.
Key Benefits
- Portability: Containers run identically across environments (dev, test, prod) with all dependencies included.
- Efficiency: Lightweight containers share the host OS kernel, using fewer resources than VMs.
- Isolation: Each container runs in its own environment, preventing conflicts between apps.
- Scalability: Docker supports microservices and orchestration tools like Kubernetes for easy scaling.
- CI/CD Integration: Streamlines development pipelines with reproducible builds and deployments.
Docker reduces setup time, minimizes “it works on my machine” issues, and enhances collaboration in DevOps.
## What are the advantages of Docker?
Docker offers multiple advantages for developing, deploying, and managing applications through containerization.
Key Benefits
- Portability: Containers bundle apps and dependencies, ensuring consistent behavior across environments (dev, test, prod).
- Efficiency: Lightweight containers share the host OS kernel, using less memory and disk space than VMs.
- Isolation: Each container runs independently, preventing app conflicts and enhancing security.
- Scalability: Docker integrates with orchestration tools like Kubernetes, enabling easy scaling of microservices.
- Speed: Containers start in seconds, accelerating development and deployment cycles.
- Reproducibility: Docker images ensure consistent builds, reducing “works on my machine” issues in CI/CD pipelines.
These advantages streamline DevOps workflows, improve collaboration, and optimize resource usage.
## What are the disadvantages of Docker?
While Docker offers many benefits, it has some disadvantages that can impact its use in certain scenarios.
Key Drawbacks
- Complexity: Managing containers, networks, and orchestration (e.g., Kubernetes) can be complex, requiring a learning curve.
- Security: Containers share the host OS kernel, making them less isolated than VMs, potentially increasing vulnerability to kernel exploits.
- Resource Overhead: Running many containers can strain system resources, especially on low-powered hosts.
- Persistent Data: Docker volumes require careful management for stateful applications like databases to avoid data loss.
- Compatibility: Not all applications, especially legacy ones, are easily containerized without significant refactoring.
These challenges may require additional tools or expertise to address in DevOps workflows.
## What operating systems support Docker?
Docker is supported on multiple operating systems, enabling containerization across various environments.
Supported Operating Systems
- Linux: Primary platform, with native support for distributions like Ubuntu, CentOS, Debian, and Fedora. Docker Engine runs directly, leveraging the kernel for efficiency.
- Windows: Docker Desktop supports Windows 10/11 (Pro, Enterprise, Education) using WSL 2 or Hyper-V for Linux containers. Windows containers are also supported for native Windows apps.
- macOS: Docker Desktop runs on macOS (Intel and Apple Silicon), using a lightweight Linux VM to manage containers.
Key Notes
- Linux offers the best performance and flexibility; Windows/macOS rely on Docker Desktop for a user-friendly experience.
- Ensure system requirements like WSL 2 or Hyper-V for Windows, or compatible macOS versions, are met.
Docker’s cross-platform support enables consistent containerized workflows in DevOps.
## How does Docker differ from LXC?
Docker and LXC (Linux Containers) both provide containerization but differ in design, functionality, and use cases.
Key Differences
- Architecture: Docker uses a layered filesystem (e.g., OverlayFS) for images, enabling portability and efficiency. LXC relies on Linux kernel features (cgroups, namespaces) for full-system containers, closer to lightweight VMs.
- Ease of Use: Docker simplifies workflows with tools like Dockerfiles, Docker Hub, and Compose for app-focused containers. LXC requires manual configuration, better suited for system-level containers.
- Portability: Docker images are highly portable across environments; LXC containers are less portable, often tied to specific Linux setups.
- Ecosystem: Docker has a robust ecosystem (Kubernetes, Swarm) and large registry (Docker Hub). LXC lacks a comparable ecosystem, focusing on low-level control.
Docker is ideal for microservices and CI/CD, while LXC suits custom, system-level container needs.
## What is Docker Swarm?
Docker Swarm is a native orchestration tool for Docker that manages a cluster of Docker hosts, enabling the deployment and scaling of containerized applications across multiple machines.
Key Features
- Turns multiple Docker hosts into a single virtual host for simplified management.
- Uses `docker swarm init` to initialize a swarm and `docker swarm join` to add nodes.
- Deploys services with `docker service create`, allowing scaling, load balancing, and rolling updates.
- Provides high availability through manager and worker nodes, with built-in service discovery and networking.
Use Cases
- Ideal for simpler orchestration needs compared to Kubernetes, suitable for small to medium-scale deployments.
Swarm integrates seamlessly with Docker CLI, streamlining container orchestration in DevOps workflows.
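A minimal Swarm workflow, sketched as a transcript (the service name `web` and the replica count are examples):

```shell
$ docker swarm init                      # turn this host into a manager node
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service ls                      # verify the service and its replicas
```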
## What is the difference between Docker Swarm and Kubernetes?
Docker Swarm and Kubernetes are orchestration platforms for managing containerized applications, but they differ in complexity, features, and use cases.
Key Differences
- Ease of Use: Docker Swarm is simpler, integrated with the Docker CLI, and easier to set up (e.g., `docker swarm init`). Kubernetes has a steeper learning curve, requiring tools like kubectl and more configuration.
- Scalability: Kubernetes supports larger, more complex clusters with advanced features like auto-scaling and self-healing. Swarm is better for smaller, simpler deployments.
- Ecosystem: Kubernetes has a broader ecosystem with extensive plugins, Helm charts, and cloud provider support. Swarm relies on Docker’s ecosystem, with fewer external tools.
- Features: Kubernetes offers robust load balancing, storage orchestration, and RBAC. Swarm provides basic orchestration, focusing on simplicity and native Docker integration.
Use Cases
- Use Swarm for lightweight, Docker-centric setups. Choose Kubernetes for large-scale, production-grade applications needing advanced control.
Both streamline container management, but Kubernetes is more feature-rich, while Swarm prioritizes simplicity.
## How do you execute commands inside a running container?
To execute commands inside a running Docker container, use the `docker exec` command, which runs a command in the container’s environment without stopping it.
Execution Steps
- Identify the container with `docker ps` to get its ID or name.
- Run `docker exec [options] container_id_or_name command` (e.g., `docker exec myapp bash` starts a shell).
- Use `-it` for interactive commands: `docker exec -it myapp bash` for a terminal session.
- For single commands, e.g., `docker exec myapp ls` lists files in the container’s working directory.
Key Notes
- Ensure the container is running; `docker exec` doesn’t work on stopped containers.
- Verify the command (e.g., `bash`) exists in the container’s image.
- Use `docker exec -u user` to run as a specific user.
This enables debugging, monitoring, or configuration changes in live containers for DevOps tasks.
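As a quick illustration (a running container named `myapp` is assumed):

```shell
$ docker exec myapp ls /app              # one-off command
$ docker exec -it myapp bash             # interactive shell session
$ docker exec -u root myapp whoami       # run as a specific user
root
```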
## What is docker exec?
The `docker exec` command runs a new command inside a running Docker container, allowing interaction with its environment without stopping or restarting it.
Key Details
- Syntax: `docker exec [options] container_id_or_name command` (e.g., `docker exec myapp ls` lists files).
- Use `-it` for interactive sessions, like `docker exec -it myapp bash` for a shell.
- Supports options like `-u` to specify a user (e.g., `docker exec -u root myapp command`).
- Requires the container to be running (check with `docker ps`).
Use Cases
- Debug issues, inspect files, or run scripts inside a live container.
- Perform maintenance tasks, like updating configurations or checking logs.
This command is essential for troubleshooting and managing containers in DevOps workflows.
## How do you copy files from host to container?
To copy files from the host to a Docker container, use the `docker cp` command, which transfers files or directories between the host and a running or stopped container.
Copy Process
- Identify the container with `docker ps -a` to get its ID or name.
- Run `docker cp host_path container_id_or_name:container_path` (e.g., `docker cp ./file.txt myapp:/app/file.txt` copies `file.txt` to `/app/file.txt` in the container).
- For directories, use the same syntax (e.g., `docker cp ./data myapp:/app/data`).
- Verify the copy with `docker exec myapp ls /app` to check the destination.
Key Notes
- Works for both running and stopped containers.
- Ensure the destination path exists, or Docker creates it.
- Use absolute paths in the container for clarity.
This enables quick file transfers for configuration updates or data injection in DevOps workflows.
## How do you copy files from container to host?
To copy files from a Docker container to the host, use the `docker cp` command, which transfers files or directories from a running or stopped container to the host filesystem.
Copy Process
- Identify the container with `docker ps -a` to get its ID or name.
- Run `docker cp container_id_or_name:container_path host_path` (e.g., `docker cp myapp:/app/file.txt ./file.txt` copies `file.txt` from the container’s `/app` to the host’s current directory).
- For directories, use the same syntax (e.g., `docker cp myapp:/app/data ./data`).
- Verify the copy by checking the host path (e.g., `ls ./file.txt`).
Key Notes
- Works for both running and stopped containers.
- Ensure the container path exists; use `docker exec myapp ls /app` to check.
- Specify absolute paths in the container for accuracy.
This facilitates extracting logs, outputs, or data from containers for analysis or backups in DevOps tasks.
## What is docker cp?
The `docker cp` command copies files or directories between a Docker container and the host filesystem, working with both running and stopped containers.
Key Details
- Syntax: `docker cp source_path destination_path`.
- To copy from host to container: `docker cp host_path container_id:container_path` (e.g., `docker cp ./file.txt myapp:/app/file.txt`).
- To copy from container to host: `docker cp container_id:container_path host_path` (e.g., `docker cp myapp:/app/file.txt ./file.txt`).
- Supports directories (e.g., `docker cp ./data myapp:/app/data`).
- Verify with `docker exec` to check container paths, or `ls` for host paths.
Use Cases
- Transfer configuration files, logs, or data for debugging, backups, or updates.
This command simplifies file management in DevOps workflows, ensuring seamless data exchange.
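Both directions, sketched as a transcript (the paths and the container name `myapp` are illustrative):

```shell
$ docker cp ./config.yml myapp:/app/config.yml   # host → container
$ docker cp myapp:/var/log/app.log ./app.log     # container → host
```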
## What is a multi-stage build in Docker?
A multi-stage build in Docker uses multiple `FROM` instructions in a single Dockerfile to create smaller, optimized images by separating build and runtime environments.
Key Details
- Each `FROM` starts a new stage, often with a different base image (e.g., one for building, another for running).
- Example:

  ```dockerfile
  FROM node:16 AS builder
  WORKDIR /app
  COPY . .
  RUN npm install && npm run build

  FROM nginx:alpine
  COPY --from=builder /app/dist /usr/share/nginx/html
  ```

- Use `--from=stage_name` to copy artifacts (e.g., compiled code) from one stage to another.
- Only the final stage’s layers are included in the output image, reducing size by excluding build tools.
Benefits
- Smaller images: Eliminates unnecessary build dependencies.
- Improved security: Reduces attack surface by excluding build tools.
- Streamlines CI/CD pipelines with efficient, clean images.
This technique is ideal for optimizing container images in DevOps workflows.
## How do you optimize Docker images?
To optimize Docker images, focus on reducing size, improving build speed, and enhancing security for efficient deployment.
Key Strategies
- Use minimal base images like `alpine` (e.g., `FROM alpine:3.14`) to cut down on size.
- Leverage multi-stage builds to exclude build tools, copying only necessary artifacts (e.g., `COPY --from=builder`).
- Minimize layers by combining RUN commands (e.g., `RUN apt-get update && apt-get install -y package`).
- Exclude unnecessary files with `.dockerignore`, and clean up temporary files in RUN steps (e.g., `RUN apt-get clean`).
- Use specific image tags (e.g., `nginx:1.21` instead of `nginx:latest`) for predictability.
- Avoid installing unneeded dependencies to reduce vulnerabilities and size.
These practices create smaller, faster, and more secure images, improving performance in DevOps pipelines.
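Several of these strategies combined in one sketch — a hypothetical Go service, with file names and tags chosen for illustration:

```dockerfile
# Build stage: full toolchain, discarded from the final image
FROM golang:1.21-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Runtime stage: minimal pinned base, only the compiled artifact
FROM alpine:3.19
COPY --from=builder /bin/app /usr/local/bin/app
CMD ["app"]
```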
## What is .dockerignore file?
A `.dockerignore` file specifies files and directories to exclude from the Docker build context, reducing the size of the context sent to the Docker daemon and improving build performance.
Key Details
- Placed in the root of the build context (the same directory as the Dockerfile).
- Syntax: list patterns like `node_modules`, `*.log`, or `.git` to ignore specific files or directories.
- Example content:

  ```
  node_modules
  .git
  *.md
  /temp
  ```

- Excludes files during `COPY` or `ADD` operations, preventing unnecessary files (e.g., logs, build artifacts) from being included in the image.
Benefits
- Speeds up builds by reducing context size.
- Shrinks images by avoiding irrelevant files.
- Enhances security by excluding sensitive data (e.g., `.env` files).
This optimizes build efficiency and image size in DevOps workflows.
## How do you search for Docker images on Docker Hub?
To search for Docker images on Docker Hub, use the `docker search` command to query the Docker Hub registry for available images.
Search Process
- Run `docker search term` (e.g., `docker search nginx`) to list images matching the term.
- Output includes the image name, description, stars, official status, and automated build status.
- Filter results with options like `--filter "is-official=true"` for official images or `--filter "stars=100"` for popular ones.
- Visit hub.docker.com to browse or verify images interactively for more details.
Key Notes
- Use specific terms to narrow results (e.g., `docker search python`); note that `docker search` matches repository names, not tags.
- Check image tags on Docker Hub to ensure compatibility.
This helps identify suitable images for your application in DevOps workflows.
## What is the docker search command?
The `docker search` command queries Docker Hub to find images matching a specified term, helping users discover available images for container deployment.
Key Details
- Syntax: `docker search term` (e.g., `docker search nginx`) lists images with details like name, description, stars, and official/automated status.
- Use filters like `--filter "is-official=true"` for official images or `--filter "stars=50"` for popular ones.
- Example: `docker search python` shows Python-related images.
- Results are limited to 25 by default; use `--limit N` to adjust (e.g., `--limit 50`).
Use Cases
- Identify images for specific applications or versions before pulling.
- Explore Docker Hub’s repository for official or community images.
This command streamlines image discovery for DevOps workflows.
## How do you view Docker system information?
To view Docker system information, use the `docker info` command, which provides detailed insights into the Docker setup and runtime environment.
Key Details
- Run `docker info` to display information like the number of containers (running, paused, stopped), images, storage driver, memory, CPUs, and Docker version.
- Use `--format '{{.ServerVersion}}'` for specific details (e.g., the server version).
- Includes system details like OS, architecture, and networking configuration.
- Run `docker system df` to check disk usage for images, containers, and volumes.
This command is essential for monitoring system health and resource usage in DevOps workflows.
## What is docker info?
The `docker info` command displays comprehensive system-wide information about the Docker installation and its runtime environment.
Key Details
- Running `docker info` provides details like the Docker version, number of containers (running, paused, stopped), images, storage driver, memory, CPUs, and networking setup.
- Use `--format` for specific data, e.g., `docker info --format '{{.ServerVersion}}'` to show the server version.
- Includes host OS, architecture, and configuration details like registry or plugin info.
- Useful for troubleshooting and monitoring system health.
- Useful for troubleshooting and monitoring system health.
This command helps DevOps teams assess Docker’s status and resource usage efficiently.
## What is docker stats?
The `docker stats` command displays real-time resource usage statistics for running Docker containers, helping monitor performance.
Key Details
- Run `docker stats` to show live data on CPU usage, memory consumption, network I/O, and block I/O for all running containers.
- Use `docker stats container_id_or_name` to focus on a specific container (e.g., `docker stats myapp`).
- Add `--no-stream` for a single snapshot instead of continuous updates.
- Output includes the container ID, name, and metrics like the memory limit and percentage used.
- Use `--format` to customize output, e.g., `docker stats --format "{{.Name}}: {{.CPUPerc}}"`.
This command is crucial for performance monitoring and resource optimization in DevOps workflows.
## How do you monitor Docker container resource usage?
To monitor Docker container resource usage, use the `docker stats` command to track real-time metrics like CPU, memory, network, and disk I/O.
Monitoring Steps
- Run `docker stats` to display live statistics for all running containers, showing CPU percentage, memory usage, network I/O, and block I/O.
- For specific containers, use `docker stats container_id_or_name` (e.g., `docker stats myapp`).
- Use `--no-stream` for a one-time snapshot instead of continuous updates.
- Customize output with `--format`, e.g., `docker stats --format "{{.Name}}: {{.MemUsage}}"` for memory details.
- For detailed inspection, use `docker inspect container_name` to check resource limits.
This enables efficient performance tracking and resource management in DevOps environments.
## What is the default storage driver in Docker?

The default storage driver in Docker depends on the host operating system and Docker version, but `overlay2` is commonly used on modern Linux distributions.

### Key Details

- `overlay2` is the default for most Linux systems (e.g., Ubuntu, CentOS) due to its performance and efficiency with layered filesystems.
- It uses OverlayFS to manage image and container layers, supporting fast snapshots and minimal disk usage.
- Other drivers like `devicemapper`, `aufs`, or `btrfs` may be the default on older systems or specific setups.
- Check the current driver with `docker info --format '{{.StorageDriver}}'`.

This driver choice impacts container performance and storage management in DevOps workflows.
## How do you change Docker storage driver?

To change the Docker storage driver, modify the Docker daemon configuration, as the storage driver is set at the daemon level and cannot be changed per container.

### Steps to Change

- Stop the Docker service: `sudo systemctl stop docker`.
- Edit the Docker configuration file (usually `/etc/docker/daemon.json`). If it doesn’t exist, create it.
- Add or update the `storage-driver` key, e.g., `{"storage-driver": "overlay2"}`.
- Common drivers: `overlay2` (recommended), `devicemapper`, `btrfs`, or `aufs`.
- Delete existing images and containers, as changing drivers is not compatible with existing data: `docker system prune -a --volumes`.
- Restart Docker: `sudo systemctl start docker`.
- Verify with `docker info --format '{{.StorageDriver}}'`.

### Key Notes

- Back up data before changing, as switching drivers makes existing containers/images inaccessible.
- Ensure the new driver is supported by your OS (e.g., `overlay2` for modern Linux).

This optimizes storage performance for specific use cases in DevOps workflows.
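As a concrete sketch of the configuration step above (assuming a standard Linux install with the daemon config at `/etc/docker/daemon.json`), a minimal daemon file selecting the driver might look like:

```json
{
  "storage-driver": "overlay2"
}
```

After saving the file and restarting the daemon, `docker info --format '{{.StorageDriver}}'` should report the new driver.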
## What is AUFS?

AUFS (Advanced Multi-Layered Unification Filesystem) is a storage driver used by Docker for managing image and container layers in Linux environments.

### Key Details

- AUFS is a union filesystem that stacks multiple directories (layers) into a single mount point, enabling efficient image layering and copy-on-write operations.
- Each image layer is read-only, with a writable container layer on top for changes.
- It supports fast snapshots and low disk usage, ideal for containerization.
- Commonly used on older Linux distributions (e.g., Ubuntu) but less favored than `overlay2` due to lack of mainline kernel support.

### Use Cases

- Suitable for Docker setups on systems lacking `overlay2` support, though `overlay2` is preferred for modern deployments.

AUFS ensures efficient storage management for containers in specific DevOps scenarios.
## What is overlay2?

Overlay2 is a modern storage driver used by Docker for managing image and container layers in Linux environments, leveraging OverlayFS.

### Key Details

- Uses OverlayFS to create a union filesystem, stacking read-only image layers with a writable container layer for efficient storage.
- Offers better performance and stability compared to older drivers like AUFS or devicemapper.
- Default driver for most Linux distributions (e.g., Ubuntu, CentOS) in recent Docker versions due to its speed and kernel support.
- Supports fast snapshots, copy-on-write, and minimal disk usage.
- Verify with `docker info --format '{{.StorageDriver}}'`.

Overlay2 is ideal for optimizing container storage and performance in DevOps workflows.
## How do you prune unused Docker objects?

To prune unused Docker objects, use the `docker system prune` command to remove stopped containers, unused networks, dangling images, and build cache, freeing up disk space.

### Pruning Process

- Run `docker system prune` to delete all stopped containers, unused networks, and dangling images (untagged images not referenced by any container).
- Use `--all` (`docker system prune -a`) to also remove unused images (not associated with any container).
- Add `--volumes` to remove unused volumes (e.g., `docker system prune --volumes`).
- Confirm with `docker system df` to check reclaimed space.

### Key Notes

- Use cautiously, as pruning is irreversible; ensure critical data is backed up.
- Run specific prunes like `docker container prune`, `docker image prune`, or `docker volume prune` for targeted cleanup.

This optimizes system resources and maintains a clean Docker environment in DevOps workflows.
## What is docker system prune?

The `docker system prune` command removes unused Docker objects to free up disk space and maintain a clean environment.

### Key Details

- Deletes stopped containers, unused networks, dangling images (untagged images not used by containers), and build cache.
- Run `docker system prune` for basic cleanup.
- Use `--all` (`docker system prune -a`) to remove all unused images, not just dangling ones.
- Add `--volumes` (`docker system prune --volumes`) to delete unused volumes.
- Check reclaimed space with `docker system df`.

### Use Cases

- Optimizes storage by removing obsolete objects in development or production.
- Use cautiously, as pruning is irreversible; back up critical data first.

This command streamlines resource management in DevOps workflows.
## How do you export a Docker container?

To export a Docker container, use the `docker export` command to save the container’s filesystem as a tar archive, which can be imported later or shared.

### Export Process

- Identify the container with `docker ps -a` to get its ID or name.
- Run `docker export container_id_or_name > file.tar` (e.g., `docker export myapp > myapp.tar`) to save the filesystem to `myapp.tar`.
- Verify the tar file exists on the host (e.g., `ls myapp.tar`).

### Key Notes

- Exports only the filesystem, not volumes or container metadata (e.g., ports, environment).
- Use `docker import` to create an image from the tar file later.
- Useful for backing up or sharing a container’s state, but `docker save` is preferred for images.

This facilitates container backups or migrations in DevOps workflows.
## How do you import a Docker container?

To import a Docker container, use the `docker import` command to create a Docker image from a tar archive previously exported from a container’s filesystem.

### Import Process

- Ensure you have a tar file from a container (e.g., created with `docker export myapp > myapp.tar`).
- Run `docker import myapp.tar image_name:tag` (e.g., `docker import myapp.tar myapp:1.0`) to create an image.
- Verify the image with `docker images` to confirm it’s listed.
- Run a new container from the image: `docker run -d myapp:1.0`. Since imports drop metadata, you may need to append a command (e.g., `docker run -d myapp:1.0 /bin/sh`).

### Key Notes

- Imports only the filesystem, not container metadata like ports or volumes.
- Use meaningful tags for organization (e.g., `myapp:backup`).
- Useful for restoring or sharing container filesystems in DevOps workflows.

This enables recreation of containers from exported archives.
## What is docker save?

The `docker save` command exports a Docker image, including its layers and metadata, as a tar archive for backup or sharing.

### Key Details

- Syntax: `docker save -o file.tar image_name:tag` (e.g., `docker save -o myapp.tar myapp:1.0`).
- Alternatively, use `docker save image_name:tag > file.tar` for the same result.
- Saves the entire image, including its history, unlike `docker export`, which only saves a container’s filesystem.
- Verify the file exists (e.g., `ls myapp.tar`).
- Use `docker load -i file.tar` to import the image later.

### Use Cases

- Back up images or transfer them to systems without registry access.
- Ideal for preserving image metadata for consistent deployments in DevOps workflows.
## What is docker load?

The `docker load` command imports a Docker image from a tar archive, previously created with `docker save`, into the local Docker image repository.

### Key Details

- Syntax: `docker load -i file.tar` (e.g., `docker load -i myapp.tar`).
- Alternatively, use `docker load < file.tar` for the same result.
- Restores the image, including its layers and metadata (e.g., tags, history), unlike `docker import`, which creates an image from a container’s filesystem without metadata.
- Verify the loaded image with `docker images` to confirm it’s listed.
- Run a container from the image: `docker run -d image_name:tag`.

### Use Cases

- Restores backed-up images or transfers images to systems without registry access.
- Ensures consistent deployments with preserved image metadata in DevOps workflows.
## How do you commit changes to a new Docker image?

To commit changes to a new Docker image, use the `docker commit` command to create an image from a modified container’s filesystem and configuration.

### Commit Process

- Identify the container with `docker ps -a` to get its ID or name.
- Run `docker commit container_id_or_name new_image_name:tag` (e.g., `docker commit myapp myapp:updated`).
- Verify the new image with `docker images` to confirm it’s listed.
- Optionally, push to a registry: `docker push new_image_name:tag`.

### Key Notes

- Captures the container’s filesystem and metadata (e.g., CMD, ports).
- Use meaningful tags for versioning (e.g., `myapp:v2`).
- Avoid overusing it; prefer Dockerfiles for reproducible builds in DevOps workflows.

This creates a new image with changes for reuse or sharing.
## What is docker commit?

The `docker commit` command creates a new Docker image from a container’s modified filesystem and configuration, capturing its current state.

### Key Details

- Syntax: `docker commit container_id_or_name new_image_name:tag` (e.g., `docker commit myapp myapp:updated`).
- Saves changes like installed packages, edited files, or updated settings in the container.
- Includes metadata like CMD, ENTRYPOINT, and exposed ports.
- Verify the image with `docker images` to confirm creation.
- Useful for quick snapshots, but Dockerfiles are preferred for reproducible builds.

This enables saving container modifications as a new image for reuse or sharing in DevOps workflows.
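To make the “prefer Dockerfiles” point concrete: a change you might capture with `docker commit` (say, installing curl inside a running Ubuntu container) can instead be expressed reproducibly in a Dockerfile. This is an illustrative sketch; the base image, package, and tag are placeholders:

```dockerfile
# Reproducible alternative to committing a manually modified container.
FROM ubuntu:20.04

# The change that docker commit would otherwise capture as an opaque layer:
RUN apt-get update && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*

CMD ["bash"]
```

Built with `docker build -t myapp:updated .`, this yields an image equivalent in spirit to the committed one, but every step is versioned and repeatable.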
## What is the lifecycle of a Docker container?

The lifecycle of a Docker container involves distinct stages from creation to removal, managed by Docker commands.

### Key Stages

- Created: A container is instantiated from an image using `docker create` or `docker run`, setting up its filesystem and configuration but not starting it.
- Running: Started with `docker start` or `docker run`, the container executes its main process (defined by CMD/ENTRYPOINT).
- Paused: Temporarily halted with `docker pause`, suspending all processes; resumed with `docker unpause`.
- Stopped: Gracefully halted with `docker stop`, allowing the process to clean up; can be restarted with `docker start`.
- Removed: Deleted with `docker rm`, freeing resources; running containers require `docker rm -f`.

### Key Notes

- Check status with `docker ps` (running) or `docker ps -a` (all).
- Data in volumes persists even after container removal, unless the volumes are explicitly deleted.

This lifecycle ensures flexible management for development and deployment in DevOps workflows.
## What states can a Docker container be in?

A Docker container can exist in several states during its lifecycle, reflecting its current status.

### Container States

- Created: A container is instantiated with `docker create` or `docker run` but not yet started, with its filesystem and configuration set up.
- Running: The container is active, executing its main process (via CMD/ENTRYPOINT) after `docker start` or `docker run`.
- Paused: All processes are suspended using `docker pause`; resumed with `docker unpause`.
- Exited/Stopped: The container’s main process has stopped, either gracefully (`docker stop`) or naturally, but the container remains and can be restarted.
- Dead: A container that cannot be restarted due to errors, though still present until removed.
- Removed: The container is deleted with `docker rm`, freeing its resources.

### Key Notes

- Check states with `docker ps` (running) or `docker ps -a` (all states).
- These states enable flexible container management in DevOps workflows.
## How do you pause a Docker container?

To pause a Docker container, use the `docker pause` command, which suspends all processes within a running container without terminating it.

### Pause Process

- Identify the running container with `docker ps` to get its ID or name.
- Run `docker pause container_id_or_name` (e.g., `docker pause myapp`).
- Verify the container’s state with `docker ps -a`, which shows a “Paused” status.
- Resume the container with `docker unpause container_id_or_name`.

### Key Notes

- Pausing preserves the container’s state, unlike stopping, which terminates the process.
- Useful for temporarily halting resource-intensive containers without losing data.
- Check status or troubleshoot with `docker inspect container_name`.

This allows efficient resource management in DevOps workflows.
## How do you unpause a Docker container?

To unpause a Docker container, use the `docker unpause` command to resume all processes in a paused container, restoring it to the running state.

### Unpause Process

- Identify the paused container with `docker ps -a` to get its ID or name (look for a “Paused” status).
- Run `docker unpause container_id_or_name` (e.g., `docker unpause myapp`).
- Verify the container is running with `docker ps`, which lists active containers.
- Check details with `docker inspect container_name` if needed.

### Key Notes

- Only works on containers in the “Paused” state, not stopped or exited ones.
- Restores the container to its previous running state without restarting the process.
- Useful for resuming resource-intensive tasks in DevOps workflows.

This ensures efficient container management with minimal disruption.
## What is docker top?

The `docker top` command displays the running processes within a Docker container, providing a snapshot of process-level details.

### Key Details

- Syntax: `docker top container_id_or_name` (e.g., `docker top myapp`).
- Shows process details such as UID, PID, start time, and command, similar to running `ps` on the host; `ps` options can be appended for extra columns (e.g., `docker top myapp aux`).
- Requires the container to be running; check with `docker ps`.
- Useful for debugging or monitoring the processes a container runs.

This command aids in troubleshooting and performance analysis in DevOps workflows.
## How do you view processes inside a container?

To view processes inside a Docker container, use the `docker top` command, which lists the running processes within a specified container.

### Process Viewing Steps

- Identify the running container with `docker ps` to get its ID or name.
- Run `docker top container_id_or_name` (e.g., `docker top myapp`) to display process details like PID, user, and command.
- For custom columns, append `ps` options, e.g., `docker top myapp aux` to include CPU and memory usage.
- Alternatively, access a shell with `docker exec -it myapp ps aux` for a detailed process list using Linux commands.

This enables monitoring and debugging of container processes in DevOps workflows.
## What is docker diff?

The `docker diff` command inspects changes made to a container’s filesystem since it was created, showing added, modified, or deleted files.

### Key Details

- Syntax: `docker diff container_id_or_name` (e.g., `docker diff myapp`).
- Outputs a list with prefixes: `A` (added), `C` (changed), or `D` (deleted) for each affected file or directory.
- Requires the container to exist (running or stopped); check with `docker ps -a`.
- Useful for debugging or verifying changes before committing to a new image with `docker commit`.

This helps track filesystem modifications in containers for troubleshooting in DevOps workflows.
## How do you see changes in a container’s filesystem?

To see changes in a Docker container’s filesystem, use the `docker diff` command, which lists files and directories that have been added, modified, or deleted since the container was created.

### Steps to View Changes

- Identify the container with `docker ps -a` to get its ID or name.
- Run `docker diff container_id_or_name` (e.g., `docker diff myapp`).
- Output shows changes with prefixes: `A` (added), `C` (changed), or `D` (deleted) for each file or directory.
- Works for running or stopped containers.

This command is useful for debugging or verifying modifications before committing changes to a new image in DevOps workflows.
## What is docker history?

The `docker history` command displays the history of a Docker image, showing the layers and commands used to create it.

### Key Details

- Syntax: `docker history image_name:tag` (e.g., `docker history nginx:latest`).
- Lists each layer with details like layer ID, size, creation time, and the Dockerfile instruction (or `<missing>` for imported images).
- Use `--no-trunc` to show full command details (e.g., `docker history --no-trunc nginx`).
- Helps understand how an image was built, useful for debugging or optimizing.

This command aids in analyzing image composition for DevOps workflows.
## How do you view layers of a Docker image?

To view the layers of a Docker image, use the `docker history` command, which lists the layers and their corresponding build instructions.

### Steps to View Layers

- Identify the image with `docker images` to get its name and tag.
- Run `docker history image_name:tag` (e.g., `docker history nginx:latest`).
- Output shows each layer with details: layer ID, size, creation time, and the Dockerfile command (e.g., `RUN`, `COPY`) or `<missing>` for imported images.
- Use `--no-trunc` for full command details (e.g., `docker history --no-trunc nginx`).

This helps analyze image composition, optimize builds, or debug issues in DevOps workflows.
## What is a base image in Docker?

A base image in Docker is the starting point for building a Docker image, specified in a Dockerfile’s `FROM` instruction, containing the foundational environment for an application.

### Key Details

- Acts as the initial layer, often including an OS (e.g., `ubuntu:20.04`) or minimal runtime (e.g., `alpine:3.14`).
- Can be an official image from Docker Hub (e.g., `python:3.9`) or a custom image.
- Provides essential components like libraries or tools, to which custom layers (e.g., app code, dependencies) are added.
- Choose lightweight bases like `alpine` for smaller images or specific ones for compatibility.

Base images ensure consistent, reproducible builds in DevOps workflows.
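As an illustrative sketch (the file names and entrypoint are placeholders), a Dockerfile layering an application on top of an official base image might look like:

```dockerfile
# Base image: official Python runtime from Docker Hub
FROM python:3.9

# Custom layers added on top of the base
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

CMD ["python", "app.py"]
```

Everything below the `FROM` line becomes additional layers stacked on the base image’s layers.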
## How do you choose a base image?

Choosing a base image for a Docker container involves selecting an image that balances application requirements, size, and security.

### Selection Criteria

- Compatibility: Pick an image matching your app’s needs, e.g., `python:3.9` for Python apps or `node:16` for Node.js.
- Size: Opt for lightweight images like `alpine` (e.g., `alpine:3.14`) to reduce image size and improve performance.
- Official vs. Community: Use official Docker Hub images for reliability and updates; verify community images for trustworthiness.
- Security: Choose images with recent updates and minimal vulnerabilities; check with tools like `docker scan`.
- Versioning: Specify exact tags (e.g., `ubuntu:20.04`) over `latest` for predictable builds.

### Key Tips

- Test compatibility in development before production.
- Use multi-stage builds to keep final images lean.

This ensures efficient, secure, and compatible containers for DevOps workflows.
## What is scratch image?

The `scratch` image in Docker is an empty, minimal base image used as a starting point for building highly customized Docker images.

### Key Details

- Contains no filesystem, OS, or dependencies, making it the smallest possible base image (0 bytes).
- Used in a Dockerfile with `FROM scratch` to create images with only essential files.
- Common for building lightweight images for compiled binaries (e.g., Go, C++ apps).
- Example: `FROM scratch`, then `COPY myapp /` to add a binary, creating a minimal image.

### Use Cases

- Reduces image size and attack surface for high-performance or secure applications.
- Often used in multi-stage builds to copy artifacts from a build stage to a `scratch`-based final image.

This optimizes container size and efficiency in DevOps workflows.
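A sketch of the multi-stage pattern described above, assuming a statically linked Go program in `main.go` (the source file and binary names are illustrative):

```dockerfile
# Build stage: compile a static Go binary
FROM golang:1.21 AS build
WORKDIR /src
COPY main.go .
# CGO disabled so the binary has no libc dependency
RUN CGO_ENABLED=0 go build -o /myapp main.go

# Final stage: empty base image holding only the binary
FROM scratch
COPY --from=build /myapp /myapp
ENTRYPOINT ["/myapp"]
```

The final image contains a single file, so it stays a few megabytes even though the build stage pulled in the full Go toolchain.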
## What is the smallest Docker image?

The smallest Docker image is the `scratch` image, an empty base image with no filesystem, OS, or dependencies.

### Key Details

- Size is 0 bytes, as it contains no layers or content.
- Used in a Dockerfile with `FROM scratch` for highly minimal images, often for compiled binaries (e.g., Go, C++).
- Example: `FROM scratch` followed by `COPY myapp /` adds only the application binary.
- Commonly used in multi-stage builds to create tiny final images by copying artifacts from a build stage.

### Benefits

- Minimizes image size and attack surface, ideal for performance and security.

This makes `scratch` perfect for lightweight, efficient containers in DevOps workflows.
## Conclusion
Mastering Docker is essential for modern DevOps roles, as it enables efficient containerization, deployment, and scaling of applications. This series covered fundamental concepts—from core components like images, containers, and the Engine to commands for building, running, managing, and optimizing Docker artifacts. Understanding Dockerfile instructions, networking, volumes, and orchestration tools like Swarm prepares you for real-world scenarios and interview questions.
Practice by building simple projects, experimenting with Docker Compose for multi-container apps, and exploring Docker Hub. Stay updated with official documentation, as Docker evolves rapidly. With these basics, you’ll confidently demonstrate skills in creating portable, reproducible environments, reducing deployment friction, and enhancing team collaboration. Good luck in your interviews!