DevOps: Docker and Containerization
Docker is a tool that packages an application and everything it needs to run — code, libraries, settings, and dependencies — into a single portable unit called a container. The container runs the same way on any machine: a developer's laptop, a test server, or a production cloud.
The classic problem Docker solves: "It works on my machine but not on the server." With Docker, the environment travels with the application.
Containers vs Virtual Machines
Both containers and virtual machines (VMs) isolate applications. The key difference is how they use system resources.
| Feature | Virtual Machine | Docker Container |
|---|---|---|
| Startup Time | Minutes | Seconds |
| Size | GBs (includes full OS) | MBs (shares host OS kernel) |
| Isolation | Strong (separate OS) | Process-level isolation |
| Portability | Limited | Very high — runs anywhere |
| Resource Use | Heavy | Lightweight |
A VM includes an entire guest operating system. A container shares the host OS kernel and only bundles the application and its dependencies — making it far lighter and faster.
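The kernel sharing is easy to observe directly. As a quick sketch (assuming Docker is installed and the daemon is running), printing the kernel version from inside a minimal Alpine container returns the same value as the host on a Linux machine, because the container has no kernel of its own:

```bash
# Print the kernel version from inside a minimal Alpine container.
# --rm removes the container automatically when the command exits.
docker run --rm alpine uname -r

# Compare with the host's kernel version (identical on a Linux host)
uname -r
```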
Core Docker Concepts
Docker Image
An image is a read-only blueprint for creating containers. It includes the application code, runtime, libraries, and file system. Images are built from a Dockerfile.
Docker Container
A container is a running instance of an image. You start and stop containers much like any other program, and multiple containers can run from the same image simultaneously.
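As a sketch of running several containers from one image (the image name and ports are illustrative, assuming the `myapp:1.0` image built later in this guide), the containers simply need distinct names and host ports:

```bash
# Two independent containers from the same image,
# each with its own name and host port mapping
docker run -d --name myapp-a -p 3001:3000 myapp:1.0
docker run -d --name myapp-b -p 3002:3000 myapp:1.0

# Both appear in the list of running containers
docker ps
```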
Dockerfile
A Dockerfile is a plain text file with instructions for building a Docker image — step by step.
Docker Hub
Docker Hub is a public registry where pre-built images are stored and shared. Pulling an official Nginx, Node.js, or MySQL image takes one command.
Docker Registry
A registry stores Docker images. Docker Hub is the public one. Companies often use private registries like AWS ECR, Azure ACR, or a self-hosted registry.
Writing a Dockerfile
Here is a Dockerfile for a simple Node.js web app:
```dockerfile
# Start from an official Node.js base image
FROM node:18-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy package files first (for layer caching)
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the port the app listens on
EXPOSE 3000

# Command to start the application
CMD ["node", "server.js"]
```

Dockerfile Instructions Explained
| Instruction | Purpose |
|---|---|
| FROM | Sets the base image to build upon |
| WORKDIR | Sets the working directory inside the container |
| COPY | Copies files from host to container |
| RUN | Executes a command during image build |
| EXPOSE | Documents the port the app uses (informational) |
| CMD | Command to run when the container starts |
| ENV | Sets environment variables inside the container |
| ARG | Passes build-time variables to the Dockerfile |
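ENV and ARG don't appear in the Dockerfile above, so here is a short illustrative fragment (the variable names are made up for this example). The key difference: an ARG value exists only while the image is being built, while ENV values persist into every container started from the image:

```dockerfile
# Build-time variable: override with
#   docker build --build-arg NODE_VERSION=20-alpine .
ARG NODE_VERSION=18-alpine
FROM node:${NODE_VERSION}

# Runtime environment variables: visible to the app inside the container
ENV NODE_ENV=production
ENV PORT=3000
```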
Essential Docker Commands
Building an Image
```bash
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# List all local images
docker images
```

Running a Container
```bash
# Run a container from an image
docker run -d -p 3000:3000 --name myapp-container myapp:1.0

# -d     = Run in background (detached)
# -p     = Map host port 3000 to container port 3000
# --name = Give the container a friendly name
```

Managing Containers
```bash
# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop a container
docker stop myapp-container

# Remove a container
docker rm myapp-container

# View container logs
docker logs myapp-container

# Open a shell inside a running container
docker exec -it myapp-container sh
```

Working with Registries
```bash
# Pull an image from Docker Hub
docker pull nginx:latest

# Tag an image for pushing to a registry
docker tag myapp:1.0 myusername/myapp:1.0

# Push to Docker Hub
docker push myusername/myapp:1.0
```

Docker Compose – Multi-Container Applications
Most real applications involve multiple services: a web app, a database, a cache. Docker Compose manages all of them together using a single YAML file.
Example: Web App + MySQL Database
```yaml
version: '3.8'

services:
  webapp:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DB_HOST=database
      - DB_PASSWORD=secret
    depends_on:
      - database

  database:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: appdb
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:
```

Docker Compose Commands
```bash
# Start all services
docker compose up -d

# Stop all services
docker compose down

# View logs for all services
docker compose logs -f

# Rebuild images and restart
docker compose up -d --build
```

Docker Volumes – Persistent Data
Containers are ephemeral by design: data written to a container's filesystem is lost when the container is removed. Volumes solve this by storing data on the host machine, independently of the container lifecycle.
```bash
# Create a volume
docker volume create mydata

# Mount a volume when running a container
docker run -v mydata:/app/data myapp:1.0

# Mount a host directory into the container (bind mount)
docker run -v /home/john/config:/app/config myapp:1.0
```

Docker in a CI/CD Pipeline
Docker fits naturally into CI/CD:
- Developer pushes code to Git.
- CI pipeline runs tests inside a Docker container (consistent environment).
- Pipeline builds a Docker image and tags it with the commit hash.
- Image is pushed to a private registry (AWS ECR, Docker Hub).
- CD pipeline pulls the new image and deploys it to the server or Kubernetes cluster.
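As one concrete (and hypothetical) way to wire the middle steps together, a GitHub Actions job might look like the sketch below. The registry name, secret names, and `npm test` command are placeholders, not part of this guide's setup:

```yaml
# .github/workflows/build.yml (illustrative sketch)
name: build-and-push
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Build an image tagged with the commit hash, then run tests inside it
      - run: docker build -t myapp:${{ github.sha }} .
      - run: docker run --rm myapp:${{ github.sha }} npm test

      # Log in and push the commit-tagged image to a registry
      - run: echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login -u "${{ secrets.REGISTRY_USER }}" --password-stdin
      - run: docker tag myapp:${{ github.sha }} myusername/myapp:${{ github.sha }}
      - run: docker push myusername/myapp:${{ github.sha }}
```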
Real-World Example
A Python Flask API needs to run on three different environments: a developer's Windows laptop, a Linux staging server, and an AWS production server.
Without Docker, each environment needs manual configuration — different Python versions, library conflicts, and OS differences cause constant headaches.
With Docker, the developer builds one image. It runs identically on all three environments. The Dockerfile is committed to Git alongside the application code, so the build process is documented and reproducible.
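A minimal Dockerfile for that Flask API might look like this sketch (the file names `requirements.txt` and `app.py` and the port are assumptions about the project layout):

```dockerfile
# Slim official Python base image keeps the final image small
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

EXPOSE 5000
CMD ["python", "app.py"]
```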
Summary
- Docker packages applications into portable, self-contained containers.
- A Dockerfile defines how to build a Docker image layer by layer.
- Containers are lightweight, start in seconds, and run consistently across environments.
- Docker Compose manages multi-container applications with a single YAML file.
- Volumes provide persistent storage that survives container restarts and removal.
- Docker images stored in registries are the deployable artifacts in modern CI/CD pipelines.
