Table of contents
- What is Docker?
- Understanding Containers
- Docker vs. Virtual Machines
- Installing Docker
- Essential Docker Commands
- Debugging Docker Containers
- Developing with Docker: A Next.js Example
- Docker Compose: Managing Multiple Services
- Building and Publishing Docker Images
- Deploying Your Containerized App
- Docker Volumes: Persisting Data
- Best Practices and Tips
- Common Issues and Solutions
- Conclusion
What is Docker?
I remember when I first heard about Docker - it seemed complex and intimidating. But at its core, Docker is just a platform that makes it super easy to create, deploy, and run applications using containers. Think of it as a way to package your application and all its dependencies into a standardized unit (called a container) that can run anywhere.
The beauty of Docker is that it ensures your application works the same way across different environments. You know that classic developer excuse, "But it works on my machine!"? Well, with Docker, that becomes a thing of the past.
Understanding Containers
Before we go further, let's clear up what containers actually are. I like to explain containers using the shipping container analogy:
Just like how shipping containers standardized global cargo transport by providing a consistent way to package goods, software containers standardize how we package applications. Each container includes:
- Your application code
- Runtime environment (like Node.js)
- System tools and libraries
- Configuration files
The cool part is that containers are isolated from each other and the host system, but they share the host's OS kernel. This makes them extremely lightweight and fast to start up.
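You can actually see this kernel sharing for yourself. A quick check (assuming Docker is already installed; note that on Mac and Windows the "host" kernel you'll see is the one inside Docker Desktop's Linux VM):
# Print the kernel version from inside a throwaway Alpine container
docker run --rm alpine uname -r
# On Linux, compare it with the host's kernel version - they match
uname -r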
Docker vs. Virtual Machines
While exploring Docker, I've been getting a lot of questions about the difference between Docker containers and virtual machines (VMs). Here's what I've come to understand so far:
Virtual Machines:
- Run a complete OS with its own kernel
- Typically take minutes to start
- Are resource-heavy (GBs of space)
- Provide full isolation
Docker Containers:
- Share the host OS kernel
- Start in seconds
- Are lightweight (MBs of space)
- Provide process-level isolation
Here's a visual representation of the difference:
Traditional VM:
+-----------------+ +-----------------+
|      App 1      | |      App 2      |
+-----------------+ +-----------------+
|   Guest OS 1    | |   Guest OS 2    |
+-----------------+ +-----------------+
|             Hypervisor              |
+-------------------------------------+
|               Host OS               |
+-------------------------------------+
|              Hardware               |
+-------------------------------------+

Docker:
+-----------------+ +-----------------+
|      App 1      | |      App 2      |
+-----------------+ +-----------------+
|            Docker Engine            |
+-------------------------------------+
|               Host OS               |
+-------------------------------------+
|              Hardware               |
+-------------------------------------+
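If you want to see that startup difference yourself, here's a quick experiment (assuming the alpine image is already pulled, so you're not measuring download time):
# Time starting a container, running a command, and tearing it down
time docker run --rm alpine echo "hello from a container"
This usually finishes in well under a second, while booting a fresh VM is typically measured in minutes.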
Installing Docker
Let me walk you through installing Docker on your machine.
For Mac:
Download Docker Desktop from Docker Hub
Double-click the downloaded .dmg file
Drag Docker to your Applications folder
Open Docker from Applications
For Windows:
Enable WSL 2 (Windows Subsystem for Linux) first:
wsl --install
Download Docker Desktop from Docker Hub
Run the installer
Start Docker Desktop
For Linux (Ubuntu):
# Update package index
sudo apt-get update
# Install prerequisites
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up stable repository
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Add your user to docker group (to run docker without sudo)
sudo usermod -aG docker $USER
Verify installation by running:
docker --version
docker run hello-world
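If docker run hello-world fails with a "permission denied" error on Linux, the docker group change from the previous step probably hasn't taken effect yet - it only applies to new login sessions. Log out and back in, or start a shell with the new group applied:
newgrp docker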
Essential Docker Commands
Let me share the Docker commands I use most frequently. I'll explain what each one does:
# Pull an image
docker pull node:18
# List images
docker images
# Run a container
docker run -d -p 3000:3000 --name my-app node:18
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop a container
docker stop my-app
# Remove a container
docker rm my-app
# Remove an image
docker rmi node:18
# View container logs
docker logs my-app
# Execute command in running container
docker exec -it my-app bash
Debugging Docker Containers
When things go wrong (and trust me, they will), here's my debugging workflow:
Check container logs:
docker logs my-app
docker logs --tail 100 my-app   # Last 100 lines
docker logs -f my-app           # Follow logs in real-time
Inspect container details:
docker inspect my-app
Check container resource usage:
docker stats my-app
Get an interactive shell inside the container:
docker exec -it my-app /bin/bash
# or
docker exec -it my-app /bin/sh
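docker inspect dumps a lot of JSON, so I usually pull out just the field I need with --format (a Go template). Two queries I reach for a lot (assuming the container is named my-app):
# Why did the container stop?
docker inspect --format '{{.State.Status}} (exit code {{.State.ExitCode}})' my-app
# Which IP address did it get?
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' my-app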
Developing with Docker: A Next.js Example
Let's create a practical example using Next.js. I'll show you my complete development workflow:
First, create a new Next.js project:
npx create-next-app@latest my-nextjs-docker
cd my-nextjs-docker
Create a Dockerfile:
# Development stage
FROM node:18-alpine AS development
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]

# Production stage
FROM node:18-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
Create a .dockerignore file:
node_modules
.next
.git
Build and run the development container:
docker build -t nextjs-dev --target development .
docker run -d -p 3000:3000 -v $(pwd):/app nextjs-dev
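One gotcha with that bind mount: mounting your project directory over /app also hides the node_modules folder that npm install created inside the image. A common workaround (one option among several) is to add an anonymous volume so the container keeps its own copy:
docker run -d -p 3000:3000 -v $(pwd):/app -v /app/node_modules nextjs-dev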
Docker Compose: Managing Multiple Services
When your app grows, you'll likely need multiple services (like a database, cache, etc.). This is where Docker Compose shines. Here's an example with Next.js and MongoDB:
# docker-compose.yml
version: '3.8'
services:
  web:
    build:
      context: .
      target: development
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - MONGODB_URI=mongodb://db:27017/myapp
    depends_on:
      - db
  db:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
volumes:
  mongodb_data:
Run everything with:
docker-compose up -d
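A few other compose commands you'll likely use once the stack is running:
# See which services are up and on which ports
docker-compose ps
# Follow the logs of a single service
docker-compose logs -f web
# Stop and remove the containers (add -v to also remove named volumes)
docker-compose down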
Building and Publishing Docker Images
Let's build our production image and push it to a private repository:
Build the production image:
docker build -t my-registry.azurecr.io/nextjs-app:v1 --target production .
Log in to your private registry (example using Azure Container Registry):
az acr login --name my-registry
Push the image:
docker push my-registry.azurecr.io/nextjs-app:v1
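Once the image is in the registry, any machine that's logged in can pull and run the exact same build:
docker pull my-registry.azurecr.io/nextjs-app:v1
docker run -d -p 3000:3000 --name nextjs-app my-registry.azurecr.io/nextjs-app:v1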
Deploying Your Containerized App
Here's how I usually deploy my containerized apps to different platforms:
Using Docker Compose (Simple deployment):
docker-compose -f docker-compose.prod.yml up -d
Using Kubernetes:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nextjs-app
  template:
    metadata:
      labels:
        app: nextjs-app
    spec:
      containers:
        - name: nextjs-app
          image: my-registry.azurecr.io/nextjs-app:v1
          ports:
            - containerPort: 3000
Deploy with:
kubectl apply -f deployment.yaml
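The Deployment alone doesn't make the app reachable from outside the cluster; you also need a Service. As a quick sketch (the right Service type depends on your cluster - LoadBalancer works on most cloud providers, NodePort is the usual fallback locally):
kubectl expose deployment nextjs-app --type=LoadBalancer --port=80 --target-port=3000
kubectl get service nextjs-app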
Docker Volumes: Persisting Data
Docker containers are ephemeral - once a container is removed, anything written to its filesystem is gone. That's where volumes come in. Here are the three types I use:
- Named Volumes:
# Create a volume
docker volume create my-data
# Use in docker run
docker run -v my-data:/app/data my-app
# Use in docker-compose
volumes:
  my-data:
    driver: local
- Bind Mounts (great for development):
docker run -v $(pwd):/app my-app
- tmpfs Mounts (for temporary data):
docker run --tmpfs /app/temp my-app
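A few commands for keeping track of volumes, since unused ones tend to pile up:
# List all volumes
docker volume ls
# See where a volume lives on disk and when it was created
docker volume inspect my-data
# Remove a volume you no longer need (it must not be in use)
docker volume rm my-data
# Remove all volumes not used by at least one container
docker volume prune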
Best Practices and Tips
After working with Docker for years, here are some best practices I always follow:
- Use multi-stage builds to keep images small
- Never run containers as root
- Use .dockerignore to exclude unnecessary files
- Cache dependencies separately from code
- Use environment variables for configuration
- Tag images specifically (avoid using 'latest')
- Regularly update base images for security
- Use health checks in production containers (see the Dockerfile sketch below)
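To make a few of these concrete, here's a rough production-stage sketch combining dependency caching, a non-root user, and a health check. It assumes the base image is node:18-alpine (which already ships with a node user) and that the app answers HTTP on port 3000 - adjust for your own setup:
FROM node:18-alpine AS production
WORKDIR /app
# Copy and install dependencies first so this layer is cached between code changes
COPY --chown=node:node package*.json ./
RUN npm ci
COPY --chown=node:node . .
RUN npm run build
# Don't run the app as root
USER node
EXPOSE 3000
# Mark the container unhealthy if the app stops responding
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:3000/ || exit 1
CMD ["npm", "start"]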
Common Issues and Solutions
Here are some issues I've encountered and their solutions:
Container can't connect to the internet:
- Check DNS settings
- Verify network configuration
Container keeps restarting:
- Check logs for errors
- Verify enough resources are allocated
Volume permission issues:
- Check user/group IDs
- Use chown in the Dockerfile (see the example below)
Image size too large:
- Use multi-stage builds
- Remove unnecessary files
- Use smaller base images
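For the volume permission case in particular, the quickest way to see what's going on is to compare the user/group IDs on both sides of the mount, then either match them at run time or fix ownership in the image (the paths and image name here are placeholders):
# Which user is the container running as?
docker exec my-app id
# Who owns the files on the host side of the bind mount?
ls -ln ./data
# Option 1: run the container as your own user and group
docker run --user "$(id -u):$(id -g)" -v $(pwd)/data:/app/data my-app
# Option 2: fix ownership at build time instead, e.g. with COPY --chown=node:node in the Dockerfile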
Conclusion
Tada! That's your Docker foundation right there! You're all set to dive in and start containerizing your apps. The more you experiment and play around with different configurations, the better you'll get. If you hit any bumps along the way, no worries - the Docker community is full of awesome folks, and the docs are your best friend.
So go ahead, get those containers rolling, and remember: if you ever need help, I've got your back. Happy containerizing!