A Complete Guide to Docker

From Zero to Deployment

7 min read

What is Docker? 🐳

I remember when I first heard about Docker - it seemed complex and intimidating. But at its core, Docker is just a platform that makes it super easy to create, deploy, and run applications using containers. Think of it as a way to package your application and all its dependencies into a standardized unit (called a container) that can run anywhere.

The beauty of Docker is that it ensures your application works the same way across different environments. You know that classic developer excuse, "But it works on my machine!"? Well, with Docker, that becomes a thing of the past.

Understanding Containers πŸ“¦

Before we go further, let's clear up what containers actually are. I like to explain containers using the shipping container analogy:

Just like how shipping containers standardized global cargo transport by providing a consistent way to package goods, software containers standardize how we package applications. Each container includes:

  • Your application code

  • Runtime environment (like Node.js)

  • System tools and libraries

  • Configuration files

The cool part is that containers are isolated from each other and the host system, but they share the host's OS kernel. This makes them extremely lightweight and fast to start up.
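A minimal Dockerfile maps almost one-to-one onto that checklist: the runtime, your code, extra libraries, and configuration each get their own instruction. Here's a sketch for a hypothetical Node.js app (the file names and the `curl` dependency are just placeholders):

```dockerfile
# Runtime environment (Node.js on a small Alpine base)
FROM node:18-alpine

# System tools and libraries the app needs
RUN apk add --no-cache curl

WORKDIR /app

# Configuration files (dependency manifest)
COPY package*.json ./
RUN npm install

# Your application code
COPY . .

CMD ["node", "server.js"]
```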

Docker vs. Virtual Machines πŸ€”

While exploring Docker, I’ve been getting a lot of questions about the difference between Docker containers and virtual machines (VMs). Here’s what I’ve come to understand so far.

Virtual Machines:

  • Run a complete OS with its own kernel

  • Typically take minutes to start

  • Are resource-heavy (GBs of space)

  • Provide full isolation

Docker Containers:

  • Share the host OS kernel

  • Start in seconds

  • Are lightweight (MBs of space)

  • Provide process-level isolation

Here's a visual representation of the difference:

Traditional VM:
+-----------------+  +-----------------+
|    App 1        |  |    App 2        |
+-----------------+  +-----------------+
|    Guest OS 1   |  |    Guest OS 2   |
+-----------------+  +-----------------+
|    Hypervisor   |
+-----------------+
|    Host OS      |
+-----------------+
|    Hardware     |
+-----------------+

Docker:
+-----------------+  +-----------------+
|    App 1        |  |    App 2        |
+-----------------+  +-----------------+
|            Docker Engine             |
+--------------------------------------+
|    Host OS      |
+-----------------+
|    Hardware     |
+-----------------+

Installing Docker πŸ”§

Let me walk you through installing Docker on your machine.

For Mac:

  1. Download Docker Desktop from Docker Hub

  2. Double-click the downloaded .dmg file

  3. Drag Docker to your Applications folder

  4. Open Docker from Applications

For Windows:

  1. Enable WSL 2 (Windows Subsystem for Linux) first:

     wsl --install
    
  2. Download Docker Desktop from Docker Hub

  3. Run the installer

  4. Start Docker Desktop

For Linux (Ubuntu):

# Update package index
sudo apt-get update

# Install prerequisites
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Set up stable repository
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

# Add your user to docker group (to run docker without sudo)
sudo usermod -aG docker $USER

Verify installation by running:

docker --version
docker run hello-world

Essential Docker Commands πŸ› οΈ

Let me share the Docker commands I use most frequently. I'll explain what each one does:

# Pull an image
docker pull node:18

# List images
docker images

# Run a container
docker run -d -p 3000:3000 --name my-app node:18

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop a container
docker stop my-app

# Remove a container
docker rm my-app

# Remove an image
docker rmi node:18

# View container logs
docker logs my-app

# Execute command in running container
docker exec -it my-app bash

Debugging Docker Containers πŸ”

When things go wrong (and trust me, they will), here's my debugging workflow:

  1. Check container logs:

     docker logs my-app
     docker logs --tail 100 my-app  # Last 100 lines
     docker logs -f my-app          # Follow logs in real-time
    
  2. Inspect container details:

     docker inspect my-app
    
  3. Check container resource usage:

     docker stats my-app
    
  4. Get an interactive shell inside the container:

     docker exec -it my-app /bin/bash
     # or
     docker exec -it my-app /bin/sh
    

Developing with Docker: A Next.js Example 🚀

Let's create a practical example using Next.js. I'll show you my complete development workflow:

  1. First, create a new Next.js project:

     npx create-next-app@latest my-nextjs-docker
     cd my-nextjs-docker
    
  2. Create a Dockerfile:

     # Development stage
     FROM node:18-alpine AS development

     WORKDIR /app

     COPY package*.json ./
     RUN npm install

     COPY . .

     EXPOSE 3000
     CMD ["npm", "run", "dev"]

     # Production stage
     FROM node:18-alpine AS production

     WORKDIR /app

     COPY package*.json ./
     RUN npm install --production

     COPY . .
     RUN npm run build

     EXPOSE 3000
     CMD ["npm", "start"]
    
  3. Create .dockerignore:

     node_modules
     .next
     .git
    
  4. Build and run the development container:

     docker build -t nextjs-dev --target development .
     docker run -d -p 3000:3000 -v $(pwd):/app nextjs-dev

Docker Compose: Managing Multiple Services πŸ”„

When your app grows, you'll likely need multiple services (like a database, cache, etc.). This is where Docker Compose shines. Here's an example with Next.js and MongoDB:

# docker-compose.yml
version: '3.8'

services:
  web:
    build: 
      context: .
      target: development
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - MONGODB_URI=mongodb://db:27017/myapp
    depends_on:
      - db

  db:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db

volumes:
  mongodb_data:

Run everything with:

docker-compose up -d
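One gotcha with the compose file above: `depends_on` only waits for the db container to start, not for MongoDB to actually accept connections. Newer versions of Docker Compose let you gate startup on a healthcheck. Here's a hedged sketch (the `mongosh` ping assumes a recent mongo image that ships it):

```yaml
services:
  db:
    image: mongo:latest
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5

  web:
    build:
      context: .
      target: development
    depends_on:
      db:
        condition: service_healthy
```

With this in place, `web` won't start until MongoDB reports healthy, which avoids a burst of connection errors on first boot.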

Building and Publishing Docker Images πŸ“¦

Let's build our production image and push it to a private repository:

  1. Build the production image:

     docker build -t my-registry.azurecr.io/nextjs-app:v1 --target production .
    
  2. Log in to your private registry (example using Azure Container Registry):

     az acr login --name my-registry
    
  3. Push the image:

     docker push my-registry.azurecr.io/nextjs-app:v1
    

Deploying Your Containerized App πŸš€

Here's how I usually deploy my containerized apps to different platforms:

Using Docker Compose (Simple deployment):

docker-compose -f docker-compose.prod.yml up -d

Using Kubernetes:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nextjs-app
  template:
    metadata:
      labels:
        app: nextjs-app
    spec:
      containers:
      - name: nextjs-app
        image: my-registry.azurecr.io/nextjs-app:v1
        ports:
        - containerPort: 3000

Deploy with:

kubectl apply -f deployment.yaml
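A Deployment on its own isn't reachable from outside the cluster; you'd typically pair it with a Service that routes traffic to the pods. A minimal sketch, assuming the same labels as the Deployment above (the `LoadBalancer` type depends on your platform, so you might use `NodePort` or an Ingress instead):

```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nextjs-app
spec:
  selector:
    app: nextjs-app
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
```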

Docker Volumes: Persisting Data πŸ’Ύ

Docker containers are ephemeral - they lose their data when stopped. That's where volumes come in. Here are the three types I use:

  1. Named Volumes:

     # Create a volume
     docker volume create my-data

     # Use in docker run
     docker run -v my-data:/app/data my-app

     # Use in docker-compose
     volumes:
       my-data:
         driver: local

  2. Bind Mounts (great for development):

     docker run -v $(pwd):/app my-app

  3. tmpfs Mounts (for temporary data):

     docker run --tmpfs /app/temp my-app
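This also explains the `/app/node_modules` line in the docker-compose.yml earlier: combining a bind mount with an anonymous volume is a common trick for Node.js development. Roughly:

```yaml
services:
  web:
    volumes:
      - .:/app            # bind mount: edits on your machine show up
                          # inside the container for live reload
      - /app/node_modules # anonymous volume: stops the bind mount from
                          # hiding the node_modules installed in the image
```

Without the second line, the bind mount would shadow the container's `/app` entirely, including the dependencies installed during the image build.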

Best Practices and Tips πŸ’‘

After working with Docker for years, here are some best practices I always follow:

  1. Use multi-stage builds to keep images small

  2. Never run containers as root

  3. Use .dockerignore to exclude unnecessary files

  4. Cache dependencies separately from code

  5. Use environment variables for configuration

  6. Tag images specifically (avoid using 'latest')

  7. Regularly update base images for security

  8. Use health checks in production containers
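Several of these practices can live in a single Dockerfile. Here's a hedged sketch that combines a multi-stage build, a specific base tag, a non-root user, cached dependencies, and a health check (the port and the `wget` probe are assumptions; adjust them to your app):

```dockerfile
# Multi-stage build: build tools stay out of the final image
FROM node:18-alpine AS builder
WORKDIR /app
# Cache dependencies separately from code
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Small, specifically tagged base image for the final stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app ./

# Never run containers as root
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# Health check for production
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:3000/ || exit 1

EXPOSE 3000
CMD ["npm", "start"]
```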

Common Issues and Solutions πŸ”§

Here are some issues I've encountered and their solutions:

  1. Container can't connect to the internet:

    • Check DNS settings

    • Verify network configuration

  2. Container keeps restarting:

    • Check logs for errors

    • Verify enough resources are allocated

  3. Volume permission issues:

    • Check user/group IDs

    • Use chown in Dockerfile

  4. Image size too large:

    • Use multi-stage builds

    • Remove unnecessary files

    • Use smaller base images

Conclusion πŸŽ‰

Tada! That’s your Docker foundation right there! πŸŽ‰ You’re all set to dive in and start containerizing your apps. The more you experiment and play around with different configurations, the better you'll get. If you hit any bumps along the way, no worriesβ€”the Docker community is full of awesome folks, and the docs are your best friend.

So go ahead, get those containers rolling, and remember: if you ever need help, I’ve got your back. Happy containerizing! πŸ³πŸš€
