Docker Container Deployment - Complete Guide
Published: September 25, 2024 | Reading time: 22 minutes
Docker Deployment Overview
Docker containerization provides consistent, portable application deployment:
Docker Benefits
# Key Benefits
- Consistent environments
- Easy scaling
- Resource isolation
- Fast deployment
- Version control
- Microservices architecture
- Cloud portability
Docker Installation
Docker Engine Setup
Installation Commands
# Ubuntu/Debian Installation
sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
# Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker
# Add user to docker group (log out and back in for the change to take effect)
sudo usermod -aG docker $USER
# Verify installation
docker --version
docker run hello-world
Docker Compose Installation
Docker Compose Setup
# Install Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Make executable
sudo chmod +x /usr/local/bin/docker-compose
# Verify installation
docker-compose --version
# Alternative: install the Compose v2 plugin from Docker's apt repository
sudo apt install docker-compose-plugin
# Check Docker Compose plugin version
docker compose version
Dockerfile Creation
Basic Dockerfile
Node.js Dockerfile
# Use official Node.js runtime as base image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --omit=dev
# Copy application code
COPY . .
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001 -G nodejs
# Change ownership
RUN chown -R nextjs:nodejs /app
USER nextjs
# Expose port
EXPOSE 3000
# Health check (node:18-alpine ships BusyBox wget; curl is not installed)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
# Start application
CMD ["npm", "start"]
Multi-stage Dockerfile
Optimized Multi-stage Build
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install all dependencies (including dev)
RUN npm ci
# Copy source code
COPY . .
# Build application
RUN npm run build
# Production stage
FROM node:18-alpine AS production
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install only production dependencies
RUN npm ci --omit=dev && npm cache clean --force
# Copy built application from builder stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/public ./public
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001 -G nodejs
# Change ownership
RUN chown -R nextjs:nodejs /app
USER nextjs
# Expose port
EXPOSE 3000
# Health check (node:18-alpine ships BusyBox wget; curl is not installed)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
# Start application
CMD ["node", "dist/index.js"]
Docker Compose Configuration
Basic Docker Compose
docker-compose.yml
version: '3.8'

services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:password@db:5432/myapp
    depends_on:
      - db
      - redis
    restart: unless-stopped
    healthcheck:
      # BusyBox wget is available in the node:18-alpine image; curl is not
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 30s
      timeout: 10s
      retries: 3

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - web
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
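The compose file above mounts `./nginx.conf` into the proxy container, but that file is never shown. A minimal sketch, assuming the upstream service is named `web` and listens on port 3000 as configured above (the `app` upstream name is illustrative):

```nginx
# nginx.conf - minimal reverse proxy for the "web" service (sketch)
events {}

http {
  upstream app {
    server web:3000;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://app;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}
```

A TLS server block listening on 443 with certificates from the mounted `./ssl` directory would be added here in a real deployment.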
Production Docker Compose
Production Configuration
version: '3.8'

services:
  web:
    image: myapp:latest
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - JWT_SECRET=${JWT_SECRET}
    depends_on:
      - db
      - redis
    restart: unless-stopped
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./backups:/backups
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 30s
      timeout: 10s
      retries: 3

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.25'
          memory: 256M
        reservations:
          cpus: '0.1'
          memory: 128M
    healthcheck:
      # requirepass is enabled above, so the check must authenticate;
      # ${REDIS_PASSWORD} is interpolated by Compose at parse time
      test: ["CMD-SHELL", "redis-cli -a ${REDIS_PASSWORD} ping | grep PONG"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local

networks:
  default:
    driver: bridge
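The production file reads its credentials from the environment. Compose interpolates these from a `.env` file placed next to `docker-compose.yml`; a sketch with placeholder values (never commit real credentials):

```bash
# .env - interpolated by Docker Compose (all values are placeholders)
DATABASE_URL=postgresql://appuser:change-me@db:5432/myapp
REDIS_URL=redis://:change-me@redis:6379
JWT_SECRET=change-me
POSTGRES_DB=myapp
POSTGRES_USER=appuser
POSTGRES_PASSWORD=change-me
REDIS_PASSWORD=change-me
```

Keep `.env` out of version control and out of the build context (it is already listed in the `.dockerignore` example later in this guide).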
Container Orchestration
Docker Swarm
Swarm Setup
# Initialize Docker Swarm
docker swarm init
# Join worker nodes
docker swarm join --token SWMTKN-1-xxx 192.168.1.100:2377
# Create overlay network
docker network create --driver overlay --attachable myapp-network
# Deploy stack
docker stack deploy -c docker-compose.yml myapp
# List services
docker service ls
# Scale service
docker service scale myapp_web=5
# Update service
docker service update --image myapp:v2.0 myapp_web
# Remove stack
docker stack rm myapp
# Leave swarm
docker swarm leave
Kubernetes Deployment
Kubernetes YAML
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
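The deployment references a secret named `myapp-secrets` with a `database-url` key, which must exist before the pods can start. It can be created with a manifest like the following (the connection string is a placeholder), or imperatively with `kubectl create secret generic`:

```yaml
# secret.yaml - provides the database-url key read by the deployment above
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
type: Opaque
stringData:
  database-url: postgresql://appuser:change-me@db-host:5432/myapp
```

Apply it first with `kubectl apply -f secret.yaml`, then the deployment and service.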
Production Deployment
CI/CD Pipeline
GitHub Actions Workflow
# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: |
            ${{ secrets.DOCKER_USERNAME }}/myapp:latest
            ${{ secrets.DOCKER_USERNAME }}/myapp:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Deploy to production
        if: github.ref == 'refs/heads/main'
        run: |
          ssh ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_HOST }} << 'EOF'
          docker pull ${{ secrets.DOCKER_USERNAME }}/myapp:latest
          docker-compose down
          docker-compose up -d
          docker system prune -f
          EOF
Health Checks and Monitoring
Health Check Implementation
// healthcheck.js
const express = require('express');
const app = express();

// Liveness endpoint: reports process stats and dependency status
app.get('/health', (req, res) => {
  const dbOk = checkDatabase();
  const redisOk = checkRedis();
  const healthy = dbOk && redisOk;
  const healthcheck = {
    uptime: process.uptime(),
    message: healthy ? 'OK' : 'ERROR',
    timestamp: Date.now(),
    checks: {
      database: dbOk ? 'OK' : 'ERROR',
      redis: redisOk ? 'OK' : 'ERROR',
      memory: process.memoryUsage(),
      cpu: process.cpuUsage()
    }
  };
  res.status(healthy ? 200 : 503).json(healthcheck);
});

// Readiness endpoint: returns 200 only once all dependencies are reachable
app.get('/ready', (req, res) => {
  const isReady = checkDatabase() && checkRedis();
  if (isReady) {
    res.status(200).json({ status: 'ready' });
  } else {
    res.status(503).json({ status: 'not ready' });
  }
});

function checkDatabase() {
  // Implement database connectivity check
  return true;
}

function checkRedis() {
  // Implement Redis connectivity check
  return true;
}

module.exports = app;
Security Best Practices
Container Security
Security Configuration
# Secure Dockerfile
FROM node:18-alpine
# Install security updates
RUN apk update && apk upgrade
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001 -G nodejs
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies with security audit
RUN npm ci --omit=dev && npm audit --audit-level=moderate
# Copy application code
COPY --chown=nextjs:nodejs . .
# Switch to non-root user
USER nextjs
# Expose port
EXPOSE 3000
# Health check (node:18-alpine ships BusyBox wget; curl is not installed)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
# Start application
CMD ["node", "index.js"]
# Docker run with security options
docker run -d \
--name myapp \
--read-only \
--tmpfs /tmp \
--user 1001:1001 \
--cap-drop ALL \
--security-opt no-new-privileges \
--memory=512m \
--cpus=0.5 \
-p 3000:3000 \
myapp:latest
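The same hardening flags can be expressed declaratively. A sketch of an equivalent Compose service definition, using the Compose specification's `read_only`, `cap_drop`, `security_opt`, `mem_limit`, and `cpus` keys:

```yaml
# docker-compose.yml - security options matching the docker run flags above
services:
  web:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp
    user: "1001:1001"
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    mem_limit: 512m
    cpus: 0.5
    ports:
      - "3000:3000"
```

Keeping these options in the compose file ensures they are applied on every deploy, not only when someone remembers the right `docker run` invocation.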
Performance Optimization
Image Optimization
Optimization Techniques
# Use multi-stage builds
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:18-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force
COPY --from=builder /app/dist ./dist
USER 1001
EXPOSE 3000
CMD ["node", "dist/index.js"]
# Use .dockerignore
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
.nyc_output
coverage
# Optimize layer caching
# Order matters: dependencies first, then code
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Use specific tags instead of latest
FROM node:18.15.0-alpine3.17
# Minimize layers
RUN apk update && apk add --no-cache \
curl \
&& rm -rf /var/cache/apk/*
Monitoring and Logging
Log Management
Logging Configuration
# Docker Compose with logging
version: '3.8'

services:
  web:
    image: myapp:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    environment:
      - LOG_LEVEL=info
      - LOG_FORMAT=json

  # Centralized logging with the ELK stack
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    volumes:
      - es_data:/usr/share/elasticsearch/data

  logstash:
    image: docker.elastic.co/logstash/logstash:8.5.0
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.0
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch

volumes:
  es_data:
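The logstash service mounts `./logstash.conf`, which the article does not define. A minimal pipeline sketch, assuming containers ship logs to Logstash via Docker's gelf logging driver (i.e. `driver: gelf` with `gelf-address: udp://localhost:12201` instead of the json-file driver shown above):

```conf
# logstash.conf - minimal pipeline sketch (assumes the gelf logging driver)
input {
  gelf {
    port => 12201
  }
}

filter {
  # Parse structured application logs (LOG_FORMAT=json above)
  json {
    source => "message"
    skip_on_invalid_json => true
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "myapp-logs-%{+YYYY.MM.dd}"
  }
}
```

With json-file logging, a shipper such as Filebeat would be used instead of the gelf input; the filter and output stages stay the same.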
Troubleshooting
Common Issues
Debugging Commands
# Container debugging
docker ps -a
docker logs container_name
docker exec -it container_name /bin/sh
# Resource monitoring
docker stats
docker system df
docker system prune
# Network debugging
docker network ls
docker network inspect network_name
docker port container_name
# Volume debugging
docker volume ls
docker volume inspect volume_name
# Image debugging
docker images
docker history image_name
docker inspect image_name
# Build debugging
docker build --no-cache -t myapp .
docker build --progress=plain -t myapp .
# Compose debugging
docker-compose logs -f
docker-compose ps
docker-compose exec service_name /bin/sh
# Performance analysis
docker run --rm -it --pid=host alpine:latest sh
# Inside container: ps aux
Best Practices
Production Checklist
Security
- Use non-root users
- Keep base images updated
- Scan for vulnerabilities
- Use secrets management
- Enable read-only containers
- Drop unnecessary capabilities
- Use minimal base images
Performance
- Optimize image layers
- Use multi-stage builds
- Implement health checks
- Set resource limits
- Use proper caching
- Monitor resource usage
- Implement graceful shutdown
Summary
Docker container deployment involves several key components:
- Installation: Docker Engine, Docker Compose setup
- Dockerfile: Multi-stage builds, security hardening
- Orchestration: Docker Compose, Swarm, Kubernetes
- CI/CD: Automated builds, testing, deployment
- Security: Non-root users, vulnerability scanning
- Monitoring: Health checks, logging, metrics
- Performance: Resource optimization, caching
- Troubleshooting: Debugging tools, common issues
Need More Help?
Struggling with Docker deployment or need help containerizing your applications? Our DevOps experts can help you implement robust containerized solutions.