Deploying microservices requires careful orchestration of multiple services, their dependencies, and infrastructure components. This chapter covers deployment strategies, containerization, orchestration platforms, and best practices for production-ready Django microservices.
Deploying microservices is fundamentally different from deploying a monolithic application. Instead of deploying one large application, you're deploying multiple small services that need to work together seamlessly. This creates unique challenges that require careful planning and the right tools.
Why Microservices Deployment is Challenging:
- Multiple services must be built, versioned, and released independently, yet remain API-compatible with each other.
- Each service brings its own infrastructure dependencies (database, cache, message queue) that must be provisioned and monitored.
- A partial failure mid-deployment can leave the system in an inconsistent state.
- Debugging spans many services, so observability has to be in place before problems occur.
Real-World Example: Imagine you're running an e-commerce platform with these services: a User Service (accounts and authentication), an Order Service (order processing), and a Payment Service (charging customers), the same three services deployed throughout this chapter.
When you deploy a new version of the Order Service, you need to ensure:
1. In-flight orders aren't dropped while old instances shut down.
2. The new version stays API-compatible with the User and Payment Services that call it.
3. Database migrations run before, or safely alongside, the new code.
4. You can roll back quickly if error rates spike.
Several deployment strategies manage this risk:
1. Blue-Green Deployment: maintain two identical environments; deploy to the idle ("green") one, then switch all traffic over at once, keeping the old ("blue") environment ready for instant rollback.
2. Rolling Deployment: replace old instances with new ones a few at a time, preserving capacity throughout the rollout.
3. Canary Deployment: route a small percentage of traffic to the new version, watch error rates and latency, and gradually increase the share.
4. Feature Flags: deploy code disabled and toggle features on at runtime, decoupling deployment from release.
Containerization is the foundation of modern microservices deployment. Docker containers package your application with all its dependencies, ensuring it runs consistently across different environments. Think of containers as lightweight, portable boxes that contain everything your service needs to run.
Benefits of Containerization:
- Consistency: the same image runs identically in development, staging, and production.
- Isolation: each service carries its own dependencies without conflicting with others on the same host.
- Portability: images run on any host with a container runtime.
- Efficiency: containers start in seconds and share the host kernel, using far fewer resources than full VMs.
Container vs Virtual Machine: a VM virtualizes hardware and boots a full guest operating system per instance, while a container shares the host kernel and isolates only the process, filesystem, and network namespaces, which is why containers are much lighter to start and run.
A Dockerfile is like a recipe that tells Docker how to build your container. Let's create production-ready Dockerfiles for our Django services:
# user_service/Dockerfile
# Use official Python runtime as base image
FROM python:3.11-slim
# Set environment variables
# PYTHONDONTWRITEBYTECODE: Prevents Python from writing .pyc files to disk
# PYTHONUNBUFFERED: Prevents Python from buffering stdout and stderr
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV DJANGO_SETTINGS_MODULE=user_service.settings.production
# Set work directory inside container
WORKDIR /app
# Install system dependencies
# We need these for PostgreSQL and other system-level packages
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-client \
build-essential \
libpq-dev \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
# Copy requirements first to leverage Docker layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy project files
COPY . .
# Create non-root user for security
# Running as root inside containers is a security risk
RUN adduser --disabled-password --gecos '' appuser
RUN chown -R appuser:appuser /app
USER appuser
# Collect static files
# Django needs this for serving CSS, JS, and images
RUN python manage.py collectstatic --noinput
# Health check - Docker will use this to verify the container is healthy
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD python manage.py check --deploy || exit 1
# Expose port 8000 to the outside world
EXPOSE 8000
# Command to run when container starts
# Using Gunicorn as WSGI server for production
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "user_service.wsgi:application"]
Dockerfile Explanation: python:3.11-slim provides Python on a minimal Debian base, keeping the image small, while the explicit apt packages supply only what PostgreSQL connectivity and the build need. Copying requirements.txt before the rest of the source lets Docker reuse the cached dependency layer when only application code changes.
Multi-stage builds create smaller, more secure production images by separating build dependencies from runtime dependencies:
# order_service/Dockerfile
# Build stage - includes build tools and dependencies
FROM python:3.11-slim as builder
WORKDIR /app
# Install build dependencies (needed for compiling packages)
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
gcc \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies to a local directory
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
# Production stage - minimal runtime environment
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PATH=/home/appuser/.local/bin:$PATH
# Install only runtime dependencies (no build tools)
RUN apt-get update && apt-get install -y \
postgresql-client \
curl \
&& rm -rf /var/lib/apt/lists/*
# Create non-root user
RUN adduser --disabled-password --gecos '' appuser
# Copy Python dependencies from builder stage
# This excludes build tools, making the image smaller and more secure
COPY --from=builder /root/.local /home/appuser/.local
# Set work directory and copy application
WORKDIR /app
COPY --chown=appuser:appuser . .
# Switch to non-root user
USER appuser
# Collect static files
RUN python manage.py collectstatic --noinput
# Health check
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8000/health/ || exit 1
EXPOSE 8000
# Use Gunicorn with optimized settings for production
CMD ["gunicorn", \
"--bind", "0.0.0.0:8000", \
"--workers", "3", \
"--worker-class", "gevent", \
"--worker-connections", "1000", \
"--max-requests", "1000", \
"--max-requests-jitter", "100", \
"--timeout", "30", \
"--keep-alive", "2", \
"order_service.wsgi:application"]
Multi-stage Benefits:
- Smaller images: compilers and build headers never reach the final stage.
- Reduced attack surface: fewer installed packages means fewer vulnerabilities to patch.
- Clean separation between build-time tooling and the runtime environment.
Docker Compose orchestrates multiple containers for local development. It's like a conductor that starts all your services in the right order with the right configuration:
# docker-compose.yml
version: '3.8'
services:
# Database services
postgres:
image: postgres:13
container_name: microservices_postgres
environment:
POSTGRES_DB: microservices_db
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
volumes:
# Persist database data between container restarts
- postgres_data:/var/lib/postgresql/data
# Initialize database with custom scripts
- ./scripts/init-db.sql:/docker-entrypoint-initdb.d/init-db.sql
ports:
- "5432:5432"
networks:
- microservices_network
# Health check to ensure database is ready
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
container_name: microservices_redis
ports:
- "6379:6379"
networks:
- microservices_network
# Persist Redis data
volumes:
- redis_data:/data
# Redis configuration for better performance
command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
# Message Broker
rabbitmq:
image: rabbitmq:3.11-management
container_name: microservices_rabbitmq
environment:
RABBITMQ_DEFAULT_USER: admin
RABBITMQ_DEFAULT_PASS: password123
# Note: the rabbitmq:*-management image already ships with the management plugin enabled
ports:
- "5672:5672" # AMQP port
- "15672:15672" # Management UI
volumes:
- rabbitmq_data:/var/lib/rabbitmq
networks:
- microservices_network
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
interval: 30s
timeout: 30s
retries: 3
# Microservices
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
container_name: user_service
ports:
- "8000:8000"
environment:
# Database connection
- DATABASE_URL=postgresql://postgres:postgres@postgres:5432/microservices_db
# Cache connection
- REDIS_URL=redis://redis:6379/0
# Message queue connection
- CELERY_BROKER_URL=amqp://admin:password123@rabbitmq:5672//
# Django settings
- DJANGO_SETTINGS_MODULE=user_service.settings.development
- DEBUG=1
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
rabbitmq:
condition: service_healthy
networks:
- microservices_network
volumes:
# Mount source code for development (hot reload)
- ./user_service:/app
# Don't mount these directories (use container versions)
- /app/node_modules
- /app/.venv
# Override command for development
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
# Restart policy
restart: unless-stopped
# Celery worker for user service
user-worker:
build: ./user_service
container_name: user_worker
environment:
- DATABASE_URL=postgresql://postgres:postgres@postgres:5432/microservices_db
- REDIS_URL=redis://redis:6379/0
- CELERY_BROKER_URL=amqp://admin:password123@rabbitmq:5672//
- DJANGO_SETTINGS_MODULE=user_service.settings.development
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
rabbitmq:
condition: service_healthy
networks:
- microservices_network
volumes:
- ./user_service:/app
command: celery -A user_service worker --loglevel=info -Q user_queue
restart: unless-stopped
order-service:
build: ./order_service
container_name: order_service
ports:
- "8001:8000"
environment:
- DATABASE_URL=postgresql://postgres:postgres@postgres:5432/microservices_db
- REDIS_URL=redis://redis:6379/0
- CELERY_BROKER_URL=amqp://admin:password123@rabbitmq:5672//
- USER_SERVICE_URL=http://user-service:8000
- DJANGO_SETTINGS_MODULE=order_service.settings.development
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
rabbitmq:
condition: service_healthy
user-service:
condition: service_started
networks:
- microservices_network
volumes:
- ./order_service:/app
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
restart: unless-stopped
order-worker:
build: ./order_service
container_name: order_worker
environment:
- DATABASE_URL=postgresql://postgres:postgres@postgres:5432/microservices_db
- REDIS_URL=redis://redis:6379/0
- CELERY_BROKER_URL=amqp://admin:password123@rabbitmq:5672//
- USER_SERVICE_URL=http://user-service:8000
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
rabbitmq:
condition: service_healthy
networks:
- microservices_network
volumes:
- ./order_service:/app
command: celery -A order_service worker --loglevel=info -Q order_queue
restart: unless-stopped
# API Gateway using Nginx
nginx:
image: nginx:alpine
container_name: api_gateway
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/ssl:/etc/nginx/ssl
depends_on:
- user-service
- order-service
networks:
- microservices_network
restart: unless-stopped
# Named volumes for data persistence
volumes:
postgres_data:
driver: local
redis_data:
driver: local
rabbitmq_data:
driver: local
# Custom network for service communication
networks:
microservices_network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
Docker Compose Features Explained: named volumes persist data across container restarts, the custom bridge network lets services reach each other by container name, healthchecks gate dependent services on actual readiness, and depends_on ensures services start in the right order.
Using Docker Compose:
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f user-service
# Scale a service
docker-compose up -d --scale user-worker=3
# Stop all services
docker-compose down
# Rebuild and start
docker-compose up -d --build
# Remove everything including volumes
docker-compose down -v
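The `depends_on: condition: service_healthy` entries above only gate container startup; application code often still needs to wait for a dependency to accept connections. A generic wait loop with exponential backoff (an illustrative sketch, not a Compose feature) looks like:

```python
import time

def wait_for(check, timeout: float = 30.0, base_delay: float = 0.1) -> bool:
    """Poll check() with exponential backoff until it returns True or time runs out.

    `check` is any zero-argument callable, e.g. one that tries to open a
    database connection and returns True on success.
    """
    deadline = time.monotonic() + timeout
    delay = base_delay
    while True:
        if check():
            return True
        if time.monotonic() + delay > deadline:
            return False  # give up: the dependency never became ready
        time.sleep(delay)
        delay = min(delay * 2, 2.0)  # back off, capped at 2 seconds
```

In an entrypoint script this would wrap the database probe before `manage.py migrate` runs.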
Nginx acts as an API Gateway and reverse proxy for your microservices. It's the single entry point that routes requests to the appropriate services, handles SSL termination, load balancing, and caching.
Why Use Nginx as API Gateway:
- Single entry point: clients talk to one host instead of tracking every service's address.
- Load balancing and failover across multiple instances of each service.
- SSL termination, caching, and compression handled once, in one place.
- Rate limiting and security headers applied uniformly to all services.
# nginx/nginx.conf
# Main nginx configuration for microservices API gateway
# Global settings
user nginx;
worker_processes auto; # Use all available CPU cores
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
# Connection settings
events {
worker_connections 1024; # Max connections per worker
use epoll; # Efficient connection method for Linux
multi_accept on; # Accept multiple connections at once
}
http {
# Basic settings
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log main;
# Performance settings
sendfile on; # Efficient file serving
tcp_nopush on; # Send headers in one piece
tcp_nodelay on; # Don't buffer data-sends
keepalive_timeout 65; # Keep connections alive
types_hash_max_size 2048;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_proxied any;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml+rss
application/atom+xml
image/svg+xml;
# Rate limiting zones
# Limit requests per IP to prevent abuse
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=auth:10m rate=5r/s;
# Upstream definitions (backend services)
upstream user_service {
# Load balancing method
least_conn; # Route to server with fewest active connections
# Backend servers
server user-service:8000 max_fails=3 fail_timeout=30s;
# Add more instances for load balancing:
# server user-service-2:8000 max_fails=3 fail_timeout=30s;
# server user-service-3:8000 max_fails=3 fail_timeout=30s;
# Health check (requires nginx-plus or custom module)
# health_check interval=10s fails=3 passes=2;
}
upstream order_service {
least_conn;
server order-service:8000 max_fails=3 fail_timeout=30s;
}
upstream payment_service {
least_conn;
server payment-service:8000 max_fails=3 fail_timeout=30s;
}
# Main server block
server {
listen 80;
server_name api.yourdomain.com localhost;
# Security headers
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Referrer-Policy "strict-origin-when-cross-origin";
# CORS headers for API access
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept, Authorization";
# Handle preflight requests
if ($request_method = 'OPTIONS') {
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept, Authorization";
add_header Access-Control-Max-Age 1728000;
add_header Content-Type "text/plain charset=UTF-8";
add_header Content-Length 0;
return 204;
}
# User service routes
location /api/users/ {
# Apply rate limiting
limit_req zone=api burst=20 nodelay;
# Proxy settings
proxy_pass http://user_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Timeout settings
proxy_connect_timeout 5s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;
# Buffer settings
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
# Error handling
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
}
# Authentication routes (stricter rate limiting)
location /api/auth/ {
limit_req zone=auth burst=10 nodelay;
proxy_pass http://user_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Shorter timeouts for auth
proxy_connect_timeout 3s;
proxy_send_timeout 5s;
proxy_read_timeout 5s;
}
# Order service routes
location /api/orders/ {
limit_req zone=api burst=20 nodelay;
proxy_pass http://order_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Longer timeout for order processing
proxy_connect_timeout 5s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
}
# Payment service routes (most secure)
location /api/payments/ {
# Stricter rate limiting for payments
limit_req zone=api burst=5 nodelay;
proxy_pass http://payment_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Additional security headers for payments
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Longer timeout for payment processing
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Static files (if serving from nginx)
location /static/ {
alias /var/www/static/;
expires 1y;
add_header Cache-Control "public, immutable";
}
# Media files
location /media/ {
alias /var/www/media/;
expires 30d;
add_header Cache-Control "public";
}
# Default location (catch-all)
location / {
return 404 "Not Found";
}
}
# HTTPS server (production)
server {
listen 443 ssl http2;
server_name api.yourdomain.com;
# SSL configuration
ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/key.pem;
# SSL settings
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# HSTS header
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Include all the location blocks from the HTTP server
include /etc/nginx/conf.d/api-locations.conf;
}
}
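The `least_conn` directive in the upstream blocks above routes each request to the backend with the fewest in-flight requests. The selection logic amounts to (illustrative sketch, with ties broken by name for determinism):

```python
def pick_least_conn(servers: dict[str, int]) -> str:
    """Pick the backend with the fewest active connections.

    `servers` maps backend name -> current in-flight request count,
    the bookkeeping nginx maintains internally for least_conn.
    """
    return min(sorted(servers), key=lambda name: servers[name])

# Given {"user-service-1": 3, "user-service-2": 1}, the second wins.
```

Unlike round-robin, this adapts automatically when one instance is slow and requests pile up on it.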
Advanced Nginx Features:
# nginx/conf.d/caching.conf
# Response caching configuration
# Cache zones
proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m max_size=1g inactive=60m use_temp_path=off;
# Cache configuration for different endpoints
location /api/users/ {
# Cache GET requests for user data
proxy_cache api_cache;
proxy_cache_methods GET HEAD;
proxy_cache_valid 200 5m; # Cache successful responses for 5 minutes
proxy_cache_valid 404 1m; # Cache 404s for 1 minute
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
# Cache headers
add_header X-Cache-Status $upstream_cache_status;
# Don't cache authenticated requests
proxy_cache_bypass $http_authorization;
proxy_no_cache $http_authorization;
proxy_pass http://user_service;
}
# nginx/conf.d/monitoring.conf
# Monitoring and metrics
# Status page for monitoring
server {
listen 8080;
server_name localhost;
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
allow 10.0.0.0/8;
deny all;
}
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
Docker Compose Integration:
# nginx service in docker-compose.yml
nginx:
image: nginx:alpine
container_name: api_gateway
ports:
- "80:80"
- "443:443"
- "8080:8080" # Monitoring port
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/conf.d:/etc/nginx/conf.d
- ./nginx/ssl:/etc/nginx/ssl
- ./static:/var/www/static
- ./media:/var/www/media
- nginx_cache:/var/cache/nginx
depends_on:
- user-service
- order-service
- payment-service
networks:
- microservices_network
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
Testing Nginx Configuration:
# Test configuration syntax
docker exec api_gateway nginx -t
# Reload configuration without downtime
docker exec api_gateway nginx -s reload
# View access logs
docker logs api_gateway -f
# Test endpoints
curl -H "Host: api.yourdomain.com" http://localhost/api/users/
curl -H "Host: api.yourdomain.com" http://localhost/health
This Nginx configuration provides a production-ready API gateway with load balancing, caching, security headers, rate limiting, and monitoring capabilities.
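The `limit_req` zones used throughout the configuration are a bucket-based rate limiter (nginx's implementation is a leaky-bucket variant; the token-bucket sketch below captures the same intuition, with timestamps passed in explicitly for testability):

```python
class TokenBucket:
    """Rate limiter sketch: tokens refill at `rate` per second up to
    `burst`; each request consumes one token or is rejected."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate            # refill rate, tokens/second (cf. rate=10r/s)
        self.burst = burst          # bucket capacity (cf. burst=20)
        self.tokens = float(burst)  # start full
        self.last = 0.0             # timestamp of the previous call

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, clamped to capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With `rate=10, burst=2`, two instantaneous requests pass, the third is rejected, and capacity recovers after 100 ms, mirroring the `10r/s` zone with a small burst.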
Kubernetes (K8s) is a container orchestration platform that automates the deployment, scaling, and management of containerized applications. Think of Kubernetes as a sophisticated autopilot for your microservices - it ensures they're running, healthy, and can scale automatically based on demand.
Before diving into deployment, let's understand key Kubernetes concepts:
Core Components:
- Pod: the smallest deployable unit, one or more containers sharing a network namespace.
- Deployment: declares a desired number of pod replicas and manages rolling updates.
- Service: a stable virtual IP and DNS name in front of a changing set of pods.
- Ingress: HTTP routing from outside the cluster to Services.
- ConfigMap and Secret: configuration and sensitive data injected into pods.
- Namespace: a logical partition of cluster resources.
Why Kubernetes for Microservices:
- Self-healing: crashed containers are restarted and unhealthy pods replaced automatically.
- Horizontal scaling per service, manual or automatic.
- Built-in service discovery and load balancing.
- Declarative configuration: the cluster continuously converges on the state described in your YAML.
Namespaces provide isolation between different environments or teams. ConfigMaps store non-sensitive configuration data that can be shared across services.
# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: microservices
labels:
name: microservices
environment: production
---
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: microservices-config
namespace: microservices
labels:
app: microservices
data:
# Database configuration
POSTGRES_DB: "microservices_db"
POSTGRES_USER: "postgres"
# Cache configuration
REDIS_URL: "redis://redis-service:6379/0"
# Message queue configuration
CELERY_BROKER_URL: "amqp://admin:password123@rabbitmq-service:5672//"
# Service URLs for inter-service communication
USER_SERVICE_URL: "http://user-service:8000"
ORDER_SERVICE_URL: "http://order-service:8000"
PAYMENT_SERVICE_URL: "http://payment-service:8000"
# Django settings
DJANGO_SETTINGS_MODULE: "user_service.settings.production"
DEBUG: "False"
# Logging configuration
LOG_LEVEL: "INFO"
# Feature flags
ENABLE_CACHING: "true"
ENABLE_METRICS: "true"
ConfigMap Usage Explained: values in `data` are injected into pods either as environment variables (via `configMapKeyRef`, as the Deployments below do) or mounted as files. Because ConfigMaps are stored in plain text, they are for non-sensitive configuration only; passwords and keys belong in Secrets.
Secrets store sensitive information like passwords, API keys, and certificates. Unlike ConfigMaps, Secret values are base64 encoded (an encoding, not encryption) and encryption at rest can additionally be enabled on the cluster.
# k8s/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: microservices-secrets
namespace: microservices
labels:
app: microservices
type: Opaque
data:
# Database password (base64 encoded)
POSTGRES_PASSWORD: cG9zdGdyZXM= # 'postgres' in base64
# Django secret key (base64 encoded)
DJANGO_SECRET_KEY: eW91ci1zZWNyZXQta2V5LWhlcmU= # your-secret-key-here
# RabbitMQ password (base64 encoded)
RABBITMQ_PASSWORD: cGFzc3dvcmQxMjM= # 'password123' in base64
# JWT signing key
JWT_SECRET_KEY: and0LXNpZ25pbmcta2V5LWZvci1qd3Q= # jwt-signing-key-for-jwt
# External API keys
STRIPE_SECRET_KEY: c2tfbGl2ZV95b3VyX3N0cmlwZV9zZWNyZXRfa2V5 # sk_live_your_stripe_secret_key
SENDGRID_API_KEY: U0cuWW91clNlbmRHcmlkQVBJS2V5SGVyZQ== # SG.YourSendGridAPIKeyHere
---
# Alternative: Using stringData (automatically base64 encoded)
apiVersion: v1
kind: Secret
metadata:
name: microservices-secrets-alt
namespace: microservices
type: Opaque
stringData: # Kubernetes will automatically base64 encode these
POSTGRES_PASSWORD: "postgres"
DJANGO_SECRET_KEY: "your-secret-key-here"
RABBITMQ_PASSWORD: "password123"
JWT_SECRET_KEY: "jwt-signing-key-for-jwt"
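The encoded values in the first manifest can be produced and verified with the standard library; this also makes the point that base64 offers no secrecy at all:

```python
import base64

def encode_secret(value: str) -> str:
    """Encode a value the way K8s Secret `data` fields expect.

    Base64 is reversible by anyone; it is packaging, not protection.
    """
    return base64.b64encode(value.encode()).decode()

def decode_secret(encoded: str) -> str:
    """Recover the original value from a Secret's `data` field."""
    return base64.b64decode(encoded).decode()
```

For example, `encode_secret("postgres")` yields the `cG9zdGdyZXM=` seen in the manifest above.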
Creating Secrets from Command Line:
# Create secret from literal values
kubectl create secret generic microservices-secrets \
--from-literal=POSTGRES_PASSWORD=postgres \
--from-literal=DJANGO_SECRET_KEY=your-secret-key-here \
--namespace=microservices
# Create secret from files
kubectl create secret generic ssl-certs \
--from-file=tls.crt=path/to/cert.crt \
--from-file=tls.key=path/to/cert.key \
--namespace=microservices
# Create secret for Docker registry authentication
kubectl create secret docker-registry regcred \
--docker-server=your-registry.com \
--docker-username=your-username \
--docker-password=your-password \
--docker-email=your-email@example.com \
--namespace=microservices
Security Best Practices for Secrets:
- Never commit Secret manifests with real values to version control; base64 is trivially reversible.
- Enable encryption at rest for etcd and restrict Secret access with RBAC.
- Mount Secrets only into the pods that actually need them.
- Rotate credentials regularly, and consider an external manager (e.g. HashiCorp Vault or AWS Secrets Manager) for production.
# k8s/postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
namespace: microservices
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: microservices-config
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: microservices-config
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: microservices-secrets
key: POSTGRES_PASSWORD
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
namespace: microservices
spec:
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
namespace: microservices
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
# k8s/user-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
namespace: microservices
labels:
app: user-service
spec:
replicas: 3
selector:
matchLabels:
app: user-service
template:
metadata:
labels:
app: user-service
spec:
containers:
- name: user-service
image: your-registry/user-service:latest
ports:
- containerPort: 8000
env:
- name: DATABASE_URL
value: "postgresql://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@postgres-service:5432/$(POSTGRES_DB)"
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: microservices-config
key: POSTGRES_USER
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: microservices-config
key: POSTGRES_DB
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: microservices-secrets
key: POSTGRES_PASSWORD
- name: DJANGO_SECRET_KEY
valueFrom:
secretKeyRef:
name: microservices-secrets
key: DJANGO_SECRET_KEY
- name: REDIS_URL
valueFrom:
configMapKeyRef:
name: microservices-config
key: REDIS_URL
- name: CELERY_BROKER_URL
valueFrom:
configMapKeyRef:
name: microservices-config
key: CELERY_BROKER_URL
livenessProbe:
httpGet:
path: /health/
port: 8000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready/
port: 8000
initialDelaySeconds: 5
periodSeconds: 5
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
initContainers:
- name: migrate
image: your-registry/user-service:latest
command: ['python', 'manage.py', 'migrate']
env:
- name: DATABASE_URL
value: "postgresql://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@postgres-service:5432/$(POSTGRES_DB)"
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: microservices-config
key: POSTGRES_USER
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: microservices-config
key: POSTGRES_DB
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: microservices-secrets
key: POSTGRES_PASSWORD
---
apiVersion: v1
kind: Service
metadata:
name: user-service
namespace: microservices
spec:
selector:
app: user-service
ports:
- port: 8000
targetPort: 8000
type: ClusterIP
---
# Celery Worker for User Service
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-worker
namespace: microservices
spec:
replicas: 2
selector:
matchLabels:
app: user-worker
template:
metadata:
labels:
app: user-worker
spec:
containers:
- name: user-worker
image: your-registry/user-service:latest
command: ['celery', '-A', 'user_service', 'worker', '--loglevel=info', '-Q', 'user_queue']
env:
- name: DATABASE_URL
value: "postgresql://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@postgres-service:5432/$(POSTGRES_DB)"
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: microservices-config
key: POSTGRES_USER
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: microservices-config
key: POSTGRES_DB
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: microservices-secrets
key: POSTGRES_PASSWORD
- name: CELERY_BROKER_URL
valueFrom:
configMapKeyRef:
name: microservices-config
key: CELERY_BROKER_URL
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
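The liveness and readiness probes in the Deployment above assume `/health/` and `/ready/` endpoints in the service. What a readiness endpoint typically does is aggregate dependency checks; a framework-agnostic sketch (names and structure assumed, not Django built-ins):

```python
from typing import Callable

def readiness(checks: dict[str, Callable[[], bool]]) -> tuple[bool, dict[str, str]]:
    """Run each dependency check; the pod is ready only if all pass.

    `checks` maps a dependency name to a zero-argument probe, e.g.
    one that pings the database or Redis.
    """
    results: dict[str, str] = {}
    for name, check in checks.items():
        try:
            results[name] = "ok" if check() else "failing"
        except Exception as exc:  # a crashed check also means "not ready"
            results[name] = f"error: {exc}"
    return all(v == "ok" for v in results.values()), results
```

Returning HTTP 200 only when the boolean is True lets Kubernetes withhold traffic from pods whose database or broker connection is down, without killing them (that is the liveness probe's job).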
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: user-service-hpa
namespace: microservices
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: user-service
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 10
periodSeconds: 60
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 15
- type: Pods
value: 4
periodSeconds: 15
selectPolicy: Max
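The HPA's core scaling decision is a single formula from the Kubernetes documentation: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), then clamped to the min/max bounds above:

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_utilization / target_utilization)

# 3 replicas at 90% CPU against a 70% target scales up to 4;
# 4 replicas at 35% CPU against the same target scales down to 2.
```

The `behavior` section then rate-limits how fast those jumps are allowed to happen, which is why scale-down above is capped at 10% per minute.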
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: microservices-ingress
namespace: microservices
annotations:
kubernetes.io/ingress.class: nginx
# Regex paths require this annotation on ingress-nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
tls:
- hosts:
- api.yourdomain.com
secretName: microservices-tls
rules:
- host: api.yourdomain.com
http:
paths:
- path: /users(/|$)(.*)
pathType: ImplementationSpecific
backend:
service:
name: user-service
port:
number: 8000
- path: /orders(/|$)(.*)
pathType: ImplementationSpecific
backend:
service:
name: order-service
port:
number: 8000
# .github/workflows/deploy.yml
name: Deploy Microservices
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
service: [user-service, order-service]
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: |
cd ${{ matrix.service }}
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Run tests
run: |
cd ${{ matrix.service }}
pytest --cov --cov-report=xml
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
file: ${{ matrix.service }}/coverage.xml
build-and-push:
needs: test
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
strategy:
matrix:
service: [user-service, order-service]
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Log in to Container Registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/${{ matrix.service }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=sha,prefix={{branch}}-
type=raw,value=latest,enable={{is_default_branch}}
- name: Build and push Docker image
uses: docker/build-push-action@v4
with:
context: ./${{ matrix.service }}
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
deploy-staging:
needs: build-and-push
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
environment: staging
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up kubectl
uses: azure/setup-kubectl@v3
with:
version: 'v1.27.0'
- name: Configure kubectl
run: |
echo "${{ secrets.KUBE_CONFIG_STAGING }}" | base64 -d > kubeconfig
export KUBECONFIG=kubeconfig
- name: Deploy to staging
run: |
          export KUBECONFIG=kubeconfig
          # Update image tags in deployment files
          sed -i "s|your-registry/user-service:latest|${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/user-service:${{ github.sha }}|g" k8s/user-service.yaml
          sed -i "s|your-registry/order-service:latest|${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/order-service:${{ github.sha }}|g" k8s/order-service.yaml
          # Apply Kubernetes manifests
          kubectl apply -f k8s/namespace.yaml
          kubectl apply -f k8s/configmap.yaml
          kubectl apply -f k8s/secrets.yaml
          kubectl apply -f k8s/postgres.yaml
          kubectl apply -f k8s/redis.yaml
          kubectl apply -f k8s/rabbitmq.yaml
          kubectl apply -f k8s/user-service.yaml
          kubectl apply -f k8s/order-service.yaml
          kubectl apply -f k8s/hpa.yaml
          kubectl apply -f k8s/ingress.yaml
          # Wait for deployments to be ready
          kubectl rollout status deployment/user-service -n microservices --timeout=300s
          kubectl rollout status deployment/order-service -n microservices --timeout=300s

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.27.0'

      - name: Configure kubectl
        run: |
          echo "${{ secrets.KUBE_CONFIG_PRODUCTION }}" | base64 -d > kubeconfig
          export KUBECONFIG=kubeconfig

      - name: Deploy to production
        run: |
          export KUBECONFIG=kubeconfig
          # Update image tags
          sed -i "s|your-registry/user-service:latest|${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/user-service:${{ github.sha }}|g" k8s/user-service.yaml
          sed -i "s|your-registry/order-service:latest|${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/order-service:${{ github.sha }}|g" k8s/order-service.yaml
          # Rolling update deployment
          kubectl set image deployment/user-service user-service=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/user-service:${{ github.sha }} -n microservices
          kubectl set image deployment/order-service order-service=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/order-service:${{ github.sha }} -n microservices
          # Wait for rollout to complete
          kubectl rollout status deployment/user-service -n microservices --timeout=600s
          kubectl rollout status deployment/order-service -n microservices --timeout=600s
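The `sed` substitutions in both jobs do one thing: swap the `your-registry/...:latest` placeholder for an immutable commit-SHA tag. A plain-Python sketch can make that rewrite easier to reason about and test; the `retag_images` helper below is hypothetical, not part of the workflow:

```python
import re

def retag_images(manifest: str, registry: str, image_name: str, sha: str) -> str:
    """Mirror the workflow's sed commands: rewrite every
    'your-registry/<service>:latest' reference to an immutable
    '<registry>/<image_name>/<service>:<sha>' tag."""
    def repl(match: re.Match) -> str:
        return f"{registry}/{image_name}/{match.group(1)}:{sha}"
    return re.sub(r"your-registry/([\w-]+):latest", repl, manifest)

print(retag_images(
    "image: your-registry/user-service:latest",
    "ghcr.io", "acme/shop", "3f2c1ab",
))
# → image: ghcr.io/acme/shop/user-service:3f2c1ab
```

Pinning deployments to the commit SHA rather than `latest` is what makes `kubectl rollout undo` meaningful: each revision references a distinct, reproducible image.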
{
  "family": "user-service",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
  "containerDefinitions": [
    {
      "name": "user-service",
      "image": "your-account.dkr.ecr.us-west-2.amazonaws.com/user-service:latest",
      "portMappings": [
        {
          "containerPort": 8000,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {
          "name": "DJANGO_SETTINGS_MODULE",
          "value": "user_service.settings.production"
        }
      ],
      "secrets": [
        {
          "name": "DATABASE_URL",
          "valueFrom": "arn:aws:secretsmanager:us-west-2:123456789012:secret:microservices/database-url"
        },
        {
          "name": "DJANGO_SECRET_KEY",
          "valueFrom": "arn:aws:secretsmanager:us-west-2:123456789012:secret:microservices/django-secret"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/user-service",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "healthCheck": {
        "command": [
          "CMD-SHELL",
          "curl -f http://localhost:8000/health/ || exit 1"
        ],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 60
      }
    }
  ]
}
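The task definition's `secrets` block injects `DATABASE_URL` as a single connection string, so the Django production settings module must unpack it into a `DATABASES` entry. Production code usually delegates this to the `dj-database-url` package; the stdlib-only helper below (`parse_database_url` is a hypothetical name) sketches the same transformation:

```python
from urllib.parse import urlparse

def parse_database_url(url):
    """Unpack a postgres://user:password@host:port/dbname URL into a
    Django DATABASES['default']-style dictionary."""
    parsed = urlparse(url)
    return {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': parsed.path.lstrip('/'),
        'USER': parsed.username,
        'PASSWORD': parsed.password,
        'HOST': parsed.hostname,
        'PORT': parsed.port or 5432,
    }

print(parse_database_url('postgres://app:s3cret@db.internal:5432/microservices'))
```

Keeping the whole connection string in Secrets Manager means host, credentials, and database name rotate together, with no code change to the settings module.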
# infrastructure/main.tf
provider "aws" {
  region = var.aws_region
}

# VPC and Networking
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "microservices-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["${var.aws_region}a", "${var.aws_region}b", "${var.aws_region}c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true

  tags = {
    Environment = var.environment
    Project     = "microservices"
  }
}

# ECS Cluster
resource "aws_ecs_cluster" "microservices" {
  name = "microservices-cluster"

  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  default_capacity_provider_strategy {
    capacity_provider = "FARGATE"
    weight            = 1
  }

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

# Application Load Balancer
resource "aws_lb" "microservices" {
  name               = "microservices-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = module.vpc.public_subnets

  enable_deletion_protection = false

  tags = {
    Environment = var.environment
  }
}

# RDS Database
resource "aws_db_instance" "postgres" {
  identifier = "microservices-postgres"

  engine         = "postgres"
  engine_version = "13.7"
  instance_class = "db.t3.micro"

  allocated_storage     = 20
  max_allocated_storage = 100
  storage_type          = "gp2"
  storage_encrypted     = true

  db_name  = "microservices"
  username = "postgres"
  password = var.db_password

  vpc_security_group_ids = [aws_security_group.rds.id]
  db_subnet_group_name   = aws_db_subnet_group.postgres.name

  backup_retention_period = 7
  backup_window           = "03:00-04:00"
  maintenance_window      = "sun:04:00-sun:05:00"

  skip_final_snapshot = true

  tags = {
    Environment = var.environment
  }
}

# ElastiCache Redis
resource "aws_elasticache_subnet_group" "redis" {
  name       = "microservices-redis-subnet-group"
  subnet_ids = module.vpc.private_subnets
}

resource "aws_elasticache_cluster" "redis" {
  cluster_id           = "microservices-redis"
  engine               = "redis"
  node_type            = "cache.t3.micro"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis7"
  port                 = 6379
  subnet_group_name    = aws_elasticache_subnet_group.redis.name
  security_group_ids   = [aws_security_group.redis.id]

  tags = {
    Environment = var.environment
  }
}
# k8s/monitoring.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: microservices
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'user-service'
        static_configs:
          - targets: ['user-service:8000']
        metrics_path: '/metrics'
      - job_name: 'order-service'
        static_configs:
          - targets: ['order-service:8000']
        metrics_path: '/metrics'
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: microservices
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: config-volume
              mountPath: /etc/prometheus
            - name: storage-volume
              mountPath: /prometheus
          command:
            - '/bin/prometheus'
            - '--config.file=/etc/prometheus/prometheus.yml'
            - '--storage.tsdb.path=/prometheus'
            - '--web.console.libraries=/etc/prometheus/console_libraries'
            - '--web.console.templates=/etc/prometheus/consoles'
            - '--storage.tsdb.retention.time=200h'
            - '--web.enable-lifecycle'
      volumes:
        - name: config-volume
          configMap:
            name: prometheus-config
        - name: storage-volume
          emptyDir: {}
# user_service/health/views.py
from django.http import JsonResponse
from django.db import connections
from django.core.cache import cache
import logging

logger = logging.getLogger(__name__)


def health_check(request):
    """Basic liveness check: the process is up and serving requests."""
    return JsonResponse({
        'status': 'healthy',
        'service': 'user-service',
        'version': '1.0.0',
    })


def readiness_check(request):
    """Readiness check that verifies every dependency before reporting ready."""
    checks = {
        'database': check_database(),
        'cache': check_cache(),
        'external_services': check_external_services(),
    }
    all_healthy = all(checks.values())
    status_code = 200 if all_healthy else 503
    return JsonResponse({
        'status': 'ready' if all_healthy else 'not_ready',
        'checks': checks,
    }, status=status_code)


def check_database():
    """Check database connectivity with a trivial round-trip query."""
    try:
        with connections['default'].cursor() as cursor:
            cursor.execute('SELECT 1')
        return True
    except Exception as e:
        logger.error(f"Database check failed: {e}")
        return False


def check_cache():
    """Check cache connectivity with a short-lived write/read."""
    try:
        cache.set('health_check', 'ok', 30)
        return cache.get('health_check') == 'ok'
    except Exception as e:
        logger.error(f"Cache check failed: {e}")
        return False


def check_external_services():
    """Check external service dependencies."""
    try:
        # Add checks for downstream services here
        return True
    except Exception as e:
        logger.error(f"External service check failed: {e}")
        return False
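Because `readiness_check` returns 503 the moment any single dependency fails, the aggregation rule is worth isolating as a pure function that can be unit-tested without a running Django server. A sketch of that rule (this `aggregate_readiness` helper is an illustration, not code from the service):

```python
def aggregate_readiness(checks):
    """Map {dependency: healthy?} to the (payload, HTTP status) pair
    that readiness_check() produces: 200 only when every check passes."""
    all_healthy = all(checks.values())
    payload = {
        'status': 'ready' if all_healthy else 'not_ready',
        'checks': checks,
    }
    return payload, 200 if all_healthy else 503

payload, status = aggregate_readiness(
    {'database': True, 'cache': False, 'external_services': True}
)
print(status)  # → 503
```

A 503 from the readiness endpoint tells Kubernetes to stop routing traffic to the pod without restarting it, which is exactly the distinction between readiness and liveness probes.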
This comprehensive deployment guide covers containerization, orchestration, CI/CD, cloud deployments, and monitoring for Django microservices. The examples provide production-ready configurations that can be adapted to your specific requirements and infrastructure.
Testing Microservices
Testing microservices presents unique challenges compared to monolithic applications. This chapter covers comprehensive testing strategies, tools, and best practices for ensuring reliability and quality in your Django microservices architecture.
Securing Microservices
Security in a microservices architecture requires a comprehensive approach covering authentication, authorization, network security, data protection, and monitoring. This chapter explores security best practices and implementation strategies for Django microservices.