Cloud deployment transforms Django applications from single-server setups to globally distributed, auto-scaling, and highly available systems. This comprehensive guide covers everything Django developers need to know about deploying to major cloud platforms, from basic concepts to advanced architectures, cost optimization, and production best practices.
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Function as a Service (FaaS) / Serverless
Container as a Service (CaaS)
Single-Region Architecture
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Load Balancer │───▶│ Web Servers │───▶│ Database │
│ (ALB/CLB) │ │ (Auto Scaling) │ │ (RDS/SQL) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ CDN/Static │ │ File Storage │ │ Cache Layer │
│ (CloudFront/S3) │ │ (S3/Blob) │ │ (ElastiCache) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Multi-Region Architecture
┌─────────────────────────────────────────────────────────────────┐
│ Global Load Balancer │
│ (Route 53/Cloud DNS) │
└─────────────────────────────────────────────────────────────────┘
│
┌───────────────┼───────────────┐
▼ ▼ ▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Region 1 │ │ Region 2 │ │ Region 3 │
│ (Primary) │ │ (Secondary) │ │ (DR) │
└─────────────┘ └─────────────┘ └─────────────┘
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ DB Primary │ │ DB Replica │ │ DB Backup │
└─────────────┘ └─────────────┘ └─────────────┘
Right-Sizing Resources
Reserved Instances and Savings Plans
Storage Optimization
Network Cost Management
AWS is the most comprehensive cloud platform with over 200 services. For Django developers, key services include EC2 (compute), RDS (databases), S3 (storage), CloudFront (CDN), and Elastic Beanstalk (PaaS).
Initial Account Configuration
# Install AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# Configure AWS CLI
aws configure
# AWS Access Key ID: [Your Access Key]
# AWS Secret Access Key: [Your Secret Key]
# Default region name: us-east-1
# Default output format: json
# Verify configuration
aws sts get-caller-identity
IAM Security Best Practices
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*",
"rds:*",
"s3:*",
"elasticbeanstalk:*",
"iam:PassRole",
"cloudformation:*"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": ["us-east-1", "us-west-2"]
}
}
}
]
}
Elastic Beanstalk is AWS's PaaS offering that simplifies Django deployment while maintaining full control over underlying resources.
Project Structure for Beanstalk
myproject/
├── .ebextensions/
│ ├── 01_packages.config
│ ├── 02_python.config
│ ├── 03_django.config
│ └── 04_https.config
├── .elasticbeanstalk/
│ └── config.yml
├── requirements.txt
├── manage.py
├── myproject/
│ ├── settings/
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── development.py
│ │ └── production.py
│ └── wsgi.py
└── application.py # EB entry point
Django Settings for Elastic Beanstalk
# settings/production.py
import os
from .base import *
# Elastic Beanstalk environment variables
DEBUG = False
ALLOWED_HOSTS = [host for host in [
    '.elasticbeanstalk.com',
    '.amazonaws.com',
    os.environ.get('DOMAIN_NAME', ''),
] if host]  # the filter drops the empty entry when DOMAIN_NAME is unset
# Database configuration from EB environment
if 'RDS_HOSTNAME' in os.environ:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ['RDS_DB_NAME'],
'USER': os.environ['RDS_USERNAME'],
'PASSWORD': os.environ['RDS_PASSWORD'],
'HOST': os.environ['RDS_HOSTNAME'],
'PORT': os.environ['RDS_PORT'],
'OPTIONS': {
'sslmode': 'require',
},
}
}
# S3 Static and Media Files
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('S3_BUCKET_NAME')
AWS_S3_REGION_NAME = os.environ.get('AWS_DEFAULT_REGION', 'us-east-1')
AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
# Static files configuration
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/static/'
# Media files configuration
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
MEDIA_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/media/'
# S3 settings
AWS_DEFAULT_ACL = None  # public access comes from a bucket policy; new S3 buckets block object ACLs by default
AWS_S3_OBJECT_PARAMETERS = {
'CacheControl': 'max-age=86400',
}
AWS_S3_FILE_OVERWRITE = False
AWS_QUERYSTRING_AUTH = False
# Security settings
SECURE_SSL_REDIRECT = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
# Session and CSRF cookies
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
# Logging to CloudWatch
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'file': {
    'level': 'INFO',
    'class': 'logging.FileHandler',
    # /opt/python/log/ exists only on Amazon Linux 1 platforms; on Amazon
    # Linux 2/2023, prefer the console handler and let EB stream stdout to CloudWatch
    'filename': '/opt/python/log/django.log',
},
'console': {
'level': 'INFO',
'class': 'logging.StreamHandler',
},
},
'loggers': {
'django': {
'handlers': ['file', 'console'],
'level': 'INFO',
'propagate': True,
},
},
}
Application Entry Point
# application.py
import os
import sys
# Add the project directory to Python path
sys.path.insert(0, os.path.dirname(__file__))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings.production')
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Elastic Beanstalk Configuration Files
# .ebextensions/01_packages.config
packages:
yum:
git: []
postgresql-devel: []
libjpeg-turbo-devel: []
libffi-devel: []
openssl-devel: []
# .ebextensions/02_python.config
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: application:application  # module:callable form (Amazon Linux 2/2023); on AL1 use a file path such as application.py
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myproject.settings.production
    PYTHONPATH: /opt/python/current/app
  # Use ONE of the following static-file namespaces, depending on platform generation.
  # Amazon Linux 1 platforms:
  aws:elasticbeanstalk:container:python:staticfiles:
    /static/: staticfiles/
  # Amazon Linux 2/2023 platforms:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: staticfiles
# .ebextensions/03_django.config
# The /opt/python/run/venv path below applies to Amazon Linux 1 platforms;
# on Amazon Linux 2/2023 the virtualenv lives under /var/app/venv/.
container_commands:
01_migrate:
command: "source /opt/python/run/venv/bin/activate && python manage.py migrate --noinput"
leader_only: true
02_collectstatic:
command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
leader_only: true
03_compress_static:
command: "source /opt/python/run/venv/bin/activate && python manage.py compress --force"
leader_only: true
04_create_cache_table:
command: "source /opt/python/run/venv/bin/activate && python manage.py createcachetable"
leader_only: true
05_load_fixtures:
command: "source /opt/python/run/venv/bin/activate && python manage.py loaddata initial_data.json"
leader_only: true
ignoreErrors: true
option_settings:
aws:autoscaling:launchconfiguration:
IamInstanceProfile: aws-elasticbeanstalk-ec2-role
SecurityGroups: sg-12345678
InstanceType: t3.medium
aws:autoscaling:asg:
MinSize: 2
MaxSize: 10
Cooldown: 300
aws:autoscaling:trigger:
MeasureName: CPUUtilization
Unit: Percent
UpperThreshold: 80
LowerThreshold: 20
ScaleUpIncrement: 2
ScaleDownIncrement: -1
aws:elasticbeanstalk:environment:
LoadBalancerType: application
ServiceRole: aws-elasticbeanstalk-service-role
aws:elbv2:loadbalancer:
IdleTimeout: 300
SecurityGroups: sg-12345678
aws:elbv2:listener:443:
Protocol: HTTPS
SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
aws:elasticbeanstalk:healthreporting:system:
SystemType: enhanced
HealthCheckSuccessThreshold: Ok
HealthCheckURL: /health/
aws:elasticbeanstalk:cloudwatch:logs:
StreamLogs: true
DeleteOnTerminate: false
RetentionInDays: 7
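The HealthCheckURL above assumes the application exposes a /health/ endpoint, which Django does not provide out of the box. A minimal sketch of the logic behind such an endpoint — names and structure are illustrative, and the check callables are injectable so the function stays testable without a live database or cache (in a real project you would wrap it in a view returning JsonResponse):

```python
# Minimal health-check logic for the /health/ endpoint referenced above.
# db_ping / cache_ping are injectable so the function can be exercised
# outside a running server; pass callables that hit the DB and cache in Django.

def _safe(check):
    """Run a check callable, mapping any exception to a failure."""
    try:
        return bool(check())
    except Exception:
        return False

def run_health_checks(db_ping=lambda: True, cache_ping=lambda: True):
    """Return an overall status plus per-dependency results."""
    checks = {
        'database': _safe(db_ping),
        'cache': _safe(cache_ping),
    }
    status = 'ok' if all(checks.values()) else 'degraded'
    return {'status': status, 'checks': checks}
```

Keeping the endpoint cheap matters: the load balancer calls it every 30 seconds per instance, so avoid expensive queries inside the checks.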
# .ebextensions/04_https.config
Resources:
sslSecurityGroupIngress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: {"Fn::GetAtt": ["AWSEBSecurityGroup", "GroupId"]}
IpProtocol: tcp
ToPort: 443
FromPort: 443
CidrIp: 0.0.0.0/0
# .ebextensions/05_environment_variables.config
option_settings:
aws:elasticbeanstalk:application:environment:
DEBUG: "False"
SECRET_KEY: "your-secret-key-from-parameter-store"
S3_BUCKET_NAME: "your-s3-bucket-name"
REDIS_URL: "redis://your-elasticache-endpoint:6379/1"
SENTRY_DSN: "your-sentry-dsn"
EMAIL_HOST: "email-smtp.us-east-1.amazonaws.com"
EMAIL_PORT: "587"
EMAIL_USE_TLS: "True"
Deployment Commands
# Initialize Elastic Beanstalk application
eb init -p python-3.9 django-app --region us-east-1
# Create environment
eb create production --database.engine postgres --database.instance db.t3.micro
# Deploy application
eb deploy
# Set environment variables
eb setenv DEBUG=False SECRET_KEY=your-secret-key
# Open application in browser
eb open
# View logs
eb logs
# SSH into instance
eb ssh
# Terminate environment (careful!)
eb terminate production
Advanced Beanstalk Configuration
# .elasticbeanstalk/config.yml
branch-defaults:
main:
environment: production
group_suffix: null
global:
application_name: django-app
branch: null
default_ec2_keyname: my-key-pair
default_platform: Python 3.9 running on 64bit Amazon Linux 2
default_region: us-east-1
include_git_submodules: true
instance_profile: null
platform_name: null
platform_version: null
profile: eb-cli
repository: null
sc: git
workspace_type: Application
Serverless Django with Zappa
# zappa_settings.json
{
"production": {
"app_function": "myproject.wsgi.application",
"aws_region": "us-east-1",
"profile_name": "default",
"project_name": "django-serverless",
"runtime": "python3.9",
"s3_bucket": "django-serverless-deployments",
"memory_size": 512,
"timeout_seconds": 30,
"environment_variables": {
"DJANGO_SETTINGS_MODULE": "myproject.settings.production"
},
"vpc_config": {
"SubnetIds": ["subnet-12345", "subnet-67890"],
"SecurityGroupIds": ["sg-12345678"]
},
"keep_warm": false,
"slim_handler": true,
"exclude": [
"*.pyc",
"__pycache__",
"tests/",
"docs/",
".git/"
]
}
}
# Deploy commands
pip install zappa
zappa init
zappa deploy production
zappa update production
zappa undeploy production
Serverless Django Settings
# settings/serverless.py
import os
from .base import *
# Lambda-specific settings
DEBUG = False
ALLOWED_HOSTS = ['*'] # API Gateway handles routing
# Database - use RDS Proxy for connection pooling
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ['DB_NAME'],
'USER': os.environ['DB_USER'],
'PASSWORD': os.environ['DB_PASSWORD'],
'HOST': os.environ['DB_PROXY_ENDPOINT'], # RDS Proxy endpoint
'PORT': '5432',
'OPTIONS': {
'sslmode': 'require',
},
'CONN_MAX_AGE': 0, # Don't persist connections in Lambda
}
}
# Use S3 for static and media files
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# Cache using DynamoDB or ElastiCache
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': os.environ['REDIS_URL'],
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
'CONNECTION_POOL_KWARGS': {
'max_connections': 1, # Limit connections in Lambda
},
},
}
}
# Session storage
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'
# Logging for CloudWatch
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler',
},
},
'root': {
'handlers': ['console'],
},
}
Amazon ECS (Elastic Container Service) with Fargate provides serverless container deployment without managing EC2 instances. Note that Fargate tasks support only ephemeral volumes and EFS — host-path bind mounts are limited to the EC2 launch type, so files such as the nginx configuration are typically baked into the container image instead.
ECS Task Definition
{
"family": "django-app",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "1024",
"memory": "2048",
"executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
"taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
"containerDefinitions": [
{
"name": "django-web",
"image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/django-app:latest",
"portMappings": [
{
"containerPort": 8000,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "DJANGO_SETTINGS_MODULE",
"value": "myproject.settings.production"
},
{
"name": "AWS_DEFAULT_REGION",
"value": "us-east-1"
}
],
"secrets": [
{
"name": "SECRET_KEY",
"valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:django/secret-key-AbCdEf"
},
{
"name": "DATABASE_URL",
"valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:django/database-url-XyZ123"
},
{
"name": "REDIS_URL",
"valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:django/redis-url-MnOpQr"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/django-app",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs",
"awslogs-create-group": "true"
}
},
"healthCheck": {
"command": ["CMD-SHELL", "curl -f http://localhost:8000/health/ || exit 1"],
"interval": 30,
"timeout": 10,
"retries": 3,
"startPeriod": 60
},
"mountPoints": [
{
"sourceVolume": "static-files",
"containerPath": "/app/staticfiles",
"readOnly": false
}
]
},
{
"name": "nginx",
"image": "nginx:alpine",
"portMappings": [
{
"containerPort": 80,
"protocol": "tcp"
}
],
"essential": true,
"dependsOn": [
{
"containerName": "django-web",
"condition": "HEALTHY"
}
],
"mountPoints": [
{
"sourceVolume": "static-files",
"containerPath": "/usr/share/nginx/html/static",
"readOnly": true
},
{
"sourceVolume": "nginx-config",
"containerPath": "/etc/nginx/conf.d",
"readOnly": true
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/django-app",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "nginx"
}
}
}
],
"volumes": [
{
"name": "static-files"
},
{
  "name": "nginx-config"
}
]
}
ECS Service Definition
{
"serviceName": "django-app-service",
"cluster": "django-cluster",
"taskDefinition": "django-app:1",
"desiredCount": 3,
"launchType": "FARGATE",
"networkConfiguration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-12345678",
"subnet-87654321"
],
"securityGroups": [
"sg-12345678"
],
"assignPublicIp": "DISABLED"
}
},
"loadBalancers": [
{
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/django-tg/1234567890123456",
"containerName": "nginx",
"containerPort": 80
}
],
"serviceRegistries": [
{
"registryArn": "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-12345678",
"containerName": "django-web",
"containerPort": 8000
}
],
"deploymentConfiguration": {
"maximumPercent": 200,
"minimumHealthyPercent": 50,
"deploymentCircuitBreaker": {
"enable": true,
"rollback": true
}
},
"enableExecuteCommand": true,
"tags": [
{
"key": "Environment",
"value": "production"
},
{
"key": "Application",
"value": "django-app"
}
]
}
ECS Deployment Script
#!/bin/bash
# deploy-ecs.sh
set -e
# Configuration
AWS_REGION="us-east-1"
ECR_REPOSITORY="123456789012.dkr.ecr.us-east-1.amazonaws.com/django-app"
CLUSTER_NAME="django-cluster"
SERVICE_NAME="django-app-service"
TASK_FAMILY="django-app"
echo "🚀 Starting ECS deployment..."
# Build and push Docker image
echo "📦 Building Docker image..."
docker build -t django-app:latest .
# Tag for ECR
docker tag django-app:latest $ECR_REPOSITORY:latest
docker tag django-app:latest $ECR_REPOSITORY:$(git rev-parse --short HEAD)
# Login to ECR
echo "🔐 Logging in to ECR..."
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPOSITORY
# Push images
echo "📤 Pushing images to ECR..."
docker push $ECR_REPOSITORY:latest
docker push $ECR_REPOSITORY:$(git rev-parse --short HEAD)
# Update task definition
echo "📝 Updating task definition..."
TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition $TASK_FAMILY --query 'taskDefinition')
# Create new task definition with updated image
NEW_TASK_DEFINITION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$ECR_REPOSITORY:$(git rev-parse --short HEAD)" '.containerDefinitions[0].image = $IMAGE | del(.taskDefinitionArn) | del(.revision) | del(.status) | del(.requiresAttributes) | del(.placementConstraints) | del(.compatibilities) | del(.registeredAt) | del(.registeredBy)')
# Register new task definition
echo "📋 Registering new task definition..."
aws ecs register-task-definition --cli-input-json "$NEW_TASK_DEFINITION"
# Update service
echo "🔄 Updating ECS service..."
aws ecs update-service \
--cluster $CLUSTER_NAME \
--service $SERVICE_NAME \
--task-definition $TASK_FAMILY
# Wait for deployment to complete
echo "⏳ Waiting for deployment to complete..."
aws ecs wait services-stable \
--cluster $CLUSTER_NAME \
--services $SERVICE_NAME
echo "✅ ECS deployment completed successfully!"
RDS PostgreSQL Configuration
# Create RDS subnet group
aws rds create-db-subnet-group \
--db-subnet-group-name django-db-subnet-group \
--db-subnet-group-description "Subnet group for Django RDS" \
--subnet-ids subnet-12345678 subnet-87654321
# Create RDS instance
aws rds create-db-instance \
--db-instance-identifier django-production-db \
--db-instance-class db.t3.medium \
--engine postgres \
--engine-version 13.7 \
--master-username postgres \
--master-user-password SecurePassword123! \
--allocated-storage 100 \
--storage-type gp2 \
--storage-encrypted \
--vpc-security-group-ids sg-12345678 \
--db-subnet-group-name django-db-subnet-group \
--backup-retention-period 7 \
--multi-az \
--auto-minor-version-upgrade \
--deletion-protection
RDS Connection in Django
# settings/production.py
import os
# RDS Database configuration
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ.get('RDS_DB_NAME', 'django_app'),
'USER': os.environ.get('RDS_USERNAME', 'postgres'),
'PASSWORD': os.environ.get('RDS_PASSWORD'),
'HOST': os.environ.get('RDS_HOSTNAME'),
'PORT': os.environ.get('RDS_PORT', '5432'),
'OPTIONS': {
'sslmode': 'require',
'connect_timeout': 10,
'options': '-c default_transaction_isolation=read_committed'
},
'CONN_MAX_AGE': 600,
'CONN_HEALTH_CHECKS': True,
}
}
# Read replica configuration
if os.environ.get('RDS_READ_HOSTNAME'):
DATABASES['read'] = {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ.get('RDS_DB_NAME', 'django_app'),
'USER': os.environ.get('RDS_READ_USERNAME', 'postgres'),
'PASSWORD': os.environ.get('RDS_READ_PASSWORD'),
'HOST': os.environ.get('RDS_READ_HOSTNAME'),
'PORT': os.environ.get('RDS_PORT', '5432'),
'OPTIONS': {
'sslmode': 'require',
},
'CONN_MAX_AGE': 600,
}
DATABASE_ROUTERS = ['myproject.routers.DatabaseRouter']
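The DATABASE_ROUTERS setting above points at a router module that isn't shown. A minimal read/write-splitting sketch, assuming the 'read' alias defined above (the myproject.routers path comes from the setting; the exact policy — e.g. falling back to 'default' when the replica is absent — is up to you):

```python
# myproject/routers.py — minimal read/write-splitting router (sketch).
# Reads go to the 'read' replica alias, writes and migrations to 'default'.

class DatabaseRouter:
    def db_for_read(self, model, **hints):
        return 'read'

    def db_for_write(self, model, **hints):
        return 'default'

    def allow_relation(self, obj1, obj2, **hints):
        # Both aliases point at the same logical data set, so relations are fine.
        return True

    def allow_migrate(self, db, app_label, **hints):
        # Run migrations only against the primary; the replica follows via replication.
        return db == 'default'
```

Be aware of replication lag: a read issued immediately after a write may not see it on the replica, so read-your-own-writes paths should pin to 'default'.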
S3 Bucket Configuration
# Create S3 bucket
aws s3 mb s3://django-app-static-files --region us-east-1
# Configure bucket policy
cat > bucket-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::django-app-static-files/static/*"
}
]
}
EOF
aws s3api put-bucket-policy --bucket django-app-static-files --policy file://bucket-policy.json
# Configure CORS
cat > cors-config.json << EOF
{
"CORSRules": [
{
"AllowedOrigins": ["https://yourdomain.com"],
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "HEAD"],
"MaxAgeSeconds": 3000
}
]
}
EOF
aws s3api put-bucket-cors --bucket django-app-static-files --cors-configuration file://cors-config.json
Django S3 Storage Configuration
# storage_backends.py
from storages.backends.s3boto3 import S3Boto3Storage
class StaticStorage(S3Boto3Storage):
    location = 'static'
    default_acl = None  # public access comes from the bucket policy; new buckets block object ACLs
    file_overwrite = True

class MediaStorage(S3Boto3Storage):
    location = 'media'
    default_acl = None
    file_overwrite = False
# settings/production.py
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('S3_BUCKET_NAME')
AWS_S3_REGION_NAME = os.environ.get('AWS_DEFAULT_REGION', 'us-east-1')
AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
# Static files
STATICFILES_STORAGE = 'myproject.storage_backends.StaticStorage'
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/static/'
# Media files
DEFAULT_FILE_STORAGE = 'myproject.storage_backends.MediaStorage'
MEDIA_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/media/'
# S3 settings
AWS_DEFAULT_ACL = None
AWS_S3_OBJECT_PARAMETERS = {
'CacheControl': 'max-age=86400',
}
AWS_S3_FILE_OVERWRITE = False
AWS_QUERYSTRING_AUTH = False
AWS_S3_SIGNATURE_VERSION = 's3v4'
# CloudFront CDN (optional)
if os.environ.get('CLOUDFRONT_DOMAIN'):
AWS_S3_CUSTOM_DOMAIN = os.environ.get('CLOUDFRONT_DOMAIN')
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/static/'
MEDIA_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/media/'
ElastiCache Configuration
# Create cache subnet group
aws elasticache create-cache-subnet-group \
--cache-subnet-group-name django-cache-subnet-group \
--cache-subnet-group-description "Subnet group for Django ElastiCache" \
--subnet-ids subnet-12345678 subnet-87654321
# Create Redis cluster
aws elasticache create-replication-group \
--replication-group-id django-redis-cluster \
--description "Redis cluster for Django" \
--num-cache-clusters 2 \
--cache-node-type cache.t3.micro \
--engine redis \
--engine-version 6.2 \
--cache-parameter-group-name default.redis6.x \
--cache-subnet-group-name django-cache-subnet-group \
--security-group-ids sg-12345678 \
--at-rest-encryption-enabled \
--transit-encryption-enabled \
--auth-token YourAuthTokenHere \
--automatic-failover-enabled
Django Redis Configuration
# settings/production.py
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': f"rediss://:{os.environ.get('REDIS_AUTH_TOKEN')}@{os.environ.get('REDIS_ENDPOINT')}:6379/1",  # rediss:// because transit encryption is enabled on the cluster
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
'CONNECTION_POOL_KWARGS': {
'max_connections': 50,
'retry_on_timeout': True,
'ssl_cert_reqs': None,
},
},
'KEY_PREFIX': 'django',
'VERSION': 1,
'TIMEOUT': 300,
}
}
# Session storage
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'
SESSION_COOKIE_AGE = 86400
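With KEY_PREFIX and VERSION set as above, Django namespaces every cache key before it reaches Redis, which is worth knowing when inspecting keys with redis-cli. The default key function composes prefix, version, and key:

```python
# Mirror of Django's default cache KEY_FUNCTION
# (django.core.cache.backends.base.default_key_func): prefix:version:key.
# With the settings above (KEY_PREFIX='django', VERSION=1), cache.set('user:42', ...)
# lands in Redis under 'django:1:user:42'.

def make_key(key, key_prefix='django', version=1):
    return f'{key_prefix}:{version}:{key}'
```

Bumping VERSION in settings therefore acts as a cheap whole-cache invalidation: old entries simply stop being addressed.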
# cloudformation/django-infrastructure.yml
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Django Application Infrastructure'
Parameters:
Environment:
Type: String
Default: production
AllowedValues: [development, staging, production]
InstanceType:
  Type: String
  Default: t3.medium
  AllowedValues: [t3.small, t3.medium, t3.large]
DatabasePassword:
  Type: String
  NoEcho: true
  Description: Password for the RDS database
Resources:
# VPC and Networking
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 10.0.0.0/16
EnableDnsHostnames: true
EnableDnsSupport: true
Tags:
- Key: Name
Value: !Sub ${Environment}-django-vpc
PublicSubnet1:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [0, !GetAZs '']
CidrBlock: 10.0.1.0/24
MapPublicIpOnLaunch: true
Tags:
- Key: Name
Value: !Sub ${Environment}-public-subnet-1
PublicSubnet2:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [1, !GetAZs '']
CidrBlock: 10.0.2.0/24
MapPublicIpOnLaunch: true
Tags:
- Key: Name
Value: !Sub ${Environment}-public-subnet-2
PrivateSubnet1:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [0, !GetAZs '']
CidrBlock: 10.0.3.0/24
Tags:
- Key: Name
Value: !Sub ${Environment}-private-subnet-1
PrivateSubnet2:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [1, !GetAZs '']
CidrBlock: 10.0.4.0/24
Tags:
- Key: Name
Value: !Sub ${Environment}-private-subnet-2
# Internet Gateway
InternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
- Key: Name
Value: !Sub ${Environment}-igw
InternetGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
InternetGatewayId: !Ref InternetGateway
VpcId: !Ref VPC
# Route Tables
PublicRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Sub ${Environment}-public-routes
DefaultPublicRoute:
Type: AWS::EC2::Route
DependsOn: InternetGatewayAttachment
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
PublicSubnet1RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref PublicRouteTable
SubnetId: !Ref PublicSubnet1
PublicSubnet2RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref PublicRouteTable
SubnetId: !Ref PublicSubnet2
# Security Groups
LoadBalancerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupName: !Sub ${Environment}-alb-sg
GroupDescription: Security group for Application Load Balancer
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 80
ToPort: 80
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 443
ToPort: 443
CidrIp: 0.0.0.0/0
WebServerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupName: !Sub ${Environment}-web-sg
GroupDescription: Security group for web servers
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 8000
ToPort: 8000
SourceSecurityGroupId: !Ref LoadBalancerSecurityGroup
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 10.0.0.0/16
DatabaseSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupName: !Sub ${Environment}-db-sg
GroupDescription: Security group for database
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 5432
ToPort: 5432
SourceSecurityGroupId: !Ref WebServerSecurityGroup
# RDS Database
DBSubnetGroup:
Type: AWS::RDS::DBSubnetGroup
Properties:
DBSubnetGroupDescription: Subnet group for RDS database
SubnetIds:
- !Ref PrivateSubnet1
- !Ref PrivateSubnet2
Tags:
- Key: Name
Value: !Sub ${Environment}-db-subnet-group
Database:
Type: AWS::RDS::DBInstance
Properties:
DBInstanceIdentifier: !Sub ${Environment}-django-db
DBInstanceClass: db.t3.micro
Engine: postgres
EngineVersion: '13.7'
MasterUsername: postgres
MasterUserPassword: !Ref DatabasePassword
AllocatedStorage: 20
StorageType: gp2
StorageEncrypted: true
VPCSecurityGroups:
- !Ref DatabaseSecurityGroup
DBSubnetGroupName: !Ref DBSubnetGroup
BackupRetentionPeriod: 7
MultiAZ: !If [IsProduction, true, false]
DeletionProtection: !If [IsProduction, true, false]
# ElastiCache Redis
CacheSubnetGroup:
Type: AWS::ElastiCache::SubnetGroup
Properties:
Description: Subnet group for ElastiCache
SubnetIds:
- !Ref PrivateSubnet1
- !Ref PrivateSubnet2
CacheCluster:
Type: AWS::ElastiCache::CacheCluster
Properties:
CacheNodeType: cache.t3.micro
Engine: redis
NumCacheNodes: 1
CacheSubnetGroupName: !Ref CacheSubnetGroup
VpcSecurityGroupIds:
- !Ref CacheSecurityGroup
CacheSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Security group for ElastiCache
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 6379
ToPort: 6379
SourceSecurityGroupId: !Ref WebServerSecurityGroup
# Application Load Balancer
LoadBalancer:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: !Sub ${Environment}-django-alb
Scheme: internet-facing
SecurityGroups:
- !Ref LoadBalancerSecurityGroup
Subnets:
- !Ref PublicSubnet1
- !Ref PublicSubnet2
TargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Name: !Sub ${Environment}-django-tg
Port: 8000
Protocol: HTTP
VpcId: !Ref VPC
HealthCheckPath: /health/
HealthCheckProtocol: HTTP
HealthCheckIntervalSeconds: 30
HealthCheckTimeoutSeconds: 5
HealthyThresholdCount: 2
UnhealthyThresholdCount: 3
LoadBalancerListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn: !Ref TargetGroup
LoadBalancerArn: !Ref LoadBalancer
Port: 80
Protocol: HTTP
# Auto Scaling Group
LaunchTemplate:
Type: AWS::EC2::LaunchTemplate
Properties:
LaunchTemplateName: !Sub ${Environment}-django-lt
LaunchTemplateData:
ImageId: ami-0c02fb55956c7d316 # Amazon Linux 2
InstanceType: !Ref InstanceType
SecurityGroupIds:
- !Ref WebServerSecurityGroup
IamInstanceProfile:
Arn: !GetAtt InstanceProfile.Arn
UserData:
Fn::Base64: !Sub |
#!/bin/bash
yum update -y
yum install -y docker
service docker start
usermod -a -G docker ec2-user
# Install Docker Compose
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# Deploy application
mkdir -p /opt/django_app
cd /opt/django_app
# Create environment file
cat > .env << EOF
DJANGO_SETTINGS_MODULE=myproject.settings.production
DATABASE_URL=postgresql://postgres:${DatabasePassword}@${Database.Endpoint.Address}:5432/postgres
REDIS_URL=redis://${CacheCluster.RedisEndpoint.Address}:6379/1
EOF
# Pull and run application
docker run -d --name django-app \
--env-file .env \
-p 8000:8000 \
123456789012.dkr.ecr.us-east-1.amazonaws.com/django-app:latest
AutoScalingGroup:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
AutoScalingGroupName: !Sub ${Environment}-django-asg
VPCZoneIdentifier:
  # NOTE: instances in these private subnets need a NAT gateway (not
  # defined in this template) to reach ECR and pull container images.
  - !Ref PrivateSubnet1
  - !Ref PrivateSubnet2
LaunchTemplate:
LaunchTemplateId: !Ref LaunchTemplate
Version: !GetAtt LaunchTemplate.LatestVersionNumber
MinSize: 1
MaxSize: 4
DesiredCapacity: 2
TargetGroupARNs:
- !Ref TargetGroup
HealthCheckType: ELB
HealthCheckGracePeriod: 300
# IAM Roles
InstanceRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: ec2.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
- arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
InstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Roles:
- !Ref InstanceRole
Conditions:
IsProduction: !Equals [!Ref Environment, production]
Outputs:
LoadBalancerDNS:
Description: DNS name of the load balancer
Value: !GetAtt LoadBalancer.DNSName
Export:
Name: !Sub ${Environment}-LoadBalancerDNS
DatabaseEndpoint:
Description: RDS database endpoint
Value: !GetAtt Database.Endpoint.Address
Export:
Name: !Sub ${Environment}-DatabaseEndpoint
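The template can be launched with the AWS CLI (aws cloudformation deploy) or from Python with boto3. A hedged sketch — the stack name is illustrative, and the parameter helper is split out so it stays testable without AWS credentials:

```python
def stack_parameters(environment, instance_type, db_password):
    """Build the Parameters list that create_stack expects for this template."""
    return [
        {'ParameterKey': 'Environment', 'ParameterValue': environment},
        {'ParameterKey': 'InstanceType', 'ParameterValue': instance_type},
        {'ParameterKey': 'DatabasePassword', 'ParameterValue': db_password},
    ]

def deploy_stack(template_body, db_password, environment='production'):
    import boto3  # imported lazily so the parameter helper works without boto3 installed
    cfn = boto3.client('cloudformation')
    return cfn.create_stack(
        StackName=f'{environment}-django-infrastructure',  # illustrative name
        TemplateBody=template_body,
        Parameters=stack_parameters(environment, 't3.medium', db_password),
        Capabilities=['CAPABILITY_IAM'],  # required because the template creates IAM roles
    )
```

CAPABILITY_IAM must be acknowledged explicitly because the template provisions IAM roles; omitting it makes create_stack fail with an InsufficientCapabilities error.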
Google Cloud Platform offers robust services for Django deployment including App Engine (PaaS), Cloud Run (serverless containers), and Google Kubernetes Engine (GKE). GCP is known for its strong data analytics, machine learning capabilities, and global network infrastructure.
Initial Setup
# Install Google Cloud SDK
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
# Initialize gcloud
gcloud init
# Set default project and region
gcloud config set project your-project-id
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
# Enable required APIs
gcloud services enable appengine.googleapis.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable run.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable sqladmin.googleapis.com
gcloud services enable redis.googleapis.com
# Create service account for deployment
gcloud iam service-accounts create django-deploy \
--display-name="Django Deployment Service Account"
# Grant necessary permissions
gcloud projects add-iam-policy-binding your-project-id \
--member="serviceAccount:django-deploy@your-project-id.iam.gserviceaccount.com" \
--role="roles/appengine.appAdmin"
gcloud projects add-iam-policy-binding your-project-id \
--member="serviceAccount:django-deploy@your-project-id.iam.gserviceaccount.com" \
--role="roles/cloudsql.client"
# Create and download service account key
gcloud iam service-accounts keys create django-deploy-key.json \
--iam-account=django-deploy@your-project-id.iam.gserviceaccount.com
App Engine is Google's fully managed PaaS that automatically handles scaling, load balancing, and infrastructure management.
App Engine Configuration
# app.yaml
runtime: python39
# Environment variables
env_variables:
DJANGO_SETTINGS_MODULE: myproject.settings.production
SECRET_KEY: your-secret-key-here
GCS_BUCKET_NAME: your-gcs-bucket
REDIS_URL: redis://10.0.0.1:6379/1
# Automatic scaling configuration
automatic_scaling:
min_instances: 2
max_instances: 20
target_cpu_utilization: 0.6
target_throughput_utilization: 0.6
max_concurrent_requests: 80
max_idle_instances: 5
min_idle_instances: 1
# Resource allocation (App Engine Flexible environment only; the standard
# python39 runtime ignores this block — use instance_class there instead)
resources:
  cpu: 2
  memory_gb: 4
  disk_size_gb: 10
# URL handlers
handlers:
# Static files served directly by App Engine
- url: /static
static_dir: staticfiles/
secure: always
expiration: "1d"
http_headers:
Cache-Control: "public, max-age=86400"
# Media files (if not using GCS)
- url: /media
static_dir: media/
secure: always
expiration: "1h"
# Health check endpoint (left unauthenticated so automated checks can reach it)
- url: /health
script: auto
secure: always
# Admin interface (restricted)
- url: /admin/.*
script: auto
secure: always
login: admin
# All other URLs
- url: /.*
script: auto
secure: always
# Cloud SQL connection (flexible environment; the standard environment
# exposes the /cloudsql unix socket without this setting)
beta_settings:
cloud_sql_instances: your-project:us-central1:django-db
# VPC connector for private resources
vpc_access_connector:
name: projects/your-project/locations/us-central1/connectors/django-connector
# Error handlers
error_handlers:
- file: error.html
error_code: over_quota
- file: error.html
error_code: dos_api_denial
- file: error.html
error_code: timeout
Django Settings for App Engine
# settings/appengine.py
import os
from .base import *
# App Engine specific settings
DEBUG = False
ALLOWED_HOSTS = ['*'] # App Engine handles routing
# Database configuration for Cloud SQL
if os.getenv('GAE_APPLICATION', None):
# Running on production App Engine
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ['DB_NAME'],
'USER': os.environ['DB_USER'],
'PASSWORD': os.environ['DB_PASSWORD'],
'HOST': f'/cloudsql/{os.environ["CLOUD_SQL_CONNECTION_NAME"]}',
'PORT': '',
'OPTIONS': {
# psycopg2 accepts libpq options; init_command/sql_mode is MySQL-only
# and would fail against PostgreSQL.
'connect_timeout': 10,
},
}
}
else:
# Running locally
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ.get('DB_NAME', 'django_local'),
'USER': os.environ.get('DB_USER', 'postgres'),
'PASSWORD': os.environ.get('DB_PASSWORD', ''),
'HOST': os.environ.get('DB_HOST', 'localhost'),
'PORT': os.environ.get('DB_PORT', '5432'),
}
}
# Google Cloud Storage for static and media files
DEFAULT_FILE_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
STATICFILES_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
GS_BUCKET_NAME = os.environ.get('GCS_BUCKET_NAME')
GS_PROJECT_ID = os.environ.get('GOOGLE_CLOUD_PROJECT')
GS_DEFAULT_ACL = 'publicRead'
# GS_AUTO_CREATE_BUCKET was removed from recent django-storages releases;
# create the bucket ahead of time instead.
GS_FILE_OVERWRITE = False
# Static and media URLs
STATIC_URL = f'https://storage.googleapis.com/{GS_BUCKET_NAME}/static/'
MEDIA_URL = f'https://storage.googleapis.com/{GS_BUCKET_NAME}/media/'
# Cache configuration using Memorystore Redis
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': os.environ.get('REDIS_URL', 'redis://localhost:6379/1'),
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
},
'KEY_PREFIX': 'django',
}
}
# Session configuration
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'
# Security settings
SECURE_SSL_REDIRECT = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
# Logging configuration for Cloud Logging
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler',
},
},
'root': {
'handlers': ['console'],
'level': 'INFO',
},
'loggers': {
'django': {
'handlers': ['console'],
'level': 'INFO',
'propagate': False,
},
},
}
Requirements for App Engine
# requirements.txt
Django==4.2.7
gunicorn==21.2.0
psycopg2-binary==2.9.7
django-storages[google]==1.14.2
django-redis==5.4.0
google-cloud-storage==2.10.0
google-cloud-secret-manager==2.16.4
Pillow==10.0.1
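The requirements above pull in google-cloud-secret-manager, but the settings shown earlier read SECRET_KEY straight from app.yaml. Below is a hedged sketch of resolving it from Secret Manager instead; the helper takes the client as an argument so it can be exercised without GCP credentials, and the secret name django-secret-key is an assumption:

```python
def get_secret(client, project_id, name, version="latest"):
    """Fetch a secret value via a Secret Manager client (sketch)."""
    path = f"projects/{project_id}/secrets/{name}/versions/{version}"
    response = client.access_secret_version(request={"name": path})
    return response.payload.data.decode("utf-8")

# Hypothetical usage in settings/appengine.py:
# from google.cloud import secretmanager
# client = secretmanager.SecretManagerServiceClient()
# SECRET_KEY = get_secret(client, os.environ["GOOGLE_CLOUD_PROJECT"], "django-secret-key")
```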
App Engine Deployment Commands
# Deploy to App Engine
gcloud app deploy
# Deploy with specific version
gcloud app deploy --version=v1 --no-promote
# Deploy and set traffic
gcloud app deploy --version=v2
gcloud app services set-traffic default --splits=v2=100
# View logs
gcloud app logs tail -s default
# Open application
gcloud app browse
# Manage versions
gcloud app versions list
gcloud app versions delete v1
Create Cloud SQL Instance
# Create PostgreSQL instance
gcloud sql instances create django-db \
--database-version=POSTGRES_13 \
--tier=db-f1-micro \
--region=us-central1 \
--root-password=SecurePassword123! \
--backup \
--backup-start-time=03:00 \
--maintenance-window-day=SUN \
--maintenance-window-hour=04 \
--enable-point-in-time-recovery \
--retained-backups-count=7
# Create database
gcloud sql databases create django_app --instance=django-db
# Create user
gcloud sql users create django_user \
--instance=django-db \
--password=UserPassword123!
# Get connection name
gcloud sql instances describe django-db --format="value(connectionName)"
Cloud SQL Proxy for Local Development
# Download Cloud SQL Proxy (v1; newer projects may prefer the v2 "cloud-sql-proxy" binary)
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
# Run proxy
./cloud_sql_proxy -instances=your-project:us-central1:django-db=tcp:5432
Create Redis Instance
# Create Redis instance
gcloud redis instances create django-redis \
--size=1 \
--region=us-central1 \
--redis-version=redis_6_x \
--tier=basic
# Get Redis host
gcloud redis instances describe django-redis \
--region=us-central1 \
--format="value(host)"
Cloud Run provides serverless container deployment with automatic scaling and pay-per-use pricing.
Dockerfile for Cloud Run
# Dockerfile for Cloud Run
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PORT=8080
# Install system dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-client \
build-essential \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
# Set work directory
WORKDIR /app
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy project
COPY . .
# Collect static files
RUN python manage.py collectstatic --noinput
# Create non-root user
RUN adduser --disabled-password --gecos '' appuser
RUN chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 8080
# Run the application
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 myproject.wsgi:application
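The CMD above runs a single gunicorn worker with 8 threads, which fits Cloud Run's model of scaling by adding instances rather than processes. On VM-style hosts, a common sizing rule of thumb (a heuristic, not a requirement) is 2 × CPU + 1 workers:

```python
def recommended_workers(cpu_count: int) -> int:
    """Classic gunicorn sizing heuristic: (2 x CPUs) + 1."""
    return 2 * cpu_count + 1

print(recommended_workers(2))  # → 5 workers for a 2-vCPU instance
```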
Cloud Build Configuration
# cloudbuild.yaml
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/django-app:$COMMIT_SHA', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/django-app:$COMMIT_SHA']
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: gcloud
args:
- 'run'
- 'deploy'
- 'django-app'
- '--image'
- 'gcr.io/$PROJECT_ID/django-app:$COMMIT_SHA'
- '--region'
- 'us-central1'
- '--platform'
- 'managed'
- '--allow-unauthenticated'
- '--set-env-vars'
- 'DJANGO_SETTINGS_MODULE=myproject.settings.production,GCS_BUCKET_NAME=$_GCS_BUCKET'
- '--set-cloudsql-instances'
- '$PROJECT_ID:us-central1:django-db'
- '--memory'
- '2Gi'
- '--cpu'
- '2'
- '--concurrency'
- '80'
- '--max-instances'
- '100'
- '--min-instances'
- '1'
- '--timeout'
- '300'
- '--port'
- '8080'
- '--vpc-connector'
- 'django-connector'
- '--vpc-egress'
- 'private-ranges-only'
# Substitutions for environment-specific values
substitutions:
_GCS_BUCKET: 'django-app-static-files'
# Build options
options:
logging: CLOUD_LOGGING_ONLY
machineType: 'E2_HIGHCPU_8'
images:
- 'gcr.io/$PROJECT_ID/django-app:$COMMIT_SHA'
Cloud Run Service Configuration
# service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: django-app
annotations:
run.googleapis.com/ingress: all
run.googleapis.com/ingress-status: all
spec:
template:
metadata:
annotations:
run.googleapis.com/cloudsql-instances: your-project:us-central1:django-db
run.googleapis.com/vpc-access-connector: django-connector
run.googleapis.com/vpc-access-egress: private-ranges-only
autoscaling.knative.dev/maxScale: "100"
autoscaling.knative.dev/minScale: "1"
run.googleapis.com/cpu-throttling: "false"
spec:
containerConcurrency: 80
timeoutSeconds: 300
containers:
- image: gcr.io/your-project/django-app:latest
ports:
- containerPort: 8080
env:
- name: DJANGO_SETTINGS_MODULE
value: myproject.settings.production
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: django-secrets
key: database-url
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: django-secrets
key: secret-key
resources:
limits:
cpu: "2"
memory: "2Gi"
livenessProbe:
httpGet:
path: /health/
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready/
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
Deploy to Cloud Run
# Build and deploy using Cloud Build
gcloud builds submit --config cloudbuild.yaml
# Direct deployment
gcloud run deploy django-app \
--image gcr.io/your-project/django-app:latest \
--platform managed \
--region us-central1 \
--allow-unauthenticated \
--memory 2Gi \
--cpu 2 \
--concurrency 80 \
--max-instances 100 \
--min-instances 1 \
--set-cloudsql-instances your-project:us-central1:django-db \
--set-env-vars DJANGO_SETTINGS_MODULE=myproject.settings.production
# Update traffic allocation
gcloud run services update-traffic django-app \
--to-latest \
--region us-central1
# View service details
gcloud run services describe django-app --region us-central1
Django Settings for Cloud Run
# settings/cloudrun.py
import os
from .base import *
# Cloud Run specific settings
DEBUG = False
ALLOWED_HOSTS = ['*'] # Cloud Run handles routing
# Use Cloud SQL via Unix socket
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ['DB_NAME'],
'USER': os.environ['DB_USER'],
'PASSWORD': os.environ['DB_PASSWORD'],
'HOST': f'/cloudsql/{os.environ["CLOUD_SQL_CONNECTION_NAME"]}',
'PORT': '',
'CONN_MAX_AGE': 0, # Cloud Run is stateless
}
}
# Google Cloud Storage
DEFAULT_FILE_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
STATICFILES_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
GS_BUCKET_NAME = os.environ.get('GCS_BUCKET_NAME')
GS_DEFAULT_ACL = 'publicRead'
# Redis cache via Memorystore
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': f"redis://{os.environ.get('REDIS_HOST')}:6379/1",
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
'CONNECTION_POOL_KWARGS': {
'max_connections': 10, # Limit for Cloud Run
},
},
}
}
# Session storage
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'
# Security
SECURE_SSL_REDIRECT = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
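SECURE_PROXY_SSL_HEADER appears throughout these settings files because every platform here terminates TLS at a proxy, so the app itself only sees plain HTTP. Django then trusts the X-Forwarded-Proto header to decide whether a request was secure; a simplified version of that check:

```python
# Simplified version of the check Django performs when
# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') is set.
def is_secure(meta, header=("HTTP_X_FORWARDED_PROTO", "https")):
    name, required_value = header
    return meta.get(name) == required_value

print(is_secure({"HTTP_X_FORWARDED_PROTO": "https"}))  # → True
print(is_secure({"HTTP_X_FORWARDED_PROTO": "http"}))   # → False
```

Only enable this setting when a trusted proxy always sets the header; otherwise clients can spoof it to bypass the SSL redirect.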
Google Kubernetes Engine (GKE) runs Django in containers you orchestrate yourself, trading Cloud Run's simplicity for full Kubernetes control. The manifests below define a Deployment, a Service, a managed TLS certificate, and an Ingress.
GKE Deployment Manifests
# gke/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: django-app
labels:
app: django-app
spec:
replicas: 3
selector:
matchLabels:
app: django-app
template:
metadata:
labels:
app: django-app
spec:
containers:
- name: django
image: gcr.io/PROJECT_ID/django-app:latest
ports:
- containerPort: 8000
env:
- name: DJANGO_SETTINGS_MODULE
value: "myproject.settings.production"
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: django-secrets
key: database-url
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: django-secrets
key: secret-key
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health/
port: 8000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready/
port: 8000
initialDelaySeconds: 5
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: django-service
spec:
selector:
app: django-app
ports:
- protocol: TCP
port: 80
targetPort: 8000
type: LoadBalancer
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: django-ssl-cert
spec:
domains:
- yourdomain.com
- www.yourdomain.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: django-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: django-ip
networking.gke.io/managed-certificates: django-ssl-cert
kubernetes.io/ingress.class: "gce"
spec:
rules:
- host: yourdomain.com
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: django-service
port:
number: 80
Terraform lets you provision the GCP resources above (VPC, Cloud SQL, Memorystore Redis, GKE) as versioned code.
Terraform Configuration for GCP
# terraform/gcp/main.tf
provider "google" {
project = var.project_id
region = var.region
}
# Enable required APIs
resource "google_project_service" "required_apis" {
for_each = toset([
"compute.googleapis.com",
"container.googleapis.com",
"sql-component.googleapis.com",
"redis.googleapis.com",
"run.googleapis.com"
])
service = each.value
disable_on_destroy = false
}
# VPC Network
resource "google_compute_network" "vpc" {
name = "${var.environment}-vpc"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "subnet" {
name = "${var.environment}-subnet"
ip_cidr_range = "10.0.0.0/24"
region = var.region
network = google_compute_network.vpc.id
}
# Cloud SQL Database
resource "google_sql_database_instance" "postgres" {
name = "${var.environment}-django-db"
database_version = "POSTGRES_13"
region = var.region
settings {
tier = "db-f1-micro"
backup_configuration {
enabled = true
start_time = "03:00"
}
ip_configuration {
ipv4_enabled = false
private_network = google_compute_network.vpc.id
}
database_flags {
name = "log_statement"
value = "all"
}
}
depends_on = [google_service_networking_connection.private_vpc_connection]
}
resource "google_sql_database" "database" {
name = "django_app"
instance = google_sql_database_instance.postgres.name
}
resource "google_sql_user" "user" {
name = "django_user"
instance = google_sql_database_instance.postgres.name
password = var.db_password
}
# Redis Instance
resource "google_redis_instance" "cache" {
name = "${var.environment}-redis"
tier = "BASIC"
memory_size_gb = 1
region = var.region
authorized_network = google_compute_network.vpc.id
}
# GKE Cluster
resource "google_container_cluster" "primary" {
name = "${var.environment}-gke"
location = var.region
remove_default_node_pool = true
initial_node_count = 1
network = google_compute_network.vpc.name
subnetwork = google_compute_subnetwork.subnet.name
workload_identity_config {
workload_pool = "${var.project_id}.svc.id.goog"
}
}
resource "google_container_node_pool" "primary_nodes" {
name = "${var.environment}-node-pool"
location = var.region
cluster = google_container_cluster.primary.name
node_count = 1
node_config {
preemptible = true
machine_type = "e2-medium"
service_account = google_service_account.gke_service_account.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
# Service Account
resource "google_service_account" "gke_service_account" {
account_id = "${var.environment}-gke-sa"
display_name = "GKE Service Account"
}
# Private Service Connection
resource "google_compute_global_address" "private_ip_address" {
name = "${var.environment}-private-ip"
purpose = "VPC_PEERING"
address_type = "INTERNAL"
prefix_length = 16
network = google_compute_network.vpc.id
}
resource "google_service_networking_connection" "private_vpc_connection" {
network = google_compute_network.vpc.id
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}
# Variables
variable "project_id" {
description = "GCP Project ID"
type = string
}
variable "region" {
description = "GCP Region"
type = string
default = "us-central1"
}
variable "environment" {
description = "Environment name"
type = string
default = "production"
}
variable "db_password" {
description = "Database password"
type = string
sensitive = true
}
# Outputs
output "cluster_endpoint" {
value = google_container_cluster.primary.endpoint
}
output "database_connection_name" {
value = google_sql_database_instance.postgres.connection_name
}
output "redis_host" {
value = google_redis_instance.cache.host
}
Microsoft Azure provides comprehensive cloud services for Django deployment, including App Service (PaaS), Container Instances, Azure Kubernetes Service (AKS), and Azure Functions. Azure integrates well with Microsoft's enterprise ecosystem and offers strong hybrid-cloud capabilities.
Install Azure CLI
# Install Azure CLI (Linux)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# Install Azure CLI (macOS)
brew update && brew install azure-cli
# Login to Azure
az login
# Set default subscription
az account set --subscription "Your Subscription Name"
# Create resource group
az group create --name django-app-rg --location eastus
# List available locations
az account list-locations --output table
Azure App Service is a fully managed PaaS for building, deploying, and scaling web apps.
Create App Service Plan and Web App
# Create App Service Plan
az appservice plan create \
--name django-app-plan \
--resource-group django-app-rg \
--sku B1 \
--is-linux
# Create Web App
az webapp create \
--resource-group django-app-rg \
--plan django-app-plan \
--name django-app-unique-name \
--runtime "PYTHON|3.11" \
--deployment-local-git
# Configure app settings
az webapp config appsettings set \
--resource-group django-app-rg \
--name django-app-unique-name \
--settings \
DJANGO_SETTINGS_MODULE=myproject.settings.production \
SECRET_KEY="your-secret-key" \
DEBUG=False \
WEBSITES_PORT=8000
# Configure startup command
az webapp config set \
--resource-group django-app-rg \
--name django-app-unique-name \
--startup-file "gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application"
Django Settings for Azure App Service
# settings/azure.py
import os
from .base import *
# Azure App Service specific settings
DEBUG = False
ALLOWED_HOSTS = ['.azurewebsites.net']
# Avoid appending an empty string when WEBSITE_HOSTNAME is unset (e.g. locally).
if os.environ.get('WEBSITE_HOSTNAME'):
    ALLOWED_HOSTS.append(os.environ['WEBSITE_HOSTNAME'])
# Database configuration for Azure Database for PostgreSQL
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ.get('AZURE_POSTGRESQL_NAME'),
'USER': os.environ.get('AZURE_POSTGRESQL_USER'),
'PASSWORD': os.environ.get('AZURE_POSTGRESQL_PASSWORD'),
'HOST': os.environ.get('AZURE_POSTGRESQL_HOST'),
'PORT': '5432',
'OPTIONS': {
'sslmode': 'require',
},
}
}
# Azure Blob Storage for static and media files
AZURE_ACCOUNT_NAME = os.environ.get('AZURE_STORAGE_ACCOUNT_NAME')
AZURE_ACCOUNT_KEY = os.environ.get('AZURE_STORAGE_ACCOUNT_KEY')
AZURE_CONTAINER = os.environ.get('AZURE_STORAGE_CONTAINER_NAME', 'static')
if AZURE_ACCOUNT_NAME:
STATICFILES_STORAGE = 'storages.backends.azure_storage.AzureStorage'
DEFAULT_FILE_STORAGE = 'storages.backends.azure_storage.AzureStorage'
AZURE_CUSTOM_DOMAIN = f'{AZURE_ACCOUNT_NAME}.blob.core.windows.net'
STATIC_URL = f'https://{AZURE_CUSTOM_DOMAIN}/{AZURE_CONTAINER}/static/'
MEDIA_URL = f'https://{AZURE_CUSTOM_DOMAIN}/{AZURE_CONTAINER}/media/'
# Azure Cache for Redis
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': os.environ.get('AZURE_REDIS_URL'),
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
'CONNECTION_POOL_KWARGS': {
'ssl_cert_reqs': None,
},
},
}
}
# Security settings
SECURE_SSL_REDIRECT = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
# Session configuration
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'
# Logging to Azure Application Insights
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'azure': {
'level': 'INFO',
'class': 'opencensus.ext.azure.log_exporter.AzureLogHandler',
'connection_string': os.environ.get('APPLICATIONINSIGHTS_CONNECTION_STRING'),
},
'console': {
'level': 'INFO',
'class': 'logging.StreamHandler',
},
},
'loggers': {
'django': {
'handlers': ['azure', 'console'],
'level': 'INFO',
'propagate': True,
},
},
}
Azure DevOps Pipeline
# azure-pipelines.yml
trigger:
- main
variables:
azureServiceConnectionId: 'your-service-connection'
webAppName: 'django-app-unique-name'
environmentName: 'production'
projectRoot: $(System.DefaultWorkingDirectory)
pythonVersion: '3.11'
stages:
- stage: Build
displayName: Build stage
jobs:
- job: BuildJob
pool:
vmImage: 'ubuntu-latest'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(pythonVersion)'
displayName: 'Use Python $(pythonVersion)'
- script: |
python -m venv antenv
source antenv/bin/activate
python -m pip install --upgrade pip
pip install -r requirements.txt
workingDirectory: $(projectRoot)
displayName: "Install requirements"
- script: |
source antenv/bin/activate
python manage.py collectstatic --noinput
workingDirectory: $(projectRoot)
displayName: 'Collect static files'
- script: |
source antenv/bin/activate
python manage.py test
workingDirectory: $(projectRoot)
displayName: 'Run tests'
- task: ArchiveFiles@2
displayName: 'Archive files'
inputs:
rootFolderOrFile: '$(projectRoot)'
includeRootFolder: false
archiveType: zip
archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
replaceExistingArchive: true
- upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
displayName: 'Upload package'
artifact: drop
- stage: Deploy
displayName: 'Deploy Web App'
dependsOn: Build
condition: succeeded()
jobs:
- deployment: DeploymentJob
pool:
vmImage: 'ubuntu-latest'
environment: $(environmentName)
strategy:
runOnce:
deploy:
steps:
- task: AzureWebApp@1
displayName: 'Deploy Azure Web App'
inputs:
azureSubscription: $(azureServiceConnectionId)
appType: 'webAppLinux'
appName: $(webAppName)
package: $(Pipeline.Workspace)/drop/$(Build.BuildId).zip
startUpCommand: 'gunicorn --bind 0.0.0.0:8000 --timeout 600 myproject.wsgi:application'
- task: AzureCLI@2
displayName: 'Run Django migrations'
inputs:
azureSubscription: $(azureServiceConnectionId)
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
# `az webapp ssh` is interactive and has no --command flag; run migrations
# non-interactively by folding them into the startup command instead.
az webapp config set --resource-group django-app-rg --name $(webAppName) \
--startup-file "python manage.py migrate --noinput && gunicorn --bind 0.0.0.0:8000 --timeout 600 myproject.wsgi:application"
Create PostgreSQL Database
# Create PostgreSQL server
# (legacy "single server" commands; new deployments should prefer
# `az postgres flexible-server create`)
az postgres server create \
--resource-group django-app-rg \
--name django-postgres-server \
--location eastus \
--admin-user djangoadmin \
--admin-password SecurePassword123! \
--sku-name GP_Gen5_2 \
--version 13
# Create database
az postgres db create \
--resource-group django-app-rg \
--server-name django-postgres-server \
--name django_app
# Configure firewall rule for Azure services
az postgres server firewall-rule create \
--resource-group django-app-rg \
--server django-postgres-server \
--name AllowAzureServices \
--start-ip-address 0.0.0.0 \
--end-ip-address 0.0.0.0
# Get connection string
az postgres server show-connection-string \
--server-name django-postgres-server \
--database-name django_app \
--admin-user djangoadmin \
--admin-password SecurePassword123!
Create Storage Account
# Create storage account
az storage account create \
--name djangoappstorageaccount \
--resource-group django-app-rg \
--location eastus \
--sku Standard_LRS
# Create container for static files
az storage container create \
--name static \
--account-name djangoappstorageaccount \
--public-access blob
# Get storage account key
az storage account keys list \
--resource-group django-app-rg \
--account-name djangoappstorageaccount
Create Redis Cache
# Create Redis cache
az redis create \
--location eastus \
--name django-redis-cache \
--resource-group django-app-rg \
--sku Basic \
--vm-size c0
# Get Redis connection string
az redis list-keys \
--resource-group django-app-rg \
--name django-redis-cache
Azure Container Instances lets you run the Django container without managing servers. The ARM template below deploys a single container group.
Azure Container Instances (ARM Template)
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"containerGroupName": {
"type": "string",
"defaultValue": "django-app-group"
},
"containerName": {
"type": "string",
"defaultValue": "django-app"
},
"image": {
"type": "string",
"defaultValue": "your-registry.azurecr.io/django-app:latest"
},
"databaseUrl": {
"type": "securestring"
},
"secretKey": {
"type": "securestring"
}
},
"resources": [
{
"type": "Microsoft.ContainerInstance/containerGroups",
"apiVersion": "2021-03-01",
"name": "[parameters('containerGroupName')]",
"location": "[resourceGroup().location]",
"properties": {
"containers": [
{
"name": "[parameters('containerName')]",
"properties": {
"image": "[parameters('image')]",
"ports": [
{
"port": 8000,
"protocol": "TCP"
}
],
"environmentVariables": [
{
"name": "DJANGO_SETTINGS_MODULE",
"value": "myproject.settings.production"
},
{
"name": "DATABASE_URL",
"secureValue": "[parameters('databaseUrl')]"
},
{
"name": "SECRET_KEY",
"secureValue": "[parameters('secretKey')]"
}
],
"resources": {
"requests": {
"cpu": 1,
"memoryInGB": 2
}
}
}
}
],
"osType": "Linux",
"ipAddress": {
"type": "Public",
"ports": [
{
"port": 8000,
"protocol": "TCP"
}
]
},
"restartPolicy": "Always"
}
}
],
"outputs": {
"containerIPv4Address": {
"type": "string",
"value": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups/', parameters('containerGroupName'))).ipAddress.ip]"
}
}
}
PaaS platforms abstract away infrastructure management, allowing developers to focus on application code. These platforms handle scaling, load balancing, and maintenance automatically.
Heroku is one of the most popular PaaS platforms, known for its simplicity and developer-friendly approach.
Heroku Configuration Files
# Procfile
web: gunicorn myproject.wsgi:application --log-file - --log-level info
worker: celery -A myproject worker -l info --concurrency=4
beat: celery -A myproject beat -l info
release: python manage.py migrate
# runtime.txt
python-3.11.6
# requirements.txt additions for Heroku
gunicorn==21.2.0
dj-database-url==2.1.0
whitenoise[brotli]==6.6.0
psycopg2-binary==2.9.7
django-heroku==0.3.1  # unmaintained; optional legacy helper
django-storages[boto3]==1.14.2
celery[redis]==5.3.4
Heroku-Specific Settings
# settings/heroku.py
import os
import dj_database_url
import django_heroku
from .base import *
# Heroku-specific settings
DEBUG = False
ALLOWED_HOSTS = ['.herokuapp.com']
# Database configuration from DATABASE_URL
DATABASES = {
'default': dj_database_url.config(
default=os.environ.get('DATABASE_URL'),
conn_max_age=600,
conn_health_checks=True,
ssl_require=True,
)
}
# Static files with WhiteNoise
MIDDLEWARE.insert(1, 'whitenoise.middleware.WhiteNoiseMiddleware')
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
WHITENOISE_USE_FINDERS = True
# Media files on S3 (Heroku filesystem is ephemeral)
if os.environ.get('AWS_ACCESS_KEY_ID'):
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')
AWS_S3_REGION_NAME = os.environ.get('AWS_S3_REGION_NAME', 'us-east-1')
AWS_DEFAULT_ACL = 'public-read'
AWS_S3_FILE_OVERWRITE = False
# Redis cache configuration
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': os.environ.get('REDIS_URL'),
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
'CONNECTION_POOL_KWARGS': {
'ssl_cert_reqs': None,
},
},
}
}
# Session configuration
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'
# Security settings
SECURE_SSL_REDIRECT = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
# Email configuration
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.mailgun.org'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = os.environ.get('MAILGUN_SMTP_LOGIN')
EMAIL_HOST_PASSWORD = os.environ.get('MAILGUN_SMTP_PASSWORD')
# Logging configuration
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler',
},
},
'root': {
'handlers': ['console'],
'level': 'INFO',
},
'loggers': {
'django': {
'handlers': ['console'],
'level': 'INFO',
'propagate': False,
},
},
}
# Apply Heroku-specific configurations
django_heroku.settings(locals(), staticfiles=False)
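dj_database_url.config() above turns Heroku's DATABASE_URL into a Django DATABASES dict. A simplified stdlib-only sketch of that transformation (the real library also handles query-string options, other engines, and more):

```python
from urllib.parse import urlparse

def parse_database_url(url):
    """Minimal dj-database-url-style parser for postgres:// URLs (sketch)."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username or "",
        "PASSWORD": parts.password or "",
        "HOST": parts.hostname or "",
        "PORT": str(parts.port or ""),
    }

config = parse_database_url("postgres://django_user:s3cret@db.example.com:5432/django_app")
print(config["NAME"], config["HOST"])  # → django_app db.example.com
```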
Heroku Deployment Commands
# Install Heroku CLI
curl https://cli-assets.heroku.com/install.sh | sh
# Login to Heroku
heroku login
# Create Heroku app
heroku create django-app-unique-name
# Add buildpack
heroku buildpacks:set heroku/python
# Set environment variables
heroku config:set DJANGO_SETTINGS_MODULE=myproject.settings.heroku
heroku config:set SECRET_KEY=your-secret-key-here
heroku config:set DEBUG=False
# Add PostgreSQL addon
heroku addons:create heroku-postgresql:mini
# Add Redis addon
heroku addons:create heroku-redis:mini
# Add Mailgun addon
heroku addons:create mailgun:starter
# Deploy application
git push heroku main
# Run migrations
heroku run python manage.py migrate
# Create superuser
heroku run python manage.py createsuperuser
# Scale dynos
heroku ps:scale web=2 worker=1
# View logs
heroku logs --tail
# Open application
heroku open
Heroku Review Apps Configuration
{
"environments": {
"review": {
"addons": [
"heroku-postgresql:mini",
"heroku-redis:mini"
],
"buildpacks": [
{
"url": "heroku/python"
}
],
"env": {
"DJANGO_SETTINGS_MODULE": "myproject.settings.heroku",
"DEBUG": "False"
},
"formation": {
"web": {
"quantity": 1,
"size": "basic"
}
},
"scripts": {
"postdeploy": "python manage.py migrate && python manage.py loaddata fixtures/sample_data.json"
}
}
}
}
Render is a modern cloud platform that offers simple deployment with automatic SSL, global CDN, and built-in CI/CD.
Render Configuration
# render.yaml
services:
- type: web
name: django-app
env: python
plan: starter
buildCommand: "./build.sh"
startCommand: "gunicorn myproject.wsgi:application"
envVars:
- key: DJANGO_SETTINGS_MODULE
value: myproject.settings.production
- key: SECRET_KEY
generateValue: true
- key: DATABASE_URL
fromDatabase:
name: django-db
property: connectionString
- key: REDIS_URL
fromService:
type: redis
name: django-redis
property: connectionString
- type: redis
name: django-redis
plan: starter
databases:
- name: django-db
plan: starter
Build Script for Render
#!/usr/bin/env bash
# build.sh
set -o errexit
pip install -r requirements.txt
python manage.py collectstatic --clear --no-input
python manage.py migrate
Railway provides a developer-focused platform with automatic deployments from Git repositories.
Railway Configuration
# railway.toml
[build]
builder = "NIXPACKS"
buildCommand = "python -m pip install -r requirements.txt && python manage.py collectstatic --noinput"
[deploy]
healthcheckPath = "/health/"
healthcheckTimeout = 300
restartPolicyType = "ON_FAILURE"
restartPolicyMaxRetries = 10
startCommand = "gunicorn myproject.wsgi:application --bind 0.0.0.0:$PORT"
[environments.production]
variables = {
DJANGO_SETTINGS_MODULE = "myproject.settings.production",
DEBUG = "False"
}
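The [environments.production] table sets DEBUG = "False", but environment variables reach Django as strings, and bool("False") is True. Settings files should parse booleans explicitly; a small sketch:

```python
import os

def env_bool(name, default=False):
    """Interpret common truthy strings; everything else is False."""
    return os.environ.get(name, str(default)).strip().lower() in {"1", "true", "yes", "on"}

os.environ["DEBUG"] = "False"
print(env_bool("DEBUG"))          # → False
print(bool(os.environ["DEBUG"]))  # → True (the naive check is wrong)
```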
Railway Deployment Process
# Install Railway CLI
npm install -g @railway/cli
# Login to Railway
railway login
# Initialize project
railway init
# Link to existing project
railway link
# Deploy
railway up
# Add environment variables
railway variables set DJANGO_SETTINGS_MODULE=myproject.settings.production
railway variables set SECRET_KEY=your-secret-key
# Add PostgreSQL database
railway add postgresql
# Add Redis
railway add redis
# View logs
railway logs
# Open application
railway open
# Dockerfile for Railway
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
postgresql-client \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY . .
# Collect static files
RUN python manage.py collectstatic --noinput
# Create non-root user
RUN adduser --disabled-password --gecos '' appuser
RUN chown -R appuser:appuser /app
USER appuser
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi:application"]
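The `CMD` above starts Gunicorn with its default of a single worker. The Gunicorn documentation suggests roughly (2 × CPU cores) + 1 workers; a small sketch of that calculation, which you could wire into the start command via an entrypoint script:

```python
# Sketch: the (2 x cores) + 1 worker heuristic from the Gunicorn docs.
# Pass the result as --workers; the Dockerfile above currently relies on
# Gunicorn's default of one worker.
import os

def suggested_workers(cores: int = 0) -> int:
    """Return a worker count; cores <= 0 means auto-detect."""
    if cores <= 0:
        cores = os.cpu_count() or 1
    return 2 * cores + 1
```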
Fly.io runs applications close to users worldwide with automatic scaling and edge deployment.
Fly.io Configuration
# fly.toml
app = "django-app"
primary_region = "ord"
[build]
builder = "paketobuildpacks/builder:base"
[env]
DJANGO_SETTINGS_MODULE = "myproject.settings.production"
PORT = "8000"
[http_service]
internal_port = 8000
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 1
# Note: [http_service] above is the newer, simpler configuration; the
# [[services]] blocks below are the older equivalent. A typical app needs
# only one of the two styles.
[[services]]
http_checks = []
internal_port = 8000
processes = ["app"]
protocol = "tcp"
script_checks = []
[services.concurrency]
hard_limit = 25
soft_limit = 20
type = "connections"
[[services.ports]]
force_https = true
handlers = ["http"]
port = 80
[[services.ports]]
handlers = ["tls", "http"]
port = 443
[[services.tcp_checks]]
grace_period = "1s"
interval = "15s"
restart_limit = 0
timeout = "2s"
[[statics]]
guest_path = "/app/staticfiles"
url_prefix = "/static/"
Deploy to Fly.io
# Install flyctl
curl -L https://fly.io/install.sh | sh
# Login to Fly.io
flyctl auth login
# Launch application
flyctl launch
# Set secrets
flyctl secrets set SECRET_KEY=your-secret-key
flyctl secrets set DATABASE_URL=postgresql://...
# Deploy
flyctl deploy
# Scale application
flyctl scale count 2
# View logs
flyctl logs
# SSH into machine
flyctl ssh console
DigitalOcean App Platform provides a simple PaaS solution with predictable pricing.
# .do/app.yaml
name: django-app
region: nyc1
services:
- name: web
source_dir: /
github:
repo: your-username/your-repo
branch: main
deploy_on_push: true
build_command: pip install -r requirements.txt && python manage.py collectstatic --noinput
run_command: gunicorn --worker-tmp-dir /dev/shm --workers 2 --bind 0.0.0.0:8080 myproject.wsgi:application
environment_slug: python
instance_count: 2
instance_size_slug: basic-xxs
http_port: 8080
health_check:
http_path: /health/
initial_delay_seconds: 30
period_seconds: 10
timeout_seconds: 5
success_threshold: 1
failure_threshold: 3
envs:
- key: DJANGO_SETTINGS_MODULE
value: myproject.settings.production
- key: SECRET_KEY
value: ${SECRET_KEY}
type: SECRET
- key: DATABASE_URL
value: ${django-db.DATABASE_URL}
- key: REDIS_URL
value: ${redis.DATABASE_URL}
- key: DEBUG
value: "False"
- key: ALLOWED_HOSTS
value: "${APP_DOMAIN}"
# Celery worker and beat have no HTTP port, so they belong under workers, not services
workers:
  - name: worker
source_dir: /
github:
repo: your-username/your-repo
branch: main
deploy_on_push: true
build_command: pip install -r requirements.txt
run_command: celery -A myproject worker -l info --concurrency=2
environment_slug: python
instance_count: 1
instance_size_slug: basic-xxs
envs:
- key: DJANGO_SETTINGS_MODULE
value: myproject.settings.production
- key: SECRET_KEY
value: ${SECRET_KEY}
type: SECRET
- key: DATABASE_URL
value: ${django-db.DATABASE_URL}
- key: REDIS_URL
value: ${redis.DATABASE_URL}
- name: scheduler
source_dir: /
github:
repo: your-username/your-repo
branch: main
deploy_on_push: true
build_command: pip install -r requirements.txt
run_command: celery -A myproject beat -l info
environment_slug: python
instance_count: 1
instance_size_slug: basic-xxs
envs:
- key: DJANGO_SETTINGS_MODULE
value: myproject.settings.production
- key: SECRET_KEY
value: ${SECRET_KEY}
type: SECRET
- key: DATABASE_URL
value: ${django-db.DATABASE_URL}
- key: REDIS_URL
value: ${redis.DATABASE_URL}
databases:
- name: django-db
engine: PG
version: "13"
size: db-s-dev-database
num_nodes: 1
- name: redis
engine: REDIS
version: "6"
size: db-s-dev-database
static_sites:
  - name: static-files
    source_dir: /
    github:
      repo: your-username/your-repo
      branch: main
    build_command: python manage.py collectstatic --noinput
    output_dir: staticfiles
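The spec above injects `ALLOWED_HOSTS` as a single env var (bound to `${APP_DOMAIN}`), but Django expects a list, so the settings module must split it. A minimal stdlib sketch; the `localhost` fallback is an assumption:

```python
# Sketch: turning the comma-separated ALLOWED_HOSTS value injected by the
# platform into the list Django expects in settings.py.
import os

def hosts_from_env(default: str = 'localhost') -> list:
    raw = os.environ.get('ALLOWED_HOSTS', default)
    return [h.strip() for h in raw.split(',') if h.strip()]
```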
Environment Variables Strategy
# settings/environment.py
import os
from typing import Any, Dict, List

class EnvironmentConfig:
    """Centralized environment configuration management"""

    def __init__(self):
        self.required_vars = [
            'SECRET_KEY',
            'DATABASE_URL',
            'DJANGO_SETTINGS_MODULE',
        ]
        self.optional_vars = {
            'DEBUG': 'False',
            'ALLOWED_HOSTS': '',
            'REDIS_URL': '',
            'EMAIL_HOST': '',
            'SENTRY_DSN': '',
        }

    def validate_environment(self) -> List[str]:
        """Validate required environment variables"""
        missing_vars = []
        for var in self.required_vars:
            if not os.environ.get(var):
                missing_vars.append(var)
        return missing_vars

    def get_config(self) -> Dict[str, Any]:
        """Get configuration from environment"""
        config = {}
        # Required variables
        for var in self.required_vars:
            config[var] = os.environ.get(var)
        # Optional variables with defaults
        for var, default in self.optional_vars.items():
            config[var] = os.environ.get(var, default)
        return config

    def get_boolean(self, key: str, default: bool = False) -> bool:
        """Get boolean value from environment"""
        value = os.environ.get(key, str(default)).lower()
        return value in ('true', '1', 'yes', 'on')

    def get_list(self, key: str, separator: str = ',') -> List[str]:
        """Get list value from environment"""
        value = os.environ.get(key, '')
        return [item.strip() for item in value.split(separator) if item.strip()]

# Usage in settings
env_config = EnvironmentConfig()
missing_vars = env_config.validate_environment()
if missing_vars:
    raise ValueError(f"Missing required environment variables: {missing_vars}")
config = env_config.get_config()
Secrets Management
# utils/secrets.py
import os

class SecretsManager:
    """Unified secrets management across cloud providers.

    Cloud SDK imports are deferred into the provider-specific methods so the
    module loads even when only one SDK (or none) is installed.
    """

    def __init__(self, provider: str = 'env'):
        self.provider = provider
        self._client = None

    def get_secret(self, secret_name: str) -> str:
        """Get secret value based on provider"""
        if self.provider == 'aws':
            return self._get_aws_secret(secret_name)
        elif self.provider == 'gcp':
            return self._get_gcp_secret(secret_name)
        elif self.provider == 'azure':
            return self._get_azure_secret(secret_name)
        else:
            return os.environ.get(secret_name)

    def _get_aws_secret(self, secret_name: str) -> str:
        """Get secret from AWS Secrets Manager"""
        import boto3
        if not self._client:
            self._client = boto3.client('secretsmanager')
        try:
            response = self._client.get_secret_value(SecretId=secret_name)
            return response['SecretString']
        except Exception as e:
            raise ValueError(f"Failed to get AWS secret {secret_name}: {e}")

    def _get_gcp_secret(self, secret_name: str) -> str:
        """Get secret from Google Secret Manager"""
        from google.cloud import secretmanager
        if not self._client:
            self._client = secretmanager.SecretManagerServiceClient()
        try:
            project_id = os.environ.get('GOOGLE_CLOUD_PROJECT')
            name = f"projects/{project_id}/secrets/{secret_name}/versions/latest"
            response = self._client.access_secret_version(request={"name": name})
            return response.payload.data.decode("UTF-8")
        except Exception as e:
            raise ValueError(f"Failed to get GCP secret {secret_name}: {e}")

    def _get_azure_secret(self, secret_name: str) -> str:
        """Get secret from Azure Key Vault"""
        from azure.identity import DefaultAzureCredential
        from azure.keyvault.secrets import SecretClient
        if not self._client:
            vault_url = os.environ.get('AZURE_KEY_VAULT_URL')
            credential = DefaultAzureCredential()
            self._client = SecretClient(vault_url=vault_url, credential=credential)
        try:
            secret = self._client.get_secret(secret_name)
            return secret.value
        except Exception as e:
            raise ValueError(f"Failed to get Azure secret {secret_name}: {e}")

# Usage in settings
secrets_manager = SecretsManager(provider=os.environ.get('SECRETS_PROVIDER', 'env'))
SECRET_KEY = secrets_manager.get_secret('SECRET_KEY')
DATABASE_PASSWORD = secrets_manager.get_secret('DATABASE_PASSWORD')
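Every `get_secret` call hits the provider's API, and settings modules are imported once per worker process. A small memoization layer keeps that to one network call per secret; the env-backed stand-in here is purely for illustration:

```python
# Sketch: memoizing secret lookups so each worker process hits the secrets
# provider at most once per secret name. EnvSecretsManager is a stand-in
# for the SecretsManager class above with provider='env'.
import os
from functools import lru_cache

class EnvSecretsManager:
    """Illustrative stand-in: reads secrets from environment variables."""
    def get_secret(self, name: str) -> str:
        return os.environ.get(name, '')

_manager = EnvSecretsManager()

@lru_cache(maxsize=128)
def cached_secret(name: str) -> str:
    return _manager.get_secret(name)
```

Note the trade-off: cached values survive until the process restarts, so secret rotation requires a redeploy or a cache bust.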
Database Connection Optimization
# settings/database_optimization.py
# Extends the base settings module; assumes DATABASES['default'] is already defined.
import os

# Connection pooling configuration.
# CONN_MAX_AGE and CONN_HEALTH_CHECKS are native Django settings; MAX_CONNS
# and MIN_CONNS only take effect with a pooling backend such as
# django-db-connection-pool.
DATABASE_POOL_CONFIG = {
    'MAX_CONNS': int(os.environ.get('DB_MAX_CONNECTIONS', '20')),
    'MIN_CONNS': int(os.environ.get('DB_MIN_CONNECTIONS', '5')),
    'CONN_MAX_AGE': int(os.environ.get('DB_CONN_MAX_AGE', '600')),
    'CONN_HEALTH_CHECKS': True,
}

# Read replica configuration
if os.environ.get('DATABASE_READ_URL'):
    DATABASES['read'] = {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_READ_NAME'),
        'USER': os.environ.get('DB_READ_USER'),
        'PASSWORD': os.environ.get('DB_READ_PASSWORD'),
        'HOST': os.environ.get('DB_READ_HOST'),
        'PORT': os.environ.get('DB_READ_PORT', '5432'),
        'OPTIONS': {
            'sslmode': 'require',
            'connect_timeout': 10,
        },
        **DATABASE_POOL_CONFIG,
    }
    DATABASE_ROUTERS = ['myproject.routers.DatabaseRouter']

# Query optimization settings
DATABASES['default'].setdefault('OPTIONS', {}).update({
    'options': '-c default_transaction_isolation=read_committed -c statement_timeout=30000'
})
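The settings above point `DATABASE_ROUTERS` at `myproject.routers.DatabaseRouter` without defining it. A minimal sketch, keyed off the same `DATABASE_READ_URL` check; the exact routing policy is an assumption:

```python
# myproject/routers.py (sketch) -- route reads to the 'read' alias when a
# replica is configured, keep all writes and migrations on 'default'.
import os

class DatabaseRouter:
    def db_for_read(self, model, **hints):
        return 'read' if os.environ.get('DATABASE_READ_URL') else 'default'

    def db_for_write(self, model, **hints):
        return 'default'

    def allow_relation(self, obj1, obj2, **hints):
        # Both aliases point at the same underlying data set
        return True

    def allow_migrate(self, db, app_label, **hints):
        # Run migrations only against the primary
        return db == 'default'
```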
Health Check Implementation
# health/views.py
import os
import time
import psutil
from django.http import JsonResponse
from django.db import connection
from django.core.cache import cache
from django.conf import settings

def comprehensive_health_check(request):
    """Comprehensive health check for cloud deployments"""
    start_time = time.time()
    health_data = {
        'status': 'healthy',
        'timestamp': time.time(),
        'checks': {},
        'metadata': {
            'version': getattr(settings, 'VERSION', 'unknown'),
            'environment': os.environ.get('DJANGO_SETTINGS_MODULE', 'unknown'),
            'region': os.environ.get('AWS_REGION') or os.environ.get('GOOGLE_CLOUD_REGION') or 'unknown',
        }
    }

    # Database check
    try:
        db_start = time.time()
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
        health_data['checks']['database'] = {'status': 'healthy', 'response_time': time.time() - db_start}
    except Exception as e:
        health_data['checks']['database'] = {'status': 'unhealthy', 'error': str(e)}
        health_data['status'] = 'unhealthy'

    # Cache check
    try:
        cache_start = time.time()
        cache.set('health_check', 'ok', 30)
        cache_result = cache.get('health_check')
        cache_time = time.time() - cache_start
        if cache_result == 'ok':
            health_data['checks']['cache'] = {'status': 'healthy', 'response_time': cache_time}
        else:
            health_data['checks']['cache'] = {'status': 'unhealthy', 'error': 'Cache test failed'}
            health_data['status'] = 'unhealthy'
    except Exception as e:
        health_data['checks']['cache'] = {'status': 'unhealthy', 'error': str(e)}
        health_data['status'] = 'unhealthy'

    # System resources check
    try:
        cpu_percent = psutil.cpu_percent(interval=1)  # blocks ~1s; lower the interval if probes are frequent
        memory = psutil.virtual_memory()
        disk = psutil.disk_usage('/')
        health_data['checks']['system'] = {
            'status': 'healthy',
            'cpu_percent': cpu_percent,
            'memory_percent': memory.percent,
            'disk_percent': (disk.used / disk.total) * 100,
        }
        # Mark as unhealthy if resources are critically low
        if cpu_percent > 95 or memory.percent > 95:
            health_data['checks']['system']['status'] = 'unhealthy'
            health_data['status'] = 'unhealthy'
    except Exception as e:
        health_data['checks']['system'] = {'status': 'unhealthy', 'error': str(e)}

    # External dependencies check
    health_data['checks']['external'] = check_external_dependencies()

    health_data['response_time'] = time.time() - start_time
    status_code = 200 if health_data['status'] == 'healthy' else 503
    return JsonResponse(health_data, status=status_code)

def check_external_dependencies():
    """Check external service dependencies"""
    dependencies = {}

    # Check email service (EMAIL_HOST always exists in settings, so test its value)
    if getattr(settings, 'EMAIL_HOST', ''):
        try:
            from django.core.mail import get_connection
            mail_connection = get_connection()
            mail_connection.open()
            mail_connection.close()
            dependencies['email'] = {'status': 'healthy'}
        except Exception as e:
            dependencies['email'] = {'status': 'unhealthy', 'error': str(e)}

    # Check S3/storage service
    if hasattr(settings, 'AWS_STORAGE_BUCKET_NAME'):
        try:
            from django.core.files.storage import default_storage
            default_storage.exists('health_check.txt')
            dependencies['storage'] = {'status': 'healthy'}
        except Exception as e:
            dependencies['storage'] = {'status': 'unhealthy', 'error': str(e)}

    return dependencies
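A hung SMTP or S3 call in one of these external checks can stall the whole health endpoint until the platform's probe times out and kills the instance. One way to bound that risk is to run each check with its own deadline; a sketch using a worker thread (the 2-second budget is an assumption):

```python
# Sketch: run an external dependency check with a hard timeout so a hung
# network call cannot stall the health endpoint. The hung call keeps
# running in its thread, but the endpoint returns on time.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def check_with_timeout(check_fn, timeout: float = 2.0) -> dict:
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(check_fn)
    try:
        return future.result(timeout=timeout)
    except FutureTimeout:
        return {'status': 'unhealthy', 'error': f'check timed out after {timeout}s'}
    except Exception as e:
        return {'status': 'unhealthy', 'error': str(e)}
    finally:
        # Don't block waiting for the (possibly hung) check to finish
        pool.shutdown(wait=False, cancel_futures=True)
```

Usage: wrap each dependency probe, e.g. `check_with_timeout(check_email)` instead of calling it directly inside `check_external_dependencies`.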
Common Cloud Deployment Problems and Solutions
# Problem: Static files return 404 errors
# Solution: Ensure proper static file configuration
# Check these settings:
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
# Django < 4.2; on 4.2+ configure this via the STORAGES setting instead
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
# Ensure collectstatic runs during deployment
# Add to build command: python manage.py collectstatic --noinput

# Problem: Database connection timeouts or SSL errors
# Solution: Proper database configuration
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # NAME/USER/PASSWORD/HOST omitted here for brevity
        'OPTIONS': {
            'sslmode': 'require',
            'connect_timeout': 10,
            'options': '-c statement_timeout=30000'
        },
        'CONN_MAX_AGE': 600,
        'CONN_HEALTH_CHECKS': True,
    }
}
# Problem: Application running out of memory
# Solutions:
# 1. Optimize queries to reduce memory usage
# 2. Use pagination for large datasets
# 3. Implement proper caching
# 4. Scale up instance size or add more instances

# Monitor memory usage
import psutil

memory = psutil.virtual_memory()
if memory.percent > 90:
    # Log warning or trigger scaling
    pass
# Problem: High response times
# Solutions:
# 1. Add database indexes
# 2. Implement caching
# 3. Optimize database queries
# 4. Use CDN for static files
# 5. Enable compression
# Add performance monitoring
import logging
import time
from django.utils.deprecation import MiddlewareMixin

logger = logging.getLogger(__name__)

class PerformanceMiddleware(MiddlewareMixin):
    def process_request(self, request):
        request.start_time = time.time()

    def process_response(self, request, response):
        if hasattr(request, 'start_time'):
            duration = time.time() - request.start_time
            response['X-Response-Time'] = f'{duration:.3f}s'
            # Log slow requests
            if duration > 2.0:
                logger.warning(f'Slow request: {request.path} took {duration:.3f}s')
        return response
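The middleware class does nothing until it is registered. A settings sketch; the dotted path assumes the class lives in `myproject/middleware.py`, so adjust it to your layout:

```python
# settings.py fragment -- register the performance middleware.
# The module path 'myproject.middleware' is an assumption.
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    # ... the rest of the standard middleware stack ...
    'myproject.middleware.PerformanceMiddleware',
]
```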
This comprehensive cloud deployment guide provides Django developers with everything needed to successfully deploy applications to major cloud platforms, from basic setup to advanced production configurations and troubleshooting.
Using Docker
Docker containerization provides consistent, portable, and scalable deployment environments for Django applications. This chapter covers Docker fundamentals, multi-stage builds, container orchestration, and production-ready Docker configurations for Django applications.
Scaling and Load Balancing
Scaling Django applications requires strategic planning for handling increased traffic, data growth, and user demands. This chapter covers horizontal and vertical scaling strategies, load balancing configurations, auto-scaling implementations, and performance optimization techniques for high-traffic Django applications.