Performance and Optimization

Caching Strategies

Caching is one of the most effective performance optimization techniques in Django applications. By storing frequently accessed data in fast storage systems, you can dramatically reduce database load, improve response times, and enhance user experience. This comprehensive guide covers multi-level caching strategies, cache backends, invalidation patterns, and advanced caching techniques.

Understanding Django's Caching Framework

Django provides a flexible caching framework that supports multiple cache backends and caching levels. Understanding when and how to use each type of caching is crucial for building high-performance applications.

Cache Hierarchy

┌─────────────────────────────────────────────────────────────┐
│                    Browser Cache                            │
│  • Static files (CSS, JS, images)                           │
│  • HTTP cache headers                                       │
│  • Service worker cache                                     │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│                    CDN/Reverse Proxy                        │
│  • Static content delivery                                  │
│  • Geographic distribution                                  │
│  • Edge caching                                             │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│                    Application Cache                        │
│  • View caching                                             │
│  • Template fragment caching                                │
│  • Low-level cache API                                      │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│                    Database Query Cache                     │
│  • ORM query results                                        │
│  • Aggregation results                                      │
│  • Complex computations                                     │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│                    Database Cache                           │
│  • Query plan cache                                         │
│  • Buffer pool                                              │
│  • Index cache                                              │
└─────────────────────────────────────────────────────────────┘
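
The fall-through behavior of this hierarchy can be sketched with two plain dicts standing in for a process-local layer and a shared backend (the class and attribute names here are illustrative, not Django APIs):

```python
class TwoLevelCache:
    """Minimal sketch of hierarchical lookup: check the fastest
    layer first, fall through on a miss, and promote hits upward."""

    def __init__(self):
        self.local = {}   # e.g. per-process memory
        self.shared = {}  # e.g. Redis/Memcached, faked as a dict

    def get(self, key):
        if key in self.local:
            return self.local[key]
        if key in self.shared:
            # Promote to the faster layer for subsequent reads
            self.local[key] = self.shared[key]
            return self.shared[key]
        return None  # next stop would be the database

    def set(self, key, value):
        self.local[key] = value
        self.shared[key] = value

cache = TwoLevelCache()
cache.shared["greeting"] = "hello"  # simulate a value cached by another process
value = cache.get("greeting")       # falls through to shared, promotes to local
```

Each level trades capacity for latency; the promotion step is what keeps hot keys in the fastest layer.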

Cache Backend Configuration

Redis is the preferred cache backend for production Django applications due to its performance, persistence options, and advanced features.

# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'CONNECTION_POOL_KWARGS': {
                'max_connections': 50,
                'retry_on_timeout': True,
            },
            'COMPRESSOR': 'django_redis.compressors.zlib.ZlibCompressor',
            'SERIALIZER': 'django_redis.serializers.json.JSONSerializer',
        },
        'KEY_PREFIX': 'myapp',
        'VERSION': 1,
        'TIMEOUT': 300,  # 5 minutes default timeout
    }
}

# Alternative: multiple named cache configurations (this CACHES dict
# replaces the single-cache one above)
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        },
        'TIMEOUT': 300,
    },
    'sessions': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/2',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        },
        'TIMEOUT': 86400,  # 24 hours for sessions
    },
    'long_term': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/3',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        },
        'TIMEOUT': 86400 * 7,  # 1 week for long-term data
    }
}

# Session configuration
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'sessions'

Memcached Configuration

# settings.py - Memcached with pymemcache
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': [
            '127.0.0.1:11211',
            '127.0.0.1:11212',  # Multiple servers for redundancy
        ],
        # OPTIONS are passed through to the pymemcache client.
        # (MAX_ENTRIES and CULL_FREQUENCY apply to the locmem and
        # database backends, not to Memcached, which handles its own
        # LRU eviction.)
        'OPTIONS': {
            'no_delay': True,
            'ignore_exc': True,
        }
    }
}
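
With multiple Memcached servers, the client shards keys across them. A naive version of that mapping (real clients such as pymemcache's HashClient use consistent hashing, so that removing a server remaps only a fraction of keys) looks like:

```python
import hashlib

SERVERS = ["127.0.0.1:11211", "127.0.0.1:11212"]

def pick_server(key, servers=SERVERS):
    """Naive modulo sharding: deterministic, but remaps most keys
    whenever the server list changes."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```

Because the mapping is deterministic, every web worker sends the same key to the same server, which is what makes a shared cache out of independent Memcached processes.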

Low-Level Cache API Strategies

Basic Caching Patterns

from django.core.cache import cache
from django.db.models import Count, Sum
import functools
import hashlib
import json

class CacheManager:
    """Centralized cache management with consistent patterns"""
    
    @staticmethod
    def get_cache_key(prefix, *args, **kwargs):
        """Generate consistent cache keys"""
        key_data = f"{prefix}:{':'.join(map(str, args))}"
        if kwargs:
            key_data += f":{hashlib.md5(json.dumps(kwargs, sort_keys=True).encode()).hexdigest()}"
        return key_data
    
    @staticmethod
    def cache_result(timeout=300):
        """Decorator for caching function results"""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                # Generate cache key from function name and arguments
                cache_key = CacheManager.get_cache_key(
                    f"func_{func.__name__}", *args, **kwargs
                )
                
                # Try to get from cache
                result = cache.get(cache_key)
                if result is not None:
                    return result
                
                # Execute function and cache result
                result = func(*args, **kwargs)
                cache.set(cache_key, result, timeout)
                return result
            
            return wrapper
        return decorator

# Usage examples
@CacheManager.cache_result(timeout=600)  # 10 minutes
def get_popular_products(category_id, limit=10):
    """Get popular products with caching"""
    return Product.objects.filter(
        category_id=category_id,
        is_active=True
    ).annotate(
        popularity_score=Count('orders')
    ).order_by('-popularity_score')[:limit]

@CacheManager.cache_result(timeout=3600)  # 1 hour
def calculate_user_statistics(user_id):
    """Calculate expensive user statistics"""
    user = User.objects.get(id=user_id)
    last_order = user.orders.order_by('-created_at').first()
    
    stats = {
        'total_orders': user.orders.count(),
        'total_spent': user.orders.aggregate(
            total=Sum('total_amount')
        )['total'] or 0,
        'favorite_category': user.orders.values(
            'items__product__category__name'
        ).annotate(
            count=Count('id')
        ).order_by('-count').first(),
        # Guard against users with no orders
        'last_order_date': last_order.created_at if last_order else None,
    }
    
    return stats
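
The get_cache_key scheme above is order-insensitive for keyword arguments, because json.dumps(..., sort_keys=True) canonicalizes them before hashing; the same logical call therefore always hits the same cache entry. A standalone check:

```python
import hashlib
import json

def get_cache_key(prefix, *args, **kwargs):
    # Same logic as CacheManager.get_cache_key above
    key_data = f"{prefix}:{':'.join(map(str, args))}"
    if kwargs:
        key_data += f":{hashlib.md5(json.dumps(kwargs, sort_keys=True).encode()).hexdigest()}"
    return key_data

# Keyword order does not change the key...
k1 = get_cache_key("func_get_popular_products", 3, limit=10, active=True)
k2 = get_cache_key("func_get_popular_products", 3, active=True, limit=10)

# ...but any change in the argument values does
k3 = get_cache_key("func_get_popular_products", 3, limit=20, active=True)
```

Without the sort_keys canonicalization, two semantically identical calls could produce different keys and silently halve the hit rate.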

Advanced Caching Patterns

from django.core.cache import caches
import time

class SmartCache:
    """Advanced caching with automatic invalidation"""
    
    def __init__(self, cache_alias='default'):
        self.cache = caches[cache_alias]
    
    def get_or_set_with_lock(self, key, callable_func, timeout=300,
                             lock_timeout=10, max_retries=100):
        """Prevent cache stampede with distributed locking"""
        lock_key = f"lock:{key}"
        
        for _ in range(max_retries):
            # Try to get from cache first
            result = self.cache.get(key)
            if result is not None:
                return result
            
            # Try to acquire the lock; add() only succeeds when the
            # key does not already exist, so exactly one caller wins
            if self.cache.add(lock_key, "locked", lock_timeout):
                try:
                    # Double-check cache after acquiring lock
                    result = self.cache.get(key)
                    if result is not None:
                        return result
                    
                    # Execute function and cache result
                    result = callable_func()
                    self.cache.set(key, result, timeout)
                    return result
                
                finally:
                    # Always release lock
                    self.cache.delete(lock_key)
            
            # Another worker holds the lock: wait briefly and retry.
            # A bounded loop avoids the unbounded recursion a
            # recursive retry could hit if the lock holder stalls.
            time.sleep(0.1)
        
        # Gave up waiting; compute without caching rather than fail
        return callable_func()
    
    def invalidate_pattern(self, pattern):
        """Invalidate all keys matching a pattern (Redis only)"""
        if hasattr(self.cache, 'delete_pattern'):
            self.cache.delete_pattern(pattern)
    
    def get_many_with_fallback(self, keys, fallback_func):
        """Get multiple keys with fallback for missing ones"""
        cached_results = self.cache.get_many(keys)
        missing_keys = set(keys) - set(cached_results.keys())
        
        if missing_keys:
            # Get missing data
            missing_data = fallback_func(missing_keys)
            
            # Cache missing data
            cache_data = {}
            for key in missing_keys:
                if key in missing_data:
                    cache_data[key] = missing_data[key]
            
            if cache_data:
                self.cache.set_many(cache_data, timeout=300)
                cached_results.update(cache_data)
        
        return cached_results

# Usage example
smart_cache = SmartCache()

def get_product_details(product_ids):
    """Get product details with smart caching"""
    cache_keys = [f"product:{pid}" for pid in product_ids]
    
    def fetch_missing_products(missing_keys):
        missing_ids = [key.split(':')[1] for key in missing_keys]
        products = Product.objects.filter(id__in=missing_ids).select_related('category')
        
        result = {}
        for product in products:
            key = f"product:{product.id}"
            result[key] = {
                'id': product.id,
                'name': product.name,
                'price': str(product.price),
                'category': product.category.name,
            }
        return result
    
    return smart_cache.get_many_with_fallback(cache_keys, fetch_missing_products)
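
The locking behavior of get_or_set_with_lock can be exercised without Redis by faking the backend with a dict (FakeCache and its atomic add are stand-ins for the real cache API). Run concurrently, only one thread should pay for the expensive computation:

```python
import threading
import time

class FakeCache:
    """Dict-backed stand-in exposing the subset of the cache API the
    locking pattern relies on; add() must be atomic."""
    def __init__(self):
        self._data = {}
        self._mutex = threading.Lock()
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value, timeout=None):
        self._data[key] = value
    def add(self, key, value, timeout=None):
        with self._mutex:
            if key in self._data:
                return False
            self._data[key] = value
            return True
    def delete(self, key):
        self._data.pop(key, None)

def get_or_set_with_lock(cache, key, func, retries=100):
    lock_key = f"lock:{key}"
    for _ in range(retries):
        result = cache.get(key)
        if result is not None:
            return result
        if cache.add(lock_key, "locked"):
            try:
                result = cache.get(key)  # double-check under the lock
                if result is None:
                    result = func()
                    cache.set(key, result)
                return result
            finally:
                cache.delete(lock_key)
        time.sleep(0.01)  # another worker is computing; wait and retry
    return func()

calls = []
def expensive():
    calls.append(1)
    time.sleep(0.05)  # simulate a slow query
    return "computed"

fake = FakeCache()
threads = [threading.Thread(target=get_or_set_with_lock,
                            args=(fake, "k", expensive))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Despite five concurrent callers, the expensive function ran once.
```

The double-check inside the lock is what prevents a late lock winner from recomputing a value another thread has already cached.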

View-Level Caching Strategies

Conditional View Caching

from django.core.cache import cache
from django.views.decorators.cache import cache_page
from django.views.decorators.vary import vary_on_headers, vary_on_cookie
from django.utils.decorators import method_decorator
from django.views.generic import ListView
import functools
import hashlib

def cache_per_user(timeout=300):
    """Cache view per user"""
    def decorator(view_func):
        @functools.wraps(view_func)
        def wrapper(request, *args, **kwargs):
            if request.user.is_authenticated:
                cache_key = f"view_user_{request.user.id}_{request.path}"
                
                # Include query parameters in cache key
                if request.GET:
                    query_hash = hashlib.md5(
                        str(sorted(request.GET.items())).encode()
                    ).hexdigest()
                    cache_key += f"_{query_hash}"
                
                cached_response = cache.get(cache_key)
                if cached_response is not None:
                    return cached_response
                
                response = view_func(request, *args, **kwargs)
                # Only cache successful responses
                if response.status_code == 200:
                    cache.set(cache_key, response, timeout)
                return response
            else:
                return view_func(request, *args, **kwargs)
        
        return wrapper
    return decorator

# Usage examples
@cache_per_user(timeout=600)
def user_dashboard(request):
    """User-specific dashboard with caching"""
    user_stats = calculate_user_statistics(request.user.id)
    recent_orders = request.user.orders.select_related('status')[:5]
    
    context = {
        'stats': user_stats,
        'recent_orders': recent_orders,
    }
    return render(request, 'dashboard.html', context)

@method_decorator(cache_page(300), name='dispatch')
@method_decorator(vary_on_headers('User-Agent'), name='dispatch')
class ProductListView(ListView):
    """Cached product list with user agent variation"""
    model = Product
    template_name = 'products/list.html'
    paginate_by = 20
    
    def get_queryset(self):
        return Product.objects.select_related('category').filter(is_active=True)

Smart Cache Invalidation

from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from django.core.cache import cache

class CacheInvalidator:
    """Automatic cache invalidation based on model changes"""
    
    @staticmethod
    def invalidate_model_cache(model_class, instance_id=None):
        """Invalidate all cache keys related to a model"""
        model_name = model_class._meta.label_lower
        
        # Invalidate general model caches
        patterns = [
            f"model_{model_name}_*",
            f"list_{model_name}_*",
            f"count_{model_name}_*",
        ]
        
        if instance_id:
            patterns.extend([
                f"instance_{model_name}_{instance_id}_*",
                f"detail_{model_name}_{instance_id}_*",
            ])
        
        for pattern in patterns:
            # delete_pattern() is provided by django-redis, not by the
            # generic cache API
            cache.delete_pattern(pattern)
    
    @staticmethod
    def invalidate_related_caches(instance):
        """Invalidate caches related to model relationships"""
        model_name = instance._meta.label_lower
        
        # Invalidate foreign key related caches
        for field in instance._meta.get_fields():
            if field.is_relation and hasattr(instance, field.name):
                related_obj = getattr(instance, field.name)
                if related_obj:
                    related_model = field.related_model._meta.label_lower
                    cache.delete_pattern(f"*{related_model}_{related_obj.id}*")

# Signal handlers for automatic invalidation
@receiver([post_save, post_delete], sender=Product)
def invalidate_product_cache(sender, instance, **kwargs):
    """Invalidate product-related caches"""
    CacheInvalidator.invalidate_model_cache(Product, instance.id)
    
    # Invalidate category caches
    if instance.category:
        cache.delete_pattern(f"category_{instance.category.id}_*")
    
    # Invalidate homepage cache if featured product
    if instance.is_featured:
        cache.delete_pattern("homepage_*")

@receiver([post_save, post_delete], sender=Order)
def invalidate_order_cache(sender, instance, **kwargs):
    """Invalidate order-related caches"""
    CacheInvalidator.invalidate_model_cache(Order, instance.id)
    
    # Invalidate user statistics
    cache.delete_pattern(f"user_stats_{instance.user.id}_*")
    
    # Invalidate product popularity caches (on post_delete, cascaded
    # order items may already be gone, leaving this loop empty)
    for item in instance.items.all():
        cache.delete_pattern(f"product_popularity_{item.product.id}_*")
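
delete_pattern matches keys with Redis glob patterns. Its effect can be sketched over a plain dict with fnmatch (the store and its keys here are illustrative):

```python
import fnmatch

store = {
    "model_product_3": "cached detail",
    "product_popularity_3_weekly": "cached ranking",
    "user_stats_7_daily": "cached stats",
}

def delete_pattern(store, pattern):
    """Remove every key matching a glob pattern, mirroring the
    semantics of django-redis's delete_pattern over a dict."""
    for key in [k for k in store if fnmatch.fnmatch(k, pattern)]:
        del store[key]

delete_pattern(store, "*product*3*")
# Only the unrelated user-stats entry survives.
```

Broad patterns are convenient but easy to over-match, which is why the signal handlers above scope each pattern to a model name and id.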

Template Fragment Caching

Strategic Fragment Caching

<!-- templates/products/detail.html -->
{% load cache %}

<div class="product-detail">
    <!-- Cache expensive product information -->
    {% cache 3600 product_info product.id product.updated_at %}
    <div class="product-info">
        <h1>{{ product.name }}</h1>
        <p class="price">${{ product.price }}</p>
        <div class="description">{{ product.description|markdown }}</div>
        
        <!-- Cache related products separately -->
        {% cache 1800 related_products product.category.id %}
        <div class="related-products">
            <h3>Related Products</h3>
            {% for related in product.get_related_products %}
                <div class="related-item">
                    <a href="{{ related.get_absolute_url }}">{{ related.name }}</a>
                </div>
            {% endfor %}
        </div>
        {% endcache %}
    </div>
    {% endcache %}
    
    <!-- Don't cache user-specific content -->
    <div class="user-actions">
        {% if user.is_authenticated %}
            <button class="add-to-cart" data-product-id="{{ product.id }}">
                Add to Cart
            </button>
            {% if product in user.wishlist.products.all %}
                <button class="remove-wishlist">Remove from Wishlist</button>
            {% else %}
                <button class="add-wishlist">Add to Wishlist</button>
            {% endif %}
        {% endif %}
    </div>
    
    <!-- Cache reviews with pagination -->
    {% cache 900 product_reviews product.id page_number %}
    <div class="reviews">
        <h3>Customer Reviews</h3>
        {% for review in reviews %}
            <div class="review">
                <div class="rating">{{ review.rating }}/5</div>
                <div class="comment">{{ review.comment }}</div>
                <div class="author">{{ review.author.username }}</div>
            </div>
        {% endfor %}
    </div>
    {% endcache %}
</div>

Dynamic Fragment Caching

# Custom template tags for dynamic caching
from django import template
from django.core.cache import cache
from django.template.loader import render_to_string
from django.utils.safestring import mark_safe
import hashlib

register = template.Library()

@register.simple_tag(takes_context=True)
def cache_include(context, template_name, cache_key, timeout=300, **kwargs):
    """Include template with caching"""
    # Build complete cache key
    full_cache_key = f"template_{cache_key}"
    if kwargs:
        key_suffix = hashlib.md5(str(sorted(kwargs.items())).encode()).hexdigest()
        full_cache_key += f"_{key_suffix}"
    
    # Try to get from cache
    cached_content = cache.get(full_cache_key)
    if cached_content is not None:
        return mark_safe(cached_content)
    
    # Render template
    template_context = context.flatten()
    template_context.update(kwargs)
    
    content = render_to_string(template_name, template_context)
    
    # Cache the result
    cache.set(full_cache_key, content, timeout)
    
    # mark_safe prevents the rendered HTML from being autoescaped again
    return mark_safe(content)

@register.simple_tag
def cache_expensive_computation(func, cache_key, timeout=300):
    """Cache the result of a callable passed in from the view context.
    
    The callable must set `do_not_call_in_templates = True`, otherwise
    Django's variable resolution invokes it before the tag runs.
    """
    cached_result = cache.get(cache_key)
    if cached_result is not None:
        return cached_result
    
    result = func()
    cache.set(cache_key, result, timeout)
    return result

Database Query Result Caching

ORM Query Caching

from django.core.cache import cache
from django.db import models
from django.db.models import Count
import hashlib

class CachedQuerySet(models.QuerySet):
    """QuerySet with automatic caching"""
    
    def cache(self, timeout=300, key_prefix=None):
        """Enable caching for this queryset"""
        self._cache_timeout = timeout
        self._cache_key_prefix = key_prefix or self.model._meta.label_lower
        return self
    
    def _clone(self):
        """Carry the caching attributes across clones so that chained
        calls like .cache(...).filter(...) stay cached"""
        clone = super()._clone()
        if hasattr(self, '_cache_timeout'):
            clone._cache_timeout = self._cache_timeout
            clone._cache_key_prefix = self._cache_key_prefix
        return clone
    
    def _get_cache_key(self):
        """Generate cache key from query"""
        query_str = str(self.query)
        query_hash = hashlib.md5(query_str.encode()).hexdigest()
        return f"{self._cache_key_prefix}_query_{query_hash}"
    
    def __iter__(self):
        if hasattr(self, '_cache_timeout'):
            cache_key = self._get_cache_key()
            cached_result = cache.get(cache_key)
            
            if cached_result is not None:
                return iter(cached_result)
            
            # Execute query and cache results
            result = list(super().__iter__())
            cache.set(cache_key, result, self._cache_timeout)
            return iter(result)
        
        return super().__iter__()

class CachedManager(models.Manager):
    """Manager with caching support"""
    
    def get_queryset(self):
        return CachedQuerySet(self.model, using=self._db)
    
    def cached(self, timeout=300, key_prefix=None):
        """Get cached queryset"""
        return self.get_queryset().cache(timeout, key_prefix)

# Model with caching
class Product(models.Model):
    name = models.CharField(max_length=200)
    category = models.ForeignKey('Category', on_delete=models.CASCADE)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    is_active = models.BooleanField(default=True)
    
    objects = CachedManager()
    
    @classmethod
    def get_popular_products(cls, limit=10, timeout=3600):
        """Get popular products with caching"""
        cache_key = f"popular_products_{limit}"
        cached_products = cache.get(cache_key)
        
        if cached_products is None:
            cached_products = list(
                cls.objects.filter(is_active=True)
                .annotate(order_count=Count('orderitem'))
                .order_by('-order_count')[:limit]
            )
            cache.set(cache_key, cached_products, timeout)
        
        return cached_products

# Usage
popular_products = Product.objects.cached(timeout=1800).filter(
    is_active=True
).select_related('category')[:20]

Aggregation Caching

from django.core.cache import cache
from django.db.models import Avg, Count, Max, Min, Sum
from django.utils import timezone

class StatisticsCache:
    """Cache for expensive statistical computations"""
    
    @staticmethod
    def get_daily_stats(date, recalculate=False):
        """Get daily statistics with caching"""
        cache_key = f"daily_stats_{date.strftime('%Y%m%d')}"
        
        if not recalculate:
            cached_stats = cache.get(cache_key)
            if cached_stats is not None:
                return cached_stats
        
        # Calculate statistics
        stats = {
            'total_orders': Order.objects.filter(
                created_at__date=date
            ).count(),
            
            'total_revenue': Order.objects.filter(
                created_at__date=date,
                status='completed'
            ).aggregate(
                total=Sum('total_amount')
            )['total'] or 0,
            
            'new_users': User.objects.filter(
                date_joined__date=date
            ).count(),
            
            'top_products': list(
                Product.objects.filter(
                    orderitem__order__created_at__date=date
                ).annotate(
                    quantity_sold=Sum('orderitem__quantity')
                ).order_by('-quantity_sold')[:5].values(
                    'id', 'name', 'quantity_sold'
                )
            ),
        }
        
        # Cache for 24 hours (longer for past dates)
        timeout = 86400 if date < timezone.now().date() else 3600
        cache.set(cache_key, stats, timeout)
        
        return stats
    
    @staticmethod
    def get_user_lifetime_value(user_id):
        """Calculate and cache user lifetime value"""
        cache_key = f"user_ltv_{user_id}"
        cached_ltv = cache.get(cache_key)
        
        if cached_ltv is None:
            user_orders = Order.objects.filter(
                user_id=user_id,
                status='completed'
            )
            
            ltv_data = user_orders.aggregate(
                total_spent=Sum('total_amount'),
                order_count=Count('id'),
                avg_order_value=Avg('total_amount'),
                first_order=Min('created_at'),
                last_order=Max('created_at')
            )
            
            # Calculate customer lifetime in days
            if ltv_data['first_order'] and ltv_data['last_order']:
                lifetime_days = (
                    ltv_data['last_order'] - ltv_data['first_order']
                ).days + 1
                ltv_data['lifetime_days'] = lifetime_days
                ltv_data['avg_days_between_orders'] = (
                    lifetime_days / max(ltv_data['order_count'], 1)
                )
            
            # Cache for 6 hours
            cache.set(cache_key, ltv_data, 21600)
            cached_ltv = ltv_data
        
        return cached_ltv
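
Two small pieces of logic in this class are worth checking in isolation: the timeout rule in get_daily_stats and the lifetime arithmetic in get_user_lifetime_value (stats_timeout is a name introduced here for illustration):

```python
from datetime import date, datetime

# Timeout rule from get_daily_stats: completed days are immutable,
# so they can be cached far longer than the still-changing current day.
def stats_timeout(stat_date, today):
    return 86400 if stat_date < today else 3600

# Lifetime arithmetic from get_user_lifetime_value: the "+ 1" makes
# the span inclusive of both endpoint days, and max(..., 1) guards
# against division by zero for users whose aggregate reports no orders.
first_order = datetime(2024, 1, 1)
last_order = datetime(2024, 1, 10)
order_count = 5
lifetime_days = (last_order - first_order).days + 1
avg_days_between_orders = lifetime_days / max(order_count, 1)
```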

Cache Warming Strategies

Proactive Cache Warming

from django.core.management.base import BaseCommand
from django.core.cache import cache
from django.db.models import Count
from concurrent.futures import ThreadPoolExecutor
import logging

logger = logging.getLogger(__name__)

class Command(BaseCommand):
    """Management command for cache warming"""
    help = 'Warm up application caches'
    
    def add_arguments(self, parser):
        parser.add_argument(
            '--categories',
            action='store_true',
            help='Warm category caches'
        )
        parser.add_argument(
            '--products',
            action='store_true',
            help='Warm product caches'
        )
        parser.add_argument(
            '--concurrent',
            type=int,
            default=5,
            help='Number of concurrent workers'
        )
    
    def handle(self, *args, **options):
        if options['categories']:
            self.warm_category_caches(options['concurrent'])
        
        if options['products']:
            self.warm_product_caches(options['concurrent'])
    
    def warm_category_caches(self, max_workers):
        """Warm category-related caches"""
        categories = Category.objects.filter(is_active=True)
        
        def warm_category(category):
            try:
                # Warm product count cache
                cache_key = f"category_product_count_{category.id}"
                count = category.products.filter(is_active=True).count()
                cache.set(cache_key, count, 3600)
                
                # Warm popular products cache
                cache_key = f"category_popular_products_{category.id}"
                popular = list(
                    category.products.filter(is_active=True)
                    .annotate(order_count=Count('orderitem'))
                    .order_by('-order_count')[:10]
                )
                cache.set(cache_key, popular, 1800)
                
                logger.info(f"Warmed cache for category: {category.name}")
                
            except Exception as e:
                logger.error(f"Error warming cache for category {category.id}: {e}")
        
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            executor.map(warm_category, categories)
    
    def warm_product_caches(self, max_workers):
        """Warm product-related caches"""
        products = Product.objects.filter(is_active=True).select_related('category')
        
        def warm_product(product):
            try:
                # Warm product details cache
                cache_key = f"product_details_{product.id}"
                details = {
                    'id': product.id,
                    'name': product.name,
                    'price': str(product.price),
                    'category': product.category.name,
                }
                cache.set(cache_key, details, 3600)
                
                # Warm related products cache
                cache_key = f"product_related_{product.id}"
                related = list(
                    Product.objects.filter(
                        category=product.category,
                        is_active=True
                    ).exclude(id=product.id)[:5]
                )
                cache.set(cache_key, related, 1800)
                
                logger.info(f"Warmed cache for product: {product.name}")
                
            except Exception as e:
                logger.error(f"Error warming cache for product {product.id}: {e}")
        
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            executor.map(warm_product, products)

# Automatic cache warming after deployments
class CacheWarmer:
    """Automatic cache warming service"""
    
    @staticmethod
    def warm_critical_caches():
        """Warm the most critical caches"""
        critical_operations = [
            ('homepage_stats', CacheWarmer._warm_homepage_stats),
            ('popular_products', CacheWarmer._warm_popular_products),
            ('category_navigation', CacheWarmer._warm_category_navigation),
        ]
        
        for cache_name, operation in critical_operations:
            try:
                operation()
                logger.info(f"Successfully warmed {cache_name}")
            except Exception as e:
                logger.error(f"Failed to warm {cache_name}: {e}")
    
    @staticmethod
    def _warm_homepage_stats():
        """Warm homepage statistics"""
        stats = {
            'total_products': Product.objects.filter(is_active=True).count(),
            'total_categories': Category.objects.filter(is_active=True).count(),
            'featured_products': list(
                Product.objects.filter(
                    is_active=True,
                    is_featured=True
                ).select_related('category')[:6]
            ),
        }
        cache.set('homepage_stats', stats, 3600)
    
    @staticmethod
    def _warm_popular_products():
        """Warm popular products cache"""
        popular = Product.get_popular_products(limit=20)
        cache.set('popular_products_20', popular, 3600)
    
    @staticmethod
    def _warm_category_navigation():
        """Warm category navigation cache"""
        categories = list(
            Category.objects.filter(is_active=True)
            .annotate(product_count=Count('products'))
            .order_by('name')
        )
        cache.set('category_navigation', categories, 7200)
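
The executor.map fan-out used by the management command can be demonstrated standalone with a dict-backed store and fake category ids (store, warm_category, and the ids are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

store = {}  # stand-in for the cache backend
category_ids = [1, 2, 3, 4, 5]

def warm_category(category_id):
    # The real command runs ORM queries here; we fake the value.
    store[f"category_product_count_{category_id}"] = category_id * 10

# map() distributes the work across the pool and blocks until all
# items are processed; wrapping in list() also surfaces exceptions.
with ThreadPoolExecutor(max_workers=3) as executor:
    list(executor.map(warm_category, category_ids))
```

Bounding max_workers matters: warming is bursty, and an unbounded pool can saturate the database the cache is meant to protect.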

Cache Monitoring and Metrics

Cache Performance Monitoring

from django.core.cache import cache
from django.utils.deprecation import MiddlewareMixin
import time
import logging

logger = logging.getLogger('cache_performance')

class CacheMonitoringMiddleware(MiddlewareMixin):
    """Monitor cache performance and hit rates"""
    
    def process_request(self, request):
        # These counters are placeholders: instrumented cache helpers
        # (e.g. a wrapper around cache.get/cache.set) must increment
        # them for the hit rate computed below to be meaningful.
        request.cache_stats = {
            'hits': 0,
            'misses': 0,
            'sets': 0,
            'start_time': time.time()
        }
    
    def process_response(self, request, response):
        if hasattr(request, 'cache_stats'):
            stats = request.cache_stats
            duration = time.time() - stats['start_time']
            
            # Calculate hit rate
            total_operations = stats['hits'] + stats['misses']
            hit_rate = (stats['hits'] / total_operations * 100) if total_operations > 0 else 0
            
            # Log cache performance
            logger.info(
                f"Cache stats for {request.path}: "
                f"hits={stats['hits']}, misses={stats['misses']}, "
                f"sets={stats['sets']}, hit_rate={hit_rate:.1f}%, "
                f"duration={duration:.3f}s"
            )
            
            # Add headers for monitoring
            response['X-Cache-Hits'] = str(stats['hits'])
            response['X-Cache-Misses'] = str(stats['misses'])
            response['X-Cache-Hit-Rate'] = f"{hit_rate:.1f}%"
        
        return response
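
The hit-rate formula in process_response is worth isolating, since the zero-operations guard is easy to get wrong:

```python
def hit_rate(hits, misses):
    """Percentage of cache reads served from cache; 0.0 when the
    request performed no cache reads at all."""
    total = hits + misses
    return (hits / total * 100) if total > 0 else 0.0
```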

# Cache statistics collector
class CacheStats:
    """Collect and analyze cache statistics"""
    
    @staticmethod
    def get_cache_info():
        """Get cache backend information"""
        try:
            # Redis-specific stats via django-redis's documented helper
            from django_redis import get_redis_connection
            client = get_redis_connection('default')
            if client:
                info = client.info()
                
                return {
                    'backend': 'Redis',
                    'memory_used': info.get('used_memory_human'),
                    'memory_peak': info.get('used_memory_peak_human'),
                    'connected_clients': info.get('connected_clients'),
                    'total_commands': info.get('total_commands_processed'),
                    'keyspace_hits': info.get('keyspace_hits'),
                    'keyspace_misses': info.get('keyspace_misses'),
                    'hit_rate': (
                        info.get('keyspace_hits', 0) / 
                        max(info.get('keyspace_hits', 0) + info.get('keyspace_misses', 0), 1) * 100
                    )
                }
        except Exception as e:
            logger.error(f"Error getting cache info: {e}")
        
        return {'backend': 'Unknown', 'error': 'Could not retrieve cache info'}
    
    @staticmethod
    def analyze_cache_keys(pattern='*'):
        """Analyze cache key patterns and sizes"""
        try:
            if hasattr(cache, '_cache') and hasattr(cache._cache, 'get_client'):
                client = cache._cache.get_client()
                keys = client.keys(pattern)  # note: KEYS blocks Redis; prefer SCAN in production
                
                analysis = {
                    'total_keys': len(keys),
                    'key_patterns': {},
                    'large_keys': [],
                }
                
                for key in keys[:1000]:  # Limit analysis to prevent performance issues
                    # Analyze key patterns (decode once, reuse)
                    key_str = key.decode()
                    prefix = key_str.split('_')[0] if '_' in key_str else 'other'
                    analysis['key_patterns'][prefix] = analysis['key_patterns'].get(prefix, 0) + 1
                    
                    # Check key size
                    try:
                        size = client.memory_usage(key)
                        if size and size > 10000:  # Keys larger than 10KB
                            analysis['large_keys'].append({
                                'key': key.decode(),
                                'size': size
                            })
                    except Exception:
                        # MEMORY USAGE requires Redis 4.0+; skip keys we cannot size
                        pass
                
                return analysis
        except Exception as e:
            logger.error(f"Error analyzing cache keys: {e}")
        
        return {'error': 'Could not analyze cache keys'}

Best Practices and Common Pitfalls

Cache Key Design

class CacheKeyBuilder:
    """Consistent cache key building patterns"""
    
    @staticmethod
    def build_key(prefix, *args, version=None, **kwargs):
        """Build consistent cache keys"""
        # Start with prefix
        key_parts = [prefix]
        
        # Add positional arguments
        key_parts.extend(str(arg) for arg in args)
        
        # Add keyword arguments (sorted for consistency)
        if kwargs:
            for k, v in sorted(kwargs.items()):
                key_parts.append(f"{k}:{v}")
        
        # Join with colons
        key = ':'.join(key_parts)
        
        # Add version if specified
        if version:
            key = f"v{version}:{key}"
        
        # Keep keys well under memcached's 250-character limit
        if len(key) > 200:
            import hashlib
            # md5 is used only to shorten the key, not for security
            key_hash = hashlib.md5(key.encode()).hexdigest()
            key = f"{prefix}:hash:{key_hash}"
        
        return key
    
    @staticmethod
    def user_specific_key(user, prefix, *args, **kwargs):
        """Build user-specific cache keys"""
        return CacheKeyBuilder.build_key(
            f"user:{user.id}:{prefix}",
            *args,
            **kwargs
        )
    
    @staticmethod
    def model_key(model_instance, prefix, *args, **kwargs):
        """Build model-specific cache keys"""
        model_name = model_instance._meta.label_lower
        return CacheKeyBuilder.build_key(
            f"{model_name}:{model_instance.pk}:{prefix}",
            *args,
            **kwargs
        )

# Usage examples
cache_key = CacheKeyBuilder.build_key(
    'product_list',
    category_id=5,
    page=2,
    sort='price',
    version=2
)
# Result: "v2:product_list:category_id:5:page:2:sort:price"

user_cache_key = CacheKeyBuilder.user_specific_key(
    request.user,
    'dashboard_stats'
)
# Result: "user:123:dashboard_stats"

Cache Invalidation Strategies

class CacheInvalidationManager:
    """Manage complex cache invalidation scenarios"""
    
    def __init__(self):
        self.invalidation_rules = {}
    
    def register_invalidation_rule(self, model, callback):
        """Register invalidation callback for model changes"""
        if model not in self.invalidation_rules:
            self.invalidation_rules[model] = []
        self.invalidation_rules[model].append(callback)
    
    def invalidate_for_model(self, model_class, instance):
        """Execute all invalidation rules for a model"""
        if model_class in self.invalidation_rules:
            for callback in self.invalidation_rules[model_class]:
                try:
                    callback(instance)
                except Exception as e:
                    logger.error(f"Cache invalidation error: {e}")

# Global invalidation manager
cache_invalidator = CacheInvalidationManager()

# Register invalidation rules
def invalidate_product_caches(product):
    """Invalidate all product-related caches"""
    patterns = [
        f"product:{product.id}:*",
        f"category:{product.category.id}:*",
        "popular_products:*",
        "homepage:*",
    ]
    
    for pattern in patterns:
        # delete_pattern is provided by django-redis, not by core Django
        cache.delete_pattern(pattern)

def invalidate_user_caches(user):
    """Invalidate all user-related caches"""
    patterns = [
        f"user:{user.id}:*",
        f"user_stats:{user.id}:*",
    ]
    
    for pattern in patterns:
        cache.delete_pattern(pattern)

# Register the rules
cache_invalidator.register_invalidation_rule(Product, invalidate_product_caches)
cache_invalidator.register_invalidation_rule(User, invalidate_user_caches)

# Signal handlers
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver

@receiver([post_save, post_delete])
def handle_model_cache_invalidation(sender, instance, **kwargs):
    """Handle cache invalidation for any model"""
    cache_invalidator.invalidate_for_model(sender, instance)
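The pattern-delete approach above relies on django-redis. On backends that cannot delete keys by pattern (Memcached, for instance), a common alternative is namespace versioning: every key embeds a per-namespace version counter, and invalidation simply bumps the counter so old keys become unreachable and expire on their own. This is a sketch against a generic cache-like object; `versioned_key` and `invalidate_namespace` are illustrative names.

```python
def versioned_key(backend, namespace, key):
    """Build a cache key under the namespace's current version counter."""
    version = backend.get(f'ns_version:{namespace}')
    if version is None:
        version = 1
        backend.set(f'ns_version:{namespace}', version)
    return f'{namespace}:v{version}:{key}'

def invalidate_namespace(backend, namespace):
    """Bump the version: every key built under the old version is now
    unreachable and will quietly expire out of the cache."""
    version = backend.get(f'ns_version:{namespace}') or 1
    backend.set(f'ns_version:{namespace}', version + 1)
```

The trade-off is one extra cache read per key build, in exchange for O(1) invalidation of an entire namespace on any backend.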

Common Caching Pitfalls

# ❌ DON'T: Cache user-specific data globally
def bad_user_dashboard(request):
    cached_data = cache.get('dashboard_data')  # Same for all users!
    if not cached_data:
        cached_data = get_user_dashboard_data(request.user)
        cache.set('dashboard_data', cached_data, 300)
    return render(request, 'dashboard.html', {'data': cached_data})

# ✅ DO: Include user ID in cache key
def good_user_dashboard(request):
    cache_key = f'dashboard_data_user_{request.user.id}'
    cached_data = cache.get(cache_key)
    if cached_data is None:
        cached_data = get_user_dashboard_data(request.user)
        cache.set(cache_key, cached_data, 300)
    return render(request, 'dashboard.html', {'data': cached_data})

# ❌ DON'T: Cache without considering data freshness
def bad_product_list(request):
    products = cache.get('all_products')  # Never invalidated!
    if not products:
        products = list(Product.objects.all())
        cache.set('all_products', products, 86400)  # 24 hours
    return render(request, 'products.html', {'products': products})

# ✅ DO: Include timestamp or version in cache key
from django.db.models import Max

def good_product_list(request):
    # Include last update time in cache key
    last_update = Product.objects.aggregate(
        last_modified=Max('updated_at')
    )['last_modified']
    
    # Guard against an empty table, where last_update is None
    timestamp = last_update.timestamp() if last_update else 0
    cache_key = f'products_list_{timestamp}'
    products = cache.get(cache_key)
    
    if products is None:  # an empty list is still a valid cached value
        products = list(Product.objects.select_related('category').all())
        cache.set(cache_key, products, 3600)
    
    return render(request, 'products.html', {'products': products})

# ❌ DON'T: Cache large objects without compression
def bad_large_data_cache():
    large_data = generate_large_dataset()  # 10MB of data
    cache.set('large_data', large_data, 3600)  # Uses lots of memory

# ✅ DO: Compress large cached objects
import pickle
import gzip

def good_large_data_cache():
    cache_key = 'large_data_compressed'
    cached_data = cache.get(cache_key)
    
    if cached_data is None:
        large_data = generate_large_dataset()
        
        # Compress before caching
        compressed_data = gzip.compress(pickle.dumps(large_data))
        cache.set(cache_key, compressed_data, 3600)
        return large_data
    else:
        # Decompress when retrieving
        return pickle.loads(gzip.decompress(cached_data))
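Compressing everything is wasteful for small payloads, where gzip overhead can exceed the savings. A sketch of a threshold-based variant: serialize first, compress only above a size cutoff, and tag compressed blobs with a marker so reads know which path to take. The threshold, marker, and function names here are assumptions to tune for your payloads.

```python
import gzip
import pickle

COMPRESS_MIN_BYTES = 16 * 1024   # assumed cutoff; tune for your data
MAGIC = b'GZ1:'                  # marker prefix; default-protocol pickles
                                 # start with b'\x80', so no collision

def dumps_for_cache(obj):
    """Pickle, and gzip only when the payload is large enough to pay off."""
    raw = pickle.dumps(obj)
    if len(raw) >= COMPRESS_MIN_BYTES:
        return MAGIC + gzip.compress(raw)
    return raw

def loads_from_cache(blob):
    """Inverse of dumps_for_cache: detect the marker and decompress."""
    if blob.startswith(MAGIC):
        return pickle.loads(gzip.decompress(blob[len(MAGIC):]))
    return pickle.loads(blob)
```

Store the resulting bytes with `cache.set(key, dumps_for_cache(data), 3600)` and rehydrate with `loads_from_cache` on read. As always with pickle, only deserialize data your own application wrote.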

Performance Impact Measurement

Implementing comprehensive caching strategies typically provides:

  • Database Load Reduction: 60-90% fewer database queries
  • Response Time Improvement: 50-80% faster page loads
  • Server Capacity Increase: 3-5x more concurrent users
  • Memory Efficiency: Better resource utilization with proper cache sizing

The key to successful caching is understanding your application's data access patterns, implementing appropriate cache levels, and maintaining cache consistency through smart invalidation strategies. Start with query result caching and view caching for immediate impact, then add template fragment caching and advanced strategies as needed.

Remember: measure first, cache strategically, and always have a plan for cache invalidation. Proper caching transforms Django applications from database-bound systems into high-performance, scalable platforms capable of handling significant user loads.
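"Measure first" can be as simple as timing the same code path cold and warm. A minimal harness, assuming nothing beyond the standard library (`time_call` is an illustrative helper, not a Django utility):

```python
import time

def time_call(fn, *args, repeats=5, **kwargs):
    """Best-of-N wall-clock time for fn(*args, **kwargs), in seconds.

    Best-of-N rather than average reduces noise from GC pauses
    and interpreter warm-up.
    """
    best = float('inf')
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args, **kwargs)
        best = min(best, time.perf_counter() - start)
    return best
```

Comparing `time_call(uncached_view_logic)` against `time_call(cached_view_logic)` before and after each caching change keeps the optimization work grounded in numbers rather than intuition.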