Testing microservices presents unique challenges compared to monolithic applications. This chapter covers comprehensive testing strategies, tools, and best practices for ensuring reliability and quality in your Django microservices architecture.
Testing microservices is fundamentally different from testing a monolith. While a monolith keeps all of its components in one place, microservices are distributed across multiple services, each with its own database, API, and business logic, and that distribution demands the specialized testing approaches covered in this chapter.
Service Dependencies: In a monolith, if you need user data for an order test, you simply query the user table. In microservices, the order service might need to call the user service over HTTP, which introduces network latency, potential failures, and the need for the user service to be running during tests.
Data Consistency: Each service typically has its own database. Testing scenarios that involve multiple services means dealing with eventual consistency - data might not be immediately available across all services after an operation.
Network Reliability: Services communicate over the network, which can fail, timeout, or be slow. Your tests need to account for these real-world conditions.
Environment Complexity: Running tests might require spinning up multiple services, databases, message queues, and other infrastructure components.
Integration Points: The interfaces between services (APIs, message queues, shared databases) become critical points that need thorough testing.
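For the data-consistency and network-reliability challenges above, a useful pattern is to poll an assertion until it becomes true instead of asserting immediately. A minimal sketch, using only the standard library (the `eventually` helper name and its defaults are conventions, not a library API):

```python
import time

def eventually(predicate, timeout=5.0, interval=0.1):
    """Poll predicate() until it is truthy or the timeout expires.

    Useful for asserting on eventually-consistent data, e.g. waiting
    for another service's write to become visible in this service.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())

# Example usage inside a test (Order is a hypothetical model):
# assert eventually(lambda: Order.objects.filter(id=order_id).exists())
```

Keeping the timeout short and the interval small makes such tests both tolerant of propagation delay and fast when the data is already there.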
To address these challenges, we use a layered testing approach:
The testing pyramid for microservices is a strategy that helps you balance test coverage, execution speed, and maintenance effort. Think of it as a pyramid where the base (unit tests) is wide and the top (end-to-end tests) is narrow.
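One way to make the pyramid operational in a pytest project is to tag each layer with a marker, so the wide, fast base runs constantly and the narrow top runs only in CI. A sketch, assuming a pytest.ini at the repository root (the marker names are a project convention, not built into pytest):

```ini
[pytest]
markers =
    unit: fast, isolated tests with no database or network
    integration: tests that touch the database or other services
    e2e: full-stack tests run only in CI
```

Each layer can then be selected on the command line, e.g. `pytest -m unit` during development and `pytest -m "integration or e2e"` in the pipeline.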
Setting up a comprehensive testing environment for microservices requires several specialized tools. Let's understand what each tool does and why you need it:
# Core testing frameworks
pip install pytest==7.4.0 # Modern Python testing framework
pip install pytest-django==4.5.2 # Django integration for pytest
pip install pytest-asyncio==0.21.1 # Support for async/await tests
pip install pytest-mock==3.11.1 # Mocking utilities for pytest
Why pytest? Unlike Django's built-in unittest-based framework, pytest offers plain assert statements with detailed failure output, reusable fixtures instead of setUp/tearDown boilerplate, parametrized tests, and a rich plugin ecosystem (pytest-django, pytest-cov, and more).
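Two of those features in one short sketch, plain asserts plus parametrization (apply_tax is a stand-in function for illustration, not part of this chapter's codebase):

```python
import pytest

def apply_tax(price, rate):
    # Stand-in business function for illustration
    return round(price * (1 + rate), 2)

@pytest.mark.parametrize("price, rate, expected", [
    (100.0, 0.20, 120.0),
    (50.0, 0.0, 50.0),
    (19.99, 0.07, 21.39),
])
def test_apply_tax(price, rate, expected):
    # Plain assert - on failure pytest shows both sides of the comparison
    assert apply_tax(price, rate) == expected
```

One parametrized function replaces three near-identical unittest methods, and each case reports pass/fail independently.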
# API testing
pip install requests-mock==1.11.0 # Mock HTTP requests in tests
pip install responses==0.23.1 # Alternative HTTP mocking library
pip install httpx==0.24.1 # Modern HTTP client with async support
Why these tools? In microservices, you'll frequently test HTTP API calls:
requests-mock: Intercepts HTTP calls made by the requests library
responses: Similar to requests-mock but with a different API
httpx: Modern HTTP client that supports both sync and async operations
# Contract testing
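The same idea works with only the standard library: inject the HTTP client and replace it with a Mock, so the test never touches the network. A sketch (get_username and the URL are illustrative, not from this chapter's services):

```python
from unittest.mock import Mock

def get_username(client, user_id):
    # client is any object with a requests-style .get() method
    resp = client.get(f"http://user-service/api/users/{user_id}/")
    resp.raise_for_status()
    return resp.json()["username"]

def test_get_username_returns_name_from_user_service():
    # Arrange: a mock client that returns a canned successful response
    client = Mock()
    client.get.return_value = Mock(
        status_code=200,
        json=Mock(return_value={"id": 1, "username": "alice"}),
    )
    # Act & Assert: the function parses the response and calls the right URL
    assert get_username(client, 1) == "alice"
    client.get.assert_called_once_with("http://user-service/api/users/1/")
```

Libraries like requests-mock and responses do essentially this for you, intercepting at the transport layer so you don't need to pass the client in explicitly.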
pip install pact-python==1.6.0 # Consumer-driven contract testing
What is contract testing? When Service A calls Service B, both services need to agree on the API format. Contract testing ensures this agreement is maintained even as services evolve independently.
# Test data and factories
pip install factory-boy==3.3.0 # Generate test data objects
pip install faker==19.3.0 # Generate realistic fake data
Why factories? Instead of manually creating test data, factories generate realistic test objects automatically. This makes tests more maintainable and realistic.
# Coverage and reporting
pip install pytest-cov==4.1.0 # Code coverage for pytest
pip install coverage==7.2.7 # Core coverage measurement tool
# Load testing
pip install locust==2.15.1 # Performance and load testing
Code coverage tells you which parts of your code are tested. While 100% coverage doesn't guarantee bug-free code, it helps identify untested areas.
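A minimal coverage configuration might look like this; the .coveragerc file name and the 80% threshold are conventions you can adjust to your project:

```ini
# .coveragerc
[run]
source = user_service
omit =
    */migrations/*
    */tests/*

[report]
show_missing = True
fail_under = 80
```

Run it with `pytest --cov=user_service --cov-report=term-missing`; the run fails if total coverage drops below fail_under, and show_missing lists the exact untested line numbers.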
Unit tests are the foundation of your testing strategy. They test individual pieces of code in isolation, without dependencies on databases, external services, or complex setup. Think of unit tests as testing individual LEGO blocks before you build a castle.
In microservices, unit tests are even more critical because they run without any other service (or the network) being available, they give fast feedback while full integration environments are slow and expensive to spin up, and they pin down each service's business logic independently of its neighbors.
The service layer contains your business logic - the core rules and operations of your application. Testing this layer ensures your business rules work correctly regardless of how they're accessed (via API, CLI, or other interfaces).
# user_service/tests/test_services.py
import pytest
from unittest.mock import Mock, patch
from django.test import TestCase
from django.contrib.auth.models import User
from user_service.services import UserService
from user_service.exceptions import UserNotFoundError
class TestUserService:
"""
Test class for UserService business logic
We use pytest-style classes instead of Django's TestCase for unit tests
because they're faster and don't require database setup.
"""
@pytest.fixture
def user_service(self):
"""
Fixture that provides a UserService instance for each test
Fixtures are pytest's way of providing test setup. This fixture
runs before each test method and provides a fresh UserService instance.
"""
return UserService()
@pytest.fixture
def sample_user(self):
"""
Fixture that creates a test user in the database
The @pytest.mark.django_db decorator (used on test methods) tells pytest
that this test needs database access. Without it, database operations fail.
"""
return User.objects.create_user(
username='testuser',
email='test@example.com',
password='testpass123'
)
@pytest.mark.django_db
def test_get_user_by_id_success(self, user_service, sample_user):
"""
Test successful user retrieval by ID
This test verifies the happy path - when everything works correctly.
We test that:
1. The method returns a user object
2. The returned user has the correct attributes
"""
# Act: Call the method we're testing
result = user_service.get_user_by_id(sample_user.id)
# Assert: Verify the results are what we expect
assert result.id == sample_user.id
assert result.username == 'testuser'
assert result.email == 'test@example.com'
@pytest.mark.django_db
def test_get_user_by_id_not_found(self, user_service):
"""
Test user retrieval with non-existent ID
This test verifies error handling - what happens when things go wrong.
We expect a specific exception to be raised.
"""
# Act & Assert: We expect this to raise an exception
with pytest.raises(UserNotFoundError):
user_service.get_user_by_id(99999) # ID that doesn't exist
@pytest.mark.django_db # create_user writes to the database
@patch('user_service.tasks.send_welcome_email.delay')
def test_create_user_triggers_welcome_email(self, mock_email_task, user_service):
"""
Test that user creation triggers welcome email task
This test uses mocking to verify that our service calls external dependencies
(in this case, a Celery task) without actually executing them.
The @patch decorator replaces the real send_welcome_email.delay function
with a mock object that we can inspect.
"""
# Arrange: Set up test data
user_data = {
'username': 'newuser',
'email': 'new@example.com',
'password': 'newpass123',
'first_name': 'New',
'last_name': 'User'
}
# Act: Call the method we're testing
user = user_service.create_user(user_data)
# Assert: Verify the user was created correctly
assert user.username == 'newuser'
# Assert: Verify the welcome email task was called
mock_email_task.assert_called_once_with(user.id)
@patch('user_service.services.requests.post')
def test_notify_other_services(self, mock_post, user_service, sample_user):
"""
Test notification to other services
This test mocks HTTP requests to external services. We don't want our
unit tests to make real HTTP calls because:
1. They would be slow
2. They would require other services to be running
3. They could fail due to network issues
"""
# Arrange: Set up the mock to return a successful response
mock_post.return_value.status_code = 200
mock_post.return_value.json.return_value = {'success': True}
# Act: Call the method we're testing
result = user_service.notify_profile_update(
sample_user.id,
{'first_name': 'Updated'}
)
# Assert: Verify the method returned success
assert result is True
# Assert: Verify HTTP requests were made to both services
assert mock_post.call_count == 2 # Called for each service
# You can also verify the exact calls that were made:
calls = mock_post.call_args_list
assert 'order-service' in calls[0][1]['url'] # First call to order service
assert 'notification-service' in calls[1][1]['url'] # Second call to notification service
Key testing concepts explained: fixtures give each test fresh, reusable setup; @pytest.mark.django_db grants database access only to the tests that need it; @patch replaces external dependencies (Celery tasks, HTTP calls) with inspectable mocks; and every test follows the Arrange-Act-Assert structure.
Testing Django models can be tedious if you create test data manually for each test. Factory Boy solves this by generating realistic test data automatically. Think of factories as assembly lines that produce test objects with sensible defaults.
# user_service/tests/factories.py
import factory
from django.contrib.auth.models import User
from user_service.models import UserProfile
class UserFactory(factory.django.DjangoModelFactory):
"""
Factory for creating User objects in tests
This factory automatically generates realistic user data.
Each time you call UserFactory(), you get a new user with
unique username, email, etc.
"""
class Meta:
model = User # Tell factory which model to create
# Generate sequential usernames: user1, user2, user3, etc.
username = factory.Sequence(lambda n: f"user{n}")
# Generate email based on username
email = factory.LazyAttribute(lambda obj: f"{obj.username}@example.com")
# Generate realistic fake names
first_name = factory.Faker('first_name')
last_name = factory.Faker('last_name')
# Set reasonable defaults
is_active = True
class UserProfileFactory(factory.django.DjangoModelFactory):
"""
Factory for creating UserProfile objects
This factory demonstrates how to create related objects.
Each profile is automatically linked to a user.
"""
class Meta:
model = UserProfile
# Create a related User object using the UserFactory
user = factory.SubFactory(UserFactory)
# Generate realistic fake data
bio = factory.Faker('text', max_nb_chars=200)
birth_date = factory.Faker('date_of_birth', minimum_age=18, maximum_age=80)
phone_number = factory.Faker('phone_number')
Now let's see how to use these factories in tests:
# user_service/tests/test_models.py
import pytest
from django.core.exceptions import ValidationError
from user_service.tests.factories import UserFactory, UserProfileFactory
@pytest.mark.django_db
class TestUserProfile:
"""
Test class for UserProfile model
Model tests verify that your Django models work correctly:
- Validation rules are enforced
- Relationships work properly
- Custom methods return expected values
"""
def test_user_profile_creation(self):
"""
Test user profile creation with factory
This test verifies that our factory creates valid objects
and that the model's basic functionality works.
"""
# Act: Create a profile using the factory
profile = UserProfileFactory()
# Assert: Verify the profile was created with expected data
assert profile.user.username is not None
assert profile.bio is not None
assert profile.birth_date is not None
# Verify the profile was saved to the database
assert profile.id is not None
def test_user_profile_str_representation(self):
"""
Test string representation of user profile
This tests the __str__ method of your model, which is used
in Django admin and other places where the object is displayed.
"""
# Arrange: Create a user with a specific username
user = UserFactory(username='testuser')
profile = UserProfileFactory(user=user)
# Act & Assert: Test the string representation
assert str(profile) == "Profile for testuser"
def test_phone_number_validation(self):
"""
Test phone number validation
This test verifies that model validation works correctly.
We create an invalid object and expect validation to fail.
"""
# Arrange: Create a profile with invalid phone number
# Using .build() creates the object but doesn't save it to database
profile = UserProfileFactory.build(phone_number='invalid-phone')
# Act & Assert: Expect validation to fail
with pytest.raises(ValidationError):
profile.full_clean() # This runs model validation
def test_user_profile_relationship(self):
"""
Test the relationship between User and UserProfile
This verifies that Django's foreign key relationships work correctly.
"""
# Arrange: Create a profile (which automatically creates a user)
profile = UserProfileFactory()
# Act: Access the related user
user = profile.user
# Assert: Verify the relationship works both ways
assert user.profile == profile
assert profile.user == user
def test_multiple_profiles_different_users(self):
"""
Test that multiple profiles create different users
This verifies that our factory creates unique objects each time.
"""
# Act: Create multiple profiles
profile1 = UserProfileFactory()
profile2 = UserProfileFactory()
# Assert: Verify they have different users
assert profile1.user != profile2.user
assert profile1.user.username != profile2.user.username
assert profile1.user.email != profile2.user.email
Benefits of using factories: every call produces unique, realistic data; there is no hand-written setup boilerplate to maintain; and sensible defaults can be overridden per test.
Factory usage patterns:
# Create with defaults
user = UserFactory()
# Override specific fields
user = UserFactory(username='specific_user', email='specific@email.com')
# Create multiple objects
users = UserFactory.create_batch(5) # Creates 5 users
# Build without saving to database (useful for validation tests)
user = UserFactory.build()
# Create related objects
profile = UserProfileFactory(user__username='custom_user') # Creates user with custom username
Integration tests verify that different components of your service work together correctly. While unit tests focus on individual functions, integration tests ensure that your service layer, database, and external APIs all play nicely together.
Unit test: "Does this function calculate tax correctly?"
Integration test: "When I create an order through the service layer, does it save to the database, call the tax service, and update the order total correctly?"
Integration tests are slower than unit tests because they involve real database operations and potentially external service calls, but they catch bugs that unit tests miss.
These tests verify that your service layer correctly interacts with the database, including transactions, relationships, and data consistency.
# user_service/tests/test_integration.py
import pytest
from django.test import TransactionTestCase
from django.db import transaction
from django.core.exceptions import ValidationError
from user_service.models import User, UserProfile
from user_service.services import UserService
@pytest.mark.django_db
class TestUserServiceIntegration:
"""
Integration tests for UserService
These tests verify that the service layer correctly interacts
with the database and handles complex operations involving
multiple models and transactions.
"""
def test_user_creation_with_profile(self):
"""
Test complete user creation flow with profile
This test verifies that creating a user with a profile:
1. Creates both User and UserProfile records
2. Links them correctly
3. Saves all data to the database
4. Handles the transaction properly
"""
# Arrange: Set up test data
service = UserService()
user_data = {
'username': 'integrationuser',
'email': 'integration@example.com',
'password': 'testpass123',
'profile_data': {
'bio': 'Integration test user',
'phone_number': '+1234567890'
}
}
# Act: Call the service method
user = service.create_user_with_profile(user_data)
# Assert: Verify user creation
assert user.username == 'integrationuser'
assert user.email == 'integration@example.com'
# Verify the user was actually saved to database
saved_user = User.objects.get(id=user.id)
assert saved_user.username == 'integrationuser'
# Assert: Verify profile creation and relationship
assert hasattr(user, 'profile')
assert user.profile.bio == 'Integration test user'
assert user.profile.phone_number == '+1234567890'
# Verify the profile was saved and linked correctly
saved_profile = UserProfile.objects.get(user=user)
assert saved_profile.bio == 'Integration test user'
def test_transaction_rollback_on_error(self):
"""
Test that transactions are properly rolled back on errors
This is a critical test for data integrity. If creating a user
succeeds but creating the profile fails, we want to ensure
that the user creation is rolled back so we don't have
orphaned data.
"""
service = UserService()
# Arrange: Create user data that will cause an error in profile creation
user_data = {
'username': 'erroruser',
'email': 'error@example.com',
'password': 'testpass123',
'profile_data': {
'phone_number': 'invalid-phone-number' # This will cause validation error
}
}
# Act & Assert: Expect the operation to fail
with pytest.raises(ValidationError):
service.create_user_with_profile(user_data)
# Assert: Verify that user was not created due to rollback
# This is the key test - if transactions work correctly,
# the user should not exist in the database
assert not User.objects.filter(username='erroruser').exists()
# Also verify no orphaned profile was created
assert not UserProfile.objects.filter(user__username='erroruser').exists()
def test_bulk_user_creation(self):
"""
Test creating multiple users in a single operation
This tests bulk operations and ensures they work correctly
with the database.
"""
service = UserService()
# Arrange: Create data for multiple users
users_data = [
{
'username': f'bulkuser{i}',
'email': f'bulk{i}@example.com',
'password': 'testpass123'
}
for i in range(5)
]
# Act: Create users in bulk
created_users = service.create_users_bulk(users_data)
# Assert: Verify all users were created
assert len(created_users) == 5
# Verify they're all in the database
for i, user in enumerate(created_users):
assert user.username == f'bulkuser{i}'
# Verify it's actually in the database
db_user = User.objects.get(id=user.id)
assert db_user.username == f'bulkuser{i}'
def test_user_update_with_related_data(self):
"""
Test updating user with related profile data
This tests complex update operations that involve
multiple related models.
"""
service = UserService()
# Arrange: Create a user with profile
user = service.create_user_with_profile({
'username': 'updateuser',
'email': 'update@example.com',
'password': 'testpass123',
'profile_data': {
'bio': 'Original bio',
'phone_number': '+1111111111'
}
})
# Act: Update user and profile data
updated_user = service.update_user_with_profile(user.id, {
'first_name': 'Updated',
'last_name': 'Name',
'profile_data': {
'bio': 'Updated bio',
'phone_number': '+2222222222'
}
})
# Assert: Verify user updates
assert updated_user.first_name == 'Updated'
assert updated_user.last_name == 'Name'
# Verify profile updates
updated_user.refresh_from_db() # Reload from database
assert updated_user.profile.bio == 'Updated bio'
assert updated_user.profile.phone_number == '+2222222222'
# Verify changes are persisted in database
db_user = User.objects.select_related('profile').get(id=user.id)
assert db_user.first_name == 'Updated'
assert db_user.profile.bio == 'Updated bio'
Key integration testing concepts: these tests hit a real database (via @pytest.mark.django_db), so transactions, rollbacks, and bulk operations are exercised for real; assertions re-read saved rows from the database rather than trusting in-memory objects; and error paths are tested as carefully as happy paths to protect data integrity.
API integration tests verify that your REST endpoints work correctly with the database and business logic. These tests make actual HTTP requests to your Django views and verify the responses.
# user_service/tests/test_api_integration.py
import pytest
import json
from django.test import Client
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APIClient
from django.contrib.auth.models import User
from user_service.tests.factories import UserFactory
@pytest.mark.django_db
class TestUserAPIIntegration:
"""
API integration tests for user endpoints
These tests verify that your API endpoints:
1. Accept the correct input formats
2. Return the expected response formats
3. Properly interact with the database
4. Handle authentication and permissions correctly
5. Return appropriate HTTP status codes
"""
@pytest.fixture
def api_client(self):
"""
Fixture providing an API client for making HTTP requests
We use DRF's APIClient instead of Django's Client because
it has better support for JSON requests and authentication.
"""
return APIClient()
@pytest.fixture
def authenticated_user(self):
"""Fixture providing an authenticated user for tests that require login"""
return UserFactory()
def test_user_list_api(self, api_client):
"""
Test user list API endpoint
This test verifies:
1. The endpoint returns a 200 status code
2. The response contains the expected number of users
3. The response format is correct
"""
# Arrange: Create test users using our factory
UserFactory.create_batch(3) # Creates 3 users
# Act: Make GET request to user list endpoint
response = api_client.get(reverse('user-list'))
# Assert: Verify response status
assert response.status_code == status.HTTP_200_OK
# Assert: Verify response data
data = response.json()
assert 'results' in data # Assuming paginated response
assert len(data['results']) == 3
# Verify each user has expected fields
for user_data in data['results']:
assert 'id' in user_data
assert 'username' in user_data
assert 'email' in user_data
# Verify sensitive fields are not exposed
assert 'password' not in user_data
def test_user_creation_api(self, api_client):
"""
Test user creation via API
This test verifies:
1. Valid user data creates a new user
2. The user is actually saved to the database
3. The response contains the created user data
4. Sensitive data is handled correctly
"""
# Arrange: Prepare user data
user_data = {
'username': 'apiuser',
'email': 'api@example.com',
'password': 'apipass123',
'first_name': 'API',
'last_name': 'User'
}
# Act: Make POST request to create user
response = api_client.post(
reverse('user-list'),
data=json.dumps(user_data),
content_type='application/json'
)
# Assert: Verify response status
assert response.status_code == status.HTTP_201_CREATED
# Assert: Verify response data
response_data = response.json()
assert response_data['username'] == 'apiuser'
assert response_data['email'] == 'api@example.com'
assert response_data['first_name'] == 'API'
assert response_data['last_name'] == 'User'
# Verify password is not returned in response
assert 'password' not in response_data
# Assert: Verify user was actually created in database
created_user = User.objects.get(username='apiuser')
assert created_user.email == 'api@example.com'
assert created_user.check_password('apipass123') # Verify password was hashed
def test_user_creation_validation_errors(self, api_client):
"""
Test user creation with invalid data
This test verifies that the API properly validates input
and returns appropriate error messages.
"""
# Arrange: Prepare invalid user data (missing required fields)
invalid_data = {
'username': '', # Empty username should be invalid
'email': 'invalid-email', # Invalid email format
# Missing password
}
# Act: Make POST request with invalid data
response = api_client.post(
reverse('user-list'),
data=json.dumps(invalid_data),
content_type='application/json'
)
# Assert: Verify error response
assert response.status_code == status.HTTP_400_BAD_REQUEST
# Verify error details
error_data = response.json()
assert 'username' in error_data # Username validation error
assert 'email' in error_data # Email validation error
assert 'password' in error_data # Password required error
# Verify no user was created
assert User.objects.count() == 0
def test_user_update_api(self, api_client, authenticated_user):
"""
Test user update via API
This test verifies:
1. Authenticated users can update their data
2. Partial updates work correctly
3. Changes are persisted to the database
"""
# Arrange: Authenticate the user
api_client.force_authenticate(user=authenticated_user)
# Prepare update data (partial update)
update_data = {
'first_name': 'Updated',
'last_name': 'Name'
}
# Act: Make PATCH request to update user
response = api_client.patch(
reverse('user-detail', kwargs={'pk': authenticated_user.id}),
data=json.dumps(update_data),
content_type='application/json'
)
# Assert: Verify response
assert response.status_code == status.HTTP_200_OK
response_data = response.json()
assert response_data['first_name'] == 'Updated'
assert response_data['last_name'] == 'Name'
# Assert: Verify changes were saved to database
authenticated_user.refresh_from_db()
assert authenticated_user.first_name == 'Updated'
assert authenticated_user.last_name == 'Name'
def test_user_detail_unauthorized(self, api_client):
"""
Test that unauthorized users cannot access user details
This test verifies authentication and authorization work correctly.
"""
# Arrange: Create a user but don't authenticate
user = UserFactory()
# Act: Try to access user detail without authentication
response = api_client.get(
reverse('user-detail', kwargs={'pk': user.id})
)
# Assert: Verify access is denied
assert response.status_code == status.HTTP_401_UNAUTHORIZED
def test_user_list_pagination(self, api_client):
"""
Test that user list pagination works correctly
This test verifies that large datasets are properly paginated.
"""
# Arrange: Create many users
UserFactory.create_batch(25) # Create 25 users
# Act: Request first page
response = api_client.get(reverse('user-list'))
# Assert: Verify pagination
assert response.status_code == status.HTTP_200_OK
data = response.json()
assert 'results' in data
assert 'count' in data
assert 'next' in data
assert 'previous' in data
assert data['count'] == 25 # Total count
assert len(data['results']) <= 20 # Page size (assuming 20 per page)
assert data['next'] is not None # Should have next page
assert data['previous'] is None # First page, no previous
def test_user_search_functionality(self, api_client):
"""
Test user search/filtering functionality
This test verifies that query parameters work correctly.
"""
# Arrange: Create users with specific data
UserFactory(username='john_doe', first_name='John')
UserFactory(username='jane_doe', first_name='Jane')
UserFactory(username='bob_smith', first_name='Bob')
# Act: Search for users with 'doe' in username
response = api_client.get(
reverse('user-list'),
{'search': 'doe'} # Query parameter
)
# Assert: Verify search results
assert response.status_code == status.HTTP_200_OK
data = response.json()
assert len(data['results']) == 2 # Should find john_doe and jane_doe
usernames = [user['username'] for user in data['results']]
assert 'john_doe' in usernames
assert 'jane_doe' in usernames
assert 'bob_smith' not in usernames
Key API integration testing concepts: use DRF's APIClient (with force_authenticate) to exercise endpoints end to end; assert on status codes, response shape, and database side effects together; verify validation errors, pagination, and search behavior; and confirm that sensitive fields such as passwords never leak into responses.
Contract testing is a crucial technique for microservices that ensures services can communicate correctly even as they evolve independently. Think of it as a formal agreement between services about how they'll talk to each other.
In microservices, Service A (consumer) calls Service B (provider). Both teams develop independently, but they need to ensure their services remain compatible. Traditional integration tests require both services to be running, which creates dependencies and slows down development.
The challenge: traditional integration tests need both services running at once, which is slow, brittle, and couples the two teams' development and release schedules.
The solution: contract testing verifies the "contract" (API specification) between services without requiring both services to run simultaneously.
Pact is the most popular contract testing framework. Here's how to implement it:
# tests/contract/test_user_service_contract.py
import pytest
from pact import Consumer, Provider, Like, Term
from user_service.client import UserServiceClient
from user_service.exceptions import UserNotFoundError # raised by the client on 404s
# Consumer side (Order Service testing User Service contract)
@pytest.fixture(scope='session')
def pact():
"""
Create a Pact between Order Service (consumer) and User Service (provider)
This fixture sets up the contract testing framework. The consumer
(Order Service) defines what it expects from the provider (User Service).
"""
return Consumer('OrderService').has_pact_with(Provider('UserService'))
@pytest.fixture(scope='session')
def user_service_client():
"""
Client for making requests to User Service
This would be the actual client your Order Service uses to
communicate with the User Service.
"""
return UserServiceClient('http://localhost:1234')
def test_get_user_contract(pact, user_service_client):
"""
Test contract for getting user by ID
This test defines what the Order Service expects when it
requests user data from the User Service.
"""
# Define the expected response structure
# Like() means "a value like this" - the actual value can vary
# but the type and structure must match
expected_user = {
'id': Like(1), # Any integer
'username': Like('testuser'), # Any string
'email': Like('test@example.com'), # Any string (could add email validation)
'first_name': Like('Test'), # Any string
'last_name': Like('User'), # Any string
'is_active': Like(True), # Any boolean
'created_at': Term( # String matching ISO date format
matcher=r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}',
example='2023-01-01T12:00:00'
)
}
# Define the interaction - what request triggers what response
(pact
.given('User with ID 1 exists') # Provider state
.upon_receiving('a request for user with ID 1') # Description
.with_request('GET', '/api/users/1/') # Expected request
.will_respond_with(200, body=expected_user)) # Expected response
# Execute the test within the Pact context
with pact:
# Make the actual request using your client
user = user_service_client.get_user(1)
# Verify the response matches your expectations
assert user['username'] == 'testuser'
assert user['email'] == 'test@example.com'
assert 'id' in user
assert 'created_at' in user
def test_create_user_contract(pact, user_service_client):
"""
Test contract for creating a new user
This test defines the contract for user creation, including
the request format and expected response.
"""
# Define the request data structure
user_data = {
'username': 'newuser',
'email': 'new@example.com',
'password': 'newpass123',
'first_name': 'New',
'last_name': 'User'
}
# Define the expected response
expected_response = {
'id': Like(2), # New user gets an ID
'username': 'newuser', # Exact match for provided data
'email': 'new@example.com',
'first_name': 'New',
'last_name': 'User',
'is_active': True, # Default value
'created_at': Term(
matcher=r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}',
example='2023-01-01T12:00:00'
)
}
# Define the interaction
(pact
.given('User service is available') # Provider state
.upon_receiving('a request to create a new user') # Description
.with_request('POST', '/api/users/', body=user_data) # Expected request
.will_respond_with(201, body=expected_response)) # Expected response
# Execute the test
with pact:
user = user_service_client.create_user(user_data)
# Verify the response
assert user['username'] == 'newuser'
assert user['id'] is not None
assert user['is_active'] is True
def test_get_user_not_found_contract(pact, user_service_client):
"""
Test contract for user not found scenario
It's important to test error cases too, so both services
agree on how errors are handled.
"""
expected_error = {
'error': 'User not found',
'code': 'USER_NOT_FOUND',
'detail': Like('User with ID 999 does not exist')
}
(pact
.given('User with ID 999 does not exist')
.upon_receiving('a request for non-existent user')
.with_request('GET', '/api/users/999/')
.will_respond_with(404, body=expected_error))
with pact:
# This should raise an exception or return an error response
try:
user_service_client.get_user(999)
assert False, "Expected an exception for non-existent user"
except UserNotFoundError as e:
assert e.code == 'USER_NOT_FOUND'
def test_user_list_with_pagination_contract(pact, user_service_client):
"""
Test contract for paginated user list
This shows how to test more complex response structures
like paginated lists.
"""
expected_response = {
'count': Like(100), # Total number of users
'next': Like('http://example.com/api/users/?page=2'),
'previous': None, # First page has no previous
'results': [
{
'id': Like(1),
'username': Like('user1'),
'email': Like('user1@example.com'),
'is_active': Like(True)
},
{
'id': Like(2),
'username': Like('user2'),
'email': Like('user2@example.com'),
'is_active': Like(True)
}
]
}
(pact
.given('Multiple users exist')
.upon_receiving('a request for user list')
.with_request('GET', '/api/users/', query={'page': '1', 'page_size': '20'})
.will_respond_with(200, body=expected_response))
with pact:
response = user_service_client.get_users(page=1, page_size=20)
assert 'results' in response
assert 'count' in response
assert len(response['results']) >= 1
Key Contract Testing Concepts:
The provider side (User Service) needs to verify that it can fulfill all the contracts that consumers have defined. This ensures that when you make changes to your service, you don't break existing consumers.
# user_service/tests/test_contract_provider.py
import pytest
from pact import Verifier
from django.test import override_settings
from django.contrib.auth.models import User
from user_service.tests.factories import UserFactory
def test_user_service_provider_contract():
"""
Verify that User Service fulfills the contract
This test runs on the provider side (User Service) and verifies
that the service can fulfill all the contracts that consumers
have defined.
"""
# Create a Pact verifier
verifier = Verifier(
provider='UserService', # Name of this service
provider_base_url='http://localhost:8000' # URL where service is running
)
    # Provider states are prepared via the provider-states endpoint defined
    # below; before each interaction, Pact posts the state name (from the
    # consumer's .given(...) clause) to that endpoint to set up test data.
    # Verify the contracts. This will:
    # 1. Read the contract file generated by consumer tests
    # 2. Replay each recorded request against your running service
    # 3. Verify the responses match what consumers expect
    return_code, logs = verifier.verify_pacts(
        './pacts/orderservice-userservice.json',  # Contract file location
        provider_states_setup_url='http://localhost:8000/api/provider-states/',
        verbose=True  # Show detailed logs
    )
    # verify_pacts returns the verifier's exit code and its logs;
    # an exit code of 0 means every interaction was verified
    assert return_code == 0, f"Contract verification failed: {logs}"
def setup_user_exists():
"""
Set up test data for 'User with ID 1 exists' state
This function is called before tests that require a user with ID 1.
It ensures the database is in the correct state for the test.
"""
# Clean up any existing data
User.objects.filter(id=1).delete()
# Create the required user
UserFactory(
id=1,
username='testuser',
email='test@example.com',
first_name='Test',
last_name='User',
is_active=True
)
return {'message': 'User with ID 1 created'}
def setup_service_available():
"""
Set up test data for 'User service is available' state
This ensures the service is ready to handle user creation requests.
"""
# Ensure database is clean and ready
# You might want to clear any test data that could interfere
return {'message': 'Service is ready'}
def setup_multiple_users():
"""
Set up test data for 'Multiple users exist' state
This creates multiple users for pagination tests.
"""
# Clear existing users
User.objects.all().delete()
# Create multiple users
users = []
for i in range(1, 101): # Create 100 users
user = UserFactory(
username=f'user{i}',
email=f'user{i}@example.com',
is_active=True
)
users.append(user)
return {'message': f'Created {len(users)} users'}
def setup_user_not_exists():
"""
Set up test data for 'User with ID 999 does not exist' state
This ensures that user with ID 999 doesn't exist for error testing.
"""
# Make sure user 999 doesn't exist
User.objects.filter(id=999).delete()
return {'message': 'User with ID 999 does not exist'}
# Provider states endpoint
# This is a special endpoint that Pact calls to set up test data
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_http_methods
import json
@csrf_exempt
@require_http_methods(["POST"])
def provider_states_endpoint(request):
"""
Endpoint for setting up provider states during contract verification
Pact calls this endpoint before each test to set up the required
data state. The request contains the state name, and this endpoint
calls the appropriate setup function.
"""
try:
data = json.loads(request.body)
state = data.get('state')
# Map of state names to setup functions
state_handlers = {
'User with ID 1 exists': setup_user_exists,
'User service is available': setup_service_available,
'Multiple users exist': setup_multiple_users,
'User with ID 999 does not exist': setup_user_not_exists,
}
handler = state_handlers.get(state)
if handler:
result = handler()
return JsonResponse(result)
else:
return JsonResponse(
{'error': f'Unknown state: {state}'},
status=400
)
except Exception as e:
return JsonResponse(
{'error': str(e)},
status=500
)
# Add this to your URLs
# urls.py
from django.urls import path
from . import views
urlpatterns = [
# ... your other URLs
path('api/provider-states/', views.provider_states_endpoint, name='provider-states'),
]
Running Contract Tests:
# 1. Consumer side (Order Service)
# Run consumer tests to generate contract files
pytest tests/contract/
# This generates: ./pacts/orderservice-userservice.json
# 2. Provider side (User Service)
# Start your Django service
python manage.py runserver 8000
# In another terminal, run provider verification
pytest user_service/tests/test_contract_provider.py
# Or use Pact CLI directly
pact-verifier --provider-base-url=http://localhost:8000 \
--pact-urls=./pacts/orderservice-userservice.json \
--provider-states-setup-url=http://localhost:8000/api/provider-states/
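The contract file produced by the consumer run is plain JSON: each interaction records the request the consumer will make and the response it expects. A trimmed, illustrative sketch of its shape (field values are examples, and the exact layout depends on the Pact specification version):

```json
{
  "consumer": { "name": "OrderService" },
  "provider": { "name": "UserService" },
  "interactions": [
    {
      "description": "a request for non-existent user",
      "providerState": "User with ID 999 does not exist",
      "request": { "method": "GET", "path": "/api/users/999/" },
      "response": { "status": 404 }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}
```

This file is the single source of truth shared between the two sides: the consumer generates it, and the provider verification replays it.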
Benefits of Contract Testing:
Best Practices:
Component testing treats an individual service as a black box and tests it in isolation with all external dependencies mocked or stubbed. Think of it as testing a complete microservice as if it were a single component, without worrying about other services.
Integration Testing: Tests how components within a service work together (database + service layer + views).
Component Testing: Tests an entire service as a single unit, with all external services mocked.
Component testing is particularly valuable in microservices because it exercises a service through its public API exactly as consumers see it, while keeping tests fast and deterministic by mocking every external dependency.
# user_service/tests/test_component.py
import pytest
import responses
from django.test import override_settings
from django.urls import reverse
from rest_framework.test import APIClient
from rest_framework import status
from user_service.services import UserService
from user_service.tests.factories import UserFactory
@pytest.mark.django_db
class TestUserServiceComponent:
"""
Component tests for the entire User Service
These tests verify that the User Service works correctly as a complete unit,
with all external dependencies mocked. We test the service through its
public interfaces (APIs) while controlling all external interactions.
"""
@pytest.fixture
def api_client(self):
"""API client for making HTTP requests to the service"""
return APIClient()
@responses.activate
def test_user_creation_with_external_notifications(self, api_client):
"""
Test user creation with mocked external service calls
This test verifies that when a user is created:
1. The user is saved to the database
2. External services are notified correctly
3. The API returns the expected response
4. All external calls are made with correct data
We use @responses.activate to mock HTTP calls to external services.
"""
# Arrange: Mock external service calls
# Mock notification service
responses.add(
responses.POST,
'http://notification-service:8002/api/user-created/',
json={'success': True, 'notification_id': 'notif_123'},
status=200
)
# Mock analytics service
responses.add(
responses.POST,
'http://analytics-service:8003/api/track-event/',
json={'tracked': True, 'event_id': 'event_456'},
status=200
)
# Mock email service
responses.add(
responses.POST,
'http://email-service:8004/api/send-welcome/',
json={'sent': True, 'message_id': 'msg_789'},
status=200
)
# Prepare user creation data
user_data = {
'username': 'componentuser',
'email': 'component@example.com',
'password': 'testpass123',
'first_name': 'Component',
'last_name': 'Test'
}
# Act: Create user through the API
response = api_client.post(
reverse('user-list'),
data=user_data,
format='json'
)
# Assert: Verify the API response
assert response.status_code == status.HTTP_201_CREATED
response_data = response.json()
assert response_data['username'] == 'componentuser'
assert response_data['email'] == 'component@example.com'
# Assert: Verify user was saved to database
from django.contrib.auth.models import User
created_user = User.objects.get(username='componentuser')
assert created_user.email == 'component@example.com'
# Assert: Verify external service calls were made
assert len(responses.calls) == 3 # Should have called all 3 services
# Verify specific service calls
notification_call = next(
call for call in responses.calls
if 'notification-service' in call.request.url
)
assert notification_call.request.url == 'http://notification-service:8002/api/user-created/'
analytics_call = next(
call for call in responses.calls
if 'analytics-service' in call.request.url
)
assert analytics_call.request.url == 'http://analytics-service:8003/api/track-event/'
@responses.activate
def test_user_service_handles_external_service_failure(self, api_client):
"""
Test graceful handling of external service failures
This test verifies that the User Service continues to work correctly
even when external services are unavailable. The user should still
be created successfully, but external notifications might fail.
"""
# Arrange: Mock external service failure
responses.add(
responses.POST,
'http://notification-service:8002/api/user-created/',
json={'error': 'Service unavailable'},
status=503 # Service unavailable
)
# Mock analytics service success (to test partial failure)
responses.add(
responses.POST,
'http://analytics-service:8003/api/track-event/',
json={'tracked': True},
status=200
)
user_data = {
'username': 'failureuser',
'email': 'failure@example.com',
'password': 'testpass123'
}
# Act: Create user despite external service failure
response = api_client.post(
reverse('user-list'),
data=user_data,
format='json'
)
# Assert: User creation should still succeed
# The service should be resilient to external failures
assert response.status_code == status.HTTP_201_CREATED
# Verify user was created in database
from django.contrib.auth.models import User
user = User.objects.get(username='failureuser')
assert user.email == 'failure@example.com'
# Verify both services were called (even though one failed)
assert len(responses.calls) == 2
@responses.activate
def test_user_profile_update_workflow(self, api_client):
"""
Test complete user profile update workflow
This test verifies the entire workflow when a user updates their profile:
1. Profile is updated in database
2. Cache is invalidated
3. Other services are notified
4. Audit log is created
"""
# Arrange: Create a user first
user = UserFactory(username='updateuser')
api_client.force_authenticate(user=user)
# Mock external service calls for profile update
responses.add(
responses.POST,
'http://cache-service:8005/api/invalidate/',
json={'invalidated': True},
status=200
)
responses.add(
responses.POST,
'http://audit-service:8006/api/log-event/',
json={'logged': True, 'audit_id': 'audit_123'},
status=200
)
responses.add(
responses.POST,
'http://recommendation-service:8007/api/user-updated/',
json={'updated': True},
status=200
)
# Prepare update data
update_data = {
'first_name': 'Updated',
'last_name': 'Name',
'bio': 'Updated bio information'
}
# Act: Update user profile
response = api_client.patch(
reverse('user-detail', kwargs={'pk': user.id}),
data=update_data,
format='json'
)
# Assert: Verify API response
assert response.status_code == status.HTTP_200_OK
response_data = response.json()
assert response_data['first_name'] == 'Updated'
assert response_data['last_name'] == 'Name'
# Verify database was updated
user.refresh_from_db()
assert user.first_name == 'Updated'
assert user.last_name == 'Name'
# Verify external services were notified
assert len(responses.calls) == 3
# Verify cache invalidation was called
cache_call = next(
call for call in responses.calls
if 'cache-service' in call.request.url
)
assert cache_call is not None
def test_user_authentication_workflow(self, api_client):
"""
Test user authentication workflow without external dependencies
This test verifies the authentication process works correctly
within the service boundaries.
"""
# Arrange: Create a user
user = UserFactory(username='authuser', email='auth@example.com')
user.set_password('testpass123')
user.save()
# Act: Attempt to authenticate
login_data = {
'username': 'authuser',
'password': 'testpass123'
}
response = api_client.post(
reverse('auth-login'),
data=login_data,
format='json'
)
# Assert: Verify successful authentication
assert response.status_code == status.HTTP_200_OK
response_data = response.json()
assert 'token' in response_data or 'access_token' in response_data
# Test invalid credentials
invalid_data = {
'username': 'authuser',
'password': 'wrongpassword'
}
response = api_client.post(
reverse('auth-login'),
data=invalid_data,
format='json'
)
assert response.status_code == status.HTTP_401_UNAUTHORIZED
@pytest.mark.parametrize("user_data,expected_error", [
# Test various validation scenarios
({'username': '', 'email': 'test@example.com', 'password': 'pass123'}, 'username'),
({'username': 'test', 'email': 'invalid-email', 'password': 'pass123'}, 'email'),
({'username': 'test', 'email': 'test@example.com', 'password': '123'}, 'password'),
({'username': 'test', 'email': 'test@example.com'}, 'password'), # Missing password
])
def test_user_validation_scenarios(self, api_client, user_data, expected_error):
"""
Test various user validation scenarios
This parameterized test verifies that the service correctly validates
user input and returns appropriate error messages.
"""
# Act: Attempt to create user with invalid data
response = api_client.post(
reverse('user-list'),
data=user_data,
format='json'
)
# Assert: Verify validation error
assert response.status_code == status.HTTP_400_BAD_REQUEST
error_data = response.json()
assert expected_error in error_data
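The resilience test above (user creation succeeds even when a notification call fails) boils down to one design rule: persist first, treat outbound calls as best-effort. A minimal stdlib sketch of that rule — the names `UserCreator` and `user_created` are illustrative, not part of the chapter's services:

```python
from unittest import mock

class UserCreator:
    """Toy service: persists a user, then notifies an external service."""

    def __init__(self, notifier):
        self.notifier = notifier
        self.saved = []  # stands in for the database

    def create_user(self, username):
        self.saved.append(username)  # the "database" write always happens
        try:
            self.notifier.user_created(username)  # best-effort side effect
        except Exception:
            pass  # external failure must not break user creation
        return username

# Component-style check: the external call fails, creation still succeeds
failing_notifier = mock.Mock()
failing_notifier.user_created.side_effect = ConnectionError("service down")
creator = UserCreator(failing_notifier)
result = creator.create_user("failureuser")
print(result)          # failureuser
print(creator.saved)   # ['failureuser']
failing_notifier.user_created.assert_called_once_with("failureuser")
```

The component tests above assert the same property at the HTTP boundary; this sketch just isolates the design decision that makes those assertions pass.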
Key Component Testing Concepts:
Test Containers is a powerful approach that uses Docker containers to provide real database and service instances for testing. Instead of mocking everything, you run actual databases, message queues, and other services in containers during tests.
Benefits of Test Containers:
# user_service/tests/test_containers.py
import pytest
import psycopg2
from testcontainers.postgres import PostgresContainer
from testcontainers.redis import RedisContainer
from testcontainers.compose import DockerCompose
from django.test import override_settings
from django.core.management import call_command
from django.db import connections
from user_service.services import UserService
@pytest.fixture(scope='session')
def postgres_container():
"""
Provide a PostgreSQL container for testing
This fixture starts a real PostgreSQL database in a Docker container.
The container is shared across all tests in the session for performance,
but each test should clean up its data.
"""
with PostgresContainer("postgres:13") as postgres:
# The container is now running and accessible
yield postgres
# Container is automatically stopped and removed after tests
@pytest.fixture(scope='session')
def redis_container():
"""
Provide a Redis container for testing
This gives us a real Redis instance for testing caching and
session functionality.
"""
with RedisContainer("redis:6") as redis:
yield redis
@pytest.fixture(scope='session')
def test_database(postgres_container):
"""
Set up test database with migrations
This fixture:
1. Configures Django to use the test PostgreSQL container
2. Runs database migrations
3. Provides a clean database for tests
"""
# Get connection details from the container
db_url = postgres_container.get_connection_url()
# Override Django database settings to use the container
with override_settings(
DATABASES={
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': postgres_container.POSTGRES_DB,
'USER': postgres_container.POSTGRES_USER,
'PASSWORD': postgres_container.POSTGRES_PASSWORD,
'HOST': postgres_container.get_container_host_ip(),
'PORT': postgres_container.get_exposed_port(5432),
}
}
):
# Run database migrations
call_command('migrate', verbosity=0, interactive=False)
yield
# Cleanup: Drop all tables after tests
# This ensures a clean state for other test sessions
call_command('flush', verbosity=0, interactive=False)
@pytest.fixture
def redis_cache(redis_container):
"""
Configure Django to use the Redis container for caching
"""
redis_url = f"redis://{redis_container.get_container_host_ip()}:{redis_container.get_exposed_port(6379)}/0"
with override_settings(
CACHES={
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': redis_url,
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
}
}
}
):
yield redis_url
@pytest.mark.integration
def test_user_service_with_real_database(test_database, postgres_container):
"""
Test user service with real PostgreSQL database
This test uses an actual PostgreSQL database running in a container,
giving us confidence that our code works with real database operations,
transactions, and constraints.
"""
service = UserService()
# Test user creation with real database
user_data = {
'username': 'dbuser',
'email': 'db@example.com',
'password': 'testpass123',
'first_name': 'Database',
'last_name': 'User'
}
# Act: Create user
user = service.create_user(user_data)
# Assert: Verify user was created
assert user.id is not None
assert user.username == 'dbuser'
# Verify user was actually saved to database by retrieving it
retrieved_user = service.get_user_by_id(user.id)
assert retrieved_user.username == 'dbuser'
assert retrieved_user.email == 'db@example.com'
    # Test database constraints (e.g., unique username)
    from django.db import IntegrityError
    duplicate_data = user_data.copy()
    # Assumes the service lets the database's unique-username violation
    # surface as IntegrityError; adjust if the service wraps the error
    with pytest.raises(IntegrityError):
        service.create_user(duplicate_data)
@pytest.mark.integration
def test_user_caching_with_real_redis(test_database, redis_cache):
"""
Test user caching with real Redis instance
This test verifies that caching works correctly with a real Redis
instance, including cache hits, misses, and invalidation.
"""
from django.core.cache import cache
service = UserService()
# Create a user
user_data = {
'username': 'cacheuser',
'email': 'cache@example.com',
'password': 'testpass123'
}
user = service.create_user(user_data)
# Test caching
cache_key = f'user:{user.id}'
# First access - should be a cache miss
cached_user = cache.get(cache_key)
assert cached_user is None
# Cache the user
cache.set(cache_key, user, 300) # Cache for 5 minutes
# Second access - should be a cache hit
cached_user = cache.get(cache_key)
assert cached_user is not None
assert cached_user.username == 'cacheuser'
# Test cache invalidation
cache.delete(cache_key)
cached_user = cache.get(cache_key)
assert cached_user is None
@pytest.fixture(scope='session')
def microservices_environment():
"""
Set up a complete microservices environment using Docker Compose
This fixture starts multiple services (database, cache, message queue)
that your microservice depends on, creating a realistic test environment.
"""
    with DockerCompose(".", compose_file_name="docker-compose.test.yml") as compose:
        # wait_for() polls its URL with HTTP requests, so it only suits HTTP
        # endpoints. RabbitMQ's management UI speaks HTTP; PostgreSQL and
        # Redis do not, so wait on them via a socket poll or a compose
        # healthcheck rather than an HTTP URL.
        compose.wait_for("http://localhost:15672")  # RabbitMQ management UI
yield compose
@pytest.mark.integration
def test_complete_user_workflow_with_dependencies(microservices_environment):
"""
Test complete user workflow with all dependencies running
This test runs against a complete microservices environment with
real databases, caches, and message queues.
"""
service = UserService()
# Test user creation with all dependencies
user_data = {
'username': 'fullworkflow',
'email': 'workflow@example.com',
'password': 'testpass123'
}
# This should:
# 1. Save user to PostgreSQL
# 2. Cache user data in Redis
# 3. Send message to RabbitMQ for notifications
user = service.create_user_with_full_workflow(user_data)
assert user.username == 'fullworkflow'
# Verify user is in database
from django.contrib.auth.models import User
db_user = User.objects.get(id=user.id)
assert db_user.username == 'fullworkflow'
# Verify user is cached
from django.core.cache import cache
cached_user = cache.get(f'user:{user.id}')
assert cached_user is not None
# Verify message was sent (you'd need to check RabbitMQ queue)
# This would require additional setup to inspect message queues
class TestUserServicePerformance:
"""
Performance tests using real databases
These tests verify that your service performs well under load
with real database operations.
"""
@pytest.mark.integration
@pytest.mark.slow
def test_bulk_user_creation_performance(self, test_database):
"""
Test performance of bulk user creation
This test creates many users and measures performance,
helping identify bottlenecks in database operations.
"""
import time
service = UserService()
# Prepare data for 1000 users
users_data = [
{
'username': f'perfuser{i}',
'email': f'perf{i}@example.com',
'password': 'testpass123'
}
for i in range(1000)
]
# Measure time for bulk creation
start_time = time.time()
created_users = service.create_users_bulk(users_data)
end_time = time.time()
# Assert performance expectations
creation_time = end_time - start_time
assert len(created_users) == 1000
assert creation_time < 10.0 # Should complete in under 10 seconds
# Verify all users were created correctly
from django.contrib.auth.models import User
assert User.objects.count() == 1000
@pytest.mark.integration
def test_concurrent_user_operations(self, test_database):
"""
Test concurrent user operations for race conditions
This test simulates multiple concurrent requests to identify
race conditions and database locking issues.
"""
import threading
import time
service = UserService()
results = []
errors = []
def create_user(index):
try:
user_data = {
'username': f'concurrent{index}',
'email': f'concurrent{index}@example.com',
'password': 'testpass123'
}
user = service.create_user(user_data)
results.append(user)
except Exception as e:
errors.append(e)
# Create 10 threads to simulate concurrent requests
threads = []
for i in range(10):
thread = threading.Thread(target=create_user, args=(i,))
threads.append(thread)
# Start all threads
for thread in threads:
thread.start()
# Wait for all threads to complete
for thread in threads:
thread.join()
# Assert all operations completed successfully
assert len(results) == 10
assert len(errors) == 0
# Verify all users were created with unique usernames
usernames = [user.username for user in results]
assert len(set(usernames)) == 10 # All unique
Docker Compose for Test Environment:
# docker-compose.test.yml
version: '3.8'
services:
test-postgres:
image: postgres:13
environment:
POSTGRES_DB: test_db
POSTGRES_USER: test_user
POSTGRES_PASSWORD: test_pass
ports:
- "5432:5432"
test-redis:
image: redis:6
ports:
- "6379:6379"
test-rabbitmq:
image: rabbitmq:3.11-management
environment:
RABBITMQ_DEFAULT_USER: test_user
RABBITMQ_DEFAULT_PASS: test_pass
ports:
- "5672:5672"
- "15672:15672"
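Readiness can also be handled by Compose itself: give each container a healthcheck and have the test harness wait for a healthy state instead of polling from Python. A sketch for the PostgreSQL service (check commands depend on the image; timings are illustrative):

```yaml
services:
  test-postgres:
    image: postgres:13
    environment:
      POSTGRES_DB: test_db
      POSTGRES_USER: test_user
      POSTGRES_PASSWORD: test_pass
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test_user -d test_db"]
      interval: 2s
      timeout: 3s
      retries: 15
```

`pg_isready` exits non-zero until PostgreSQL accepts connections, so the container only reports healthy once the database is actually usable.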
Running Container Tests:
# Install testcontainers
pip install "testcontainers[postgresql,redis]"  # quotes keep the shell from globbing the brackets
# Run integration tests
pytest -m integration
# Run slow performance tests
pytest -m slow
# Run all container tests
pytest user_service/tests/test_containers.py -v
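The `-m integration` and `-m slow` selectors rely on those markers being registered; otherwise recent pytest versions warn about unknown marks. A minimal `pytest.ini` sketch:

```ini
[pytest]
markers =
    integration: tests that require Docker containers or external services
    slow: long-running tests (e.g. bulk and performance tests)
```

With this in place you can also exclude the heavy tests from a quick local run with `pytest -m "not integration and not slow"`.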
Best Practices for Container Testing:
End-to-end (E2E) tests verify that your entire system works correctly by testing complete user journeys across multiple services. These tests are the closest to how real users interact with your system, but they're also the most complex and fragile.
What E2E Tests Do:
Challenges of E2E Testing:
Best Practices:
# tests/e2e/test_user_order_flow.py
import pytest
import requests
import time
import json
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timedelta
class TestUserOrderFlow:
"""
End-to-end tests for user and order services interaction
These tests verify complete business workflows that span multiple
services. They test the system as a whole, from a user's perspective.
"""
@pytest.fixture(scope='class')
def services_ready(self):
"""
Wait for all services to be ready before running tests
This fixture ensures that all required services are running
and responding before we start our E2E tests.
"""
services = [
('User Service', 'http://user-service:8000/health/'),
('Order Service', 'http://order-service:8001/health/'),
('Payment Service', 'http://payment-service:8002/health/'),
('Inventory Service', 'http://inventory-service:8003/health/'),
('Notification Service', 'http://notification-service:8004/health/')
]
for service_name, health_url in services:
self.wait_for_service(service_name, health_url)
yield
def wait_for_service(self, service_name, url, timeout=60):
"""
Wait for a service to be ready
This method polls a service's health endpoint until it responds
successfully or the timeout is reached.
"""
print(f"Waiting for {service_name} to be ready...")
start_time = time.time()
while time.time() - start_time < timeout:
try:
response = requests.get(url, timeout=5)
if response.status_code == 200:
print(f"✓ {service_name} is ready")
return
except requests.RequestException as e:
print(f" {service_name} not ready yet: {e}")
time.sleep(2) # Wait 2 seconds before retrying
        raise TimeoutError(f"{service_name} not ready after {timeout} seconds")
def test_complete_user_order_flow(self, services_ready):
"""
Test complete flow from user creation to order completion
This test simulates a real user journey:
1. User registers on the platform
2. User browses products
3. User adds items to cart
4. User places an order
5. Payment is processed
6. Order is confirmed
7. User receives confirmation email
This tests the integration between User, Order, Payment,
Inventory, and Notification services.
"""
print("Starting complete user order flow test...")
# Step 1: Create a new user
print("Step 1: Creating user...")
user_data = {
'username': 'e2euser',
'email': 'e2e@example.com',
'password': 'testpass123',
'first_name': 'E2E',
'last_name': 'User'
}
user_response = requests.post(
'http://user-service:8000/api/users/',
json=user_data,
timeout=10
)
assert user_response.status_code == 201, f"User creation failed: {user_response.text}"
user = user_response.json()
user_id = user['id']
print(f"✓ User created with ID: {user_id}")
# Step 2: Authenticate the user
print("Step 2: Authenticating user...")
auth_response = requests.post(
'http://user-service:8000/api/auth/login/',
json={
'username': 'e2euser',
'password': 'testpass123'
},
timeout=10
)
assert auth_response.status_code == 200, f"Authentication failed: {auth_response.text}"
auth_data = auth_response.json()
auth_token = auth_data.get('token') or auth_data.get('access_token')
assert auth_token, "No authentication token received"
headers = {'Authorization': f'Bearer {auth_token}'}
print("✓ User authenticated successfully")
# Step 3: Check available products
print("Step 3: Checking available products...")
products_response = requests.get(
'http://inventory-service:8003/api/products/',
headers=headers,
timeout=10
)
assert products_response.status_code == 200, f"Product fetch failed: {products_response.text}"
products = products_response.json()
assert len(products) > 0, "No products available"
print(f"✓ Found {len(products)} available products")
# Step 4: Create an order with multiple items
print("Step 4: Creating order...")
order_data = {
'user_id': user_id,
'items': [
{
'product_id': products[0]['id'],
'quantity': 2,
'price': products[0]['price']
},
{
'product_id': products[1]['id'] if len(products) > 1 else products[0]['id'],
'quantity': 1,
'price': products[1]['price'] if len(products) > 1 else products[0]['price']
}
],
'shipping_address': {
'street': '123 Test Street',
'city': 'Test City',
'state': 'TS',
'zip_code': '12345',
'country': 'US'
}
}
order_response = requests.post(
'http://order-service:8001/api/orders/',
json=order_data,
headers=headers,
timeout=15
)
assert order_response.status_code == 201, f"Order creation failed: {order_response.text}"
order = order_response.json()
order_id = order['id']
print(f"✓ Order created with ID: {order_id}")
# Step 5: Process payment
print("Step 5: Processing payment...")
payment_data = {
'order_id': order_id,
'payment_method': 'credit_card',
'card_details': {
'card_number': '4111111111111111', # Test card number
'expiry_month': '12',
'expiry_year': '2025',
'cvv': '123'
},
'billing_address': order_data['shipping_address']
}
payment_response = requests.post(
'http://payment-service:8002/api/payments/',
json=payment_data,
headers=headers,
timeout=20 # Payment processing might take longer
)
assert payment_response.status_code == 200, f"Payment failed: {payment_response.text}"
payment = payment_response.json()
assert payment['status'] == 'completed', f"Payment not completed: {payment}"
print(f"✓ Payment processed successfully: {payment['payment_id']}")
# Step 6: Verify order status updated
print("Step 6: Verifying order status...")
# Wait a moment for async processing
time.sleep(3)
order_status_response = requests.get(
f'http://order-service:8001/api/orders/{order_id}/',
headers=headers,
timeout=10
)
assert order_status_response.status_code == 200, f"Order status check failed: {order_status_response.text}"
updated_order = order_status_response.json()
assert updated_order['status'] in ['confirmed', 'processing'], f"Unexpected order status: {updated_order['status']}"
print(f"✓ Order status updated to: {updated_order['status']}")
# Step 7: Verify inventory was updated
print("Step 7: Verifying inventory updates...")
for item in order_data['items']:
inventory_response = requests.get(
f'http://inventory-service:8003/api/products/{item["product_id"]}/',
headers=headers,
timeout=10
)
assert inventory_response.status_code == 200
product = inventory_response.json()
# Inventory should be reduced (we can't check exact numbers without knowing initial stock)
assert 'stock_quantity' in product
print("✓ Inventory updates verified")
# Step 8: Check that notification was sent
print("Step 8: Verifying notification was sent...")
# Check user's notifications
notifications_response = requests.get(
f'http://notification-service:8004/api/users/{user_id}/notifications/',
headers=headers,
timeout=10
)
if notifications_response.status_code == 200:
notifications = notifications_response.json()
order_notifications = [
n for n in notifications
if 'order' in n.get('type', '').lower() and str(order_id) in n.get('message', '')
]
assert len(order_notifications) > 0, "No order confirmation notification found"
print("✓ Order confirmation notification sent")
else:
print("⚠ Could not verify notifications (service might not be implemented)")
print("🎉 Complete user order flow test passed!")
def test_concurrent_user_operations(self, services_ready):
"""
Test concurrent operations across services
This test simulates multiple users performing operations simultaneously
to verify that the system handles concurrent load correctly.
"""
print("Starting concurrent user operations test...")
def create_user_and_order(user_index):
"""
Create a user and place an order
This function will be run concurrently by multiple threads
to simulate real-world concurrent usage.
"""
try:
# Create user
user_data = {
'username': f'concurrent_user_{user_index}',
'email': f'concurrent_{user_index}@example.com',
'password': 'testpass123',
'first_name': f'User{user_index}',
'last_name': 'Concurrent'
}
user_response = requests.post(
'http://user-service:8000/api/users/',
json=user_data,
timeout=10
)
if user_response.status_code != 201:
return {'success': False, 'error': f'User creation failed: {user_response.text}'}
user = user_response.json()
# Authenticate user
auth_response = requests.post(
'http://user-service:8000/api/auth/login/',
json={
'username': user_data['username'],
'password': user_data['password']
},
timeout=10
)
if auth_response.status_code != 200:
return {'success': False, 'error': f'Authentication failed: {auth_response.text}'}
auth_data = auth_response.json()
token = auth_data.get('token') or auth_data.get('access_token')
headers = {'Authorization': f'Bearer {token}'}
# Create order
order_data = {
'user_id': user['id'],
'items': [
{
'product_id': 1, # Assuming product 1 exists
'quantity': 1,
'price': 19.99
}
],
'shipping_address': {
'street': f'{user_index} Test Street',
'city': 'Test City',
'state': 'TS',
'zip_code': '12345',
'country': 'US'
}
}
order_response = requests.post(
'http://order-service:8001/api/orders/',
json=order_data,
headers=headers,
timeout=15
)
if order_response.status_code != 201:
return {'success': False, 'error': f'Order creation failed: {order_response.text}'}
order = order_response.json()
return {
'success': True,
'user_id': user['id'],
'order_id': order['id'],
'user_index': user_index
}
except Exception as e:
return {'success': False, 'error': str(e)}
# Execute concurrent operations
print("Creating 10 concurrent users and orders...")
with ThreadPoolExecutor(max_workers=5) as executor:
# Submit 10 concurrent operations
futures = [
executor.submit(create_user_and_order, i)
for i in range(10)
]
# Collect results
results = [future.result() for future in futures]
# Analyze results
successful_operations = [r for r in results if r['success']]
failed_operations = [r for r in results if not r['success']]
print(f"Successful operations: {len(successful_operations)}")
print(f"Failed operations: {len(failed_operations)}")
# Print failure details
for failure in failed_operations:
print(f" Failed: {failure['error']}")
# Assert that most operations succeeded
# We allow some failures due to race conditions or resource constraints
success_rate = len(successful_operations) / len(results)
assert success_rate >= 0.8, f"Success rate too low: {success_rate:.2%}"
print(f"✓ Concurrent operations test passed with {success_rate:.2%} success rate")
def test_service_failure_resilience(self, services_ready):
"""
Test system resilience when individual services fail
This test verifies that the system can handle partial failures
gracefully without completely breaking down.
"""
print("Testing service failure resilience...")
# This test would require the ability to stop/start services
# In a real environment, you might use chaos engineering tools
# like Chaos Monkey to randomly fail services
# For now, we'll test timeout handling
print("Testing timeout handling...")
# Create a user first
user_data = {
'username': 'resilience_user',
'email': 'resilience@example.com',
'password': 'testpass123'
}
user_response = requests.post(
'http://user-service:8000/api/users/',
json=user_data,
timeout=10
)
assert user_response.status_code == 201
# Test with very short timeout to simulate service failure
try:
requests.get(
'http://order-service:8001/api/orders/',
timeout=0.001 # Extremely short timeout
)
assert False, "Expected timeout exception"
except requests.Timeout:
print("✓ Timeout handling works correctly")
print("✓ Service failure resilience test completed")
# Helper functions for E2E test setup
def setup_test_data():
"""
Set up test data needed for E2E tests
This function creates any necessary test data like products,
categories, or configuration needed for E2E tests.
"""
# Create test products
products_data = [
{
'name': 'Test Product 1',
'price': 29.99,
'stock_quantity': 100,
'description': 'A test product for E2E testing'
},
{
'name': 'Test Product 2',
'price': 49.99,
'stock_quantity': 50,
'description': 'Another test product for E2E testing'
}
]
    for product_data in products_data:
        requests.post(
            'http://inventory-service:8003/api/products/',
            json=product_data,
            headers={'Authorization': 'Bearer admin-token'},  # Admin token
            timeout=10
        )
def cleanup_test_data():
"""
Clean up test data after E2E tests
This function removes any test data created during E2E tests
to ensure a clean state for subsequent test runs.
"""
# Clean up test users
test_users = requests.get(
'http://user-service:8000/api/users/?username__startswith=e2e',
headers={'Authorization': 'Bearer admin-token'}
)
if test_users.status_code == 200:
for user in test_users.json().get('results', []):
requests.delete(
f'http://user-service:8000/api/users/{user["id"]}/',
headers={'Authorization': 'Bearer admin-token'}
)
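One way to guarantee that `setup_test_data` and `cleanup_test_data` always run as a pair, even when a test fails midway, is a context manager (in a pytest suite you would typically wrap the same logic in a session-scoped fixture). The stubs below simply record call order to demonstrate the guarantee:

```python
from contextlib import contextmanager

calls = []

def setup_test_data():
    calls.append("setup")

def cleanup_test_data():
    calls.append("cleanup")

@contextmanager
def e2e_test_data():
    setup_test_data()
    try:
        yield
    finally:
        cleanup_test_data()  # runs even if the body raises

with e2e_test_data():
    calls.append("test")  # stand-in for the actual E2E test body

assert calls == ["setup", "test", "cleanup"]
```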
Running E2E Tests:
# Start all services first
docker-compose up -d
# Wait for services to be ready
sleep 30
# Run E2E tests
pytest tests/e2e/ -v -s
# Run with specific markers
pytest tests/e2e/ -m "not slow" -v
# Run single test
pytest tests/e2e/test_user_order_flow.py::TestUserOrderFlow::test_complete_user_order_flow -v -s
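The fixed `sleep 30` above is a blunt instrument: it wastes time when services come up quickly and still fails when they are slow. Polling is more reliable. A minimal stdlib-only polling helper (a sketch — in practice the predicate would issue a `requests.get` against each service's `/health/` endpoint):

```python
import time

def wait_for(predicate, timeout=30.0, interval=0.5):
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Simulated health check that only passes on the third poll
state = {"checks": 0}
def fake_health_check():
    state["checks"] += 1
    return state["checks"] >= 3

assert wait_for(fake_health_check, timeout=5, interval=0.01) is True
assert state["checks"] == 3
```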
E2E Test Environment Setup:
# docker-compose.e2e.yml
version: '3.8'
services:
user-service:
build: ./user_service
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgresql://postgres:postgres@postgres:5432/e2e_db
- REDIS_URL=redis://redis:6379/0
depends_on:
- postgres
- redis
order-service:
build: ./order_service
ports:
- "8001:8000"
environment:
- DATABASE_URL=postgresql://postgres:postgres@postgres:5432/e2e_db
- USER_SERVICE_URL=http://user-service:8000
depends_on:
- postgres
- user-service
# ... other services
postgres:
image: postgres:13
environment:
POSTGRES_DB: e2e_db
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
redis:
image: redis:7-alpine
Best Practices for E2E Testing:
- Keep the E2E suite small: cover only the critical user journeys, and push detailed edge cases down to unit and integration tests.
- Make each test independent: create its own data and clean up afterwards so tests can run in any order.
- Poll health endpoints rather than using fixed sleeps when waiting for services to start.
- Run against an environment that mirrors production topology (same compose file, same image versions).
- Treat flaky tests as bugs: an E2E suite the team does not trust will be ignored.
Load testing verifies that your microservices can handle expected (and unexpected) levels of traffic. While functional tests verify that your code works correctly, load tests verify that it works correctly under stress.
In microservices, load testing is critical because:
- A bottleneck in one service cascades: slow responses tie up connection pools and workers in every upstream caller.
- Requests fan out across the network, so end-to-end latency is dominated by the slowest dependency in the chain.
- Services scale independently, so you need per-service numbers to know which ones to scale.
Key metrics to monitor:
- Response time percentiles (p50, p95, p99) per endpoint
- Requests per second (throughput)
- Error rate
- Resource usage (CPU, memory, database connections) for each service
Locust is a Python-based load testing tool that's perfect for testing microservices. It allows you to define user behavior in Python code and simulate thousands of concurrent users.
# tests/load/locustfile.py
from locust import HttpUser, task, between, events
import json
import random
import logging
# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class UserServiceLoadTest(HttpUser):
"""
Load test for User Service
This class simulates user behavior for the User Service.
Each instance represents a virtual user performing various operations.
"""
# Wait time between tasks (simulates user think time)
wait_time = between(1, 3) # Wait 1-3 seconds between requests
def on_start(self):
"""
Called when a simulated user starts
This method runs once per user when they start their session.
Use it to set up any user-specific data or perform login.
"""
self.user_id = None
self.auth_token = None
# Create a user and authenticate
self.create_and_authenticate_user()
def create_and_authenticate_user(self):
"""Create a user and get authentication token"""
# Generate unique username
username = f'loadtest_user_{random.randint(1000, 999999)}'
# Create user
user_data = {
'username': username,
'email': f'{username}@example.com',
'password': 'testpass123',
'first_name': 'Load',
'last_name': 'Test'
}
with self.client.post(
'/api/users/',
json=user_data,
catch_response=True,
name="Create User"
) as response:
if response.status_code == 201:
user = response.json()
self.user_id = user['id']
response.success()
else:
response.failure(f"Failed to create user: {response.text}")
return
# Authenticate
auth_data = {
'username': username,
'password': 'testpass123'
}
with self.client.post(
'/api/auth/login/',
json=auth_data,
catch_response=True,
name="Login"
) as response:
if response.status_code == 200:
auth = response.json()
self.auth_token = auth.get('token') or auth.get('access_token')
response.success()
else:
response.failure(f"Failed to authenticate: {response.text}")
@property
def headers(self):
"""Get authentication headers"""
if self.auth_token:
return {'Authorization': f'Bearer {self.auth_token}'}
return {}
@task(5) # Weight: 5 (runs 5x more often than weight 1 tasks)
def get_user_list(self):
"""
Test user list endpoint
This simulates users browsing the user list.
The @task decorator with weight 5 means this runs more frequently.
"""
with self.client.get(
'/api/users/',
headers=self.headers,
catch_response=True,
name="Get User List"
) as response:
if response.status_code == 200:
data = response.json()
if 'results' in data:
response.success()
else:
response.failure("Invalid response format")
else:
response.failure(f"Status code: {response.status_code}")
@task(3)
def get_user_detail(self):
"""
Test user detail endpoint
This simulates users viewing profile details.
"""
if not self.user_id:
return
with self.client.get(
f'/api/users/{self.user_id}/',
headers=self.headers,
catch_response=True,
name="Get User Detail"
) as response:
if response.status_code == 200:
user = response.json()
if 'username' in user:
response.success()
else:
response.failure("Invalid user data")
else:
response.failure(f"Status code: {response.status_code}")
@task(2)
def update_user(self):
"""
Test user update endpoint
This simulates users updating their profiles.
"""
if not self.user_id:
return
update_data = {
'first_name': f'Updated_{random.randint(100, 999)}',
'bio': f'Updated bio at {random.randint(1000, 9999)}'
}
with self.client.patch(
f'/api/users/{self.user_id}/',
json=update_data,
headers=self.headers,
catch_response=True,
name="Update User"
) as response:
if response.status_code == 200:
response.success()
else:
response.failure(f"Status code: {response.status_code}")
@task(1)
def search_users(self):
"""
Test user search functionality
This simulates users searching for other users.
"""
search_terms = ['test', 'user', 'load', 'john', 'jane']
search_term = random.choice(search_terms)
with self.client.get(
f'/api/users/?search={search_term}',
headers=self.headers,
catch_response=True,
name="Search Users"
) as response:
if response.status_code == 200:
response.success()
else:
response.failure(f"Status code: {response.status_code}")
class OrderServiceLoadTest(HttpUser):
"""
Load test for Order Service
This simulates realistic order creation and management patterns.
"""
wait_time = between(2, 5) # Orders take longer to think about
def on_start(self):
"""Set up user and authentication"""
self.user_id = None
self.auth_token = None
self.order_ids = []
# Create and authenticate user
self.create_user()
def create_user(self):
"""Create a user for order testing"""
username = f'order_user_{random.randint(1000, 999999)}'
user_data = {
'username': username,
'email': f'{username}@example.com',
'password': 'testpass123'
}
response = self.client.post('/api/users/', json=user_data)
if response.status_code == 201:
user = response.json()
self.user_id = user['id']
# Authenticate
auth_response = self.client.post(
'/api/auth/login/',
json={'username': username, 'password': 'testpass123'}
)
if auth_response.status_code == 200:
auth = auth_response.json()
self.auth_token = auth.get('token') or auth.get('access_token')
@property
def headers(self):
"""Get authentication headers"""
if self.auth_token:
return {'Authorization': f'Bearer {self.auth_token}'}
return {}
@task(3)
def create_order(self):
"""
Test order creation with realistic data
This simulates users placing orders with varying numbers of items.
"""
if not self.user_id:
return
# Generate realistic order data
num_items = random.randint(1, 5)
items = [
{
'product_id': random.randint(1, 100),
'quantity': random.randint(1, 3),
'price': round(random.uniform(10.0, 200.0), 2)
}
for _ in range(num_items)
]
order_data = {
'user_id': self.user_id,
'items': items,
'shipping_address': {
'street': f'{random.randint(100, 9999)} Test Street',
'city': 'Test City',
'state': 'TS',
'zip_code': f'{random.randint(10000, 99999)}',
'country': 'US'
}
}
with self.client.post(
'/api/orders/',
json=order_data,
headers=self.headers,
catch_response=True,
name="Create Order"
) as response:
if response.status_code == 201:
order = response.json()
self.order_ids.append(order['id'])
response.success()
else:
response.failure(f"Failed to create order: {response.text}")
@task(5)
def get_user_orders(self):
"""
Test retrieving user's orders
This simulates users checking their order history.
"""
if not self.user_id:
return
with self.client.get(
f'/api/users/{self.user_id}/orders/',
headers=self.headers,
catch_response=True,
name="Get User Orders"
) as response:
if response.status_code == 200:
response.success()
else:
response.failure(f"Status code: {response.status_code}")
@task(2)
def get_order_detail(self):
"""
Test retrieving order details
This simulates users checking specific order details.
"""
if not self.order_ids:
return
order_id = random.choice(self.order_ids)
with self.client.get(
f'/api/orders/{order_id}/',
headers=self.headers,
catch_response=True,
name="Get Order Detail"
) as response:
if response.status_code == 200:
response.success()
else:
response.failure(f"Status code: {response.status_code}")
# Event handlers for custom metrics
@events.request.add_listener
def on_request(request_type, name, response_time, response_length, exception, **kwargs):
"""
Custom request handler for additional logging
This handler is called for every request and can be used to
collect custom metrics or log specific events.
"""
if exception:
logger.error(f"Request failed: {name} - {exception}")
elif response_time > 2000: # Log slow requests (> 2 seconds)
logger.warning(f"Slow request: {name} took {response_time}ms")
@events.test_start.add_listener
def on_test_start(environment, **kwargs):
"""
Called when load test starts
Use this to set up any test-wide configuration or logging.
"""
logger.info("Load test starting...")
logger.info(f"Target host: {environment.host}")
@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
"""
Called when load test stops
Use this to clean up or generate final reports.
"""
logger.info("Load test completed")
logger.info(f"Total requests: {environment.stats.total.num_requests}")
logger.info(f"Total failures: {environment.stats.total.num_failures}")
Running Load Tests:
# Install Locust
pip install locust
# Run load test with web UI
locust -f tests/load/locustfile.py --host=http://localhost:8000
# Then open http://localhost:8089 in your browser to configure and start the test
# Run load test from command line (headless)
locust -f tests/load/locustfile.py \
--host=http://localhost:8000 \
--users 100 \
--spawn-rate 10 \
--run-time 5m \
--headless
# Run with specific user class
locust -f tests/load/locustfile.py \
--host=http://localhost:8000 \
UserServiceLoadTest
# Generate HTML report
locust -f tests/load/locustfile.py \
--host=http://localhost:8000 \
--users 100 \
--spawn-rate 10 \
--run-time 5m \
--headless \
--html=load_test_report.html
Interpreting Load Test Results:
Key metrics to analyze:
1. Response Time Percentiles:
- p50 (median): 50% of requests are faster than this
- p95: 95% of requests are faster than this
- p99: 99% of requests are faster than this
Example: If p95 is 500ms, 95% of users get responses in under 500ms
2. Requests per Second (RPS):
- Total throughput your system can handle
- Should increase linearly with users (up to a point)
3. Failure Rate:
- Percentage of failed requests
- Should be < 1% under normal load
- Spikes indicate system breaking points
4. Response Time vs Users:
- Response time should stay relatively flat as users increase
- Sharp increases indicate bottlenecks
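These percentiles are easy to recompute offline from raw samples, which helps when comparing two runs. A short sketch using synthetic response times (the numbers below are simulated, not from a real load test):

```python
import random
import statistics

random.seed(42)
# Simulated response times in ms (a stand-in for Locust's recorded samples)
samples = [random.gauss(200, 50) for _ in range(10_000)]

# quantiles(n=100) returns 99 cut points; indices 49, 94, 98 are p50, p95, p99
cuts = statistics.quantiles(samples, n=100, method="inclusive")
p50, p95, p99 = cuts[49], cuts[94], cuts[98]

print(f"p50={p50:.0f}ms  p95={p95:.0f}ms  p99={p99:.0f}ms")
assert p50 < p95 < p99  # higher percentiles are always at least as large
```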
Load Testing Best Practices:
- Test against an environment as close to production as possible; numbers from a laptop do not transfer.
- Ramp up load gradually so you can see exactly where performance starts to degrade.
- Use realistic data volumes and user behavior, including think time between requests.
- Monitor the services under test (CPU, memory, database) during the run, not just client-side metrics.
- Record a baseline and re-run the same scenario after changes to catch performance regressions.
Types of Load Tests:
# Spike Test: Sudden increase in load
# Run with: --users 1000 --spawn-rate 100 --run-time 2m
# Soak Test: Sustained load over time
# Run with: --users 100 --spawn-rate 10 --run-time 2h
# Stress Test: Find breaking point
# Run with: --users 5000 --spawn-rate 50 --run-time 10m
# Scalability Test: Verify horizontal scaling
# Run against multiple instances and compare results
That covers load testing; what remains is wiring all of these test types into a repeatable configuration and CI pipeline.
Setting up proper testing configuration and continuous integration is crucial for maintaining code quality in microservices. This section covers how to configure your testing environment and integrate tests into your development workflow.
Pytest is highly configurable and can be customized to work perfectly with Django microservices. Here's a comprehensive configuration:
# pytest.ini
# Note: in pytest.ini the section header is [pytest]; the [tool:pytest]
# form is only valid in setup.cfg.
[pytest]
# Django settings module for tests (requires the pytest-django plugin)
DJANGO_SETTINGS_MODULE = user_service.settings.test
# Test discovery patterns
python_files = tests.py test_*.py *_tests.py
python_classes = Test*
python_functions = test_*
# Command line options that are always applied.
# Inline comments are not allowed inside a multi-line ini value (they would
# be passed to pytest as arguments), so each flag is explained here instead:
#   --tb=short             shorter traceback format
#   --strict-markers       treat unknown markers as errors
#   --strict-config        treat config errors as errors
#   --cov=user_service     measure coverage for the user_service package
#   --cov-report=xml       XML coverage report (for CI)
#   --cov-fail-under=80    fail if coverage drops below 80%
#   --maxfail=5            stop after 5 failures
#   --durations=10         show the 10 slowest tests
addopts =
    --verbose
    --tb=short
    --strict-markers
    --strict-config
    --disable-warnings
    --cov=user_service
    --cov-report=term-missing
    --cov-report=html
    --cov-report=xml
    --cov-fail-under=80
    --maxfail=5
    --durations=10
# Test markers for categorizing tests
markers =
unit: Unit tests (fast, isolated)
integration: Integration tests (database, external services)
contract: Contract tests (API agreements)
e2e: End-to-end tests (full system)
slow: Slow running tests (> 1 second)
smoke: Smoke tests (basic functionality)
regression: Regression tests (bug fixes)
security: Security-related tests
performance: Performance tests
# Minimum Python version
minversion = 6.0
# Test paths
testpaths = tests
# Ignore certain directories
norecursedirs =
.git
.tox
dist
build
*.egg
node_modules
.venv
venv
Create separate settings for testing to ensure tests run in isolation:
# user_service/settings/test.py
"""
Test-specific Django settings
These settings optimize Django for testing by:
- Using faster password hashers
- Disabling migrations
- Using in-memory databases
- Disabling caching
- Simplifying logging
"""
from .base import *
# Test database - use SQLite in memory for speed
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': ':memory:', # In-memory database for speed
'OPTIONS': {
'timeout': 20,
}
}
}
# Alternative: Use PostgreSQL for more realistic testing
# DATABASES = {
# 'default': {
# 'ENGINE': 'django.db.backends.postgresql',
# 'NAME': 'test_user_service',
# 'USER': 'test_user',
# 'PASSWORD': 'test_password',
# 'HOST': 'localhost',
# 'PORT': '5432',
# 'TEST': {
# 'NAME': 'test_user_service',
# }
# }
# }
# Disable migrations for faster tests
class DisableMigrations:
"""
Disable Django migrations during testing
This speeds up tests by not running migrations.
Instead, Django creates tables directly from models.
"""
def __contains__(self, item):
return True
def __getitem__(self, item):
return None
MIGRATION_MODULES = DisableMigrations()
# Use faster password hashers for tests
PASSWORD_HASHERS = [
'django.contrib.auth.hashers.MD5PasswordHasher', # Fast but insecure (OK for tests)
]
# Disable caching during tests
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
}
}
# Test Celery settings - run tasks synchronously
CELERY_TASK_ALWAYS_EAGER = True # Execute tasks immediately
CELERY_TASK_EAGER_PROPAGATES = True # Propagate exceptions
CELERY_BROKER_URL = 'memory://' # Use in-memory broker
# Disable logging during tests (unless debugging)
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'null': {
'class': 'logging.NullHandler',
},
},
'root': {
'handlers': ['null'],
},
'loggers': {
'django': {
'handlers': ['null'],
'propagate': False,
},
'user_service': {
'handlers': ['null'],
'propagate': False,
}
}
}
# Test-specific settings
DEBUG = False  # Disable debug mode; template debugging follows DEBUG (the old TEMPLATE_DEBUG setting was removed in Django 1.10)
EMAIL_BACKEND = 'django.core.mail.backends.locmem.EmailBackend' # In-memory email
SECRET_KEY = 'test-secret-key-not-for-production'
# Media files for tests
MEDIA_ROOT = '/tmp/test_media'
# Static files
STATIC_ROOT = '/tmp/test_static'
# Security settings (can be relaxed for tests)
ALLOWED_HOSTS = ['*']
CORS_ALLOW_ALL_ORIGINS = True
# Test-specific middleware (remove unnecessary middleware)
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
]
# Disable external service calls during tests
EXTERNAL_SERVICES = {
'user_service_url': 'http://mock-user-service',
'order_service_url': 'http://mock-order-service',
'payment_service_url': 'http://mock-payment-service',
}
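Pointing `EXTERNAL_SERVICES` at mock hosts only helps if the code under test lets you substitute the transport. A common pattern is to inject the HTTP session into a thin client class so tests can replace it with a `Mock` and never touch the network. `OrderServiceClient` below is a hypothetical sketch, not part of the services shown earlier:

```python
from unittest import mock

class OrderServiceClient:
    """Thin service client; the HTTP session is injected so tests can stub it."""
    def __init__(self, base_url, session):
        self.base_url = base_url
        self.session = session

    def get_order(self, order_id):
        resp = self.session.get(f"{self.base_url}/api/orders/{order_id}/", timeout=5)
        resp.raise_for_status()
        return resp.json()

# In tests there is no network call: the session is a Mock returning canned data
fake_session = mock.Mock()
fake_session.get.return_value = mock.Mock(
    status_code=200,
    json=mock.Mock(return_value={"id": 7, "status": "pending"}),
    raise_for_status=mock.Mock(),
)

client = OrderServiceClient("http://mock-order-service", fake_session)
order = client.get_order(7)
assert order["status"] == "pending"
fake_session.get.assert_called_once_with(
    "http://mock-order-service/api/orders/7/", timeout=5
)
```

Libraries such as `responses` or `requests-mock` achieve the same effect by intercepting `requests` at the adapter level instead of injecting the session.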
Here's a comprehensive CI/CD pipeline that runs all types of tests:
# .github/workflows/test.yml
name: Test Microservices
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
env:
PYTHON_VERSION: '3.11'
jobs:
# Lint and code quality checks
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 black isort mypy
pip install -r requirements.txt
- name: Lint with flake8
run: |
# Stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# Exit-zero treats all errors as warnings
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=88 --statistics
- name: Check code formatting with black
run: black --check .
- name: Check import sorting with isort
run: isort --check-only .
- name: Type checking with mypy
run: mypy user_service --ignore-missing-imports
# Unit tests for each service
unit-tests:
runs-on: ubuntu-latest
strategy:
matrix:
service: [user-service, order-service, payment-service]
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Cache pip dependencies
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install dependencies
run: |
cd ${{ matrix.service }}
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Run unit tests
run: |
cd ${{ matrix.service }}
pytest tests/unit/ -m "unit and not slow" \
--cov --cov-report=xml \
--junitxml=test-results.xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
with:
file: ${{ matrix.service }}/coverage.xml
flags: ${{ matrix.service }}-unit
name: ${{ matrix.service }}-unit-coverage
- name: Upload test results
uses: actions/upload-artifact@v3
if: always()
with:
name: test-results-${{ matrix.service }}
path: ${{ matrix.service }}/test-results.xml
# Integration tests with real databases
integration-tests:
runs-on: ubuntu-latest
needs: unit-tests
services:
postgres:
image: postgres:13
env:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: test_db
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
redis:
image: redis:6
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 6379:6379
rabbitmq:
image: rabbitmq:3.11
env:
RABBITMQ_DEFAULT_USER: test
RABBITMQ_DEFAULT_PASS: test
options: >-
--health-cmd "rabbitmq-diagnostics -q ping"
--health-interval 30s
--health-timeout 30s
--health-retries 3
ports:
- 5672:5672
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install dependencies
run: |
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Run integration tests
env:
DATABASE_URL: postgres://postgres:postgres@localhost:5432/test_db
REDIS_URL: redis://localhost:6379/0
RABBITMQ_URL: amqp://test:test@localhost:5672//
run: |
pytest tests/integration/ -m "integration" \
--cov --cov-report=xml \
--maxfail=3
- name: Upload integration coverage
uses: codecov/codecov-action@v3
with:
file: coverage.xml
flags: integration
name: integration-coverage
# Contract tests
contract-tests:
runs-on: ubuntu-latest
needs: integration-tests
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install Pact
run: |
pip install pact-python
pip install -r requirements.txt
- name: Run consumer contract tests
run: |
pytest tests/contract/ -m "contract"
- name: Publish Pacts
if: github.ref == 'refs/heads/main'
run: |
pact-broker publish pacts/ \
--broker-base-url ${{ secrets.PACT_BROKER_URL }} \
--broker-username ${{ secrets.PACT_BROKER_USERNAME }} \
--broker-password ${{ secrets.PACT_BROKER_PASSWORD }}
# End-to-end tests
e2e-tests:
runs-on: ubuntu-latest
needs: contract-tests
if: github.ref == 'refs/heads/main' || github.event_name == 'pull_request'
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Start services with Docker Compose
run: |
docker-compose -f docker-compose.test.yml up -d
sleep 60 # Wait for services to be ready
- name: Check service health
run: |
curl -f http://localhost:8000/health/ || exit 1
curl -f http://localhost:8001/health/ || exit 1
curl -f http://localhost:8002/health/ || exit 1
- name: Set up Python for E2E tests
uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install E2E test dependencies
run: |
pip install pytest requests
- name: Run E2E tests
run: |
pytest tests/e2e/ -m "e2e" \
--maxfail=1 \
-v
- name: Collect service logs
if: failure()
run: |
docker-compose -f docker-compose.test.yml logs > service-logs.txt
- name: Upload service logs
if: failure()
uses: actions/upload-artifact@v3
with:
name: service-logs
path: service-logs.txt
- name: Cleanup
if: always()
run: |
docker-compose -f docker-compose.test.yml down -v
# Security tests
security-tests:
runs-on: ubuntu-latest
needs: unit-tests
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install security tools
run: |
pip install bandit safety
pip install -r requirements.txt
- name: Run Bandit security linter
run: |
bandit -r . -f json -o bandit-report.json || true
- name: Check for known security vulnerabilities
run: |
safety check --json --output safety-report.json || true
- name: Upload security reports
uses: actions/upload-artifact@v3
with:
name: security-reports
path: |
bandit-report.json
safety-report.json
# Performance tests (optional, run on schedule)
performance-tests:
runs-on: ubuntu-latest
if: github.event_name == 'schedule' || contains(github.event.head_commit.message, '[perf]')
steps:
- uses: actions/checkout@v4
- name: Set up services
run: |
docker-compose -f docker-compose.test.yml up -d
sleep 60
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install Locust
run: pip install locust
- name: Run performance tests
run: |
locust -f tests/load/locustfile.py \
--host=http://localhost:8000 \
--users 50 \
--spawn-rate 5 \
--run-time 2m \
--headless \
--html=performance-report.html
- name: Upload performance report
uses: actions/upload-artifact@v3
with:
name: performance-report
path: performance-report.html
- name: Cleanup
if: always()
run: |
docker-compose -f docker-compose.test.yml down -v
# Scheduled performance tests (weekly)
# on:
# schedule:
# - cron: '0 2 * * 1' # Every Monday at 2 AM
Create scripts to make testing easier for developers:
#!/bin/bash
# scripts/test.sh - Comprehensive test runner
set -e # Exit on any error
echo "🧪 Running comprehensive test suite..."
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
echo -e "${GREEN}✓${NC} $1"
}
print_warning() {
echo -e "${YELLOW}⚠${NC} $1"
}
print_error() {
echo -e "${RED}✗${NC} $1"
}
# Check if virtual environment is activated
if [[ "$VIRTUAL_ENV" == "" ]]; then
print_warning "Virtual environment not detected. Activating..."
source venv/bin/activate
fi
# Install dependencies
print_status "Installing dependencies..."
pip install -r requirements.txt
pip install -r requirements-test.txt
# Code quality checks
print_status "Running code quality checks..."
echo " - Checking code formatting..."
black --check . || (print_error "Code formatting issues found. Run 'black .' to fix." && exit 1)
echo " - Checking import sorting..."
isort --check-only . || (print_error "Import sorting issues found. Run 'isort .' to fix." && exit 1)
echo " - Running linter..."
flake8 . || (print_error "Linting issues found." && exit 1)
echo " - Type checking..."
mypy user_service --ignore-missing-imports || print_warning "Type checking issues found."
# Unit tests
print_status "Running unit tests..."
pytest tests/unit/ -m "unit and not slow" --cov --cov-report=term-missing
# Integration tests (if services are available)
if pg_isready -h localhost -p 5432 > /dev/null 2>&1; then
print_status "Running integration tests..."
pytest tests/integration/ -m "integration"
else
print_warning "Database not available. Skipping integration tests."
fi
# Contract tests
print_status "Running contract tests..."
pytest tests/contract/ -m "contract" || print_warning "Contract tests failed or skipped."
# Security checks
print_status "Running security checks..."
bandit -r . -ll || print_warning "Security issues found."
safety check || print_warning "Vulnerable dependencies found."
print_status "All tests completed successfully! 🎉"
#!/bin/bash
# scripts/test-quick.sh - Quick test runner for development
echo "🚀 Running quick tests..."
# Run only fast unit tests
pytest tests/unit/ -m "unit and not slow" -x --tb=short
echo "✅ Quick tests completed!"
#!/bin/bash
# scripts/test-watch.sh - Watch mode for continuous testing
echo "👀 Starting test watch mode..."
# Install pytest-watch if not available
pip install pytest-watch
# Watch for changes and run tests
ptw tests/unit/ -- -m "unit and not slow" --tb=short
This completes the testing chapter for Django microservices, covering everything from unit tests to load testing with detailed explanations suitable for beginners.
Orchestrating Microservices with Celery and RabbitMQ
Microservices architecture often requires asynchronous task processing and inter-service communication. Celery, combined with RabbitMQ as a message broker, provides a robust solution for orchestrating tasks across your Django microservices ecosystem.
Deploying Microservices
Deploying microservices requires careful orchestration of multiple services, their dependencies, and infrastructure components. This chapter covers deployment strategies, containerization, orchestration platforms, and best practices for production-ready Django microservices.