This project demonstrates a FastAPI application with OpenTelemetry instrumentation integrated with a complete observability stack using Docker Compose.
- FastAPI Application (`main.py`): A simple REST API with OpenTelemetry tracing and metrics
- OpenTelemetry Collector: Receives telemetry data and forwards it to the appropriate backends
- Prometheus: Time-series database for metrics
- Grafana: Visualization dashboard for metrics and traces
- Tempo: Distributed tracing backend
| Service | URL | Description |
|---|---|---|
| FastAPI App | http://localhost:8000 | Your application with OpenTelemetry instrumentation |
| Grafana | http://localhost:3000 | Dashboard (admin/admin) |
| Prometheus | http://localhost:9090 | Metrics database |
| Tempo | http://localhost:3200 | Traces database |
- Start the observability stack:

  ```
  docker-compose up -d
  ```

- Install Python dependencies:

  ```
  pip install -r requirements.txt
  ```

- Run your FastAPI application:

  ```
  python main.py
  ```

- Test the application:

  ```
  # Basic endpoint
  curl http://localhost:8000/

  # Health check
  curl http://localhost:8000/health

  # Items endpoint
  curl "http://localhost:8000/items/123?q=test"

  # Complex operation (demonstrates nested tracing)
  curl http://localhost:8000/complex-operation
  ```
- Automatic instrumentation of FastAPI requests
- Custom spans for business logic
- Nested spans for complex operations
- Error tracking and exception recording
- Span attributes for additional context
- Custom counters for request counting
- Histograms for request duration tracking
- Automatic HTTP metrics from instrumentation
- Grafana Dashboard: Visit http://localhost:3000 (admin/admin)
  - Add Prometheus as a data source: http://prometheus:9090
  - Add Tempo as a data source: http://tempo:3200
  - Create dashboards to visualize your metrics and traces
- Prometheus: Visit http://localhost:9090 to explore metrics
- Tempo: Visit http://localhost:3200 to explore traces
The OpenTelemetry configuration sends data to:
- Traces: `localhost:4317` (OTLP gRPC)
- Metrics: `localhost:4317` (OTLP gRPC)
All data flows through the OpenTelemetry Collector which then forwards to Prometheus and Tempo.
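That fan-out is declared in the Collector's config file; a plausible sketch of the pipelines (receiver/exporter names and the `8889` scrape port are common defaults, assumed here rather than taken from this repo's actual config):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889   # scraped by Prometheus
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/tempo]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```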
- `GET /` - Hello World endpoint
- `GET /health` - Health check endpoint
- `GET /items/{item_id}` - Item retrieval with an optional query parameter
- `GET /complex-operation` - Demonstrates nested tracing and metrics
Try accessing `GET /items/999` occasionally to see error tracing in action.
This project includes comprehensive K6 performance testing scripts to validate your application under various load conditions.
- Smoke Test - Basic functionality verification (1 user, 1 minute)
- Load Test - Normal expected traffic simulation (up to 200 users, 16 minutes)
- Stress Test - Breaking point testing (up to 400 users, 20 minutes)
- Spike Test - Sudden traffic burst simulation (spikes to 500 users)
- Soak Test - Extended duration stability testing (50 users, 30 minutes)
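In k6, each of these profiles is expressed as a staged `options` block in the test script; as one illustration for the load test, it might look like the following (the stage durations, targets, and thresholds here are assumptions, not the repo's actual values):

```javascript
export const options = {
  stages: [
    { duration: '2m', target: 50 },    // ramp up
    { duration: '10m', target: 200 },  // sustain expected traffic
    { duration: '4m', target: 0 },     // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'],  // fail the run if p95 exceeds 500ms
    http_req_failed: ['rate<0.01'],    // fail the run if error rate exceeds 1%
  },
};
```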
# Run a quick smoke test
./quick-k6-test.sh smoke
# Run a load test
./quick-k6-test.sh load
# Run a stress test
./quick-k6-test.sh stress

# Check prerequisites
./run-k6-tests.sh check
# Run individual tests
./run-k6-tests.sh smoke
./run-k6-tests.sh load
./run-k6-tests.sh stress
./run-k6-tests.sh spike
./run-k6-tests.sh soak
# Run all tests in sequence (45-60 minutes)
./run-k6-tests.sh all
# Clean up old results
./run-k6-tests.sh clean

# Run K6 tests using Docker Compose profile
docker-compose --profile testing up k6
# Run specific test by modifying the docker-compose.yml command
docker-compose run --rm k6 run /scripts/load-test.js

While K6 tests are running, monitor your application in real time:
- Grafana Dashboard: http://localhost:3000
  - View request rates, response times, and error rates
  - Monitor OpenTelemetry traces and custom metrics
  - Observe system behavior under load
- Prometheus: http://localhost:9090
  - Query custom application metrics
  - Monitor resource utilization
- K6 Results: JSON output saved to the `k6-results/` directory
- ✅ Success Criteria: P95 response times under thresholds, low error rates
- 📊 Key Metrics: Response time percentiles, throughput, error rates
- 🔍 Traces: Detailed request flow analysis during load tests
- 📈 Trends: Performance degradation patterns under stress
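Percentiles such as P95 can also be recomputed offline from k6's NDJSON output (produced with `k6 run --out json=...`); a small stdlib-only sketch, assuming the standard `Point`-record format and a hypothetical results file path:

```python
import json
import statistics

def p95_http_req_duration(path: str) -> float:
    """Compute the p95 of http_req_duration from a k6 NDJSON results file."""
    values = []
    with open(path) as fh:
        for line in fh:
            rec = json.loads(line)
            # Keep only measurement points for the built-in latency metric.
            if rec.get("type") == "Point" and rec.get("metric") == "http_req_duration":
                values.append(rec["data"]["value"])
    # quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile.
    return statistics.quantiles(values, n=20)[18]
```

Usage: `p95_http_req_duration("k6-results/load-test.json")` returns the p95 latency in milliseconds (k6 reports `http_req_duration` in ms).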
For detailed K6 testing documentation, see `k6/README.md`.