A platform showcasing the Top 500 players in Splatoon 3, with historical rankings, analytics, and machine learning inference. SplatTop consists of:
- A React frontend served by Nginx
- A FastAPI backend running under Uvicorn (via Gunicorn)
- Redis for caching, message brokering, and PubSub
- Celery workers for background processing
- Celery Beat for scheduled tasks
- A machine learning inference service (SplatGPT)
- An Nginx Ingress controller
- Cert-Manager for SSL/TLS certificates (production only)
Deployment is handled via Kubernetes (DigitalOcean Kubernetes Service in production), with continuous integration and deployment via GitHub Actions. Local development can be done using Docker & kind or by running components individually.
- Docker & kind (with `kubectl`) for Kubernetes-based local development
- Python 3.10+ & Poetry for backend dependencies and scripts
- Node.js & npm for frontend development
- (Optional) Access to a `secrets.yaml` or a local `.env` file containing database credentials and ML storage settings
We welcome contributions for localizations! If you are interested in helping translate SplatTop into other languages, please refer to the Contributing Localizations section.
- Installation
- Architecture
- Contributing
- Contributing Localizations
- License
- API Authentication & Tokens
You can run SplatTop locally without Kubernetes by starting each component manually:
- **Clone the Repository:**

  ```bash
  git clone https://github.com/cesaregarza/SplatTop.git
  cd SplatTop
  ```

- **Environment Variables:** Copy the example and set credentials:

  ```bash
  cp .env.example .env
  # Edit .env to set DB_HOST, DB_PORT, DB_USER, DB_PASSWORD, DB_NAME, DEV_MODE=1
  ```

- **Install Dependencies:**

  ```bash
  poetry install                    # Backend and scripts
  cd src/react_app && npm install   # Frontend dependencies
  ```

- **Start Redis:**

  ```bash
  docker run -d --name redis -p 6379:6379 redis:6
  ```

- **Start Celery Services:**

  ```bash
  # Worker
  poetry run celery -A src.celery_app.app:celery worker --loglevel=info
  # Scheduler (Beat)
  poetry run celery -A src.celery_app.beat:celery beat --loglevel=info
  ```

- **Run Backend:**

  ```bash
  poetry run uvicorn fast_api_app.app:app --reload --host 0.0.0.0 --port 5000 --app-dir src
  ```

- **Run Frontend:**

  ```bash
  cd src/react_app
  npm start   # Opens http://localhost:3000 with hot-reload
  ```

- **(Optional) ML Inference Service:** If you have access to the SplatGPT Docker image (`splatnlp:latest`), run:

  ```bash
  docker run -d --name splatnlp \
    -e ENV=development \
    -e DO_SPACES_ML_ENDPOINT=... \
    -e DO_SPACES_ML_DIR=... \
    -p 9000:9000 splatnlp:latest
  ```
Alternatively, you can use Kubernetes (kind) and Makefile targets for a K8s-based workflow:
```bash
# Create a local kind cluster
kind create cluster --config k8s/kind-config.yaml

# Build images and load into kind
make build

# Deploy all components for development
make deploy-dev
```

- Frontend: http://localhost:3000 (hot-reload) or http://localhost:4000 (prod build)
- Backend API: http://localhost:5000
- Ingress (pre-prod): http://localhost:8080
Sensitive settings (database credentials, ML storage endpoints) should be provided via environment variables in `.env`, or via `secrets.yaml` when deploying on Kubernetes.

- Local `.env`:
  - Copy `.env.example` to `.env` and fill in DB and other service credentials.
- Kubernetes `secrets.yaml`:
  - For production or K8s deployments, create a `k8s/secrets.yaml` containing the same keys as your `.env` (DB_* vars and ML settings).
  - Apply with: `kubectl apply -f k8s/secrets.yaml`.
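If you prefer to generate that `secrets.yaml` rather than write it by hand, a small helper along these lines can base64-encode your `.env` values into a Secret manifest (the secret name `splattop-secrets` and the script itself are illustrative assumptions, not part of the repo):

```python
# generate_secrets.py -- sketch: convert a local .env file into a Kubernetes
# Secret manifest. Secret name and output path are assumptions; adjust to your
# cluster conventions before running `kubectl apply -f k8s/secrets.yaml`.
import base64
from pathlib import Path

def env_to_secret(env_path: str = ".env", name: str = "splattop-secrets") -> str:
    data = {}
    for line in Path(env_path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        data[key.strip()] = base64.b64encode(value.strip().encode()).decode()

    lines = [
        "apiVersion: v1",
        "kind: Secret",
        "metadata:",
        f"  name: {name}",
        "type: Opaque",
        "data:",
    ]
    lines += [f"  {k}: {v}" for k, v in sorted(data.items())]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    Path("k8s/secrets.yaml").write_text(env_to_secret())
```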
The frontend pod hosts the React application on Nginx. It serves the user interface of SplatTop, enabling users to interact with the website and view the Top 500 players and their ranking history. This pod communicates exclusively with the backend pod to fetch data and update the UI based on user actions, and it cannot access the rest of the Kubernetes cluster. While currently written in JavaScript and JSX, a migration to TypeScript and TSX is planned for the near future.
The backend pod runs a FastAPI application hosted on Gunicorn with Uvicorn workers. It handles API requests from the frontend, processes data, communicates with Redis and the database, and exposes websocket endpoints for real-time updates.
Persistent player and leaderboard data is stored in PostgreSQL. Database connection is configured via environment variables (DB_HOST, DB_PORT, DB_USER, DB_PASSWORD, DB_NAME). For local caching and development, SQLite is used for read-only or non-critical data stores.
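For orientation, those variables map onto a standard PostgreSQL connection URL roughly as follows (the SQLAlchemy/psycopg2 usage here is an illustrative assumption, not a statement about the actual backend code):

```python
# db_url.py -- sketch: build a PostgreSQL connection URL from the DB_* variables.
import os
from sqlalchemy import create_engine

def database_url() -> str:
    return (
        f"postgresql+psycopg2://{os.environ['DB_USER']}:{os.environ['DB_PASSWORD']}"
        f"@{os.environ['DB_HOST']}:{os.environ.get('DB_PORT', '5432')}/{os.environ['DB_NAME']}"
    )

# pool_pre_ping guards against stale connections between requests.
engine = create_engine(database_url(), pool_pre_ping=True)
```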
The Redis pod functions as a cache, message broker, and PubSub system. Celery workers use Redis to queue tasks and store results. The backend caches frequently accessed leaderboard snapshots in Redis. Redis PubSub channels are used for real-time player detail updates.
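As a rough sketch of how a websocket endpoint can relay a PubSub channel to clients (the route and channel name `player_updates` are hypothetical placeholders, not the project's actual identifiers):

```python
# ws_relay.py -- sketch of a FastAPI websocket relaying Redis PubSub messages.
# Route and channel name are placeholders for illustration only.
import os
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
import redis.asyncio as aioredis

app = FastAPI()
redis_client = aioredis.from_url(
    os.getenv("REDIS_URL", "redis://localhost:6379/0"), decode_responses=True
)

@app.websocket("/ws/player_updates")
async def player_updates(ws: WebSocket):
    await ws.accept()
    pubsub = redis_client.pubsub()
    await pubsub.subscribe("player_updates")
    try:
        async for message in pubsub.listen():
            if message["type"] == "message":
                await ws.send_text(message["data"])
    except (WebSocketDisconnect, RuntimeError):
        pass  # client went away; exact exception depends on the Starlette version
    finally:
        await pubsub.unsubscribe("player_updates")
        await pubsub.close()
```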
The workers pod runs Celery workers that handle asynchronous tasks such as:
- Pulling and updating leaderboard data
- Fetching weapon info and aliases
- Computing analytics (Lorenz curves, Gini coefficients, skill offsets); see the sketch after this list
- Populating Redis caches and database tables
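To make the analytics item above concrete, here is a standalone sketch of a Gini coefficient over player ratings; it is illustrative only and not the project's implementation:

```python
# gini.py -- standalone sketch of a Gini coefficient over player ratings,
# illustrating the kind of analytics the Celery workers compute.
def gini(values: list[float]) -> float:
    """Return the Gini coefficient of non-negative values (0 = perfect equality)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula using the rank-weighted sum of ascending sorted values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([1200, 1500, 1800, 2600]))  # ~0.16
```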
The machine learning inference service (SplatGPT) provides loadout recommendations via a REST API on port 9000. It can be run as a separate Docker image (splatnlp:latest) and communicates with the backend or external storage (e.g., DigitalOcean Spaces) to load models and data.
The task scheduler pod runs Celery Beat, which schedules periodic tasks (e.g., data pulls, analytics updates) for the Celery workers according to src/celery_app/beat.py schedules.
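Schematically, a Celery Beat configuration looks like the following; the task names and intervals are placeholders rather than the actual entries in `src/celery_app/beat.py`:

```python
# beat_sketch.py -- sketch of how Celery Beat periodic tasks are wired up.
# Task names and intervals are placeholders, not the project's real schedule.
import os
from celery import Celery
from celery.schedules import crontab

app = Celery("splattop", broker=os.getenv("REDIS_URL", "redis://localhost:6379/0"))

app.conf.beat_schedule = {
    "pull-leaderboard-every-10-min": {
        "task": "tasks.pull_leaderboard",    # placeholder task name
        "schedule": crontab(minute="*/10"),
    },
    "refresh-analytics-hourly": {
        "task": "tasks.refresh_analytics",   # placeholder task name
        "schedule": crontab(minute=0),
    },
}
```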
The ingress controller pod uses Nginx to manage incoming traffic to the Kubernetes cluster. It routes requests to the appropriate services based on predefined rules, ensuring that users can access the frontend and backend services. This is not used in the -dev environment, as the frontend and backend communicate directly with each other.
The cert-manager pod is responsible for managing SSL/TLS certificates using Let's Encrypt. This pod is only deployed in the production environment to ensure secure communication between users and the SplatTop website and is completely isolated from the rest of the system.
Contributions are welcome, but I maintain high standards for code quality and maintainability. A CONTRIBUTING.md file will be created in the future, but for now, please reach out via a GitHub Issue to discuss potential contributions. I am open to all ideas but am selective about what code is merged into the project. Feedback and suggestions are always welcome, so please do not hesitate to reach out.
This project is licensed under GPL-3.0. Please refer to the LICENSE file for more information.
SplatTop uses high-entropy bearer tokens to protect selected API endpoints. Tokens are minted and managed via admin endpoints and validated against Redis-backed metadata. Token hashing uses SHA-256 with a server-side pepper for lookup/storage.
Environment variables
- `API_TOKEN_PEPPER`: Required for hashing API tokens.
- `ADMIN_TOKEN_PEPPER` (optional): Pepper for admin tokens (defaults to `API_TOKEN_PEPPER`).
- `ADMIN_API_TOKENS_HASHED`: Comma-separated or JSON array of SHA-256(pepper + token) hashes for admin access.
- `API_TOKEN_ALLOWED_SCOPES` (optional): Comma-separated allowlist of valid scopes.
- `ADMIN_MAX_API_TOKENS` (optional, default 1000): Cap on minted tokens.
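Conceptually, each stored hash is just the SHA-256 digest of the pepper concatenated with the token. A minimal sketch (not the exact server code):

```python
# token_hash.py -- sketch of SHA-256(pepper + token) hashing used for token
# lookup/storage and for producing ADMIN_API_TOKENS_HASHED entries.
import hashlib
import os

def hash_token(token: str, pepper: str | None = None) -> str:
    pepper = pepper if pepper is not None else os.environ["API_TOKEN_PEPPER"]
    return hashlib.sha256((pepper + token).encode("utf-8")).hexdigest()

# Example: compute the hash of an admin token for ADMIN_API_TOKENS_HASHED.
# print(hash_token("rpl_<uuid>_<secret>", pepper=os.environ["ADMIN_TOKEN_PEPPER"]))
```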
Base path: `/api/admin/tokens` (requires an admin bearer token in headers; see below).

- Mint a token
  - POST `/api/admin/tokens`
  - Body: `{ "name": "ServiceName", "note": "optional", "scopes": ["ripple.read"], "expires_at_ms": 0 }`
  - Response includes the full bearer token in the `token` field: `rpl_<uuid>_<secret>`.
- List tokens
  - GET `/api/admin/tokens`
- Revoke a token
  - DELETE `/api/admin/tokens/{token_id}`
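For example, minting a token from a script could look like this; the base URL and the `ADMIN_TOKEN` environment variable are assumptions for a local setup, and the payload mirrors the body shown above:

```python
# mint_token.py -- sketch of minting an API token via the admin endpoint.
import os
import requests

resp = requests.post(
    "http://localhost:5000/api/admin/tokens",
    headers={"Authorization": f"Bearer {os.environ['ADMIN_TOKEN']}"},
    json={
        "name": "ServiceName",
        "note": "optional",
        "scopes": ["ripple.read"],
        "expires_at_ms": 0,
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["token"])  # full bearer token: rpl_<uuid>_<secret>
```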
Headers for admin endpoints
- `Authorization: Bearer <ADMIN_TOKEN>` or `X-Admin-Token: <ADMIN_TOKEN>`
- The server hashes the presented admin token with the configured pepper and compares against `ADMIN_API_TOKENS_HASHED`.
CI note for admin tokens
- The workflow `.github/workflows/update_k8s_deployment.yaml` computes `ADMIN_API_TOKENS_HASHED` from `ADMIN_API_TOKENS` using Python. It accepts JSON arrays, newline-delimited, or comma-separated token lists and fails fast if the pepper is missing.
Send your minted token with either header:
- `Authorization: Bearer rpl_<uuid>_<secret>`
- `X-API-Token: rpl_<uuid>_<secret>`
Example curl
```bash
curl -H "Authorization: Bearer rpl_..." \
  http://localhost:5000/api/ripple/leaderboard
```

Scopes

- Example required scope: `ripple.read` for ripple endpoints.
- If a token has empty scopes, it defaults to full access for backward compatibility. Define scopes to restrict access.
Simple fixed-window limits are enforced per-token or per-IP for /api/* (excluding /api/admin/*).
Environment variables
- `API_RL_PER_SEC` (default 10)
- `API_RL_PER_MIN` (default 120)
- `API_RL_FAIL_OPEN` (default `false`): If `true`, requests pass when Redis is unavailable; otherwise responses are 429.

Response on limit: `429 { "detail": "Rate limit exceeded" }`
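The fixed-window check amounts to incrementing per-second and per-minute counters in Redis and comparing them to the limits. A simplified sketch (not the server's actual middleware; key names are illustrative):

```python
# rate_limit_sketch.py -- simplified fixed-window rate limiter backed by Redis.
# Counters are keyed per token (or per IP) and per window.
import os
import time
import redis

r = redis.Redis.from_url(os.getenv("REDIS_URL", "redis://localhost:6379/0"))
PER_SEC = int(os.getenv("API_RL_PER_SEC", "10"))
PER_MIN = int(os.getenv("API_RL_PER_MIN", "120"))

def allow(identity: str) -> bool:
    """Return True if the request fits within both fixed windows."""
    now = int(time.time())
    sec_key = f"rl:{identity}:s:{now}"
    min_key = f"rl:{identity}:m:{now // 60}"
    pipe = r.pipeline()
    pipe.incr(sec_key)
    pipe.expire(sec_key, 2)      # keep the 1-second window around briefly
    pipe.incr(min_key)
    pipe.expire(min_key, 120)    # keep the 1-minute window around briefly
    sec_count, _, min_count, _ = pipe.execute()
    return sec_count <= PER_SEC and min_count <= PER_MIN

# A denied request is answered with 429 {"detail": "Rate limit exceeded"}.
```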
All non-admin /api/* requests are enqueued to Redis for usage analytics and periodically flushed to PostgreSQL by Celery.
Redis keys
- Queue: `api:usage:queue`
- Processing: `api:usage:queue:processing`
- Lock: `api:usage:flush:lock`
Environment variables
- `API_USAGE_FLUSH_BATCH` (default 1000): Max items to move per flush run.
- `API_USAGE_MAX_ATTEMPTS` (default 5): Max requeue attempts before DLQ.
- `API_USAGE_LOCK_TTL` (default 55): Lock TTL in seconds to prevent overlap.
- `API_USAGE_RECOVER_LIMIT` (default 1000): Max orphaned items recovered per run.
- `API_USAGE_FLUSH_MINUTE` (default `*`): Minute field for the Celery Beat cron.
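The flush follows a reliable-queue pattern: take the lock, move items from the queue to the processing list, persist them, then acknowledge. A simplified sketch using the key names above (the PostgreSQL write is stubbed; requeue/DLQ handling is omitted):

```python
# usage_flush_sketch.py -- simplified version of the usage-flush pattern:
# move items queue -> processing under a lock, persist them, then acknowledge.
import os
import redis

r = redis.Redis.from_url(os.getenv("REDIS_URL", "redis://localhost:6379/0"))
QUEUE, PROCESSING, LOCK = "api:usage:queue", "api:usage:queue:processing", "api:usage:flush:lock"
BATCH = int(os.getenv("API_USAGE_FLUSH_BATCH", "1000"))
LOCK_TTL = int(os.getenv("API_USAGE_LOCK_TTL", "55"))

def persist_to_postgres(raw: bytes) -> None:
    """Stub standing in for the real insert into the usage table."""

def flush_once() -> int:
    # Only one worker flushes at a time; the lock expires on its own.
    if not r.set(LOCK, "1", nx=True, ex=LOCK_TTL):
        return 0
    moved = 0
    try:
        for _ in range(BATCH):
            item = r.rpoplpush(QUEUE, PROCESSING)
            if item is None:
                break
            persist_to_postgres(item)       # write the usage record
            r.lrem(PROCESSING, 1, item)     # acknowledge after a successful write
            moved += 1
    finally:
        r.delete(LOCK)
    return moved
```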
Client IP resolution is conservative by default; proxy headers are ignored unless explicitly enabled.
Environment variable
- `TRUST_PROXY_HEADERS` (set to `true`/`1` to enable): Only enable when your deployment runs behind a trusted proxy that sets `X-Forwarded-For`/`X-Real-IP` correctly.
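In practice that resolution step is shaped roughly like this (a sketch; header precedence in the real code may differ):

```python
# client_ip_sketch.py -- sketch of conservative client-IP resolution:
# proxy headers are only honored when TRUST_PROXY_HEADERS is enabled.
import os
from fastapi import Request

def client_ip(request: Request) -> str:
    trust = os.getenv("TRUST_PROXY_HEADERS", "").lower() in {"1", "true"}
    if trust:
        forwarded = request.headers.get("x-forwarded-for")
        if forwarded:
            return forwarded.split(",")[0].strip()  # left-most hop in the proxy chain
        real_ip = request.headers.get("x-real-ip")
        if real_ip:
            return real_ip.strip()
    return request.client.host if request.client else "unknown"
```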
- Configure secrets:
  - Set `API_TOKEN_PEPPER` (high entropy).
  - Set `ADMIN_API_TOKENS` (plaintext) in CI; the workflow produces `ADMIN_API_TOKENS_HASHED` for the cluster from the pepper and tokens.
- Ensure database schema exists (managed externally): `auth.api_tokens` and `auth.api_token_usage` tables with appropriate indexes (e.g., `api_token_usage(token_id, ts)` for analytics).
- Deploy Redis (broker + cache) and Celery Beat/Workers.
- Tune rate-limits and flush cadence via env vars as above.