This workspace contains utilities for extracting workout-related data from the MyFit production database and performing exploratory analysis in a Jupyter notebook. The workflow omits any user or session tables and focuses exclusively on training metrics.
Prerequisites:

- Python 3.12+
- Access to the production PostgreSQL database (CockroachDB-compatible) with a valid connection string
- Install and bootstrap uv

  ```bash
  # One-time install of uv (Linux/macOS)
  curl -LsSf https://astral.sh/uv/install.sh | sh

  # Inside the repo root
  cd /home/anonjr/Programming/myfit-analysis
  uv venv
  . .venv/bin/activate
  uv pip sync requirements.lock
  ```
  Tip: Add the environment to Jupyter via `uv run python -m ipykernel install --user --name myfit-analysis` if you plan to run the notebook outside VS Code.

  If you need an editable install of this repository (e.g., for packaging), run `uv pip install -e .` after syncing.

  Updating dependencies? Regenerate the lock with `uv pip compile --extra dev pyproject.toml -o requirements.lock` and re-run `uv pip sync requirements.lock`.
- Configure environment variables

  Update `.env` with your PostgreSQL connection string (a sketch of how this value might be loaded follows the list):

  ```
  DATABASE_URL=postgresql://username:password@host:26257/myfit?sslmode=require
  ```
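The extractor consumes this value at startup. As a rough illustration of how a connection string like this can be loaded, here is a minimal sketch assuming `python-dotenv` and SQLAlchemy; it is not necessarily how `main.py` does it.

```python
# Minimal sketch, assuming python-dotenv and SQLAlchemy; main.py's actual
# loading logic may differ.
import os

from dotenv import load_dotenv
from sqlalchemy import create_engine, text

load_dotenv()  # pulls DATABASE_URL from .env into the process environment
engine = create_engine(os.environ["DATABASE_URL"])

# Quick connectivity check before attempting a full export.
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())
```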
Run the extractor to pull the workout-related tables into local CSV files. The script reads `DATABASE_URL` from `.env` and skips any user or session data.

```bash
uv run python main.py --output-dir data
```

Use `--dry-run` to view the export plan without querying the database and `--verbose` for additional logging.
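For orientation only, the heart of a table-to-CSV export can be sketched as below. The table name `workouts` and the pandas-based approach are assumptions for illustration, not the actual implementation in `main.py`.

```python
# Illustrative single-table export; "workouts" is an assumed table name and
# this is not main.py's real logic.
import os
from pathlib import Path

import pandas as pd
from dotenv import load_dotenv

load_dotenv()
output_dir = Path("data")
output_dir.mkdir(parents=True, exist_ok=True)

# pandas accepts a SQLAlchemy-style URL string when SQLAlchemy is installed.
df = pd.read_sql("SELECT * FROM workouts", os.environ["DATABASE_URL"])
df.to_csv(output_dir / "workouts.csv", index=False)
```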
Open the notebook at `notebooks/workout_analysis.ipynb` in VS Code, or launch Jupyter Lab with `uv run jupyter lab`, and execute the cells. The notebook relies on the CSV exports located in `./data` and produces seaborn and matplotlib visualisations covering the following (an illustrative sketch of the first plot appears after the list):
- Weekly workout volume and consistency
- Distribution of total load per workout
- Average RIR trend over time
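As a sketch of the weekly-volume plot, the snippet below assumes a `data/workouts.csv` export with a `performed_at` timestamp column; both names are illustrative and may not match the actual files.

```python
# Illustrative weekly-volume plot; "workouts.csv" and "performed_at" are
# assumed names, not guaranteed to match the exports.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

workouts = pd.read_csv("data/workouts.csv", parse_dates=["performed_at"])

# Count workouts per calendar week to show volume and consistency over time.
weekly = (
    workouts.set_index("performed_at")
    .resample("W")
    .size()
    .rename("workout_count")
    .reset_index()
)

sns.lineplot(data=weekly, x="performed_at", y="workout_count")
plt.title("Workouts per week")
plt.tight_layout()
plt.show()
```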
Possible next steps:

- Extend the extractor if new workout-adjacent tables are introduced.
- Layer in additional metrics (e.g., mesocycle adherence) or predictive models (e.g., fatigue scoring) on top of the exported data; a sketch of one such derived metric follows this list.
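Purely as a hypothetical starting point, a mesocycle-adherence metric could look something like the sketch below. The file name (`sets.csv`) and columns (`mesocycle_id`, `planned_sets`, `completed_sets`) are invented for illustration and would need to match whatever the extractor actually produces.

```python
# Hypothetical adherence metric: completed vs. planned sets per mesocycle.
# File and column names are assumptions, not part of the current exports.
import pandas as pd

sets = pd.read_csv("data/sets.csv")

adherence = (
    sets.groupby("mesocycle_id")[["completed_sets", "planned_sets"]]
    .sum()
    .assign(adherence_ratio=lambda d: d["completed_sets"] / d["planned_sets"])
)
print(adherence["adherence_ratio"].describe())
```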