🦉 Data Versioning and ML Experiments
☁️ 🚀 📊 📈 Evaluating state of the art in AI
The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production.
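As a quick illustration of the experiment-tracking side, here is a minimal sketch using the wandb Python client; the project name, config values, and logged metric are placeholders, not part of any real project.

```python
import wandb

# Hypothetical project name and config values, for illustration only.
run = wandb.init(project="demo-project", config={"lr": 0.01, "epochs": 5})

for epoch in range(run.config.epochs):
    # Placeholder metric; a real run would log actual training values.
    wandb.log({"epoch": epoch, "loss": 1.0 / (epoch + 1)})

run.finish()
```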
PyTorch Lightning + Hydra. A very user-friendly template for ML experimentation. ⚡🔥⚡
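The Hydra side of such a template revolves around composed YAML configs. A minimal sketch, assuming a hypothetical configs/train.yaml exists next to the script:

```python
import hydra
from omegaconf import DictConfig, OmegaConf

# config_path and config_name are assumptions: they point at a
# hypothetical configs/train.yaml, not a file from the template itself.
@hydra.main(config_path="configs", config_name="train", version_base=None)
def main(cfg: DictConfig) -> None:
    # Print the fully composed config, including command-line overrides.
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
```

Any config key can then be overridden at launch, e.g. `python train.py model.lr=0.001`, which is what makes such templates convenient for sweeping experiments.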
This is the development home of the workflow management system Snakemake. For general information, see the Snakemake documentation.
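For flavor, a minimal Snakefile rule (file names are hypothetical); Snakemake re-runs a rule only when its outputs are missing or older than its inputs:

```
# Snakefile: one rule per pipeline step; paths below are illustrative.
rule count_words:
    input:
        "data/book.txt"
    output:
        "results/counts.txt"
    shell:
        "wc -w {input} > {output}"
```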
Accelerated deep learning R&D
Sacred, developed at IDSIA, is a tool that helps you configure, organize, log, and reproduce experiments.
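A minimal sketch of a Sacred experiment; the experiment name and config values are made up for illustration:

```python
from sacred import Experiment

ex = Experiment("demo")  # experiment name is illustrative

@ex.config
def config():
    lr = 0.01    # captured by Sacred as a config entry
    epochs = 5

@ex.automain
def run(lr, epochs):
    # Sacred injects config values by parameter name and records the run,
    # so `python demo.py with lr=0.001` reproducibly overrides the default.
    print(f"training for {epochs} epochs at lr={lr}")
```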
A toolkit for reproducible reinforcement learning research.
This repository accompanies the RecSys 2019 article "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches" and several follow-up studies.
RNA-seq workflow using STAR and DESeq2
Get started DVC project (NLP, random forest)
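A minimal sketch of reading a DVC-tracked file through the dvc.api Python module; the repo URL and file path follow the public get-started example, but treat them as illustrative:

```python
import dvc.api

# Stream a DVC-tracked file straight from a Git repo; the path below is
# taken from the example project and may change as that repo evolves.
with dvc.api.open(
    "data/data.xml",
    repo="https://github.com/iterative/example-get-started",
) as f:
    print(f.readline())
```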
Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Open solution to the Home Credit Default Risk challenge 🏡
This Snakemake pipeline implements the GATK best-practices workflow
Create highly reproducible python environments
Tool for encapsulating, running, and reproducing data science projects
Get started DVC project
Automation and Make
High-fidelity performance metrics for generative models in PyTorch
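A minimal sketch of computing standard generative metrics with torch-fidelity; the two input paths are placeholders for directories of images:

```python
import torch_fidelity

# Paths are placeholders; each should point at a directory of images
# (or a registered dataset name). Computes ISC, FID, and KID.
metrics = torch_fidelity.calculate_metrics(
    input1="path/to/generated",
    input2="path/to/real",
    isc=True,
    fid=True,
    kid=True,
)
print(metrics)
```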
A comparison of VPS providers. It uses Ansible to run a series of automated benchmarks on the VPS servers you specify, so anyone can reproduce the tests and compare the results to their own. All test results are published for independence and transparency.