Repository for the paper "Discovering Interpretable Algorithms by Decompiling Transformers to RASP"
A special type of transformer with discrete embeddings and attention.
2026 AI/ML internship & new graduate job list updated daily
Dafny is a verification-aware programming language
Train the smallest LM you can that fits in 16MB. Best model wins!
Deep Learning Interpretability with Symbolic Regression
Open-source release accompanying Gao et al. 2025
Open-source implementation of AlphaEvolve
egg is a flexible, high-performance e-graph library
An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers"
Python functions powered by AI agents, with runtime post-conditions for reliable agentic workflows.
High-Performance Symbolic Regression in Python and Julia
Distributed High-Performance Symbolic Regression in Julia
OpenAI & Ollama compatible API powered by Codex
A Foundation Model for Generalist Gaming Agents
The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning.
Lean 4 programming language and theorem prover
This repository contains implementations and illustrative code to accompany DeepMind publications
Open-source code for AlphaFold 2.
Recurrent Switching Linear Dynamical Systems
Code supporting the paper "Weakly Supervised Learning with Assemblies of Neurons", by Dabagia, Papadimitriou, and Vempala [2021].
Hackable and optimized Transformers building blocks, supporting a composable construction.