Stars
JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training
A scalable asynchronous reinforcement learning implementation with in-flight weight updates.
DSPy: The framework for programming—not prompting—language models
vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization
Build memory-native AI agents with Memory OS, an open-source framework for long-term memory, retrieval, and adaptive learning in large language models.
**Notion MCP Server** is a Model Context Protocol (MCP) server implementation that enables AI assistants to interact with Notion's API. This production-ready server provides a complete set of tools for working with Notion.
📓 An MCP server for managing your personal knowledge, daily notes, and re-usable prompts via GitHub Gists
A zero-dependency, lightweight (~3 kB), consent-platform-agnostic cookie banner for any website (compatible with Google Consent Mode).
A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism.
An AI-powered research assistant that performs iterative, deep research on any topic by combining search engines, web scraping, and large language models. The goal of this repo is to provide the simplest implementation of a deep research agent.
A course on aligning smol models.
Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs.
Instruct-tune LLaMA on consumer hardware
A repository of Maker Skill Trees and templates to make your own.
Large Action Model framework to develop AI Web Agents
Strong, open-source foundation models for image recognition.
A super lightweight image processing algorithm for detection and extraction of overlapped handwritten signatures on scanned documents using OpenCV and scikit-image.
Lab assignments for Introduction to Data-Centric AI, MIT IAP 2024 👩🏽‍💻
Cleanlab's open-source library is the standard data-centric AI package for data quality and machine learning with messy, real-world data and labels.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
[NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333
Seamlessly integrate LLMs into scikit-learn.
The official gpt4free repository | a collection of powerful language models | o4, o3, deepseek r1, gpt-4.1, gemini 2.5
PyTorch code and models for the DINOv2 self-supervised learning method.