Stars
Implementation of π₀, the robotic foundation model architecture proposed by Physical Intelligence
Re-implementation of the pi0 vision-language-action (VLA) model from Physical Intelligence
Toolkit for linearizing PDFs for LLM datasets/training
Open source implementation of AlphaFold3
A minimal Python framework for building custom AI inference servers with full control over logic, batching, and scaling.
Scalable and robust tree-based speculative decoding algorithm
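The entry above names a tree-based variant; as context, here is a simplified sketch of the plain single-chain draft-and-verify loop that tree-based methods generalize (a tree verifies many candidate branches per target pass instead of one chain). The HF-style `model(ids).logits` interface and the greedy acceptance rule are illustrative assumptions, not the repo's algorithm:

```python
# Simplified draft-and-verify speculative decoding (single chain, greedy).
# Assumes HF-style models whose forward pass returns an object with `.logits`.
import torch

@torch.no_grad()
def speculative_step(draft_model, target_model, prefix, k=4):
    """Propose k tokens with the cheap draft model, then verify all of them
    with a single forward pass of the expensive target model."""
    draft = prefix
    for _ in range(k):
        logits = draft_model(draft).logits[:, -1]
        draft = torch.cat([draft, logits.argmax(-1, keepdim=True)], dim=1)

    target_logits = target_model(draft).logits  # one pass scores every position

    accepted = prefix
    for i in range(prefix.shape[1] - 1, draft.shape[1] - 1):
        target_tok = target_logits[:, i].argmax(-1, keepdim=True)
        accepted = torch.cat([accepted, target_tok], dim=1)
        if not torch.equal(target_tok, draft[:, i + 1 : i + 2]):
            break  # first mismatch: keep the target's correction, drop the rest
    return accepted  # output matches pure greedy decoding of the target model
```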
Data-Driven Evaluation for LLM-Powered Applications
PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily write your own.
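As a quick illustration of the workflow, a minimal sketch of compiling a module with `thunder.jit`; the toy MLP and tensor shapes are illustrative assumptions, not from the repo:

```python
# Compile a small PyTorch module with Thunder and run it.
import torch
import thunder

model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.GELU(),
    torch.nn.Linear(256, 10),
)

jitted = thunder.jit(model)   # returns a compiled, callable module
x = torch.randn(32, 256)
out = jitted(x)               # first call traces and optimizes; later calls reuse the trace
```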
Speed up model training by fixing data loading.
CUDA Templates and Python DSLs for High-Performance Linear Algebra
A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite.
A toolkit for building Python extensions in Zig.
[CVPR'24 Highlight] Official PyTorch implementation of CoDeF: Content Deformation Fields for Temporally Consistent Video Processing
RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond!
A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
🤖 A PyTorch library of curated Transformer models and their composable components
Official implementation for HyenaDNA, a long-range genomic foundation model built with Hyena
Official codebase for I-JEPA, the Image-based Joint-Embedding Predictive Architecture. First outlined in the CVPR paper, "Self-supervised learning from images with a joint-embedding predictive architecture."
Distributed Reinforcement Learning accelerated by Lightning Fabric
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
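A minimal sketch of prompting a checkpoint through LitGPT's Python API; the checkpoint name, prompt, and `max_new_tokens` value are illustrative assumptions:

```python
# Load a supported checkpoint and generate text with LitGPT.
from litgpt import LLM

llm = LLM.load("microsoft/phi-2")  # downloads the weights on first use
text = llm.generate("What do llamas eat?", max_new_tokens=64)
print(text)
```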
Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models (CRFM) at Stanford for holistic, reproducible and transparent evaluation of foundation models.
Fine-tune Segment-Anything Model with Lightning Fabric.
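A minimal sketch of the kind of Fabric-driven training loop such a fine-tune uses; the linear model and synthetic data are stand-ins for illustration, not the repo's actual SAM setup:

```python
# Skeleton of a training loop managed by Lightning Fabric.
import torch
from lightning.fabric import Fabric

fabric = Fabric(accelerator="auto", devices=1)
fabric.launch()

model = torch.nn.Linear(128, 1)  # stand-in for the fine-tuned model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = fabric.setup(model, optimizer)  # device placement + wrapping

dataset = torch.utils.data.TensorDataset(torch.randn(256, 128), torch.randn(256, 1))
dataloader = fabric.setup_dataloaders(torch.utils.data.DataLoader(dataset, batch_size=32))

for x, y in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    fabric.backward(loss)  # replaces loss.backward(); Fabric handles strategy details
    optimizer.step()
```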
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser")
Lightning support for Intel Habana accelerators.