- Stanford University
- Stanford, CA
- https://ai.stanford.edu/~kzliu
- @kenziyuliu
Stars
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal domains, for both inference and training.
- Command-line program to download videos from YouTube.com and other video sites
- Tensors and dynamic neural networks in Python with strong GPU acceleration
- Real-time face swap and one-click video deepfake with only a single image
- A high-throughput and memory-efficient inference and serving engine for LLMs
- Interact with your documents using the power of GPT, 100% privately, with no data leaks
- The world's simplest facial recognition API for Python and the command line
- Rich is a Python library for rich text and beautiful formatting in the terminal.
- The simplest, fastest repository for training/finetuning medium-sized GPTs.
- Free, open-source crypto trading bot
- Collection of Summer 2026 tech internships!
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- A generative speech model for daily dialogue.
- A community-maintained Python framework for creating mathematical animations.
- The largest collection of PyTorch image encoders / backbones. Includes train, eval, inference, and export scripts, and pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (V…
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
- DSPy: The framework for programming, not prompting, language models
- You like pytorch? You like micrograd? You love tinygrad! ❤️
- ⚡ A fast, extensible progress bar for Python and CLI
- Pretrain and finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
- Open-sourced code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
- A minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training