Stars
Learn GPU Programming in Mojo🔥 by Solving Puzzles
Assorted machine learning implementations for medical data.
Algorithm powering the For You feed on X
Dicer auto-sharder: Infrastructure for building sharded services
RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RN…
Improving 3D Model Performance on the Web Using LOD Control
Web browser engineering (a book)
CDN Up and Running - Building a CDN from Scratch to Learn about CDN, Nginx, Lua, Prometheus, Grafana, Load balancing, and Containers.
A hands-on introduction to video technology: image, video, codec (av1, vp9, h265) and more (ffmpeg encoding). Translations: 🇺🇸 🇨🇳 🇯🇵 🇮🇹 🇰🇷 🇷🇺 🇧🇷 🇪🇸
Textbook for advanced students and engineers on modern SoC design using Arm Cortex-A: architecture, interconnects, validation, and fabrication (educational)
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Complete solutions to Programming Massively Parallel Processors, 4th Edition
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
PyTorch code and models for V-JEPA 2 self-supervised learning from video.
PyTorch code and models for V-JEPA self-supervised learning from video.
Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.
Code for the book "The Elements of Differentiable Programming".
Code repository for the paper "Matryoshka Representation Learning"
Replications of classic and state-of-the-art AI/ML papers and architectures in PyTorch
A tool for creating and running Linux containers using lightweight virtual machines on a Mac. It is written in Swift, and optimized for Apple silicon.
Minimal and annotated implementations of key ideas from modern deep learning research.