Stars
A compact implementation of SGLang, designed to demystify the complexities of modern LLM serving systems.
A Miasma Color Scheme Theme for Omarchy
cuTile is a programming model for writing parallel kernels for NVIDIA GPUs
A faithful reproduction of the "Attention is All You Need" paper's transformer architecture.
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
Grab your own sweet-looking '.is-a.dev' subdomain.
Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA)
🌐 Make websites accessible for AI agents. Automate tasks online with ease.
The official GitHub mirror of the Chromium source
Google Chromium, sans integration with Google
A tool to convert audio (or video) files into spectrogram images using the Short-Time Fourier Transform (STFT)
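The core of any such spectrogram tool is the STFT: slide a window over the signal and take a DFT of each frame. A minimal, stdlib-only sketch (the function name, frame size, and hop are illustrative, not taken from the repo):

```python
import cmath

def stft_magnitudes(signal, frame_size=8, hop=4):
    """Naive short-time Fourier transform: slide a window over the
    signal and compute the DFT magnitude of each frame."""
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        mags = []
        for k in range(frame_size // 2 + 1):  # non-negative frequency bins only
            s = sum(x * cmath.exp(-2j * cmath.pi * k * n / frame_size)
                    for n, x in enumerate(frame))
            mags.append(abs(s))
        frames.append(mags)
    return frames  # one spectrogram column per frame
```

A pure tone at bin 2 of an 8-sample frame produces a single peak at index 2 in every column; rendering the columns as pixel intensities gives the spectrogram image. Real tools window each frame (e.g. with a Hann window) and use an FFT instead of this O(N²) DFT.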
notkisk / executorch
Forked from pytorch/executorch
On-device AI across mobile, embedded and edge for PyTorch
An extremely fast Python linter and code formatter, written in Rust.
Lightweight static analysis for many languages. Find bug variants with patterns that look like source code.
On-device AI across mobile, embedded and edge for PyTorch
Build Real-Time Knowledge Graphs for AI Agents
⚡ A CLI tool for structural code search, linting, and rewriting. Written in Rust
Replace 'hub' with 'ingest' in any GitHub URL to get a prompt-friendly extract of a codebase
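The trick described above is a plain string substitution on the URL; a minimal sketch (the example repository is arbitrary):

```python
# Turn a GitHub repo URL into its gitingest equivalent by
# replacing the first 'hub' with 'ingest' (github.com -> gitingest.com).
url = "https://github.com/pytorch/executorch"
ingest_url = url.replace("hub", "ingest", 1)
print(ingest_url)  # https://gitingest.com/pytorch/executorch
```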
Language Savant. If your repository's language is being reported incorrectly, send us a pull request!
Fast and accurate AI powered file content types detection
Python bindings to the Tree-sitter parsing library
Supercharge Your LLM with the Fastest KV Cache Layer
notkisk / handson-ml3
Forked from ageron/handson-ml3
A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in Python using Scikit-Learn, Keras and TensorFlow 2.
A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in Python using Scikit-Learn, Keras and TensorFlow 2.
You like pytorch? You like micrograd? You love tinygrad! ❤️
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal domains, for both inference and training.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.