- Bangalore, India (UTC +05:30)
- @penstrokes75
- in/abheesht-sharma
Stars
- AI agents automatically running research on single-GPU nanochat training
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks.
- A character-level language diffusion model trained on Tiny Shakespeare
- TPU inference for vLLM, with unified JAX and PyTorch support.
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL
- Python SDK and Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthr…
- Our library for RL environments + evals
- A high-throughput and memory-efficient inference and serving engine for LLMs
- Rust-tokenizer offers high-performance tokenizers for modern language models, including WordPiece, Byte-Pair Encoding (BPE), and Unigram (SentencePiece) models
- A framework for few-shot evaluation of language models.
- A library to train, evaluate, interpret, and productionize decision forest models such as Random Forest and Gradient Boosted Decision Trees.
- Library for reading and processing ML training data.
- A JAX-native, high-performance eval metrics library
- A simple, performant, and scalable JAX LLM!
- 📋 A list of open LLMs available for commercial use.
- Multi-backend recommender systems with Keras 3
- You like pytorch? You like micrograd? You love tinygrad! ❤️
- Bringing BERT into modernity via both architecture changes and scaling
- A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques.
- Build resilient language agents as graphs.
- High-performance, asynchronous Python HTTP client library designed for faster file transfers using concurrency, semaphores, and fault-tolerant features.