- California, USA
- https://awsaf49.github.io
- https://www.kaggle.com/awsaf49
- in/awsaf49
- @awsaf49
Starred repositories
A paper list for spatial reasoning
TimesFM (Time Series Foundation Model) is a pretrained foundation model for time-series forecasting, developed by Google Research.
Easy access to IAB Tech Lab taxonomies, including Content, Audience and Ad Product
The 100-line AI agent that solves GitHub issues or helps you in your command line. Radically simple: no huge configs, no giant monorepo, yet it scores >74% on SWE-bench Verified!
A lightweight, local-first, and 🆓 experiment tracking library from Hugging Face 🤗
An open-source AI agent that brings the power of Gemini directly into your terminal.
The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM with Scalable Long-Term Memory"
Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch
A version of verl to support diverse tool use
[NeurIPS 2025] An official implementation of Flow-GRPO: Training Flow Matching Models via Online RL
A PyTorch library for implementing flow matching algorithms, featuring continuous and discrete flow matching implementations. It includes practical examples for both text and image modalities. (A minimal flow-matching training sketch follows after this list.)
The simplest, fastest repository for training/finetuning small-sized VLMs.
[ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention achieving a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.
verl: Volcano Engine Reinforcement Learning for LLMs
[ICLR 2025] SONICS: Synthetic Or Not - Identifying Counterfeit Songs
EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL
YuE: Open Full-song Music Generation Foundation Model, an open-source alternative to Suno.ai
A book for learning the foundations of LLMs
A flexible framework for experimenting with heterogeneous LLM inference and fine-tuning optimizations
Unified automatic quality assessment for speech, music, and sound.
Fine-tuning & reinforcement learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, and TTS models 2x faster with 70% less VRAM.
This is the homepage of a new book entitled "Mathematical Foundations of Reinforcement Learning."
A curated list of awesome Deep Reinforcement Learning resources
Witness the "aha moment" of a VLM for less than $3.
Minimal reproduction of DeepSeek R1-Zero. (A sketch of the group-relative advantage behind it follows after this list.)
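The R1-Zero reproduction and the verl/EasyR1 frameworks above all revolve around GRPO-style RL, whose core trick is a group-relative advantage: sample several responses per prompt, score each with a reward function, and normalize the rewards within the group instead of learning a value critic. The sketch below is a generic illustration of that computation only; the helper name and toy reward values are hypothetical and it is not tied to any of those repositories' APIs.

```python
# Minimal sketch of the group-relative advantage used in GRPO-style training
# (as popularized by DeepSeek R1-Zero). Generic illustration only; the helper
# name and the toy rewards below are hypothetical, not a verl/EasyR1 API.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards, one row per prompt."""
    mean = rewards.mean(dim=1, keepdim=True)   # per-prompt group mean
    std = rewards.std(dim=1, keepdim=True)     # per-prompt group spread
    return (rewards - mean) / (std + eps)      # each response scored relative to its group

# Example: 2 prompts, 4 sampled responses each (1.0 = correct, 0.0 = incorrect).
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 0.0, 1.0]])
print(group_relative_advantages(rewards))
```

Because the baseline comes from the sampled group itself, no separate value network is needed, which is what keeps these minimal reproductions so small.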
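The flow-matching library entry above mentions continuous and discrete flow matching; as a quick illustration of what the continuous objective looks like, here is a minimal, generic training sketch using a linear interpolation path. It does not use that library's API: the toy 2-D data and the `velocity_net` MLP are assumptions made for the example.

```python
# Minimal sketch of continuous (conditional) flow matching with a linear
# interpolation path. Generic illustration of the objective only; the toy
# 2-D data and the `velocity_net` MLP are assumptions, not the library's API.
import torch
import torch.nn as nn

velocity_net = nn.Sequential(              # v_theta(x_t, t) -> predicted velocity
    nn.Linear(2 + 1, 128), nn.SiLU(),
    nn.Linear(128, 2),
)
opt = torch.optim.Adam(velocity_net.parameters(), lr=1e-3)

for step in range(1000):
    x1 = torch.randn(256, 2) * 0.5 + 2.0   # toy samples from the "data" distribution
    x0 = torch.randn_like(x1)              # samples from the noise prior
    t = torch.rand(x1.size(0), 1)          # time drawn uniformly from [0, 1]

    xt = (1 - t) * x0 + t * x1             # point on the linear probability path
    target = x1 - x0                       # the path's ground-truth velocity

    pred = velocity_net(torch.cat([xt, t], dim=-1))
    loss = ((pred - target) ** 2).mean()   # conditional flow matching loss

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sampling from the trained model then amounts to integrating the learned velocity field from t = 0 to t = 1 with any ODE solver, e.g. a few Euler steps starting from Gaussian noise.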