Stars
Code related to the brain-wide map paper
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible.
SkyRL: A Modular Full-stack RL Library for LLMs
[AAAI 2026] Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning".
MCP-Bench: Benchmarking Tool-Using LLM Agents with Complex Real-World Tasks via MCP Servers
A Unified Framework for High-Performance and Extensible LLM Steering
Official code for "Kairos: Towards Adaptive and Generalizable Time Series Foundation Models"
This repository hosts an open seizure detection benchmarking platform.
The absolute trainer to light up AI agents.
Official implementation of X-Master, a general-purpose tool-augmented reasoning agent.
This is the repository for the Tool Learning survey.
TradingAgents: Multi-Agents LLM Financial Trading Framework
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI
Awesome Unified Multimodal Models
Stanford NLP Python library for benchmarking the utility of LLM interpretability methods
Official Implementation of Rectified Flow (ICLR2023 Spotlight)
Context engineering is the new vibe coding - it's the way to actually make AI coding assistants work. Claude Code is the best for this so that's what this repo is centered around, but you can apply…
[ICML 2024] Probabilistic Conceptual Explainers (PACE): Trustworthy Conceptual Explanations for Vision Foundation Models
Verification of Google DeepMind's AlphaEvolve 48-multiplication matrix multiplication algorithm, a breakthrough in matrix multiplication after 56 years.
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
A library for mechanistic interpretability of GPT-style language models
[ICLR 2024] Official Implementation of "Diffusion-TS: Interpretable Diffusion for General Time Series Generation"
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
The code for the paper "[ICLR'24] MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process"