- University of Minnesota Twin Cities
- Minneapolis
- https://asherding.com/
Starred repositories
Awesome LLM compression research papers and tools.
Empowering RAG with a memory-based data interface for all-purpose applications!
A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
Interview guide for machine learning engineers, algorithm engineers, software engineers, and data scientists (MLE, SDE, DS)
The official implementation of the paper "Self-Updatable Large Language Models by Integrating Context into Model Parameters"
Source code and demo for MemoryBank and SiliconFriend
This repository introduces a comprehensive paper list, datasets, methods, and tools for memory research.
Benchmarking Chat Assistants on Long-Term Interactive Memory (ICLR 2025)
Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation
Native Multimodal Models are World Learners
Optimize prompts, code, and more with AI-powered Reflective Text Evolution
"Paper2Slides: From Paper to Presentation in One Click"
Codebase for "How Alignment Shrinks the Generative Horizon"
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible.
SGLang is a fast serving framework for large language models and vision language models.
Official PyTorch implementation for "Large Language Diffusion Models"
MemGen: Weaving Generative Latent Memory for Self-Evolving Agents
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
bwasti / torchtitan
Forked from pytorch/torchtitan. A PyTorch native platform for training generative AI models.
The code for NeurIPS 2025 paper "A-MEM: Agentic Memory for LLM Agents"
Code and documentation to train Stanford's Alpaca models, and generate the data.
Supercharge Your LLM with the Fastest KV Cache Layer
Fast and memory-efficient exact attention
Search-R1: An efficient, scalable RL training framework for LLMs with interleaved reasoning and search-engine calling, based on veRL