Stars
[ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration
Comprehensive tools and frameworks for developing foundation models tailored to recommendation systems.
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
[ICLR 2025 Spotlight] Weak-to-strong preference optimization: stealing reward from weak aligned model
Locating and editing factual associations in GPT (NeurIPS 2022)
Mass-editing thousands of facts into a transformer memory (ICLR 2023)
We introduce EMMET and unify model editing with the popular algorithms ROME and MEMIT.
[AAAI'26 Oral] Official Implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data
Official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation" (ICLR 2025 Oral).
AnyEdit: Edit Any Knowledge Encoded in Language Models, ICML 2025
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
Neuron-Level Sequential Editing for Large Language Models, ACL 2025 Main
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper)