Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3).
Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
[ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
LLM KV cache compression made easy
Helios: Real-Time Long Video Generation Model
LongBench v2 and LongBench (ACL '25 & '24)
[ICLR 2026] LongLive: Real-time Interactive Long Video Generation
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch
Large Context Attention
The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory"
Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch
Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718
LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA
✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy
Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality"
PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" (https://arxiv.org/abs/2404.07143)
[ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference
[EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs
[TMLR 2026] Survey: https://arxiv.org/pdf/2507.20198
Implementation of Recurrent Memory Transformer (NeurIPS 2022) in PyTorch