Implementation of Flash-DLM (paper: FlashDLM: Accelerating Diffusion Language Models via Efficient KV Caching and Guided Diffusion). Provides training-free methods to accelerate diffusion language model inference.
Guided diffusion drafts multiple tokens, and a verifier filters them using the main model's confidence. The codebase focuses on speed–accuracy tradeoffs, visualization, and a modular design for easy benchmarking and research.
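The draft-and-verify loop can be sketched roughly as below; the function name, tensor shapes, and threshold are illustrative assumptions for this sketch, not the repository's actual API.

```python
# Hedged sketch: confidence-based filtering of drafted tokens, assuming the
# main model's logits at the drafted positions are already available.
import torch

def filter_drafted_tokens(logits: torch.Tensor,
                          drafted_ids: torch.Tensor,
                          threshold: float = 0.9) -> torch.Tensor:
    """Keep drafted tokens whose probability under the main model exceeds
    `threshold`; rejected positions are returned as -1 so the caller can
    send them back through additional denoising steps.

    logits:      (seq_len, vocab_size) main-model logits at drafted positions
    drafted_ids: (seq_len,) token ids proposed by the draft step
    """
    probs = torch.softmax(logits, dim=-1)                            # per-position distributions
    conf = probs.gather(-1, drafted_ids.unsqueeze(-1)).squeeze(-1)   # main-model confidence of each draft
    accepted = conf >= threshold                                     # boolean acceptance mask
    return torch.where(accepted, drafted_ids,
                       torch.full_like(drafted_ids, -1))

# Toy usage with random tensors:
logits = torch.randn(4, 32000)
drafted = torch.randint(0, 32000, (4,))
print(filter_drafted_tokens(logits, drafted, threshold=0.5))
```

A fixed threshold is only one way to trade speed for accuracy; accepting the top-k most confident drafts per step would expose the same tradeoff with a different knob.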