Implementation of the stacked denoising autoencoder in TensorFlow
A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
PyTorch implementations of various types of autoencoders
TensorFlow Examples
Pivotal Token Search
Official Code for Paper: Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation
SANSA - sparse EASE for millions of items
Sparse Autoencoders (SAE) vs CLIP fine-tuning fun.
[JAMIA] Official repository of Deep Propensity Network - Sparse Autoencoder (DPN-SA)
Sparse Embedding Compression for Scalable Retrieval in Recommender Systems
Collection of autoencoder models in TensorFlow
Answers the question "How to do patching on all available SAEs on GPT-2?" — the official repository for the paper "Evaluating Open-Source Sparse Autoencoders on Disentangling Factual Knowledge in GPT-2 Small"
[NeurIPS 2025] This is the official repository for VL-SAE: Interpreting and Enhancing Vision-Language Alignment with a Unified Concept Set
Official Triton kernels for TopK and HierarchicalTopK Sparse Autoencoder decoders.
Semi-supervised learning for digit recognition using a sparse autoencoder
Repository for "From What to How: Attributing CLIP's Latent Components Reveals Unexpected Semantic Reliance"
Implementations and Experiments: Transformers, RoPE, KV cache, SAEs, Tokenisers
Official code for NeurIPS 2025 paper "Revising and Falsifying Sparse Autoencoder Feature Explanations".
Interpret and control dense embeddings via sparse autoencoders.
Multi-Layer Sparse Autoencoders (ICLR 2025)
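For orientation, here is a minimal sketch of the core idea these repositories share: a sparse autoencoder that encodes activations into an overcomplete latent space, keeps only a few active latents per example, and reconstructs the input. This is an illustrative PyTorch sketch, not code from any listed project; the class name, dimensions, and the choice of a TopK activation are assumptions.

```python
# A minimal TopK sparse autoencoder sketch in PyTorch (illustrative only;
# architecture and hyperparameters are assumptions, not from any repo above).
import torch
import torch.nn as nn


class TopKSparseAutoencoder(nn.Module):
    """Encode into an overcomplete latent space, keep the k largest
    latents per example, and reconstruct the input."""

    def __init__(self, d_model: int, d_hidden: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        z = torch.relu(self.encoder(x))
        # Zero out all but the top-k activations per example.
        topk = torch.topk(z, self.k, dim=-1)
        sparse_z = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        return self.decoder(sparse_z), sparse_z


# Usage: train to minimize reconstruction error on model activations.
sae = TopKSparseAutoencoder(d_model=768, d_hidden=768 * 8, k=32)
x = torch.randn(16, 768)  # e.g. residual-stream activations from a transformer
recon, latents = sae(x)
loss = nn.functional.mse_loss(recon, x)
```

With a TopK activation, sparsity is enforced directly rather than via an L1 penalty; other repositories above instead use L1-regularized or hierarchical variants of the same encode-sparsify-decode pattern.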