This is the official public repository for the NeurIPS 2025 paper "Buffer Layers for Test-Time Adaptation".
- Python == 3.12
- PyTorch == 2.2.1
- torchvision
- timm
Our codebase depends on the environment provided by mariodoebler's test-time-adaptation benchmark repository ➜ MarioDobler. Please refer to it for setting up the required environment.
We provide example configurations for WideResNet and ResNeXt models on CIFAR10-C and CIFAR100-C, using two representative TTA methods: TENT and EATA.
The source code for each model can be found in the following files:
- CIFAR10-C / WRN-28 → `models/custom_standard.py`
- CIFAR10-C / ResNeXt → `models/custom_standard_cifar10_resnext.py`
- CIFAR100-C / WRN-40 → `models/custom_standard_cifar100_wrn40.py`
- CIFAR100-C / ResNeXt → `models/custom_standard_cifar100.py`
Example method configurations are also provided:
- TENT_Buffer → `methods/tent_buffer.py`
- EATA_Buffer → `methods/eata_buffer.py`
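In TENT-style methods, `configure_model` decides which modules stay trainable at test time and `collect_params` gathers their parameters for the optimizer; a Buffer variant enables the Buffer layers instead of (or in addition to) the normalization layers. Below is a minimal sketch of that pattern. The `BufferLayer` here is a hypothetical stand-in (a per-feature scale) for the class defined in `models/custom_standard.py`; see `methods/tent_buffer.py` and `methods/eata_buffer.py` for the actual adaptations.

```python
import torch
import torch.nn as nn


class BufferLayer(nn.Module):
    """Hypothetical stand-in for the repo's BufferLayer: a per-feature
    scale kept trainable at test time (real one: models/custom_standard.py)."""

    def __init__(self, num_features: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale  # broadcasts over the feature dimension


def configure_model(model: nn.Module) -> nn.Module:
    """Freeze the pretrained backbone; enable gradients only for Buffer
    layers (mirroring how TENT enables only normalization layers)."""
    model.train()
    model.requires_grad_(False)
    for m in model.modules():
        if isinstance(m, BufferLayer):
            m.requires_grad_(True)
    return model


def collect_params(model: nn.Module):
    """Gather only the Buffer-layer parameters for the TTA optimizer."""
    params, names = [], []
    for name, m in model.named_modules():
        if isinstance(m, BufferLayer):
            for pname, p in m.named_parameters():
                params.append(p)
                names.append(f"{name}.{pname}")
    return params, names


model = nn.Sequential(nn.Linear(8, 8), BufferLayer(8), nn.Linear(8, 2))
configure_model(model)
params, names = collect_params(model)  # only the BufferLayer's scale
```

The returned `params` would then be handed to the optimizer, e.g. `torch.optim.Adam(params, lr=1e-3)`, so adaptation updates touch only the Buffer layers.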
The corresponding configuration files can be found in `cfgs/`.
See `run.sh` for example run scripts.
Our method is highly flexible:
➡️ it supports any pretrained model, and
➡️ it can be applied to any test-time adaptation (TTA) method, including but not limited to TENT, EATA, and future extensions.
You can easily apply our method to any pretrained model and any TTA method by following the steps below:
1. Prepare the source code of the target model.
   Make sure you have access to the forward implementation you want to modify.
2. Insert the `BufferLayer` into the model architecture.
   Add the `BufferLayer` class at the beginning of the model source code and integrate it properly.
   (See `models/custom_standard.py` for reference.)
3. Load the pretrained weights while keeping the Buffer layers randomly initialized.
   Match the original pretrained weights to the loaded model and leave the Buffer layers at their random initialization.
   (See `test_time.py` for the matching logic.)
4. Modify the TTA method to incorporate the Buffer layers.
   Adjust the `configure_model` and `collect_params` functions of your TTA method.
   (See `methods/tent_buffer.py` or `methods/eata_buffer.py`.)
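The weight-loading step can be sketched as follows. This is an illustrative toy, not the repo's code: `BufferLayer` stands in for the class from `models/custom_standard.py` (here a simple per-feature affine), and `Backbone` / `with_buffer` are hypothetical names. The key idea is that loading the pretrained checkpoint with `strict=False` copies every matching weight while the Buffer parameters, absent from the checkpoint, keep their fresh initialization; `test_time.py` contains the actual matching logic.

```python
import torch
import torch.nn as nn


class BufferLayer(nn.Module):
    """Hypothetical stand-in for the repo's BufferLayer: a per-feature
    affine transform (real definition: models/custom_standard.py)."""

    def __init__(self, num_features: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_features))
        self.shift = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale + self.shift


class Backbone(nn.Module):
    """Toy backbone; with_buffer=True splices a BufferLayer between layers
    without renaming the original submodules, so checkpoint keys still match."""

    def __init__(self, with_buffer: bool = False):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.buffer = BufferLayer(8) if with_buffer else nn.Identity()
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(self.buffer(torch.relu(self.fc1(x))))


# Step 3: copy the pretrained weights; keys absent from the checkpoint
# (the freshly inserted Buffer layers) keep their random initialization.
pretrained = Backbone(with_buffer=False)
adapted = Backbone(with_buffer=True)
missing, unexpected = adapted.load_state_dict(pretrained.state_dict(), strict=False)
# `missing` now lists only the Buffer parameters, which stay at their init.
```

Keeping the original submodule names when inserting Buffer layers is what makes this work: the checkpoint keys line up one-to-one with the pretrained modules, and only the new Buffer keys are reported as missing.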
We are currently developing a wrapper library that automates the entire procedure described above. Once it is ready and publicly released, we will update this repository accordingly.
Our codebase is heavily based on the excellent TTA benchmark repository by mariodoebler. We sincerely appreciate their contribution to the community.
If you find our work useful or valuable for your research, please consider citing us:
```bibtex
@inproceedings{kimbuffer,
  title={Buffer Layers for Test-Time Adaptation},
  author={Kim, Hyeongyu and Han, GeonHui and Hwang, Dosik},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025}
}
```