Buffer Layers for Test-Time Adaptation

This is the official public repository for the NeurIPS 2025 paper "Buffer Layers for Test-Time Adaptation".

arXiv link



Requirements

  • Python == 3.12
  • PyTorch == 2.2.1
  • torchvision
  • timm

Our codebase depends heavily on the environment provided by the following GitHub repository ➜ MarioDobler. Please refer to it to set up the required environment.


📌 Provided Examples

We provide example configurations for WideResNet and ResNeXt models on CIFAR10-C and CIFAR100-C, using two representative TTA methods: TENT and EATA.

🔹 Models

Each model source code example can be found in the following folders:

  • CIFAR10-C / WRN-28: models/custom_standard.py
  • CIFAR10-C / ResNeXt: models/custom_standard_cifar10_resnext.py
  • CIFAR100-C / WRN-40: models/custom_standard_cifar100_wrn40.py
  • CIFAR100-C / ResNeXt: models/custom_standard_cifar100.py

🔹 Methods

Example method configurations are also provided:

  • TENT_Buffer: methods/tent_buffer.py
  • EATA_Buffer: methods/eata_buffer.py

🔹 Configurations

The corresponding configuration files can be found in /cfgs.

🔹 Script

See run.sh for example run scripts.


🔥 Generality of Our Approach

Our method is highly flexible:
➡️ It supports any pretrained model, and
➡️ It can be applied to any test-time adaptation (TTA) technique, including but not limited to TENT, EATA, and future TTA methods.

🔧 How to Apply Our Method to Any Model and Any TTA Technique

You can easily apply our method to any pretrained model and any TTA method by following the steps below (a hedged end-to-end sketch follows the list):

  1. Prepare the source code of the target model
    Make sure you have access to the forward implementation you want to modify.

  2. Insert the BufferLayer into the model architecture
    Define the BufferLayer class at the top of the model source file and integrate it into the model's forward path.
    (See models/custom_standard.py for reference.)

  3. Load pretrained weights while keeping Buffer layers randomly initialized
    Match the original pretrained weights to the loaded model and leave the Buffer layers randomly initialized.
    (See test_time.py for the matching logic.)

  4. Modify the TTA method to incorporate Buffer layers
    Adjust the configure_model and collect_params functions for your TTA method.
    (See methods/tent_buffer.py or methods/eata_buffer.py.)
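
The sketch below ties the four steps together. It is a minimal illustration under stated assumptions, not the repository's actual code: BufferLayer here is a hypothetical 1x1-convolution stand-in for the architecture in models/custom_standard.py, BufferedNet is a toy backbone, and load_pretrained, configure_model, and collect_params only mirror the roles of the matching logic in test_time.py and the hooks in methods/tent_buffer.py.

import torch
import torch.nn as nn

# Hypothetical BufferLayer: a lightweight module inserted in front of
# (or between) pretrained blocks and adapted at test time. The actual
# architecture used in the paper lives in models/custom_standard.py.
class BufferLayer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return self.proj(x)

# Step 2: integrate the BufferLayer into the target architecture.
# BufferedNet is a toy classifier standing in for WRN-28 / ResNeXt.
class BufferedNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.buffer = BufferLayer(3)                # new, no pretrained weights
        self.stem = nn.Conv2d(3, 16, 3, padding=1)  # pretrained
        self.head = nn.Linear(16, num_classes)      # pretrained

    def forward(self, x):
        x = torch.relu(self.stem(self.buffer(x)))
        return self.head(x.mean(dim=(2, 3)))        # global average pool

# Step 3: load pretrained weights with strict=False so that checkpoint
# keys are matched while the buffer.* keys stay randomly initialized
# (compare the matching logic in test_time.py).
def load_pretrained(model, ckpt_path):
    state = torch.load(ckpt_path, map_location="cpu")
    missing, unexpected = model.load_state_dict(state, strict=False)
    assert all(k.startswith("buffer") for k in missing)
    return model

# Step 4: TTA hooks restricted to Buffer layers
# (compare methods/tent_buffer.py and methods/eata_buffer.py).
def configure_model(model):
    model.train()
    model.requires_grad_(False)
    for m in model.modules():
        if isinstance(m, BufferLayer):
            m.requires_grad_(True)
    return model

def collect_params(model):
    params, names = [], []
    for name, m in model.named_modules():
        if isinstance(m, BufferLayer):
            for p_name, p in m.named_parameters():
                params.append(p)
                names.append(f"{name}.{p_name}")
    return params, names

# Usage: adapt only the Buffer parameters with a TENT-style entropy loss.
model = configure_model(BufferedNet())
# load_pretrained(model, "checkpoint.pth")  # with a real checkpoint path
params, _ = collect_params(model)
optimizer = torch.optim.SGD(params, lr=1e-3)

logits = model(torch.randn(8, 3, 32, 32))
log_probs = logits.log_softmax(dim=1)
entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
entropy.backward()
optimizer.step()

Because configure_model freezes everything except the Buffer layers, the entropy gradient updates only the Buffer parameters and the pretrained backbone weights are never overwritten during adaptation.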

⭐ Upcoming Features

We are currently developing a wrapper library that automates the entire procedure described above. Once it is ready and publicly released, we will update this repository accordingly.


🙏 Acknowledgment

Our codebase is heavily based on the excellent TTA benchmark repository by mariodoebler. We sincerely appreciate their contribution to the community.

📚 Citation

If you find our work useful or valuable for your research, please consider citing us:

@inproceedings{kimbuffer,
  title={Buffer Layers for Test-Time Adaptation},
  author={Kim, Hyeongyu and Han, GeonHui and Hwang, Dosik},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025}
}
