This is a PyTorch/GPU re-implementation of *Next-Embedding Prediction Makes Strong Vision Learners*.

NEPA (Next-Embedding Predictive Autoregression) splits an image into patches and embeds them into a sequence; an autoregressive model is then trained to predict the next embedding from the previous ones.
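Conceptually, the objective looks like the toy sketch below. This is illustrative only and not the training code in this repo: the patch embedder, causal Transformer, and MSE loss are simplified stand-ins for the actual NEPA components.

```python
import torch
import torch.nn as nn

# Illustrative sketch of next-embedding prediction (NOT the repo's training code).
# A causally-masked Transformer sees patch embeddings 1..i and predicts embedding i+1.
class ToyNEPA(nn.Module):
    def __init__(self, dim=768, patch=14, depth=4, heads=12):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify + embed
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, dim)  # predicts the next patch embedding

    def forward(self, images):
        x = self.embed(images).flatten(2).transpose(1, 2)          # (B, N, dim)
        n = x.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(n).to(x.device)
        h = self.encoder(x, mask=mask)                              # causal attention
        pred = self.head(h[:, :-1])                                 # predictions for embeddings 2..N
        target = x[:, 1:].detach()                                  # the actual next embeddings
        return nn.functional.mse_loss(pred, target)

loss = ToyNEPA()(torch.randn(2, 3, 224, 224))
```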
```bibtex
@article{six2025nepa,
  title={Next-Embedding Prediction Makes Strong Vision Learners},
  author={Sihan Xu and Ziqiao Ma and Wenhao Chai and Xuweiyi Chen and Weiyang Jin and Joyce Chai and Saining Xie and Stella X. Yu},
  journal={arXiv preprint arXiv:2512.16922},
  year={2025}
}
```
The codebase has been tested with the following environment:
- Python 3.10
- PyTorch 2.8.0
- Transformers 4.56.2
First, clone the repository:
```bash
git clone https://github.com/SihanXU/nepa
cd nepa
```

Then, create a conda environment and install dependencies:
```bash
conda env create -f environment.yml
conda activate nepa
```

Alternatively, you can install the dependencies manually:
```bash
pip install -r requirements.txt
```

Here's a simple example to run inference with a pretrained NEPA model:
```python
from transformers import AutoImageProcessor
from models.vit_nepa import ViTNepaForImageClassification
from PIL import Image
import requests

url = 'https://raw.githubusercontent.com/pytorch/hub/master/images/dog.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('SixAILab/nepa-large-patch14-224-sft')
model = ViTNepaForImageClassification.from_pretrained('SixAILab/nepa-large-patch14-224-sft')

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# the model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

To download pretrained models from the Hugging Face Hub, you need to authenticate with your Hugging Face account:
```bash
hf auth login
```
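If you prefer to authenticate from Python (for example, inside a notebook), the `huggingface_hub` library provides an equivalent helper; this is just an alternative to the CLI command above:

```python
from huggingface_hub import login

# Prompts for your Hugging Face access token (you can also pass token="hf_..." directly).
login()
```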
We use Weights & Biases (wandb) to log and track experiments:

```bash
pip install wandb
wandb login
```

We use the ImageNet-1k dataset for training and evaluation. To download the dataset via Hugging Face Datasets:
```bash
python download_dataset.py
```

This script will download and prepare the ImageNet-1k dataset. Note that it requires approximately 150 GB of disk space, and you may need to accept the dataset terms on Hugging Face before downloading.
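Alternatively, you can pull the gated dataset yourself with Hugging Face Datasets. This is only a minimal sketch, not the contents of `download_dataset.py`; it assumes you have accepted the ImageNet-1k terms on the Hub, authenticated as described above, and that `ILSVRC/imagenet-1k` is the current Hub repo id:

```python
from datasets import load_dataset

# Gated dataset: requires `hf auth login` and accepting the terms on the Hub first.
# The full download is roughly 150 GB.
train_ds = load_dataset("ILSVRC/imagenet-1k", split="train")
val_ds = load_dataset("ILSVRC/imagenet-1k", split="validation")
print(train_ds)
```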
We provide pretrained checkpoints for NEPA models. The following table compares our reproduced results with the paper's, reported as ImageNet-1k top-1 accuracy (%); the paper uses SwiGLU activations, while our reproduction uses GELU:

| Model | SwiGLU (paper) | GELU (reproduced) |
|---|---|---|
| Nepa-B | 83.8 | 83.75 |
| Nepa-L | 85.3 | 85.40 |
To evaluate the base model on the ImageNet-1k validation set:

```bash
bash scripts/eval/nepa_b_sft_eval.sh
```

This should give:
```
***** eval metrics *****
eval_accuracy = 0.8375
eval_loss = 0.7169
```
To evaluate the large model:
```bash
bash scripts/eval/nepa_l_sft_eval.sh
```

This should give:
```
***** eval metrics *****
eval_accuracy = 0.854
eval_loss = 0.6371
```
To fine-tune a pretrained NEPA model on ImageNet-1k for image classification:
For the base model:
```bash
bash scripts/finetune/nepa_b_sft.sh
```

For the large model:
```bash
bash scripts/finetune/nepa_l_sft.sh
```

You can modify the training hyperparameters (learning rate, batch size, epochs, etc.) in the corresponding script files.
To pretrain NEPA from scratch on ImageNet-1k:
For the base model:
```bash
bash scripts/pretrain/nepa_b.sh
```

For the large model:
```bash
bash scripts/pretrain/nepa_l.sh
```

Pretraining typically requires multiple GPUs. We recommend using at least 8 A100 GPUs for the large model.
After pretraining, you can convert the pretrained model into a classification model by initializing a classification head with the `init_nepa_cls_from_pretrain.py` script. Here is an example:
```bash
python init_nepa_cls_from_pretrain.py \
    --pretrained_model_id SixAILab/nepa-base-patch14-224 \
    --config_model_id configs/finetune/nepa-base-patch14-224-sft \
    --pretrained_revision main \
    --save_local \
    --local_dir ./nepa-base-patch14-224-sft
```
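The resulting local checkpoint can then be loaded just like the released SFT models; a minimal sketch, assuming the `--local_dir` path used above:

```python
from models.vit_nepa import ViTNepaForImageClassification

# Load the classification model whose head was just initialized from the pretrained backbone.
model = ViTNepaForImageClassification.from_pretrained('./nepa-base-patch14-224-sft')
```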
We gratefully acknowledge the developers of Transformers, Datasets, Evaluate, and timm for their excellent open-source contributions.
Feel free to contact me by email (sihanxu@umich.edu). Enjoy!