• 📖 Overview • 🤗 Collections • ⚙️ Setup • 🏋️ Training • 📊 Evaluation • 📝 Citation • ✉️ Contact
We introduce ExpandR, a joint optimization framework that enhances dense retrieval by aligning Large Language Models (LLMs) with retriever preferences through query expansion.
ExpandR prompts LLMs to generate query expansions and uses them to guide both retriever training and LLM refinement. To improve alignment, ExpandR incorporates retriever reward and self-reward signals and applies Direct Preference Optimization (DPO) to fine-tune the LLM. This joint training strategy encourages the LLM to generate expansions that are not only semantically rich but also tailored to the retrieval utility of dense retrievers.
We have made the following resources available in our 🤗 ExpandR collection on Hugging Face.
| Resource | Description | Link |
|---|---|---|
| LLM | The query expansion model, developed using Llama-3-8B | 🤗ExpandR_LLM |
| Retriever | The retriever, developed based on AnchorDR | 🤗ExpandR_Retriever |
| LLM training data | The data used to train the query expansion model | 🤗llm_training_data |
| Retriever training data | The data used to train the retriever | 🤗retriever_training_data |
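Once downloaded, both models can be loaded with the standard `transformers` API. A minimal sketch follows; the repository IDs are placeholders, so substitute the actual links from the collection above.

```python
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

# Placeholder repo IDs -- substitute the actual links from the collection above.
llm_id = "NEUIR/ExpandR_LLM"              # Llama-3-8B based query expansion model
retriever_id = "NEUIR/ExpandR_Retriever"  # AnchorDR based dense retriever

llm_tokenizer = AutoTokenizer.from_pretrained(llm_id)
llm = AutoModelForCausalLM.from_pretrained(llm_id, device_map="auto")

retriever_tokenizer = AutoTokenizer.from_pretrained(retriever_id)
retriever = AutoModel.from_pretrained(retriever_id)
```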
(1) Use git clone to download this project:
```bash
git clone git@github.com:NEUIR/ExpandR.git
cd ExpandR
```
(2) Install the following packages using pip or conda in your environment (please install the dependencies in the order listed to avoid version conflicts).
```
Python=3.10.14
torch=1.13.1
tqdm
trl
vllm
accelerate
deepspeed
peft
cd src/beir
pip install -e .
faiss-gpu==1.7.2
jsonlines
sentence-transformers==2.2.2
datasets==1.18.3
numpy==1.23.5
cd src/transformers
pip install -e .
omegaconf==2.0.6
hydra-core==1.0.7
sacrebleu==2.3.1
editdistance
huggingface_hub==0.13.4
```
We use eight datasets from the public portion of the dataset curated by the authors of *Repetition Improves Language Model Embeddings*. The dataset can be downloaded from the GitHub page of the Echo embeddings repository. To use the training scripts, place the downloaded dataset in the `data` directory with the following layout:
```
data
└── echo-data
    ├── eli5_question_answer.jsonl
    ├── fever.jsonl
    ├── hotpot_qa.jsonl
    ├── msmarco_document.jsonl
    ├── msmarco_passage.jsonl
    ├── nq.jsonl
    ├── squad.jsonl
    └── trivia_qa.jsonl
```
To merge these files, run the following command:
```bash
cd data/echo-data
cat *.jsonl > merge_data_80w.jsonl
```
Then run the following command to randomly split the data into two parts:
```bash
python ExpandR/src/split.py
```
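For reference, a minimal sketch of what such a random split might look like is shown below; the 50/50 ratio, the seed, and the output file names are assumptions here, and `ExpandR/src/split.py` remains the authoritative implementation.

```python
import random

random.seed(42)  # assumed seed, for a reproducible split

with open("data/echo-data/merge_data_80w.jsonl") as f:
    lines = f.readlines()

random.shuffle(lines)

# Assumed 50/50 split: one half for retriever training, one half for LLM training.
mid = len(lines) // 2
with open("data/echo-data/split_part1.jsonl", "w") as f:
    f.writelines(lines[:mid])
with open("data/echo-data/split_part2.jsonl", "w") as f:
    f.writelines(lines[mid:])
```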
You can download the checkpoint of our trained retriever directly from [here] and use it, or follow the steps below to train it yourself.
(1) First step: Download the related model
You need to download the AnchorDR model as the vanilla retriever.
(2) Second step: Construct supervised contrastive training data
Next, construct a dataset for supervised training by running the script below, which generates query expansions with the LLM and splits the dataset. Our constructed dataset has been uploaded to [huggingface]; you can download and use it directly.
```bash
cd ExpandR/scripts
bash gen_supervised_data.sh
```
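As a rough illustration of the expansion-generation step inside this script, the snippet below prompts the vanilla LLM through vLLM (one of the listed dependencies). The prompt template and sampling parameters are assumptions, not the repository's actual settings.

```python
from vllm import LLM, SamplingParams

# Generate a pseudo-passage expansion for each query with the vanilla LLM.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
params = SamplingParams(temperature=0.8, max_tokens=256)

queries = ["what causes ocean tides"]
prompts = [f"Write a passage that answers the following query: {q}" for q in queries]

outputs = llm.generate(prompts, params)
expansions = [o.outputs[0].text.strip() for o in outputs]
```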
(3) Third step: Training the retriever model
After constructing the training data, you can start training the retriever model.
```bash
bash supervised_train.sh
```
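Conceptually, the retriever is trained with a contrastive objective over queries augmented with their LLM expansions. The sketch below shows a standard InfoNCE-style loss for this setup, assuming in-batch negatives; the temperature and batching details are assumptions, so see `supervised_train.sh` for the actual hyperparameters.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q_emb, pos_emb, neg_emb, temperature=0.05):
    """InfoNCE-style loss over (expanded query, positive, negatives).

    q_emb:   (B, d) embeddings of queries concatenated with their expansions.
    pos_emb: (B, d) positive passage embeddings.
    neg_emb: (B*k, d) hard-negative passage embeddings.
    """
    docs = torch.cat([pos_emb, neg_emb], dim=0)                # (B + B*k, d)
    scores = q_emb @ docs.T / temperature                      # (B, B + B*k)
    labels = torch.arange(q_emb.size(0), device=q_emb.device)  # positive is doc i
    return F.cross_entropy(scores, labels)
```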
You can download the LoRA checkpoint of ExpandR's generator directly from [here] and merge it, or follow the steps below to train it yourself.
(1) First step: Download the related model
You need to download the Llama-3-8B-Instruct model as the vanilla generation model.
(2) Second step: Construct DPO training data
Next, construct a dataset for DPO training by running the script below, which involves several steps: generating query expansions with the LLM, filtering the data with the reward model, and splitting the dataset. Our constructed dataset has been uploaded to [huggingface]; you can download and use it directly.
```bash
cd ExpandR/scripts
bash gen_dpo_data.sh
```
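To give a feel for the selection step, the sketch below turns reward-scored expansions into DPO preference pairs by combining the retriever-reward and self-reward signals described in the overview. The field names, the plain-sum combination, and the prompt are illustrative assumptions rather than the script's exact logic.

```python
def build_dpo_pair(query, candidates):
    """candidates: dicts with 'expansion', 'retriever_reward', 'self_reward' keys.

    The highest-scoring expansion becomes 'chosen', the lowest 'rejected'.
    """
    ranked = sorted(
        candidates,
        key=lambda c: c["retriever_reward"] + c["self_reward"],
        reverse=True,
    )
    return {
        "prompt": f"Write a passage that answers the following query: {query}",
        "chosen": ranked[0]["expansion"],
        "rejected": ranked[-1]["expansion"],
    }
```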
(3) Third step: Training the generation model
After constructing the training data, you can start training the query expansion generation model.
```bash
bash dpo_train.sh
```
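For reference, DPO fine-tunes the LLM on these preference pairs with the objective sketched below (`beta=0.1` is an assumed value; `trl`, which is among the listed dependencies, provides trainers implementing this loss):

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO objective on sequence log-probabilities.

    pi_*  : log-probs of the chosen/rejected expansions under the policy LLM.
    ref_* : log-probs under the frozen reference (vanilla) LLM.
    """
    margins = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -F.logsigmoid(margins).mean()
```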
(4) Fourth step: Combine the weights
You need to merge the LoRA weights of the generation model trained in the third step into the base model.
```bash
bash merge_lora.sh
```
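Under the hood this amounts to the standard PEFT merge, roughly as follows (the checkpoint paths are placeholders):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, apply the LoRA adapter, and fold the deltas into the weights.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = PeftModel.from_pretrained(base, "path/to/lora_checkpoint")
merged = model.merge_and_unload()
merged.save_pretrained("path/to/expandr_generator")

AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct").save_pretrained(
    "path/to/expandr_generator"
)
```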
After training the ExpandR model, you can test the performance of ExpandR on BEIR with the following command (multi-GPU evaluation is supported):
```bash
CUDA_VISIBLE_DEVICES=0 bash ExpandR/scripts/eval_beir_15.sh
```
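For orientation, a single-dataset BEIR evaluation looks roughly like the snippet below; `eval_beir_15.sh` runs the full 15-dataset suite and additionally prepends the LLM's expansions to the queries before retrieval. The SentenceBERT wrapper and paths here are illustrative, since the actual retriever is AnchorDR-based and wrapped by the repository's own code.

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch

# Download one BEIR dataset (SciFact) and load its test split.
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Wrap the trained retriever and run exact dense retrieval.
model = DenseRetrievalExactSearch(models.SentenceBERT("path/to/expandr_retriever"))
retriever = EvaluateRetrieval(model, score_function="dot")
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
```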
If you find this work useful, please cite our paper and give us a shining star 🌟
```bibtex
@misc{yao2025expandrteachingdenseretrievers,
      title={ExpandR: Teaching Dense Retrievers Beyond Queries with LLM Guidance},
      author={Sijia Yao and Pengcheng Huang and Zhenghao Liu and Yu Gu and Yukun Yan and Shi Yu and Ge Yu},
      year={2025},
      eprint={2502.17057},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2502.17057},
}
```
If you have questions, suggestions, or bug reports, please email:
ysj1426746590@outlook.com