LoRA's Security Analysis

This repository contains the experimental code for our ICML 2025 paper "Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?".

The submission version of the paper is available on arXiv: https://arxiv.org/abs/2505.12871

Environment

This repository is based on Python 3.10; a similar Python version should also work.

Configure the environment with pip:

pip install -r re.txt

Or build the environment with conda:

conda env create -f lora.yml
conda activate lora
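
To sanity-check the installation, a quick one-liner like the following can help (this assumes PyTorch and Transformers are among the dependencies in re.txt / lora.yml, which the training code appears to rely on):

python -c "import torch, transformers; print('torch', torch.__version__, 'cuda:', torch.cuda.is_available(), 'transformers', transformers.__version__)"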

Experiments

All training scripts are in ./scripts.

Vanilla training under label-poisoning and backdoor attacks:

bash scripts/1.0.poison_label.sh
bash scripts/1.10.poison_backdoor1.sh

Influence of initialization variance:

bash scripts/2.1.variance_nlu_train.sh
bash scripts/2.1.backdoor.variance_nlu_vary.sh

Influence of the rank:

bash scripts/2.3.varying_rank_train.sh
bash scripts/2.3.backdoor.varying_rank_train.sh

Influence of the poisoning and backdoor rates:

bash scripts/2.5.vary_poisonrate.sh
bash scripts/2.7.vary_backdoor.sh
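
These sweeps can run for a long time; one option (not part of the original scripts) is to launch each one in the background with its own log, e.g.:

nohup bash scripts/1.0.poison_label.sh > poison_label.log 2>&1 &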

You can also customize the training script using the following template as a reference (a minimal single-run example follows the template):

#!/bin/bash
# Template for launching NLU training runs while sweeping the LoRA
# initialization variance. Each variance value is dispatched to its own
# GPU inside a background subshell.

echo "HOME: ${HOME}"
export python=${HOME}/anaconda3/envs/lora/bin/python3
export TORCH_USE_CUDA_DSA="1"
export root_dir="${HOME}/LoRASecurity/"
export POD_save_dir="${root_dir}/ckpts/XXXXXXXXXXX/"   # replace with your own checkpoint directory

# Sweep settings: GLUE tasks, GPU ids, data fractions, poison fractions,
# LoRA on/off, random seeds, and the base model.
export task_ls=("sst2" "cola" "qnli" "qqp")
export cuda_ls=(1 2 3 4 5 6 7 0)
export TRAIN_NUMS=(1.0)
export POISON_NUMS=(0.0)
export is_lora_s=("1")
export train_times=(1 2 3 4 5)
export base_ls=("google-bert/bert-large-uncased")

# Training hyperparameters.
export overall_step=10000
export msl=512            # maximum sequence length
export epoch=10
export batch_size=8
export poison_side="backdoor-simple"

# Initialization-variance sweep.
export var_type="1/d"
export var_vls=("2.0" "1.5" "1.0" "0.667" "0.333" "0.1" "0.001" "0.0001")

for (( i=0; i<${#var_vls[@]}; i++ )); do
    export var_value=${var_vls[$i]}
    export cudanum=${cuda_ls[$i]}
    (
        export CUDA_VISIBLE_DEVICES="${cudanum}"
        for task in ${task_ls[*]}; do
            for train_frac in ${TRAIN_NUMS[*]}; do
                for from_path in ${base_ls[*]}; do
                    for poison_frac in ${POISON_NUMS[*]}; do
                        for is_lora in ${is_lora_s[*]}; do
                            # LoRA and full fine-tuning use different learning rates.
                            if [ "${is_lora}" -eq 1 ]; then
                                export lr="3e-5"
                            else
                                export lr="3e-6"
                            fi
                            for train_time in ${train_times[*]}; do
                                echo "======================================================"
                                echo "+++++++task: ${task}+++++++"
                                echo "+++++++cuda: ${cudanum}++++++++"
                                echo "+++++++train_frac: ${train_frac}+++++++"
                                echo "+++++++from_path: ${from_path}+++++++"
                                echo "+++++++poison_frac: ${poison_frac}+++++++"
                                echo "+++++++is_lora: ${is_lora}+++++++"
                                echo "+++++++train_time: ${train_time}+++++++"
                                echo "+++++++var_type: ${var_type}+++++++"
                                echo "+++++++var_value: ${var_value}+++++++"
                                echo "======================================================"
                                export save_path="${POD_save_dir}var_scale--${var_value}_poison_side--${poison_side}_dataset_${task}---trainfrac_${train_frac}---poisonfrac_${poison_frac}---traintime_${train_time}---islora_${is_lora}---frompath_${from_path}"
                                echo "SAVE PATH: ${save_path}"

                                $python ${root_dir}nlu_train.py \
                                    --dataset_name=$task \
                                    --poison_frac=$poison_frac \
                                    --var_type=${var_type} \
                                    --var_value=${var_value} \
                                    --train_num_frac=$train_frac \
                                    --device="cuda" \
                                    --epoch=$epoch \
                                    --poison_side=${poison_side} \
                                    --seed=${train_time} \
                                    --acc_step=1 \
                                    --log_step=50 \
                                    --save_step=1000000 \
                                    --overall_step=${overall_step} \
                                    --LR=$lr \
                                    --use_lora=$is_lora \
                                    --rank=8 \
                                    --lora_alpha=16 \
                                    --batch_size=$batch_size \
                                    --max_length=$msl \
                                    --from_path=$from_path \
                                    --save_path=$save_path

                                echo "DONE FOR THIS LOOP OF THE SCRIPT..."
                            done
                        done
                    done
                done
            done
        done
    ) > "results_${var_value}.log" &   # one log per variance setting to avoid clobbering
done
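
For quick debugging you can also invoke nlu_train.py directly for a single configuration instead of a full sweep. The sketch below simply reuses the flags from the template above; all values are illustrative and the save path is a hypothetical example:

python nlu_train.py \
    --dataset_name="sst2" \
    --poison_frac=0.0 \
    --var_type="1/d" \
    --var_value="1.0" \
    --train_num_frac=1.0 \
    --device="cuda" \
    --epoch=10 \
    --poison_side="backdoor-simple" \
    --seed=1 \
    --acc_step=1 \
    --log_step=50 \
    --save_step=1000000 \
    --overall_step=10000 \
    --LR=3e-5 \
    --use_lora=1 \
    --rank=8 \
    --lora_alpha=16 \
    --batch_size=8 \
    --max_length=512 \
    --from_path="google-bert/bert-large-uncased" \
    --save_path="./ckpts/debug_run"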

Visualization

In ./plot/:

  • vary_pr_plot.py: Figure 1
  • vary_backdoor_pr_plot.py: Figure 2
  • hyprid_vary_rank.py & vary_rank_plot.py: Figure 4
  • hyprid_vary_init.py & vary_init_var.py: Figure 5

In ./theory:

  • visualize_geometry.py: Figure 3
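
Assuming the experiments above have already written their results, the plotting scripts can be run directly with Python, for example (you may need to adjust the result paths inside each script):

python plot/vary_pr_plot.py
python theory/visualize_geometry.py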

Citations

@misc{lora-s,
      title={Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?}, 
      author={Zi Liang and Haibo Hu and Qingqing Ye and Yaxin Xiao and Ronghua Li},
      year={2025},
      eprint={2505.12871},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2505.12871}, 
}
