To install `automia` (editable):

```bash
python -m pip install -e ".[all]"
```

or `automia` as a package:

```bash
pip install automia
```
A sandbox environment (`env.sh`) that will be used to execute the generated MIAs should be installed with common computation packages (such as numpy, torch, scipy, ...) and any packages required to run the MIAs. We recommend saving the model's outputs (i.e., logits) with safetensors and reloading them in each trial to save computational cost.
- Prepare a codebase template including `env.sh`, `template.py`, and `config.yaml`. An example is in `examples/bbllm/arxiv_pythia`.
- Run vLLM servers.
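One way to bring up the two servers is sketched below. The model paths and names here are placeholders (any models served under the names `qwen` and `qwen-embedding` on ports 9800 and 9700 would match the command that follows), and the embedding task flag may differ across vLLM versions.

```shell
# Sketch (assumption): launch two vLLM OpenAI-compatible servers on the
# ports used by the automia command below. Model paths are placeholders.
vllm serve Qwen/Qwen2.5-7B-Instruct \
    --served-model-name qwen --port 9800 &

# The task flag for embedding models may be --task embed or --task embedding,
# depending on your vLLM version.
vllm serve Qwen/Qwen3-Embedding-0.6B \
    --served-model-name qwen-embedding --task embed --port 9700 &
```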
`automia` requires two APIs (an LLM and an embedding model). Then run `automia`:

```bash
python -m automia.main \
    --experiment-dir examples/bbllm/arxiv_pythia \
    --timeout 300 \
    --model-name qwen \
    --base-url http://localhost:9800/v1 \
    --provider vllm \
    --embedding-model-name qwen-embedding \
    --embedding-base-url http://localhost:9700/v1 \
    --budget 100 \
    --output-dir results/bbllm/arxiv
```
`eval.py` and `vis.py` are available to evaluate the top 10 MIAs and to visualize the MIAs.

```bash
python vis.py --output-dir results/bbllm/arxiv
```

An HTML file `index.html` will be written into `results/bbllm/arxiv`.

```bash
python eval.py --template examples/bbllm/arxiv_pythia/template.py --output-dir results/bbllm/arxiv
```

A CSV file will be written into `results/bbllm/arxiv`.
Please make sure the stored data can be reloaded by our example `template.py` (line 135 in `examples/bbllm/arxiv_pythia/template.py`) by running the `generate.py` scripts first.
The `template.py` must accept an `--output-dir` argument, create this output directory, and write the results into `mia-results.json` with four keys: `"auc_score"`, `"tpr_1_score"`, `"tpr_5_score"`, and `"combined_score"`.
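A minimal skeleton satisfying this contract might look like the following. The scores written here are placeholders; a real `template.py` computes them from the MIA, and the `write_results` helper name is our own, not part of `automia`.

```python
# Sketch (assumption): minimal template.py satisfying the contract above.
# It accepts --output-dir, creates the directory, and writes
# mia-results.json with the four required keys. Scores are placeholders.
import argparse
import json
import os

REQUIRED_KEYS = ("auc_score", "tpr_1_score", "tpr_5_score", "combined_score")


def write_results(output_dir, scores):
    """Create output_dir and write the scores dict to mia-results.json."""
    assert all(k in scores for k in REQUIRED_KEYS)
    os.makedirs(output_dir, exist_ok=True)
    with open(os.path.join(output_dir, "mia-results.json"), "w") as f:
        json.dump(scores, f)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--output-dir", required=True)
    args = parser.parse_args()
    # Placeholder values; a real template computes these from the MIA.
    write_results(args.output_dir, {
        "auc_score": 0.5,
        "tpr_1_score": 0.01,
        "tpr_5_score": 0.05,
        "combined_score": 0.5,
    })
```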
Detailed documentation and a step-by-step guide to reproduce our paper's experiments will be available soon! Feel free to reach out to me at <email> or create issues.