I saved my LoRA weights with model.save_pretrained(path, safe_serialization=True) and expected the output to actually be safe, but the adapter was written in PyTorch pickle format instead (I unzipped adapter_model.bin to check, and it contained a pickle). Silently writing an unsafe checkpoint when safe serialization was explicitly requested is dangerous: it encourages riskier behavior when people share and load shared checkpoints, because they will assume files saved this way are safe to open. Please either support safetensors for PEFT model weights (greatly preferred!) or raise an exception (or at least print a warning) when safe_serialization=True is requested, so that people do not mistake unsafe checkpoints for safe ones.
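In the meantime, a quick way to verify what was actually written is to inspect the file's magic bytes: torch.save has produced a zip archive (with a pickle inside) since PyTorch 1.6, legacy torch.save output starts with a raw pickle opcode, and a safetensors file starts with an 8-byte little-endian header length followed by a JSON header. A minimal sketch (checkpoint_format is a hypothetical helper name, not part of any library):

```python
def checkpoint_format(path):
    """Best-effort guess at a checkpoint's on-disk format from its magic bytes."""
    with open(path, "rb") as f:
        head = f.read(9)
    if head.startswith(b"PK\x03\x04"):
        # zip archive: torch.save default since PyTorch 1.6, pickle inside
        return "torch-zip"
    if head.startswith(b"\x80"):
        # raw pickle opcode: legacy torch.save format
        return "torch-legacy"
    if len(head) == 9 and head[8:9] == b"{":
        # 8-byte little-endian header length followed by a JSON header
        return "safetensors"
    return "unknown"
```

Running this on adapter_model.bin reports "torch-zip" rather than "safetensors", which is how I confirmed the flag was being ignored.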