Conversation

@kovalexal
Contributor
Hi!

This PR introduces the following changes:

  1. Fixes an error during SVD decomposition when Conv2d layers are present in LoRAs (caused by the missing `fan_in_fan_out` attribute on the Conv2d `LoRALayer`):

     ```python
     if target.fan_in_fan_out:
         delta_weight = delta_weight.T
     ```
  2. Adds the ability to combine multiple adapters by concatenating their A and B matrices. This is much faster than SVD decomposition (on my GPU, SVD takes 30+ seconds while concatenation takes under a second) and is mathematically equivalent to applying multiple LoRAs successively. Be aware that the resulting adapter's rank is the sum of the input ranks, so mixing many LoRAs may make the target LoRA too large; in that case SVD decomposition should be preferred.
  3. Adds the ability to specify a target rank for the SVD decomposition, so one can balance inference speed against loss of precision.
  4. Adds the ability to pass additional parameters to `torch.linalg.svd`, such as the `driver` to use during decomposition.
  5. Adds the ability to disable clamping of matrix values after SVD decomposition. In my experiments, clamping these values for Stable Diffusion LoRAs also influences the results compared to the Automatic1111 webui.
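To illustrate the equivalence claimed in point 2 (this is a minimal numpy sketch, not peft's actual implementation): concatenating the A matrices along the rank axis and the B matrices along the columns yields exactly the same delta weight as summing the individual LoRA deltas.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r1, r2 = 8, 8, 2, 3  # layer dims and the two LoRA ranks

# Two independent LoRA factor pairs (delta_i = B_i @ A_i)
A1, B1 = rng.normal(size=(r1, k)), rng.normal(size=(d, r1))
A2, B2 = rng.normal(size=(r2, k)), rng.normal(size=(d, r2))

# Applying both LoRAs successively adds their delta weights
delta_sum = B1 @ A1 + B2 @ A2

# Concatenation: stack the A's along the rank axis, the B's along columns
A_cat = np.concatenate([A1, A2], axis=0)  # (r1 + r2, k)
B_cat = np.concatenate([B1, B2], axis=1)  # (d, r1 + r2)
delta_cat = B_cat @ A_cat

assert np.allclose(delta_sum, delta_cat)  # block matrix product equals the sum
```

The concatenated adapter has rank `r1 + r2`, which is why the PR description recommends falling back to SVD when mixing many adapters.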
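For point 3, a sketch (using `numpy.linalg.svd` for portability rather than `torch.linalg.svd`, and not peft's actual code) of how a merged delta weight can be re-factored into LoRA A/B matrices at a chosen target rank:

```python
import numpy as np

rng = np.random.default_rng(1)
delta = rng.normal(size=(16, 16))  # stand-in for a merged delta weight

target_rank = 4  # smaller rank -> faster inference, more approximation error
U, S, Vh = np.linalg.svd(delta, full_matrices=False)

# Keep only the top singular directions and fold the singular values into B
B_new = U[:, :target_rank] * S[:target_rank]  # (16, target_rank)
A_new = Vh[:target_rank]                      # (target_rank, 16)

approx = B_new @ A_new  # best rank-4 approximation of delta (Eckart-Young)
```

With torch, the same decomposition would be `torch.linalg.svd(delta, full_matrices=False)`, which also accepts the keyword-only `driver` argument mentioned in point 4 (CUDA inputs only).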
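For point 5, the clamping in question caps extreme entries in the SVD factors. A hypothetical numpy sketch of quantile-based clipping (the 0.99 quantile is illustrative only, not necessarily peft's default):

```python
import numpy as np

rng = np.random.default_rng(2)
delta = rng.normal(size=(16, 16))  # stand-in for a merged delta weight
U, S, Vh = np.linalg.svd(delta, full_matrices=False)

# Clamp extreme entries of U and Vh to a high quantile of their
# combined absolute values (threshold choice is illustrative)
dist = np.abs(np.concatenate([U.ravel(), Vh.ravel()]))
hi_val = np.quantile(dist, 0.99)
U_clamped = np.clip(U, -hi_val, hi_val)
Vh_clamped = np.clip(Vh, -hi_val, hi_val)
```

Disabling this step leaves the factors untouched, which is what the unclamped comparison images below show.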

You can see my previous results here #643.

webui output:
webui_output

SVD decomposed mixture of 2 LoRAs, clamped:
peft_diffusers_clamped

SVD decomposed mixture of 2 LoRAs, unclamped:
peft_diffusers_unclamped

@pacman100 your review is kindly appreciated.

…ty to provide target rank with SVD, added concat as method for mixing LoRAs adapters
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

Member

@BenjaminBossan left a comment

Thanks a lot for the fixes and the new method, this looks pretty solid.

I'm not an expert on this topic (@pacman100 will know better), but the implementation looks quite good. However, I would like to see the unit tests extended to cover the new combination type.

Contributor

@pacman100 left a comment

Thank you so much @kovalexal for adding the concat method from our previous discussions and improving upon the svd method, you rock 🔥. LGTM! 🤗

Member

@BenjaminBossan left a comment

Looks good, thanks a lot for addressing my comments.

@BenjaminBossan BenjaminBossan merged commit 438b16b into huggingface:main Aug 16, 2023
Guy-Bilitski pushed a commit to Guy-Bilitski/peft that referenced this pull request May 13, 2025
…ngface#817)

Added a new feature to concatenate the LoRA weights as a mixing method.
SVD now accepts more options and no longer clamps by default.
cyyever pushed a commit to cyyever/peft that referenced this pull request Sep 4, 2025
* Use uniform config

* quick fix

* refactor

* update docs