
Hugging Face Diffusers can accurately load LoRA now | by Andrew Zhu | Jul, 2023

Using the newest Diffusers monkey-patching function to load LoRA produces exactly the same result as A1111

Pulling the latest code from Hugging Face’s Diffusers repository, I found that the recently updated LoRA-loading code can now do monkey-patched LoRA loading.

To install the latest Diffusers:

pip install -U git+https://github.com/huggingface/diffusers.git@main

The LoRA loading function was still producing slightly faulty results yesterday, according to my test. This article discusses how to use the latest LoRA loader from the Diffusers package.

Load LoRA and update the Stable Diffusion model weights

For a long time, programmers using Diffusers could not load LoRA in an easy way. To load a LoRA into a checkpoint model and produce the same output as A1111’s Stable Diffusion WebUI, we needed additional custom code to load the weights, as I provided in a previous article.

The solution provided in that article works well and fast, but it requires extra management of the LoRA alpha weight: we need to keep a variable that remembers the current LoRA weight α. That is because the LoRA-loading code simply multiplies the LoRA’s A and B matrices together, scaled by α:

ΔW = α · (B · A)

And then merges ΔW into the main checkpoint model weight W:

W′ = W + ΔW

To remove the LoRA weights, we need to apply a negative -α, or recreate the pipeline.
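The merge-and-remove idea can be sketched in a few lines of PyTorch. The names (W, A, B, alpha) are illustrative stand-ins, not actual Diffusers internals:

```python
import torch

torch.manual_seed(0)

d_out, d_in, rank = 8, 8, 2
W = torch.randn(d_out, d_in)   # original checkpoint weight
A = torch.randn(rank, d_in)    # LoRA down-projection matrix
B = torch.randn(d_out, rank)   # LoRA up-projection matrix
alpha = 0.75                   # current LoRA weight α (must be remembered)

W_original = W.clone()

# Merge: W' = W + α · (B · A)
W += alpha * (B @ A)

# Remove: apply the negative -α to restore the original weight
W += -alpha * (B @ A)

print(torch.allclose(W, W_original, atol=1e-5))  # True
```

If we lose track of α (say, after merging several LoRAs at different strengths), the only clean way back is to recreate the pipeline, which is exactly the bookkeeping burden described above.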

The monkey-patching approach to loading LoRA

Another way to use LoRA is to patch the code that executes a module’s forward pass, bringing the LoRA weights in at the time the text embeddings and attention scores are calculated.
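Here is a minimal sketch of that idea: we monkey-patch a Linear layer’s forward method so the scaled LoRA path is added at call time, while the stored weight stays untouched. The names (lora_A, lora_B, scale) are illustrative, not the actual Diffusers internals:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

layer = nn.Linear(8, 8, bias=False)     # a layer from the base model
lora_A = nn.Linear(8, 2, bias=False)    # LoRA down-projection
lora_B = nn.Linear(2, 8, bias=False)    # LoRA up-projection
scale = 0.75                            # LoRA weight α

original_forward = layer.forward

def lora_forward(x):
    # original output plus the scaled LoRA contribution, computed on the fly
    return original_forward(x) + scale * lora_B(lora_A(x))

layer.forward = lora_forward  # monkey-patch the forward pass

x = torch.randn(1, 8)
y = layer(x)

# the stored weight was never modified; the LoRA is applied at call time
expected = original_forward(x) + scale * lora_B(lora_A(x))
print(torch.allclose(y, expected))  # True
```

Because `layer.weight` is never rewritten, undoing the LoRA is as simple as restoring `original_forward`, and changing its strength is as simple as changing `scale`.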

This is how Diffusers’ LoraLoaderMixin approaches LoRA loading. The nice part of this approach is that no model weight is updated; we can simply reset the LoRA and supply a new α to define the LoRA weight.
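In practice it looks like the following sketch, assuming a recent Diffusers build; the LoRA path and file name are placeholders for your own files:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# monkey-patch the attention/text-encoder forward passes with the LoRA;
# the underlying model weights are not modified
pipe.load_lora_weights("path/to/lora_dir", weight_name="my_lora.safetensors")

# α is supplied at inference time via cross_attention_kwargs,
# so it can be changed per call without touching the weights
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.5},  # LoRA weight α
).images[0]
```

Running the same pipeline with a different `scale` (or with the LoRA unloaded) needs no weight restoration step, which is exactly the advantage over the merge-into-W approach above.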