
Unable to load Flux LoRA trained with OneTrainer – NotImplementedError in _convert_mixture_state_dict_to_diffusers #11441

Closed
@iamwavecut

Description


Describe the bug

Loading a LoRA that was:

  • trained with OneTrainer (master, FLUX-1 mode)
  • exported as a single .safetensors file (on Civitai)

via DiffusionPipeline.load_lora_weights() (or indirectly through Nunchaku v0.2.0's compose_lora) crashes at application start-up with:

File "diffusers/loaders/lora_conversion_utils.py", line 76, in _convert_mixture_state_dict_to_diffusers
    raise NotImplementedError
NotImplementedError

The exception is thrown because the LoRA state-dict contains keys that start with
lora_transformer_single_transformer_blocks_…, a pattern that the conversion helper does not yet handle.
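Until the converter handles this pattern, a caller can at least detect it up front and fail with a clearer message. This is a hypothetical helper (the function name and the guard itself are not part of diffusers; only the key prefix is taken from this report):

```python
def has_mixture_single_block_keys(state_dict) -> bool:
    """Hypothetical guard: return True if the LoRA state dict contains the
    key prefix that _convert_mixture_state_dict_to_diffusers currently
    rejects with a bare NotImplementedError."""
    prefix = "lora_transformer_single_transformer_blocks_"
    return any(k.startswith(prefix) for k in state_dict)

# Example with the reported key shape:
sd = {"lora_transformer_single_transformer_blocks_0_attn_to_k.lora_down.weight": None}
print(has_mixture_single_block_keys(sd))  # True
```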

Here is the list of key patterns found in the state_dict of the LoRA in question (numeric block indices collapsed to %d):

lora_transformer_context_embedder
lora_transformer_norm_out_linear
lora_transformer_proj_out
lora_transformer_single_transformer_blocks_%d_attn_to_k
lora_transformer_single_transformer_blocks_%d_attn_to_q
lora_transformer_single_transformer_blocks_%d_attn_to_v
lora_transformer_single_transformer_blocks_%d_norm_linear
lora_transformer_single_transformer_blocks_%d_proj_mlp
lora_transformer_single_transformer_blocks_%d_proj_out
lora_transformer_time_text_embed_guidance_embedder_linear_%d
lora_transformer_time_text_embed_text_embedder_linear_%d
lora_transformer_time_text_embed_timestep_embedder_linear_%d
lora_transformer_transformer_blocks_%d_attn_add_k_proj
lora_transformer_transformer_blocks_%d_attn_add_q_proj
lora_transformer_transformer_blocks_%d_attn_add_v_proj
lora_transformer_transformer_blocks_%d_attn_to_add_out
lora_transformer_transformer_blocks_%d_attn_to_k
lora_transformer_transformer_blocks_%d_attn_to_out_0
lora_transformer_transformer_blocks_%d_attn_to_q
lora_transformer_transformer_blocks_%d_attn_to_v
lora_transformer_transformer_blocks_%d_ff_context_net_0_proj
lora_transformer_transformer_blocks_%d_ff_context_net_2
lora_transformer_transformer_blocks_%d_ff_net_0_proj
lora_transformer_transformer_blocks_%d_ff_net_2
lora_transformer_transformer_blocks_%d_norm1_context_linear
lora_transformer_transformer_blocks_%d_norm1_linear
lora_transformer_x_embedder
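For reference, a collapsed pattern listing like the one above can be produced with a few lines. This is an illustrative sketch, not the exact command used to generate the list:

```python
import re

def collapse_block_indices(keys):
    """Collapse per-block numeric indices (e.g. _7) into a %d placeholder so
    that repeated per-block keys dedupe into one pattern each."""
    patterns = {re.sub(r"_\d+(?=_|$)", "_%d", k.split(".")[0]) for k in keys}
    return sorted(patterns)

keys = [
    "lora_transformer_single_transformer_blocks_0_attn_to_k.lora_down.weight",
    "lora_transformer_single_transformer_blocks_1_attn_to_k.lora_down.weight",
]
print(collapse_block_indices(keys))
# ['lora_transformer_single_transformer_blocks_%d_attn_to_k']
```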

Reproduction

Download the LoRA file from Civitai first. The example assumes the .safetensors file is located in the current working directory.

from safetensors.torch import load_file

from diffusers.loaders.lora_conversion_utils import _convert_kohya_flux_lora_to_diffusers

lora_path = "./rus1.3_100k.safetensors"

lora_state_dict = load_file(lora_path)
# Raises NotImplementedError inside _convert_mixture_state_dict_to_diffusers
converted_lora = _convert_kohya_flux_lora_to_diffusers(lora_state_dict)
print("Successfully converted LoRA")
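For context, a fix in the converter would need to turn these flattened keys into the dotted peft-style names diffusers uses internally. The sketch below only illustrates the mapping for the transformer-block key families listed above; the function name is hypothetical, it is not the actual diffusers implementation, and a real fix would also have to handle alphas and the non-block keys (x_embedder, time_text_embed, etc.):

```python
import re

def remap_block_key(key: str) -> str:
    """Illustrative only: map one flattened OneTrainer/kohya-style key to
    dotted diffusers/peft naming. Hypothetical, not the diffusers code."""
    # kohya-style lora_down/lora_up correspond to peft's lora_A/lora_B
    key = key.replace(".lora_down.weight", ".lora_A.weight")
    key = key.replace(".lora_up.weight", ".lora_B.weight")
    m = re.match(
        r"lora_transformer_(single_transformer_blocks|transformer_blocks)_(\d+)_(.+)",
        key,
    )
    if m is None:
        raise NotImplementedError(key)
    block, idx, rest = m.groups()
    # Sub-module names such as to_k keep their own underscores; only the
    # attn.<proj> boundary becomes a dot in this sketch.
    rest = rest.replace("attn_", "attn.", 1)
    return f"transformer.{block}.{idx}.{rest}"

print(remap_block_key(
    "lora_transformer_single_transformer_blocks_7_attn_to_k.lora_down.weight"
))  # transformer.single_transformer_blocks.7.attn.to_k.lora_A.weight
```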

Logs

ERROR:    Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/starlette/routing.py", line 692, in lifespan
    async with self.lifespan_context(app) as maybe_state:
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
            ^^^^^^^^^^^^^^^^^^^^^
  File "/app/api.py", line 44, in lifespan
    composed_lora = compose_lora(enabled_loras)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/nunchaku/lora/flux/compose.py", line 17, in compose_lora
    lora = to_diffusers(lora)
            ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/nunchaku/lora/flux/diffusers_converter.py", line 17, in to_diffusers
    new_tensors, alphas = FluxLoraLoaderMixin.lora_state_dict(tensors, return_alphas=True)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/diffusers/loaders/lora_pipeline.py", line 1886, in lora_state_dict
    state_dict = _convert_kohya_flux_lora_to_diffusers(state_dict)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/diffusers/loaders/lora_conversion_utils.py", line 898, in _convert_kohya_flux_lora_to_diffusers
    return _convert_mixture_state_dict_to_diffusers(state_dict)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/diffusers/loaders/lora_conversion_utils.py", line 731, in _convert_mixture_state_dict_to_diffusers
    raise NotImplementedError
NotImplementedError

ERROR:    Application startup failed. Exiting.

System Info

diffusers@main as of April 28, 2025

Who can help?

@sayakpaul I believe you've been involved in fixing OneTrainer LoRA loading issues recently; would you be so kind as to take an educated guess at this one?

Labels: bug