
Issue with Flux LoRAs trained with SimpleTuner #9134

@sayakpaul

Description


@bghira we're seeing issues when doing:

import torch
from diffusers import DiffusionPipeline

base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16).to("cuda")

pipe.load_lora_weights("pzc163/flux-lora-littletinies")
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(
    prompt="a dog in the park",
    num_inference_steps=28,
    guidance_scale=3.5,
    width=1024,
    height=1024,
    generator=generator,
).images[0]

But it outputs gibberish. The inference code from https://huggingface.co/pzc163/flux-lora-littletinies also shows many unsupported arguments, such as negative_prompt and guidance_rescale.
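One way to catch unsupported arguments like these before calling the pipeline is to inspect the call signature and filter out anything it doesn't accept. A minimal sketch (the `fake_call` function below is a hypothetical stand-in for the pipeline's `__call__`; the same check works on the real pipeline object):

```python
import inspect

# Hypothetical stand-in for FluxPipeline.__call__, used here so the
# sketch runs without diffusers installed.
def fake_call(prompt, num_inference_steps=28, guidance_scale=3.5):
    pass

# Names of the keyword arguments the callable actually accepts.
accepted = set(inspect.signature(fake_call).parameters)

# Drop any kwargs the callable does not support before invoking it.
user_kwargs = {"prompt": "a dog in the park", "negative_prompt": "blurry"}
safe_kwargs = {k: v for k, v in user_kwargs.items() if k in accepted}

print(sorted(safe_kwargs))  # "negative_prompt" is filtered out
```

With the real pipeline, passing `inspect.signature(pipe.__call__)` instead would flag `negative_prompt` and `guidance_rescale` as unsupported rather than silently misleading users copying the model card's snippet.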

Have you found any bugs with inference?

Cc: @apolinario @asomoza
