@bghira we're seeing issues when doing:

```python
import torch
from diffusers import DiffusionPipeline

base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("pzc163/flux-lora-littletinies")

generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(
    prompt="a dog in the park",
    num_inference_steps=28,
    guidance_scale=3.5,
    width=1024,
    height=1024,
    generator=generator,
).images[0]
```

But it outputs gibberish. The inference code on https://huggingface.co/pzc163/flux-lora-littletinies also passes several arguments this pipeline doesn't support, such as `negative_prompt` and `guidance_rescale`.
Have you found any bugs with inference?
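Not a diagnosis, but one quick way to see which of the model card's arguments the pipeline call actually accepts is to inspect the `__call__` signature. A minimal sketch, assuming FLUX.1-dev resolves to diffusers' `FluxPipeline` (arguments missing from the signature would be rejected or ignored depending on the installed version):

```python
import inspect

from diffusers import FluxPipeline  # assumption: FLUX.1-dev loads as FluxPipeline

# Keyword arguments the pipeline's __call__ actually exposes.
supported = set(inspect.signature(FluxPipeline.__call__).parameters)
print(sorted(supported))

# Arguments used in the model card's inference snippet.
for arg in ("prompt", "guidance_scale", "negative_prompt", "guidance_rescale"):
    print(f"{arg}: {'listed' if arg in supported else 'not listed'}")
```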
Cc: @apolinario @asomoza
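To help narrow down whether the gibberish comes from this particular LoRA or from the base pipeline, here is a rough comparison sketch (same settings and seed as the snippet above, rendered with and without the adapter; file names are arbitrary):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

def render(seed=0):
    # Same settings as the failing call, fixed seed so the two runs are comparable.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(
        prompt="a dog in the park",
        num_inference_steps=28,
        guidance_scale=3.5,
        width=1024,
        height=1024,
        generator=generator,
    ).images[0]

# Baseline: base model only.
render().save("base.png")

# Same seed with the LoRA applied.
pipe.load_lora_weights("pzc163/flux-lora-littletinies")
render().save("lora.png")
```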