Batch Processing for Multi ControlNet. #2704

@ahmadmustafaanis

Description

Describe the bug

I am trying to use batch processing with Multi-ControlNet, but passing per-prompt `controlnet_conditioning_scale` values does not work as intended.

Reproduction

import torch
from diffusers import StableDiffusionControlNetPipeline

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "my_model",
    torch_dtype=torch.float16,
    controlnet=[controlnet_pose, controlnet_hed],
).to("cuda")

output = pipe(
    ["prompt 1", "prompt 2"],  # batch of 2 prompts
    [[pose_image_low, hed_image_low], [pose_image_cc, hed_image_cc]],  # 2 conditioning images per prompt
    generator=[
        torch.Generator(device="cuda").manual_seed(16605685601386031741),
        torch.Generator(device="cuda").manual_seed(16605685601386031741),
    ],  # one generator per prompt
    negative_prompt=[
        "low quality, wrong face, distorted face, ugly, beard",
        "low quality, wrong face, distorted face, tikka, ugly, beard",
    ],  # one negative prompt per prompt
    num_inference_steps=50,
    # controlnet_conditioning_scale=[1, 1.2, 1, 0.7],    # this gives an error
    # controlnet_conditioning_scale=[[1, 1.2], [1, 0.7]],  # this gives an error
    controlnet_conditioning_scale=[1, 1.2],  # works, but I want different conditioning scales per prompt
).images
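Until per-prompt conditioning scales are supported in a single batched call, one possible workaround is to split the batch and call the pipeline once per prompt, each call with its own per-ControlNet scale list (the `[1, 1.2]`-style argument that already works). A minimal sketch of the bookkeeping, where `run_single` is a hypothetical stand-in for the single-prompt pipeline call:

```python
def run_with_per_prompt_scales(run_single, prompts, images, neg_prompts, scales):
    """Call `run_single` once per prompt so each prompt gets its own
    per-ControlNet conditioning scales. `run_single` is a placeholder
    for a single-prompt pipeline invocation; it is not a diffusers API."""
    if not (len(prompts) == len(images) == len(neg_prompts) == len(scales)):
        raise ValueError("prompts, images, neg_prompts and scales must align")
    outputs = []
    for prompt, imgs, neg, scale in zip(prompts, images, neg_prompts, scales):
        # `scale` is one flat list of floats, one entry per ControlNet,
        # which is the shape the pipeline already accepts.
        outputs.append(run_single(prompt, imgs, neg, scale))
    return outputs
```

This trades one batched forward pass for N single-prompt passes, so it is slower, but it sidesteps the shape check that rejects the nested `[[1, 1.2], [1, 0.7]]` form.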

Logs

No response

System Info

Latest installation

pip install git+https://github.com/huggingface/diffusers

GPU: Tesla T4

Labels

bug (Something isn't working)
