
[Community] FP16 ONNX produces incorrect output #1083

@kleiti

Description

Describe the bug

#932 enabled converting the FP32 model from the main branch (git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 -b main) to ONNX FP16. The converted model runs without errors in OnnxStableDiffusionPipeline with DmlExecutionProvider (onnxruntime-directml==1.13.1), but the produced image is just a black square.

Conversion was done with:
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path stable-diffusion-v1-4 --output_path fp16_sd14 --fp16
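
For comparison, the same script can be run without --fp16 to produce an FP32 ONNX export, which should confirm whether the problem is specific to the FP16 conversion. This is a minimal sketch; the fp32_sd14 output path is an assumed name, not something from the report.

from diffusers import OnnxStableDiffusionPipeline

# Assumed FP32 baseline, exported with the same script but without --fp16:
#   python convert_stable_diffusion_checkpoint_to_onnx.py --model_path stable-diffusion-v1-4 --output_path fp32_sd14
pipe_fp32 = OnnxStableDiffusionPipeline.from_pretrained("fp32_sd14", provider="DmlExecutionProvider")
image = pipe_fp32("viking storming a castle").images[0]
image.save("viking_fp32.png")  # expected to look normal if only the FP16 export is broken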

Reproduction

from diffusers import OnnxStableDiffusionPipeline

# Load the FP16 ONNX export with the DirectML execution provider
pipe = OnnxStableDiffusionPipeline.from_pretrained("fp16_sd14", provider="DmlExecutionProvider")
prompt = "viking storming a castle"
# Generate and save the image; the result is a completely black square
image = pipe(prompt).images[0]
image.save("viking.png")
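
To narrow this down, the output can be checked numerically and the same FP16 model can be run on the CPU execution provider; if the CPU result is also black, the issue is in the converted graph rather than in DirectML. A minimal sketch, assuming the CPU provider can run the FP16 model (it will be slow):

import numpy as np
from diffusers import OnnxStableDiffusionPipeline

prompt = "viking storming a castle"

# A fully black PIL image shows up as an all-zero uint8 array
pipe_dml = OnnxStableDiffusionPipeline.from_pretrained("fp16_sd14", provider="DmlExecutionProvider")
arr_dml = np.asarray(pipe_dml(prompt).images[0])
print("DirectML output min/max:", arr_dml.min(), arr_dml.max())

# Cross-check the same FP16 export on the CPU execution provider
pipe_cpu = OnnxStableDiffusionPipeline.from_pretrained("fp16_sd14", provider="CPUExecutionProvider")
arr_cpu = np.asarray(pipe_cpu(prompt).images[0])
print("CPU output min/max:", arr_cpu.min(), arr_cpu.max())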

Logs

No response

System Info

  • diffusers version: 0.7.0.dev0
  • Platform: Windows-10-10.0.22000-SP0
  • Python version: 3.9.13
  • PyTorch version (GPU?): 1.12.1+cpu (False)
  • Huggingface_hub version: 0.10.0
  • Transformers version: 4.22.2
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Labels

  • bug (Something isn't working)
  • good first issue (Good for newcomers)
  • help wanted (Extra attention is needed)
