Closed
Description
Describe the bug
ONNX support doesn't work with `CUDAExecutionProvider`. I installed `onnxruntime-gpu`.
Running
import onnxruntime as ort
ort.get_device()
returns
GPU
and
ort.get_available_providers()
returns
['CPUExecutionProvider', 'TensorrtExecutionProvider', 'CUDAExecutionProvider']
but diffusers complains that onnxruntime is not installed and asks me to install the CPU version (`pip install onnxruntime`).
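For context, a likely culprit (my assumption, not confirmed against the diffusers source) is a dependency check that looks up a distribution named `onnxruntime`, which is absent when only the `onnxruntime-gpu` wheel is installed, even though `import onnxruntime` works. A minimal sketch of that failure mode:

```python
# Sketch: a name-based distribution lookup misses the GPU wheel.
# (Assumption: the check looks for the "onnxruntime" distribution name.)
# Requires Python 3.8+ for importlib.metadata; on 3.7 use the
# importlib_metadata backport instead.
import importlib.metadata

def dist_installed(name: str) -> bool:
    """True if a distribution with exactly this name is installed."""
    try:
        importlib.metadata.version(name)
        return True
    except importlib.metadata.PackageNotFoundError:
        return False

# On a machine with only the GPU wheel, dist_installed("onnxruntime") is
# False while dist_installed("onnxruntime-gpu") is True, so the import
# succeeds but the name-based check fails.
print(dist_installed("onnxruntime"))
print(dist_installed("onnxruntime-gpu"))
```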
Reproduction
Install
pip install onnxruntime-gpu
and run
from diffusers import StableDiffusionOnnxPipeline
pipe = StableDiffusionOnnxPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
revision="onnx",
provider="CUDAExecutionProvider",
    use_auth_token=True,
)
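Until the check handles the GPU wheel, a tolerant preflight lookup that accepts either distribution name can be run before building the pipeline. This is a sketch; `first_installed_version` is a hypothetical helper, not a diffusers API, and the candidate names are my assumption:

```python
# Tolerant preflight check: accept any known onnxruntime distribution name
# before constructing the ONNX pipeline. Requires Python 3.8+ for
# importlib.metadata (use the importlib_metadata backport on 3.7).
import importlib.metadata

ONNXRUNTIME_CANDIDATES = ("onnxruntime-gpu", "onnxruntime")

def first_installed_version(candidates):
    """Return the version of the first installed candidate, or None."""
    for name in candidates:
        try:
            return importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            continue
    return None

version = first_installed_version(ONNXRUNTIME_CANDIDATES)
if version is None:
    print("No onnxruntime flavor found; pip install onnxruntime-gpu")
else:
    print(f"Found an onnxruntime distribution, version {version}")
```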
Logs
No response
System Info
- `diffusers` version: 0.3.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Huggingface_hub version: 0.9.1
- Transformers version: 4.21.3
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No