Describe the bug
enable_model_cpu_offload seems to be tripping over its feet when it is called a second time; see the repro case. From what I can see, pipe._execution_device is cuda:0 after the first call, but switches to cpu after the second call. I'd expect the second call to be pretty much a no-op.
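For reference, a minimal sketch of how the device switch can be inspected (same model as in the reproduction below; the comments reflect the behaviour described above rather than verified output):
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
>>> pipe.enable_model_cpu_offload()
>>> pipe._execution_device  # cuda:0 here, as expected
>>> pipe.enable_model_cpu_offload()
>>> pipe._execution_device  # now reports cpu, so the next pipe(...) call hits the device mismatch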
Reproduction
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
>>> pipe.enable_model_cpu_offload()
>>> result = pipe(prompt="foo")
>>> result = pipe(prompt="bar")
>>> pipe.enable_model_cpu_offload()
>>> result = pipe(prompt="baz")
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Logs
No response
System Info
- diffusers version: 0.14.0
- Platform: Linux-4.14.240
- Python version: 3.9.10
- PyTorch version (GPU?): 1.13.0a0+git49444c3 (True)
- Huggingface_hub version: 0.13.1
- Transformers version: 4.27.2
- Accelerate version: 0.17.1
- xFormers version: 0.0.16+6f3c20f.d20230309