
Conversation

@dblunk88 (Contributor) commented Nov 4, 2022

CPU offloading currently defaults to 'cuda'. Letting the user specify which CUDA device to use gives them more freedom.

If accepted, all pipelines would need these minor changes.
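A minimal sketch of the idea behind the change: instead of hard-coding the execution device as plain "cuda", build it from a user-supplied GPU id. The function name, the `pipeline_models` argument, and the returned list are illustrative stand-ins; real pipelines pass their own components (unet, text encoder, VAE) to accelerate's `cpu_offload`.

```python
def enable_sequential_cpu_offload(pipeline_models, gpu_id=0):
    """Sketch: offload each model, targeting a user-chosen CUDA device.

    Hypothetical helper for illustration; the shapes of the arguments
    and return value are assumptions, not the actual diffusers API.
    """
    device = f"cuda:{gpu_id}"  # previously this was the hard-coded string "cuda"
    targets = []
    for model in pipeline_models:
        if model is not None:
            # A real pipeline would call accelerate's
            # cpu_offload(model, execution_device=device) here.
            targets.append((model, device))
    return targets
```

With `gpu_id=1`, every offloaded model is pinned to `cuda:1` instead of whatever device CUDA considers the default.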

@dblunk88 dblunk88 changed the title mutli GPU support cpu offloading: mutli GPU support Nov 4, 2022
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

@patrickvonplaten (Contributor) commented

This looks reasonable to me! @anton-l @pcuenca what do you think?

@pcuenca (Member) left a comment

Sounds good!

@anton-l (Member) left a comment

Maybe we should just do cpu_offload(cpu_offloaded_model, self.device)? (emphasis on self.device)

@dblunk88 (Contributor, Author) commented

> Maybe we should just do cpu_offload(cpu_offloaded_model, self.device)? (emphasis on self.device)

probably a better solution than mine :D
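The `self.device` suggestion can be sketched as follows: rather than threading a `gpu_id` argument through, the pipeline reuses the device it has already been moved to. The class and its returned list are hypothetical stand-ins, not actual diffusers code.

```python
class OffloadSketch:
    """Hypothetical stand-in for a pipeline that already knows its device."""

    def __init__(self, device="cuda:0"):
        # In a real pipeline, self.device reflects a prior .to("cuda:N") call.
        self.device = device

    def enable_sequential_cpu_offload(self):
        # No gpu_id parameter needed: reuse the device the pipeline is on.
        # Real code would call accelerate's cpu_offload(model, self.device)
        # for each component instead of returning these (name, device) pairs.
        return [(name, self.device) for name in ("unet", "text_encoder", "vae")]
```

The design advantage is that the offload target can never drift out of sync with the device the user selected when placing the pipeline.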

@patrickvonplaten (Contributor) left a comment

Cool - thanks!

@patrickvonplaten patrickvonplaten merged commit 09d0546 into huggingface:main Nov 16, 2022
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023

5 participants