
Could not run train_lcm_distill_sd_wds with multi-GPU setting #6278

Description

@akameswa

Describe the bug

I got the following error while running the train_lcm_distill_sd_wds.py script:

AttributeError: 'DistributedDataParallel' object has no attribute 'config'

The error occurs only when a multi-GPU setting is used.

Changing line 1178 from:

w_embedding = guidance_scale_embedding(w, embedding_dim=unet.config.time_cond_proj_dim)

to:

w_embedding = guidance_scale_embedding(w, embedding_dim=accelerator.unwrap_model(unet).config.time_cond_proj_dim)

solved the issue.

For reference, the related source:
#3673
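
For anyone hitting the same error, here is a self-contained sketch of the pattern. Hedged: TinyUNet and the stub guidance_scale_embedding below are stand-ins I made up for the real diffusers objects; only the final unwrap_model call mirrors the actual fix.

```python
import torch
from accelerate import Accelerator


class TinyUNet(torch.nn.Module):
    """Stand-in for the distilled UNet; only the `config` attribute matters here."""

    def __init__(self):
        super().__init__()
        self.config = type("Config", (), {"time_cond_proj_dim": 256})()
        self.proj = torch.nn.Linear(4, 4)


def guidance_scale_embedding(w, embedding_dim):
    """Stub with the same signature as the helper in train_lcm_distill_sd_wds.py."""
    return torch.zeros(w.shape[0], embedding_dim)


accelerator = Accelerator()
unet = accelerator.prepare(TinyUNet())  # in multi-GPU runs, prepare() returns a DDP wrapper

w = torch.rand(8)

# Fails under multi-GPU, because the DDP wrapper has no `config` attribute:
# w_embedding = guidance_scale_embedding(w, embedding_dim=unet.config.time_cond_proj_dim)

# Works in both single- and multi-GPU runs: unwrap_model() always returns the
# underlying module, wrapped or not.
w_embedding = guidance_scale_embedding(
    w, embedding_dim=accelerator.unwrap_model(unet).config.time_cond_proj_dim
)
print(w_embedding.shape)  # torch.Size([8, 256])
```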

Reproduction

Follow the full-model distillation tutorial. Choose a multi-GPU setup when configuring accelerate. A minimal single-process demonstration of the underlying behavior is sketched below.
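
Why multi-GPU specifically triggers it: accelerator.prepare() wraps the model in torch's DistributedDataParallel only in distributed runs, and DDP does not forward arbitrary attribute lookups to the wrapped module. A minimal single-process demonstration follows (gloo backend, no GPU needed; TinyUNet is a made-up stand-in):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process "distributed" setup so DDP can be constructed without multiple GPUs.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)


class TinyUNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.config = {"time_cond_proj_dim": 256}  # plain attribute, like the UNet's config
        self.proj = torch.nn.Linear(4, 4)


model = DDP(TinyUNet())

try:
    model.config  # DDP does not forward arbitrary attribute lookups to the wrapped module
except AttributeError as e:
    print(e)  # 'DistributedDataParallel' object has no attribute 'config'

print(model.module.config["time_cond_proj_dim"])  # the wrapped module still has it: 256

dist.destroy_process_group()
```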

Logs

No response

System Info

  • diffusers version: 0.25.0.dev0
  • Platform: Linux-4.18.0-48-x86_64
  • Python version: 3.10.13
  • PyTorch version (GPU?): 2.0.1+cu117 (True)
  • Huggingface_hub version: 0.19.4
  • Transformers version: 4.36.1
  • Accelerate version: 0.25.0
  • xFormers version: 0.0.22
  • Using GPU in script?: YES
  • Using distributed or parallel set-up in script?: YES

Who can help?

@sayakpaul @patrickvonplaten
