
Typo in load_pipeline_from_original_stable_diffusion_ckpt() method #2319

@p1atdev

Description


Describe the bug

Currently it works fine, but I think the filename `inferenc.yaml` is a typo for `inference.yaml`.

```python
with tempfile.TemporaryDirectory() as tmpdir:
    if original_config_file is None:
        key_name = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"

        original_config_file = os.path.join(tmpdir, "inferenc.yaml")
        if key_name in checkpoint and checkpoint[key_name].shape[-1] == 1024:
            if not os.path.isfile("v2-inference-v.yaml"):
                # model_type = "v2"
                r = requests.get(
                    " https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml"
                )
                open(original_config_file, "wb").write(r.content)

            if global_step == 110000:
                # v2.1 needs to upcast attention
                upcast_attention = True
        else:
            if not os.path.isfile("v1-inference.yaml"):
                # model_type = "v1"
                r = requests.get(
                    " https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml"
                )
                open(original_config_file, "wb").write(r.content)

    original_config = OmegaConf.load(original_config_file)
```
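For illustration, a minimal sketch of the proposed fix, assuming only that the filename is corrected to `inference.yaml` (the helper `default_config_path` is hypothetical, introduced here just to isolate the one-character change; the surrounding download logic is unchanged):

```python
import os
import tempfile


def default_config_path(tmpdir: str) -> str:
    # Hypothetical helper: build the path where the downloaded config is
    # saved. "inference.yaml" is the corrected spelling; the current code
    # writes "inferenc.yaml" instead.
    return os.path.join(tmpdir, "inference.yaml")


with tempfile.TemporaryDirectory() as tmpdir:
    path = default_config_path(tmpdir)
    print(os.path.basename(path))  # inference.yaml
```

Since the file lives in a fresh `TemporaryDirectory` and is only read back via `OmegaConf.load(original_config_file)` with the same variable, the misspelling is harmless at runtime, which matches the "currently it works fine" observation above.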

Reproduction

No response

Logs

No response

System Info

  • diffusers version: 0.13.0.dev0
  • Platform: Windows-10-10.0.22621-SP0
  • Python version: 3.9.15
  • PyTorch version (GPU?): 1.12.1+cu116 (True)
  • Huggingface_hub version: 0.11.1
  • Transformers version: 4.26.0
  • Accelerate version: 0.16.0
  • xFormers version: not installed
  • Using GPU in script?: RTX 3070 Ti
  • Using distributed or parallel set-up in script?:

Metadata


Assignees

No one assigned

    Labels

    bug (Something isn't working)
