[Bug]: ValueError: state_dict cannot be passed together with a model name or a gguf_file #16950

farizy4n opened this issue Apr 16, 2025 · 0 comments
Labels
bug-report Report of a bug, yet to be confirmed
Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

The error appeared out of nowhere. I hadn't used A1111 in a long time; lately I've mostly been experimenting with ComfyUI.

Steps to reproduce the problem

The traceback appears as soon as the application starts; no specific action is needed to trigger it.

What should have happened?

The model should load without the ValueError. I have done a clean install, but the error still occurs.

What browsers do you use to access the UI ?

No response

Sysinfo

sysinfo-2025-04-16-05-27.json

Console logs

Creating model from config: E:\AI\TensorRT\webui\configs\v1-inference.yaml
creating model quickly: ValueError
Traceback (most recent call last):
  File "threading.py", line 973, in _bootstrap
  File "threading.py", line 1016, in _bootstrap_inner
  File "E:\AI\TensorRT\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "E:\AI\TensorRT\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "contextlib.py", line 78, in inner
  File "E:\AI\TensorRT\webui\extensions\sd-webui-EasyPhoto\scripts\sdwebui.py", line 64, in __exit__
    sd_models.reload_model_weights()
  File "E:\AI\TensorRT\webui\modules\sd_models.py", line 977, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "E:\AI\TensorRT\webui\modules\sd_models.py", line 820, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "E:\AI\TensorRT\webui\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "E:\AI\TensorRT\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "E:\AI\TensorRT\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "E:\AI\TensorRT\webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "E:\AI\TensorRT\webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "E:\AI\TensorRT\webui\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(pretrained_model_name_or_path, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "E:\AI\TensorRT\system\python\lib\site-packages\transformers\modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "E:\AI\TensorRT\system\python\lib\site-packages\transformers\modeling_utils.py", line 3994, in from_pretrained
    raise ValueError(
ValueError: `state_dict` cannot be passed together with a model name or a `gguf_file`. Use one of the two loading strategies.

Failed to create model quickly; will retry using slow method.
Loading VAE weights specified in settings: E:\AI\TensorRT\webui\models\VAE\madebyollin-sdxl-vae-fp16-fix.safetensors
Applying attention optimization: Doggettx... done.
Model loaded in 10.6s (create model: 1.6s, apply weights to model: 8.1s, load VAE: 0.5s, move model to device: 0.2s).
Restoring base VAE
Applying attention optimization: Doggettx... done.
VAE weights loaded.
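For context on the traceback above (my reading, not part of the original report): `sd_disable_initialization.py` monkeypatches `CLIPTextModel.from_pretrained` to pass `state_dict={}` together with a model name, while recent `transformers` releases treat an explicit `state_dict` and a model name (or `gguf_file`) as mutually exclusive loading strategies and raise. A minimal, self-contained sketch of that guard, with illustrative names rather than the actual `transformers` internals:

```python
# Sketch of the argument guard that recent transformers versions apply in
# from_pretrained (assumption: simplified; the real check lives in
# transformers/modeling_utils.py and does much more).

def from_pretrained_guard(pretrained_model_name_or_path=None,
                          state_dict=None, gguf_file=None):
    """Reject the ambiguous combination of loading strategies.

    A caller must either name a pretrained model (or a gguf file) and let
    the library fetch weights, OR supply an explicit state_dict -- not both.
    Note that an empty dict still counts as "passed", which is exactly
    what trips up webui's `state_dict={}` monkeypatch.
    """
    if state_dict is not None and (pretrained_model_name_or_path is not None
                                   or gguf_file is not None):
        raise ValueError(
            "`state_dict` cannot be passed together with a model name or a "
            "`gguf_file`. Use one of the two loading strategies."
        )
    return "ok"
```

Under this reading, the webui-side fix would be to stop forwarding `state_dict={}` in the monkeypatch (the `Failed to create model quickly; will retry using slow method.` fallback already recovers, which is why the model still loads).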

Additional information

No response
