Describe the bug
I trained a model with OneTrainer and with EveryDream2; both produce the same error when loading the resulting checkpoint with `from_single_file`.
This one is with OneTrainer:
Fetching 11 files: 100% 11/11 [00:00<00:00, 115922.97it/s]
Loading pipeline components...: 33% 2/6 [00:00<00:00, 178.17it/s]
Traceback (most recent call last):
File "/home/zznet/.local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 491, in from_single_file
loaded_sub_model = load_single_file_sub_model(
File "/home/zznet/.local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 156, in load_single_file_sub_model
raise SingleFileComponentError(
diffusers.loaders.single_file_utils.SingleFileComponentError: Failed to load CLIPTextModel. Weights for this component appear to be missing in the checkpoint.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/zznet/workspace/ai-pipe/prod-ai-pipe-replace-model/runpod_app.py", line 14, in <module>
from pipe import text2img, getText2imgPipe, set_sampler, compel
File "/home/zznet/workspace/ai-pipe/prod-ai-pipe-replace-model/pipe.py", line 24, in <module>
text2imgPipe = StableDiffusionControlNetPipeline.from_single_file(
File "/home/zznet/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/zznet/.local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 506, in from_single_file
raise SingleFileComponentError(
diffusers.loaders.single_file_utils.SingleFileComponentError: Failed to load CLIPTextModel. Weights for this component appear to be missing in the checkpoint.
Please load the component before passing it in as an argument to `from_single_file`.
text_encoder = CLIPTextModel.from_pretrained('...')
pipe = StableDiffusionControlNetPipeline.from_single_file(<checkpoint path>, text_encoder=text_encoder)
This one is with EveryDream2:
#7506 (comment)
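Following the suggestion at the end of the traceback, a possible workaround (untested here) is to load the text encoder separately and pass it into `from_single_file`. A minimal sketch, assuming the trained checkpoint is SD 1.5-based and that the stock runwayml/stable-diffusion-v1-5 text encoder is a valid substitute (both are assumptions on my part):

import torch
from transformers import CLIPTextModel
from diffusers import StableDiffusionControlNetPipeline

# Assumption: the trained checkpoint is SD 1.5-based, so the base model's
# text encoder weights are compatible with it.
text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="text_encoder",
    torch_dtype=torch.float16,
)

pipe = StableDiffusionControlNetPipeline.from_single_file(
    "./base.safetensors",
    text_encoder=text_encoder,
    controlnet=[depth_control, softedge_control, inpaint_control],  # ControlNetModel instances loaded elsewhere
    torch_dtype=torch.float16,
)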
Reproduction
import torch
from diffusers import StableDiffusionControlNetPipeline

# depth_control, softedge_control and inpaint_control are ControlNetModel
# instances loaded earlier (see the sketch after this snippet)
text2imgPipe = StableDiffusionControlNetPipeline.from_single_file(
    './base.safetensors',
    # vae=vae,
    # '/home/zznet/workspace/stable-diffusion-webui/models/Stable-diffusion/majicmixRealistic_v7.safetensors',
    # '/home/zznet/workspace/1-ot/save/2024-04-30_11-47-02-save-1650-110-0.safetensors',
    controlnet=[
        depth_control,
        softedge_control,
        inpaint_control,
    ],
    torch_dtype=torch.float16,
    # custom_pipeline='lpw_stable_diffusion'
)
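For completeness, the three ControlNet objects in the snippet above are ControlNetModel instances loaded before the failing call, roughly as below. The model IDs are illustrative stand-ins for the ControlNet v1.1 depth/softedge/inpaint checkpoints, not copied from my actual config:

import torch
from diffusers import ControlNetModel

# Illustrative ControlNet v1.1 checkpoints (depth / softedge / inpaint);
# the exact repos used in my setup may differ.
depth_control = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
softedge_control = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_softedge", torch_dtype=torch.float16
)
inpaint_control = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)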
Logs
No response
System Info
diffusers main branch
Who can help?
No response