Commit c8d2117

Fix memory leak by properly detaching model finalizer (#9979)
When unloading models in load_models_gpu(), the model finalizer was not being explicitly detached, leaking memory: consumption grew linearly over time as models were repeatedly loaded and unloaded. This change detaches the finalizer on unload, preventing orphaned finalizer references from accumulating during model-switching operations.
1 parent fccab99 commit c8d2117
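
For context on the leak: Python's weakref.finalize keeps every live finalizer handle in an internal registry until it either fires or is detached, and each handle holds strong references to its callback and arguments. Assuming model_finalizer is such a weakref.finalize handle (consistent with the .detach() call in the diff below), here is a minimal sketch of the mechanism; the Model class and cleanup callback are hypothetical stand-ins:

import weakref

class Model:
    """Hypothetical stand-in for a loaded model."""

def cleanup():
    print("cleanup ran")

model = Model()
finalizer = weakref.finalize(model, cleanup)
print(finalizer.alive)   # True: registered, waiting for `model` to be collected

# The explicit unload path: detach() marks the handle dead and removes it
# from the registry, so the handle, its callback, and its arguments stop
# lingering once the model is unloaded.
finalizer.detach()
print(finalizer.alive)   # False: nothing left to fire or to accumulate

Without the detach, each load/unload cycle can leave a live handle behind until the tracked object happens to be garbage collected, which matches the linear growth described above.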

File tree

1 file changed: +3 −1 lines

comfy/model_management.py

Lines changed: 3 additions & 1 deletion
@@ -645,7 +645,9 @@ def load_models_gpu(models, memory_required=0, force_patch_weights=False, minimu
             if loaded_model.model.is_clone(current_loaded_models[i].model):
                 to_unload = [i] + to_unload
         for i in to_unload:
-            current_loaded_models.pop(i).model.detach(unpatch_all=False)
+            model_to_unload = current_loaded_models.pop(i)
+            model_to_unload.model.detach(unpatch_all=False)
+            model_to_unload.model_finalizer.detach()
 
     total_memory_required = {}
     for loaded_model in models_to_load:
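
Read together, the unload loop after this commit behaves like the annotated sketch below; the names come from the diff, while the comments are interpretation rather than part of the source:

for i in to_unload:
    # Pop the entry once so both cleanup steps act on the same object.
    model_to_unload = current_loaded_models.pop(i)
    # Release the model as before, without unpatching all weights.
    model_to_unload.model.detach(unpatch_all=False)
    # New in this commit: detach the finalizer explicitly so its registry
    # entry does not outlive the unloaded model.
    model_to_unload.model_finalizer.detach()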

Comments (0)