
Dependency Aware Node Caching for low RAM/VRAM machines (#7509)

* add dependency aware cache that removes a cached node as soon as all of its descendants have executed. This allows users with lower RAM to run workflows they otherwise could not run. The downside is that every workflow re-runs fully each time, even if no nodes have changed (see the sketch after this list).

* remove test code

* tidy code
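
A minimal sketch of the eviction idea described in the first bullet, using hypothetical class and method names rather than ComfyUI's actual cache implementation: each cached output remembers which descendants still need it, and the entry is freed as soon as the last of those descendants has executed.

    # Hypothetical sketch, not the real ComfyUI cache class.
    class DependencyAwareCacheSketch:
        def __init__(self):
            self._values = {}         # node_id -> cached output
            self._pending_users = {}  # node_id -> descendant ids that have not run yet

        def set(self, node_id, value, descendants):
            """Cache a node's output and record which nodes still need it."""
            self._values[node_id] = value
            self._pending_users[node_id] = set(descendants)

        def get(self, node_id):
            return self._values.get(node_id)

        def mark_executed(self, executed_id, ancestors):
            """After a node runs, free any ancestor whose consumers have all run."""
            for anc in ancestors:
                users = self._pending_users.get(anc)
                if users is None:
                    continue
                users.discard(executed_id)
                if not users:
                    # No remaining consumers: drop the cached output to save RAM/VRAM.
                    del self._values[anc]
                    del self._pending_users[anc]

    # Illustrative usage with made-up node ids:
    cache = DependencyAwareCacheSketch()
    cache.set("loader", "checkpoint-output", descendants={"sampler", "decoder"})
    cache.mark_executed("sampler", ancestors=["loader"])   # still cached, "decoder" pending
    cache.mark_executed("decoder", ancestors=["loader"])   # last consumer ran, entry freed
    assert cache.get("loader") is None

Because nothing survives past its last consumer, no results can be reused between prompts, which is exactly the trade-off the commit message notes.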
Author: Chargeuk
Date: 2025-04-11 11:55:51 +01:00
Committed by: GitHub
Parent: f9207c6936
Commit: ed945a1790
4 changed files with 174 additions and 13 deletions


@@ -156,7 +156,7 @@ def cuda_malloc_warning():
 def prompt_worker(q, server_instance):
     current_time: float = 0.0
-    e = execution.PromptExecutor(server_instance, lru_size=args.cache_lru)
+    e = execution.PromptExecutor(server_instance, lru_size=args.cache_lru, cache_none=args.cache_none)
     last_gc_collect = 0
     need_gc = False
     gc_collect_interval = 10.0
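
The diff passes a new cache_none attribute through to PromptExecutor. A sketch of the CLI wiring that would produce such attributes is shown below; the flag names and help text are assumed from the args.cache_lru / args.cache_none references in the diff, not copied from ComfyUI's cli_args module.

    import argparse

    # Hypothetical argument definitions matching the attributes used in the diff.
    parser = argparse.ArgumentParser()
    parser.add_argument("--cache-lru", type=int, default=0,
                        help="Use an LRU node cache with this many entries.")
    parser.add_argument("--cache-none", action="store_true",
                        help="Use the dependency-aware cache: lowest memory use, "
                             "but no result reuse between runs.")
    args = parser.parse_args([])  # empty list here only so the sketch runs standalone

    # An executor could then select its cache backend from these flags, mirroring
    # PromptExecutor(server, lru_size=args.cache_lru, cache_none=args.cache_none).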