Hello Dask community!
I am running Dask on a Linux server and I am very limited by memory. My main problem is that the Dask scheduler process just keeps eating more and more memory, even though I haven't submitted any work to the workers yet! I would like to understand what's going on and whether there is any mitigation for this problem.
To reproduce it:
from dask.distributed import Client
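# With processes=False, the scheduler and workers run inside this process;
# memory_limit applies to each worker, not to the scheduler.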
client = Client(memory_limit='100MB', processes=False, n_workers=4, threads_per_worker=1)
Then check memory with top | grep python
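For a more precise check than top, a small psutil snippet (assuming psutil is installed; it is not needed for the reproduction itself) can report the resident memory of the current process, which with processes=False contains the scheduler:
import os
import psutil

# Scheduler, client, and workers share this process when processes=False,
# so its RSS reflects the scheduler's growth too.
proc = psutil.Process(os.getpid())
print(f"Resident memory: {proc.memory_info().rss / 1e6:.1f} MB")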
P.S.: I succeeded in limiting worker process memory by passing memory_limit to the Client call, but this doesn't seem to be possible for the scheduler process.
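One OS-level workaround I can think of (a sketch, not a Dask feature, and Linux-only): capping the address space of the process that hosts the scheduler with the standard-library resource module. This makes allocations beyond the cap fail with MemoryError rather than actually reducing the scheduler's usage, so it is a guard, not a fix:
import resource

# Cap this process's virtual address space at 2 GB (Linux only).
# Allocations beyond the cap raise MemoryError instead of growing further.
limit_bytes = 2 * 1024**3
resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))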