Make cli worker parameter flexible #606
Conversation
Another option we could pursue is to just call them as scripts without …
Thanks a lot @hmacdope for contributing here!!
Just made a comment on the code, and it would be really nice to add the test you propose in your first post.
dask_jobqueue/core.py (outdated)

```python
death_timeout=None,
local_directory=None,
extra=None,
worker_command="distributed.cli.dask_worker",
```
To be consistent with other kwargs, the default here should be None, and the true default defined in the configuration file. See how other kwargs are handled.
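A minimal sketch of that pattern, assuming a hypothetical `jobqueue.<cluster>.worker-command` configuration key (the key name and the helper function are illustrations, not the actual code):

```python
import dask

# Hypothetical sketch of the kwarg-default pattern described above: the keyword
# defaults to None and the real default is read from the Dask configuration.
# The "jobqueue.<name>.worker-command" key used here is an assumed name.
def resolve_worker_command(worker_command=None, config_name="slurm"):
    if worker_command is None:
        worker_command = dask.config.get(
            "jobqueue.%s.worker-command" % config_name,
            default="distributed.cli.dask_worker",
        )
    return worker_command
```

With this pattern, passing nothing picks up whatever the configuration file specifies, which is how the other keyword arguments in `core.py` behave.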
Done!
@guillaumeeb hopefully I have addressed your comments.
This looks good to me! We don't need to wait for the CUDA issue to be fixed to merge, right?
Also, if at some point you could add a bit of documentation about using this new option, that would be really nice!
We'll also wait till the CI is green!
It is merged!
Yes, I will raise an issue.
@guillaumeeb any idea why the container builds are failing?
Nope, I need to check these. This might not be a problem, but the CI / build (none) failure is. Could you try to check that one?
I'm not sure what the problem is with the build; it looks like the image takes too much time to build. Weird, more than 6 hours!
Dask/distributed recently dropped Python 3.8 and I noticed that the failing CI uses it, maybe that is the problem?
@guillaumeeb need anything more from me here?
@guillaumeeb @jacobtomlinson @lesteve it would be great to get this finalised if possible.
Sorry for the delay here @hmacdope, I wanted to check the CI but didn't have the time. I'll merge anyway. Would you need an official release at some point?
Thanks so much @guillaumeeb! No worries, everyone is busy. :) All good without an official release.
Fixes #229
Depends on rapidsai/dask-cuda#1181
Hi all!
Basic implementation of calling CLIs other than the `distributed` CLI. Will require rapidsai/dask-cuda#1181 to be merged first, adding a `main` entrypoint to `dask-cuda-worker`.

I have had to add a bit of a shim to filter out some CLI args that are not shared between the `dask-worker` and `dask-cuda-worker` CLIs.

Very basic test to see that it works:
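The test itself is not reproduced here; below is only a rough sketch of what such a check could look like, assuming a `SLURMCluster` and assuming `dask_cuda.cli` as the module path for the dask-cuda worker CLI (both are assumptions, not the actual test code):

```python
# Rough sketch only, not the test from the PR: check that a non-default worker
# CLI ends up in the generated job script.
from dask_jobqueue import SLURMCluster

cluster = SLURMCluster(
    cores=1,
    memory="2GB",
    worker_command="dask_cuda.cli",  # assumed module path for the dask-cuda worker CLI
)
assert "dask_cuda.cli" in cluster.job_script()
```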
I am a new contributor so please let me know if I am missing anything obvious.