bpo-39812: Remove daemon threads in concurrent.futures #19149
Changes from all commits: 3f03bd7, 7686ba5, 21a12e9, b266ee5, 320fc12, e548615, fdf66c3, 4cf8c4e
Lib/concurrent/futures/process.py
@@ -59,19 +59,6 @@
 import sys
 import traceback

-# Workers are created as daemon threads and processes. This is done to allow the
-# interpreter to exit when there are still idle processes in a
-# ProcessPoolExecutor's process pool (i.e. shutdown() was not called). However,
-# allowing workers to die with the interpreter has two undesirable properties:
-#   - The workers would still be running during interpreter shutdown,
-#     meaning that they would fail in unpredictable ways.
-#   - The workers could be killed while evaluating a work item, which could
-#     be bad if the callable being evaluated has external side-effects e.g.
-#     writing to a file.
-#
-# To work around this problem, an exit handler is installed which tells the
-# workers to exit when their work queues are empty and then waits until the
-# threads/processes finish.
Comment on lines -62 to -74: I believe this section is no longer relevant now that the workers are no longer daemon threads. Correct me if I'm mistaken.

 _threads_wakeups = weakref.WeakKeyDictionary()
 _global_shutdown = False
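To make the removed rationale concrete, here is a minimal sketch (not from the PR; the file names and delays are made up) of the difference between a daemon and a non-daemon thread at interpreter exit:

```python
import threading
import time


def work_item(path, delay):
    time.sleep(delay)               # stand-in for a long-running call
    with open(path, "w") as f:      # external side effect, as in the removed comment
        f.write("done\n")


# The non-daemon thread is joined by threading._shutdown(), so "joined.txt" is
# always written.  The daemon thread is still sleeping when the interpreter
# begins finalizing, gets no chance to finish, and "daemon.txt" is normally
# never created, which is exactly the unpredictable-shutdown problem described
# in the removed comment above.
threading.Thread(target=work_item, args=("joined.txt", 0.1), daemon=False).start()
threading.Thread(target=work_item, args=("daemon.txt", 2.0), daemon=True).start()
```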
@@ -107,6 +94,12 @@ def _python_exit():
     for t, _ in items:
         t.join()

+# Register for `_python_exit()` to be called just before joining all
+# non-daemon threads. This is used instead of `atexit.register()` for
+# compatibility with subinterpreters, which no longer support daemon threads.
+# See bpo-39812 for context.
+threading._register_atexit(_python_exit)
+
 # Controls how many more calls than processes will be queued in the call queue.
 # A smaller number will mean that processes spend more time idle waiting for
 # work while a larger number will make Future.cancel() succeed less frequently
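The user-visible behaviour is meant to stay the same: work that has already been submitted still runs to completion at interpreter exit, it is just driven by this threading hook instead of daemon workers plus an `atexit` handler. A hedged sketch of that effect (illustrative only, not part of the diff; ThreadPoolExecutor is used for brevity, and the same hook is registered for ProcessPoolExecutor):

```python
from concurrent.futures import ThreadPoolExecutor
import time


def slow_square(x):
    time.sleep(0.2)    # still running when the main thread reaches the end
    return x * x


executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(slow_square, 7)   # deliberately never waited on
# No explicit executor.shutdown() here.  At interpreter exit, _python_exit()
# runs via threading._register_atexit() before the non-daemon worker threads
# are joined: it sets the global shutdown flag, wakes the workers, and joins
# them, so the pending call finishes instead of being killed mid-execution.
```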
@@ -306,9 +299,7 @@ def weakref_cb(_, thread_wakeup=self.thread_wakeup):
         # {5: <_WorkItem...>, 6: <_WorkItem...>, ...}
         self.pending_work_items = executor._pending_work_items

-        # Set this thread to be daemonized
         super().__init__()
-        self.daemon = True
I'm actually depending on this behavior and my code breaks in Python 3.9. Is there a way to set the threads to be daemon? The error I got: […], which I believe is relevant. @aeros

@laike9m Hmm, does your code specifically rely on submitting new jobs after […]? That being said, there may be some reason to consider allowing users to opt in to using daemon threads, particularly since the decision to prevent subinterpreters from using daemon threads was reverted (which was originally the main motivation for this PR, with the benefit of more predictable shutdown behavior). Do you have any thoughts @pitrou?

@laike9m What do you mean by "break"? Does the process not exit cleanly?

Let me describe my use case. I'm using gRPC, which starts a server using […]. This server runs in another thread besides the main thread and waits using […]. I'm not 100% sure that the root cause is this change, but I scanned through all changes in concurrent.futures in 3.9, and this seems to be the most likely one.

I see, then I think you need to define a signal handler that will be called when Ctrl-C is pressed, and that will ask the server to stop gracefully.

emm, it's actually not about what happens when users press Ctrl+C, but that the server is not able to serve requests before that... I'll try to come up with a minimal example that can reproduce this error.

Ah, then it's simpler: you should ask the server to finish and wait for it to finish before you let the main thread exit.

Sounds good, thanks.
     def run(self):
         # Main loop for the executor manager thread.
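A hedged sketch of the suggestion in the thread above; the `server` calls are hypothetical placeholders for whatever RPC framework is in use (they are not a real gRPC API):

```python
import signal
import threading

stop_requested = threading.Event()


def handle_sigint(signum, frame):
    # On Ctrl-C, request a graceful stop instead of letting threads be killed.
    stop_requested.set()


signal.signal(signal.SIGINT, handle_sigint)

# server.start()                 # hypothetical: serve requests on worker threads
stop_requested.wait()            # keep the main thread alive so the server can serve
# server.stop_gracefully()       # hypothetical: stop accepting new requests
# server.wait_until_finished()   # hypothetical: wait for in-flight work to finish
```

The point is that the main thread only exits after the server has been asked to stop and has finished, so nothing depends on the worker threads being daemonic.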
@@ -732,5 +723,3 @@ def shutdown(self, wait=True, *, cancel_futures=False):
         self._executor_manager_thread_wakeup = None

     shutdown.__doc__ = _base.Executor.shutdown.__doc__
-
-atexit.register(_python_exit)
Comment on lines -735 to -736: This change is just to make the atexit registration be in the same location for both ProcessPoolExecutor and ThreadPoolExecutor (right under the definition for […]).
Lib/threading.py
@@ -3,6 +3,7 @@
 import os as _os
 import sys as _sys
 import _thread
+import functools

 from time import monotonic as _time
 from _weakrefset import WeakSet
@@ -1346,6 +1347,27 @@ def enumerate():
     with _active_limbo_lock:
         return list(_active.values()) + list(_limbo.values())


+_threading_atexits = []
+_SHUTTING_DOWN = False
+
+def _register_atexit(func, *arg, **kwargs):
+    """CPython internal: register *func* to be called before joining threads.
+
+    The registered *func* is called with its arguments just before all
+    non-daemon threads are joined in `_shutdown()`. It provides a similar
+    purpose to `atexit.register()`, but its functions are called prior to
+    threading shutdown instead of interpreter shutdown.
+
+    For similarity to atexit, the registered functions are called in reverse.
+    """
+    if _SHUTTING_DOWN:
+        raise RuntimeError("can't register atexit after shutdown")
+
+    call = functools.partial(func, *arg, **kwargs)
+    _threading_atexits.append(call)
+
+
 from _thread import stack_size

 # Create the main thread object,

Comment on the docstring: I specified "CPython internal" instead of just "internal" to make it more clear that it's intended to be used across stdlib modules (as opposed to an internal helper function for […]). At some point, I think this could graduate to the public API, but at the moment it's a bit too niche since subinterpreters have not yet officially made it into the stdlib. If such a utility is requested by users, I think it could be moved without much hassle though. That's partly why I included a detailed docstring.

This sounds good to me, thanks for the explanation.
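For reference, a minimal usage sketch of the internal hook (the `flush_queues` callback is a made-up example, not stdlib code):

```python
import threading


def flush_queues(label):
    # Runs inside threading._shutdown(), after the main thread has stopped but
    # before any non-daemon thread is joined.
    print(f"flushing {label} before joining non-daemon threads")


# Arguments are bound with functools.partial and the call is deferred until
# interpreter exit.  Calls happen in reverse registration order, so "second"
# is printed before "first".
threading._register_atexit(flush_queues, "first")
threading._register_atexit(flush_queues, "second")

# Registering once _shutdown() has started raises RuntimeError, matching the
# _SHUTTING_DOWN guard above.
```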
@@ -1367,6 +1389,8 @@ def _shutdown():
         # _shutdown() was already called
         return

+    global _SHUTTING_DOWN
+    _SHUTTING_DOWN = True
     # Main thread
     tlock = _main_thread._tstate_lock
     # The main thread isn't finished yet, so its thread state lock can't have
@@ -1376,6 +1400,11 @@ def _shutdown():
     tlock.release()
     _main_thread._stop()

+    # Call registered threading atexit functions before threads are joined.
+    # Order is reversed, similar to atexit.
+    for atexit_call in reversed(_threading_atexits):
+        atexit_call()
+
     # Join all non-deamon threads
     while True:
         with _shutdown_locks_lock:
NEWS entry (new file)
@@ -0,0 +1,4 @@
+Removed daemon threads from :mod:`concurrent.futures` by adding
+an internal `threading._register_atexit()`, which calls registered functions
+prior to joining all non-daemon threads. This allows for compatibility
+with subinterpreters, which don't support daemon threads.
Using a tilde (`~`) shortens the Sphinx markup link to appear as just the last part, i.e. ThreadPoolExecutor and ProcessPoolExecutor. I typically use this when the classes are easily distinguishable based on their names alone and the section already makes it clear what module they're in. This helps make it a bit more succinct.
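As a concrete illustration (not taken from the PR): in Sphinx, :class:`~concurrent.futures.ThreadPoolExecutor` renders as just "ThreadPoolExecutor", whereas :class:`concurrent.futures.ThreadPoolExecutor` renders with the full dotted path.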