
multiprocessing.get_logger() logger deadlock on first call by subprocess to logger.info("...") due to internal logger.debug(...) call by multiprocessing.Queue._start_thread #91555

Closed
AshleyT3 opened this issue Apr 15, 2022 · 16 comments
Labels
stdlib (Python modules in the Lib dir), topic-multiprocessing, type-bug (An unexpected behavior, bug, or error)

Comments

@AshleyT3

AshleyT3 commented Apr 15, 2022

Python: v3.10.2
Platform: Windows 10

Description: A deadlock occurs when a subprocess uses the logging.Logger returned by multiprocessing.get_logger() and the effective logging level at the time of the first call to logger.info(...) is DEBUG or lower. During that same initial call, the internal logger.debug(...) call at the top of multiprocessing.Queue._start_thread fires, which makes the same thread attempt to re-acquire a lock it already holds.

Workaround: Set the logging level to INFO or higher (anything above DEBUG) so that the logging.debug() statement at the top of Queue._start_thread does not attempt to log (and re-acquire the lock) while the queue's initialization is already in progress.

The following example exhibits the issue when SHOW_THE_DEADLOCK==True. Set SHOW_THE_DEADLOCK==False to observe the workaround.

from concurrent.futures import ProcessPoolExecutor
import logging
import logging.handlers
from multiprocessing import freeze_support
import multiprocessing
import sys

SHOW_THE_DEADLOCK = True # True to show the bug, False to show workaround.

g_queue: multiprocessing.Queue = None
def global_init(logging_queue):
    global g_queue
    g_queue = logging_queue

def subprocess_logging_test():
    queue_handler = logging.handlers.QueueHandler(g_queue)
    l2 = multiprocessing.get_logger()
    l2.addHandler(queue_handler)
    if not SHOW_THE_DEADLOCK:
        l2.setLevel("INFO") # default level is UNSET, if level is <= DEBUG deadlock will occur, this prevents that.
        l2.info("write to log once at info and higher to perform thread init etc.")
    l2.setLevel("DEBUG")
    l2.info("If initial level is DEBUG, deadlock here, else OK.")
    l2.warning("If initial level is DEBUG, never reach this point, else OK.")
    l2.debug("If initial level is DEBUG, never reach this point, else OK.")
    l2.info("If initial level is DEBUG, never reach this point, else OK.")

def main():
    global g_queue
    g_queue = multiprocessing.Queue(maxsize=99)
    handler = logging.StreamHandler(stream=sys.stdout)
    listener = logging.handlers.QueueListener(g_queue, handler)
    listener.start()
    with ProcessPoolExecutor(
            initializer=global_init,
            initargs=(g_queue,)
    ) as pexec:
        f = pexec.submit(subprocess_logging_test)
        f.result()
    listener.stop()

if __name__ == '__main__':
    freeze_support()
    main()

The following is an annotated stack from the above example when SHOW_THE_DEADLOCK==True.

put (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:92)
    with self._notempty:          # <----- deadlock
        if self._thread is None:
            self._start_thread()
        self._buffer.append(obj)
        self._notempty.notify()
put_nowait (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:138)
enqueue (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\handlers.py:1423)
emit (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\handlers.py:1461)
handle (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:968)
callHandlers (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1696)
handle (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1634)
_log (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1624)
log (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1547)
debug (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\util.py:50)
_start_thread (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:165)
    def _start_thread(self):
        debug('Queue._start_thread()') # <----- at level DEBUG or lower, triggers logging call leading to deadlock.
put (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:94)
put_nowait (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\queues.py:138)
enqueue (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\handlers.py:1423)
emit (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\handlers.py:1461)
handle (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:968)
callHandlers (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1696)
handle (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1634)
_log (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1624)
info (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\logging\__init__.py:1477)
subprocess_logging_test (c:\PythonTesting\logging_testing.py:25)
_process_worker (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\concurrent\futures\process.py:243)
run (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\process.py:108)
_bootstrap (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\process.py:315)
_main (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\spawn.py:129)
spawn_main (c:\Users\HappyPythonDev\AppData\Local\Programs\Python\Python310\Lib\multiprocessing\spawn.py:116)
 (:1)


@akulakov
Contributor

I reproduced this with the latest 3.11 on macOS: both the lockup and the success when setting a higher logging level.

@iamdbychkov

see also:
#90321

@duaneg
Contributor

duaneg commented Mar 27, 2025

minimal-mpq-repro.py
import logging.handlers
import multiprocessing

queue = multiprocessing.Queue()
handler = logging.handlers.QueueHandler(queue)
logger = multiprocessing.get_logger()
logger.addHandler(handler)
logger.setLevel("DEBUG")
logger.debug("deadlock")

Queues have a _notempty condition variable, created with a non-recursive lock. In this deadlock the queue handler tries to start a feeder thread the first time a message is enqueued; however, that logs a debug message to the same logger, which does the same thing again, trying to acquire the same lock and deadlocking:

Traceback (most recent call first):
  ...
This line is deadlocked trying to acquire the _notempty lock:
  File "/home/duaneg/src/cpython/Lib/multiprocessing/queues.py", line 90, in put
    with self._notempty:
  ...
  File "/home/duaneg/src/cpython/Lib/multiprocessing/util.py", line 50, in debug
    _logger.log(DEBUG, msg, *args, stacklevel=2)
  File "/home/duaneg/src/cpython/Lib/multiprocessing/queues.py", line 174, in _start_thread
    debug('Queue._start_thread()')
  ...
Note this line is called holding the _notempty lock:
  File "/home/duaneg/src/cpython/Lib/multiprocessing/queues.py", line 92, in put
    self._start_thread()
  ...
  File "/home/duaneg/src/cpython/Lib/logging/__init__.py", line 1736, in callHandlers
    hdlr.handle(record)
  ...
  File "/home/duaneg/src/cpython/Lib/logging/__init__.py", line 1507, in debug
    self._log(DEBUG, msg, args, **kwargs)
  ...

We could make the lock recursive, but that would just turn the deadlock into a stack overflow. At present, while a handler is processing a log record for a logger, it must not log anything to that same logger.

Arguably this is fair enough: that seems like an inherently bad idea, and perhaps this issue (and #90321) should just be closed as user error. In fact there is a warning about exactly this sort of deadlock in the logging.Handler documentation. However, this is a nasty latent bug for anyone unlucky enough to hit it, and it may be difficult for, e.g., custom handlers calling into third-party code to be sure that code will never log.

As an alternative, we could relatively easily eliminate this whole class of bug by temporarily disabling logging for a given logger while its handlers are being called. I'll push a PR for consideration shortly.
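For a rough idea of the approach (this is an illustrative sketch, not the actual PR; the guard name and the subclass structure are made up), a per-thread guard around handler dispatch might look something like this:

```python
# Illustrative sketch only -- not the actual PR. A per-thread flag guards
# Logger.callHandlers so that a handler which logs back to the same logger
# is silently skipped instead of re-entering and deadlocking or recursing.
import logging
import threading

_in_handler = threading.local()  # hypothetical guard: one flag per thread

class ReentrancySafeLogger(logging.Logger):
    def callHandlers(self, record):
        if getattr(_in_handler, "active", False):
            return  # this thread is already inside a handler: drop the record
        _in_handler.active = True
        try:
            super().callHandlers(record)
        finally:
            _in_handler.active = False
```

Installing it via logging.setLoggerClass(ReentrancySafeLogger) before loggers are created would apply it to new loggers; the real change would live inside logging itself rather than in a subclass.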

Either way, I don't think this is a multiprocessing bug per se.

duaneg added a commit to duaneg/cpython that referenced this issue Mar 27, 2025
Prevent the possibility of re-entrancy and deadlock or infinite recursion caused by logging triggered by logging, by disabling logging while the logger is handling log messages.
@vsajip
Member

vsajip commented Mar 27, 2025

Either way, I don't think this is a multiprocessing bug per se

What value does that internal DEBUG logging in multiprocessing really have? What happens if you remove it - do this issue and #90321 both go away?

@duaneg
Contributor

duaneg commented Mar 28, 2025

What value does that internal DEBUG logging in multiprocessing really have? What happens if you remove it - do this issue and #90321 both go away?

I can't comment on the value of that logging but removing it will indeed fix this issue and #90321. In this case, since multiprocessing and logging are all within the standard library and somewhat integrated already, perhaps that would be an appropriate and simpler/less risky fix.

I've done a quick code inspection of the other standard logging handlers and I don't think any of them have the same issue, although some do fairly involved things like making HTTP requests. More widely, custom handlers calling into third-party code may run into this issue and either have latent bugs or need tricky workarounds. E.g. see the Sentry logging integration code:

https://github.com/getsentry/sentry-python/blob/2f4b0280048d103d95120ad5f802ec39157e3bc8/sentry_sdk/integrations/logging.py#L44

If folks think removing the logging from multiprocessing.Queue (or even multiprocessing entirely) is a better approach I'd be happy to whip up a PR to that effect.

@vsajip
Member

vsajip commented Mar 28, 2025

In this case, since multiprocessing and logging are all within the standard library and somewhat integrated already

Well, multiprocessing introduces a custom logging level, which is allowed by logging for very specific scenarios but not recommended in general, so it's not as cohesive a design as one might think.

e.g. see the Sentry logging integration code

I only had a quick look, but it seems to just sidestep their own internal logging because it would cause recursion errors.

If folks think removing the logging from multiprocessing.Queue (or even multiprocessing entirely)

I would suggest opening a discuss.python.org topic to solicit feedback from other core developers. Personally, I would consider that removing the logging from multiprocessing.Queue is reasonable (but not, for now, changing anything that doesn't need to change to address the two issues). In general, an application or library should work exactly the same independently of logging verbosity settings, which in this case multiprocessing doesn't do, ISTM.

@picnixz picnixz added type-bug An unexpected behavior, bug, or error stdlib Python modules in the Lib dir labels Mar 28, 2025
@duaneg
Contributor

duaneg commented Mar 29, 2025

I would suggest opening a discuss.python.org topic to solicit feedback from other core developers.

OK, will do, thanks!

Personally, I would consider that removing the logging from multiprocessing.Queue is reasonable (but not, for now, changing anything that doesn't need to change to address the two issues).

As it happens I was looking at a different bug in multiprocessing yesterday, and with this fresh in my mind used the built-in logging: it was very useful. It is also a publicly documented part of the module's API. Removing it doesn't feel appropriate to me, but let's see what other people think.
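For context, a minimal sketch of the documented debugging aid being referred to, using the public multiprocessing.log_to_stderr helper (the Pool workload here is just an arbitrary example):

```python
# Minimal sketch: route multiprocessing's internal logging to stderr while
# chasing an mp problem. log_to_stderr() is the documented helper; the Pool
# call simply generates some internal activity so debug messages appear.
import logging
import multiprocessing

if __name__ == "__main__":
    logger = multiprocessing.log_to_stderr()
    logger.setLevel(logging.DEBUG)  # or multiprocessing.util.SUBDEBUG (level 5)

    with multiprocessing.Pool(2) as pool:
        pool.map(abs, [-1, -2, -3])  # internal [DEBUG/...] lines show on stderr
```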

@vsajip
Member

vsajip commented Mar 29, 2025

If not completely removing it, then perhaps consider keeping the fix in multiprocessing where I think it belongs?

@gpshead gpshead moved this to In Progress in Logging issues 🪵 Mar 29, 2025
@gpshead gpshead moved this to In Progress in Multiprocessing issues Mar 29, 2025
@duaneg
Contributor

duaneg commented Mar 29, 2025

If not completely removing it, then perhaps consider keeping the fix in multiprocessing where I think it belongs?

I disagree that the fix belongs in multiprocessing: IMO the bug (assuming we treat this as a bug at all, and not user error) is in the logging code, which is using a lower-level data structure from an unrelated module in a way that happens to be unsafe in its particular context.

Moreover, this is just one example of a whole class of potential bugs, which I think we can and should prevent entirely by making the logging framework more robust, as per the PR.

@vsajip
Member

vsajip commented Apr 11, 2025

IMO the bug (assuming we treat this as a bug at all, and not user error) is in the logging code, which is using a lower-level data structure from an unrelated module

Sorry, which lower-level data structure do you mean, and which unrelated module? Sorry if I'm being dense.

@duaneg
Contributor

duaneg commented Apr 11, 2025

Sorry, which lower-level data structure do you mean, and which unrelated module? Sorry if I'm being dense.

Not at all! "Lower-level" was inaccurate and a poor choice of words on my part, sorry. Let me see if I can explain myself better.

QueueHandler takes a queue parameter, which its docs state "can be any queue-like object". You can pass it a queue.Queue, or your own implementation of a queue, and it will all work just fine. If whatever queue you provide does its own logging though, there is an obvious problem. On the face of it, this seems like user error: they have provided an object which is inappropriate for how it is used. However, the logging documentation specifically says, "if you are using multiprocessing, you should...use multiprocessing.Queue"! If this is a user bug, it is IMO also a documentation bug that this little trap is not noted. Moreover, this is very difficult for a user to ensure: their own code may not do any logging but what about third-party code, called directly or indirectly? The bug will not be immediately obvious if it only shows up when debugging is enabled.

So, I don't think we should treat it as a user error, but I don't think it is a multiprocessing.Queue bug either. That class predates the logging handler, as does its built-in logging. It was not originally designed or intended to be used for logging, and in fact AFAICT is mostly not used for logging. The fact that logging later came along and added a handler which doesn't work with it under some circumstances is not its fault.

As another example, an external logging handler which sends log messages to an HTTP endpoint will probably be implemented using one of the existing, mature external HTTP modules such as urllib3 (or requests, which builds on top of it). Or possibly it is fancy modern code using asyncio. Except, whoops, those modules have built-in logging! That is a useful feature and it would be absurd to ask them to remove it to make a logging handler's implementation easier. In order to safely use these libraries the handler implementation needs to go to some fairly extreme lengths to suppress their logging entirely. Sentry does, but the other susceptible third-party handlers I've reviewed on PyPI do not: if debug-level logging is enabled they will hang, crash, or spew logs in an infinite loop.
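(As an illustration only, not Sentry's actual code: such a handler could suppress the library's logging for the duration of emit, along these lines. The handler class and its send method are hypothetical.)

```python
# Sketch of a handler that calls into third-party HTTP code with that
# library's logging suppressed, so nothing it logs can recurse back into
# this handler while a record is being emitted.
import contextlib
import logging

@contextlib.contextmanager
def suppress_logger(name):
    """Temporarily drop all records from the named logger (coarse but simple)."""
    target = logging.getLogger(name)
    previous = target.disabled
    target.disabled = True
    try:
        yield
    finally:
        target.disabled = previous

class HTTPPostHandler(logging.Handler):  # hypothetical third-party-style handler
    def emit(self, record):
        try:
            # Any DEBUG logging urllib3 does while we send this record is
            # dropped, so it cannot re-enter this handler.
            with suppress_logger("urllib3"):
                self.send(self.format(record))
        except Exception:
            self.handleError(record)

    def send(self, payload):
        ...  # assumed transport, e.g. a urllib3/requests POST to a collector
```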

In my opinion, the best and most practical way forward is to deal with this in the logging core itself. My preference is to just drop any such recursive logging (this potentially loses useful log messages). Another option would be to store and send them separately, later (this doesn't work if sending a log message produces another log message, ad infinitum).

@vsajip
Member

vsajip commented Apr 13, 2025

With these changes:

diff --git a/Lib/multiprocessing/queues.py b/Lib/multiprocessing/queues.py
index 925f0439000..73f1535581b 100644
--- a/Lib/multiprocessing/queues.py
+++ b/Lib/multiprocessing/queues.py
@@ -70,7 +70,7 @@ def _reset(self, after_fork=False):
         if after_fork:
             self._notempty._at_fork_reinit()
         else:
-            self._notempty = threading.Condition(threading.Lock())
+            self._notempty = threading.Condition(threading.RLock())
         self._buffer = collections.deque()
         self._thread = None
         self._jointhread = None
diff --git a/Lib/multiprocessing/util.py b/Lib/multiprocessing/util.py
index b7192042b9c..5d54a3bcb79 100644
--- a/Lib/multiprocessing/util.py
+++ b/Lib/multiprocessing/util.py
@@ -35,33 +35,35 @@
 INFO = 20
 SUBWARNING = 25
 
+META_LOGGER_NAME = 'multiprocessing.meta'
 LOGGER_NAME = 'multiprocessing'
 DEFAULT_LOGGING_FORMAT = '[%(levelname)s/%(processName)s] %(message)s'
 
 _logger = None
+_meta_logger = None
 _log_to_stderr = False
 
 def sub_debug(msg, *args):
-    if _logger:
-        _logger.log(SUBDEBUG, msg, *args, stacklevel=2)
+    if _meta_logger:
+        _meta_logger.log(SUBDEBUG, msg, *args, stacklevel=2)
 
 def debug(msg, *args):
-    if _logger:
-        _logger.log(DEBUG, msg, *args, stacklevel=2)
+    if _meta_logger:
+        _meta_logger.log(DEBUG, msg, *args, stacklevel=2)
 
 def info(msg, *args):
-    if _logger:
-        _logger.log(INFO, msg, *args, stacklevel=2)
+    if _meta_logger:
+        _meta_logger.log(INFO, msg, *args, stacklevel=2)
 
 def sub_warning(msg, *args):
-    if _logger:
-        _logger.log(SUBWARNING, msg, *args, stacklevel=2)
+    if _meta_logger:
+        _meta_logger.log(SUBWARNING, msg, *args, stacklevel=2)
 
 def get_logger():
     '''
     Returns logger used by multiprocessing
     '''
-    global _logger
+    global _logger, _meta_logger
     import logging
 
     with logging._lock:
@@ -70,6 +72,9 @@ def get_logger():
             _logger = logging.getLogger(LOGGER_NAME)
             _logger.propagate = 0
 
+            _meta_logger = logging.getLogger(META_LOGGER_NAME)
+            _meta_logger.propagate = 0
+
             # XXX multiprocessing should cleanup before logging
             if hasattr(atexit, 'unregister'):
                 atexit.unregister(_exit_function)

the hang or stack overflow does not occur in minimal-mpq-repro.py. About these changes:

  • An RLock is better than a Lock here, as any re-entrancy problem becomes immediately apparent (a recursion error) rather than a silent hang.
  • Logging about the internals of part of the logging machinery itself should perhaps be logged to a meta-logger rather than the main logger, which can still be used for logging other aspects of multiprocessing.
  • If someone wants to troubleshoot the queue area of multiprocessing, nothing stops them from attaching a non-queue handler to the meta logger - a file or console handler, say - to see what's going on (a short sketch follows below).
  • This change is not complete, as I've just moved all the internal logging over to the meta logger rather than looking more carefully at where logging is being done in multiprocessing.
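For example, under the proposed (unmerged) diff above, attaching a console handler to the hypothetical 'multiprocessing.meta' logger might look like this:

```python
# Sketch only, assuming the unmerged diff above: the queue internals would log
# to a separate 'multiprocessing.meta' logger, so a plain console handler can
# be attached to it without routing those records through a QueueHandler.
import logging
import multiprocessing

multiprocessing.get_logger()  # in the diff, this also initialises the meta logger
meta = logging.getLogger("multiprocessing.meta")
meta.setLevel(logging.DEBUG)
meta.addHandler(logging.StreamHandler())  # non-queue handler: no re-entrancy
```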

What problems do you see with implementing my suggested changes in multiprocessing for handling the two issues mentioned here?

Update: tweaked diff to add a missing global for _meta_logger.

@vsajip
Member

vsajip commented Apr 13, 2025

Also, please add a link here to the discuss.python.org topic where this was discussed, as the search on that site isn't ideal!

Never mind, found it: https://discuss.python.org/t/dealing-with-unbounded-recursion-in-logging-handlers/86365

@duaneg
Contributor

duaneg commented Apr 14, 2025

  • Logging about the internals of part of the logging machinery itself should perhaps be logged to a meta-logger rather than the main logger, which can still be used for logging other aspects of multiprocessing.

Once again, as I see it, multiprocessing.Queue is not part of the internals of the logging module. It is part of multiprocessing. From the point of view of a developer debugging an issue in an mp application using queues (for general IPC, not necessarily logging!) it is just part of the mp machinery, and they will want its logs in the same place as the other mp logs.

  • If someone wants to troubleshoot the queue area of multiprocessing, nothing stops them from attaching a non-queue handler to the meta logger - a file or console handler, say - to see what's going on.

If we decide the user is responsible for configuring logging correctly in order to avoid these issues (i.e. we treat this issue as a "user error") we don't need to do anything, except maybe adding a note to the docs.

What problems do you see with implementing my suggested changes in multiprocessing for handling the two issues mentioned here?

It will potentially break existing code that configures the current logger and wants those logs. It doesn't prevent the bug, it just alters the configuration required to trigger it.

@vsajip
Member

vsajip commented Apr 14, 2025

and they will want its logs in the same place as the other mp logs.

Then we can leave things as they are. Your proposed solution (disabling the logger while handlers are active) seems to potentially leave the logger disabled for a long period of time (indeterminate, depending on what other handlers are configured). Certain handlers (such as SMTPHandler, HTTPHandler) can be quite slow, depending on network latencies and responses from external services. Leaving a logger effectively disabled for indeterminate amounts of time could just introduce new problems.

(i.e. we treat this issue as a "user error")

Well, in this case multiprocessing is a user of logging, and it's certainly possible to consider this as such an error. I have no problem with adding caveats to the documentation to warn users of the danger. Feel free to propose some.

Ultimately, the reason for this problem seems to be that the multiprocessing implementation didn't consider this scenario and plan for it.

It will potentially break existing code that configures the current logger and wants those logs. It doesn't prevent the bug, it just alters the configuration required to trigger it.

But surely it prevents the bug in the simple configurations described in the example reproducers here and in #90321? As things are, they wouldn't get the logs anyway, because of the deadlock problem, right? A number of people who would currently run into the bug would do so no longer, so couldn't that be considered an improvement?

If the configuration required to trigger it is more complicated, then someone would have to go out of their way to trigger it, right? What would be the use case for such a configuration?

I'm not actually proposing to make the changes in the above diff - it's only to show that other approaches are possible and to spark discussion.

@duaneg
Contributor

duaneg commented Apr 14, 2025

Then we can leave things as they are

For sure we can. I think we can do better, but I could be wrong, and it isn't a big deal. Treating it as user error is a perfectly reasonable way to resolve the issue.

Your proposed solution (disabling the logger while handlers are active) seems to potentially leave the logger disabled for a long period of time (indeterminate, depending on what other handlers are configured)

It disables logging on a per-thread basis and only while the thread runs the message handlers (+filters). Which is to say, it is disabled only while running code that would trigger this bug if it logged a message. Or at least, that is my intent: if I've overlooked anything or there is a bug, please let me know!

It doesn't stop other threads from logging to the same logger and/or handlers.

(i.e. we treat this issue as a "user error")

Well, in this case multiprocessing is a user of logging, and it's certainly possible to consider this as such an error. I have no problem with adding caveats to the documentation to warn users of the danger. Feel free to propose some.

If we cannot find an acceptable solution to prevent this in the code then I will indeed prepare a documentation patch. I remain optimistic we can prevent it, however :-)

If the configuration required to trigger is more complicated, then someone would have to go out of their way to trigger it, right? What would be the use case for such configuration?

The way I've been thinking of it is: if some code is sending logs to a multiprocessing.Queue, it is probably because it is running in a worker process that wants to send them to a central location, and it is probably a root handler. This bug is likely triggered by someone setting the logging level to "debug" at the root level in the worker, perhaps because they are trying to track down a particularly mysterious bug. Using a different logger won't help in that scenario.

I'm not actually proposing to make the changes in the above diff - it's only to show that other approaches are possible and to spark discussion.

Understood, and thanks. I find the feedback very valuable, from you and others. For now, however, I still think suppressing the logging while running the log handling code is the best solution.

duaneg added a commit to duaneg/cpython that referenced this issue Apr 17, 2025
vsajip pushed a commit that referenced this issue May 8, 2025
Prevent the possibility of re-entrancy leading to deadlock or infinite recursion (caused by logging triggered by logging), by disabling logging while the logger is handling log messages.
miss-islington pushed a commit to miss-islington/cpython that referenced this issue May 11, 2025
…1812)

Prevent the possibility of re-entrancy leading to deadlock or infinite recursion (caused by logging triggered by logging), by disabling logging while the logger is handling log messages.
(cherry picked from commit 2561e14)

Co-authored-by: Duane Griffin <[email protected]>
miss-islington pushed a commit to miss-islington/cpython that referenced this issue May 11, 2025
…1812)

Prevent the possibility of re-entrancy leading to deadlock or infinite recursion (caused by logging triggered by logging), by disabling logging while the logger is handling log messages.
(cherry picked from commit 2561e14)

Co-authored-by: Duane Griffin <[email protected]>
vsajip pushed a commit that referenced this issue May 12, 2025
…GH-133899)

Prevent the possibility of re-entrancy leading to deadlock or infinite recursion (caused by logging triggered by logging), by disabling logging while the logger is handling log messages.
(cherry picked from commit 2561e14)

Co-authored-by: Duane Griffin <[email protected]>
@vsajip vsajip closed this as completed May 12, 2025
@github-project-automation github-project-automation bot moved this from In Progress to Done in Multiprocessing issues May 12, 2025
@github-project-automation github-project-automation bot moved this from In Progress to Done in Logging issues 🪵 May 12, 2025
hawkeye217 added a commit to blakeblackshear/frigate that referenced this issue May 29, 2025
A Python bug (python/cpython#91555) was preventing logs from the embeddings maintainer process from printing. The bug is fixed in Python 3.14, but a viable workaround is to use the multiprocessing Manager, which better manages mp queues and causes the logging to work correctly.
NickM-27 pushed a commit to blakeblackshear/frigate that referenced this issue May 29, 2025
* use mp Manager to handle logging queues

A Python bug (python/cpython#91555) was preventing logs from the embeddings maintainer process from printing. The bug is fixed in Python 3.14, but a viable workaround is to use the multiprocessing Manager, which better manages mp queues and causes the logging to work correctly.

* consolidate

* fix typing