Simplify concurrent.futures.process code by using itertools.batched() #114221
Conversation
This seems fine -- there's a check that chunksize is >= 1 in ProcessPoolExecutor.map.
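For context, a sketch of what the change amounts to. The body of the old helper is paraphrased from the stdlib source, and the _old/_new suffixes are illustrative names of mine, not identifiers from the module:

```python
import itertools

# Roughly the previous generator-based helper in concurrent.futures.process:
def _get_chunks_old(*iterables, chunksize):
    """Iterates over zip()ed iterables in chunks."""
    it = zip(*iterables)
    while True:
        chunk = tuple(itertools.islice(it, chunksize))
        if not chunk:
            return
        yield chunk

# With itertools.batched() (Python 3.12+) the helper collapses to one line:
def _get_chunks_new(*iterables, chunksize):
    """Iterates over zip()ed iterables in chunks."""
    return itertools.batched(zip(*iterables), chunksize)

# batched() raises ValueError for a chunk size < 1, which is why the
# existing "chunksize must be >= 1" check in ProcessPoolExecutor.map
# keeps the behaviour well-defined.
assert (list(_get_chunks_old("abcd", "wxyz", chunksize=3))
        == list(_get_chunks_new("abcd", "wxyz", chunksize=3)))
```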
Title changed from "concurrent.features.process get_chunks to use new itertools.batched()" to "itertools.batched() in concurrent.futures.process._get_chunks".
You can inline the _get_chunks() helper.

Yes, since it is a short one-liner.
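Inlining removes the helper entirely and passes the batched() call straight to the parent Executor.map(). A self-contained sketch of that pattern; chunked_map here is an illustrative stand-in for ProcessPoolExecutor.map, not the stdlib code itself, although _process_chunk mirrors the stdlib helper of the same name:

```python
import itertools
from functools import partial
from concurrent.futures import ProcessPoolExecutor

def _process_chunk(fn, chunk):
    # Run fn on each pre-zipped argument tuple of the chunk
    # (this is what happens in the worker process).
    return [fn(*args) for args in chunk]

def chunked_map(executor, fn, *iterables, chunksize=1):
    # Toy re-creation of ProcessPoolExecutor.map's chunking, with the
    # batched() call inlined instead of going through a _get_chunks() helper.
    if chunksize < 1:
        raise ValueError("chunksize must be >= 1.")
    results = executor.map(partial(_process_chunk, fn),
                           itertools.batched(zip(*iterables), chunksize))
    return itertools.chain.from_iterable(results)

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:
        print(list(chunked_map(ex, pow, range(8), range(8), chunksize=3)))
```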
LGTM.
Thank you for your contribution, @NewUserHa. But note that usually all but the most trivial changes require opening an issue, and if it is an optimization, its effect should be demonstrated in benchmarks: not only that some line in a tight loop becomes faster, but that the stdlib code that includes this line becomes faster. I accepted this change as an exception, because it makes the code simpler and adds a demonstration of a new itertools function. But the bar may be higher for other changes.
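A minimal sketch of the kind of micro-benchmark that could back up such a change. The data size and chunk length are arbitrary choices of mine, this is not the benchmark used for this PR, and it only times the chunking itself rather than a full ProcessPoolExecutor.map call:

```python
import itertools
import timeit

def chunks_islice(it, chunksize):
    # Old approach: repeatedly materialize slices with islice().
    while chunk := tuple(itertools.islice(it, chunksize)):
        yield chunk

def chunks_batched(it, chunksize):
    # New approach: delegate to the C-implemented itertools.batched().
    return itertools.batched(it, chunksize)

setup = "data = list(zip(range(100_000), range(100_000)))"
for name, stmt in [("islice loop", "list(chunks_islice(iter(data), 7))"),
                   ("batched()  ", "list(chunks_batched(iter(data), 7))")]:
    best = min(timeit.repeat(stmt, setup=setup, number=20, repeat=5,
                             globals=globals()))
    print(f"{name}: {best:.4f} s for 20 runs")
```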