Fixed #12771 (Progress value is not increased) #7956
Conversation
Pull Request Overview
This PR fixes issue #12771 where the progress value was not being increased properly. When processing files using FileSettings (e.g., from compilation databases), the total file size is often 0, causing the progress percentage to always show 0%. The fix switches to using file count instead of file size for progress calculation when FileSettings are used or when total file size is 0.
- Modified progress reporting logic in ThreadExecutor and ProcessExecutor to use file counts instead of file sizes when appropriate
- Updated existing test to handle both percentage-based (0%, 25%, etc.) and file-count-based (0%) progress messages
- Added new parametrized test to verify progress reporting works correctly with different executor types and job counts
Reviewed Changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| cli/threadexecutor.cpp | Modified to use file count for progress when total file size is 0 |
| cli/processexecutor.cpp | Modified to use file count for progress when using FileSettings |
| test/cli/other_test.py | Updated test to accept both 0% and percentage-based progress messages |
| test/cli/more-projects_test.py | Added new test to verify progress reporting with compilation database |
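The core of the fix can be sketched as follows. This is a minimal Python sketch of the logic only (the function name and parameters are illustrative); the actual change is C++ code in `cli/threadexecutor.cpp` and `cli/processexecutor.cpp`:

```python
def progress_percent(files_checked, total_files,
                     size_checked, total_size, use_file_settings):
    # When FileSettings are used (e.g. from a compilation database) the
    # per-file size is often unknown, so total_size is 0 and a size-based
    # percentage would always report 0%. Fall back to counting files.
    if use_file_settings or total_size == 0:
        return 100 * files_checked // total_files
    # Otherwise keep the original size-based calculation.
    return 100 * size_checked // total_size
```

With 4 files and no size information, this yields 25%, 50%, 75%, 100% instead of a constant 0%.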
test/cli/other_test.py
Outdated
```python
try:
    if '{}/4 files checked 0% done'.format(i) in lines:
        lines.remove('{}/4 files checked 0% done'.format(i))
    else:
        lines.remove('{}/4 files checked {}% done'.format(i, i * 25))
except ValueError:
    assert False, f"Expected progress message for file {i}/4 not found in output. Checked for '{{}}/4 files checked 0% done' and '{{}}/4 files checked {{}}% done'. Output lines: {lines}"
```
This is awkward. One way to improve it would be to put the expected message in a variable and use that in the checks so it is not duplicated.
And we should be expecting a specific output and not multiple ones. That dilutes the test.
And the message in the except is confusing.
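One possible shape for the suggested refactoring, with the expected message built once in a variable so it is not duplicated across the check, the removal, and the failure message (the helper name is illustrative, not the actual test code):

```python
def check_progress_lines(lines, total=4):
    # Expect exactly one deterministic message per file, built in one place.
    for i in range(1, total + 1):
        expected = '{}/{} files checked {}% done'.format(i, total, i * 100 // total)
        assert expected in lines, 'missing progress message: {!r}'.format(expected)
        lines.remove(expected)
```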
> And we should be expecting a specific output and not multiple ones. That dilutes the test.
As far as I know it differs depending on the order in which the files finish, and that order will change because we use -j2: a test that runs fine locally may not run fine in CI.
> As far as I know it differs depending on the order in which the files finish
The status is not reported in this way for markup files.
> the status is not reported in this way for markup files.
Probably https://trac.cppcheck.net/ticket/12167
> As far as I know it differs depending on in which order the files are finished. And the order will change because we use -j2. A test that runs fine locally does not run fine in CI.
Then we should split the test as long as the outputs differ, since allowing multiple acceptable results would not detect regressions or fixes.
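A hypothetical illustration of that split (names are illustrative): assert the exact ordered sequence for `-j1`, where completion order is deterministic, and only the unordered set of messages for `-j2`, where it is not:

```python
def assert_progress(lines, jobs, total=4):
    expected = ['{}/{} files checked {}% done'.format(i, total, i * 100 // total)
                for i in range(1, total + 1)]
    if jobs == 1:
        # Single job: files finish in a fixed order, so the sequence is exact.
        assert lines == expected
    else:
        # Multiple jobs: the set of messages is stable, the order is not.
        assert sorted(lines) == sorted(expected)
```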
A side note regarding progress which is outside of this scope: we had a discussion about using the file size to calculate the percentage. I was arguing that we should not do it, since it seems arbitrary and confusing; the size might not reflect actual runtime and might be disproportionate to other files. While looking into preprocessor configurations I realized it would be even more arbitrary, because a file which is very small but has lots of extracted configurations is still only counted by its size. So the size might only make sense if you have a single fixed configuration per file, and even then that configuration might reduce the file to nothing. Adding logic which takes the actual size after preprocessing has been applied would be madness.
Co-authored-by: Copilot <[email protected]>
@firewave now some CIs fail like this: … Other CIs succeed. The test passes locally; I get "0%" on my computer (Linux). I am not sure right now why the results differ between local runs and CI, but maybe the platform makes a difference.
I am not sure why it says "0%" on my computer. I will try to dig into that soon.
So splitting it actually found another bug - nice. I never thought about the … And I think we should still file one, as it is a different issue than the one this PR was aiming to address. Otherwise I approve.
No description provided.