[test] Let pending test finish on abort #86307
Merged
Changes from all commits (8 commits by eps1lon):
- REVERT OR YOU WILL BE FIRED simulate failing test
- [test] Let pending test finish on abort
- Align test files when logging conclusion
- We got explicit resource managment at home
- Refactor to make it clear we don't double release
- Ensure timings are submitted with shouldContinueTestsOnError
- Revert "REVERT OR YOU WILL BE FIRED simulate failing test"
- Stop retrying cleanup

Commit SHAs, in the order shown: cf2fe6b, 432d002, 9637774, 149d47e, 2e53312, e2b2745, 828d6b1, 17eb5ca.
Diff view
@@ -501,10 +501,11 @@ ${ENDGROUP}`)
     `jest${process.platform === 'win32' ? '.CMD' : ''}`
   )
   let firstError = true
-  let killed = false
+  const testController = new AbortController()
+  const testSignal = testController.signal
   let hadFailures = false

-  const runTest = (/** @type {TestFile} */ test, isFinalRun, isRetry) =>
+  const runTestOnce = (/** @type {TestFile} */ test, isFinalRun, isRetry) =>
     new Promise((resolve, reject) => {
       const start = new Date().getTime()
       let outputChunks = []
@@ -618,11 +619,11 @@ ${ENDGROUP}`)
         if (hideOutput) {
           await outputSema.acquire()
           const isExpanded =
-            firstError && !killed && !shouldContinueTestsOnError
+            firstError && !testSignal.aborted && !shouldContinueTestsOnError
           if (isExpanded) {
             firstError = false
             process.stdout.write(`❌ ${test.file} output:\n`)
-          } else if (killed) {
+          } else if (testSignal.aborted) {
             process.stdout.write(`${GROUP}${test.file} output (killed)\n`)
           } else {
             process.stdout.write(`${GROUP}❌ ${test.file} output\n`)
@@ -636,7 +637,7 @@ ${ENDGROUP}`)
           output += chunk.toString()
         }

-        if (process.env.CI && !killed) {
+        if (process.env.CI && !testSignal.aborted) {
          errorsPerTests.set(test.file, output)
        }
@@ -682,10 +683,105 @@ ${ENDGROUP}`)
      })
    })

+  const runTest = async (/** @type {TestFile} */ test) => {
+    let passed = false
+
+    const shouldSkipRetries = skipRetryTestManifest.find((t) =>
+      t.includes(test.file)
+    )
+    const numRetries = shouldSkipRetries ? 0 : originalRetries
+    if (shouldSkipRetries) {
+      console.log(
+        `Skipping retry for ${test.file} due to skipRetryTestManifest`
+      )
+    }
+
+    for (let i = 0; i < numRetries + 1; i++) {
+      try {
+        console.log(`Starting ${test.file} retry ${i}/${numRetries}`)
+        const time = await runTestOnce(
+          test,
+          shouldSkipRetries || i === numRetries,
+          shouldSkipRetries || i > 0
+        )
+        timings.push({
+          file: test.file,
+          time,
+        })
+        passed = true
+        console.log(
+          `${test.file} finished on retry ${i}/${numRetries} in ${time / 1000}s`
+        )
+        break
+      } catch (err) {
+        if (i < numRetries) {
+          try {
+            let testDir = path.dirname(path.join(__dirname, test.file))
+
+            // if test is nested in a test folder traverse up a dir to ensure
+            // we clean up relevant test files
+            if (testDir.endsWith('/test') || testDir.endsWith('\\test')) {
+              testDir = path.join(testDir, '..')
+            }
+            console.log('Cleaning test files at', testDir)
+            await exec(`git clean -fdx "${testDir}"`)
+            await exec(`git checkout "${testDir}"`)
+          } catch (err) {}
+        } else {
+          console.error(`${test.file} failed due to ${err}`)
+        }
+      }
+    }
+
+    if (!passed) {
+      hadFailures = true
+      const error = new Error(
+        // "failed to pass within" is a keyword parsed by next-pr-webhook
+        `${test.file} failed to pass within ${numRetries} retries`
+      )
+      console.error(error.message)
+
+      if (!shouldContinueTestsOnError) {
+        testController.abort(error)
+      } else {
+        console.log(
+          `CONTINUE_ON_ERROR enabled, continuing tests after ${test.file} failed`
+        )
+      }
+    }
+
+    // Emit test output if test failed or if we're continuing tests on error
+    // This is parsed by the commenter webhook to notify about failing tests
+    if ((!passed || shouldContinueTestsOnError) && isTestJob) {
+      try {
+        const testsOutput = await fsp.readFile(
+          `${test.file}${RESULTS_EXT}`,
+          'utf8'
+        )
+        const obj = JSON.parse(testsOutput)
+        obj.processEnv = {
+          NEXT_TEST_MODE: process.env.NEXT_TEST_MODE,
+          HEADLESS: process.env.HEADLESS,
+        }
+        await outputSema.acquire()
+        if (GROUP) console.log(`${GROUP}Result as JSON for tooling`)
+        console.log(
+          `--test output start--`,
+          JSON.stringify(obj),
+          `--test output end--`
+        )
+        if (ENDGROUP) console.log(ENDGROUP)
+        outputSema.release()
+      } catch (err) {
+        console.log(`Failed to load test output`, err)
+      }
+    }
+  }
+
   const directorySemas = new Map()

   const originalRetries = numRetries
-  await Promise.all(
+  const results = await Promise.allSettled(
     tests.map(async (test) => {
       const dirName = path.dirname(test.file)
       let dirSema = directorySemas.get(dirName)
@@ -695,109 +791,39 @@ ${ENDGROUP}`)
       if (/^test[/\\]integration/.test(test.file) && dirSema === undefined) {
         directorySemas.set(dirName, (dirSema = new Sema(1)))
       }
+      // TODO: Use explicit resource managment instead of this acquire/release pattern
+      // once CI runs with Node.js 24+.
       if (dirSema) await dirSema.acquire()

       await sema.acquire()
-      let passed = false
-
-      const shouldSkipRetries = skipRetryTestManifest.find((t) =>
-        t.includes(test.file)
-      )
-      const numRetries = shouldSkipRetries ? 0 : originalRetries
-      if (shouldSkipRetries) {
-        console.log(
-          `Skipping retry for ${test.file} due to skipRetryTestManifest`
-        )
-      }
-
-      for (let i = 0; i < numRetries + 1; i++) {
-        try {
-          console.log(`Starting ${test.file} retry ${i}/${numRetries}`)
-          const time = await runTest(
-            test,
-            shouldSkipRetries || i === numRetries,
-            shouldSkipRetries || i > 0
-          )
-          timings.push({
-            file: test.file,
-            time,
-          })
-          passed = true
-          console.log(
-            `Finished ${test.file} on retry ${i}/${numRetries} in ${
-              time / 1000
-            }s`
-          )
-          break
-        } catch (err) {
-          if (i < numRetries) {
-            try {
-              let testDir = path.dirname(path.join(__dirname, test.file))
-
-              // if test is nested in a test folder traverse up a dir to ensure
-              // we clean up relevant test files
-              if (testDir.endsWith('/test') || testDir.endsWith('\\test')) {
-                testDir = path.join(testDir, '..')
-              }
-              console.log('Cleaning test files at', testDir)
-              await exec(`git clean -fdx "${testDir}"`)
-              await exec(`git checkout "${testDir}"`)
-            } catch (err) {}
-          } else {
-            console.error(`${test.file} failed due to ${err}`)
-          }
-        }
-      }
-
-      if (!passed) {
-        console.error(
-          `${test.file} failed to pass within ${numRetries} retries`
-        )
-
-        if (!shouldContinueTestsOnError) {
-          killed = true
-          children.forEach((child) => child.kill())
-          cleanUpAndExit(1)
-        } else {
-          hadFailures = true
-          console.log(
-            `CONTINUE_ON_ERROR enabled, continuing tests after ${test.file} failed`
-          )
+      try {
+        if (testSignal.aborted) {
+          // We already logged the abort reason. No need to include it in cause.
+          const error = new Error(`Skipped due to abort.`)
+          error.name = test.file
+          throw error
+        }
        }
      }

-      // Emit test output if test failed or if we're continuing tests on error
-      // This is parsed by the commenter webhook to notify about failing tests
-      if ((!passed || shouldContinueTestsOnError) && isTestJob) {
-        try {
-          const testsOutput = await fsp.readFile(
-            `${test.file}${RESULTS_EXT}`,
-            'utf8'
-          )
-          const obj = JSON.parse(testsOutput)
-          obj.processEnv = {
-            NEXT_TEST_MODE: process.env.NEXT_TEST_MODE,
-            HEADLESS: process.env.HEADLESS,
-          }
-          await outputSema.acquire()
-          if (GROUP) console.log(`${GROUP}Result as JSON for tooling`)
-          console.log(
-            `--test output start--`,
-            JSON.stringify(obj),
-            `--test output end--`
-          )
-          if (ENDGROUP) console.log(ENDGROUP)
-          outputSema.release()
-        } catch (err) {
-          console.log(`Failed to load test output`, err)
-        }
+        await runTest(test)
+      } finally {
+        sema.release()
+        if (dirSema) dirSema.release()
      }

-      sema.release()
-      if (dirSema) dirSema.release()
     })
   )

+  for (const result of results) {
+    if (result.status === 'rejected') {
+      hadFailures = true
+      console.error(result.reason)
+    }
+  }
+
+  if (hadFailures && !shouldContinueTestsOnError) {
+    // TODO: Does it make sense to update timings if there were failures if without shouldContinueTestsOnError?
+    return hadFailures
+  }
+
   if (options.timings) {
     const curTimings = {}
     // let junitData = `<testsuites name="jest tests">`
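The TODO added in the hunk above points at JavaScript explicit resource management (`using` declarations with `Symbol.dispose`), which the comment says should wait until CI runs Node.js 24+. As a purely hypothetical sketch (not part of this PR), the acquire/release pattern could then look roughly like this; `acquireLock` and `withLock` are illustrative names only:

```js
// Hypothetical sketch of the "explicit resource managment" TODO above.
// Assumes `using` / Symbol.dispose support (Node.js 24+ per the TODO) and an
// async-sema style semaphore (acquire() returns a promise, release() is sync).
async function acquireLock(sema) {
  await sema.acquire()
  return {
    [Symbol.dispose]() {
      sema.release()
    },
  }
}

async function withLock(sema, run) {
  // The lock is released automatically when the block exits, even on throw.
  using lock = await acquireLock(sema)
  // `return await` so the release only happens after `run` has settled.
  return await run()
}
```

This mirrors what the acquire/release calls in the `finally` block do today, with the release tied to scope exit instead of being written out by hand.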
@@ -857,20 +883,20 @@ ${ENDGROUP}`)
    }
  }

  // Return whether there were any failures
  return hadFailures
}

-main()
-  .then((hadFailures) => {
+main().then(
+  (hadFailures) => {
    if (hadFailures) {
      console.error('Some tests failed')
-      cleanUpAndExit(1)
+      return cleanUpAndExit(1)
    } else {
-      cleanUpAndExit(0)
+      return cleanUpAndExit(0)
    }
-  })
-  .catch((err) => {
-    console.error(err)
-    cleanUpAndExit(1)
-  })
+  },
+  (reason) => {
+    console.error(reason)
+    return cleanUpAndExit(1)
+  }
+)

Review comment on lines -872 to -876 (Member, Author): Placement was a bit odd because that basically meant we retry
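Taken together, the diff replaces the old hard-kill path (`killed = true`, killing the child processes, `cleanUpAndExit(1)`) with an `AbortController`: a fatal failure calls `testController.abort(error)`, tests that are already running finish normally, tests that have not started yet throw a "Skipped due to abort" error, and `Promise.allSettled` collects everything so failures are reported at the end. Below is a condensed, hypothetical sketch of that flow, not the actual script; names like `runAll` and `test.run`, and the `delay` stand-in for the semaphore queue, are illustrative only:

```js
// Condensed, hypothetical sketch of the pattern used in the diff above
// (simplified names, no semaphore or retries; not the actual run-tests script).
const { setTimeout: sleep } = require('node:timers/promises')

async function runAll(tests, { continueOnError = false } = {}) {
  const testController = new AbortController()
  const testSignal = testController.signal
  let hadFailures = false

  const results = await Promise.allSettled(
    tests.map(async (test) => {
      // In the real script a semaphore queues tests; a delay stands in here.
      await sleep(test.delay)

      if (testSignal.aborted) {
        // Not started yet when the abort happened: skip instead of running.
        throw new Error(`${test.name} skipped due to abort`)
      }

      const passed = await test.run() // tests already running still finish
      if (!passed) {
        const error = new Error(`${test.name} failed`)
        if (!continueOnError) testController.abort(error)
        throw error
      }
    })
  )

  for (const result of results) {
    if (result.status === 'rejected') {
      hadFailures = true
      console.error(result.reason.message)
    }
  }
  return hadFailures
}

// Fake tests: 'b' fails quickly, so 'c' (still queued) is skipped, while 'a' finishes.
runAll([
  { name: 'a', delay: 0, run: async () => (await sleep(50), true) },
  { name: 'b', delay: 0, run: async () => (await sleep(10), false) },
  { name: 'c', delay: 100, run: async () => true },
]).then((hadFailures) => console.log({ hadFailures })) // -> { hadFailures: true }
```

Here `continueOnError` plays the role of `shouldContinueTestsOnError`: when it is false, the first failure aborts the controller so queued tests are skipped rather than started.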
Tests that are skipped due to abort are incorrectly counted as failures. When `testSignal.aborted` is true (line 800), an error is thrown and caught by `Promise.allSettled` as a rejection. This sets `hadFailures = true` (line 817) even though these tests were intentionally skipped, not failed. This defeats the purpose of the PR, which aims to let pending tests finish gracefully without polluting failure metrics. (Spotted by Graphite Agent)
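If that behavior is unwanted, one possible adjustment (purely a sketch, not code from this PR) would be to tag the skip error so the `results` loop can tell abort-skips apart from real failures; the `ABORT_SKIP` marker and `skipError` helper below are hypothetical names:

```js
// Hypothetical sketch: tag abort-skips so they are not counted as failures
// when collecting Promise.allSettled results.
const ABORT_SKIP = Symbol('abortSkip')

function skipError(file) {
  const error = new Error('Skipped due to abort.')
  error.name = file
  error[ABORT_SKIP] = true
  return error
}

async function demo() {
  const results = await Promise.allSettled([
    Promise.reject(new Error('real failure in test/a.test.js')),
    Promise.reject(skipError('test/b.test.js')),
  ])

  let hadFailures = false
  for (const result of results) {
    if (result.status !== 'rejected') continue
    if (result.reason[ABORT_SKIP]) {
      // Intentionally skipped after abort: report it, but don't fail the run.
      console.log(`skipped: ${result.reason.name}`)
      continue
    }
    hadFailures = true
    console.error(result.reason.message)
  }
  console.log({ hadFailures }) // -> { hadFailures: true } (only the real failure counts)
}

demo()
```

The same idea could also be expressed with a custom error subclass instead of a symbol property.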

Is this helpful? React 👍 or 👎 to let us know.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think this is the relevant part, not `hadFailures`: