Conversation

@gordonsyme
Member

Rewrite the on-boarding docs now that we have a local CLI

@gordonsyme gordonsyme requested review from a team as code owners November 27, 2025 16:53
@liamclarkedev liamclarkedev force-pushed the gordon/smarter-testing-updates branch 3 times, most recently from 19e6899 to fdcd26e on November 28, 2025 10:08
@jvincent42 jvincent42 force-pushed the gordon/smarter-testing-updates branch 2 times, most recently from 9715df1 to 35d471f on December 1, 2025 13:54
@gordonsyme gordonsyme force-pushed the gordon/smarter-testing-updates branch 11 times, most recently from f051968 to 55fc45f on December 2, 2025 10:31
gordonsyme and others added 2 commits December 2, 2025 12:02
Rewrite the on-boarding docs now that we have a local CLI
Rename adaptive-testing to test-impact-analysis
Rename dynamic-batching to dynamic-test-splitting

Co-authored-by: Liam Clarke <[email protected]>
Co-authored-by: Jérémy Vincent <[email protected]>
@gordonsyme gordonsyme force-pushed the gordon/smarter-testing-updates branch from 55fc45f to 48b2160 on December 2, 2025 12:02

* Generating code coverage data is essential for determining how tests are related to code. If tests are run in a way that makes generating and accessing code coverage data tricky then Smarter Testing may not be a good fit.
* Smarter Testing works best when testing a single deployable artifact. For example, a monorepo with integration tests that span multiple packages/services, especially when services run in separate container, makes coverage generation and consolidation difficult.
* Smarter Testing works best when testing a single deployable artifact. For example, a monorepo with integration tests that span multiple packages/services, especially when services run in separate containers, makes coverage generation and consolidation difficult.
Contributor

I've received feedback that this part is not very clear, and that it reads as if all tests needed to be in a single job.

Contributor

@rosieyohannan rosieyohannan left a comment

OK I have left a bunch of suggestions and questions up to this point. I have not yet fully reviewed from section 4. Will do once we handle these changes

Comment on lines +46 to +48
. Discovering tests.
. Running selected tests.
. Analysing test impact.
Contributor

Suggested change
. Discovering tests.
. Running selected tests.
. Analysing test impact.
* Discovering tests.
* Running selected tests.
* Analysing test impact.

Dynamic test splitting distributes your tests across parallel execution nodes. The system maintains a shared queue that each node pulls from to create a balanced workload.

When you configure parallelism in your job, Smarter Testing automatically:
When you configure parallelism in your job and enable Dynamic test splitting, Smarter Testing automatically:
Contributor

Suggested change
When you configure parallelism in your job and enable Dynamic test splitting, Smarter Testing automatically:
When you configure parallelism in your job and enable dynamic test splitting, Smarter Testing automatically:

This approach prevents slower nodes from extending the job runtime while other nodes have finished. All nodes continue to run tests until the entire test suite has been completed.
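
For illustration, here is a minimal sketch of a job that opts into dynamic test splitting by setting `parallelism`. The `circleci run testsuite "ci tests"` command is taken from the step shown later in this PR; the executor image and the suite name are assumptions for the example, not part of the docs under review.

[source,yaml]
----
# .circleci/config.yml (sketch; image and suite name are assumptions)
version: 2.1

jobs:
  tests:
    docker:
      - image: cimg/base:current   # placeholder executor image
    parallelism: 4                 # dynamic test splitting balances tests across these 4 nodes
    steps:
      - checkout
      - run: circleci run testsuite "ci tests"
----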

=== Test impact analysis
Test impact analysis identifies which tests need to run based on the files that changed in your checked out code, stored as impact data for future test runs. The system works in two phases:
Contributor

Suggested change
Test impact analysis identifies which tests need to run based on the files that changed in your checked out code, stored as impact data for future test runs. The system works in two phases:
Test impact analysis identifies which tests need to run based on the files that changed in your checked out code. Analysis results are stored as impact data for future test runs. The system works in two phases:

Contributor

felt like there was something missing in this sentence from the last version!

*Analysis phase*:: Builds a mapping between your tests and the code they exercise. Each test is run individually with code coverage enabled to determine which files it covers. Analysis runs on your default branch, but you can configure it to run on any branch with any trigger (webhook, API, or scheduled pipeline).

*Selection phase*:: Compares changed files against the test impact data and selects only tests that exercise modified code. By default, this runs on feature branches, but you can customize which branches use selection mode through your CircleCI configuration.
*Selection phase*:: Compares changed files against the test impact data and selects only tests that exercise modified code. By default test selection is applied on feature branches, and all tests are run on your default branch. You can customize this behavior in your CircleCI configuration.
Contributor

Suggested change
*Selection phase*:: Compares changed files against the test impact data and selects only tests that exercise modified code. By default test selection is applied on feature branches, and all tests are run on your default branch. You can customize this behavior in your CircleCI configuration.
*Selection phase*:: Compares changed files against the test impact data and selects only tests that exercise modified code. By default, test selection is applied on feature branches and all tests are run on your default branch. You can customize this behavior in your CircleCI configuration.

Test selection works by comparing the current state of the repository against the most recent test impact analysis data. Any tests that cover modified files will be selected for running. Additionally the entire test suite will be selected if any of the `full-test-run-paths` files are modified, see the <<test-suite-configuration-options,test suite configuration options>> for details.
Contributor

Suggested change
Test selection works by comparing the current state of the repository against the most recent test impact analysis data. Any tests that cover modified files will be selected for running. Additionally the entire test suite will be selected if any of the `full-test-run-paths` files are modified, see the <<test-suite-configuration-options,test suite configuration options>> for details.
Test selection works by comparing the current state of the repository against the most recent test impact analysis data. Any tests that cover modified files are selected to run. Additionally, the entire test suite is selected if any of the `full-test-run-paths` files are modified. See the <<test-suite-configuration-options>> for details.
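
As a hedged sketch of how `full-test-run-paths` might appear in `.circleci/test-suites.yml`: the exact schema is not shown in this PR, so the key placement and the example paths below are assumptions based on the option name.

[source,yaml]
----
# .circleci/test-suites.yml (sketch; key placement and paths are assumptions)
full-test-run-paths:
  - package-lock.json   # dependency changes trigger a full test run
  - .circleci/          # CI config changes trigger a full test run
----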

Comment on lines +368 to 370
. Update the `.circleci/test-suites.yml` with the run command.
. In order to upload test results in your CI jobs, the location of the JUnit output file path also needs to be set in the test suite configuration. Set `outputs.junit` in `.circleci/test-suites.yml` with your preferred JUnit output file path.

Contributor

Suggested change
. Update the `.circleci/test-suites.yml` with the run command.
. In order to upload test results in your CI jobs, the location of the JUnit output file path also needs to be set in the test suite configuration. Set `outputs.junit` in `.circleci/test-suites.yml` with your preferred JUnit output file path.
. Update the `.circleci/test-suites.yml` with the run command.
. In order to upload test results in your CI jobs, the location of the JUnit output file path also needs to be set in the test suite configuration. Set `outputs.junit` in `.circleci/test-suites.yml` with your preferred JUnit output file path:
+

Contributor

just to bring the following tabs into this numbered step
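
For reference, a minimal sketch of a test suite entry with `outputs.junit` set. Only the `outputs.junit` key and the file name `.circleci/test-suites.yml` come from the text above; the suite name, the placeholder run command, and the surrounding structure are assumptions.

[source,yaml]
----
# .circleci/test-suites.yml (sketch; layout, suite name, and run command are assumptions)
suites:
  - name: ci tests
    run: make test                      # placeholder run command
    outputs:
      junit: test-results/junit.xml     # JUnit report path, uploaded by the CI job
----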

====

=== 2.2 Populate the analysis command
Run the test suite and confirm that the `run` command runs the test atoms you expect:
Contributor

Suggested change
Run the test suite and confirm that the `run` command runs the test atoms you expect:
. Run the test suite and confirm that the `run` command runs the test atoms you expect:
+


We recommend following the steps in <<getting-started>> first before enabling the Smarter Testing feature to ensure the `discover` and `run` commands are set up correctly.

Steps:
Contributor

I feel a bit lost at this point. Are the following steps outlining what we are showing people how to do in the next couple of numbered steps 2.1 and 2.2? If so I can help make this clear

. Test selection is driven from test impact analysis data.
. The analysis phase is correctly analysing test impact.

The next step is to run your test suite in CI.
Contributor

Suggested change
The next step is to run your test suite in CI.
The next step is to run your test suite in CI. This is covered in step 3.
The following troubleshooting steps are provided to debug any issues you are finding up to this point.

. Update your `.circleci/config.yml` to call the `circleci run testsuite "ci tests"` command instead of your regular test command.
. Push the change to your VCS.

E.g. if your CircleCI test job was:
Contributor

Suggested change
E.g. if your CircleCI test job was:
For example, if your CircleCI test job was:
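
The doc's actual before/after config example is not captured in this extract. As a purely hypothetical illustration, assuming a job whose test step previously called `npm test` directly:

[source,yaml]
----
# Before (hypothetical job):
jobs:
  tests:
    docker:
      - image: cimg/node:lts
    steps:
      - checkout
      - run: npm test

# After, delegating to the suite defined in .circleci/test-suites.yml:
jobs:
  tests:
    docker:
      - image: cimg/node:lts
    steps:
      - checkout
      - run: circleci run testsuite "ci tests"
----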
