Update Smarter Testing docs #9795
base: main
Conversation
Rewrite the on-boarding docs now that we have a local CLI
Rename adaptive-testing to test-impact-analysis
Rename dynamic-batching to dynamic-test-splitting
Co-authored-by: Liam Clarke <[email protected]>
Co-authored-by: Jérémy Vincent <[email protected]>
| * Generating code coverage data is essential for determining how tests are related to code. If tests are run in a way that makes generating and accessing code coverage data tricky then Smarter Testing may not be a good fit. | ||
| * Smarter Testing works best when testing a single deployable artifact. For example, a monorepo with integration tests that span multiple packages/services, especially when services run in separate container, makes coverage generation and consolidation difficult. | ||
| * Smarter Testing works best when testing a single deployable artifact. For example, a monorepo with integration tests that span multiple packages/services, especially when services run in separate containers, makes coverage generation and consolidation difficult. |
I've received feedback that this part is not very clear, and that it reads as if all tests needed to be in a single job.
rosieyohannan left a comment:
OK I have left a bunch of suggestions and questions up to this point. I have not yet fully reviewed from section 4. Will do once we handle these changes
| . Discovering tests. | ||
| . Running selected tests. | ||
| . Analysing test impact. |
| . Discovering tests. | |
| . Running selected tests. | |
| . Analysing test impact. | |
| * Discovering tests. | |
| * Running selected tests. | |
| * Analysing test impact. |
| Dynamic test splitting distributes your tests across parallel execution nodes. The system maintains a shared queue that each node pulls from to create a balanced workload. | ||
| When you configure parallelism in your job, Smarter Testing automatically: | ||
| When you configure parallelism in your job and enable Dynamic test splitting, Smarter Testing automatically: |
| When you configure parallelism in your job and enable Dynamic test splitting, Smarter Testing automatically: | |
| When you configure parallelism in your job and enable dynamic test splitting, Smarter Testing automatically: |
| This approach prevents slower nodes from extending the job runtime while other nodes have finished. All nodes continue to run tests until the entire test suite has been completed. | ||
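For context, a minimal sketch of what the parallelism side of this could look like in `.circleci/config.yml`. Only `parallelism` and the `circleci run testsuite "ci tests"` command (which appears later in this PR) are taken from the docs themselves; the job name, image, and node count are placeholders, and whatever switch enables dynamic test splitting is not shown here.

```yaml
version: 2.1

# Illustrative sketch only: job name, image, and node count are placeholders.
jobs:
  test:
    docker:
      - image: cimg/base:current
    # Parallel execution nodes that pull work from the shared queue
    # once dynamic test splitting is enabled.
    parallelism: 4
    steps:
      - checkout
      - run: circleci run testsuite "ci tests"
```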
| === Test impact analysis | ||
| Test impact analysis identifies which tests need to run based on the files that changed in your checked out code, stored as impact data for future test runs. The system works in two phases: |
| Test impact analysis identifies which tests need to run based on the files that changed in your checked out code, stored as impact data for future test runs. The system works in two phases: | |
| Test impact analysis identifies which tests need to run based on the files that changed in your checked out code. Analysis results are stored as impact data for future test runs. The system works in two phases: |
felt like there was something missing in this sentence from the last version!
| *Analysis phase*:: Builds a mapping between your tests and the code they exercise. Each test is run individually with code coverage enabled to determine which files it covers. Analysis runs on your default branch, but you can configure it to run on any branch with any trigger (webhook, API, or scheduled pipeline). | ||
| *Selection phase*:: Compares changed files against the test impact data and selects only tests that exercise modified code. By default, this runs on feature branches, but you can customize which branches use selection mode through your CircleCI configuration. | ||
| *Selection phase*:: Compares changed files against the test impact data and selects only tests that exercise modified code. By default test selection is applied on feature branches, and all tests are run on your default branch. You can customize this behavior in your CircleCI configuration. |
| *Selection phase*:: Compares changed files against the test impact data and selects only tests that exercise modified code. By default test selection is applied on feature branches, and all tests are run on your default branch. You can customize this behavior in your CircleCI configuration. | |
| *Selection phase*:: Compares changed files against the test impact data and selects only tests that exercise modified code. By default, test selection is applied on feature branches and all tests are run on your default branch. You can customize this behavior in your CircleCI configuration. |
| Test selection works by comparing the current state of the repository against the most recent test impact analysis data. Any tests that cover modified files will be selected for running. Additionally the entire test suite will be selected if any of the `full-test-run-paths` files are modified, see the <<test-suite-configuration-options,test suite configuration options>> for details. |
| Test selection works by comparing the current state of the repository against the most recent test impact analysis data. Any tests that cover modified files will be selected for running. Additionally the entire test suite will be selected if any of the `full-test-run-paths` files are modified, see the <<test-suite-configuration-options,test suite configuration options>> for details. | |
| Test selection works by comparing the current state of the repository against the most recent test impact analysis data. Any tests that cover modified files are selected to run. Additionally, the entire test suite is selected if any of the `full-test-run-paths` files are modified. See the <<test-suite-configuration-options>> for details. |
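As a rough illustration of the `full-test-run-paths` option referenced here, a fragment of `.circleci/test-suites.yml` might look like the following. The key name comes from the sentence above; its placement and the example paths are assumptions.

```yaml
# Hypothetical fragment of .circleci/test-suites.yml. The paths are placeholders;
# a change to any file matching them selects the entire test suite rather than
# an impact-based subset.
full-test-run-paths:
  - .circleci/config.yml
  - package.json
```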
| . Update the `.circleci/test-suites.yml` with the run command. | ||
| . In order to upload test results in your CI jobs, the location of the JUnit output file path also needs to be set in the test suite configuration. Set `outputs.junit` in `.circleci/test-suites.yml` with your preferred JUnit output file path. | ||
| . Update the `.circleci/test-suites.yml` with the run command. | |
| . In order to upload test results in your CI jobs, the location of the JUnit output file path also needs to be set in the test suite configuration. Set `outputs.junit` in `.circleci/test-suites.yml` with your preferred JUnit output file path. | |
| . Update the `.circleci/test-suites.yml` with the run command. | |
| . In order to upload test results in your CI jobs, the location of the JUnit output file path also needs to be set in the test suite configuration. Set `outputs.junit` in `.circleci/test-suites.yml` with your preferred JUnit output file path: | |
| + |
just to bring the following tabs into this numbered step
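For reference, a sketch of what `.circleci/test-suites.yml` could look like after these two steps. Only the `outputs.junit` key and the idea of a `run` command come from the text; the suite name matches the `circleci run testsuite "ci tests"` example further down, and the layout, test command, and output path are placeholders.

```yaml
# Hypothetical .circleci/test-suites.yml sketch; the exact schema may differ.
"ci tests":
  # Your regular test command, so Smarter Testing can run the suite.
  run: make test
  outputs:
    # JUnit output file path, used to upload test results in CI jobs.
    junit: test-results/junit.xml
```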
| ==== | ||
| === 2.2 Populate the analysis command | ||
| Run the test suite and confirm that the `run` command runs the test atoms you expect: |
| Run the test suite and confirm that the `run` command runs the test atoms you expect: | |
| . Run the test suite and confirm that the `run` command runs the test atoms you expect: | |
| + |
| We recommend following the steps in <<getting-started>> first before enabling the Smarter Testing feature to ensure the `discover` and `run` commands are set up correctly. | ||
| Steps: |
I feel a bit lost at this point. Are the following steps outlining what we are showing people how to do in the next couple of numbered steps, 2.1 and 2.2? If so, I can help make this clear.
| . Test selection is driven from test impact analysis data. | ||
| . The analysis phase is correctly analysing test impact. | ||
| The next step is to run your test suite in CI. |
| The next step is to run your test suite in CI. | |
| The next step is to run your test suite in CI. This is covered in step 3. | |
| The following troubleshooting steps are provided to help debug any issues you have found up to this point. |
| . Update your `.circleci/config.yml` to call the `circleci run testsuite "ci tests"` command instead of your regular test command. | ||
| . Push the change to your VCS. | ||
| E.g. if your CircleCI test job was: |
| E.g. if your CircleCI test job was: | |
| For example, if your CircleCI test job was: |
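The before/after example itself is not part of this excerpt, but the shape of the change would be roughly the following. The image and the original test command are placeholders; only the `circleci run testsuite "ci tests"` call comes from the step above.

```yaml
version: 2.1

jobs:
  test:
    docker:
      - image: cimg/base:current   # placeholder image
    steps:
      - checkout
      # Previously this step ran the regular test command directly, e.g.:
      # - run: make test
      - run: circleci run testsuite "ci tests"
```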