Storybook test runner turns all of your stories into executable tests.
Read the announcement: Interaction Testing with Storybook
- Features
- Getting started
- CLI Options
- Configuration
- Running against a deployed Storybook
- Running in CI
- Setting up code coverage
- Experimental test hook API
- Troubleshooting
- Future work
## Features

- Zero config setup
- Smoke test all stories
- Test stories with play functions
- Test your stories in parallel in a headless browser
- Get feedback from errors with a link directly to the story
- Debug them visually and interactively in a live browser with addon-interactions
- Powered by Jest and Playwright
- Watch mode, filters, and the conveniences you'd expect
- Code coverage reports
## Getting started

1. Install the test runner and the interactions addon in Storybook:

```sh
yarn add @storybook/test-runner -D
```

Jest is a peer dependency. If you don't have it, also install it:

```sh
yarn add jest@27 -D
```

1.1 Optional instructions to install the Interactions addon for visual debugging of play functions:

```sh
yarn add @storybook/addon-interactions @storybook/jest @storybook/testing-library -D
```

Then add it to your `.storybook/main.js` config and enable debugging:

```js
module.exports = {
  addons: ['@storybook/addon-interactions'],
  features: {
    interactionsDebugger: true,
  },
};
```

2. Add a `test-storybook` script to your `package.json`:

```json
{
  "scripts": {
    "test-storybook": "test-storybook"
  }
}
```

3. Run Storybook (the test runner runs against a running Storybook instance):

```sh
yarn storybook
```

4. Run the test runner:

```sh
yarn test-storybook
```

NOTE: The runner assumes that your Storybook is running on port 6006. If you're running Storybook on another port, either use `--url` or set the `TARGET_URL` environment variable before running your command:

```sh
yarn test-storybook --url http://localhost:9009
# or
TARGET_URL=http://localhost:9009 yarn test-storybook
```
## CLI Options

```
Usage: test-storybook [options]
```

| Options | Description |
| --- | --- |
| `--help` | Output usage information <br/>`test-storybook --help` |
| `-s`, `--stories-json` | Run in stories json mode. Automatically detected (requires a compatible Storybook) <br/>`test-storybook --stories-json` |
| `--no-stories-json` | Disables stories json mode <br/>`test-storybook --no-stories-json` |
| `-c`, `--config-dir [dir-name]` | Directory where to load Storybook configurations from <br/>`test-storybook -c .storybook` |
| `--watch` | Run in watch mode <br/>`test-storybook --watch` |
| `--coverage` | Indicates that test coverage information should be collected and reported in the output <br/>`test-storybook --coverage` |
| `--url` | Define the URL to run tests in. Useful for custom Storybook URLs <br/>`test-storybook --url http://the-storybook-url-here.com` |
| `--browsers` | Define browsers to run tests in. One or multiple of: chromium, firefox, webkit <br/>`test-storybook --browsers firefox chromium` |
| `--maxWorkers [amount]` | Specifies the maximum number of workers the worker-pool will spawn for running tests <br/>`test-storybook --maxWorkers=2` |
| `--no-cache` | Disable the cache <br/>`test-storybook --no-cache` |
| `--clearCache` | Deletes the Jest cache directory and then exits without running tests <br/>`test-storybook --clearCache` |
| `--verbose` | Display individual test results with the test suite hierarchy <br/>`test-storybook --verbose` |
| `-u`, `--updateSnapshot` | Use this flag to re-record every snapshot that fails during this test run <br/>`test-storybook -u` |
| `--eject` | Creates a local configuration file to override defaults of the test runner <br/>`test-storybook --eject` |
## Configuration

The test runner is based on Jest and will accept the CLI options that Jest does, like `--watch`, `--maxWorkers`, etc.

The test runner works out of the box, but if you want better control over its configuration, you can run `test-storybook --eject` to create a local `test-runner-jest.config.js` file in the root folder of your project, which will be used by the test runner.

The test runner uses jest-playwright, and you can pass `testEnvironmentOptions` to further configure it, for example to run tests against all browsers instead of just Chromium (see the sketch below). For this you must eject the test runner configuration.
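As a rough illustration, an ejected `test-runner-jest.config.js` could be extended along these lines. Treat this as a sketch only: the `getJestConfig` helper and the `jest-playwright` option shape are assumptions based on the default ejected file, so start from the file that `--eject` actually generates for your version.

```js
// test-runner-jest.config.js (illustrative sketch, not the exact ejected defaults)
const { getJestConfig } = require('@storybook/test-runner');

module.exports = {
  // start from the test runner's default Jest configuration
  ...getJestConfig(),
  testEnvironmentOptions: {
    'jest-playwright': {
      // run every story in all three Playwright browsers
      browsers: ['chromium', 'firefox', 'webkit'],
    },
  },
};
```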
## Running against a deployed Storybook

By default, the test runner assumes that you're running it against a locally served Storybook on port 6006.
If you want to define a target URL so it runs against deployed Storybooks, you can do so by passing the `TARGET_URL` environment variable:

```sh
TARGET_URL=https://the-storybook-url-here.com yarn test-storybook
```

Or by using the `--url` flag:

```sh
yarn test-storybook --url https://the-storybook-url-here.com
```

### Stories.json mode

By default, the test runner transforms your story files into tests. It also supports a secondary "stories.json mode", which runs directly against your Storybook's `stories.json`, a static index of all the stories.

This is particularly useful for running against a deployed Storybook, because `stories.json` is guaranteed to be in sync with the Storybook you are testing. In the default, story-file-based mode, your local story files may be out of sync, or you might not even have access to the source code. Furthermore, it is not possible to run the test runner directly against `.mdx` stories; for those, stories.json mode must be used.
To run in stories.json mode, first make sure your Storybook has a v3 stories.json file. You can navigate to:
https://your-storybook-url-here.com/stories.json
It should be a JSON file and the first key should be "v": 3 followed by a key called "stories" containing a map of story IDs to JSON objects.
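For orientation, a heavily trimmed `stories.json` might look like the snippet below; the story ID, title, and import path are made-up examples, and real files contain more entries and metadata.

```json
{
  "v": 3,
  "stories": {
    "example-button--primary": {
      "id": "example-button--primary",
      "title": "Example/Button",
      "name": "Primary",
      "importPath": "./src/stories/Button.stories.jsx"
    }
  }
}
```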
If your Storybook does not have a stories.json file, you can generate one provided:
- You are running Storybook 6.4 or above
- You are not using `storiesOf` stories
To enable `stories.json` in your Storybook, set the `buildStoriesJson` feature flag in `.storybook/main.js`:

```js
module.exports = {
  features: { buildStoriesJson: true },
};
```

Once you have a valid `stories.json` file, your Storybook will be compatible with the "stories.json mode".

By default, the test runner detects whether your Storybook URL is local or remote; if it is remote, it runs in "stories.json mode" automatically. To disable it, pass the `--no-stories-json` flag:

```sh
yarn test-storybook --no-stories-json
```

If you are running tests against a local Storybook but want to run in "stories.json mode" anyway, pass the `--stories-json` flag:

```sh
yarn test-storybook --stories-json
```

NOTE: stories.json mode is not compatible with watch mode.
## Running in CI

If you want to add the test runner to CI, there are a couple of ways to do so:

### 1. Running against deployed Storybooks on Github Actions deployment

On Github Actions, once services like Vercel, Netlify and others finish a deployment run, they emit a `deployment_status` event containing the newly generated URL under `deployment_status.target_url`. You can use that URL and set it as `TARGET_URL` for the test runner.

Here's an example of an action that runs tests based on that:
```yml
name: Storybook Tests
on: deployment_status
jobs:
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    if: github.event.deployment_status.state == 'success'
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14.x'
      - name: Install dependencies
        run: yarn
      - name: Run Storybook tests
        run: yarn test-storybook
        env:
          TARGET_URL: '${{ github.event.deployment_status.target_url }}'
```

NOTE: If you're running the test runner against a `TARGET_URL` of a remotely deployed Storybook (e.g. Chromatic), make sure that the URL loads a publicly available Storybook. Does it load correctly when opened in incognito mode in your browser? If your deployed Storybook is private and behind authentication layers, the test runner will hit them and will not be able to access your stories. If that is the case, use the next option instead.
### 2. Running against locally built Storybooks in CI

In order to build and run tests against your Storybook in CI, you might need to use a combination of commands involving the `concurrently`, `http-server` and `wait-on` libraries. Here's a recipe that does the following: Storybook is built and served locally, and once it is ready, the test runner runs against it.
```json
{
  "test-storybook:ci": "concurrently -k -s first -n \"SB,TEST\" -c \"magenta,blue\" \"yarn build-storybook --quiet && npx http-server storybook-static --port 6006 --silent\" \"wait-on tcp:6006 && yarn test-storybook\""
}
```

And then you can essentially run `test-storybook:ci` in your CI:
```yml
name: Storybook Tests
on: push
jobs:
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14.x'
      - name: Install dependencies
        run: yarn
      - name: Run Storybook tests
        run: yarn test-storybook:ci
```

NOTE: Building Storybook locally makes it simple to test Storybooks that could be available remotely but are behind authentication layers. If you also deploy your Storybooks somewhere (e.g. Chromatic, Vercel, etc.), the Storybook URL can still be useful with the test runner. Pass it to the `REFERENCE_URL` environment variable when running the `test-storybook` command, and if a story fails, the test runner will provide a helpful message with a link to the story in your published Storybook instead.
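For example, mirroring the `TARGET_URL` usage shown earlier (the URL below is a placeholder):

```sh
REFERENCE_URL=https://the-storybook-url-here.com yarn test-storybook
```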
## Setting up code coverage

The test runner supports code coverage with the `--coverage` flag or the `STORYBOOK_COLLECT_COVERAGE` environment variable. The prerequisite is that your components are instrumented using istanbul.

Given that your components' code runs in the context of a real browser, they have to be instrumented so that the test runner is able to collect coverage. You have to set up the instrumentation yourself.
Install the istanbul babel plugin:
```sh
yarn add -D babel-plugin-istanbul
```

Storybook allows code transpilation with babel out of the box by configuring the `babel` function in your `main.js`. Add the istanbul plugin:
```js
// .storybook/main.js
module.exports = {
  // ...rest of your code here
  babel: async (options) => {
    options.plugins.push([
      'istanbul',
      {
        // provide include patterns if you like
        include: ['src/components/**'],
        // provide exclude patterns if you like
        exclude: [
          '**/*.d.ts',
          '**/*{.,-}{spec,stories,types}.{js,jsx,ts,tsx}',
        ],
      },
    ]);
    return options;
  },
};
```

The babel plugin has default options that might suffice for your project; if you want to know which options are taken into account, you can check them here.
After setting up instrumentation, run Storybook, then run the test runner with `--coverage`:

```sh
yarn test-storybook --coverage
```

The test runner will report the results in the CLI and generate a `.nyc_output/coverage.json` file which can be used by `nyc`.

Notice that it provides a message telling you that you can get a better, interactive summary of your coverage by running:

```sh
npx nyc report --reporter=lcov
```

This will generate a folder called `coverage`, containing an `index.html` file which can be explored and shows the coverage in detail.

`nyc` is a dependency of the test runner, so you will already have it in your project. In the example above, the `lcov` reporter was used, which generates an output compatible with tools like Codecov. However, you can configure it to generate different reports, and you can find more information here.
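As one illustration (these are standard nyc reporters, not options specific to the test runner), you can combine several reporters in a single run:

```sh
npx nyc report --reporter=text --reporter=lcov
```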
## Experimental test hook API

The test runner renders a story and executes its play function if one exists. However, certain behaviors are not possible to achieve via the play function, which executes in the browser. For example, if you want the test runner to take visual snapshots for you, this is possible via Playwright/Jest, but must be executed in Node.
To enable use cases like visual or DOM snapshots, the test runner exports test hooks that can be overridden globally. These hooks give you access to the test lifecycle before and after the story is rendered.
There are three hooks: setup, preRender, and postRender. setup executes once before all the tests run. preRender and postRender execute within a test before and after a story is rendered.
The render functions are async functions that receive a Playwright Page and a context object with the current story id, title, and name. They are globally settable by @storybook/test-runner's setPreRender and setPostRender APIs.
All three functions can be set up in the configuration file .storybook/test-runner.js which can optionally export any of these functions.
NOTE: These test hooks are experimental and may be subject to breaking changes. We encourage you to test as much as possible within the story's play function.
The `postRender` function provides a Playwright page instance, which you can use for DOM snapshot testing:

```js
// .storybook/test-runner.js
module.exports = {
  async postRender(page, context) {
    // the #root element wraps the story
    const elementHandler = await page.$('#root');
    const innerHTML = await elementHandler.innerHTML();
    expect(innerHTML).toMatchSnapshot();
  },
};
```

Here's a slightly different recipe for image snapshot testing:
```js
// .storybook/test-runner.js
const { toMatchImageSnapshot } = require('jest-image-snapshot');

const customSnapshotsDir = `${process.cwd()}/__snapshots__`;

module.exports = {
  setup() {
    expect.extend({ toMatchImageSnapshot });
  },
  async postRender(page, context) {
    const image = await page.screenshot();
    expect(image).toMatchImageSnapshot({
      customSnapshotsDir,
      customSnapshotIdentifier: context.id,
    });
  },
};
```

There is also an exported `TestRunnerConfig` type available for TypeScript users.
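If you stay in JavaScript, one way to get editor type-checking against that type is via a JSDoc annotation; this is only a sketch of one possible setup, not an officially documented pattern:

```js
// .storybook/test-runner.js
/** @type {import('@storybook/test-runner').TestRunnerConfig} */
const config = {
  async postRender(page, context) {
    // your assertions here
  },
};

module.exports = config;
```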
To visualize the test lifecycle, consider a simplified version of the test code automatically generated for each story in your Storybook:
```js
it('button--basic', async () => {
  // filled in with data for the current story
  const context = { id: 'button--basic', title: 'Button', name: 'Basic' };

  // playwright page https://playwright.dev/docs/pages
  await page.goto(STORYBOOK_URL);

  // pre-render hook
  if (preRender) await preRender(page, context);

  // render the story and run its play function (if applicable)
  await page.execute('render', context);

  // post-render hook
  if (postRender) await postRender(page, context);
});
```

While running tests using the hooks, you might want to get information from a story, such as the parameters passed to it, or its args. The test runner provides a `getStoryContext` utility function that fetches the story context for the current story:

```js
await getStoryContext(page, context);
```

You can use it for multiple use cases; here's an example that combines the story context and accessibility testing:
```js
// .storybook/test-runner.js
const { getStoryContext } = require('@storybook/test-runner');
const { injectAxe, checkA11y } = require('axe-playwright');

module.exports = {
  async preRender(page, context) {
    await injectAxe(page);
  },
  async postRender(page, context) {
    // Get the entire context of a story, including parameters, args, argTypes, etc.
    const storyContext = await getStoryContext(page, context);

    // Do not test a11y for stories that disable a11y
    if (storyContext.parameters?.a11y?.disable) {
      return;
    }

    await checkA11y(page, '#root', {
      detailedReport: true,
      detailedReportOptions: {
        html: true,
      },
      // pass axe options defined in @storybook/addon-a11y
      axeOptions: storyContext.parameters?.a11y?.options,
    });
  },
};
```

## Troubleshooting

Jest 28 has been released, but unfortunately jest-playwright is not yet compatible with it, so the test runner is not compatible either. You are likely hitting an issue that looks like this:
```
TypeError: Jest: Got error running globalSetup
reason: Class extends value #<Object> is not a constructor or null
```

As soon as jest-playwright is compatible, the test runner will be too. Please follow this issue for updates.
By default, the test runner truncates error outputs at 1000 characters, and you can check the full output directly in Storybook, in the browser. If you do want to change that limit, however, you can do so by setting the `DEBUG_PRINT_LIMIT` environment variable to a number of your choosing, for example `DEBUG_PRINT_LIMIT=5000 yarn test-storybook`.
If your tests are timing out with `Timeout - Async callback was not invoked within the 15000 ms timeout specified by jest.setTimeout`, it might be that Playwright couldn't handle the number of stories in your project. Maybe you have a large number of stories, or your CI has a really low RAM configuration.

In either case, to fix it you should limit the number of workers that run in parallel by passing the `--maxWorkers` option to your command:
```json
{
  "test-storybook:ci": "concurrently -k -s first -n \"SB,TEST\" -c \"magenta,blue\" \"yarn build-storybook --quiet && npx http-server storybook-static --port 6006 --silent\" \"wait-on tcp:6006 && yarn test-storybook --maxWorkers=2\""
}
```

There is currently a bug in Jest which means tests cannot be on a separate drive from the project. To work around this, you will need to set the `TEMP` environment variable to a temporary folder on the same drive as your project. Here's what that would look like on GitHub Actions:
```yml
env:
  # Workaround for https://github.com/facebook/jest/issues/8536
  TEMP: ${{ runner.temp }}
```

Because the displaying of reports and the underlying Jest process are separate, the reports can't be shown in watch mode. However, the `.nyc_output/coverage.json` file is still generated, and you can show the reports by running `npx nyc report` in a separate terminal.
As the test runner is based on Playwright, depending on your CI setup you might need to use specific Docker images or other configuration. In that case, you can refer to the Playwright CI docs for more information.
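As one illustrative option (an assumption about your CI setup, not a requirement of the test runner), a GitHub Actions step can install Playwright's browser system dependencies before the tests run:

```yml
- name: Install Playwright system dependencies
  run: npx playwright install-deps
```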
## Future work

Future plans involve adding support for the following features:

- Run addon reports