Tools
We have a few testing methods within the ink ecosystem.
Each uses a different framework, so it's important to determine which kind of testing is appropriate for your use case.
The ink ecosystem contains several environments:
Our end-to-end and visual regression tests both interact with the Exposé environment.
Testing Library is the preferred choice, but many tests were written in Jest and Enzyme. Regardless of the library, unit tests are predominantly used to test non-visual component behavior and React hooks.
Tests for each component should live in a `<component name>.spec.tsx` file within the same folder as the component.
Don't reference CSS IDs or class names. CSS identifiers of Ink components are subject to change without warning. The majority of our components are now styled-components based, which generates dynamic, unique identifiers. Referring to these identifiers in other code, such as unit tests, makes that code more fragile.
Each test within a suite should test a single function or component behavior. Use selectors that resemble how users interact with the code, such as labels and content text. Read more here or here.
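Since Testing Library is the preferred choice but most existing examples use Enzyme, here is a sketch of what a user-facing query looks like. The `Button` import path, its `onClick` prop, and the "Save" label are assumptions for illustration, and `fireEvent` could equally be `userEvent`:

```tsx
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";
// Hypothetical component under test; adjust the import to your component
import { Button } from "./Button";

describe("Button", () => {
  describe("callbacks", () => {
    it("should call onClick when the user clicks the button", () => {
      // arrange
      const onClick = jest.fn();
      render(<Button onClick={onClick}>Save</Button>);

      // act: query by role and accessible name, the way a user finds it,
      // instead of by CSS ID or class name
      fireEvent.click(screen.getByRole("button", { name: "Save" }));

      // assert
      expect(onClick).toHaveBeenCalled();
    });
  });
});
```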
- Use `describe` to group tests with similar aspects (structure, behavior, callbacks, etc.)
- Use `it` as the test executable, but don't use the `test` keyword
- Use the arrange, act and assert pattern to write your unit tests
- Prefer `mount()` over `shallow()`

`Button.spec.tsx`
```tsx
describe("Button", () => {
  describe("structure", () => {
    it("should render a button tag", () => {
      const wrapper = mount(<Button />);
      expect(wrapper.find("button").length).toBe(1);
    });
  });

  describe("callbacks", () => {
    it("should call prop on click", () => {
      const props = { onClick: jest.fn() };
      const wrapper = mount(<Button {...props} />);
      wrapper.find("button").simulate("click");
      expect(props.onClick).toHaveBeenCalled();
    });
  });
});
```
To run the entire test suite:
yarn test
To see the test coverage as well as the test results:
yarn test:coverage
To run a specific component/test suite, pass a pattern to the command such as the name of a component or the path to the file you're trying to run:
yarn test <pattern>
We use Cypress
All tests are named `<component name>.cy.js` and live within the cypress folder. We use this to replicate expected user behavior, and all tests are run against the Exposé environment.
Don't reference CSS IDs or class names. CSS identifiers of Ink components are subject to change without warning. The majority of our components are now styled-components based, which generates dynamic, unique identifiers. Referring to these identifiers in other code, such as end-to-end tests, makes that code more fragile.
- Use selectors that resemble how users interact with the code, such as labels and content text
- Use `.should()` to make assertions
- Prefer `data-testid` selectors, e.g. `cy.get('table[data-testid="your-data-testid"]')`
- Use `describe` to group tests with similar aspects (structure, behavior, callbacks, etc.)
- Use `it` as the test executable, but don't use the `test` keyword

`cypress/e2e/newTable.cy.js`
```js
Cypress.config("baseUrl", `${Cypress.config().baseUrl}NewTable`);

describe("NewTable", () => {
  it("should be able to select rows", () => {
    const selector = 'table[data-testid="newtable-selectable"]';
    cy.get(selector).find('input[id="123"]').should("be.checked");
    cy.get(selector).find('label[for="123"]').click();
    cy.get(selector).find('input[id="123"]').should("not.be.checked");
    cy.get(selector).find('label[for="234"]').click();
    cy.get(selector).find('input[id="234"]').should("be.checked");
  });
});
```
Cypress tests run against the Exposé, so we must start Exposé first:
yarn start:expose
Assuming you are running Expose on port 3000, you'll open a second terminal window to run Cypress:
yarn e2e:local
If you want to run Cypress against a different port, you can modify the base URL using:
yarn e2e --config baseUrl="http://localhost:<your-port>/"
The Cypress screen will present you with some choices:
Cypress tests are live. If an error occurs, Cypress will show where the problem is. It can also automatically re-run tests if you leave it open and modify a component's code. However, this doesn't work with samples, as samples need to be scraped, so Exposé must be restarted.
Note: sometimes tests run in CircleCI yield different results, as CircleCI runs tests in "production" mode.
We use BackstopJS to collect sample snapshots from our internal testing environment, Exposé. The goal is to cover as many visual variants in each component as possible through atomic sampling allowing us to check for cascading visual effects.
We are predominantly looking for unexpected changes to the visuals
There are only two ways to generate a snapshot:
- Sample snippets
- Additional scenarios, defined in a `<component name>.backstop.json` file within the backstop-references folder

⚠️ Sample snippets only snapshot the initial state on load!
We always use the key `name` (in samples and in additional scenarios). This name must be unique per component, no exceptions! Multiple components can have a sample named "Base", but a component shouldn't have multiple samples with the same name. Names in samples and in additional scenarios (`.backstop.json`) don't conflict, as they are namespaced to their respective environment.
Technical note: Backstop uses `label`, but to align sample snippets (which already use `name`) with additional scenarios, `name` is used in both places. Read more in the script comments.
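As a sketch of the uniqueness rule (the component and sample names below are hypothetical):

```javascript
// Fine: different components may each have a sample named "Base",
// because names only need to be unique per component.
const fine = [
  { group: "Button", name: "Base", environments: ["expose"], code: `<Button/>` },
  { group: "Alert", name: "Base", environments: ["expose"], code: `<Alert/>` },
];

// Not fine: the same component reusing a sample name.
const notFine = [
  { group: "Button", name: "Base", environments: ["expose"], code: `<Button/>` },
  { group: "Button", name: "Base", environments: ["expose"], code: `<Button disabled/>` },
];
```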
Let's say our NewTable component has two samples, "Base NewTable" and "NewTable with twiddle rows":
NewTable/samples.js
```js
[
  {
    group: "NewTable",
    name: "Base NewTable",
    environments: ["expose"],
    code: `<NewTable/>`,
  },
  {
    group: "NewTable",
    name: "NewTable with twiddle rows",
    environments: ["expose"],
    code: `<NewTable/>`,
  },
]
```
However, this only snapshots the initial state. For the scenario "NewTable with twiddle rows", we want additional visual checks for expanded twiddle states. To ensure that additional scenarios correspond to the original sample, we want to reuse the original sample name but include a few words further describing the interaction.
NewTable.backstop.json
```js
[
  {
    // The original sample is named "NewTable with twiddle rows" and
    // we're testing the visual when a single row is expanded
    name: "NewTable with twiddle rows, Single row expanded",
    selectors: ["#newtable-with-twiddle-rows"],
    clickSelector: "#twiddle-table tbody > tr",
  },
  {
    // The original sample is also "NewTable with twiddle rows" but this time
    // we're checking the visual when all rows are expanded
    name: "NewTable with twiddle rows, All rows expanded",
    selectors: ["#newtable-with-twiddle-rows"],
    clickSelector: "button[data-testid='expand-all']",
  },
]
```
Unfortunately this task isn't automated so if you update a sample, please check if there are corresponding backstop scenario names that need to be updated.
Similar to how we can set different environments[]
in each sample snippet, we use the key breakpoints[]
to include snapshots at various screen sizes.
By default, we snapshot at the desktop
size. Additional options are available:
breakpoints: ['mobile', 'tablet-portrait', 'tablet-landscape', 'desktop-large', 'ultra-wide'],
Technical note: Backstop uses `viewports`, but to align sample snippets and additional scenarios, `breakpoints` is used in both places. Read more in the script comments.
We know that Autograph's "Sign here" flag hides itself in mobile only, so:
Autograph/samples.js
```js
[
  {
    group: "Autograph",
    name: "Default Autograph",
    environments: ["expose"],
    breakpoints: ["mobile", "tablet-portrait", "tablet-landscape", "desktop-large"],
    code: `<Autograph preventAutoFocus name="Tagg Palmer" id="autograph-default" />`,
  },
]
```
There are only two reasons to use a skip:
- `skipFlakyBackstop: true`: the test is flaky
- `skipInitialBackstop: true`: the initial on-load snapshot isn't needed
⚠️ If the test is flaky, file a Jira ticket so that we can circle back to fix and include the regression test. Include the name of the test, and any relevant information such as potential theories as to the cause of the flakiness.
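For the flaky case, a sample snippet might look like the following sketch. The Carousel component, its `autoPlay` prop, and the sample name are made up for illustration:

```javascript
// Hypothetical samples.js entry: skipFlakyBackstop keeps this sample out of
// the Backstop run until the flake (tracked in a Jira ticket) is fixed.
const samples = [
  {
    group: "Carousel",
    name: "Auto-playing Carousel",
    skipFlakyBackstop: true,
    environments: ["expose"],
    code: `<Carousel autoPlay />`,
  },
];
```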
When testing a component like Modal, we usually need a trigger to open the Modal such as a Button. We don't need to screenshot the Button, so we can manually add a scenario instead.
Modal/samples.js
```js
[
  {
    group: "Modal",
    name: "Base Modal",
    skipInitialBackstop: true,
    environments: ["expose"],
    code: `() => {
      const [isOpen, setOpen] = React.useState(false);
      return (
        <>
          <Button onClick={() => setOpen(true)}>Open Modal</Button>
          {isOpen && <Modal />}
        </>
      );
    }`,
  },
]
```
Modal.backstop.json
```js
[
  {
    name: "Base Modal",
    clickSelector: "button",
    // Instead of a section on the page, we want the whole screen
    selectors: ["viewport"],
  },
]
```
Here are a few options we commonly use in our scenarios:
```js
[
  {
    "name": "Component with dropdown, Active",
    "selectors": ["#base-component"],
    // Perform one or more clicks before capturing the snapshot
    "clickSelectors": ["#button1", "#button2"],
  },
  {
    "name": "Component with buttons",
    "selectors": ["#base-component"],
    // Scrolls to the given selector before taking the snapshot
    "scrollToSelector": "#component-with-buttons",
  },
  {
    "name": "Component, Hover",
    "selectors": ["#base-component"],
    // Hovers over the given selectors so the hover state can be captured
    "hoverSelectors": ["#button1", "#button2"],
  }
]
```
See Backstop's documentation for all available options
We want our snapshots to be consistent, which means we have to simulate the test to hold the state (e.g. hover, active, focus) on capture. Sometimes this requires modifications to ignore focus rings or animation delays in order to not produce flaky tests.
- Use `clickSelectors[]` to prevent focus rings. For example, all samples display a `useWindowWidth()` value in Exposé; this is currently a `small` element which we double as a way to click outside the sample.
- A `scrollToSelector` might be needed when used with `viewport` snapshots to consistently snapshot the sample at the same position on a given page.
- Use `postInteractionWait` (can be a selector or a time in milliseconds, none by default) to delay capture if a component requires multiple clicks or an animation to complete.
- Use `delay` (none by default) as a last resort.

It is rare, but sometimes we want to visually capture design issues that span multiple components. One such example is our Disabled Exposé, which exists to prevent color inconsistencies.
1. Add `DisabledExpose.tsx` to `src/Expose/extra-exposes`
2. Add `Disabled: []` to the `samples` object in `src/Expose/helpers/getSamples.ts`
3. In `src/Expose/Expose.tsx`, add the route `<Route exact path="/Disabled" component={DisabledExpose} />`
⚠️ Although the whole idea of Backstop is to test against master as the reference, the best use of local Backstop is probably just for sanity checks. This is mainly because local screenshots don't produce the same capture as the ones run in CI.
The CI also takes a while to complete, so it's better to make sure our selectors are accurate locally so we don't end up with a badger.
You must be running Exposé first:
yarn start:expose
(Optional, in a separate terminal window) To skip rebuilding each time there are sample changes:
yarn scrape watch
In a separate terminal window, run Backstop (with desired flags):
yarn backstop:test:local
The reference can actually be any branch but using master as the example:
```shell
# This command builds the Exposé for master and serves it at localhost:8000
yarn expose:build && yarn expose:serve

# Create reference snapshots
yarn backstop:reference
```
After all reference snapshots are created, you can stop the Exposé and switch to the branch you need to compare:
```shell
git checkout your_branch_name

# This command will start Exposé again, but on your_branch_name this time
yarn expose:build && yarn expose:serve

# Run against the master references we created earlier
yarn backstop:test
```
`--filter` only needs a partial match against the snapshot filename, which contains:
- the component (followed by `-`)
- the `name`
- the `viewports` values
- the source: Exposé samples (`-expose__`) or additional Backstop scenarios (`-backstop__`)

Examples (using local):
```shell
# To query all snapshots from NewTable:
yarn backstop:test:local --filter=newtable

# To run a specific sample named "NewTable with buttons":
yarn backstop:test:local --filter=newtable-with-buttons

# Run all Exposé samples of Alert only:
yarn backstop:test:local --filter=alert-expose__

# Run all additional Backstop scenarios only:
yarn backstop:test:local --filter=-backstop__

# Any sample that includes "z-index":
yarn backstop:test:local --filter=z-index
```
Valid failing tests:
Always check the report (local or on CI) and review each failing test. If you don't know why something is failing, ask for help.
Example of failing a Backstop suite:
Example of how a misspelled selector will bring up a badger:
If tests are failing due to intentional changes, you must approve the whole suite (to be used as the next reference) before merging. Run the approval in your local terminal, on your branch:
⚠️ Important! Never merge a failing test without checking
```shell
yarn backstop approve <FAILING_REPORT_URL_FROM_CI>

# Example url: https://output.circle-artifacts.com/output/job/74206be8-1b50-418b-92d9-36e8c2dbdf37/artifacts/0/backstop_data/html_report/index.html
# Using the above, it would look like this:
yarn backstop approve https://output.circle-artifacts.com/output/job/74206be8-1b50-418b-92d9-36e8c2dbdf37/artifacts/0/backstop_data/html_report/index.html
```
This generates a commit automatically and you will have to push it up to your branch for Backstop to rerun. It should pass the check. If it doesn't, there's probably a flaky test.
⚠️ It's best to check Slack in #ink-backstop to see if there's currently a merge going on.
Backstop flakes used to be more common, but checking also helps you avoid having to re-approve Backstop if someone merges a second before you (this only applies if their branch updates the Backstop reference).
Sometimes we have multiple branches that all need a Backstop approval, so to avoid merge traffic, send a message with `claim_backstop` and a bot will declare your intentions. Once the PR is merged, use `unclaim_backstop` to let others know they can proceed with approving their branch's Backstop.