# Automated Testing
## Glossary

**Confidence** - describes the degree to which passing tests guarantee that the app is working.

**Determinism** - describes how easy it is to determine where the problem is based on the failing test.

**Use Case** - a potential scenario in which a system receives external input and responds to it. It defines the interactions between a role (user or another system) and a system to achieve a goal.

**Combinatorial Explosion** - the fast growth in the number of combinations that need to be tested when multiple business rules are involved.

## Testing best practices
### Quality over quantity
Don't focus on achieving a specific code coverage percentage.
While code coverage can help us identify uncovered parts of the codebase, it doesn't guarantee high confidence.

Instead, focus on identifying the important paths of the application, especially from the user's perspective. A user can be a developer using a shared function, a person interacting with the UI, or a client using the server app's JSON API.

Write tests to cover those paths in a way that gives confidence that each path, and each separate part of the path, works as expected.

---

Flaky tests that produce inconsistent results ruin confidence in the test suite, mask real issues, and are a source of frustration. Refactoring to address the flakiness is crucial and should be a priority.

To deal adequately with flaky tests, it is important to know how to identify, fix, and prevent them:

- Common characteristics of flaky tests include inconsistency, false positives and negatives, and sensitivity to dependencies, timing, ordering, and environment.
- Typical causes of these characteristics are concurrency, timing/ordering problems, external dependencies, non-deterministic assertions, test environment instability, and poorly written test logic.
- Flaky tests can be detected by rerunning tests, running them in parallel, executing them in different environments, and analyzing test results.
- To fix flaky tests and prevent further occurrences: isolate tests, employ setup and cleanup routines, handle concurrency, configure a stable test environment, improve error handling, simplify test logic, and proactively address the typical causes above. A minimal sketch of a few of these measures follows this list.
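
Here is such a sketch in a Jest-style TypeScript test; the `createOrder` function and its module path are hypothetical:

```ts
import { createOrder } from "./orders"; // hypothetical module under test

describe("createOrder", () => {
  beforeEach(() => {
    // Pin the clock so date-dependent logic is deterministic across runs.
    jest.useFakeTimers();
    jest.setSystemTime(new Date("2024-01-15T09:00:00Z"));
  });

  afterEach(() => {
    // Clean up after every test so state cannot leak and break test ordering.
    jest.useRealTimers();
    jest.restoreAllMocks();
  });

  it("stamps the order with the current date", () => {
    // Build input inside the test instead of sharing a mutable fixture.
    const order = createOrder({ sku: "ABC-1", quantity: 2 });

    expect(order.createdAt).toEqual(new Date("2024-01-15T09:00:00Z"));
  });
});
```
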
---

### Unit Tests

It is convenient to test a specific function at the lowest level so if the logic changes, we can make minimal changes to the test suite and/or mocked data.

#### When to use
- Test a unit that implements the business logic, that's isolated from side effects such as database interaction or HTTP request processing
- Test a function or class method with multiple input-output permutations (see the sketch below)
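
For the permutation case, a table-driven test keeps the suite compact. A minimal Jest-style sketch, with a hypothetical `calculateDiscount` function:

```ts
import { calculateDiscount } from "./pricing"; // hypothetical pure function

// Each row is one input-output permutation of the business rule.
it.each<[number, boolean, number]>([
  // [orderTotal, isVip, expectedDiscount]
  [50, false, 0],
  [150, false, 10],
  [150, true, 20],
])("total=%i, vip=%p -> discount=%i", (total, isVip, expected) => {
  expect(calculateDiscount(total, isVip)).toBe(expected);
});
```
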
#### When **not** to use
- To test a unit that integrates different application layers, such as the persistence layer (database) or the HTTP layer (see "Integration Tests"), or that performs disk I/O or communicates with an external system

#### Best practices
- Unit tests should execute fast (<50ms)
- Use mocks and stubs through dependency injection (method or constructor injection)
#### Antipatterns
- Mocking infrastructure parts such as database I/O - instead, invert the control by using the `AppService`, `Command` or `Query` to integrate the unit implementing business logic with the infrastructure layer of the application
- Monkey-patching dependencies used by the unit - instead, pass the dependencies through the constructor or method, so that you can pass mocks or stubs in the test, as in the sketch below
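
A minimal sketch of the constructor-injection approach, with hypothetical `Mailer` and `InvoiceService` types:

```ts
// Hypothetical abstraction over an infrastructure concern (e-mail sending).
interface Mailer {
  send(to: string, body: string): Promise<void>;
}

// The unit under test receives its dependency through the constructor...
class InvoiceService {
  constructor(private readonly mailer: Mailer) {}

  async sendReminder(email: string): Promise<void> {
    await this.mailer.send(email, "Your invoice is overdue");
  }
}

// ...so the test can pass a stub instead of monkey-patching a module import.
it("sends a reminder to the customer", async () => {
  const sent: Array<{ to: string; body: string }> = [];
  const mailerStub: Mailer = {
    send: async (to, body) => {
      sent.push({ to, body });
    },
  };

  await new InvoiceService(mailerStub).sendReminder("ana@example.com");

  expect(sent).toEqual([
    { to: "ana@example.com", body: "Your invoice is overdue" },
  ]);
});
```
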
### Integration Tests

With these tests, we test how multiple components of the system behave together.

#### Infrastructure

Running the tests on test infrastructure should be preferred to mocking, unlike in unit tests. Ideally, a full application instance would be run, to mimic real application behavior as closely as possible.

This usually includes running the application connected to a test database, inserting fake data into it during the test setup, and doing assertions on the current state of the database. This also means integration test code should have full access to the test infrastructure for querying.

> [!NOTE]
> Regardless of whether using raw queries or the ORM, simple queries should be used to avoid introducing business logic within tests.

However, mocking can still be used when needed, for example when expecting side-effects that call third party services.

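Putting these pieces together, a minimal sketch of a database-backed integration test; the `db`, `truncateAll`, and `registerUser` names are hypothetical:

```ts
import { db, truncateAll } from "./test/db"; // hypothetical test-infrastructure helpers
import { registerUser } from "./users/register"; // hypothetical use case under test

describe("registerUser", () => {
  beforeEach(async () => {
    // Start every test from a known, empty database state.
    await truncateAll();
  });

  it("persists the new user", async () => {
    await registerUser({ email: "ana@example.com", password: "s3cret!" });

    // Assert on the current database state with a simple query,
    // keeping business logic out of the test itself.
    const result = await db.query("SELECT email FROM users WHERE email = $1", [
      "ana@example.com",
    ]);
    expect(result.rowCount).toBe(1);
  });
});
```
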
#### Entry points

Integration test entry points can vary depending on the application use cases. These include services, controllers, or the API. These options are not set in stone, and their tradeoffs should be taken into account when making a decision. For example:

- A use case that can be invoked through multiple different protocols can be tested separately from them, to avoid duplication (see the sketch below). A tradeoff in this case is the need to write some basic tests for each of the protocols.
- A use case that will always be invokable through a single protocol might benefit enough from only being tested using that protocol. E.g. an HTTP API route test might eliminate the need for a lower-level controller/service test. This would also enable testing the auth layer integration within these tests, which might not have been possible otherwise depending on the technology used.

Multiple approaches can be used within the same application depending on the requirements, to provide sufficient coverage.
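
As a sketch of the protocol-independent option, a use case exercised directly (all names hypothetical):

```ts
import { cancelSubscription } from "./subscriptions/cancel"; // hypothetical use case
import { db, insertSubscription } from "./test/db"; // hypothetical test helpers

// Because the use case is invoked directly, the same test covers it whether
// callers reach it over HTTP, gRPC, or a message queue. Each protocol then
// only needs a basic test of its own wiring.
it("marks the subscription as cancelled", async () => {
  const id = await insertSubscription({ status: "active" });

  await cancelSubscription(id);

  const result = await db.query(
    "SELECT status FROM subscriptions WHERE id = $1",
    [id]
  );
  expect(result.rows[0].status).toBe("cancelled");
});
```
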
#### Testing surface

…time decoupling logic testing and endpoint testing.

#### When to use

- To verify the API endpoint performs authentication and authorization.
- To verify user permissions for that endpoint.
- To verify that invalid input is correctly handled.
- To verify the basic business logic is handled correctly, both in expected success and failure cases.
- To verify infrastructure related side-effects, e.g. database changes or calls to third party services (see the sketch below).
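
A minimal sketch of an endpoint-level integration test covering a few of these points, assuming an Express-style `app`, supertest as the HTTP client, and hypothetical test helpers:

```ts
import request from "supertest";
import { app } from "./app"; // hypothetical application instance
import { db, createAuthToken } from "./test/db"; // hypothetical test helpers

describe("POST /articles", () => {
  it("rejects unauthenticated requests", async () => {
    await request(app).post("/articles").send({ title: "Hello" }).expect(401);
  });

  it("rejects invalid input", async () => {
    const token = await createAuthToken({ role: "author" });
    await request(app)
      .post("/articles")
      .set("Authorization", `Bearer ${token}`)
      .send({}) // missing the required "title" field
      .expect(422);
  });

  it("creates an article for an authorized user", async () => {
    const token = await createAuthToken({ role: "author" });
    await request(app)
      .post("/articles")
      .set("Authorization", `Bearer ${token}`)
      .send({ title: "Hello" })
      .expect(201);

    // Verify the infrastructure side-effect directly in the database.
    const result = await db.query("SELECT title FROM articles");
    expect(result.rowCount).toBe(1);
  });
});
```
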
#### When **not** to use
- For extensive testing of business logic permutations beyond fundamental scenarios. Integration tests carry more overhead to write compared to unit tests and can easily lead to a combinatorial explosion. Instead, unit tests should be used for thorough coverage of these permutations.
- For testing third party services. We should assume they work as expected.
#### Best practices

### API Tests

…returns the expected data. That means we write tests for the publicly available endpoints.

> [!NOTE]
> As mentioned in the Integration Tests section, the API can be the entry point to the integration tests, meaning API tests are a subtype of integration tests. However, when we talk about API tests here, we are specifically referring to the public API contract tests, which don't have access to the internals of the application.

In the cases where API routes are covered extensively with integration tests, API tests might not be needed, leaving more time for QA to focus on E2E tests.
However, in more complex architectures (e.g. integration tested microservices behind an API gateway), API tests can be very useful.
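
As an illustration, a minimal contract-style sketch: the test talks to a running instance over HTTP only, with no access to application internals (the base URL and endpoint are hypothetical):

```ts
// Unlike the integration tests above, this test has no access to the
// database or other internals; it asserts only on the public contract.
const BASE_URL = process.env.API_BASE_URL ?? "http://localhost:3000"; // hypothetical

it("GET /api/v1/articles returns the documented shape", async () => {
  const response = await fetch(`${BASE_URL}/api/v1/articles`);

  expect(response.status).toBe(200);
  const body = await response.json();
  expect(Array.isArray(body.articles)).toBe(true);
});
```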