Commit 2381429

Merge pull request #9 from ExtensionEngine/feature/automated-testing

Add automated testing page

2 parents b492d90 + 0d3898e

File tree

2 files changed: +268 additions, 0 deletions

README.md

Lines changed: 4 additions & 0 deletions

```diff
@@ -21,6 +21,10 @@
 4. [LTI - Learning Tools Interoperability Protocol](./recipes/lti.md)
 5. [CircleCI Build Guide](./recipes/circleci-build-guide.md)
 
+## Guides
+
+1. [Automated Testing](./recipes/automated-testing.md)
+
 ## 🙌 Want to contribute?
 
 We are open to all kinds of contributions. If you want to:
```

recipes/automated-testing.md

Lines changed: 264 additions & 0 deletions
# Automated Testing

## Glossary

**Confidence** - describes the degree to which passing tests guarantee that the app is working.

**Determinism** - describes how easy it is to determine where the problem is based on a failing test.

**Use Case** - a potential scenario in which a system receives external input and responds to it. It defines the interactions between a role (user or another system) and a system to achieve a goal.

**Combinatorial Explosion** - the rapid growth in the number of combinations that need to be tested when multiple business rules are involved.

## Testing best practices

### Quality over quantity

Don't focus on achieving a specific code coverage percentage.
While code coverage can help us identify uncovered parts of the codebase, it doesn't guarantee high confidence.

Instead, focus on identifying the important paths of the application, especially from the user's perspective.
A user can be a developer using a shared function, a person interacting with the UI, or a client consuming the server app's JSON API.
Write tests that cover those paths in a way that gives confidence that each path, and each separate part of the path, works as expected.

---

Flaky tests that produce inconsistent results ruin confidence in the test suite, mask real issues, and are a source of frustration. Refactoring to address flakiness is crucial and should be a priority.
To deal with flaky tests adequately, it is important to know how to identify, fix, and prevent them:
- Common characteristics of flaky tests include inconsistency, false positives and negatives, and sensitivity to dependencies, timing, ordering, and environment.
- Typical causes of these characteristics are concurrency, timing/ordering problems, external dependencies, non-deterministic assertions, test environment instability, and poorly written test logic.
- Flaky tests can be detected by rerunning tests, running them in parallel, executing them in different environments, and analyzing test results.
- To fix flaky tests and prevent further occurrences: isolate tests, employ setup and cleanup routines, handle concurrency, configure a stable test environment, improve error handling, simplify test logic, and proactively address the typical causes listed above.
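
One of the causes above, non-deterministic assertions driven by timing, can often be eliminated by injecting the clock instead of reading it directly. A minimal sketch, assuming a hypothetical `isExpired` helper (illustrative, not from any real codebase):

```javascript
// A time-dependent check; the clock is an injectable parameter that
// defaults to the real clock in production code.
function isExpired(token, now = () => Date.now()) {
  return token.expiresAt <= now();
}

// In the test, a frozen clock makes the assertion deterministic:
const frozenNow = () => 1_000_000;
console.assert(isExpired({ expiresAt: 999_999 }, frozenNow) === true);
console.assert(isExpired({ expiresAt: 1_000_001 }, frozenNow) === false);
```

The same inversion works for any non-deterministic input: random values, environment variables, or network responses.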

---

Be careful with tests that alter database state. We want to be able to run tests in parallel, so do not write tests that depend on each other. Each test should be independent of the rest of the test suite.
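
The isolation rule above can be sketched as each test building and discarding its own fixture; the in-memory `createDb` below is an illustrative stand-in for a real test database setup:

```javascript
// Each test creates a fresh fixture, so tests can run in parallel
// and in any order without affecting each other.
function createDb() {
  return { users: [] };
}

function testCreateUser() {
  const db = createDb(); // fresh state - no dependency on other tests
  db.users.push({ id: 1, name: 'Ada' });
  console.assert(db.users.length === 1);
}

function testDeleteUser() {
  const db = createDb(); // independent fixture
  db.users.push({ id: 1, name: 'Ada' });
  db.users = db.users.filter((u) => u.id !== 1);
  console.assert(db.users.length === 0);
}

testCreateUser();
testDeleteUser();
```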

---

Test for behavior, not implementation. Focus on writing tests that follow the business logic rather than the programming logic. Avoid repeating parts of the function's implementation in the test assertion; doing so tightly couples the tests to the internal implementation, and the tests will have to be fixed each time the implementation changes.
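
As an illustration, consider a hypothetical business rule: orders over 100 get a flat 20 discount. A behavior-focused test asserts on known outcomes rather than re-stating the formula:

```javascript
// Hypothetical business rule, for illustration only.
function applyDiscount(total) {
  return total > 100 ? total - 20 : total;
}

// Behavior-focused: assert the business outcome with precomputed values.
console.assert(applyDiscount(200) === 180);
console.assert(applyDiscount(50) === 50);

// Antipattern - repeating the implementation in the assertion:
//   console.assert(applyDiscount(200) === 200 - 20);
// This mirrors the formula instead of the expected outcome, so a wrong
// formula would make both the code and the test wrong together.
```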

---

Writing quality tests is hard, and it's easy to fall into common pitfalls, such as testing that the database update function actually updates the database. Start off simple; as the application grows in complexity, it will become easier to determine what should be tested more thoroughly. It is perfectly fine to have a small test suite that covers the critical code and the essentials. Small suites run faster, which means they will be run more often.

## Types of Automated Tests

There are different approaches to testing. Depending on the boundaries of the test, we can split them into the following categories:

- **Unit Tests**
- **Integration Tests**
- **API Tests**
- **E2E Tests**
- **Load/Performance Tests**
- **Visual Tests**

*Note that some people may call these tests by different names, but for Studion internal purposes, this should be considered the naming convention.*

### Unit Tests

These are the most isolated tests that we can write. They should take a specific function/service/helper/module and test its functionality. Unit tests will usually require mocked data, but since we're testing that a specific input produces a specific output, the mocked data set should be minimal.

Unit testing is recommended for functions that contain a lot of logic and/or branching. It is convenient to test a specific function at the lowest level, so if the logic changes, we can make minimal changes to the test suite and/or mocked data.

#### When to use
- To test a unit that implements business logic and is isolated from side effects such as database interaction or HTTP request processing
- To test a function or class method with multiple input-output permutations

#### When **not** to use
- To test a unit that integrates different application layers, such as the persistence layer (database) or HTTP layer (see "Integration Tests"), or that performs disk I/O or communicates with an external system

#### Best practices
- Unit tests should execute fast (<50ms)
- Use mocks and stubs through dependency injection (method or constructor injection)

#### Antipatterns
- Mocking infrastructure parts such as database I/O - instead, invert the control by using the `AppService`, `Command` or `Query` to integrate the unit implementing business logic with the infrastructure layer of the application
- Monkey-patching dependencies used by the unit - instead, pass the dependencies through the constructor or method, so that you can pass mocks or stubs in the test
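
The constructor-injection approach recommended above can be sketched as follows; `WelcomeService` and the `mailer` interface are illustrative names, not part of any real codebase:

```javascript
// The unit depends on an abstract `mailer` passed through the
// constructor, so tests can substitute a stub without monkey-patching.
class WelcomeService {
  constructor(mailer) {
    this.mailer = mailer; // injected dependency
  }
  greet(user) {
    this.mailer.send(user.email, `Welcome, ${user.name}!`);
  }
}

// In the test, a hand-rolled stub records calls instead of sending email.
const sent = [];
const mailerStub = { send: (to, body) => sent.push({ to, body }) };

new WelcomeService(mailerStub).greet({ email: 'ada@example.com', name: 'Ada' });
console.assert(sent.length === 1);
console.assert(sent[0].to === 'ada@example.com');
```

Because the dependency is explicit, the production wiring passes the real mailer while the test passes the stub, and no global state is patched.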

### Integration Tests

With these tests, we test how multiple components of the system behave together.

#### Infrastructure

Unlike in unit tests, running the tests on test infrastructure should be preferred to mocking. Ideally, a full application instance is run, to mimic real application behavior as closely as possible.
This usually includes running the application connected to a test database, inserting fake data into it during the test setup, and asserting on the current state of the database. This also means integration test code should have full access to the test infrastructure for querying.

> [!NOTE]
> Regardless of whether raw queries or the ORM are used, queries should be kept simple to avoid introducing business logic within tests.

However, mocking can still be used when needed, for example when expecting side effects that call third-party services.

#### Entry points

Integration test entry points can vary depending on the application use cases. They include services, controllers, or the API. These are not set in stone, and the use cases should be taken into account when making a decision. For example:
- A use case that can be invoked through multiple different protocols can be tested separately from them, to avoid duplication. The tradeoff in this case is the need to write some basic tests for each of the protocols.
- A use case that will always be invocable through a single protocol might benefit enough from being tested only through that protocol. E.g. an HTTP API route test might eliminate the need for a lower-level controller/service test. This would also enable testing the auth layer integration within these tests, which might not have been possible otherwise, depending on the technology used.

Multiple approaches can be used within the same application, depending on the requirements, to provide sufficient coverage.

#### Testing surface

**TODO**: do we want to write anything about mocking the DB data/seeds?

In these tests we should cover **at least** the following:
- **authorization** - make sure only logged-in users with the correct role/permissions can access the endpoint
- **success** - if we send correct data, the endpoint should return a response that contains the correct data
- **failure** - if we send incorrect data, the endpoint should handle the exception and return an appropriate error status
- **successful change** - a successful request should make the appropriate change

If the endpoint contains a lot of logic where we need to mock a lot of different inputs, it might be a good idea to cover that logic with unit tests. Unit tests require less overhead and provide better performance, while at the same time decoupling logic testing from endpoint testing.
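
The four points above can be sketched against a hypothetical `createUserHandler`; in a real suite this would be an application instance wired to a test database, but here a plain function returning `{ status, body }` stands in:

```javascript
// Illustrative endpoint handler: admin-only user creation.
function createUserHandler(req) {
  if (!req.user || req.user.role !== 'admin') return { status: 403, body: null };
  if (!req.body || !req.body.email) return { status: 400, body: null };
  const user = { id: req.db.users.length + 1, email: req.body.email };
  req.db.users.push(user); // the side effect we want to verify
  return { status: 201, body: user };
}

const db = { users: [] };
const admin = { role: 'admin' };

// authorization - anonymous users are rejected
console.assert(createUserHandler({ db }).status === 403);
// failure - invalid input returns an error status
console.assert(createUserHandler({ db, user: admin, body: {} }).status === 400);
// success - correct data returns the created resource
const res = createUserHandler({ db, user: admin, body: { email: 'a@b.com' } });
console.assert(res.status === 201 && res.body.email === 'a@b.com');
// successful change - the database state actually changed
console.assert(db.users.length === 1);
```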

#### When to use
- To verify the API endpoint performs authentication and authorization.
- To verify user permissions for that endpoint.
- To verify that invalid input is correctly handled.
- To verify the basic business logic is handled correctly, in both the expected success and failure cases.
- To verify infrastructure-related side effects, e.g. database changes or calls to third-party services.

#### When **not** to use
- For extensive testing of business logic permutations beyond fundamental scenarios. Integration tests carry more overhead to write compared to unit tests and can easily lead to a combinatorial explosion. Instead, unit tests should be used for thorough coverage of these permutations.
- For testing third-party services. We should assume they work as expected.

#### Best practices
- Test basic functionality and keep the tests simple.
- Prefer test infrastructure over mocking.
- If the tested endpoint makes database changes, verify that the changes were actually made.
- Assert that the output data is correct.

#### Antipatterns
- Aiming for a code coverage percentage. An app with 100% code coverage can still have bugs. Instead, focus on writing meaningful, quality tests.

### API Tests

With these tests, we want to make sure our API contract is valid and the API returns the expected data. That means we write tests for the publicly available endpoints.

> [!NOTE]
> As mentioned in the Integration Tests section, the API can be the entry point to integration tests, meaning API tests are a subtype of integration tests. However, when we talk about API tests here, we are specifically referring to public API contract tests, which don't have access to the internals of the application.

In cases where API routes are covered extensively with integration tests, API tests might not be needed, leaving more time for QA to focus on E2E tests.
However, in more complex architectures (e.g. integration-tested microservices behind an API gateway), API tests can be very useful.
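
A minimal sketch of a contract-shape check; the endpoint payload and expected shape below are assumptions for illustration, not a real Studion API:

```javascript
// Expected contract for a hypothetical GET /users/:id response.
const expectedShape = { id: 'number', email: 'string', role: 'string' };

// Checks that every contracted field is present with the right type.
function matchesContract(payload, shape) {
  return Object.entries(shape).every(
    ([key, type]) => typeof payload[key] === type
  );
}

const response = { id: 1, email: 'user@example.com', role: 'admin' };
console.assert(matchesContract(response, expectedShape) === true);
console.assert(matchesContract({ id: '1' }, expectedShape) === false);
```

In practice, a schema tool or the K6 `check` API would play this role; the point is that the test asserts on the contract shape, not on application internals.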

#### When to use
- To make sure the API signature is valid.

#### When **not** to use
- To test application logic.

#### Best practices
- Write these tests with tools that allow us to reuse them for performance tests (K6).

#### Antipatterns

### E2E Tests

These tests are executed within a browser environment (Playwright, Selenium, etc.). The purpose of these tests is to make sure that interacting with the application UI produces the expected result.

Usually, these tests cover a large portion of the codebase with the least amount of code.
Because of that, they can be the first tests added to an existing project that has no tests or low test coverage.

These tests should not cover all of the use cases, because they are the slowest to run. If we need to test edge cases, we should try to implement those at a lower level (integration or unit tests).

#### When to use
- To test user interaction with the application UI.

#### When **not** to use
- For data validation.

#### Best practices
- Performance is key in these tests. We want to run tests as often as possible, and good performance will allow that.
- Flaky tests should be immediately disabled and refactored. Flaky tests cause the team to ignore or bypass the test suite, so they should be dealt with right away.

#### Antipatterns

### Performance Tests

These tests reproduce a typical user scenario, simulate a group of concurrent users, and measure the server's response time and overall performance.

They are typically used to stress test the infrastructure and measure the throughput of the application. They can expose bottlenecks and identify endpoints that need optimization.

Performance tests are supposed to be run on a production-grade environment, since they test the performance of code **and** infrastructure. Keep actual users in mind when running performance tests. The best approach is to spin up a production clone and run the tests against that environment.

#### When to use
- To stress test infrastructure.
- To measure how increased traffic affects load speeds and overall app performance.

#### When **not** to use
- To test if the application works according to specs.
- To test a specific user scenario.

#### Best practices
- These tests should mimic an actual human user in terms of click frequency and page navigation.
- There should be multiple tests that cover different paths through the system, not a single performance test.

#### Antipatterns
- Running these tests locally or on an environment that doesn't match production in terms of infrastructure performance. (Tests should be developed on a local instance, but the actual measurements should be performed on the production-grade environment.)

### Visual Tests

In this type of test, the test runner navigates to a browser page, takes a screenshot, and compares future screenshots against the reference screenshot.

These tests cover a lot of ground with the least effort and can easily indicate a change in the app. The downside is that they're not very precise, and the engineer needs to spend some time determining the cause of an error.

#### When to use
- When we want to cover a broad range of features.
- When we want to increase test coverage with the least effort.
- When we want to make sure there are no changes in the UI.

#### When **not** to use
- To test a specific feature or business logic.
- To test a specific user scenario.

#### Best practices
- Have deterministic seeds so the UI always renders the same output.
- Add as many pages as possible, but keep the tests simple.
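
The deterministic-seeds practice can be sketched with a seeded PRNG (mulberry32, a well-known small generator) driving the fixture data, so the UI renders identically on every run:

```javascript
// mulberry32: a tiny seeded PRNG. The same seed always yields the
// same sequence, so seeded fixtures produce pixel-identical pages.
function mulberry32(seed) {
  return function () {
    seed |= 0;
    seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Two generators with the same seed produce the same sequence,
// so fake data generated from them renders the same UI every time.
const a = mulberry32(42);
const b = mulberry32(42);
console.assert(a() === b());
console.assert(a() === b());
```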

#### Antipatterns
