
Commit 5d83331

Merge pull request #15 from ExtensionEngine/examples/performance-testing

Performance testing - Initial Examples

2 parents 3597ad6 + be2e193

File tree

4 files changed: +159 −0 lines

README.md

Lines changed: 4 additions & 0 deletions

@@ -25,6 +25,10 @@
 1. [Automated Testing](./recipes/automated-testing.md)

+## Examples
+
+1. [Performance Testing - Antipattern Examples](./examples/performance-testing-antipattern-examples.md)
+
 ## 🙌 Want to contribute?

 We are open to all kinds of contributions. If you want to:
assets/images/examples/ramping-vus.png: 102 KB (binary image)
examples/performance-testing-antipattern-examples.md

Lines changed: 153 additions & 0 deletions

# Performance Testing - Antipattern Examples

## Antipattern 1: Ignoring Think Time

Excluding think time between user actions can produce unrealistic performance metrics for certain types of tests, such as average, stress, and soak tests. Think time is less critical for breakpoint and spike tests, since other parameters control those scenarios effectively. Incorporating think time is crucial when simulating real user behavior based on user scenarios. In the example below, user actions execute without any delay, which does not reflect real-world conditions for this type of test.

```javascript
import http from 'k6/http';

export default function () {
  http.get('http://example.com/api/resource1');
  http.get('http://example.com/api/resource2');
  http.get('http://example.com/api/resource3');
}
```

### Solution

Introduce think time between user actions to simulate real user behavior. This example adds a random delay of 1 to 5 seconds between requests; the wider the range, the more varied (and realistic) the simulated pacing.

```javascript
import http from 'k6/http';
import { sleep } from 'k6';
import { randomIntBetween } from 'https://jslib.k6.io/k6-utils/1.4.0/index.js';

export default function () {
  http.get('http://example.com/api/resource1');
  sleep(randomIntBetween(1, 5)); // pause 1–5 s, like a real user reading the page
  http.get('http://example.com/api/resource2');
  sleep(randomIntBetween(1, 5));
  http.get('http://example.com/api/resource3');
}
```
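
For reference, `randomIntBetween` from k6-utils returns a uniformly distributed integer in the inclusive range `[min, max]`. A plain-JavaScript sketch of roughly equivalent behavior (runnable outside k6) looks like this:

```javascript
// Approximate plain-JavaScript equivalent of k6-utils' randomIntBetween:
// a uniform random integer in the inclusive range [min, max].
function randomIntBetween(min, max) {
  return Math.floor(Math.random() * (max - min + 1) + min);
}

// Sample a few think times to see the spread.
const samples = Array.from({ length: 5 }, () => randomIntBetween(1, 5));
console.log(samples); // five integers, each between 1 and 5 inclusive
```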

## Antipattern 2: Lack of Data Variation

Using static, hardcoded data for requests lets caching mechanisms serve repeated responses, producing artificially high performance metrics. In this example, the same username is sent with every request, which rarely represents real-world scenarios.

```javascript
import http from 'k6/http';

export default function () {
  const payload = JSON.stringify({
    username: 'username', // static username used in every request
    password: 'password',
  });

  http.post('http://example.com/api/login', payload);
}
```

### Solution

Use dynamic data or randomization to simulate different user scenarios. This example derives a unique username for each virtual user.

```javascript
import http from 'k6/http';
import exec from 'k6/execution';

export default function () {
  const payload = JSON.stringify({
    // exec.vu.idInTest is a unique identifier for each virtual user,
    // which guarantees every username is unique
    username: `username${exec.vu.idInTest}`,
    password: 'password',
  });

  http.post('http://example.com/api/login', payload);
}
```
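
Another common way to vary data is to cycle through a pool of test records. The sketch below is plain JavaScript with a hypothetical hardcoded pool (in a real k6 test the pool would typically be loaded from a CSV/JSON fixture, e.g. via `SharedArray` from `k6/data`):

```javascript
// Hypothetical credential pool; in a real test this would come from a fixture file.
const users = [
  { username: 'alice', password: 'pw-a' },
  { username: 'bob', password: 'pw-b' },
  { username: 'carol', password: 'pw-c' },
];

// Pick a different user per iteration so consecutive logins hit different data
// and cannot all be served from the same cache entry.
function userForIteration(iteration) {
  return users[iteration % users.length];
}

console.log(userForIteration(0).username); // alice
console.log(userForIteration(4).username); // bob
```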

## Antipattern 3: Not Scaling Virtual Users

Running performance tests with an unrealistic number of virtual users, or ramping up too quickly, can lead to inaccurate results. In this example, the test starts with 1000 VUs immediately.

```javascript
import http from 'k6/http';

export const options = {
  vus: 1000, // all 1000 VUs start at once, with no ramp-up
  duration: '1m',
};

export default function () {
  http.get('http://example.com/api/resource');
}
```

### Solution

Executors control how k6 schedules VUs and iterations. Which executor to choose depends on the goals of your test and the type of traffic you want to model. The `ramping-vus` executor, for example, increases the number of VUs gradually over specified stages, allowing more realistic load testing for specific test types.

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  discardResponseBodies: true,
  scenarios: {
    contacts: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '20s', target: 10 }, // ramp up to 10 VUs
        { duration: '10s', target: 0 },  // ramp back down to 0
      ],
      gracefulRampDown: '0s',
    },
  },
};

export default function () {
  http.get('http://example.com/api/resource');
  // Injecting sleep.
  // Sleep time is 500 ms; total iteration time is sleep + time to finish the request.
  sleep(0.5);
}
```

Based on the test scenario inputs and results:

- The configuration defines 2 stages for a total test duration of 30 seconds.
- Stage 1 ramps VUs linearly from 0 to the target of 10 over a 20-second duration.
- From the 10 VUs at the end of stage 1, stage 2 ramps VUs linearly down to the target of 0 over a 10-second duration.
- Each iteration of the default function is expected to take roughly 515 ms, i.e. about 2 iterations/s per VU.
- The iteration rate correlates directly with the number of VUs: each added VU increases the rate by about 2 iterations/s, and each removed VU reduces it by the same amount.
- The example performed ~300 iterations over the course of the test.
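
The ~300-iteration figure can be sanity-checked with a quick back-of-the-envelope calculation (plain JavaScript, runnable outside k6; it assumes perfectly linear ramps and a steady ~2 iterations/s per VU):

```javascript
// Rough sanity check of the expected iteration count for the ramping-vus
// scenario above. Assumes perfectly linear ramps and a constant rate of
// ~2 iterations/s per VU (≈515 ms per iteration).
const iterationsPerVuPerSecond = 2;
const stages = [
  { durationSec: 20, startVus: 0, endVus: 10 }, // stage 1: ramp up
  { durationSec: 10, startVus: 10, endVus: 0 }, // stage 2: ramp down
];

// With a linear ramp, the average VU count over a stage is the mean of its
// endpoints, so each stage contributes avgVus * rate * duration iterations.
const expectedIterations = stages.reduce(
  (total, { durationSec, startVus, endVus }) =>
    total + ((startVus + endVus) / 2) * iterationsPerVuPerSecond * durationSec,
  0
);

console.log(expectedIterations); // 200 + 100 = 300, matching the ~300 observed
```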

#### Chart representation of the test execution

![ramping-vus execution chart](../assets/images/examples/ramping-vus.png)

## Glossary

### **VU**

- Virtual User.

### **Think Time**

- The amount of time a script pauses during test execution to replicate the delays real users introduce while using an application.

### **Iteration**

- A single execution of the default function in a k6 script.

### **Average Test**

- Assesses how the system performs under typical load for your system or application. Typical load might be a regular day in production or an average timeframe in your daily traffic.

### **Stress Test**

- Helps you discover how the system functions under peak-traffic load.

### **Spike Test**

- Verifies whether the system survives and performs under sudden, massive rushes of utilization.

### **Breakpoint Test**

- Discovers your system’s limits.

### **Soak Test**

- A variation of the average-load test; the main difference is the test duration. In a soak test, the peak load is usually an average load, but it is sustained for several hours or even days.

recipes/automated-testing.md

Lines changed: 2 additions & 0 deletions

@@ -271,6 +271,8 @@ including varying user inputs and dataset sizes.
 behavior, including realistic click frequency, page navigation patterns, and
 input actions.

+#### [Antipattern Examples](/examples/performance-testing-antipattern-examples.md)
+
 ### Visual Tests

 The type of test where test runner navigates to browser page, takes snapshot and
