From 403a63896a595a4ffc56eb678171810781122192 Mon Sep 17 00:00:00 2001
From: pranjalg1331
Date: Wed, 6 Aug 2025 18:36:10 +0530
Subject: [PATCH 1/5] updated docs for unit tests

Signed-off-by: pranjalg1331
---
 docs/IntelOwl/contribute.md | 29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/docs/IntelOwl/contribute.md b/docs/IntelOwl/contribute.md
index 543dbc6..6afb23c 100644
--- a/docs/IntelOwl/contribute.md
+++ b/docs/IntelOwl/contribute.md
@@ -4,7 +4,6 @@ There are a lot of different ways you could choose to contribute to the IntelOwl
 
 - main repository: [IntelOwl](https://github.com/intelowlproject/IntelOwl)
 
-- official Python client: [pyintelowl](https://github.com/intelowlproject/pyintelowl).
 
 - official GO client: [go-intelowl](https://github.com/intelowlproject/go-intelowl).
 
@@ -211,7 +210,33 @@ You may want to look at a few existing examples to start to build a new one, suc
 After having written the new python module, you have to remember to:
 
 1. Put the module in the `file_analyzers` or `observable_analyzers` directory based on what it can analyze
-2. Remember to use `_monkeypatch()` in its class to create automated tests for the new analyzer. This is a trick to have tests in the same class of its analyzer.
+2. **Write Unit Tests for the Analyzer**
+   - Monkeypatch-based tests are no longer used.
+   - Instead, write a unit test class that inherits from `BaseAnalyzerTest` located at:
+     ```
+     tests/api_app/analyzers_manager/unit_tests/[observable_analyzers|file_analyzers]/base_test_class.py
+     ```
+   - Place your test file under the appropriate directory (`observable_analyzers` or `file_analyzers`).
+   - Example structure:
+     ```
+     tests/
+     └── api_app/
+         └── analyzers_manager/
+             └── unit_tests/
+                 ├── observable_analyzers/
+                 │   ├── test_mynewanalyzer.py
+                 │   └── base_test_class.py
+                 └── file_analyzers/
+                     ├── ...
+     ```
+
+   - Your test case should:
+     - Set `analyzer_class = YourAnalyzerClass`
+     - Implement `get_mocked_response()` using `unittest.mock.patch` to mock external calls
+     - Optionally implement `get_extra_config()` to provide additional runtime config
+
+   - **For reference**, you can find numerous analyzer test examples already implemented under `tests/api_app/analyzers_manager/unit_tests/observable_analyzers/` and `file_analyzers/`.
+
 3. Create the configuration inside django admin in `Analyzers_manager/AnalyzerConfigs` (\* = mandatory, ~ = mandatory on conditions)
    1. \*Name: specific name of the configuration
    2. \*Python module: .

From 44449575ade9b8479076850b268f931c7fb3d5f5 Mon Sep 17 00:00:00 2001
From: pranjalg1331
Date: Wed, 6 Aug 2025 18:38:02 +0530
Subject: [PATCH 2/5] correct error

Signed-off-by: pranjalg1331
---
 docs/IntelOwl/contribute.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/IntelOwl/contribute.md b/docs/IntelOwl/contribute.md
index 6afb23c..03527d6 100644
--- a/docs/IntelOwl/contribute.md
+++ b/docs/IntelOwl/contribute.md
@@ -4,6 +4,7 @@ There are a lot of different ways you could choose to contribute to the IntelOwl
 
 - main repository: [IntelOwl](https://github.com/intelowlproject/IntelOwl)
 
+- official Python client: [pyintelowl](https://github.com/intelowlproject/pyintelowl).
 
 - official GO client: [go-intelowl](https://github.com/intelowlproject/go-intelowl).

From 5d8c9906d889ea38d86eebb96f33de6e75091d12 Mon Sep 17 00:00:00 2001
From: pranjalg1331
Date: Sun, 17 Aug 2025 16:49:12 +0530
Subject: [PATCH 3/5] more detailed

Signed-off-by: pranjalg1331
---
 docs/IntelOwl/contribute.md | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/docs/IntelOwl/contribute.md b/docs/IntelOwl/contribute.md
index 03527d6..1f3f9a1 100644
--- a/docs/IntelOwl/contribute.md
+++ b/docs/IntelOwl/contribute.md
@@ -213,7 +213,9 @@ After having written the new python module, you have to remember to:
 1. Put the module in the `file_analyzers` or `observable_analyzers` directory based on what it can analyze
 2. **Write Unit Tests for the Analyzer**
    - Monkeypatch-based tests are no longer used.
-   - Instead, write a unit test class that inherits from `BaseAnalyzerTest` located at:
+   - All analyzers are now tested using pure unit tests that focus only on the business logic. This ensures that our tests are fast, deterministic, and independent of external services.
+   - To make writing tests easier, base test classes are available for both observable analyzers and file analyzers. These base classes handle all common setup and mocking, so you only need to define analyzer-specific behavior.
+   - The base class `BaseAnalyzerTest` is located at:
      ```
      tests/api_app/analyzers_manager/unit_tests/[observable_analyzers|file_analyzers]/base_test_class.py
      ```
@@ -232,12 +234,17 @@ After having written the new python module, you have to remember to:
      ```
 
    - Your test case should:
-     - Set `analyzer_class = YourAnalyzerClass`
-     - Implement `get_mocked_response()` using `unittest.mock.patch` to mock external calls
-     - Optionally implement `get_extra_config()` to provide additional runtime config
+     - Each analyzer test class should inherit from the base test class provided in the same directory.
+     - Define which analyzer is under test by setting `analyzer_class = YourAnalyzerClass`.
+     - Override `get_mocked_response()` to simulate the data your analyzer would normally produce. All external dependencies (e.g., API calls, file I/O, subprocesses) must be mocked; no real external calls should happen.
+       This method should return a list of all applied patches, ensuring that every external call used by the analyzer is properly mocked.
+     - Optionally override `get_extra_config()` to provide additional runtime configuration that is not already defined inside the base test class.
 
    - **For reference**, you can find numerous analyzer test examples already implemented under `tests/api_app/analyzers_manager/unit_tests/observable_analyzers/` and `file_analyzers/`.
 
+
+   > Note: If your analyzer is Docker-based, you can refer to `tests/api_app/analyzers_manager/unit_tests/file_analyzers/test_suricate.py` for an example of how such analyzers are tested.
+
 3. Create the configuration inside django admin in `Analyzers_manager/AnalyzerConfigs` (\* = mandatory, ~ = mandatory on conditions)
    1. \*Name: specific name of the configuration
    2. \*Python module: .

From 9e582483d8e42d68ae07ef065ce72366c82f64de Mon Sep 17 00:00:00 2001
From: pranjalg1331
Date: Wed, 20 Aug 2025 21:18:56 +0530
Subject: [PATCH 4/5] update

Signed-off-by: pranjalg1331
---
 docs/IntelOwl/contribute.md | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/docs/IntelOwl/contribute.md b/docs/IntelOwl/contribute.md
index 1f3f9a1..8c650ad 100644
--- a/docs/IntelOwl/contribute.md
+++ b/docs/IntelOwl/contribute.md
@@ -661,11 +661,11 @@ Follow these guides to understand how to start to contribute to them while devel
 
 ## How to test the application
 
-IntelOwl makes use of the django testing framework and the `unittest` library for unit testing of the API endpoints and End-to-End testing of the analyzers and connectors.
+IntelOwl makes use of the django testing framework and the `unittest` library for unit testing of the API endpoints and End-to-End testing of the analyzers and connectors.
 
 ### Configuration
 
-- In the encrypted folder `tests/test_files.zip` (password: "intelowl") there are some files that you can use for testing purposes.
+- In the encrypted folder `tests/test_files.zip` (password: "intelowl") there are some files that you can use for testing purposes. The `async_tests/` directory mainly contains transactional test cases, which are run separately from the other unit tests.
 - With the following environment variables you can customize your tests:
 
@@ -711,6 +711,17 @@ To test a plugin in real environment, i.e. without mocked data, we suggest that
 
 Meaning that you have your plugin configured, you have selected a correct observable/file to analyze, and the final report shown in the GUI of IntelOwl is exactly what you wanted.
 
+### Running Tests for a Specific Analyzer
+
+To test a particular analyzer, locate its corresponding unit test file inside `tests/api_app/analyzers_manager/unit_tests/[observable_analyzers|file_analyzers]`.
+
+
+Once you’ve identified the test file, you can run it individually with:
+
+```bash
+docker exec -ti intelowl_uwsgi python manage.py test tests.api_app.analyzers_manager.unit_tests.observable_analyzers.
+```
+
 ##### Run tests available in a particular file
 
 Examples:

From 5dd13e58dcd900561140cf4f8b0caaf4ac7ed06d Mon Sep 17 00:00:00 2001
From: pranjalg1331
Date: Wed, 20 Aug 2025 23:52:50 +0530
Subject: [PATCH 5/5] correct heading

Signed-off-by: pranjalg1331
---
 docs/IntelOwl/contribute.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/IntelOwl/contribute.md b/docs/IntelOwl/contribute.md
index 8c650ad..b880f92 100644
--- a/docs/IntelOwl/contribute.md
+++ b/docs/IntelOwl/contribute.md
@@ -711,7 +711,7 @@ To test a plugin in real environment, i.e. without mocked data, we suggest that
 
 Meaning that you have your plugin configured, you have selected a correct observable/file to analyze, and the final report shown in the GUI of IntelOwl is exactly what you wanted.
 
-### Running Tests for a Specific Analyzer
+##### Running Tests for a Specific Analyzer
 
 To test a particular analyzer, locate its corresponding unit test file inside `tests/api_app/analyzers_manager/unit_tests/[observable_analyzers|file_analyzers]`.
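The test-class contract that PATCH 3 describes (set `analyzer_class`, return a list of patches from `get_mocked_response()`, optionally override `get_extra_config()`) can be sketched as follows. Everything here is an illustrative assumption, not the repository's actual API: `MyAnalyzer`, its `urllib` lookup, and the stand-in `BaseAnalyzerTest` are made up for demonstration, while the real base class lives in `base_test_class.py` and handles far more setup.

```python
# Illustrative sketch of the analyzer unit-test pattern, under the
# assumptions stated above. No real network call happens: every external
# dependency is replaced by a patch returned from get_mocked_response().
import json
import unittest
import urllib.request
from unittest.mock import MagicMock, patch


class MyAnalyzer:
    """Hypothetical observable analyzer that queries an external API."""

    def run(self, observable: str) -> dict:
        # External call: in unit tests this must never hit the network.
        url = f"https://api.example.com/lookup/{observable}"
        with urllib.request.urlopen(url) as resp:
            data = json.loads(resp.read())
        return {"malicious": data["malicious"]}


class BaseAnalyzerTest(unittest.TestCase):
    """Minimal stand-in for the repo's real base test class."""

    analyzer_class = None  # subclasses set the analyzer under test

    def get_mocked_response(self):
        # Subclasses return a list of patches covering every external call.
        return []

    def get_extra_config(self):
        # Optional extra runtime configuration for the analyzer.
        return {}

    def setUp(self):
        # Start every patch returned by the subclass; stop it on teardown.
        for p in self.get_mocked_response():
            p.start()
            self.addCleanup(p.stop)


class TestMyAnalyzer(BaseAnalyzerTest):
    analyzer_class = MyAnalyzer

    def get_mocked_response(self):
        body = MagicMock()
        body.read.return_value = b'{"malicious": true}'
        cm = MagicMock()  # context manager returned by the patched urlopen
        cm.__enter__.return_value = body
        return [patch("urllib.request.urlopen", return_value=cm)]

    def test_run(self):
        report = self.analyzer_class().run("evil.example.com")
        self.assertEqual(report, {"malicious": True})
```

In the real repository, a test like this would be run with the `docker exec ... python manage.py test ...` command shown in PATCH 4.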