
Code Quality Benchmark


To generate a benchmark, we executed softwipe on a collection of programs, most of which are bioinformatics tools from the area of evolutionary biology. Some of the tools below (genesis, raxml-ng, repeatscounter, hyperphylo) were developed in our lab. You will find a table containing the code quality scores below. Note that the scores are subject to change, as we are refining our scoring criteria and including more tools.

Softwipe scores for each category are assigned such that the "best" non-outlier program in each category obtains a score of 10 out of 10, and the "worst" non-outlier program in each category is assigned a score of 0 out of 10. An outlier is defined as a value that lies outside of Tukey's fences.
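The following Python snippet is a minimal sketch of this scheme, not softwipe's actual code; the function names tukey_fences and score are hypothetical, and we assume a lower-is-better rate (e.g. warnings per LOC):

```python
import statistics

def tukey_fences(values, k=1.5):
    """Return the (lower, upper) Tukey fences for a list of rates."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def score(value, values, lower_is_better=True):
    """Map a rate to a 0-10 score relative to the non-outlier best/worst."""
    lo, hi = tukey_fences(values)
    inliers = [v for v in values if lo <= v <= hi]
    if lower_is_better:
        best, worst = min(inliers), max(inliers)
    else:
        best, worst = max(inliers), min(inliers)
    if best == worst:
        return 10.0
    s = 10.0 * (value - worst) / (best - worst)
    return max(0.0, min(10.0, s))  # outliers are clamped to [0, 10]
```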

All code quality categories use relative scores. For instance, we calculate the number of compiler warnings per total Lines Of Code (LOC). These relative scores allow us to compare and rank the different programs in our benchmark. The overall score used for the ranking is simply the average over all category scores. You can find a detailed description of the scoring categories and the tools included in our benchmark below.
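As an illustration (again with hypothetical helper names, not softwipe's API), a category rate and the overall score could be computed as:

```python
def warning_rate(warning_count, total_loc):
    """Relative rate for a category, e.g. compiler warnings per total LOC."""
    return warning_count / total_loc

def overall_score(category_scores):
    """Overall ranking score: the plain average over all category scores."""
    return sum(category_scores) / len(category_scores)
```

For example, averaging the eight genesis category scores from the table below (8.4, 10.0, 8.2, 9.1, 10.0, 9.6, 8.9, 7.9) gives roughly 9.01, matching its rounded overall score of 9.0.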

| program | overall | compiler_and_sanitizer | assertions | cppcheck | clang_tidy | cyclomatic_complexity | lizard_warnings | unique_rate | kwstyle |
|---|---|---|---|---|---|---|---|---|---|
| genesis | 9.0 | 8.4 | 10.0 | 8.2 | 9.1 | 10.0 | 9.6 | 8.9 | 7.9 |
| hyperphylo | 8.9 | 8.9 | 10.0 | 8.8 | 8.3 | 9.7 | 9.4 | 8.4 | 7.3 |
| kahypar | 8.7 | 7.1 | 10.0 | 8.5 | 9.2 | 9.8 | 9.5 | 5.6 | 10.0 |
| repeatscounter | 7.9 | 9.7 | 0.0 | 10.0 | 9.7 | 9.8 | 10.0 | 4.3 | 9.9 |
| raxml-ng | 7.9 | 10.0 | 8.4 | 6.0 | 9.1 | 8.0 | 7.6 | 4.4 | 9.3 |
| dawg | 7.3 | 7.0 | 4.6 | 7.5 | 9.7 | 9.8 | 9.1 | 2.6 | 8.4 |
| phyml | 6.4 | 9.4 | 9.7 | 4.6 | 10.0 | 4.9 | 5.4 | 6.0 | 1.5 |
| swarm | 6.4 | 3.8 | 0.0 | 8.9 | 10.0 | 7.9 | 7.9 | 3.1 | 9.9 |
| minimap | 6.1 | 3.2 | 5.1 | 5.3 | 10.0 | 6.3 | 6.8 | 8.4 | 3.3 |
| samtools | 5.9 | 8.1 | 2.5 | 7.0 | 6.0 | 4.9 | 5.4 | 8.9 | 4.7 |
| sf | 5.9 | 7.3 | 2.1 | 4.7 | 8.0 | 4.5 | 4.3 | 8.9 | 7.0 |
| clustal | 5.7 | 5.9 | 6.1 | 6.7 | 10.0 | 4.5 | 5.3 | 5.7 | 1.9 |
| seq-gen | 5.7 | 8.1 | 0.0 | 6.8 | 8.0 | 6.0 | 6.9 | 10.0 | 0.0 |
| prank | 5.2 | 4.5 | 10.0 | 0.0 | 7.1 | 7.2 | 7.8 | 0.9 | 4.4 |
| vsearch | 5.0 | 4.6 | 0.0 | 8.3 | 0.0 | 5.4 | 6.1 | 5.8 | 9.8 |
| iqtree | 4.8 | 0.0 | 5.1 | 4.5 | 2.2 | 8.4 | 8.3 | 5.1 | 5.2 |
| tcoffee | 4.4 | 5.1 | 0.0 | 5.7 | 7.9 | 4.9 | 5.8 | 4.6 | 1.0 |
| mrbayes | 4.3 | 9.8 | 3.0 | 8.2 | 7.1 | 0.0 | 0.7 | 3.5 | 2.5 |
| gadget | 4.3 | 7.8 | 0.0 | 6.9 | 4.7 | 0.0 | 0.0 | 5.7 | 9.1 |
| ms | 4.2 | 6.8 | 0.0 | 0.0 | 5.0 | 6.4 | 6.9 | 8.6 | 0.0 |
| bpp | 3.8 | 9.1 | 0.0 | 1.7 | 7.2 | 0.0 | 1.1 | 8.2 | 2.9 |
| mafft | 3.5 | 8.6 | 0.0 | 6.5 | 4.9 | 0.0 | 2.3 | 0.0 | 5.3 |
| athena | 3.3 | 4.2 | 0.1 | 0.0 | 3.8 | 5.2 | 5.2 | 0.6 | 7.7 |
| indelible | 1.7 | 0.0 | 0.0 | 0.3 | 2.6 | 0.2 | 3.4 | 7.4 | 0.0 |

Tools included

Bioinformatics-related tools:

  • indelible 1.03: simulates sequence data on phylogenetic trees (paper)
  • ms: population genetics simulations (paper)
  • mafft 7.429: multiple sequence alignment (paper)
  • mrbayes 3.2.6: Bayesian phylogenetic inference (paper)
  • bpp 3.4: multispecies coalescent analyses (paper)
  • tcoffee: multiple sequence alignment (paper)
  • prank 0.170427: multiple sequence alignment (paper)
  • sf (SweepFinder): population genetics (paper)
  • seq-gen 1.3.4: phylogenetic sequence evolution simulation (paper)
  • dawg 1.2: phylogenetic sequence evolution simulation (github)
  • repeatscounter: evaluates the quality of a data distribution for phylogenetic inference (github)
  • raxml-ng 0.8.1: phylogenetic inference (paper)
  • genesis 0.22.1: phylogeny library (github)
  • minimap 2.17-r943: pairwise sequence alignment (paper)
  • Clustal Omega 1.2.4: multiple sequence alignment (paper)
  • samtools 1.9: utilities for processing SAM (Sequence Alignment/Map) files (paper)
  • vsearch 2.13.4: metagenomics functions (paper, github)
  • swarm 2.2.2: amplicon clustering (paper, github)
  • phyml 3.3.20190321: phylogenetic inference (paper)
  • IQ-TREE 1.6.10: phylogenetic inference (paper)
  • HyperPhylo: judicious hypergraph partitioning, for creating a data distribution for phylogenetic inference (paper)

Other tools:

  • KaHyPar: hypergraph partitioning tool (website)
  • Athena++: magnetohydrodynamics (paper)
  • Gadget 2: simulations of cosmological structure formation (paper)

Scoring categories

  • compiler and sanitizer: We compile each benchmark tool with clang, with almost all warnings enabled, and count the warnings. Each warning is weighted 1, 2, or 3, where 3 is most dangerous (for instance, implicit type conversions that might result in precision loss are level 3 warnings); each warning that occurs during compilation adds its level to a weighted sum. Additionally, we execute the tool with the clang sanitizers ASan and UBSan; any sanitizer findings are added to the weighted sum, defaulting to level 3. The compiler and sanitizer score is then calculated from the weighted sum of warnings per total LOC (see the sketch after this list).
  • assertions: The count of assertions (C-style assert(), static_assert(), or custom assert macros, if defined) per total LOC.
  • cppcheck: The count of warnings found by the static code analyzer cppcheck per total LOC. Cppcheck categorizes its warnings; we have assigned each category a weight, similarly to the compiler warnings.
  • clang-tidy: The count of warnings found by the static code analyzer clang-tidy per total LOC. Clang-tidy categorizes its warnings; we have assigned each category a weight, similarly to the cppcheck and compiler warnings.
  • cyclomatic complexity: Cyclomatic complexity is a software metric that quantifies the complexity/modularity of a program (see the Wikipedia article on cyclomatic complexity). We use the lizard tool to assess the cyclomatic complexity of our benchmark tools. Keep in mind that the above table does not contain the raw cyclomatic complexity values, but the scores, which rate all tools relative to each other regarding their cyclomatic complexity.
  • lizard warnings: The number of functions that are considered too complex, relative to the total number of functions. Lizard counts a function as "too complex" if its cyclomatic complexity, its length, or its parameter count exceeds a certain threshold value.
  • unique rate: The proportion of unique (non-duplicated) code; a higher amount of code duplication decreases this value. The unique rate is obtained using lizard.
  • kwstyle: The count of warnings found by the static code style analyzer KWStyle per total LOC. We configure KWStyle using the KWStyle.xml file that is delivered with softwipe.
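As a rough sketch of how such a weighted sum per LOC can be computed for the compiler/sanitizer, cppcheck, and clang-tidy categories (the weights, warning names, and function names here are illustrative assumptions, not softwipe's actual configuration):

```python
WARNING_WEIGHTS = {
    "implicit-conversion": 3,  # e.g. possible precision loss: most dangerous
    "unused-variable": 1,
}
SANITIZER_WEIGHT = 3  # ASan/UBSan findings default to level 3

def weighted_warning_rate(compiler_warnings, sanitizer_findings, total_loc):
    """Weighted sum of warnings per total LOC, as described above."""
    weighted = sum(WARNING_WEIGHTS.get(w, 1) for w in compiler_warnings)
    weighted += SANITIZER_WEIGHT * len(sanitizer_findings)
    return weighted / total_loc
```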

How to create the benchmark

To calculate this benchmark, the results of all softwipe runs must be saved in a results directory that contains one subdirectory for each tool to be included in the benchmark. Most importantly, for each tool, the output of softwipe must be saved in a file called "softwipe_output.txt" that lies in the corresponding subdirectory for that tool. For example, the directory structure has to look like this:

results/
results/tool1/
results/tool1/softwipe_output.txt
results/tool2/
results/tool2/softwipe_output.txt
...

Then, the script calculate_score_table.py can be used to parse all the softwipe output files and generate a CSV file that contains all scores. The script requires the path to the results directory (results/ in our example). The script contains a list called FOLDERS holding the names of all subdirectories to be included in the benchmark (tool1, tool2, etc. in our example). To add a tool to the benchmark or remove one from it, edit this list.
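The following is a hypothetical sketch of this collection step (the actual parsing logic lives in calculate_score_table.py; the function collect_outputs and its behavior are assumptions for illustration):

```python
import os

FOLDERS = ["tool1", "tool2"]  # names of the subdirectories to include

def collect_outputs(results_dir):
    """Yield (tool, path) pairs for each expected softwipe_output.txt file."""
    for tool in FOLDERS:
        path = os.path.join(results_dir, tool, "softwipe_output.txt")
        if os.path.isfile(path):
            yield tool, path
        else:
            print("warning: missing output file: " + path)
```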

The script recalculates all scores from the underlying rates rather than parsing the scores directly, so that softwipe does not need to be rerun for all tools whenever the scoring functions change. The script simply reuses softwipe's scoring functions from scoring.py. These scoring functions use the values calculated by the compare_results.py script, i.e., the best and worst values that are not outliers, as mentioned above.
