I am trying to find the best way to analyze my project with cppcheck.
$ cppcheck --version
Cppcheck 2.17.1
I was benchmarking the same cppcheck command targeting:
- the root of the project source tree (.)
- a compilation database generated from the project source tree (--project=compile_commands.json)
with otherwise the same command line:
In both cases, the same number of source files is analyzed (~50k).
The run with the project source tree takes ~25min while the run with the compilation database takes ~3h.
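The thread never shows the exact command line, but the two benchmarked runs presumably looked something like the sketch below. `--project`, `-j`, and `--xml` are real cppcheck options (`--xml` writes the report to stderr, hence the `2>` redirect); the specific values of `--enable` and `-j8` are assumptions, not the poster's actual flags.

```shell
# Hypothetical reconstruction of the two benchmarked runs; the exact
# flags were not given in the thread, so --enable=all and -j8 are made up.

# Run 1: point cppcheck at the root of the source tree.
run1='cppcheck --enable=all -j8 --xml . 2> cppcheck.xml'

# Run 2: same options, but drive the analysis from the compilation database.
run2='cppcheck --enable=all -j8 --xml --project=compile_commands.json 2> cppcheck.xml'

# Shown rather than executed here, since each analysis takes minutes to hours.
echo "$run1"
echo "$run2"
```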
Some observations:
Source tree run:
- Includes/Undefines/Defines are empty
- ~25 min
- cppcheck.xml: 123 MB

Compilation database run:
- Defines: __PIC__=1
- Includes: a very long list picked from the compilation database
- Files do not seem to be processed in parallel, despite the -j jobs flag?
- ~3 h
- cppcheck.xml: 23 MB
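The Defines/Includes difference follows from how `--project` works: cppcheck reads the per-file compiler flags (`-I`, `-D`) out of the compilation database, information a plain source-tree run never sees. A toy example, with a made-up database entry rather than anything from the actual project:

```shell
# Minimal, fabricated compile_commands.json entry to illustrate where the
# Defines/Includes values in the --project run come from.
cat > compile_commands.json <<'EOF'
[
  {
    "directory": "/src/app",
    "command": "gcc -I/src/app/include -I/usr/local/include -D__PIC__=1 -c main.c",
    "file": "main.c"
  }
]
EOF

# Extract the include paths and defines that cppcheck would apply to main.c.
grep -o -- '-I[^ "]*' compile_commands.json
grep -o -- '-D[^ "]*' compile_commands.json
```

With ~50k entries, each carrying its own `-I` list, this is exactly how the "very long list" of includes ends up in the analysis.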
I don't understand why a "simple" source tree run would produce more results, and faster, than a compilation-database-based one.
Is there something I am missing?
Last edit: Jam Bon 2025-12-19
Hard to say on the results; I compared some metrics from both runs:
Source tree:
- File size: 123 MB
- Time: ~25 min
- Total errors: 219938
- cstyleCast: 135350
- knownConditionTrueFalse: 3159
- nullPointerOutOfMemory: 8344

Compilation database:
- File size: 23 MB
- Time: ~3 h
- Total errors: 37862
- cstyleCast: 15363
- knownConditionTrueFalse: 349
- nullPointerOutOfMemory: 5796
For the same file, the source-tree-based run reports errors while the compilation-database-based one reports nothing (and the file is in the database, of course).
From that I would say the source-tree-based run is better (for both speed and exhaustiveness of results), but that seems weird to me.
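For reference, per-checker counts like the ones above can be pulled out of cppcheck's XML report with a quick pipeline. cppcheck's `--xml` output lists one `<error id="...">` element per finding; the sample lines below are fabricated stand-ins for a real report.

```shell
# Toy fragment of a cppcheck XML report (fabricated); a real report wraps
# these lines in <results><errors>...</errors></results>.
cat > sample.xml <<'EOF'
<error id="cstyleCast" severity="style" msg="C-style cast"/>
<error id="cstyleCast" severity="style" msg="C-style cast"/>
<error id="knownConditionTrueFalse" severity="style" msg="Condition is always true"/>
EOF

# Count findings per checker id, most frequent first.
grep -o 'id="[^"]*"' sample.xml | sort | uniq -c | sort -rn
```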
It's quite usual that the compilation database run will be slower: since all the include paths are provided, all the headers will be included in the analysis.
Sometimes it's better to provide just the paths, and sometimes it's better to provide the compilation database (if missing macros cause problems, for instance).
I don't know what is better for you, but I like that you try out both approaches.
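A possible middle ground is to skip the compilation database and pass only the handful of defines and include paths that actually matter. `-D`, `-I`, and `--max-configs` are standard cppcheck options; the paths and values below are placeholders, not anything from this project.

```shell
# Sketch: hand-pick the defines/includes instead of using the full
# compilation database. All paths and values here are placeholders.
cmd='cppcheck --enable=all -j8 -D__PIC__=1 -I include/ --max-configs=4 src/'
echo "$cmd"
```

Giving an explicit `-D` also stops cppcheck from trying multiple preprocessor configurations per file, which can speed things up considerably.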
Last edit: Daniel Marjamäki 2025-12-19
It's not obvious that this is better. Are the results good?