Cppcheck version 2.0 has been mentioned on https://www.heise.de/news/Developer-Snapshots-Programmierer-News-in-ein-zwei-Saetzen-4725039.html
Here is another very interesting article about issues with mmap(): https://www.golem.de/news/mmap-codeanalyse-mit-sechs-zeilen-bash-2006-148878.html
It is in German, but it should be no problem to machine translate it.
They also mention that neither the compilers, even with their new analysis functionality, nor Cppcheck found the bug.
I know that such a check is not yet possible with Cppcheck, but maybe it would be useful to add it.
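For context, the bug class the article appears to describe (as far as I understand it) is the classic mistake of checking mmap()'s return value against NULL instead of MAP_FAILED. A minimal sketch (my own example, not code from the article):

    #include <stddef.h>
    #include <sys/mman.h>

    void *map_file(int fd, size_t len) {
        void *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == NULL)   /* BUG: mmap() reports failure by returning    */
            return NULL; /* MAP_FAILED, i.e. (void *)-1, never NULL,    */
        return p;        /* so MAP_FAILED can leak through to callers.  */
    }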
Here is a rule that is capable of finding such cases. An internal Cppcheck check would be better, of course:
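(The rule itself was not preserved here. Purely as a hypothetical sketch, not the original rule: with a PCRE-enabled Cppcheck build, a pattern run over the space-separated token list via --rule could look roughly like this; the exact pattern is my own guess:)

    cppcheck --rule='= mmap \( [^;]* \) ; if \( \w+ [=!]= (NULL|0) \)' src/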
Last edit: orbitcowboy 2020-06-05
This article has since been updated; it now states that Cppcheck is able to find mmap() issues using PCRE rules.
Cppcheck version 2.1 has been mentioned here: https://www.heise.de/news/Developer-Snapshots-Programmierer-News-in-ein-zwei-Saetzen-4784076.html
Here is another publication about Cppcheck: https://www.heise.de/news/Code-Analyse-fuer-C-cppcheck-hat-den-Parser-renoviert-und-MISRA-Regeln-ergaenzt-4982082.html
One more article (in German :-( ):
https://www.heise.de/hintergrund/Pruefstand-fuer-Testwerkzeuge-Codeanalyse-im-Praxiseinsatz-4679430.html?seite=all
Thanks!
I translated the webpage from German to English through https://www.translatetheweb.com and I think it did a really good job.
It seems we did not do very well in that article. :-( I don't know if they used Juliet? I think that would be a pretty bad test suite to use; imho those test cases are tweaked for abstract execution, and that doesn't fit Cppcheck well. Real production code and real bugs (CVEs, for instance) would have been preferable, imho.
If Juliet isn't a good test set, is there a better one out there? I always thought the idea of having test sets with known software bugs was a good one. Juliet and the Toyota data sets were the only ones I had really known of. I was thinking Juliet would at least act as a barrier against regressions outside of Cppcheck's own tests.
Well.. Juliet and Toyota are the test suites I am familiar with also. I am sure they were written with great effort.
In "normal" analysis, Cppcheck looks for subtle clues in the code and uses that information in data flow analysis. And we have "reverse" value flow analysis. Such analysis produces lots of good warnings in real code, but has no value in those synthetic tests.
In "bug hunting" analysis we have "forward" analysis with abstract interpretation. That is the kind of analysis those test cases are written for. I expect that nearly all of the bugs will be found once "bug hunting" has matured.
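A contrived example (mine, not from the test suites) of how the two modes differ:

    void f(int x) {
        int buf[10];
        /* "Normal" analysis stays quiet here unless data/value flow gives
           it concrete evidence that x can be out of range, e.g. a call
           f(100) somewhere. "Bug hunting" abstract interpretation instead
           warns, because it cannot prove that x is always in 0..9. */
        buf[x] = 0;
    }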
I think it's very good that tools are compared against each other. So I hope to see more such comparisons soon.
Well, I'm not sure whether Cppcheck is currently running any release testing against the Juliet/Toyota data sets, but I don't see how it could hurt. Also, someone might naively run Cppcheck without --bug-hunting and get the wrong impression of what Cppcheck can and can't detect. To me, showing the difference between a run with --bug-hunting and a run with --enable=all might give people a better understanding of what --bug-hunting can do.
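Concretely, the two runs to compare would be something like this (assuming a Cppcheck 2.x build, where --bug-hunting is available as an experimental option):

    cppcheck --enable=all testcase.c
    cppcheck --bug-hunting testcase.c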
I know the bug hunting check is new, but after seeing this video on the Clang static analyzer it seems you can run the Clang analyzer with Z3 as its SMT solver. I was thinking of running Cppcheck bug hunting, Clang, and Clang with Z3 on the Juliet test set just to see how they all do.
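In case anyone else wants to try it: as far as I know, this needs a Clang build configured with LLVM_ENABLE_Z3_SOLVER, and Z3 cross-checking is then switched on per run, e.g. (flag spelling may vary between Clang versions):

    # plain static analyzer run
    clang --analyze testcase.c
    # same run, with Z3 used to cross-check the reports
    clang --analyze -Xclang -analyzer-config -Xclang crosscheck-with-z3=true testcase.c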
In particular, Cppcheck finds the case with the missing copy operator.
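(For anyone unfamiliar with that bug class: presumably this is the classic rule-of-three case, where a class owns a resource but lacks a user-defined copy constructor/assignment operator. A generic sketch, not the article's actual test case:)

    #include <cstring>

    class Buffer {
        char *data;
    public:
        explicit Buffer(const char *s) : data(new char[std::strlen(s) + 1]) {
            std::strcpy(data, s);
        }
        ~Buffer() { delete[] data; }
        // BUG: no copy constructor / copy assignment operator. The
        // compiler-generated ones copy the raw pointer, so two Buffer
        // objects end up deleting the same allocation.
    };

    int main() {
        Buffer a("hello");
        Buffer b = a;   // shallow copy -> double delete at scope exit
    }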