I believe we will need to focus on making cppcheck release ready for a while.
I have thought about a code freeze several times during the last releases. GitHub does not seem to support it directly, at least I have not found such a functionality. It is possible to specify stricter rules for branches, but that is not exactly what I had in mind.
I think such a freeze would be a good idea some days before a release.
Maybe some crashes could be fixed then (http://cppcheck.osuosl.org:8000/crash.html).
I will (try to) locally run some larger projects with such a frozen version and check for problems.
Last edit: versat 2019-04-29
Sounds good.
Some goals I have:
Do you have some further ideas? Do you see some specific ticket that you feel is a blocker?
Last edit: Daniel Marjamäki 2019-05-01
Making multithreading work again under Cygwin by using std::thread instead of the disabled fork() implementation would maybe be useful before the next release. Not sure. The ticket is https://trac.cppcheck.net/ticket/8973
I have looked at the multithreading code but do not fully understand it. And at the moment I do not have time to find out how it works and implement a std::thread solution.
hmm.. if somebody wants to implement that, I would be willing to delay the release a little for it.
But otherwise... if a Cygwin user has been using -j before and now can't use it, then I fear that will be really annoying. We could maybe activate the fork in the release: if it works, all is fine; if it does not work, the user has to remove the -j.
It would be good to address the syntax errors on valid code:
https://trac.cppcheck.net/ticket/8890
https://trac.cppcheck.net/ticket/9057
https://trac.cppcheck.net/ticket/9109
Yes, those are higher-priority tickets.
The Cppcheck warnings that are temporarily suppressed need to be fixed. Why do we add a check if we don't even care about the warnings ourselves?
And also.. feel free to report here if you look at some daca@home warnings. Can we see if, for instance, stlFindInsert works well enough?
I have checked some packages reported to be crashing, and I also observed it myself while running daca@home. For some days now, several packages use so much RAM that Cppcheck is killed or the system crashes (it crashed my Win 10 system, which really should be somewhat stable :) ).
I have created this ticket for the issue: https://trac.cppcheck.net/ticket/9122
Commit https://github.com/danmar/cppcheck/commit/81b02197e8cd7aad7e4a8a5813a23fb873e683c1 seems to cause these issues.
Last edit: versat 2019-05-07
Thanks!
In my opinion we are now starting to get release ready.. I have a feeling that we can release in 2-3 weeks. What is your feeling? Do you see some critical blockers?
The crashes have increased in the last few days. A few days ago daca showed no new crashes.
http://cppcheck.osuosl.org:8000/crash.html
Last edit: versat 2019-06-01
There is a large number of daca crashes in the last week from an ARM system:
Linux-4.14.50-v7+-armv7l-with-debian-9.4
They look like this:

Program received signal SIGSEGV, Segmentation fault.
0x0009e1a8 in Token::astOperand2 (this=<optimized out>) at lib/token.h:1095
1095 return mImpl->mAstOperand2;
#0 0x0009e1a8 in Token::astOperand2 (this=<optimized out>) at lib/token.h:1095
#1 CheckCondition::check5991 (this=0x5112a0, this@entry=0x7effd7b8) at build/checkcondition.cpp:2298
#2 0x000a5954 in CheckCondition::runChecks (this=this@entry=0x2bb86c <_ZN12_GLOBAL__N_18instanceE>, tokenizer=0x7effe1e8, tokenizer@entry=0x132d44 <CppCheck::checkNormalTokens(Tokenizer const&)+232>, settings=0x7effec4c, settings@entry=0x2e4d98, errorLogger=errorLogger@entry=0x7effeb28) at lib/checkcondition.h:69
#3 0x00132d44 in CppCheck::checkNormalTokens (this=this@entry=0x7effeb28, tokenizer=...) at build/cppcheck.cpp:723
#4 0x001376c8 in CppCheck::checkFile (this=this@entry=0x7effeb28, filename=<error reading variable: Cannot create a lazy string with address 0x0, and a non-zero length.>, cfgname="", fileStream=...) at build/cppcheck.cpp:498
#5 0x00139c40 in CppCheck::check (this=this@entry=0x7effeb28, path="temp/libunicap-0.9.12/cpi/dcam/dcam.c") at build/cppcheck.cpp:157
#6 0x00250eb8 in CppCheckExecutor::check_internal (this=this@entry=0x7efff278, cppcheck=..., argv=argv@entry=0x7efff534) at cli/cppcheckexecutor.cpp:884
#7 0x00251660 in CppCheckExecutor::check (this=this@entry=0x7efff278, argc=argc@entry=13, argv=argv@entry=0x7efff534) at cli/cppcheckexecutor.cpp:198
#8 0x0005986c in main (argc=13, argv=0x7efff534) at cli/main.cpp:95

I have been unable to duplicate these on my Ubuntu x64 system or my Raspberry Pi.
Where is CheckCondition::check5991? I don't get that function when I build with the match compiler.

check5991 is a proof of concept that I committed by mistake and then reverted.
I will give a talk on June 13th where I briefly show how to create and share a simple check.
So will these crashes stop and be removed when run again?
hmm.. maybe the client must be restarted. I am not sure, but I discovered my mistake quickly, and maybe I removed my wrong commit with a force push.
So that client probably made a "git pull" before my force push, and now when it runs "git pull" it does not get the latest changes.
It should be possible to update the donate-cpu.py script to detect when "git pull" does not actually bring the checkout up to date.
I removed these results as they are just wrong. I'm not sure how to locate the client and tell the owner to reset it.
Currently the only package where HEAD crashes but 1.87 does not is http://cppcheck.osuosl.org:8000/asymptote
I can still reproduce the issue with HEAD.
After a while Cppcheck starts to eat all RAM (32 GB installed). So this really could be a bug or something that should be fixed.
I'm looking at it. It's a stack overflow. I haven't reduced it yet.
It would be nice if daca also gave stack traces for the remaining crashes. Can we maybe memory-limit the process or do something to force a stack trace for stack overflows?
gnudatalanguage also has a new crash. This is caused by the recent caching optimization of the template name position. The cached value is no longer valid when an out-of-line member function gets instantiated.
Sadly, limiting the memory does not seem to be really possible in a reliable way. I searched for a way to do this a while ago, without success. Even with external tools like "ulimit" it did not really work for me (ulimit -v should work, but did not for me, at least not on all platforms). On Linux there is the C function setrlimit(), but I am not sure how we could use it best.
Currently the package is reanalyzed via GDB only if Cppcheck returns with -11. I am not sure if and which other return codes should also trigger reanalysis via GDB. Sometimes there seem to be results with return code -9 where the analysis has been aborted in a normal way; sometimes it has been aborted by the OOM killer or so.
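For what it's worth, both pieces could be sketched from the Python side of the client: setrlimit() is reachable via the resource module in a preexec_fn, and subprocess reports death-by-signal as a negative return code. A rough sketch under those assumptions; run_limited and the 4 GiB cap are made up for illustration, RLIMIT_AS is Linux-specific, and note that hitting it makes allocations fail (typically ending in SIGABRT) rather than reproducing an OOM kill:

```python
import resource
import signal
import subprocess

MEM_LIMIT = 4 * 1024 * 1024 * 1024  # hypothetical 4 GiB cap, purely for illustration

def limit_memory():
    # Runs in the child between fork() and exec() (Linux-specific):
    # cap the address space so a runaway analysis gets failing
    # allocations instead of taking the whole machine down.
    resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT, MEM_LIMIT))

def run_limited(command):
    proc = subprocess.run(command, preexec_fn=limit_memory)
    if proc.returncode < 0:
        # subprocess reports "killed by signal N" as returncode -N:
        # -11 = SIGSEGV -> worth re-running under GDB for a stack trace
        # -9  = SIGKILL -> often the OOM killer; a GDB re-run is pointless
        print("killed by", signal.Signals(-proc.returncode).name)
    return proc.returncode
```

Mapping the negative return code through signal.Signals would at least let the client distinguish a SIGKILL (-9) from a genuine crash before deciding whether a GDB re-run makes sense.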
The crash page looks pretty good imho. Overall Cppcheck HEAD is about 10% faster than 1.87. The number of defects is not good but acceptable. Looking at the diffs.. it is not perfect but I guess it is acceptable more or less... so what do you think now?
I'm working on a patch that improves the template simplifier performance. I recently changed the maximum template simplifier pass count to 4. This was necessary to fix a bug where we were expanding instantiations in template declarations, which caused the generation of bad code. Some tests required the additional passes to get the same results. The bad effect of this change is that any uninstantiated templates will cause the template simplifier to always run the maximum number of passes.
My patch changes the maximum pass count to 10 but exits as soon as it detects a pass that makes no changes. This ensures that only the minimum number of passes required to get full simplification will be run. This way, as many passes as necessary will be run to fully simplify very complex templates, but uninstantiated templates will not cause unnecessary passes.
Unfortunately the template alias code doesn't support variadic aliases, which can cause variadic template aliases to hit the maximum pass count. My patch fixes the variadic template alias problem, but I want to test it on as much code as possible before I submit it.
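The early-exit scheme described above is essentially a fixed-point loop. A sketch of the idea only, not the actual TemplateSimplifier code; demo_pass just peels nested wrappers from a string to stand in for a real simplification pass:

```python
MAX_PASSES = 10  # raised limit; the early exit keeps the common case cheap

def simplify_until_stable(data, simplify_pass):
    """Run passes until a fixed point or the pass limit. 'simplify_pass'
    must return True if it changed anything; as soon as a full pass
    changes nothing, further passes cannot change anything either."""
    for n in range(1, MAX_PASSES + 1):
        if not simplify_pass(data):
            return n  # pass n made no changes: fixed point reached
    return MAX_PASSES

def demo_pass(state):
    # Toy pass: peel one "wrap(...)" layer per call.
    before = state[0]
    if "wrap(" in before:
        state[0] = before.replace("wrap(", "", 1).replace(")", "", 1)
    return state[0] != before

state = ["wrap(wrap(x))"]
passes = simplify_until_stable(state, demo_pass)  # two real passes + one no-op pass
```

An uninstantiated template corresponds to demo_pass returning False on the first call: the loop then stops after a single pass instead of always burning the full limit.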
ok good!
In the last couple of days I have been trying to find out why HEAD shows so many fewer uninitMemberVar warnings than 1.87. I am hoping that this can be fixed before the release, because the checker is almost ineffective now..
amai created a ticket that I believe is a blocker: https://trac.cppcheck.net/ticket/9182
I don't know how many times I have had bugs caused by spaces in paths. Too many. If only we could detect all of these with cppcheck ... :-)