Will a unique subject become more helpful also for this issue? Would you like to avoid confusion caused by relevant content differences?
I observed that the analysis tool “Cppcheck 2.10.3-2.1” did not warn about undefined behaviour in the following source code example.

```
int my_test(void)
{
	struct x * p1 = NULL;
	struct y * p2 = &(p1->my_item);

	if (!p1)
		return EINVAL;

	return 0;
}
```

Will the software development attention grow according to such questionable access attempts for data structure members?
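A possible rewrite (the struct definitions below are invented for illustration) keeps the address computation behind the null pointer check, so the questionable access attempt disappears:

```cpp
#include <cerrno>
#include <cstddef>

struct y { int my_value; };
struct x { struct y my_item; };

// The member address is computed only after the null pointer check,
// so the undefined behaviour from the original example is avoided.
int my_test_fixed(struct x *p1) {
    if (!p1)
        return EINVAL;

    struct y *p2 = &p1->my_item;  // safe: p1 is known to be non-null here
    (void)p2;                     // placeholder for further processing
    return 0;
}
```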
I got into the mood to repeat this analysis approach for the tool version “2.9.3”. Thus I determined by my SmPL script “list_often_used_match_expressions.cocci” that 197 patterns were passed more than once to the member function “Token::Match”. 🔮 Will such incidence statistics trigger further software evolution?
💭 I am curious if another software developer (besides me) can get into the mood to apply a corresponding update suggestion which could be generated by the software “Coccinelle” (also with the help of a variant of the following script).

```
@Remove_unnecessary_pointer_checks@
expression x;
@@
-if (\(x != 0 \| x != NULL\))
 free(x);
```
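For illustration, the pattern which such a semantic patch would simplify can be sketched like this (the function names are invented; the C standard specifies that passing a null pointer to `free` is a no-op):

```cpp
#include <cstdlib>

// Pattern before the transformation: the null pointer check is redundant
// because free(NULL) is defined to have no effect.
void release_buffer(char *x) {
    if (x != NULL)   // unnecessary check which the patch would remove
        free(x);
}

// Equivalent result after applying the semantic patch:
void release_buffer_simplified(char *x) {
    free(x);
}
```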
I am trying to increase the development attention also for tools like the following.

- Semantic patch language (Coccinelle software)
- AspectC++
- Cppcheck
- Splint
I would prefer to get rid of another bit of redundant source code.
What do you think about improving static source code analysis also for this software?
Remove unnecessary null pointer checks
Completion of error handling
Improve exception safety with smart pointers
Thanks for your source code improvement.
> I'm referring to NVRAM-based settings, not files.

Thanks for your distinction. A boot entry was activated based on the boot manager “GRUB 2.04” in the meantime. I would be interested in adding further boot entries, for example by using the software “rEFInd” as a secondary boot manager.
> …, so my guess is that the update process also wiped your boot entries.

I did not get such an impression. The system configuration files were still available at the usual places.
> It's hard to say what's happening without knowing what the error messages are.

Would you like to add any further thoughts according to information from the following text in a white font on a black background?

Reboot and Select proper Boot device
or Insert Boot Media in selected Boot device and press a key
Thanks for another piece of constructive feedback. 🤔 Would any software users like to add (and support) more helpful development ideas?
Thanks for your information about the current software situation.
> It's hard to say what's happening without knowing what the error messages are.

I am curious then if a corresponding photograph would become relevant. Do rEFInd messages contain any prefixes so that the displayed text can be distinguished from information by other software components? I am also curious how a specific BIOS version influenced the detection (or further handling) of the EFI system partition by your current boot manager in special ways.
I configured a boot menu for the software “rEFInd 0.12.0”. It worked for a while according to a simple test. I got into the mood to install the BIOS update “X399AOPR.F2” here. I noticed then that error messages were displayed on a text screen after computer restarts. 🤔 It seems that I need to reactivate desired menus with another boot manager then.
I am used to the possibility that systems can be booted from ISO files also directly by means of the software “GRUB 2”. Thus I am looking for further clarification according to configuration challenges together with the rEFInd boot manager. I have stored a few ISO files for known Linux systems in an EXT3 boot partition (and the EFI system partition). They are configured for a simple test so that the graphical boot menu display is working as expected for rEFInd so far. The settings need further...
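For comparison, a GRUB 2 menu entry for direct ISO booting usually follows the loopback pattern sketched below (the file and kernel paths are invented examples; the real paths depend on the distribution inside the ISO file):

```
menuentry "Example live system (hypothetical paths)" {
    set isofile=/boot/iso/example.iso
    loopback loop $isofile
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile
    initrd (loop)/casper/initrd
}
```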
Support for time series diagram with trilean values
> Note, that using a unique_ptr for the data members is convenient as I do not need to take care of deleting it manually

Such an effect would be desired, wouldn't it?

> but it does not improve anything towards exception safety !

How does this view fit to other information sources?
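Other information sources usually present the opposite view for constructors. A minimal sketch (the class names are invented for illustration):

```cpp
#include <memory>

struct Part { };

// If constructing the member "b" throws, the already constructed member "a"
// is destroyed automatically, so the first Part is released. With raw
// pointer members and manual delete in the destructor, the same constructor
// would leak the first allocation, because the destructor of a partially
// constructed object is never executed.
class Widget {
    std::unique_ptr<Part> a;
    std::unique_ptr<Part> b;
public:
    Widget()
        : a(std::make_unique<Part>()),
          b(std::make_unique<Part>()) { }
};
```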
Improve exception safety with smart pointers
I suggest to take another look at improving source code analysis capabilities. Would you like to help any further with corresponding software development resources?
Will return value ignorance matter any more in such situations?
How much do you check run time characteristics for advanced source code analyses?
Can any circumstances evolve under which you would like to perform data processing by the programming means of a software framework like GStreamer?
> I don't know.

I find this information surprising. The first link in my previous post was published by a Microsoft employee who helped design .NET. I find the usage of deterministic object destruction clear. I wonder also why the author “branbray” suggested in the article “Some Notes about Mixed Types” (from 2005-07-20) to use an extra null pointer check before the C++ delete operator call in the finalizer of the class template “Embedded”.

> 2) It seriously degrades application performance.

How do you...
Which software behaviour may you expect for the deletion of objects allocated on the CLI heap according to the standard specification?
Did you get informed in the way that the delete operation from the language “C++/CLI” would not tolerate the passing of null pointers? Do you find information from the section “15.4.5 Delete” of the standard specification “ECMA-372” sufficient?
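For comparison with standard C++ (the section “15.4.5 Delete” of “ECMA-372” specifies the analogous guarantee for C++/CLI), a minimal sketch with invented function names:

```cpp
// ISO C++ guarantees that applying delete to a null pointer has no effect,
// so the guard in the first variant is superfluous.
void dispose_checked(int *p) {
    if (p != nullptr)   // redundant null pointer check
        delete p;
}

void dispose(int *p) {
    delete p;           // well-defined also when p is a null pointer
}
```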
My bisection approach pointed the commit “comment out old memleak checking. maybe it can be removed.” (from 2018-11-20) out as “the first” questionable one for this issue. Now I am curious how the software evolution will be continued since further adjustments were applied also around the implementation of the function “runSimplifiedChecks”.
I am curious then how our different source code size tolerance can be adjusted to fix this issue easier.
What do you think about helping to bisect such an issue? Do you find the source file “vglclient.cpp” (which was affected before the commit “Use delete[] instead of free() for memory allocated with new.” on 2019-12-12) interesting for further clarification?
I have determined that a commit between the program versions “1.85” and “1.86” influenced this source code search pattern in undesirable ways. Would you like to narrow questionable changes down further?
🔮 I hope that ways can be clarified to improve the software situation also around the reported false negative.
Should the check be called with one parameter less if such a list is kept empty so far? Is this variable superfluous at the moment?
> I've had to rewrite Cppcheck infrastructure a couple of times because I learned the hard way there was limitations. While Clang infrastructure is still the same (just enhanced not rewritten), they designed it really well from the start.

Will this experience influence the corresponding software development any further in positive ways?
I have taken another look at the following function implementation.

```
void getErrorMessages(ErrorLogger *e, const Settings *settings) const OVERRIDE {
	// …
	const std::list<const Token *> callstack;
	c.mismatchAllocDealloc(callstack, "varname");
	// …
}
```

Now I wonder about the properties of this local variable. 🔮 Should the “call stack” be filled with useful data anyhow?
> You wrote in the issue report that you used 1.88,

Such a version was also installed on my system by a software package.

> but in fact 1.84 has been used?

Yes.

> And with 1.88 it is no longer detected?

Two developers got this impression.

> Can you try to create a reduced example with 1.84 where the issue is detected?

The questionable source code can be extracted from a single function implementation. But my motivation would be low at the moment to reduce it further.
I needed another moment to realise that I submitted the bug report “Fix a mismatching allocation and deallocation for the variable “buf”” based on an analysis result by the program version “1.84 dev”. Now I wonder why the same information is not presented by the version “1.88-24.d_t.13” here.
An extra null pointer check is not needed in functions like the following.

- CassiniSoldner
- Ellipsoid
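A corresponding destructor pattern can be sketched like this (the class and member names are invented; they do not reproduce the original implementations):

```cpp
// The destructor needs no "if (cache_)" guard because the delete[] operator
// already tolerates null pointers.
class Projection {
    double *cache_;
public:
    Projection() : cache_(nullptr) { }
    explicit Projection(unsigned n) : cache_(new double[n]) { }
    ~Projection() { delete[] cache_; }   // no extra null pointer check needed

    // copying disabled for brevity in this sketch
    Projection(const Projection &) = delete;
    Projection &operator=(const Projection &) = delete;
};
```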
Usage of augmented assignment statements
I still wonder about the identifier “_PROCESSOR_H_INLCUDED” (and the word “HEADDER”) in the file archive “ModRSsim2Source8.21.2.7..zip”. I also find include guards questionable which contain double underscores.
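An include guard variant which avoids reserved identifiers (no leading underscore followed by a capital letter, no double underscores) and the mentioned typos could look like this (the file name behind the guard is a hypothetical example):

```cpp
#ifndef PROCESSOR_H_INCLUDED
#define PROCESSOR_H_INCLUDED

// declarations …

#endif // PROCESSOR_H_INCLUDED
```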
> Which ones exactly?

I am proposing to reconsider the software situation (and data models) also for the handling of function return types in more detail for a while.

> Many functions "return" undefined values in output arguments if they are not successful.

I agree. How would you like to represent corresponding facts then? Will execution failure information be occasionally stored also in these output parameters? Why exactly should this not be differentiated? I suggest to increase case distinctions here....
> To use something similar as for the return value

I propose to take additional case distinctions better into account.

> i suggest <use-outval/>.

I find an empty XML element insufficient for the safe description of constraints in the discussed use cases.

> I think Cppcheck should only complain if the value is not used in code paths that handle a successful function execution.

This desire is reasonable. - But it is too limiting for programming interface designs which can be technically possible. So we need...
> I do not think such configuration could be used if there is any conditionals.

I imagine that this design variant can be a usual one when a general agreement can be achieved at all for the proper usage of function return values.

> I guess that is a typical use case.

I would find it surprising if programmers would choose to use an output parameter instead of working with a function return value directly.
Should data from output parameters be generally used after it was determined that a function call succeeded? Would you care for the use case that such a function argument can be treated as a return value if the function return type would be “void”?
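The discussed calling convention can be sketched as follows (the interface is a hypothetical example): the output parameter receives defined content only on the success path, so the caller reads it only after checking the return value.

```cpp
#include <cerrno>

// Hypothetical interface: the output parameter is written only on success.
int read_sensor(int *out) {
    if (out == nullptr)
        return EINVAL;
    *out = 42;      // defined content only on the success path
    return 0;
}

// Caller pattern the discussed check would accept: the output value is used
// only after the function reported success.
int use_sensor() {
    int value;
    if (read_sensor(&value) != 0)
        return -1;  // "value" stays untouched and must not be read here
    return value;
}
```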
> …, you are just making wild guesses.

Did I present educated guesses once more because of experiences from computer science?

> The checks do not take so much time.

They represent significant work, don't they?

> If you comment out all our checks then cppcheck will still be roughly as slow as it is today.

I find such feedback strange. How would you like to avoid undesirable software slowness then?

> If that would give you a 2x speed difference then we would have such option.

I would find improvements nice...
Construction of string literals without using plus operators
Can you follow a development trend that more checks will result in longer run times for safe source code analysis? Would you like to select check sets which can fit into run time limits? What do you think about measuring the influence of string comparisons on the run time characteristics? Can any data processing benefit from work which would be invested in more efficient data structures? How much can the avoidance of duplicate code generation and execution help?
What do you think about other update candidates besides data processing for class templates?
The changes for these software versions resulted in a significant difference for the shown run time behaviour. Are you becoming more interested in precise software profiling?
I find it questionable that interesting contents were deleted also from this discussion topic.
Usage of augmented assignment statements
> …, while checking 1KB file take 4 seconds.

Does the source code analysis result in any more run times which you find significant for your work flow? Would you like to distinguish relevant files by priority?

> Little file can include 100 other files,

Do you maintain such a list manually? Were any inclusion specifications automatically generated?

> so file size doesn't matter.

Other software details will become more interesting then, won't they? Do you recognise any special code patterns in the source...
> Our project is too big …

I would prefer to interpret the software situation in more constructive ways. Efficient data processing will be more helpful here, won't it?

> and I don't know how to deremine wich files are most slow.

What do you think about sharing any information from the file size distribution? Would you like to check the run time behaviour for code analysis on the biggest source file?
Would you like to improve the software build infrastructure for corresponding program profiling anyhow?
What do you think about adjusting the code markup in your message? To which software version did you refer?
> But I do not know how to determine if this is some problem that should be looking into.

Do you need any more help for the systematic comparison of run time characteristics?
Will file sizes matter in significant ways? Which C++ structures became more challenging for efficient source code analysis?
What do you think about taking another look at information from articles like the following?

- MSC11-C. Incorporate diagnostic tests using assertions
- ERR00-C. Adopt and implement a consistent and comprehensive error-handling policy

Will changes on your analysis tool adjust any views on questionable programming practices?
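A diagnostic test in the spirit of the article “MSC11-C” can be sketched like this (the function is a hypothetical example): the assertion documents a precondition and aborts debug builds when it is violated.

```cpp
#include <cassert>

// The assertion records the invariant "count must be positive" so that
// violations are detected during testing instead of producing a division
// by zero in production code paths.
int average(int sum, int count) {
    assert(count > 0 && "caller must pass a positive element count");
    return sum / count;
}
```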
> Yes, our code is c++ with templates

Can you share any more statistics for the analysed source code structure? Do you find data processing characteristics especially remarkable for any specific source files?
I guess that a quick “fix” would be the omission of a questionable bracket expression (instead of escaping a dash). Will the bracket expression “,:” need further adjustments? But I imagine that a safe specification for value ranges will be more challenging.
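The dash placement issue can be illustrated with a hypothetical bracket expression (ECMAScript grammar as used by `std::regex`):

```cpp
#include <regex>
#include <string>

// Inside a bracket expression a dash forms a character range unless it is
// escaped or placed first or last. With the dash at the end, "[,:-]" matches
// ',' or ':' or a literal '-', while "[,-:]" would instead contain the range
// from ',' (0x2C) to ':' (0x3A), i.e. also all decimal digits.
bool contains_separator(const std::string &text) {
    static const std::regex pattern("[,:-]");
    return std::regex_search(text, pattern);
}
```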
How interesting do you find details from bug and issue trackers for the referenced software tools? (How much will deviations matter for data validation by these approaches?) I occasionally try to achieve something for the programming language “CMake”. I am curious under which circumstances possible extensions will get the desired software development attention also for build scripts. The XML data model contains aspects for further design considerations, doesn't it?
Thanks for your reminder about the tool “xmllint”. This one and the other mentioned software will need further adjustments as usual. How much will we influence the situation here? What are the chances to offer configuration checks also for a CMake command (and not only for build targets by the “Makefile”)? I know something also about variations in regular expression syntax. Would you like to resolve additional error information anyhow?
This functionality is described in a way where I see high system configuration efforts. I imagine that more return values should usually be checked than ignored. Will it be more convenient to specify the circumstances for exceptions with return value usage instead? Would you like to compare source code analysis efforts between the management of white- and black-lists any more?
> But you have to realize that you have implement these checkers from scratch.

I hope that this software situation can change more.

> And you could then add other noisy checks to that addon.

How many deviations will be tolerated from Cppcheck's original goals for the mentioned purpose?