From: Ken M. <zar...@nt...> - 2020-05-28 00:59:49

On Wed, May 27, 2020 at 11:12:30PM +0100, Ken Moffat via Check-devel wrote:
(Adding Branden)
> On Wed, May 27, 2020 at 12:35:41PM -0700, chr...@ma... wrote:
> > On 2020-05-27 00:47, Branden Archer wrote:
>
> The build where glibc rejected -O0 was altered to use -O1 for glibc. It hasn't changed the results for sigfpe in bash's tests in chroot, and has not yet got to check, but I expect I'll need to continue until I can boot the system (to get rid of the host glibc underlying chroot) and retry check. Not hopeful, but need to try.
>
> Like many things, I think an answer may have been documented somewhere, but so far I've not found anything relevant.
>
> Meanwhile, thanks for the help in understanding why the check tests fail.

That new build has now been booted. I then built check and ran the tests - they all passed, so I did need to get away from the host glibc which was sitting under chroot.

I think a -O1 build might be a bit slower than I like, and detuning from glibc's default of -O2 feels odd, but I guess I'm on the way forward and with a variety of things to try over the next week or so.

Thanks again to you both!

ĸen
--
Do you not know that, what you belittle by the name tree is but the
mere four-dimensional analogue of a whole multidimensional universe
which - no, I can see you do not. -- Druellae (a Dryad)
From: Ken M. <zar...@nt...> - 2020-05-27 22:12:40

On Wed, May 27, 2020 at 12:35:41PM -0700, chr...@ma... wrote:
> On 2020-05-27 00:47, Branden Archer wrote:
> > (I do not seem to be getting all the emails from Ken for some reason. I see some messages I did not see in Fredrik's response.)
>
> My emails from Ken say "Ken Moffat via Check-devel", if there's an issue it's probably related to that. I'll forward them all in order. Ken, please reply-all if you aren't already, the mailing list won't send duplicate messages.

I thought I was doing that, but I might have forgotten on some replies. If so, apologies. More likely is validation failures causing my mails to either be rejected (if so, I'll find out in a few days) or marked as spam.

> > Perhaps the test needs to be updated to pick another signal to try. Or, maybe inducing a division by zero is a portable way to cause a SIGFPE.
>
> Since Check is a C unit testing framework, we could assume that tested systems are compliant either with ISO C or a well-defined subset of ISO C, e.g. ISO C without forking.
>
> The POSIX-2017 spec refers to ISO C and is clear that raise(SIGFPE) is supposed to generate a signal:
>
> The ISO C standard only requires the signal names SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, and SIGTERM to be defined. An implementation need not generate any of these six signals, except as a result of explicit use of interfaces that generate signals, such as raise(), [CX] kill(), the General Terminal Interface (see Special Characters), and the kill utility, unless otherwise stated (see, for example, XSH Memory Protection).
>
> https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html
>
> The C11 standard has similar wording on page 283 (numbered as 265):
>
> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
>
> I'm not an expert in this area by any means. I'm most curious to know if Ken's observed behaviour is a bug or if his system is working as intended, and if so then why? Probably someone at gcc/clang/glibc/linux will know a lot more about this.
>
> Chris

At the moment it looks as if there is something odd going on in most of my systems (not sure if I mentioned here, but my athlon 200ge built with -O2 -march=native on gcc-9.2 works as expected, other machines don't).

The build where glibc rejected -O0 was altered to use -O1 for glibc. It hasn't changed the results for sigfpe in bash's tests in chroot, and has not yet got to check, but I expect I'll need to continue until I can boot the system (to get rid of the host glibc underlying chroot) and retry check. Not hopeful, but need to try.

Like many things, I think an answer may have been documented somewhere, but so far I've not found anything relevant.

Meanwhile, thanks for the help in understanding why the check tests fail.

ĸen
--
Do you not know that, what you belittle by the name tree is but the
mere four-dimensional analogue of a whole multidimensional universe
which - no, I can see you do not. -- Druellae (a Dryad)
From: Chris P. <chr...@ma...> - 2020-05-27 22:08:56

On 2020-05-27 00:47, Branden Archer wrote:
> (I do not seem to be getting all the emails from Ken for some reason. I see some messages I did not see in Fredrik's response.)
>
> From Ken's experiment, it is interesting that a division by zero can raise a floating point signal but raise(SIGFPE) cannot. Maybe raise(SIGFPE) is not as good a test as we thought. There is at least one other platform where trying to raise(SIGFPE) also does not work exactly as planned (link <https://github.com/libcheck/check/issues/97>).

That bug report is complaining that raise(SIGFPE) works fine from a simple program under Cygwin but not when Check is using it. Whereas Ken cannot get a simple program to work under Linux. So Check is failing in both cases, but maybe the Cygwin case shouldn't be failing and Ken's case should be failing? If that bug was resolved it might help.

I have a Cygwin system but I'm just too busy to do more than comment for the foreseeable future. (Hence my off-the-cuff answers...)

Chris
From: <chr...@ma...> - 2020-05-27 19:36:04

On 2020-05-27 00:47, Branden Archer wrote:
> (I do not seem to be getting all the emails from Ken for some reason. I see some messages I did not see in Fredrik's response.)

My emails from Ken say "Ken Moffat via Check-devel", if there's an issue it's probably related to that. I'll forward them all in order. Ken, please reply-all if you aren't already, the mailing list won't send duplicate messages.

> Perhaps the test needs to be updated to pick another signal to try. Or, maybe inducing a division by zero is a portable way to cause a SIGFPE.

Since Check is a C unit testing framework, we could assume that tested systems are compliant either with ISO C or a well-defined subset of ISO C, e.g. ISO C without forking.

The POSIX-2017 spec refers to ISO C and is clear that raise(SIGFPE) is supposed to generate a signal:

The ISO C standard only requires the signal names SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, and SIGTERM to be defined. An implementation need not generate any of these six signals, except as a result of explicit use of interfaces that generate signals, such as raise(), [CX] kill(), the General Terminal Interface (see Special Characters), and the kill utility, unless otherwise stated (see, for example, XSH Memory Protection).

https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html

The C11 standard has similar wording on page 283 (numbered as 265):

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf

I'm not an expert in this area by any means. I'm most curious to know if Ken's observed behaviour is a bug or if his system is working as intended, and if so then why? Probably someone at gcc/clang/glibc/linux will know a lot more about this.

Chris
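
One way to probe the behaviour POSIX describes, without relying on the default action killing the process, is to install a handler and see whether raise() actually delivers the signal. A minimal sketch, assuming a POSIX system with sigaction:

#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t got_fpe = 0;

static void on_fpe(int sig)
{
    (void)sig;
    got_fpe = 1;
}

int main(void)
{
    struct sigaction sa;

    sa.sa_handler = on_fpe;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    if (sigaction(SIGFPE, &sa, NULL) != 0) {
        perror("sigaction");
        return 1;
    }

    /* raise() is synchronous: the handler runs before it returns. */
    if (raise(SIGFPE) != 0)
        perror("raise");

    printf("handler ran: %s\n", got_fpe ? "yes" : "no");
    return got_fpe ? 0 : 1;
}

On a conforming system this prints "handler ran: yes"; if it prints "no" while raise() returned 0, the signal is being swallowed somewhere below the C library.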
From: Branden A. <b.m...@gm...> - 2020-05-27 07:47:29

(I do not seem to be getting all the emails from Ken for some reason. I see some messages I did not see in Fredrik's response.)

From Ken's experiment, it is interesting that a division by zero can raise a floating point signal but raise(SIGFPE) cannot. Maybe raise(SIGFPE) is not as good a test as we thought. There is at least one other platform where trying to raise(SIGFPE) also does not work exactly as planned (link <https://github.com/libcheck/check/issues/97>).

Looking at the commit history to determine why SIGFPE was chosen, the test was added in the initial revision of Check (link <https://github.com/libcheck/check/commit/daf41f96162e36a7d31e42f93f833472ef99ee6e#diff-2e4afdbfd1b0a116126cb3022ac187b0R40>). If there was a reason, it may be lost to the sands of time.

Perhaps the test needs to be updated to pick another signal to try. Or, maybe inducing a division by zero is a portable way to cause a SIGFPE.

- Branden

On Tue, May 26, 2020 at 9:19 PM Ken Moffat via Check-devel <che...@li...> wrote:
> On Tue, May 26, 2020 at 09:49:49PM +0100, Ken Moffat via Check-devel wrote:
> > On Tue, May 26, 2020 at 12:46:01PM -0700, chr...@ma... wrote:
> > > On 2020-05-26 10:09, Ken Moffat via Check-devel wrote:
> > >
> > > Try a -O0 build of glibc and then linux. I'm suggesting this because you also noticed problems in Bash.
> > >
> > > Chris
> >
> > I changed my CFLAGS for glibc (only) to -O0 -march=native but no can do:
>
> error: #error "glibc cannot be compiled without optimization"
>
> ĸen
> --
> Do you not know that, what you belittle by the name tree is but the
> mere four-dimensional analogue of a whole multidimensional universe
> which - no, I can see you do not. -- Druellae (a Dryad)
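
One caveat on the division-by-zero idea: in ISO C, integer division by zero is undefined behaviour, so it is not a guaranteed-portable way to get SIGFPE (it does trap on x86 Linux), and an optimizer may fold away a division it can evaluate at compile time. A hedged sketch that at least defeats constant folding:

#include <stdio.h>

int main(void)
{
    volatile int zero = 0;   /* volatile so the compiler cannot fold 1000/0 away */
    volatile int result;

    printf("Before\n");
    result = 1000 / zero;    /* undefined behaviour; raises SIGFPE on x86 Linux */
    printf("After (%d)\n", result);
    return 0;
}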
From: Ken M. <zar...@nt...> - 2020-05-27 04:19:28

On Tue, May 26, 2020 at 09:49:49PM +0100, Ken Moffat via Check-devel wrote:
> On Tue, May 26, 2020 at 12:46:01PM -0700, chr...@ma... wrote:
> > On 2020-05-26 10:09, Ken Moffat via Check-devel wrote:
> >
> > Try a -O0 build of glibc and then linux. I'm suggesting this because you also noticed problems in Bash.
> >
> > Chris

I changed my CFLAGS for glibc (only) to -O0 -march=native but no can do:

error: #error "glibc cannot be compiled without optimization"

ĸen
--
Do you not know that, what you belittle by the name tree is but the
mere four-dimensional analogue of a whole multidimensional universe
which - no, I can see you do not. -- Druellae (a Dryad)
From: Ken M. <zar...@nt...> - 2020-05-26 20:49:58

On Tue, May 26, 2020 at 12:46:01PM -0700, chr...@ma... wrote:
> On 2020-05-26 10:09, Ken Moffat via Check-devel wrote:
> > So, raise on its own apparently doesn't generate the exception.
>
> Well, raise returns zero on success, so try:
>
> printf("raise: %d\n", raise(SIGFPE))
>
> Raise can be implemented by kill, so try:
>
> printf("kill: %d\n", kill(getpid(), SIGFPE))

raise: 0
kill: 0

> Try a -O0 build of glibc and then linux. I'm suggesting this because you also noticed problems in Bash.
>
> Chris

Hmm, I lack available systems. Will see if I can do chroot builds on another box, but will probably take more than a week (various rebuilds of packages from source, even on old systems, are coming up when firefox releases).

Later.

ĸen
--
Do you not know that, what you belittle by the name tree is but the
mere four-dimensional analogue of a whole multidimensional universe
which - no, I can see you do not. -- Druellae (a Dryad)
From: <chr...@ma...> - 2020-05-26 19:46:17

On 2020-05-26 10:09, Ken Moffat via Check-devel wrote:
> So, raise on its own apparently doesn't generate the exception.

Well, raise returns zero on success, so try:

printf("raise: %d\n", raise(SIGFPE))

Raise can be implemented by kill, so try:

printf("kill: %d\n", kill(getpid(), SIGFPE))

Try a -O0 build of glibc and then linux. I'm suggesting this because you also noticed problems in Bash.

Chris
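
For reference, here are Chris's two probes assembled into a complete program - just his fragments made compilable, with the headers raise(), kill(), and getpid() need:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* With the default SIGFPE disposition either call should terminate
     * the process, so even reaching the second printf is informative. */
    printf("raise: %d\n", raise(SIGFPE));
    printf("kill: %d\n", kill(getpid(), SIGFPE));
    return 0;
}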
From: Ken M. <zar...@nt...> - 2020-05-26 17:09:35

On Sun, May 24, 2020 at 11:21:34AM -0700, Branden Archer wrote:
> Hi Ken.

Hi Branden, it's taken me a little while to get back to this, but here we go:

> Thanks for taking a look at the logs, Chris.
>
> I'd like to give another look over the logs to see what may be amiss.
[...]
> and all those should report "Passed". I notice that there are two tests which are not reporting the expected result:
>
> check_check_master.c:746:F:Core Tests:test_check_all_msgs:179: For test 179:Signal Tests:test_fpe expected 'Received signal 8 (Floating point exception)', got 'Passed'
> check_check_master.c:746:F:Core Tests:test_check_all_msgs:180: For test 180:Signal Tests:test_mark_point expected 'Received signal 8 (Floating point exception)', got 'Shouldn't reach here'
>
> Both of these failures are related to raising SIGFPE. Looking at the first test, test_fpe:
>
> START_TEST(test_fpe)
> {
>   record_test_name(tcase_name());
>   record_failure_line_num(__LINE__-4); /* -4 as the failure is reported at START_TEST() */
>   raise (SIGFPE);
> }
>
> Apparently on the system under test raise(SIGFPE) is either not causing a signal or Check is not able to detect the signal. Other signals being tested (such as SIGSEGV in test_segv) are being correctly detected.

While going through the details of my other package tests in this build, I noticed that bash seemed to be failing because it didn't SIGFPE.

...

> The signal tests are only enabled when fork() is enabled. If fork() is disabled the test will not be run, as it will not be relevant (e.g. Check cannot detect signals unless the tests are run from within their own processes).
>
> I suggest looking into an example program that raises a SIGFPE and determining if it does cause a failure or not. For example, on my OSX system:
>
> Brandens-MBP:raise brarcher$ cat test.c
> #include <stdio.h>
> #include <signal.h>
>
> int main() {
>     printf("Before\n");
>     raise(SIGFPE);
>     printf("After\n");
>     return 0;
> }
> Brandens-MBP:raise brarcher$ gcc test.c
> Brandens-MBP:raise brarcher$ ./a.out
> Before
> Floating point exception: 8
> Brandens-MBP:raise brarcher$
>
> Hope it helps.
>
> - Branden

Using that program:

ken@plexi ~/check-debug $vim test.c
ken@plexi ~/check-debug $gcc test.c
ken@plexi ~/check-debug $./a.out
Before
After

So I am indeed not getting that signal. However, if I change the program to do a division by zero

ken@plexi ~/check-debug $diff -u test.c divbyzero.c
--- test.c	2020-05-26 17:48:11.655277178 +0100
+++ divbyzero.c	2020-05-26 17:59:22.661766358 +0100
@@ -5,13 +5,18 @@

 int main()
 {
+  int a = 1000;
+  int z = 0;
+  int result;
+
   printf("Before\n");
-  raise(SIGFPE);
+  //raise(SIGFPE);
+  result = a / z;
   printf("After\n");
-  return 0;
+  return result;
 }

I get the exception -

ken@plexi ~/check-debug $./divbyzero
Before
Floating point exception

So, raise on its own apparently doesn't generate the exception.

ĸen
--
Do you not know that, what you belittle by the name tree is but the
mere four-dimensional analogue of a whole multidimensional universe
which - no, I can see you do not. -- Druellae (a Dryad)
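
Given that raise() and kill() both return 0 yet nothing fires, one possible explanation - and it is only a guess - is that SIGFPE's disposition is SIG_IGN or the signal is blocked in the process: ignored dispositions and the signal mask are inherited across fork() and exec(), so a build environment could bequeath them. A small diagnostic, assuming POSIX sigaction/sigprocmask:

#include <signal.h>
#include <stdio.h>

int main(void)
{
    struct sigaction old;
    sigset_t blocked;

    if (sigaction(SIGFPE, NULL, &old) != 0) {
        perror("sigaction");
        return 1;
    }
    if (old.sa_handler == SIG_IGN)
        printf("SIGFPE disposition: ignored (inherited across exec)\n");
    else if (old.sa_handler == SIG_DFL)
        printf("SIGFPE disposition: default\n");
    else
        printf("SIGFPE disposition: caught by a handler\n");

    if (sigprocmask(SIG_BLOCK, NULL, &blocked) != 0) {
        perror("sigprocmask");
        return 1;
    }
    printf("SIGFPE blocked: %s\n", sigismember(&blocked, SIGFPE) ? "yes" : "no");
    return 0;
}

If this reports "ignored", resetting the disposition with signal(SIGFPE, SIG_DFL) before the raise() test would confirm the diagnosis.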
From: <chr...@ma...> - 2020-05-25 17:15:48

[Resending, again.]

This is a much better analysis, Branden, thank you.

On 2020-05-24 11:21, Branden Archer wrote:
> I notice that there are two tests which are not reporting the expected result:
>
> check_check_master.c:746:F:Core Tests:test_check_all_msgs:179: For test 179:Signal Tests:test_fpe expected 'Received signal 8 (Floating point exception)', got 'Passed'
> check_check_master.c:746:F:Core Tests:test_check_all_msgs:180: For test 180:Signal Tests:test_mark_point expected 'Received signal 8 (Floating point exception)', got 'Shouldn't reach here'

It would be good if there was a way to filter the logs and show only the problematic test results. The simplest method would be to save a known good testing log, and then do something like:

$ diff expected.log test_results.log | grep -E '^-|^\+'

Chris
From: Branden A. <b.m...@gm...> - 2020-05-24 18:21:56

Hi Ken. Thanks for taking a look at the logs, Chris.

I'd like to give another look over the logs to see what may be amiss. At a high level, the test-suite.log file indicates there are two binaries which found at least one test failure, namely check_check and check_check_export. The tests in those binaries check various behaviours in Check, and confirm that stuff that should work does and that Check is able to detect when test failures occur (such as SIGSEGVs, etc). They run near identical tests.

On my system (OSX) these two binaries do pass, and the first few lines of logs are the following:

$ tests/check_check_export
Running suite(s): Fork Sub
100%: Checks: 3, Failures: 0, Errors: 0
check_check_fork.c:38:P:Core:test_inc:0: Passed
check_check_fork.c:47:P:Core:test_nofork_sideeffects:0: Passed
check_check_fork.c:54:P:Core:test_nofork_pid:0: Passed
Running suite(s): Check Servant
Running suite(s):100%: Checks: 0, Failures: 0, Errors: 0
check.c:586: Bad status in set_fork_status
Check Servant2
14%: Checks: 241, Failures: 166, Errors: 41

Note that failures and errors are being reported. The first part of the log (the lines above and the check_check_sub.c lines) are unit tests that try Check's API and either intentionally pass or intentionally fail. For example, this line

check_check_sub.c:190:F:Simple Tests:test_ck_abort_msg:0: Failure expected

attempts to invoke a failure with a specific message.

Later on in the tests there are "tests on the tests" (the check_check_master.c lines). These confirm that the tests which were run report success/failure as expected, that failure line numbers match, error messages match, etc. The tests which check the error messages start with:

check_check_master.c:679:P:Core Tests:test_check_all_msgs

and all those should report "Passed". I notice that there are two tests which are not reporting the expected result:

check_check_master.c:746:F:Core Tests:test_check_all_msgs:179: For test 179:Signal Tests:test_fpe expected 'Received signal 8 (Floating point exception)', got 'Passed'
check_check_master.c:746:F:Core Tests:test_check_all_msgs:180: For test 180:Signal Tests:test_mark_point expected 'Received signal 8 (Floating point exception)', got 'Shouldn't reach here'

Both of these failures are related to raising SIGFPE. Looking at the first test, test_fpe:

START_TEST(test_fpe)
{
  record_test_name(tcase_name());
  record_failure_line_num(__LINE__-4); /* -4 as the failure is reported at START_TEST() */
  raise (SIGFPE);
}

Apparently on the system under test raise(SIGFPE) is either not causing a signal or Check is not able to detect the signal. Other signals being tested (such as SIGSEGV in test_segv) are being correctly detected.

The signal tests are only enabled when fork() is enabled. If fork() is disabled the test will not be run, as it will not be relevant (e.g. Check cannot detect signals unless the tests are run from within their own processes).

I suggest looking into an example program that raises a SIGFPE and determining if it does cause a failure or not. For example, on my OSX system:

Brandens-MBP:raise brarcher$ cat test.c
#include <stdio.h>
#include <signal.h>

int main() {
    printf("Before\n");
    raise(SIGFPE);
    printf("After\n");
    return 0;
}
Brandens-MBP:raise brarcher$ gcc test.c
Brandens-MBP:raise brarcher$ ./a.out
Before
Floating point exception: 8
Brandens-MBP:raise brarcher$

Hope it helps.

- Branden

On Sat, May 23, 2020 at 11:56 PM <chr...@ma...> wrote:
> On 2020-05-23 22:45, Ken Moffat via Check-devel wrote:
> > On Thu, May 21, 2020 at 09:48:34PM -0700, chr...@ma... wrote:
> >> check_check_fixture.c:336:F:Core:test_ch_setup_sig:0: SRunner stat string incorrect with checked setup signal
>
> If you look at tests/check_check_fixture.c:336, it says in the comment at the top of the function that the test will fail without fork.
>
> > Before I posted, I'd already tried with --disable-fork (no failures, but obviously a lot fewer tests were actually run) and I didn't get those messages.
>
> More evidence that it's a fork issue on your system. Official advice is for you to use Check in --disable-fork mode.
>
> > But I'm at a loss about how to determine what is causing them. It looks as if
> >
> > Check Servant2
> > 19%: Checks: 260, Failures: 167, Errors: 43
> >
> > means there were 43 unexpected errors, but deciphering testsuites seems to be an art which I have not acquired. The suites for different packages are all pretty different from each other - unless an individual test reports an obvious error where I can see how the expected result differs from what actually happened, and ideally get some error message along the way, then it might as well all be written in a language I can't read.
>
> I agree, it's hard to decipher our own testing logs.
>
> Without looking too hard, my guess is that these 43 errors are cascading failures that result from your system's fork not working properly. Ideally we would have some tests that check whether or not fork works, and then not run the tests that require fork to work. Or in other words, --disable-fork should be automatic if fork doesn't actually work.
>
> Assuming you still want to fix this, can you compile and run a simple program that relies on fork? Examples here:
>
> https://www.geeksforgeeks.org/fork-system-call/
>
> Also, what does Check's configure say about fork? There's explicit testing in there. You can also look at config.log.
>
> If that all works fine, then you can start testing the other functions related to fork that Check uses. They'll all show up in https://linux.die.net/man/7/signal - src/check_run.c is a good starting point to look for them.
>
> An alternative useful thing would be a test in configure.ac that forced --disable-fork if fork (or a related signal function) was not working.
>
> Good luck!
>
> Chris
From: <chr...@ma...> - 2020-05-24 06:56:15

On 2020-05-23 22:45, Ken Moffat via Check-devel wrote:
> On Thu, May 21, 2020 at 09:48:34PM -0700, chr...@ma... wrote:
>> check_check_fixture.c:336:F:Core:test_ch_setup_sig:0: SRunner stat string incorrect with checked setup signal

If you look at tests/check_check_fixture.c:336, it says in the comment at the top of the function that the test will fail without fork.

> Before I posted, I'd already tried with --disable-fork (no failures, but obviously a lot fewer tests were actually run) and I didn't get those messages.

More evidence that it's a fork issue on your system. Official advice is for you to use Check in --disable-fork mode.

> But I'm at a loss about how to determine what is causing them. It looks as if
>
> Check Servant2
> 19%: Checks: 260, Failures: 167, Errors: 43
>
> means there were 43 unexpected errors, but deciphering testsuites seems to be an art which I have not acquired. The suites for different packages are all pretty different from each other - unless an individual test reports an obvious error where I can see how the expected result differs from what actually happened, and ideally get some error message along the way, then it might as well all be written in a language I can't read.

I agree, it's hard to decipher our own testing logs.

Without looking too hard, my guess is that these 43 errors are cascading failures that result from your system's fork not working properly. Ideally we would have some tests that check whether or not fork works, and then not run the tests that require fork to work. Or in other words, --disable-fork should be automatic if fork doesn't actually work.

Assuming you still want to fix this, can you compile and run a simple program that relies on fork? Examples here:

https://www.geeksforgeeks.org/fork-system-call/

Also, what does Check's configure say about fork? There's explicit testing in there. You can also look at config.log.

If that all works fine, then you can start testing the other functions related to fork that Check uses. They'll all show up in https://linux.die.net/man/7/signal - src/check_run.c is a good starting point to look for them.

An alternative useful thing would be a test in configure.ac that forced --disable-fork if fork (or a related signal function) was not working.

Good luck!

Chris
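
Along the lines Chris suggests, a minimal probe (POSIX assumed) for the two things Check's forking mode depends on - fork() itself, and wait-status reporting for a child killed by a signal - might look like this, using SIGFPE since that is the signal under suspicion:

#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int status;
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: die from the signal, as a failing checked test would. */
        raise(SIGFPE);
        _exit(42);   /* only reached if the signal was not delivered */
    }
    if (waitpid(pid, &status, 0) != pid) {
        perror("waitpid");
        return 1;
    }
    if (WIFSIGNALED(status))
        printf("child killed by signal %d (expected)\n", WTERMSIG(status));
    else if (WIFEXITED(status))
        printf("child exited with %d - the signal was not delivered\n",
               WEXITSTATUS(status));
    return 0;
}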
From: Ken M. <zar...@nt...> - 2020-05-24 05:46:07

On Thu, May 21, 2020 at 09:48:34PM -0700, chr...@ma... wrote:
> I sent this already, but now I'm resending from an address already subscribed to the mailing list.

Yeah, lists can be fun - that bit me too.

> Hi,
>
> Thanks for posting full logs.
>
> The following excerpts look like unexpected failures to me. I think it's some problem with forks and signals. However, it's been a while since I've looked at things closely, so don't take my word for it.
>
> Chris
>
> check_check.log
> ---------------
> Running suite(s): Fix Sub
> 0%: Checks: 1, Failures: 1, Errors: 0
> check_check_fixture.c:36:S:Fix Sub:unchecked_setup:0: Test failure in fixture
>
> check.c:586: Bad status in set_fork_status
> Check Servant2
>
> check_check_fixture.c:336:F:Core:test_ch_setup_sig:0: SRunner stat string incorrect with checked setup signal

Before I posted, I'd already tried with --disable-fork (no failures, but obviously a lot fewer tests were actually run) and I didn't get those messages. But I'm at a loss about how to determine what is causing them. It looks as if

Check Servant2
19%: Checks: 260, Failures: 167, Errors: 43

means there were 43 unexpected errors, but deciphering testsuites seems to be an art which I have not acquired. The suites for different packages are all pretty different from each other - unless an individual test reports an obvious error where I can see how the expected result differs from what actually happened, and ideally get some error message along the way, then it might as well all be written in a language I can't read.

Thanks anyway.

ĸen
--
Remembering The People's Republic of Treacle Mine Road.
Truth! Justice! Freedom! Reasonably priced Love! And a Hard-boiled Egg!
From: <chr...@ma...> - 2020-05-22 05:13:42

I sent this already, but now I'm resending from an address already subscribed to the mailing list.

Hi,

Thanks for posting full logs.

The following excerpts look like unexpected failures to me. I think it's some problem with forks and signals. However, it's been a while since I've looked at things closely, so don't take my word for it.

Chris

check_check.log
---------------
Running suite(s): Fix Sub
0%: Checks: 1, Failures: 1, Errors: 0
check_check_fixture.c:36:S:Fix Sub:unchecked_setup:0: Test failure in fixture

check.c:586: Bad status in set_fork_status
Check Servant2

check_check_fixture.c:336:F:Core:test_ch_setup_sig:0: SRunner stat string incorrect with checked setup signal

On 2020-05-21 19:51, Ken Moffat via Check-devel wrote:
> I build check as part of linuxfromscratch, and I've been seeing these errors since February 2018, but apparently not everyone encounters them. I'm looking at a new approach to how we build the fresh system, and as part of that I'm trying to track down why any remaining test failures that I encounter are happening.
>
> So, I'm keen to understand what is different about my builds. The logs are "somewhat large", so I assume the mailing list will not accept them. Uploaded to http://www.linuxfromscratch.org/~ken/check_test_errors/
>
> I can see that each of these failing tests reports 43 errors, but I'm none the wiser about which items in the log are the actual errors. These logs are from building on a completed system built in March, with various normal hardening and optimization CFLAGS, but back in 2018 I didn't have those flags. All my builds are on x86_64.
>
> Any help will be appreciated. Thanks.
>
> ĸen
From: Ken M. <zar...@nt...> - 2020-05-22 02:51:21

I build check as part of linuxfromscratch, and I've been seeing these errors since February 2018, but apparently not everyone encounters them. I'm looking at a new approach to how we build the fresh system, and as part of that I'm trying to track down why any remaining test failures that I encounter are happening.

So, I'm keen to understand what is different about my builds. The logs are "somewhat large", so I assume the mailing list will not accept them. Uploaded to http://www.linuxfromscratch.org/~ken/check_test_errors/

I can see that each of these failing tests reports 43 errors, but I'm none the wiser about which items in the log are the actual errors. These logs are from building on a completed system built in March, with various normal hardening and optimization CFLAGS, but back in 2018 I didn't have those flags. All my builds are on x86_64.

Any help will be appreciated. Thanks.

ĸen
--
Do you not know that, what you belittle by the name tree is but the
mere four-dimensional analogue of a whole multidimensional universe
which - no, I can see you do not. -- Druellae (a Dryad)
From: Branden A. <b.m...@gm...> - 2019-10-30 05:32:39

Thanks for reporting this, Cancellier.

Looking at the logs, it appears that the values expected from the floating point tests were too strict. The capitalization used for the floating point values inf and NaN differs from the expectation. Those tests should be relaxed to allow for other capitalization, e.g. inf and INF should both be OK.

The following bug has been logged to address this:

https://github.com/libcheck/check/issues/234

If you have any further comments on the bug, kindly continue the discussion on GitHub.

- Branden

On Mon, Oct 28, 2019 at 6:55 AM Cancellier Walter <wal...@ge...> wrote:
> Hi,
>
> I am trying to compile Check 0.13.0 on AIX 7.2 using gcc 8.1.0.
>
> autoreconf --install, ./configure and make succeed; make check fails and tells me to report to your email address. You can find full output below, and I'm attaching tests/test-suite.log.
>
> Any feedback will be appreciated, thank you.
>
> Kind Regards,
>
> Walter
>
> $ make check
> Making check in lib
> Target "check" is up to date.
> Making check in src
> Target "check" is up to date.
> Making check in .
> Target "check-am" is up to date.
> Making check in checkmk
> make check-TESTS
> PASS: test/check_checkmk
> ============================================================================
> Testsuite summary for Check 0.13.0
> ============================================================================
> # TOTAL: 1
> # PASS: 1
> # SKIP: 0
> # XFAIL: 0
> # FAIL: 0
> # XPASS: 0
> # ERROR: 0
> ============================================================================
> Target "check" is up to date.
> Making check in tests
> make check-TESTS
> FAIL: check_check_export
> FAIL: check_check
> PASS: test_output.sh
> PASS: test_check_nofork.sh
> PASS: test_check_nofork_teardown.sh
> PASS: test_xml_output.sh
> PASS: test_log_output.sh
> PASS: test_set_max_msg_size.sh
> PASS: test_tap_output.sh
> ============================================================================
> Testsuite summary for Check 0.13.0
> ============================================================================
> # TOTAL: 9
> # PASS: 7
> # SKIP: 0
> # XFAIL: 0
> # FAIL: 2
> # XPASS: 0
> # ERROR: 0
> ============================================================================
> See tests/test-suite.log
> Please report to check-devel at lists dot sourceforge dot net
> ============================================================================
> make: 1254-004 The error code from the last command is 1.
> Stop.
> make: 1254-004 The error code from the last command is 2.
> Stop.
> make: 1254-004 The error code from the last command is 2.
> Stop.
> make: 1254-004 The error code from the last command is 1.
> Stop.
> $
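
For background on why the mismatch is conforming: C99 leaves the exact spelling to the implementation - %f may print "inf" or "infinity" and "nan" (possibly with a parenthesized payload), while %F prints the uppercase forms - so AIX's libc choosing a different capitalization than glibc is allowed. A sketch of the kind of relaxed, case-insensitive comparison the fix needs (strncasecmp is POSIX, from <strings.h>):

#include <math.h>
#include <stdio.h>
#include <strings.h>   /* strncasecmp (POSIX) */

int main(void)
{
    char buf[32];

    snprintf(buf, sizeof(buf), "%f", INFINITY);
    /* glibc prints "inf"; other libcs may print "INF" or "INFINITY". */
    printf("printed '%s', case-insensitive match vs \"inf\": %s\n",
           buf, strncasecmp(buf, "inf", 3) == 0 ? "yes" : "no");

    snprintf(buf, sizeof(buf), "%f", NAN);
    printf("printed '%s', case-insensitive match vs \"nan\": %s\n",
           buf, strncasecmp(buf, "nan", 3) == 0 ? "yes" : "no");
    return 0;
}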
From: Cancellier W. <wal...@ge...> - 2019-10-28 11:06:09

Hi,

I am trying to compile Check 0.13.0 on AIX 7.2 using gcc 8.1.0.

autoreconf --install, ./configure and make succeed; make check fails and tells me to report to your email address. You can find full output below, and I'm attaching tests/test-suite.log.

Any feedback will be appreciated, thank you.

Kind Regards,

Walter

$ make check
Making check in lib
Target "check" is up to date.
Making check in src
Target "check" is up to date.
Making check in .
Target "check-am" is up to date.
Making check in checkmk
make check-TESTS
PASS: test/check_checkmk
============================================================================
Testsuite summary for Check 0.13.0
============================================================================
# TOTAL: 1
# PASS: 1
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
Target "check" is up to date.
Making check in tests
make check-TESTS
FAIL: check_check_export
FAIL: check_check
PASS: test_output.sh
PASS: test_check_nofork.sh
PASS: test_check_nofork_teardown.sh
PASS: test_xml_output.sh
PASS: test_log_output.sh
PASS: test_set_max_msg_size.sh
PASS: test_tap_output.sh
============================================================================
Testsuite summary for Check 0.13.0
============================================================================
# TOTAL: 9
# PASS: 7
# SKIP: 0
# XFAIL: 0
# FAIL: 2
# XPASS: 0
# ERROR: 0
============================================================================
See tests/test-suite.log
Please report to check-devel at lists dot sourceforge dot net
============================================================================
make: 1254-004 The error code from the last command is 1.
Stop.
make: 1254-004 The error code from the last command is 2.
Stop.
make: 1254-004 The error code from the last command is 2.
Stop.
make: 1254-004 The error code from the last command is 1.
Stop.
$

Walter Cancellier
Unix Systems Administrator and Engineer
M. +39 334 6944112
wal...@ge...

Global Platform Services
GSS - Generali Shared Services S.c.a.r.l.
Via Marocchesa 14
31050 Mogliano Veneto / Italy
T. +39 041 5493 664
From: Nicholas H. <nj...@ae...> - 2017-02-28 16:21:30

Good question! I think it is used like an integration test - so if someone submits a Pull Request that causes the test to fail, questions will be asked! Or if the test fails when installing from source, the user might be more cautious about using it.

But if the test fails when the bot is building binaries, given that there aren't maintainers for individual packages, I am not sure how someone who knows about the package would be notified.

nick.

On 2017-02-28 01:31, Branden Archer wrote:
> Cool, thanks for updating Homebrew!
>
> Out of curiosity, what happens if the test fails?
>
> - Branden
>
> On Mon, Feb 27, 2017 at 1:44 PM, Nicholas Humfrey <nj...@ae...> wrote:
>> Hello,
>>
>> Just to say that check version 0.11.0 is now in Homebrew for Mac OS:
>> http://braumeister.org/formula/check
>>
>> I also added a simple test definition to ensure that the build works. This is run automatically by the brew bot when building binaries for Mac.
>>
>> nick.
From: Branden A. <b.m...@gm...> - 2017-02-28 01:31:36

Cool, thanks for updating Homebrew!

Out of curiosity, what happens if the test fails?

- Branden

On Mon, Feb 27, 2017 at 1:44 PM, Nicholas Humfrey <nj...@ae...> wrote:
> Hello,
>
> Just to say that check version 0.11.0 is now in Homebrew for Mac OS:
> http://braumeister.org/formula/check
>
> I also added a simple test definition to ensure that the build works. This is run automatically by the brew bot when building binaries for Mac.
>
> nick.
From: Nicholas H. <nj...@ae...> - 2017-02-27 18:44:39

Hello,

Just to say that check version 0.11.0 is now in Homebrew for Mac OS:
http://braumeister.org/formula/check

I also added a simple test definition to ensure that the build works. This is run automatically by the brew bot when building binaries for Mac.

nick.
From: Branden A. <b.m...@gm...> - 2016-11-29 04:16:57

> launch the specified command (eg 'diff') to allow the user to inspect what changed

Having a mechanism for helping a developer determine the failure is helpful. I wonder though if something like dumping the contents to a file and running diff would be a bit much. Although one may be running a test in isolation, it may be more likely that there are many tests which are run. This would result in many temp files and runs of diff. It may be more direct if the developer instruments the test(s) in question such that they dump the contents being checked for further inspection.

Besides that, I am wondering how the feature could be made cross-platform. Linux/Unix I am familiar with, but it would be a little more effort to get Windows working.

Not shooting down the idea. Just thinking through it. How useful would it be for your use case, and how would the output of the diff command relate to Check's logging? Would the output only be available in some of Check's logging types?

- Branden

On Sun, Nov 27, 2016 at 4:11 PM, Nicholas Humfrey <nj...@ae...> wrote:
> Just when I really need this functionality and start to think about implementing it, I find that someone has already done the work in the form of ck_assert_mem_*. Laziness is rewarded!
>
> For those who haven't seen, ck_assert_mem_eq and ck_assert_mem_ne were merged into master on Nov 3:
> https://github.com/libcheck/check/commit/908497153ccbf91169e6e4de76dab19ab1103464
>
> Something that would enhance this would be to have better tooling to work out what the differences are when a test fails. What do you think about having a CK_DIFF_TOOL environment variable, which when set would create two temporary files containing the expected and actual outputs in hex, and launch the specified command (eg 'diff') to allow the user to inspect what changed?
>
> I am personally using ck_assert_mem_eq to verify that a generated Ethernet frame/packet matches what it should, so less than 1500 bytes but quite a bit more than CK_MAX_ASSERT_MEM_PRINT_SIZE, which defaults to 64 bytes.
>
> nick.
>
> On 2016-07-31 14:42, Nicholas Humfrey wrote:
>> Thanks for the advice Chris.
>>
>> I will give it a go - need to get the balance right between being overly complex to implement and being useful. Will see if there is something that compares binaries that I can steal.
>>
>> nick.
>>
>> On 2016-07-31 00:02, Chris Pickett wrote:
>>> Hi Nick,
>>>
>>> It sounds fine to me. How about if you get it working for your own project and then show us what it looks like? For a long enough buffer you'll want some kind of start offset plus a window that starts before the differences and finishes after them. Maybe somebody has already built a nice bindiff? It could be fiddly, consider if there are multiple bits of the binary that differ. I'm not sure but bin_eq might be a better name than hex_eq.
>>>
>>> Chris
>>>
>>> Nicholas Humfrey wrote:
>>>> Hello,
>>>>
>>>> I have been working on a project that has tests that do quite a bit of comparing between binary buffers. I have thought that a ck_assert_hex_eq() assertion would be useful, that compares two pointers and displays the difference, in hex, if they are not equal.
>>>>
>>>> ck_assert_hex_eq(X, Y, len);
>>>>
>>>> Internally it would use memcmp() to perform the comparison. The complexity would be in printing the difference as hex.
>>>>
>>>> Would there be any interest in me contributing something like this to check?
>>>>
>>>> nick.
From: Nicholas H. <nj...@ae...> - 2016-11-27 21:33:41

Just when I really need this functionality and start to think about implementing it, I find that someone has already done the work in the form of ck_assert_mem_*. Laziness is rewarded!

For those who haven't seen, ck_assert_mem_eq and ck_assert_mem_ne were merged into master on Nov 3:
https://github.com/libcheck/check/commit/908497153ccbf91169e6e4de76dab19ab1103464

Something that would enhance this would be to have better tooling to work out what the differences are when a test fails. What do you think about having a CK_DIFF_TOOL environment variable, which when set would create two temporary files containing the expected and actual outputs in hex, and launch the specified command (eg 'diff') to allow the user to inspect what changed?

I am personally using ck_assert_mem_eq to verify that a generated Ethernet frame/packet matches what it should, so less than 1500 bytes but quite a bit more than CK_MAX_ASSERT_MEM_PRINT_SIZE, which defaults to 64 bytes.

nick.

On 2016-07-31 14:42, Nicholas Humfrey wrote:
> Thanks for the advice Chris.
>
> I will give it a go - need to get the balance right between being overly complex to implement and being useful. Will see if there is something that compares binaries that I can steal.
>
> nick.
>
> On 2016-07-31 00:02, Chris Pickett wrote:
>> Hi Nick,
>>
>> It sounds fine to me. How about if you get it working for your own project and then show us what it looks like? For a long enough buffer you'll want some kind of start offset plus a window that starts before the differences and finishes after them. Maybe somebody has already built a nice bindiff? It could be fiddly, consider if there are multiple bits of the binary that differ. I'm not sure but bin_eq might be a better name than hex_eq.
>>
>> Chris
>>
>> Nicholas Humfrey wrote:
>>> Hello,
>>>
>>> I have been working on a project that has tests that do quite a bit of comparing between binary buffers. I have thought that a ck_assert_hex_eq() assertion would be useful, that compares two pointers and displays the difference, in hex, if they are not equal.
>>>
>>> ck_assert_hex_eq(X, Y, len);
>>>
>>> Internally it would use memcmp() to perform the comparison. The complexity would be in printing the difference as hex.
>>>
>>> Would there be any interest in me contributing something like this to check?
>>>
>>> nick.
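
A rough sketch of how the proposed CK_DIFF_TOOL hook might behave on a mismatch - purely an illustration of the proposal, not existing Check code (assumes POSIX mkstemp/system): dump both buffers as hex, 16 bytes per line so a line-oriented diff works, then launch the tool.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Write a buffer as hex, 16 bytes per line, and close the descriptor. */
static void dump_hex(int fd, const unsigned char *buf, size_t len)
{
    FILE *f = fdopen(fd, "w");
    size_t i;

    if (f == NULL) {
        close(fd);
        return;
    }
    for (i = 0; i < len; i++)
        fprintf(f, "%02x%c", buf[i], (i % 16 == 15) ? '\n' : ' ');
    fputc('\n', f);
    fclose(f);
}

/* If CK_DIFF_TOOL is set, show the expected vs. actual buffers with it. */
static void maybe_run_diff_tool(const void *exp, const void *got, size_t len)
{
    const char *tool = getenv("CK_DIFF_TOOL");
    char exp_path[] = "/tmp/check_exp_XXXXXX";
    char got_path[] = "/tmp/check_got_XXXXXX";
    char cmd[512];
    int efd, gfd;

    if (tool == NULL)
        return;

    efd = mkstemp(exp_path);
    gfd = mkstemp(got_path);
    if (efd < 0 || gfd < 0)
        return;

    dump_hex(efd, exp, len);
    dump_hex(gfd, got, len);

    snprintf(cmd, sizeof(cmd), "%s %s %s", tool, exp_path, got_path);
    system(cmd);   /* acceptable for a local debugging hook */

    unlink(exp_path);
    unlink(got_path);
}

Branden's concern above still applies: with many failing tests this spawns many temp files and tool runs, so gating it behind the environment variable (off by default) is the point of the design.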
From: Branden A. <b.m...@gm...> - 2016-08-01 17:43:58

There was a feature request on SourceForge around 12/17/2013 regarding something similar. Here is a link:

https://sourceforge.net/p/check/feature-requests/30/

One outcome from that discussion was the question of how best to generate the printed message without creating a memory leak. The underlying call which would check the assertion is:

/**
 * Fail the test if the expression is false; print message on failure
 *
 * @param expr expression to evaluate
 * @param ... message to print (in printf format) if expression is false
 *
 * @note If the check fails, the remaining of the test is aborted
 *
 * @since 0.9.6
 */
#define ck_assert_msg(expr, ...) \
  (expr) ? \
  _mark_point(__FILE__, __LINE__) : \
  _ck_assert_failed(__FILE__, __LINE__, "Assertion '"#expr"' failed" , ## __VA_ARGS__, NULL)

That is, if the message relates to a binary array, the string representing the binary array must be formed before making the call. This will lead to either an arbitrarily sized buffer being used (e.g. 128 chars), or something allocated with malloc. If the latter is used, the memory will not be freed after the assertion.

Take a look at the discussion as a potential jumping-off point. I think that adding something like ck_assert_mem_eq would be nice to have.

- Branden

On Sun, Jul 31, 2016 at 9:42 AM, Nicholas Humfrey <nj...@ae...> wrote:
> Thanks for the advice Chris.
>
> I will give it a go - need to get the balance right between being overly complex to implement and being useful. Will see if there is something that compares binaries that I can steal.
>
> nick.
>
> On 2016-07-31 00:02, Chris Pickett wrote:
> > Hi Nick,
> >
> > It sounds fine to me. How about if you get it working for your own project and then show us what it looks like? For a long enough buffer you'll want some kind of start offset plus a window that starts before the differences and finishes after them. Maybe somebody has already built a nice bindiff? It could be fiddly, consider if there are multiple bits of the binary that differ. I'm not sure but bin_eq might be a better name than hex_eq.
> >
> > Chris
> >
> > Nicholas Humfrey wrote:
> >> Hello,
> >>
> >> I have been working on a project that has tests that do quite a bit of comparing between binary buffers. I have thought that a ck_assert_hex_eq() assertion would be useful, that compares two pointers and displays the difference, in hex, if they are not equal.
> >>
> >> ck_assert_hex_eq(X, Y, len);
> >>
> >> Internally it would use memcmp() to perform the comparison. The complexity would be in printing the difference as hex.
> >>
> >> Would there be any interest in me contributing something like this to check?
> >>
> >> nick.
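
One way around the leak concern: format the failure message into a fixed-size automatic buffer before handing it to ck_assert_msg, so nothing has to outlive the assertion. The helper name and buffer size below are illustrative, not Check API:

#include <check.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: fail with a short report of the first differing
 * byte, using only stack storage so nothing leaks past the assertion. */
static void my_assert_mem_eq(const void *exp, const void *got, size_t len)
{
    const unsigned char *e = exp, *g = got;
    char msg[128];
    size_t i;

    if (memcmp(exp, got, len) == 0)
        return;

    for (i = 0; i < len && e[i] == g[i]; i++)
        ;
    snprintf(msg, sizeof(msg),
             "buffers differ at offset %zu: expected 0x%02x, got 0x%02x",
             i, e[i], g[i]);
    ck_assert_msg(0, "%s", msg);
}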
From: Nicholas H. <nj...@ae...> - 2016-07-31 13:42:20

Thanks for the advice Chris.

I will give it a go - need to get the balance right between being overly complex to implement and being useful. Will see if there is something that compares binaries that I can steal.

nick.

On 2016-07-31 00:02, Chris Pickett wrote:
> Hi Nick,
>
> It sounds fine to me. How about if you get it working for your own project and then show us what it looks like? For a long enough buffer you'll want some kind of start offset plus a window that starts before the differences and finishes after them. Maybe somebody has already built a nice bindiff? It could be fiddly, consider if there are multiple bits of the binary that differ. I'm not sure but bin_eq might be a better name than hex_eq.
>
> Chris
>
> Nicholas Humfrey wrote:
>> Hello,
>>
>> I have been working on a project that has tests that do quite a bit of comparing between binary buffers. I have thought that a ck_assert_hex_eq() assertion would be useful, that compares two pointers and displays the difference, in hex, if they are not equal.
>>
>> ck_assert_hex_eq(X, Y, len);
>>
>> Internally it would use memcmp() to perform the comparison. The complexity would be in printing the difference as hex.
>>
>> Would there be any interest in me contributing something like this to check?
>>
>> nick.
From: Chris P. <chr...@ma...> - 2016-07-30 23:02:59

Hi Nick,

It sounds fine to me. How about if you get it working for your own project and then show us what it looks like? For a long enough buffer you'll want some kind of start offset plus a window that starts before the differences and finishes after them. Maybe somebody has already built a nice bindiff? It could be fiddly, consider if there are multiple bits of the binary that differ. I'm not sure but bin_eq might be a better name than hex_eq.

Chris

Nicholas Humfrey wrote:
> Hello,
>
> I have been working on a project that has tests that do quite a bit of comparing between binary buffers. I have thought that a ck_assert_hex_eq() assertion would be useful, that compares two pointers and displays the difference, in hex, if they are not equal.
>
> ck_assert_hex_eq(X, Y, len);
>
> Internally it would use memcmp() to perform the comparison. The complexity would be in printing the difference as hex.
>
> Would there be any interest in me contributing something like this to check?
>
> nick.
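
The window idea Chris describes might look something like this - names are hypothetical; it prints a few bytes of hex context from both buffers around the first difference:

#include <stdio.h>
#include <string.h>

static void print_bin_diff(const unsigned char *x, const unsigned char *y,
                           size_t len, size_t window)
{
    size_t first, start, end, i;

    if (memcmp(x, y, len) == 0) {
        printf("buffers are identical\n");
        return;
    }
    /* Locate the first differing byte. */
    for (first = 0; first < len && x[first] == y[first]; first++)
        ;
    start = (first > window) ? first - window : 0;
    end = (first + window + 1 < len) ? first + window + 1 : len;

    printf("first difference at offset %zu\n", first);
    printf("expected:");
    for (i = start; i < end; i++)
        printf(" %02x", x[i]);
    printf("\nactual:  ");
    for (i = start; i < end; i++)
        printf(" %02x", y[i]);
    printf("\n");
}

Handling multiple differing regions, as Chris notes, is where it gets fiddly; this sketch only reports the first.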