From: Julian S. <js...@ac...> - 2014-09-03 11:03:18
A beta tarball for 3.10.0 is available now at

  http://www.valgrind.org/downloads/valgrind-3.10.0.BETA1.tar.bz2
  (md5sum = dee188c79a9795fee178ba17f42c40b3)

Please give it a try in configurations that are important for you, and report any problems you have, either on this mailing list, or (preferably) via our bug tracker at https://bugs.kde.org/enter_bug.cgi?product=valgrind

If nothing critical emerges, a final release is likely to happen late next week.

For details about what's new in 3.10, see the NEWS file in the tarball. Some of the highlights are:

- Initial support for AArch64 ARMv8 (64-bit ARM)
- 64-bit little-endian POWER architecture support
- Much improved Mac OS X 10.9 support
- MIPS32 support for Android
- Stack traces through inlined function calls
- EXIDX unwinding on ARM
- Improved error messages for Memcheck and Helgrind
- Reduced memory use for storing debug info
- A huge number of bug fixes

J

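For anyone trying the beta, a typical fetch-verify-build-test workflow looks something like this. The URL and md5sum are from the announcement above; the rest is a sketch that assumes a standard autotools build with `wget` and `md5sum` available, and an install prefix chosen for illustration:

```shell
# Fetch the beta tarball and check it against the announced md5sum;
# md5sum -c exits non-zero on a mismatch.
wget http://www.valgrind.org/downloads/valgrind-3.10.0.BETA1.tar.bz2
echo "dee188c79a9795fee178ba17f42c40b3  valgrind-3.10.0.BETA1.tar.bz2" | md5sum -c -

# Build in the usual autotools way (prefix is just an example).
tar xjf valgrind-3.10.0.BETA1.tar.bz2
cd valgrind-3.10.0
./configure --prefix=$HOME/valgrind-3.10.0.BETA1
make -j4 && make install

# Run the regression suite; unexpected failures are what should be
# reported to the list or the bug tracker.
make regtest
```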
From: Phil L. <plo...@sa...> - 2014-09-03 13:00:16
|
Is more information available about these changes? I'd like to see more detail on "improved error messages" and "stack traces through inlined function calls".

From: Florian K. <fl...@ei...> - 2014-09-03 17:02:58
|
On 03.09.2014 13:03, Julian Seward wrote:
> A beta tarball for 3.10.0 is available now at
> http://www.valgrind.org/downloads/valgrind-3.10.0.BETA1.tar.bz2
> (md5sum = dee188c79a9795fee178ba17f42c40b3)

I just tested this on my x86-64. I see one unexpected failure in the test suite: helgrind/tests/hg05_race2 fails like so:

--- hg05_race2.stderr.exp	2014-09-02 12:32:36.000000000 +0200
+++ hg05_race2.stderr.out	2014-09-03 17:47:06.739851222 +0200
@@ -26,8 +26,7 @@
    at 0x........: th (hg05_race2.c:17)
    by 0x........: mythread_wrapper (hg_intercepts.c:...)
   ...
- Location 0x........ is 0 bytes inside foo.poot[5].plop[11],
- declared at hg05_race2.c:24, in frame #x of thread x
+ Address 0x........ is on thread #x's stack

----------------------------------------------------------------

@@ -42,8 +41,7 @@
    at 0x........: th (hg05_race2.c:17)
    by 0x........: mythread_wrapper (hg_intercepts.c:...)
   ...
- Location 0x........ is 0 bytes inside foo.poot[5].plop[11],
- declared at hg05_race2.c:24, in frame #x of thread x
+ Address 0x........ is on thread #x's stack

This is with gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2. Is this a known issue?

Florian

From: Florian K. <fl...@ei...> - 2014-09-03 19:02:38
|
On 03.09.2014 19:02, Florian Krohm wrote:
> I just tested this on my x86-64. I see one unexpected failure in the
> test suite: helgrind/tests/hg05_race2 fails like so:
> ...

I reran the regtest and this time the test passed. That does not sound good. Looks like I should run memcheck on helgrind...

Florian

From: Philippe W. <phi...@sk...> - 2014-09-03 19:55:14
|
On Wed, 2014-09-03 at 21:02 +0200, Florian Krohm wrote:
> I reran the regtest and this time the test passed. That does not sound
> good. Looks like I should run memcheck on helgrind...

I did run the full test suite in an outer memcheck some days ago and did not find anything wrong. The difficulty with some tests (in particular the memcheck tests) is that wrong behaviour in the test can cause the outer memcheck to detect a bug in the JITted code generated for the misbehaving test case. But for hg05_race2, no error was reported by memcheck.

Note, however, that helgrind test cases are not (necessarily) deterministic: depending on thread scheduling, you sometimes get errors and sometimes not. So we can imagine that the race is sometimes detected in one thread and sometimes in the other, and then we have 'lost' the exact place at which the race happened.

It would be worth trying with --fair-sched=yes to see if that is more reproducible.

Philippe

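Philippe's suggestion can be scripted to get a rough reproducibility count. This sketch assumes a valgrind on $PATH and a built source tree with the compiled test binary in `helgrind/tests`; it reruns the test under helgrind with fair scheduling and counts how often each form of the address description (the two variants from the diff above) appears:

```shell
cd helgrind/tests
rm -f runs.log

# Repeat the test under helgrind with fair scheduling, collecting stderr.
for i in $(seq 1 20); do
  valgrind --tool=helgrind --fair-sched=yes ./hg05_race2 2>> runs.log
done

# Count runs per address-description variant.
echo "variable location kept: $(grep -c 'declared at hg05_race2.c' runs.log)"
echo "location lost:          $(grep -c 'is on thread' runs.log)"
```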
From: Florian K. <fl...@ei...> - 2014-09-03 20:57:58
On 03.09.2014 21:56, Philippe Waroquiers wrote:
> I did run the full test suite in an outer memcheck some days ago.
> Did not find anything wrong;

Oh, cool. Saves me some time.

> It would be worth trying with --fair-sched=yes to see if that is
> more reproducible.

It did not make a difference: I have not been able to reproduce the failure, with or without --fair-sched=yes.

Florian