From: Philippe W. <phi...@sk...> - 2011-07-28 23:12:04
On Thu, 2011-07-28 at 17:37 -0500, Rich Coe wrote:
> Hi Albert,
>
> I reproduced the crash with src/pkg/reflect.
> Here's what is happening to cause this:
> - V detects that an error might occur.
> - V starts to capture the stack of the occurrence.
> - if we assume the stack pointer and stack frames are correct,
>   as V walks the stack, one of the frames does not conform to the ABI
>   and the previous frame pointer in the current frame points to
>   non-existent memory.
>
> I have not proved that the frame does not conform to the ABI; there
> are many other reasons for the invalid pointer.

You might use the Valgrind gdbserver to let gdb do the stack trace just before the error is reported by Valgrind.

If gdb plus the Valgrind gdbserver can produce a stack trace of the simulated CPU, and the Valgrind "core" cannot make a stack trace from the same simulated CPU state, then it looks like the Valgrind stack-trace logic needs to be enhanced/corrected. If neither can make a stack trace, but native gdb can, then that looks more like a Valgrind simulation bug.

Philippe
From: Rich C. <rc...@wi...> - 2011-07-28 22:37:32
Hi Albert,
I reproduced the crash with src/pkg/reflect.
Here's what is happening to cause this:
- V detects that an error might occur.
- V starts to capture the stack of the occurrence.
- if we assume the stack pointer and stack frames are correct,
as V walks the stack, one of the frames does not conform to the ABI
and the previous frame pointer in the current frame points to
non-existent memory.
I have not proved that the frame does not conform to the ABI; there
are many other reasons for the invalid pointer.
Since this happened on frame 7, a work-around is to run V with
--num-callers=6, and the program runs to completion.
Rich
On Thu, 28 Jul 2011 23:19:36 +0200
Albert Strasheim <fu...@gm...> wrote:
> Hello again
>
> On Thu, Jul 28, 2011 at 11:01 PM, Albert Strasheim <fu...@gm...> wrote:
> > Hello
> > On Thu, Jul 28, 2011 at 10:52 PM, Rich Coe <rc...@wi...> wrote:
> >> Hi,
> >> Using go tip:9218 on linux x86_64, I couldn't reproduce this.
> >> What about your environment makes this issue unique ?
> >> Rich
> > You're right! I guess I was accidentally still running 3.6.1 instead
> > of the build from SVN.
> > I guess bug 262916 can be closed now.
> > The next thing to do is to take a closer look at the warnings produced.
>
> I've done some more tests, and most packages work, but the 6.out from
> the reflect package still produces a crash at m_stacktrace.c:334.
>
> I'm definitely running the code from SVN now.
>
> Regards
>
> Albert
--
Rich Coe rc...@wi...
From: Philippe W. <phi...@sk...> - 2011-07-28 22:33:21
On Thu, 2011-07-28 at 23:50 +0200, Albert Strasheim wrote:
> Agreed. The segfault is a separate problem that has to be addressed.
You might try to compare how stack traces are computed by the
Valgrind core, the Valgrind gdbserver, and native gdb (cf. the previous mail).
Alternatively, you might experiment by disabling various Valgrind JIT
optimisations e.g. using the options:
--vex-iropt-level=<0..2> [2]
--vex-iropt-precise-memory-exns=no|yes [no]
--vex-iropt-unroll-thresh=<0..400> [120]
--vex-guest-chase-thresh=<0..99> [10]
I think that to disable as much as possible, you have to use
0, yes, 0 and 0 respectively.
Philippe
From: Albert S. <fu...@gm...> - 2011-07-28 21:51:16
Hello

On Thu, Jul 28, 2011 at 11:46 PM, Philippe Waroquiers <phi...@sk...> wrote:
> On Thu, 2011-07-28 at 23:33 +0200, Albert Strasheim wrote:
>> Warning: Unfortunately, this client request is unreliable and best avoided.
>> Is this still the case with Valgrind in SVN?
>> If so, is this something that could be fixed relatively easily or are
>> there dragons here?
> The svn doc still contains the same warning.
> But the doc also says:
> "Use this if you're using a user-level thread package and are noticing
> spurious errors from Valgrind about uninitialized memory reads."

I'm seeing what I think are spurious errors for the Go binaries that do run to completion under Valgrind. I don't think this will fix the SIGSEGV, though.

> I am not sure a valgrind SEGV matches this.
> Do you see the same crash with other tools ?
> (e.g. callgrind/cachegrind/massif/...) ?

Agreed. The segfault is a separate problem that has to be addressed.

Regards

Albert
From: Philippe W. <phi...@sk...> - 2011-07-28 21:46:51
On Thu, 2011-07-28 at 23:33 +0200, Albert Strasheim wrote:
> Warning: Unfortunately, this client request is unreliable and best avoided.
>
> Is this still the case with Valgrind in SVN?
>
> If so, is this something that could be fixed relatively easily or are
> there dragons here?

The svn doc still contains the same warning. But the doc also says:

"Use this if you're using a user-level thread package and are noticing spurious errors from Valgrind about uninitialized memory reads."

I am not sure a valgrind SEGV matches this. Do you see the same crash with other tools (e.g. callgrind/cachegrind/massif/...)?
From: Albert S. <fu...@gm...> - 2011-07-28 21:33:40
Hello

On Thu, Jul 28, 2011 at 10:46 PM, Alexander Potapenko <gl...@go...> wrote:
> For example, goroutines may be tricky to mirror in a straightforward
> way (using threads), because there can be thousands of them, while
> Valgrind has a strict limit for the number of threads.
> Second, they use segmented stacks in Go, which are extended if needed.
> Valgrind does not know about this and may be confused (I suppose the
> segfaults in your report could denote the demand for increasing the
> stack size).

On this note, I see Valgrind has three client requests that might be useful:

VALGRIND_STACK_REGISTER
VALGRIND_STACK_DEREGISTER
VALGRIND_STACK_CHANGE

Adding these to the Go runtime should be relatively straightforward. However, the documentation at http://valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.clientreq says:

Warning: Unfortunately, this client request is unreliable and best avoided.

Is this still the case with Valgrind in SVN? If so, is this something that could be fixed relatively easily, or are there dragons here?

Regards

Albert
From: Albert S. <fu...@gm...> - 2011-07-28 21:20:03
Hello again

On Thu, Jul 28, 2011 at 11:01 PM, Albert Strasheim <fu...@gm...> wrote:
> Hello
> On Thu, Jul 28, 2011 at 10:52 PM, Rich Coe <rc...@wi...> wrote:
>> Hi,
>> Using go tip:9218 on linux x86_64, I couldn't reproduce this.
>> What about your environment makes this issue unique ?
>> Rich
> You're right! I guess I was accidentally still running 3.6.1 instead
> of the build from SVN.
> I guess bug 262916 can be closed now.
> The next thing to do is to take a closer look at the warnings produced.

I've done some more tests, and most packages work, but the 6.out from the reflect package still produces a crash at m_stacktrace.c:334.

I'm definitely running the code from SVN now.

Regards

Albert
From: Albert S. <fu...@gm...> - 2011-07-28 21:02:06
Hello

On Thu, Jul 28, 2011 at 10:52 PM, Rich Coe <rc...@wi...> wrote:
> Hi,
>
> Using go tip:9218 on linux x86_64, I couldn't reproduce this.
> What about your environment makes this issue unique ?
>
> Rich

You're right! I guess I was accidentally still running 3.6.1 instead of the build from SVN. I guess bug 262916 can be closed now. The next thing to do is to take a closer look at the warnings produced.

Thanks.

Regards

Albert
From: Albert S. <fu...@gm...> - 2011-07-28 20:57:54
Hello

On Thu, Jul 28, 2011 at 10:46 PM, Alexander Potapenko <gl...@go...> wrote:
> Running the binaries under Nullgrind (valgrind --tool=none) is better
> to start with, because Go programs are likely to trigger some
> incompatibilities in the Valgrind runtime itself, not necessarily
> Memcheck.

Thanks. I can successfully run all of the Go test programs I tried with Nullgrind.

> For example, goroutines may be tricky to mirror in a straightforward
> way (using threads), because there can be thousands of them, while
> Valgrind has a strict limit for the number of threads.

Go also spawns threads. Valgrind probably shouldn't have to care about goroutines.

> Second, they use segmented stacks in Go, which are extended if needed.
> Valgrind does not know about this and may be confused (I suppose the
> segfaults in your report could denote the demand for increasing the
> stack size).

Valgrind does say:

==20554== Warning: client switching stacks?  SP change: 0x7ff000440 --> 0x4841fb8
==20554==          to suppress, use: --max-stackframe=34267194504 or greater
==20554== Warning: client switching stacks?  SP change: 0x4841f68 --> 0x7ff000470
==20554==          to suppress, use: --max-stackframe=34267194632 or greater
==20554== Warning: client switching stacks?  SP change: 0x7ff0003f0 --> 0xf84000f050
==20554==          to suppress, use: --max-stackframe=1031882730592 or greater
==20554==          further instances of this message will not be shown.

but I still have to read through the documentation of --max-stackframe and friends more carefully.

> A side question: is there any particular reason you need Valgrind for Go?

Go programs can use C libraries through something called cgo. I'd like to be able to check this C code inside my Go binaries. Also, Valgrind might be able to find bugs in the C part of the Go runtime (goroutine scheduler, garbage collector, etc.).

Regards

Albert
From: Philippe W. <phi...@sk...> - 2011-07-28 20:53:43
> Above the code there is this comment:
>
> /* Note: re "- 1 * sizeof(UWord)", need to take account of the
>    fact that we are prodding at & ((UWord*)fp)[1] and so need to
>    adjust the limit check accordingly.  Omitting this has been
>    observed to cause segfaults on rare occasions. */
>
> so it seems Go binaries are triggering this "rare occasion".
>
> Any thoughts on how to proceed would be appreciated.

I do not know much about how Valgrind computes the stack trace of the simulated CPU, so the suggestion below might be useless, but it does not cost much to suggest :).

In 3.7.0 svn, Valgrind has an embedded gdbserver. To see what goes wrong in the stack trace of the simulated CPU, you might try to compare how the Valgrind core computes a backtrace (the one that fails) with the way gdb computes a backtrace (through the Valgrind gdbserver), and with the way gdb computes a backtrace when debugging 6.out natively. (For the last two, you will have to put a breakpoint just before the 6.out instruction which causes the problematic stack trace.)

Philippe
From: Rich C. <rc...@wi...> - 2011-07-28 20:52:44
Hi,
Using go tip:9218 on linux x86_64, I couldn't reproduce this.
What about your environment makes this issue unique ?
Rich
On Thu, 28 Jul 2011 21:02:11 +0200
Albert Strasheim <fu...@gm...> wrote:
> Hello all
>
> I have been testing Valgrind 3.6.1 and the latest from SVN with some
> Go binaries.
>
> A bug I reported against 3.6.0 still seems to exist:
>
> https://bugs.kde.org/show_bug.cgi?id=262916
>
> Running Go binaries under Valgrind leads to a segfault in
> vgPlain_get_StackTrace_wrk.
>
> To reproduce:
>
> Follow instructions at http://golang.org/doc/install.html
> cd $GOROOT/src/pkg/ebnf
> make test
> valgrind ./6.out
>
> Are there any developers that could take a look? It would be really
> nice to have Valgrind working with Go binaries that call C libraries.
>
> Thanks!
>
> Regards
>
> Albert
>
> _______________________________________________
> Valgrind-developers mailing list
> Val...@li...
> https://lists.sourceforge.net/lists/listinfo/valgrind-developers
--
Rich Coe rc...@wi...
From: Alexander P. <gl...@go...> - 2011-07-28 20:46:47
Running the binaries under Nullgrind (valgrind --tool=none) is better to start with, because Go programs are likely to trigger some incompatibilities in the Valgrind runtime itself, not necessarily Memcheck.

For example, goroutines may be tricky to mirror in a straightforward way (using threads), because there can be thousands of them, while Valgrind has a strict limit for the number of threads. Second, they use segmented stacks in Go, which are extended if needed. Valgrind does not know about this and may be confused (I suppose the segfaults in your report could denote the demand for increasing the stack size).

A side question: is there any particular reason you need Valgrind for Go?

Alex
From: Christian B. <bor...@de...> - 2011-07-28 20:30:52
Nightly build on fedora390 ( Fedora 13/14/15 mix with gcc 3.5.3 on z196 (s390x) ) Started at 2011-07-28 22:10:01 CEST Ended at 2011-07-28 22:31:07 CEST Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 493 tests, 99 stderr failures, 7 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/err_disable3 (stderr) memcheck/tests/err_disable4 (stderr) memcheck/tests/linux/timerfd-syscall (stderr) none/tests/pth_cancel1 (stderr) none/tests/pth_cancel2 (stderr) none/tests/pth_exit (stderr) none/tests/pth_exit2 (stderr) none/tests/s390x/ex_clone (stderr) helgrind/tests/locked_vs_unlocked1_fwd (stderr) helgrind/tests/locked_vs_unlocked1_rev (stderr) helgrind/tests/locked_vs_unlocked2 (stderr) helgrind/tests/locked_vs_unlocked3 (stderr) helgrind/tests/pth_barrier3 (stderr) helgrind/tests/tc06_two_races_xml (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc23_bogus_condwait (stderr) drd/tests/annotate_barrier (stderr) drd/tests/annotate_hb_race (stderr) drd/tests/annotate_hbefore (stderr) drd/tests/annotate_ignore_read (stderr) drd/tests/annotate_ignore_rw (stderr) drd/tests/annotate_ignore_rw2 (stderr) drd/tests/annotate_ignore_write (stderr) drd/tests/annotate_ignore_write2 (stderr) drd/tests/annotate_order_1 (stderr) drd/tests/annotate_order_2 (stderr) drd/tests/annotate_order_3 (stderr) drd/tests/annotate_rwlock (stderr) drd/tests/annotate_smart_pointer (stderr) drd/tests/annotate_smart_pointer2 (stderr) drd/tests/annotate_spinlock (stderr) drd/tests/annotate_static (stderr) drd/tests/annotate_trace_memory (stderr) drd/tests/atomic_var (stderr) drd/tests/bar_bad (stderr) drd/tests/bar_trivial (stdout) drd/tests/bar_trivial (stderr) drd/tests/bug-235681 (stderr) drd/tests/circular_buffer (stderr) drd/tests/fp_race (stderr) 
drd/tests/fp_race2 (stderr) drd/tests/free_is_write (stderr) drd/tests/free_is_write2 (stderr) drd/tests/hg01_all_ok (stderr) drd/tests/hg02_deadlock (stderr) drd/tests/hg03_inherit (stderr) drd/tests/hg04_race (stderr) drd/tests/hg05_race2 (stderr) drd/tests/hg06_readshared (stderr) drd/tests/linuxthreads_det (stderr) drd/tests/matinv (stdout) drd/tests/matinv (stderr) drd/tests/monitor_example (stderr) drd/tests/pth_barrier (stderr) drd/tests/pth_barrier2 (stderr) drd/tests/pth_barrier3 (stderr) drd/tests/pth_barrier_race (stderr) drd/tests/pth_broadcast (stderr) drd/tests/pth_cancel_locked (stderr) drd/tests/pth_cleanup_handler (stderr) drd/tests/pth_cond_race (stderr) drd/tests/pth_cond_race2 (stderr) drd/tests/pth_cond_race3 (stderr) drd/tests/pth_create_chain (stderr) drd/tests/pth_detached (stderr) drd/tests/pth_detached2 (stderr) drd/tests/pth_detached3 (stderr) drd/tests/pth_detached_sem (stdout) drd/tests/pth_inconsistent_cond_wait (stderr) drd/tests/pth_spinlock (stderr) drd/tests/read_and_free_race (stderr) drd/tests/rwlock_race (stderr) drd/tests/rwlock_test (stderr) drd/tests/sem_as_mutex (stderr) drd/tests/sem_as_mutex2 (stderr) drd/tests/sem_as_mutex3 (stderr) drd/tests/sem_open (stderr) drd/tests/sem_open2 (stderr) drd/tests/sem_open3 (stderr) drd/tests/sem_open_traced (stderr) drd/tests/sigalrm (stderr) drd/tests/tc01_simple_race (stderr) drd/tests/tc02_simple_tls (stderr) drd/tests/tc03_re_excl (stderr) drd/tests/tc04_free_lock (stderr) drd/tests/tc05_simple_race (stderr) drd/tests/tc06_two_races (stderr) drd/tests/tc07_hbl1 (stdout) drd/tests/tc07_hbl1 (stderr) drd/tests/tc08_hbl2 (stdout) drd/tests/tc08_hbl2 (stderr) drd/tests/tc09_bad_unlock (stderr) drd/tests/tc11_XCHG (stdout) drd/tests/tc11_XCHG (stderr) drd/tests/tc16_byterace (stderr) drd/tests/tc17_sembar (stderr) drd/tests/tc18_semabuse (stderr) drd/tests/tc19_shadowmem (stderr) drd/tests/tc21_pthonce (stdout) drd/tests/tc21_pthonce (stderr) drd/tests/tc22_exit_w_lock (stderr) 
drd/tests/tc23_bogus_condwait (stderr) drd/tests/tc24_nonzero_sem (stderr) drd/tests/thread_name (stderr) drd/tests/threaded-fork (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 492 tests, 26 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/err_disable3 (stderr) memcheck/tests/err_disable4 (stderr) memcheck/tests/linux/timerfd-syscall (stderr) none/tests/pth_cancel1 (stderr) none/tests/pth_cancel2 (stderr) none/tests/pth_exit (stderr) none/tests/pth_exit2 (stderr) none/tests/s390x/ex_clone (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/locked_vs_unlocked1_fwd (stderr) helgrind/tests/locked_vs_unlocked1_rev (stderr) helgrind/tests/locked_vs_unlocked2 (stderr) helgrind/tests/locked_vs_unlocked3 (stderr) helgrind/tests/pth_barrier1 (stderr) helgrind/tests/pth_barrier2 (stderr) helgrind/tests/pth_barrier3 (stderr) helgrind/tests/tc06_two_races_xml (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc23_bogus_condwait (stderr) drd/tests/circular_buffer (stderr) drd/tests/pth_cancel_locked (stderr) drd/tests/pth_cleanup_handler (stderr) drd/tests/tc04_free_lock (stderr) drd/tests/tc09_bad_unlock (stderr) drd/tests/tc23_bogus_condwait (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Thu Jul 28 22:21:07 2011 --- new.short Thu Jul 28 22:31:07 2011 *************** *** 8,10 **** ! == 492 tests, 26 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/err_disable3 (stderr) --- 8,10 ---- ! 
== 493 tests, 99 stderr failures, 7 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/err_disable3 (stderr) *************** *** 17,19 **** none/tests/s390x/ex_clone (stderr) - helgrind/tests/hg04_race (stderr) helgrind/tests/locked_vs_unlocked1_fwd (stderr) --- 17,18 ---- *************** *** 22,25 **** helgrind/tests/locked_vs_unlocked3 (stderr) - helgrind/tests/pth_barrier1 (stderr) - helgrind/tests/pth_barrier2 (stderr) helgrind/tests/pth_barrier3 (stderr) --- 21,22 ---- *************** *** 29,36 **** --- 26,116 ---- helgrind/tests/tc23_bogus_condwait (stderr) + drd/tests/annotate_barrier (stderr) + drd/tests/annotate_hb_race (stderr) + drd/tests/annotate_hbefore (stderr) + drd/tests/annotate_ignore_read (stderr) + drd/tests/annotate_ignore_rw (stderr) + drd/tests/annotate_ignore_rw2 (stderr) + drd/tests/annotate_ignore_write (stderr) + drd/tests/annotate_ignore_write2 (stderr) + drd/tests/annotate_order_1 (stderr) + drd/tests/annotate_order_2 (stderr) + drd/tests/annotate_order_3 (stderr) + drd/tests/annotate_rwlock (stderr) + drd/tests/annotate_smart_pointer (stderr) + drd/tests/annotate_smart_pointer2 (stderr) + drd/tests/annotate_spinlock (stderr) + drd/tests/annotate_static (stderr) + drd/tests/annotate_trace_memory (stderr) + drd/tests/atomic_var (stderr) + drd/tests/bar_bad (stderr) + drd/tests/bar_trivial (stdout) + drd/tests/bar_trivial (stderr) + drd/tests/bug-235681 (stderr) drd/tests/circular_buffer (stderr) + drd/tests/fp_race (stderr) + drd/tests/fp_race2 (stderr) + drd/tests/free_is_write (stderr) + drd/tests/free_is_write2 (stderr) + drd/tests/hg01_all_ok (stderr) + drd/tests/hg02_deadlock (stderr) + drd/tests/hg03_inherit (stderr) + drd/tests/hg04_race (stderr) + drd/tests/hg05_race2 (stderr) + drd/tests/hg06_readshared (stderr) + drd/tests/linuxthreads_det (stderr) + drd/tests/matinv (stdout) + drd/tests/matinv (stderr) + drd/tests/monitor_example (stderr) + drd/tests/pth_barrier (stderr) + 
drd/tests/pth_barrier2 (stderr) + drd/tests/pth_barrier3 (stderr) + drd/tests/pth_barrier_race (stderr) + drd/tests/pth_broadcast (stderr) drd/tests/pth_cancel_locked (stderr) drd/tests/pth_cleanup_handler (stderr) + drd/tests/pth_cond_race (stderr) + drd/tests/pth_cond_race2 (stderr) + drd/tests/pth_cond_race3 (stderr) + drd/tests/pth_create_chain (stderr) + drd/tests/pth_detached (stderr) + drd/tests/pth_detached2 (stderr) + drd/tests/pth_detached3 (stderr) + drd/tests/pth_detached_sem (stdout) + drd/tests/pth_inconsistent_cond_wait (stderr) + drd/tests/pth_spinlock (stderr) + drd/tests/read_and_free_race (stderr) + drd/tests/rwlock_race (stderr) + drd/tests/rwlock_test (stderr) + drd/tests/sem_as_mutex (stderr) + drd/tests/sem_as_mutex2 (stderr) + drd/tests/sem_as_mutex3 (stderr) + drd/tests/sem_open (stderr) + drd/tests/sem_open2 (stderr) + drd/tests/sem_open3 (stderr) + drd/tests/sem_open_traced (stderr) + drd/tests/sigalrm (stderr) + drd/tests/tc01_simple_race (stderr) + drd/tests/tc02_simple_tls (stderr) + drd/tests/tc03_re_excl (stderr) drd/tests/tc04_free_lock (stderr) + drd/tests/tc05_simple_race (stderr) + drd/tests/tc06_two_races (stderr) + drd/tests/tc07_hbl1 (stdout) + drd/tests/tc07_hbl1 (stderr) + drd/tests/tc08_hbl2 (stdout) + drd/tests/tc08_hbl2 (stderr) drd/tests/tc09_bad_unlock (stderr) + drd/tests/tc11_XCHG (stdout) + drd/tests/tc11_XCHG (stderr) + drd/tests/tc16_byterace (stderr) + drd/tests/tc17_sembar (stderr) + drd/tests/tc18_semabuse (stderr) + drd/tests/tc19_shadowmem (stderr) + drd/tests/tc21_pthonce (stdout) + drd/tests/tc21_pthonce (stderr) + drd/tests/tc22_exit_w_lock (stderr) drd/tests/tc23_bogus_condwait (stderr) + drd/tests/tc24_nonzero_sem (stderr) + drd/tests/thread_name (stderr) + drd/tests/threaded-fork (stderr) |
From: Christian B. <bor...@de...> - 2011-07-28 20:27:47
Nightly build on sless390 ( SUSE Linux Enterprise Server 11 SP1 gcc 4.3.4 on z196 (s390x) ) Started at 2011-07-28 22:05:01 CEST Ended at 2011-07-28 22:27:32 CEST Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 492 tests, 42 stderr failures, 1 stdout failure, 6 stderrB failures, 0 stdoutB failures, 0 post failures == gdbserver_tests/mcbreak (stderrB) gdbserver_tests/mcclean_after_fork (stderrB) gdbserver_tests/mcinfcallWSRU (stderrB) gdbserver_tests/mcleak (stderrB) gdbserver_tests/mssnapshot (stderrB) gdbserver_tests/nlpasssigalrm (stderrB) memcheck/tests/err_disable3 (stderr) memcheck/tests/err_disable4 (stderr) memcheck/tests/linux/timerfd-syscall (stderr) none/tests/faultstatus (stderr) helgrind/tests/locked_vs_unlocked1_fwd (stderr) helgrind/tests/locked_vs_unlocked1_rev (stderr) helgrind/tests/locked_vs_unlocked2 (stderr) helgrind/tests/locked_vs_unlocked3 (stderr) helgrind/tests/pth_barrier3 (stderr) helgrind/tests/tc06_two_races_xml (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc23_bogus_condwait (stderr) drd/tests/annotate_ignore_rw (stderr) drd/tests/annotate_order_1 (stderr) drd/tests/annotate_order_2 (stderr) drd/tests/annotate_rwlock (stderr) drd/tests/annotate_smart_pointer (stderr) drd/tests/annotate_spinlock (stderr) drd/tests/annotate_static (stderr) drd/tests/circular_buffer (stderr) drd/tests/free_is_write (stderr) drd/tests/hg01_all_ok (stderr) drd/tests/hg06_readshared (stderr) drd/tests/matinv (stdout) drd/tests/matinv (stderr) drd/tests/monitor_example (stderr) drd/tests/pth_barrier3 (stderr) drd/tests/pth_cancel_locked (stderr) drd/tests/pth_cleanup_handler (stderr) drd/tests/pth_cond_race (stderr) drd/tests/pth_create_chain (stderr) drd/tests/pth_spinlock (stderr) drd/tests/rwlock_test (stderr) drd/tests/sem_open3 (stderr) 
drd/tests/tc04_free_lock (stderr) drd/tests/tc09_bad_unlock (stderr) drd/tests/tc16_byterace (stderr) drd/tests/tc17_sembar (stderr) drd/tests/tc19_shadowmem (stderr) drd/tests/tc21_pthonce (stderr) drd/tests/tc23_bogus_condwait (stderr) drd/tests/thread_name (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 491 tests, 19 stderr failures, 0 stdout failures, 6 stderrB failures, 0 stdoutB failures, 0 post failures == gdbserver_tests/mcbreak (stderrB) gdbserver_tests/mcclean_after_fork (stderrB) gdbserver_tests/mcinfcallWSRU (stderrB) gdbserver_tests/mcleak (stderrB) gdbserver_tests/mssnapshot (stderrB) gdbserver_tests/nlpasssigalrm (stderrB) memcheck/tests/err_disable3 (stderr) memcheck/tests/err_disable4 (stderr) memcheck/tests/linux/timerfd-syscall (stderr) none/tests/faultstatus (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/locked_vs_unlocked1_fwd (stderr) helgrind/tests/locked_vs_unlocked1_rev (stderr) helgrind/tests/locked_vs_unlocked2 (stderr) helgrind/tests/locked_vs_unlocked3 (stderr) helgrind/tests/pth_barrier1 (stderr) helgrind/tests/pth_barrier2 (stderr) helgrind/tests/pth_barrier3 (stderr) helgrind/tests/tc06_two_races_xml (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc23_bogus_condwait (stderr) drd/tests/tc04_free_lock (stderr) drd/tests/tc09_bad_unlock (stderr) drd/tests/tc23_bogus_condwait (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Thu Jul 28 22:17:37 2011 --- new.short Thu Jul 28 22:27:32 2011 *************** *** 8,10 **** ! 
== 491 tests, 19 stderr failures, 0 stdout failures, 6 stderrB failures, 0 stdoutB failures, 0 post failures == gdbserver_tests/mcbreak (stderrB) --- 8,10 ---- ! == 492 tests, 42 stderr failures, 1 stdout failure, 6 stderrB failures, 0 stdoutB failures, 0 post failures == gdbserver_tests/mcbreak (stderrB) *************** *** 19,21 **** none/tests/faultstatus (stderr) - helgrind/tests/hg04_race (stderr) helgrind/tests/locked_vs_unlocked1_fwd (stderr) --- 19,20 ---- *************** *** 24,27 **** helgrind/tests/locked_vs_unlocked3 (stderr) - helgrind/tests/pth_barrier1 (stderr) - helgrind/tests/pth_barrier2 (stderr) helgrind/tests/pth_barrier3 (stderr) --- 23,24 ---- *************** *** 31,35 **** --- 28,59 ---- helgrind/tests/tc23_bogus_condwait (stderr) + drd/tests/annotate_ignore_rw (stderr) + drd/tests/annotate_order_1 (stderr) + drd/tests/annotate_order_2 (stderr) + drd/tests/annotate_rwlock (stderr) + drd/tests/annotate_smart_pointer (stderr) + drd/tests/annotate_spinlock (stderr) + drd/tests/annotate_static (stderr) + drd/tests/circular_buffer (stderr) + drd/tests/free_is_write (stderr) + drd/tests/hg01_all_ok (stderr) + drd/tests/hg06_readshared (stderr) + drd/tests/matinv (stdout) + drd/tests/matinv (stderr) + drd/tests/monitor_example (stderr) + drd/tests/pth_barrier3 (stderr) + drd/tests/pth_cancel_locked (stderr) + drd/tests/pth_cleanup_handler (stderr) + drd/tests/pth_cond_race (stderr) + drd/tests/pth_create_chain (stderr) + drd/tests/pth_spinlock (stderr) + drd/tests/rwlock_test (stderr) + drd/tests/sem_open3 (stderr) drd/tests/tc04_free_lock (stderr) drd/tests/tc09_bad_unlock (stderr) + drd/tests/tc16_byterace (stderr) + drd/tests/tc17_sembar (stderr) + drd/tests/tc19_shadowmem (stderr) + drd/tests/tc21_pthonce (stderr) drd/tests/tc23_bogus_condwait (stderr) + drd/tests/thread_name (stderr) |
From: Albert S. <fu...@gm...> - 2011-07-28 20:21:53
Hello again

On Thu, Jul 28, 2011 at 9:02 PM, Albert Strasheim <fu...@gm...> wrote:
> I have been testing Valgrind 3.6.1 and the latest from SVN with some
> Go binaries.
> A bug I reported against 3.6.0 still seems to exist:
> https://bugs.kde.org/show_bug.cgi?id=262916
> Running Go binaries under Valgrind leads to a segfault in
> vgPlain_get_StackTrace_wrk.

The line that crashes is this:

m_stacktrace.c:334: uregs.xip = (((UWord*)uregs.xbp)[1]);

(gdb) p uregs
$1 = {xip = 1066225967935, xsp = 75771824, xbp = 4294966222}

Above the code there is this comment:

/* Note: re "- 1 * sizeof(UWord)", need to take account of the
   fact that we are prodding at & ((UWord*)fp)[1] and so need to
   adjust the limit check accordingly.  Omitting this has been
   observed to cause segfaults on rare occasions. */

so it seems Go binaries are triggering this "rare occasion".

Any thoughts on how to proceed would be appreciated. Thanks!

Regards

Albert
From: Albert S. <fu...@gm...> - 2011-07-28 19:02:38
|
Hello all

I have been testing Valgrind 3.6.1 and the latest from SVN with some Go
binaries. A bug I reported against 3.6.0 still seems to exist:
https://bugs.kde.org/show_bug.cgi?id=262916

Running Go binaries under Valgrind leads to a segfault in
vgPlain_get_StackTrace_wrk.

To reproduce:

1. Follow the instructions at http://golang.org/doc/install.html
2. cd $GOROOT/src/pkg/ebnf
3. make test
4. valgrind ./6.out

Are there any developers who could take a look? It would be really nice
to have Valgrind working with Go binaries that call C libraries.

Thanks!

Regards

Albert
|
From: <sv...@va...> - 2011-07-28 18:51:30
|
Author: bart
Date: 2011-07-28 19:46:38 +0100 (Thu, 28 Jul 2011)
New Revision: 11934
Log:
Yet another threading tool regression test scheduler sensitivity fix
Modified:
trunk/helgrind/tests/tc01_simple_race.c
Modified: trunk/helgrind/tests/tc01_simple_race.c
===================================================================
--- trunk/helgrind/tests/tc01_simple_race.c 2011-07-28 18:06:44 UTC (rev 11933)
+++ trunk/helgrind/tests/tc01_simple_race.c 2011-07-28 18:46:38 UTC (rev 11934)
@@ -17,13 +17,13 @@
int main ( void )
{
+ const struct timespec delay = { 0, 100 * 1000 * 1000 };
pthread_t child;
-
if (pthread_create(&child, NULL, child_fn, NULL)) {
perror("pthread_create");
exit(1);
}
-
+ nanosleep(&delay, 0);
/* Unprotected relative to child */
x++;
|
|
From: Christian B. <bor...@de...> - 2011-07-28 18:49:20
|
On 28/07/11 18:41, Florian Krohm wrote:
> On 07/28/2011 12:21 PM, Bart Van Assche wrote:
>> I've noticed that the "diff" attachment is missing from the nightly
>> build output e-mailed from the systems administered by Christian?
>
> Yeah, I noticed this myself. And the svn revision number is missing,
> too. Ah well, I'm sure Christian will fix it when he's back from his
> leave in 3 weeks.

Just checked my mail. I tried a short hack on the sles system. Let's see
if that worked out.

I have to look at tc04 and tc09, but tc23 is broken even without
Valgrind:

    [...]
    r = pthread_cond_wait(&cv, (pthread_mutex_t*)(1 + (char*)&mx[0]));
    [...]

This causes a compare-and-swap instruction on a misaligned address, which
raises a SIGILL.
|
From: <sv...@va...> - 2011-07-28 18:11:33
|
Author: bart
Date: 2011-07-28 19:06:44 +0100 (Thu, 28 Jul 2011)
New Revision: 11933
Log:
Verify drd/tests/pth_detached stderr output instead of the stdout output.
Modified:
trunk/drd/tests/pth_detached.c
trunk/drd/tests/pth_detached.stderr.exp
trunk/drd/tests/pth_detached.stdout.exp
trunk/drd/tests/pth_detached.vgtest
trunk/drd/tests/pth_detached2.stderr.exp
trunk/drd/tests/pth_detached2.stdout.exp
trunk/drd/tests/pth_detached2.vgtest
Modified: trunk/drd/tests/pth_detached.c
===================================================================
--- trunk/drd/tests/pth_detached.c 2011-07-28 17:48:48 UTC (rev 11932)
+++ trunk/drd/tests/pth_detached.c 2011-07-28 18:06:44 UTC (rev 11933)
@@ -80,6 +80,7 @@
pthread_mutex_destroy(&s_mutex);
write(STDOUT_FILENO, "\n", 1);
+ fprintf(stderr, "Done.\n");
return 0;
}
Modified: trunk/drd/tests/pth_detached.stderr.exp
===================================================================
--- trunk/drd/tests/pth_detached.stderr.exp 2011-07-28 17:48:48 UTC (rev 11932)
+++ trunk/drd/tests/pth_detached.stderr.exp 2011-07-28 18:06:44 UTC (rev 11933)
@@ -1,3 +1,4 @@
+Done.
ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Modified: trunk/drd/tests/pth_detached.stdout.exp
===================================================================
--- trunk/drd/tests/pth_detached.stdout.exp 2011-07-28 17:48:48 UTC (rev 11932)
+++ trunk/drd/tests/pth_detached.stdout.exp 2011-07-28 18:06:44 UTC (rev 11933)
@@ -1 +0,0 @@
-..
Modified: trunk/drd/tests/pth_detached.vgtest
===================================================================
--- trunk/drd/tests/pth_detached.vgtest 2011-07-28 17:48:48 UTC (rev 11932)
+++ trunk/drd/tests/pth_detached.vgtest 2011-07-28 18:06:44 UTC (rev 11933)
@@ -1,3 +1,4 @@
prereq: ./supported_libpthread
prog: pth_detached
args: 1 1
+stdout_filter: ../../tests/filter_sink
Modified: trunk/drd/tests/pth_detached2.stderr.exp
===================================================================
--- trunk/drd/tests/pth_detached2.stderr.exp 2011-07-28 17:48:48 UTC (rev 11932)
+++ trunk/drd/tests/pth_detached2.stderr.exp 2011-07-28 18:06:44 UTC (rev 11933)
@@ -1,3 +1,4 @@
+Done.
ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Modified: trunk/drd/tests/pth_detached2.stdout.exp
===================================================================
--- trunk/drd/tests/pth_detached2.stdout.exp 2011-07-28 17:48:48 UTC (rev 11932)
+++ trunk/drd/tests/pth_detached2.stdout.exp 2011-07-28 18:06:44 UTC (rev 11933)
@@ -1 +0,0 @@
-....................
Modified: trunk/drd/tests/pth_detached2.vgtest
===================================================================
--- trunk/drd/tests/pth_detached2.vgtest 2011-07-28 17:48:48 UTC (rev 11932)
+++ trunk/drd/tests/pth_detached2.vgtest 2011-07-28 18:06:44 UTC (rev 11933)
@@ -2,3 +2,4 @@
vgopts: --read-var-info=yes
prog: pth_detached
args: 10 10
+stdout_filter: ../../tests/filter_sink
|
From: <sv...@va...> - 2011-07-28 17:53:36
|
Author: bart
Date: 2011-07-28 18:48:48 +0100 (Thu, 28 Jul 2011)
New Revision: 11932
Log:
Yet another regression test scheduling sensitivity fix
Modified:
trunk/helgrind/tests/tc05_simple_race.c
Modified: trunk/helgrind/tests/tc05_simple_race.c
===================================================================
--- trunk/helgrind/tests/tc05_simple_race.c 2011-07-28 17:41:49 UTC (rev 11931)
+++ trunk/helgrind/tests/tc05_simple_race.c 2011-07-28 17:48:48 UTC (rev 11932)
@@ -22,13 +22,13 @@
int main ( void )
{
+ const struct timespec delay = { 0, 100 * 1000 * 1000 };
pthread_t child;
-
if (pthread_create(&child, NULL, child_fn, NULL)) {
perror("pthread_create");
exit(1);
}
-
+ nanosleep(&delay, 0);
/* "Thread 1" in the paper */
y = y + 1;
pthread_mutex_lock( &mu );
|
|
From: <sv...@va...> - 2011-07-28 17:46:38
|
Author: bart
Date: 2011-07-28 18:41:49 +0100 (Thu, 28 Jul 2011)
New Revision: 11931
Log:
Two more scheduler sensitivity fixes for thread tool regression tests
Modified:
trunk/drd/tests/annotate_hb_race.c
trunk/helgrind/tests/tc16_byterace.c
Modified: trunk/drd/tests/annotate_hb_race.c
===================================================================
--- trunk/drd/tests/annotate_hb_race.c 2011-07-28 17:40:49 UTC (rev 11930)
+++ trunk/drd/tests/annotate_hb_race.c 2011-07-28 17:41:49 UTC (rev 11931)
@@ -32,11 +32,11 @@
U_ANNOTATE_HAPPENS_BEFORE(&s_i);
pthread_create(&tid[0], 0, thread_func, &result[0]);
- pthread_create(&tid[1], 0, thread_func, &result[1]);
+ //pthread_create(&tid[1], 0, thread_func, &result[1]);
s_i = 1;
pthread_join(tid[0], NULL);
- pthread_join(tid[1], NULL);
+ //pthread_join(tid[1], NULL);
fprintf(stderr, "Done.\n");
Modified: trunk/helgrind/tests/tc16_byterace.c
===================================================================
--- trunk/helgrind/tests/tc16_byterace.c 2011-07-28 17:40:49 UTC (rev 11930)
+++ trunk/helgrind/tests/tc16_byterace.c 2011-07-28 17:41:49 UTC (rev 11931)
@@ -16,14 +16,14 @@
int main ( void )
{
+ const struct timespec delay = { 0, 100 * 1000 * 1000 };
int i;
pthread_t child;
-
if (pthread_create(&child, NULL, child_fn, NULL)) {
perror("pthread_create");
exit(1);
}
-
+ nanosleep(&delay, 0);
/* Unprotected relative to child, but harmless, since different
bytes accessed */
for (i = 0; i < 5; i++)
|
|
From: <sv...@va...> - 2011-07-28 17:45:38
|
Author: bart
Date: 2011-07-28 18:40:49 +0100 (Thu, 28 Jul 2011)
New Revision: 11930
Log:
Micro-optimize the matinv regression test
Modified:
trunk/drd/tests/matinv.c
Modified: trunk/drd/tests/matinv.c
===================================================================
--- trunk/drd/tests/matinv.c 2011-07-28 15:04:08 UTC (rev 11929)
+++ trunk/drd/tests/matinv.c 2011-07-28 17:40:49 UTC (rev 11930)
@@ -173,6 +173,7 @@
elem_t* const a = p->a;
const int rows = p->rows;
const int cols = p->cols;
+ elem_t aii;
for (i = 0; i < p->rows; i++)
{
@@ -197,13 +198,10 @@
}
}
// Normalize row i.
- if (a[i * cols + i] != 0)
- {
- for (k = cols - 1; k >= 0; k--)
- {
- a[i * cols + k] /= a[i * cols + i];
- }
- }
+ aii = a[i * cols + i];
+ if (aii != 0)
+ for (k = i; k < cols; k++)
+ a[i * cols + k] /= aii;
}
pthread_barrier_wait(p->b);
// Reduce all rows j != i.
|
|
From: Florian K. <br...@ac...> - 2011-07-28 16:41:40
|
On 07/28/2011 12:21 PM, Bart Van Assche wrote:
> I've noticed that the "diff" attachment is missing from the nightly
> build output e-mailed from the systems administered by Christian?

Yeah, I noticed this myself. And the svn revision number is missing, too.
Ah well, I'm sure Christian will fix it when he's back from his leave in
3 weeks.

Florian
|
From: Bart V. A. <bva...@ac...> - 2011-07-28 16:22:14
|
On Thu, Jul 28, 2011 at 6:07 PM, Florian Krohm <br...@ac...> wrote:
> I don't doubt it. The nightly system z build, which happens on a z196
> (the latest model), shows only 3 test case failures in DRD:
>
> drd/tests/tc04_free_lock (stderr)
> drd/tests/tc09_bad_unlock (stderr)
> drd/tests/tc23_bogus_condwait (stderr)
>
> Not sure what the nature of the failure is.
> Christian has done more testing with DRD on newer system z models and
> feels that it runs well on those. This is what we state in README.s390,
> so the problem I'm seeing is more likely elsewhere.

I've noticed that the "diff" attachment is missing from the nightly build
output e-mailed from the systems administered by Christian. Having that
attachment sent along with the nightly build output itself would make it
possible to analyze regression test failures without having to e-mail the
administrator of those systems for more information.

Bart.
|
From: Florian K. <br...@ac...> - 2011-07-28 16:07:19
|
On 07/28/2011 11:11 AM, Bart Van Assche wrote:
> Strange. I've never noticed anything similar on any of the systems I
> have access to (Linux/i386, Linux/x86_64, Linux/ppc and Darwin). On
> all these systems DRD runs fine - not only the regression tests but
> also real-world multithreaded applications.

I don't doubt it. The nightly system z build, which happens on a z196
(the latest model), shows only 3 test case failures in DRD:

drd/tests/tc04_free_lock (stderr)
drd/tests/tc09_bad_unlock (stderr)
drd/tests/tc23_bogus_condwait (stderr)

Not sure what the nature of the failures is. Christian has done more
testing with DRD on newer system z models and feels that it runs well on
those; this is what we state in README.s390. So the problem I'm seeing is
more likely elsewhere.

Florian