From: Nicholas N. <nj...@cs...> - 2006-08-27 22:32:37
|
On Sat, 26 Aug 2006, Bart Van Assche wrote:

> Julian, you made the following comment about drd:
>
>> - Only print a stack trace if the data address is identified as being
>> in a heap block. I tried to find where in your code this is done,
>> but failed.
>
> This is the code that does the error reporting (drd_bitmap3.c):
>
>   VG_(maybe_record_error)(tid1, DataRaceErr, VG_(get_IP)(tid1),
>                           "data race", &dri);
>
> It was also my desire not to print a stack trace in this case, and have
> each data race counted as an error. Is this possible with Valgrind?

I don't think so. You could add a function to m_errormgr.c that did this,
e.g. VG_(maybe_record_error_no_stacktrace).

> By this time I have replaced the above statement by a call to
> VG_(message)(...).

Nick
|
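The m_errormgr.c machinery is internal to Valgrind, but the behaviour discussed above (count each data race as an error without capturing a stack trace) can be sketched in plain C. Everything below is illustrative: the function name, the fixed-size table, and the (kind, address) deduplication key are assumptions for the sketch, not Valgrind's actual implementation.

```c
#include <stdio.h>
#include <string.h>

#define MAX_ERRORS 64   /* illustrative cap, not a Valgrind constant */

typedef struct {
    const char   *kind;    /* e.g. "data race" */
    unsigned long addr;    /* offending data address */
    unsigned      count;   /* how many times this error was seen */
} Error;

static Error err_table[MAX_ERRORS];
static int   n_errs = 0;

/* Record an error identified only by (kind, address).  The first time a
   given error is seen, print a one-line report and return 1; afterwards
   just bump the duplicate count and return 0.  No stack trace is
   captured at any point. */
int maybe_record_error_no_stacktrace(const char *kind, unsigned long addr)
{
    for (int i = 0; i < n_errs; i++) {
        if (err_table[i].addr == addr
            && strcmp(err_table[i].kind, kind) == 0) {
            err_table[i].count++;
            return 0;
        }
    }
    if (n_errs < MAX_ERRORS) {
        err_table[n_errs].kind  = kind;
        err_table[n_errs].addr  = addr;
        err_table[n_errs].count = 1;
        n_errs++;
        printf("%s at 0x%lx\n", kind, addr);  /* report once, no backtrace */
    }
    return 1;
}
```

The error-summary line at exit would then report the accumulated counts, which is what "have each data race counted as an error" asks for.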
From: Julian S. <js...@ac...> - 2006-08-27 18:06:20
|
> You reported trouble when running knode under drd.

For me, knode runs stably, but hangs at exit (100% cpu, memory use high
but constant).

> an effect that is also explained in the papers on DIOTA. The solution
> is to change thread_discard_ordered_segments() such that if the number
> of segments associated with a thread becomes too large, these segments
> are merged (bitwise or) into a single segment.

Yes. That sounds like a useful optimisation to add at some point. I was
more concerned about whether the various races that drd reports for knode
are real or not, considering that some synchronisation points may be
missed. One thing you could do is add dummy wrappers for the missing
functions, which simply assert and bomb the system when called. Then we
will know for sure which ones do need to be implemented.

> The following functions still have to be intercepted:
> pthread_mutex_trylock(), the rwlock family and the barrier-related
> functions. The last two families of functions are AFAIK not that
> frequently used.

What about the sem_init, sem_wait, sem_trywait, sem_post, sem_getvalue,
sem_destroy set of functions?

J
|
From: Julian S. <js...@ac...> - 2006-08-27 17:52:24
|
> These wrappers certainly can be re-enabled. One reason why I disabled
> them is that the algorithm in drd for reporting allocation call stacks
> is not fool-proof: e.g. when free() is called after a race occurred but
> before it is reported, then no call stack will be printed.

True; but given that we need all the info we can get to make sense of
race addresses, and also considering the overhead isn't that large, I
don't care if the algorithm isn't foolproof. If it turns out to be a
problem we can easily enough add a freed blocks queue, like memcheck has,
so then it will be able to print a backtrace for addresses in recently
freed blocks.

J
|
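The "freed blocks queue" idea above can be sketched in a few lines of C. This is a hypothetical illustration of the mechanism, not memcheck's code: names, the FIFO length, and the lookup interface are all assumptions. Freed blocks are parked in a quarantine instead of being released immediately, so a later report can still attribute an address to a recently freed allocation.

```c
#include <stdlib.h>

enum { QUEUE_LEN = 4 };  /* illustrative; memcheck sizes its queue differently */

typedef struct {
    void  *addr;
    size_t size;
} FreedBlock;

static FreedBlock fq[QUEUE_LEN];
static int fq_head = 0, fq_count = 0;

/* Instead of free(p), park the block in a FIFO quarantine; only when
   the queue is full is the oldest entry really freed.  Until then its
   address range can still be mapped back to the allocation. */
void quarantine_free(void *p, size_t size)
{
    if (fq_count == QUEUE_LEN) {                /* evict oldest entry */
        free(fq[fq_head].addr);
        fq_head = (fq_head + 1) % QUEUE_LEN;
        fq_count--;
    }
    int tail = (fq_head + fq_count) % QUEUE_LEN;
    fq[tail].addr = p;
    fq[tail].size = size;
    fq_count++;
}

/* Map an address to a recently freed block, if it falls inside one. */
const FreedBlock *find_freed_block(const void *a)
{
    for (int i = 0; i < fq_count; i++) {
        const FreedBlock *b = &fq[(fq_head + i) % QUEUE_LEN];
        if ((const char *)a >= (const char *)b->addr
            && (const char *)a < (const char *)b->addr + b->size)
            return b;
    }
    return NULL;
}
```

A race reporter would call find_freed_block() when an address is not in any live heap block, and print the freed block's allocation stack if a hit is found.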
From: Bart V. A. <bar...@gm...> - 2006-08-27 16:26:37
|
Changes:
- fixed pthread_cond_wait() wrapper.
- fixed bug in error reporting: data symbols were always reported to have
  size 1, and it was always reported that races occurred at offset zero.
- compiles again.
- updated to-do list.

See also:
http://home.scarlet.be/~bvassche/drd/valgrind-6012-core-2006-08-27.patch
http://home.scarlet.be/~bvassche/drd/valgrind-6012-drd-2006-08-27.tar.bz2
|
From: Bart V. A. <bar...@gm...> - 2006-08-27 15:29:32
|
On 8/27/06, Julian Seward <js...@ac...> wrote:

> On Saturday 26 August 2006 17:22, Bart Van Assche wrote:
>
>> - Disabled malloc()/free() and friends wrappers.
>
> Maybe they could be re-enabled? Wrapping malloc/free provides useful
> extra info on the location of contended addresses located inside heap
> blocks, and as per my patch it is no longer a big space overhead.

These wrappers certainly can be re-enabled. One reason why I disabled
them is that the algorithm in drd for reporting allocation call stacks is
not fool-proof: e.g. when free() is called after a race occurred but
before it is reported, then no call stack will be printed.
|
From: Bart V. A. <bar...@gm...> - 2006-08-27 15:26:46
|
On 8/27/06, Julian Seward <js...@ac...> wrote:

> I have a wider question. I guess it is critical, for avoiding false
> errors, that the tool sees all synchronisation points, else it will be
> comparing segments that it should not compare, and so may report false
> errors.

Correct.

> So the question is: how do we know what subset of the pthread_
> functions it is necessary to intercept in order to make the tool
> reasonably robust? I see you intercept a few ..
>
>   create, join, mutex_init, mutex_destroy, mutex_lock,
>   mutex_unlock, mutex_trylock, cond_wait
>
> but surely there are a lot more that need to be intercepted?

The following functions still have to be intercepted:
pthread_mutex_trylock(), the rwlock family and the barrier-related
functions. The last two families of functions are AFAIK not that
frequently used. The following functions do not have to be intercepted:
pthread_cond_signal() and pthread_cond_broadcast().

> Now that we have a basic framework, how much of that pthread
> intercept stuff can be imported from Diota (Diota is GPLd, right?)

DIOTA is indeed GPL'd. The drd code is not based on the DIOTA code,
however -- I started from the DIOTA papers. Implementing interception for
the remaining synchronization functions is not that hard.
|
From: Bart V. A. <bar...@gm...> - 2006-08-27 15:10:21
|
Hello Julian,

I did not analyze the libio locking code in glibc in detail, but I think
the purpose of the _IO_lock_lock() macro and friends is speed: compared
to pthread_mutex_lock() and friends, _IO_lock_lock() only performs a
function call when it has to block. If the lock is free, it proceeds
without having to do any function call. There might be a problem,
however, with _IO_lock_lock() and _IO_lock_unlock(): these macros assume
that C's increment and decrement operators are atomic. I'm not sure that
this is guaranteed on multiprocessors.
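The concern above is well founded: a plain `++` on a shared counter compiles to a load/add/store sequence, so two processors can interleave and lose an update, whereas an atomic read-modify-write cannot. The experiment below is an illustrative sketch using C11 atomics (which postdate the glibc code under discussion); the function names and thread/iteration counts are made up for the demonstration. Only the atomic counter has a guaranteed final value -- whether the plain counter actually loses updates depends on the machine.

```c
#include <pthread.h>
#include <stdatomic.h>

enum { NTHREADS = 4, ITERS = 100000 };

static int        plain_cnt  = 0;  /* plain ++: separate load, add, store */
static atomic_int atomic_cnt = 0;  /* indivisible read-modify-write */

/* Each worker bumps both counters ITERS times.  Concurrent plain_cnt++
   operations can interleave and lose updates; atomic_fetch_add cannot. */
static void *bump_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        plain_cnt++;                       /* racy on multiprocessors */
        atomic_fetch_add(&atomic_cnt, 1);  /* always safe */
    }
    return NULL;
}

/* Run NTHREADS workers to completion and report both final counts. */
void run_counter_experiment(int *plain_out, int *atomic_out)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, bump_worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    *plain_out  = plain_cnt;
    *atomic_out = atomic_cnt;
}
```

On a multiprocessor, plain_cnt often ends up below NTHREADS*ITERS, which is exactly the kind of lost update that would make _IO_lock_t::cnt unreliable if two threads ever reached the increment concurrently.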
On 8/27/06, Julian Seward <js...@ac...> wrote:
>
> On Sunday 27 August 2006 15:25, Bart Van Assche wrote:
> > Hello Julian,
> >
> > You asked me to look at the drd output for pth_once. Can you please
> be
> > more specific ?
>
> What I originally meant was, can you look at why drd asserted. But since
> that's solved now, investigating the errors from glibc is also useful.
>
> > By this time I looked up why drd complains about
> > _IO_stdfile_1_lock. The reason is simple: at least in glibc 2.3.5,
> > _IO_lock_t::cnt and _IO_lock_t::owner are accessed in a
> > non-threadsafe way
> > (_IO_lock_t is a member of _IO_FILE_plus, the datatype of
> > stdin/stdout/stderr). [...]
>
> That's very interesting. Is it a glibc bug or did the glibc authors
> intend that, do you know?
>
|
|
From: Bart V. A. <bar...@gm...> - 2006-08-27 14:59:25
|
Hello Julian,
You reported trouble when running knode under drd. When I tried to run
knode under drd with segment tracing enabled (inst/bin/valgrind --tool=drd
--trace-segment=yes knode), I observed the following behavior:
- As long as knode was running single-threaded, everything went fine (only
one segment was kept in memory).
- After knode created a second thread, the number of segments associated
with the first thread kept increasing, while there was only one segment
associated with the second thread.
Result: memory use kept increasing, and knode got killed by the OOM handler.
I'm not familiar with the source code of knode, but from the vector clocks I
can see that no synchronization actions are performed by thread two (vector
clock component for thread 2 remains at 1 all the time). That is why the
number of segments for the first thread keeps increasing. This is an effect
that is also explained in the papers on DIOTA. The solution is to change
thread_discard_ordered_segments() such that if the number of segments
associated with a thread becomes too large, that these segments are merged
(bitwise or) into a single segment. The result is that data race reports
become less precise but that memory consumption stays within reasonable
bounds. See also the paragraph called "Merging Segments" in the paper "An
Efficient Data Race Detector Backend for DIOTA".
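The segment-merging fix described above amounts to OR-ing access bitmaps together once a thread accumulates too many segments. The sketch below is a hypothetical illustration of that idea, not drd's code: the struct layout, the bitmap size, and the threshold are all made-up names for the demonstration.

```c
#include <string.h>

enum { BITMAP_WORDS = 4, MAX_SEGMENTS = 8 };  /* illustrative sizes */

/* One segment's record of which (coarsely bucketed) data addresses it
   accessed: one bit per bucket. */
typedef struct {
    unsigned long accessed[BITMAP_WORDS];
} Segment;

/* Merging two segments: an address counts as accessed by the merged
   segment iff either input touched it, which is exactly bitwise OR. */
static void segment_merge(Segment *dst, const Segment *src)
{
    for (int i = 0; i < BITMAP_WORDS; i++)
        dst->accessed[i] |= src->accessed[i];
}

/* If a thread has accumulated more than MAX_SEGMENTS segments, collapse
   them all into segments[0] and return the new count.  Race reports get
   less precise (we no longer know which segment did each access), but
   memory consumption stays bounded. */
int discard_or_merge_segments(Segment *segments, int n)
{
    if (n <= MAX_SEGMENTS)
        return n;
    for (int i = 1; i < n; i++)
        segment_merge(&segments[0], &segments[i]);
    return 1;
}
```

In the knode scenario, thread one's ever-growing segment list would be collapsed this way whenever it crossed the threshold, keeping memory use flat even though thread two never synchronizes.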
|
|
From: Julian S. <js...@ac...> - 2006-08-27 14:58:22
|
On Sunday 27 August 2006 15:25, Bart Van Assche wrote:

> Hello Julian,
>
> You asked me to look at the drd output for pth_once. Can you please be
> more specific ?

What I originally meant was, can you look at why drd asserted. But since
that's solved now, investigating the errors from glibc is also useful.

> By this time I looked up why drd complains about _IO_stdfile_1_lock.
> The reason is simple: at least in glibc 2.3.5, _IO_lock_t::cnt and
> _IO_lock_t::owner are accessed in a non-threadsafe way (_IO_lock_t is
> a member of _IO_FILE_plus, the datatype of stdin/stdout/stderr). [...]

That's very interesting. Is it a glibc bug or did the glibc authors
intend that, do you know?

J
|
From: Bart V. A. <bar...@gm...> - 2006-08-27 14:25:54
|
Hello Julian,

You asked me to look at the drd output for pth_once. Can you please be
more specific? By this time I looked up why drd complains about
_IO_stdfile_1_lock. The reason is simple: at least in glibc 2.3.5,
_IO_lock_t::cnt and _IO_lock_t::owner are accessed in a non-threadsafe
way (_IO_lock_t is a member of _IO_FILE_plus, the datatype of
stdin/stdout/stderr). I did not yet try to analyze the other reported
data races.
From glibc-2.3.5/nptl/sysdeps/pthread/bits/stdio-lock.h:

/* The locking here is very inexpensive, even for inlining.  */
#define _IO_lock_inexpensive  1

typedef struct { int lock; int cnt; void *owner; } _IO_lock_t;

#define _IO_lock_lock(_name)                  \
  do {                                        \
    void *__self = THREAD_SELF;               \
    if ((_name).owner != __self)              \
      {                                       \
        lll_lock ((_name).lock);              \
        (_name).owner = __self;               \
      }                                       \
    ++(_name).cnt;                            \
  } while (0)
$ inst/bin/valgrind --tool=drd none/tests/pth_once
==7955== drd, a data race detector.
==7955== Copyright (C) 2006, and GNU GPL'd, by Bart Van Assche.
THIS SOFTWARE IS A PROTOTYPE, AND IS NOT YET RELEASED
==7955== Using LibVEX rev 1579, a library for dynamic binary translation.
==7955== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP.
==7955== Using valgrind-3.3.0.SVN, a dynamic binary instrumentation
framework.
==7955== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et al.
==7955== For more details, rerun with: -v
==7955==
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955== by 0x403E9BC: pthread_create@@GLIBC_2.1 (in /lib/libpthread-
2.4.so)
==7955== by 0x401CAB3: pthread_create@* (vg_preloaded.c:135)
==7955== by 0x8048693: main (pth_once.c:65)
==7955==
==7955== 1st segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955== by 0x403E9BC: pthread_create@@GLIBC_2.1 (in /lib/libpthread-
2.4.so)
==7955== by 0x401CAB3: pthread_create@* (vg_preloaded.c:135)
==7955== by 0x8048693: main (pth_once.c:65)
==7955==
==7955== 2nd segment start (VG t 2, kernel t 7956, POSIX t 76999584)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment end (VG t 2, kernel t 7956, POSIX t 76999584)
==7955== at 0x40FB7FB: (within /lib/libc-2.4.so)
==7955== by 0x40AC654: new_do_write (in /lib/libc-2.4.so)
==7955== by 0x40AC90E: _IO_do_write@@GLIBC_2.1 (in /lib/libc-2.4.so)
==7955== by 0x40AD237: _IO_file_overflow@@GLIBC_2.1 (in /lib/libc-2.4.so)
==7955== by 0x40AFA92: __overflow (in /lib/libc-2.4.so)
==7955== by 0x40A400C: puts (in /lib/libc-2.4.so)
==7955== by 0x80485C5: welcome (pth_once.c:37)
==7955== by 0x40428BA: pthread_once (in /lib/libpthread-2.4.so)
==7955== by 0x401CB3E: vg_thread_wrapper (vg_preloaded.c:109)
==7955== by 0x403E34A: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0xBEF14FD0 sz 8 W R (stack of VG t 1; kernel t 7955; POSIX t
68605616)
==7955== 0xBEF14FD8 sz 4 W W (stack of VG t 1; kernel t 7955; POSIX t
68605616)
welcome: Welcome
identify_yourself: Hi, I'm a thread
identify_yourself: Hi, I'm a thread
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955== by 0x403E9BC: pthread_create@@GLIBC_2.1 (in /lib/libpthread-
2.4.so)
==7955== by 0x401CAB3: pthread_create@* (vg_preloaded.c:135)
==7955== by 0x8048693: main (pth_once.c:65)
==7955==
==7955== 1st segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955== by 0x403E9BC: pthread_create@@GLIBC_2.1 (in /lib/libpthread-
2.4.so)
==7955== by 0x401CAB3: pthread_create@* (vg_preloaded.c:135)
==7955== by 0x8048693: main (pth_once.c:65)
==7955==
==7955== 2nd segment start (VG t 2, kernel t 7956, POSIX t 76999584)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment end (VG t 2, kernel t 7956, POSIX t 76999584)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x0404A110 sz 4 W W (__nptl_nthreads (offset 0, size 4) in
/lib/libpthread-2.4.so, libpthread.so.0:Data)
identify_yourself: Hi, I'm a thread
identify_yourself: Hi, I'm a thread
identify_yourself: Hi, I'm a thread
identify_yourself: Hi, I'm a thread
identify_yourself: Hi, I'm a thread
identify_yourself: Hi, I'm a thread
identify_yourself: Hi, I'm a thread
identify_yourself: Hi, I'm a thread
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 2, kernel t 7956, POSIX t 76999584)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 2, kernel t 7956, POSIX t 76999584)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955== by 0x403E9BC: pthread_create@@GLIBC_2.1 (in /lib/libpthread-
2.4.so)
==7955== by 0x401CAB3: pthread_create@* (vg_preloaded.c:135)
==7955== by 0x8048693: main (pth_once.c:65)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x0401B310 sz 4 W W (_rtld_local (offset 752, size 1524) in
/lib/ld-2.4.so, ld-linux.so.2:Data)
==7955== 0x0403817D sz 1 W W (heap)
==7955== 0x0496EC04 sz 4 W W (stack of VG t 2; kernel t 7956; POSIX t
76999584)
==7955== 0x0496ED9C sz 4 R W (stack of VG t 2; kernel t 7956; POSIX t
76999584)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 2, kernel t 7956, POSIX t 76999584)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 2, kernel t 7956, POSIX t 76999584)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 3, kernel t 7957, POSIX t 85392288)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment end (VG t 3, kernel t 7957, POSIX t 85392288)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x0401E000 sz 36 W W (heap)
==7955== 0x040383FD sz 1 W W (heap)
==7955== 0x0404A020 sz 4 R W ( /lib/libpthread-2.4.so, libpthread.so.0:Data)
==7955== 0x0404A0E8 sz 4 W R ( /lib/libpthread-2.4.so, libpthread.so.0:Data)
==7955== 0x041694E0 sz 4 W R (_IO_2_1_stdout_ (offset 0, size 152) in
/lib/libc-2.4.so, libc.so.6:Data)
==7955== 0x041694E4 sz 4 W W (_IO_2_1_stdout_ (offset 4, size 152) in
/lib/libc-2.4.so, libc.so.6:Data)
==7955== 0x041694E8 sz 4 W W (_IO_2_1_stdout_ (offset 8, size 152) in
/lib/libc-2.4.so, libc.so.6:Data)
==7955== 0x041694EC sz 4 W W (_IO_2_1_stdout_ (offset 12, size 152) in
/lib/libc-2.4.so, libc.so.6:Data)
==7955== 0x041694F0 sz 12 W W (_IO_2_1_stdout_ (offset 16, size 152) in
/lib/libc-2.4.so, libc.so.6:Data)
==7955== 0x041694FC sz 8 W R (_IO_2_1_stdout_ (offset 28, size 152) in
/lib/libc-2.4.so, libc.so.6:Data)
==7955== 0x04169548 sz 4 W R (_IO_2_1_stdout_ (offset 104, size 152) in
/lib/libc-2.4.so, libc.so.6:Data)
==7955== 0x0416A0D0 sz 12 W W (_IO_stdfile_1_lock (offset 0, size 12) in
/lib/libc-2.4.so, libc.so.6:BSS)
==7955== 0x08049960 sz 4 W R (
/home/bart/software/valgrind-svn/none/tests/pth_once, NONE:Data)
==7955== 0x08049968 sz 4 W R (
/home/bart/software/valgrind-svn/none/tests/pth_once, NONE:Data)
==7955== 0x08049994 sz 4 W W (welcome_once_block (offset 0, size 4) in
/home/bart/software/valgrind-svn/none/tests/pth_once, NONE:BSS)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 3, kernel t 7957, POSIX t 85392288)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 3, kernel t 7957, POSIX t 85392288)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x0516FC04 sz 4 W W (stack of VG t 3; kernel t 7957; POSIX t
85392288)
==7955== 0x0516FD9C sz 4 R W (stack of VG t 3; kernel t 7957; POSIX t
85392288)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 4, kernel t 7958, POSIX t 93784992)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 4, kernel t 7958, POSIX t 93784992)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x05970C04 sz 4 W W (stack of VG t 4; kernel t 7958; POSIX t
93784992)
==7955== 0x05970D9C sz 4 R W (stack of VG t 4; kernel t 7958; POSIX t
93784992)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 5, kernel t 7959, POSIX t 102177696)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 5, kernel t 7959, POSIX t 102177696)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x06171C04 sz 4 W W (stack of VG t 5; kernel t 7959; POSIX t
102177696)
==7955== 0x06171D9C sz 4 R W (stack of VG t 5; kernel t 7959; POSIX t
102177696)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 6, kernel t 7960, POSIX t 110570400)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 6, kernel t 7960, POSIX t 110570400)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x06972C04 sz 4 W W (stack of VG t 6; kernel t 7960; POSIX t
110570400)
==7955== 0x06972D9C sz 4 R W (stack of VG t 6; kernel t 7960; POSIX t
110570400)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 7, kernel t 7961, POSIX t 118963104)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 7, kernel t 7961, POSIX t 118963104)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x07173C04 sz 4 W W (stack of VG t 7; kernel t 7961; POSIX t
118963104)
==7955== 0x07173D9C sz 4 R W (stack of VG t 7; kernel t 7961; POSIX t
118963104)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 8, kernel t 7962, POSIX t 127355808)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 8, kernel t 7962, POSIX t 127355808)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x07974C04 sz 4 W W (stack of VG t 8; kernel t 7962; POSIX t
127355808)
==7955== 0x07974D9C sz 4 R W (stack of VG t 8; kernel t 7962; POSIX t
127355808)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 9, kernel t 7963, POSIX t 151301024)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 9, kernel t 7963, POSIX t 151301024)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x0904AC04 sz 4 W W (stack of VG t 9; kernel t 7963; POSIX t
151301024)
==7955== 0x0904AD9C sz 4 R W (stack of VG t 9; kernel t 7963; POSIX t
151301024)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 10, kernel t 7964, POSIX t 159693728)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 10, kernel t 7964, POSIX t 159693728)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x0984BC04 sz 4 W W (stack of VG t 10; kernel t 7964; POSIX t
159693728)
==7955== 0x0984BD9C sz 4 R W (stack of VG t 10; kernel t 7964; POSIX t
159693728)
==7955==
----------------------------------------------------------------------
==7955== 1st segment start (VG t 11, kernel t 7965, POSIX t 168086432)
==7955== at 0x410A648: clone (in /lib/libc-2.4.so)
==7955==
==7955== 1st segment end (VG t 11, kernel t 7965, POSIX t 168086432)
==7955== at 0x403E3BD: start_thread (in /lib/libpthread-2.4.so)
==7955== by 0x410A65D: clone (in /lib/libc-2.4.so)
==7955==
==7955== 2nd segment start (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== 2nd segment end (VG t 1, kernel t 7955, POSIX t 68605616)
==7955== at 0x401C4FE: pthread_join (vg_preloaded.c:164)
==7955== by 0x80486EF: main (pth_once.c:76)
==7955==
==7955== Data addresses accessed by both segments:
==7955== 0x0A04CC04 sz 4 W W (stack of VG t 11; kernel t 7965; POSIX t
168086432)
==7955== 0x0A04CD9C sz 4 R W (stack of VG t 11; kernel t 7965; POSIX t
168086432)
main: Goodbye
==7955==
==7955== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
|
|
From: <js...@ac...> - 2006-08-27 10:41:45
|
Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2006-08-27 09:00:02 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 207 tests, 10 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigaltstack (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-fp (stderr)
none/tests/ppc32/round (stdout)
none/tests/ppc32/round (stderr)
none/tests/ppc32/test_fx (stdout)
none/tests/ppc32/test_fx (stderr)
none/tests/ppc32/test_gx (stdout)
|
From: Tom H. <th...@cy...> - 2006-08-27 03:12:38
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2006-08-27 03:00:03 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 266 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/mempool (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/tls (stdout)
|
From: <js...@ac...> - 2006-08-27 02:59:45
|
Nightly build on phoenix ( SuSE 10.0 ) started at 2006-08-27 03:30:01 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 236 tests, 5 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
|
From: Tom H. <to...@co...> - 2006-08-27 02:46:10
|
Nightly build on dunsmere ( athlon, Fedora Core 5 ) started at 2006-08-27 03:30:06 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 238 tests, 5 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
|
From: Tom H. <th...@cy...> - 2006-08-27 02:24:59
|
Nightly build on dellow ( x86_64, Fedora Core 5 ) started at 2006-08-27 03:10:03 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 264 tests, 4 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/mempool (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
|
From: Tom H. <th...@cy...> - 2006-08-27 02:24:28
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2006-08-27 03:15:02 BST
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Last 20 lines of verbose log follow
echo
/tmp/ccumgjbt.s:4393: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccumgjbt.s:4513: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccumgjbt.s:4633: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccumgjbt.s:4753: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccumgjbt.s:4873: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccumgjbt.s:4993: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccumgjbt.s:5113: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccumgjbt.s:5233: Error: no such instruction: `fisttpq -56(%ebp)'
make[5]: *** [insn_sse3.o] Error 1
rm insn_mmx.c insn_sse2.c insn_fpu.c insn_mmxext.c insn_sse.c insn_sse3.c insn_cmov.c insn_basic.c
make[5]: Leaving directory `/tmp/valgrind.14819/valgrind/none/tests/x86'
make[4]: *** [check-am] Error 2
make[4]: Leaving directory `/tmp/valgrind.14819/valgrind/none/tests/x86'
make[3]: *** [check-recursive] Error 1
make[3]: Leaving directory `/tmp/valgrind.14819/valgrind/none/tests'
make[2]: *** [check-recursive] Error 1
make[2]: Leaving directory `/tmp/valgrind.14819/valgrind/none'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.14819/valgrind'
make: *** [check] Error 2
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Last 20 lines of verbose log follow
echo
/tmp/ccrOMAtV.s:4393: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccrOMAtV.s:4513: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccrOMAtV.s:4633: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccrOMAtV.s:4753: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccrOMAtV.s:4873: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccrOMAtV.s:4993: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccrOMAtV.s:5113: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccrOMAtV.s:5233: Error: no such instruction: `fisttpq -56(%ebp)'
make[5]: *** [insn_sse3.o] Error 1
rm insn_mmx.c insn_sse2.c insn_fpu.c insn_mmxext.c insn_sse.c insn_sse3.c insn_cmov.c insn_basic.c
make[5]: Leaving directory `/tmp/valgrind.14819/valgrind/none/tests/x86'
make[4]: *** [check-am] Error 2
make[4]: Leaving directory `/tmp/valgrind.14819/valgrind/none/tests/x86'
make[3]: *** [check-recursive] Error 1
make[3]: Leaving directory `/tmp/valgrind.14819/valgrind/none/tests'
make[2]: *** [check-recursive] Error 1
make[2]: Leaving directory `/tmp/valgrind.14819/valgrind/none'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.14819/valgrind'
make: *** [check] Error 2
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sun Aug 27 03:19:39 2006
--- new.short Sun Aug 27 03:24:20 2006
***************
*** 7,16 ****
  Last 20 lines of verbose log follow
  echo
! /tmp/ccrOMAtV.s:4393: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccrOMAtV.s:4513: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccrOMAtV.s:4633: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccrOMAtV.s:4753: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccrOMAtV.s:4873: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccrOMAtV.s:4993: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccrOMAtV.s:5113: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccrOMAtV.s:5233: Error: no such instruction: `fisttpq -56(%ebp)'
  make[5]: *** [insn_sse3.o] Error 1
--- 7,16 ----
  Last 20 lines of verbose log follow
  echo
! /tmp/ccumgjbt.s:4393: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccumgjbt.s:4513: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccumgjbt.s:4633: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccumgjbt.s:4753: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccumgjbt.s:4873: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccumgjbt.s:4993: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccumgjbt.s:5113: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccumgjbt.s:5233: Error: no such instruction: `fisttpq -56(%ebp)'
  make[5]: *** [insn_sse3.o] Error 1
|
From: Tom H. <th...@cy...> - 2006-08-27 02:20:18
|
Nightly build on lloyd ( x86_64, Fedora Core 3 ) started at 2006-08-27 03:05:04 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 264 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/leakotron (stdout)
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
|
From: Julian S. <js...@ac...> - 2006-08-27 01:37:32
|
I tried starting OpenOffice on drd, and it reported a bunch of races, and
after a while it makes no forward progress but uses 100% CPU and eats
memory. Maybe this is related to the knode-hang-at-exit I reported just now.
I have a wider question. I guess it is critical, for avoiding false
errors, that the tool sees all synchronisation points; otherwise it will
be comparing segments that it should not compare, and so may report false
errors. So the question is: how do we know what subset of the pthread_
functions it is necessary to intercept in order to make the tool
reasonably robust? I see you intercept a few (create, join, mutex_init,
mutex_destroy, mutex_lock, mutex_unlock, mutex_trylock, cond_wait), but
surely there are a lot more that need to be intercepted?
Now that we have a basic framework, how much of that pthread intercept
stuff can be imported from Diota (Diota is GPLd, right?)
J
On Saturday 26 August 2006 17:22, Bart Van Assche wrote:
> Hello,
>
> I have applied the following changes to the drd prototype:
> - Fixed huge memory leak (thanks Julian !).
> - Support for Ist_Dirty with side effects (Julian's patch).
> - Cleanup of error reporting.
> - Disabled malloc()/free() and friends wrappers.
> - Removed #include "drd/drd_clientreq.h" from coregrind/vg_preloaded.c.
> - Various small changes.
>
> Source code:
> http://home.scarlet.be/~bvassche/drd/valgrind-6012.patch
> http://home.scarlet.be/~bvassche/drd/valgrind-6012-drd-2006-08-26.tar.bz2
>
> What I consider now as a priority is to get drd working on pth_cvsimple.
> Anyone any idea what the behavior of Valgrind is when a wrapper is defined
> in vg_preloaded.c for a function that is defined in multiple shared
> libraries (e.g. pthread_mutex_lock()) ?
|
From: Julian S. <js...@ac...> - 2006-08-27 00:48:19
|
On Saturday 26 August 2006 17:22, Bart Van Assche wrote:
> - Disabled malloc()/free() and friends wrappers.
Maybe they could be re-enabled? Wrapping malloc/free provides useful
extra info on the location of contended addresses located inside heap
blocks, and as per my patch it is no longer a big space overhead.
J
|
From: Julian S. <js...@ac...> - 2006-08-27 00:44:56
|
> I don't have an answer yet why drd fails on pth_cvsimple.
Fixed. The wrapper function for pthread_cond_wait in vg_preloaded.c
never took effect, because (1) it was for a function "pthread_cont_wait"
and (2), even when you fix the typo, libpthread.so does not export
a function with exactly the name "pthread_cond_wait":
sewardj@suse10:~/VgTRUNK/drd$ nm /lib/tls/libpthread-2.3.5.so \
| grep " T " \
| grep pthread_cond_wait
00008060 T pthread_cond_wait@GLIBC_2.0
000079f0 T pthread_cond_wait@@GLIBC_2.3.2
These are versioned glibc symbols, and to be reliable we need to
intercept both. Therefore I asked the wrapper instead to intercept
any function in libpthread.so whose name matches "pthread_cond_wait@*";
that way it intercepts both entry points. A similar trick already
applies to the pthread_create intercept.
The actual fix is trivial; vg_preloaded.c:260 needs to be changed to
PTH_FUNC(int, pthreadZucondZuwaitZAZa, // pthread_cond_wait@*
The Z-encodings for _, @ and * are Zu, ZA, Za respectively; that's
how the name is generated. With this change pth_cvsimple runs
successfully.
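The Z-encoding described above can be sketched as a tiny encoder. This is
an illustration covering only the escapes mentioned in this message (Zu,
ZA, Za, plus ZZ for a literal 'Z'); the function name `z_encode` is mine,
not Valgrind's, and Valgrind's real scheme has escapes for further
characters as well.

```python
def z_encode(name: str) -> str:
    """Z-encode a symbol name: 'Z' is the escape character, with
    Zu = '_', ZA = '@', Za = '*' (and ZZ for a literal 'Z').
    Illustrative sketch only; not Valgrind's actual encoder."""
    table = {'_': 'Zu', '@': 'ZA', '*': 'Za', 'Z': 'ZZ'}
    return ''.join(table.get(ch, ch) for ch in name)

print(z_encode("pthread_cond_wait@*"))  # pthreadZucondZuwaitZAZa
print(z_encode("pthread_create@*"))     # pthreadZucreateZAZa
```

The first output matches the PTH_FUNC identifier in the fixed
vg_preloaded.c line above.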
--trace-redir=yes is your friend for such games. It tells you
(repeatedly) the redirection specifications in effect. A
specification is basically a statement saying
"redirect function F in object with soname S to some
replacement function G."
Both F and S may have wildcards, to make it more flexible.
It also shows you the active redirections, that is, the subset
of the specifications currently in effect. Both the spec and
active sets are updated after every .so load/unload.
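The spec-matching idea described above (function name F in soname S, both
possibly wildcarded, redirects to replacement G) can be illustrated with
Python's fnmatch. This is a sketch of the concept only, not Valgrind's
matcher; the names `specs`, `find_redirect`, and the `wrapped_*`
replacements are invented for illustration.

```python
from fnmatch import fnmatchcase

# A redirection specification: (soname pattern S, function pattern F,
# replacement G).  A single "pthread_cond_wait@*" spec catches both
# versioned entry points shown in the nm output above.
specs = [
    ("libpthread.so*", "pthread_cond_wait@*", "wrapped_cond_wait"),
    ("libpthread.so*", "pthread_create@*",    "wrapped_create"),
]

def find_redirect(soname: str, func: str):
    """Return the replacement for (soname, func), or None."""
    for s_pat, f_pat, replacement in specs:
        if fnmatchcase(soname, s_pat) and fnmatchcase(func, f_pat):
            return replacement
    return None

print(find_redirect("libpthread.so.0", "pthread_cond_wait@GLIBC_2.0"))    # wrapped_cond_wait
print(find_redirect("libpthread.so.0", "pthread_cond_wait@@GLIBC_2.3.2")) # wrapped_cond_wait
```

In Valgrind itself the active set would be recomputed from the specs on
every .so load/unload, as the paragraph above notes.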
---------
This fix means knode in KDE 3.5.4 (a threaded app) does not crash at
exit any more. Now it appears to hang instead :-)
If you intercept pthread_cond_wait, do you also need to intercept
pthread_cond_broadcast?
J
> But when I enable
> mutex tracing in drd, it looks like threads 2 and 3 were able to both lock
> count_lock. How can I verify that the pthread_mutex_lock() implementation
> of libpthread.so.0 is called and not the implementation of libc.so.6 ? For
> most pthread functions there are dummy implementation present in glibc.
>
> > inst/bin/valgrind --tool=drd --trace-mutex=yes none/tests/pth_cvsimple
>
> ==13717== drd, a data race detector.
> ==13717== Copyright (C) 2006, and GNU GPL'd, by Bart Van Assche.
> THIS SOFTWARE IS A PROTOTYPE, AND IS NOT YET RELEASED
> ==13717== Using LibVEX rev 1579, a library for dynamic binary translation.
> ==13717== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP.
> ==13717== Using valgrind-3.3.0.SVN, a dynamic binary instrumentation
> framework.
> ==13717== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et al.
> ==13717== For more details, rerun with: -v
> ==13717==
> --13717-- drd_post_mutex_lock tid = 1, mutex 0x401B2E0 rc 0 owner 0
> --13717-- drd_pre_mutex_unlock tid = 1, mutex 0x401B2E0 rc 1 owner 1
> --13717-- drd_post_mutex_lock tid = 2, mutex 0x8049A48 rc 0 owner 0
> --13717-- drd_post_mutex_lock tid = 3, mutex 0x8049A48 rc 1 owner 2
> --13717-- The impossible happened: mutex 0x8049A48 is locked simultaneously
> by two threads (recursion count 1, owners 2 and 3) !
>
> drd: drd_mutex.c:96 (mutex_lock): the 'impossible' happened.
> ==13717== at 0x38007585: report_and_quit (m_libcassert.c:136)
> ==13717== by 0x380078AF: vgPlain_assert_fail (m_libcassert.c:200)
> ==13717== by 0x380021B8: mutex_lock (drd_mutex.c:96)
> ==13717== by 0x380016F3: drd_post_mutex_lock (drd_main.c:202)
> ==13717== by 0x380265EE: do_client_request (scheduler.c:1256)
> ==13717== by 0x38027C5B: vgPlain_scheduler (scheduler.c:872)
> ==13717== by 0x38046B63: run_a_thread_NORETURN (syswrap-linux.c:87)
> ==13717== by 0x38046DC3: vgModuleLocal_start_thread_NORETURN (
> syswrap-linux.c:207)
> ==13717== by 0x38049088: (within
> /home/bart/software/valgrind-svn/inst/lib/valgrind/x86-linux/drd)
> ==13717== by 0x382450AF: temporary (in
> /home/bart/software/valgrind-svn/inst/lib/valgrind/x86-linux/drd)
> ==13717== by 0x8: ???
>
> sched status:
> running_tid=3
>
> Thread 1: status = VgTs_Yielding
> ==13717== at 0x410A648: clone (in /lib/libc-2.4.so)
> ==13717== by 0x403E9BC: pthread_create@@GLIBC_2.1 (in /lib/libpthread-
> 2.4.so)
> ==13717== by 0x401CAAF: pthread_create@* (vg_preloaded.c:135)
> ==13717== by 0x80486FF: main (pth_cvsimple.c:68)
>
> Thread 2: status = VgTs_WaitSys
> ==13717== at 0x40417E6: pthread_cond_wait@@GLIBC_2.3.2 (in
> /lib/libpthread-2.4.so)
> ==13717== by 0x401CB3A: vg_thread_wrapper (vg_preloaded.c:109)
> ==13717== by 0x403E34A: start_thread (in /lib/libpthread-2.4.so)
> ==13717== by 0x410A65D: clone (in /lib/libc-2.4.so)
>
> Thread 3: status = VgTs_Runnable
> ==13717== at 0x401C71D: pthread_mutex_lock (vg_preloaded.c:211)
> ==13717== by 0x80485F5: inc_count (pth_cvsimple.c:34)
> ==13717== by 0x401CB3A: vg_thread_wrapper (vg_preloaded.c:109)
> ==13717== by 0x403E34A: start_thread (in /lib/libpthread-2.4.so)
> ==13717== by 0x410A65D: clone (in /lib/libc-2.4.so)
|