From: <sv...@va...> - 2007-02-20 19:23:29
Author: sewardj
Date: 2007-02-20 19:23:19 +0000 (Tue, 20 Feb 2007)
New Revision: 6606
Log:
Make ppc32/64-aix5 work again following recent VG_(tt_fast) rearrangement.
Modified:
trunk/coregrind/m_dispatch/dispatch-ppc32-aix5.S
trunk/coregrind/m_dispatch/dispatch-ppc64-aix5.S
trunk/coregrind/m_dispatch/dispatch-ppc64-linux.S
Modified: trunk/coregrind/m_dispatch/dispatch-ppc32-aix5.S
===================================================================
--- trunk/coregrind/m_dispatch/dispatch-ppc32-aix5.S 2007-02-19 11:51:16 UTC (rev 6605)
+++ trunk/coregrind/m_dispatch/dispatch-ppc32-aix5.S 2007-02-20 19:23:19 UTC (rev 6606)
@@ -267,9 +267,12 @@
128(r1) (=orig guest_state)
*/
- /* Has the guest state pointer been messed with? If yes, exit. */
+ /* Has the guest state pointer been messed with? If yes, exit.
+ Also set up & VG_(tt_fast) early in an attempt at better
+ scheduling. */
lwz 5,128(1) /* original guest_state ptr */
cmpw 5,31
+ lwz 5,tocent__vgPlain_tt_fast(2) /* r5 = &tt_fast */
bne gsp_changed
/* save the jump address in the guest state */
@@ -281,24 +284,17 @@
beq counter_is_zero
/* try a fast lookup in the translation cache */
- /* r4 = VG_TT_FAST_HASH(addr) * sizeof(ULong*)
- = ((r3 >>u 2) & VG_TT_FAST_MASK) << 2 */
- rlwinm 4,3, 0, 32-2-VG_TT_FAST_BITS, 31-2
-
- lwz 5,tocent__vgPlain_tt_fast(2) /* r5 = &tt_fast */
-
- lwzx 5,5,4 /* r5 = tt_fast[r5] */
-
- lwz 6,4(5) /* big-endian, so comparing 2nd 32bit word */
+ /* r4 = VG_TT_FAST_HASH(addr) * sizeof(FastCacheEntry)
+ = ((r3 >>u 2) & VG_TT_FAST_MASK) << 3 */
+ rlwinm 4,3,1, 29-VG_TT_FAST_BITS, 28 /* entry# * 8 */
+ add 5,5,4 /* & VG_(tt_fast)[entry#] */
+ lwz 6,0(5) /* .guest */
+ lwz 7,4(5) /* .host */
cmpw 3,6
bne fast_lookup_failed
- /* Found a match. Call tce[1], which is 8 bytes along, since
- each tce element is a 64-bit int. */
- addi 8,5,8
- mtctr 8
-
- /* run the translation */
+ /* Found a match. Call .host. */
+ mtctr 7
bctrl
/* On return from guest code:
@@ -306,7 +302,6 @@
r31 may be unchanged (guest_state), or may indicate further
details of the control transfer requested to *r3.
*/
-
/* start over */
b VG_(run_innerloop__dispatch_unprofiled)
/*NOTREACHED*/
@@ -325,10 +320,12 @@
Stack state:
128(r1) (=orig guest_state)
*/
-
- /* Has the guest state pointer been messed with? If yes, exit. */
+ /* Has the guest state pointer been messed with? If yes, exit.
+ Also set up & VG_(tt_fast) early in an attempt at better
+ scheduling. */
lwz 5,128(1) /* original guest_state ptr */
cmpw 5,31
+ lwz 5,tocent__vgPlain_tt_fast(2) /* r5 = &tt_fast */
bne gsp_changed
/* save the jump address in the guest state */
@@ -340,31 +337,25 @@
beq counter_is_zero
/* try a fast lookup in the translation cache */
- /* r4 = VG_TT_FAST_HASH(addr) * sizeof(ULong*)
- = ((r3 >>u 2) & VG_TT_FAST_MASK) << 2 */
- rlwinm 4,3, 0, 32-2-VG_TT_FAST_BITS, 31-2
-
- lwz 5,tocent__vgPlain_tt_fast(2) /* r5 = &tt_fast */
-
- lwzx 5,5,4 /* r5 = tt_fast[r4] */
-
- lwz 6,4(5) /* big-endian, so comparing 2nd 32bit word */
+ /* r4 = VG_TT_FAST_HASH(addr) * sizeof(FastCacheEntry)
+ = ((r3 >>u 2) & VG_TT_FAST_MASK) << 3 */
+ rlwinm 4,3,1, 29-VG_TT_FAST_BITS, 28 /* entry# * 8 */
+ add 5,5,4 /* & VG_(tt_fast)[entry#] */
+ lwz 6,0(5) /* .guest */
+ lwz 7,4(5) /* .host */
cmpw 3,6
bne fast_lookup_failed
/* increment bb profile counter */
+ srwi 4,4,1 /* entry# * sizeof(UInt*) */
lwz 9,tocent__vgPlain_tt_fastN(2) /* r9 = &tt_fastN */
- lwzx 7,9,4 /* r7 = tt_fastN[r4] */
- lwz 10,0(7)
+ lwzx 8,9,4 /* r7 = tt_fastN[r4] */
+ lwz 10,0(8)
addi 10,10,1
- stw 10,0(7)
+ stw 10,0(8)
- /* Found a match. Call tce[1], which is 8 bytes along, since
- each tce element is a 64-bit int. */
- addi 8,5,8
- mtctr 8
-
- /* run the translation */
+ /* Found a match. Call .host. */
+ mtctr 7
bctrl
/* On return from guest code:
@@ -372,11 +363,11 @@
r31 may be unchanged (guest_state), or may indicate further
details of the control transfer requested to *r3.
*/
-
/* start over */
- b VG_(run_innerloop__dispatch_profiled)
+ b VG_(run_innerloop__dispatch_unprofiled)
/*NOTREACHED*/
+
/*----------------------------------------------------*/
/*--- exit points ---*/
/*----------------------------------------------------*/
Modified: trunk/coregrind/m_dispatch/dispatch-ppc64-aix5.S
===================================================================
--- trunk/coregrind/m_dispatch/dispatch-ppc64-aix5.S 2007-02-19 11:51:16 UTC (rev 6605)
+++ trunk/coregrind/m_dispatch/dispatch-ppc64-aix5.S 2007-02-20 19:23:19 UTC (rev 6606)
@@ -203,7 +203,7 @@
/* hold dispatch_ctr (NOTE: 32-bit value) in r29 */
ld 5,tocent__vgPlain_dispatch_ctr(2)
- lwz 29,0(5)
+ lwz 29,0(5) /* 32-bit zero-extending load */
/* set host FPU control word to the default mode expected
by VEX-generated code. See comments in libvex.h for
@@ -258,6 +258,7 @@
/* Has the guest state pointer been messed with? If yes, exit. */
ld 5,256(1) /* original guest_state ptr */
cmpd 5,31
+ ld 5,tocent__vgPlain_tt_fast(2) /* &VG_(tt_fast) */
bne gsp_changed
/* save the jump address in the guest state */
@@ -269,24 +270,18 @@
beq counter_is_zero
/* try a fast lookup in the translation cache */
- /* r4 = VG_TT_FAST_HASH(addr) * sizeof(ULong*)
- = ((r3 >>u 2) & VG_TT_FAST_MASK) << 3 */
- rldicl 4,3, 62, 64-VG_TT_FAST_BITS
- sldi 4,4,3
-
- ld 5,tocent__vgPlain_tt_fast(2) /* r5 = &tt_fast */
-
- ldx 5,5,4 /* r5 = VG_(tt_fast)[VG_TT_FAST_HASH(addr)] */
- ld 6,0(5) /* r6 = (r5)->orig_addr */
+ /* r4 = VG_TT_FAST_HASH(addr) * sizeof(FastCacheEntry)
+ = ((r3 >>u 2) & VG_TT_FAST_MASK) << 4 */
+ rldicl 4,3, 62, 64-VG_TT_FAST_BITS /* entry# */
+ sldi 4,4,4 /* entry# * sizeof(FastCacheEntry) */
+ add 5,5,4 /* &VG_(tt_fast)[entry#] */
+ ld 6,0(5) /* .guest */
+ ld 7,8(5) /* .host */
cmpd 3,6
bne fast_lookup_failed
- /* Found a match. Call tce[1], which is 8 bytes along, since
- each tce element is a 64-bit int. */
- addi 8,5,8
- mtctr 8
-
- /* run the translation */
+ /* Found a match. Call .host. */
+ mtctr 7
bctrl
/* On return from guest code:
@@ -294,7 +289,6 @@
r31 may be unchanged (guest_state), or may indicate further
details of the control transfer requested to *r3.
*/
-
/* start over */
b VG_(run_innerloop__dispatch_unprofiled)
/*NOTREACHED*/
@@ -317,6 +311,7 @@
/* Has the guest state pointer been messed with? If yes, exit. */
ld 5,256(1) /* original guest_state ptr */
cmpd 5,31
+ ld 5,tocent__vgPlain_tt_fast(2) /* &VG_(tt_fast) */
bne gsp_changed
/* save the jump address in the guest state */
@@ -328,31 +323,26 @@
beq counter_is_zero
/* try a fast lookup in the translation cache */
- /* r4 = VG_TT_FAST_HASH(addr) * sizeof(ULong*)
- = ((r3 >>u 2) & VG_TT_FAST_MASK) << 3 */
- rldicl 4,3, 62, 64-VG_TT_FAST_BITS
- sldi 4,4,3
-
- ld 5,tocent__vgPlain_tt_fast(2) /* r5 = &tt_fast */
-
- ldx 5,5,4 /* r5 = VG_(tt_fast)[VG_TT_FAST_HASH(addr)] */
- ld 6,0(5) /* r6 = (r5)->orig_addr */
+ /* r4 = VG_TT_FAST_HASH(addr) * sizeof(FastCacheEntry)
+ = ((r3 >>u 2) & VG_TT_FAST_MASK) << 4 */
+ rldicl 4,3, 62, 64-VG_TT_FAST_BITS /* entry# */
+ sldi 4,4,4 /* entry# * sizeof(FastCacheEntry) */
+ add 5,5,4 /* &VG_(tt_fast)[entry#] */
+ ld 6,0(5) /* .guest */
+ ld 7,8(5) /* .host */
cmpd 3,6
bne fast_lookup_failed
/* increment bb profile counter */
ld 9,tocent__vgPlain_tt_fastN(2) /* r9 = &tt_fastN */
- ldx 7,9,4 /* r7 = tt_fastN[r4] */
- lwz 10,0(7)
+ srdi 4,4,1 /* entry# * sizeof(UInt*) */
+ ldx 8,9,4 /* r7 = tt_fastN[r4] */
+ lwz 10,0(8)
addi 10,10,1
- stw 10,0(7)
+ stw 10,0(8)
- /* Found a match. Call tce[1], which is 8 bytes along, since
- each tce element is a 64-bit int. */
- addi 8,5,8
- mtctr 8
-
- /* run the translation */
+ /* Found a match. Call .host. */
+ mtctr 7
bctrl
/* On return from guest code:
@@ -360,7 +350,6 @@
r31 may be unchanged (guest_state), or may indicate further
details of the control transfer requested to *r3.
*/
-
/* start over */
b VG_(run_innerloop__dispatch_profiled)
/*NOTREACHED*/
Modified: trunk/coregrind/m_dispatch/dispatch-ppc64-linux.S
===================================================================
--- trunk/coregrind/m_dispatch/dispatch-ppc64-linux.S 2007-02-19 11:51:16 UTC (rev 6605)
+++ trunk/coregrind/m_dispatch/dispatch-ppc64-linux.S 2007-02-20 19:23:19 UTC (rev 6606)
@@ -204,7 +204,7 @@
/* hold dispatch_ctr (=32bit value) in r29 */
ld 29,.tocent__vgPlain_dispatch_ctr@toc(2)
- lwz 29,0(29)
+ lwz 29,0(29) /* 32-bit zero-extending load */
/* set host FPU control word to the default mode expected
by VEX-generated code. See comments in libvex.h for
From: Bart V. A. <bar...@gm...> - 2007-02-20 17:29:37
Is the patch below inserted in the right place? This patch solves the false
positives reported by drd.
Note: this patch breaks the convention that the third argument of the
start_client_code and stop_client_code tracking functions is the number of
already executed basic blocks.
Index: coregrind/m_signals.c
===================================================================
--- coregrind/m_signals.c (revision 6604)
+++ coregrind/m_signals.c (working copy)
@@ -1602,7 +1602,12 @@
/* Set up the thread's state to deliver a signal */
if (!is_sig_ign(info->si_signo))
+ {
+ VG_TRACK(stop_client_code, VG_(running_tid), 0);
+ VG_(running_tid) = tid;
+ VG_TRACK(start_client_code, tid, 0);
deliver_signal(tid, info);
+ }
/* longjmp back to the thread's main loop to start executing the
handler. */
Bart.
--
Kind regards,
Bart Van Assche.
From: Bart V. A. <bar...@gm...> - 2007-02-20 12:03:53
Hello,
I started again working on the drd tool where I stopped last December:
analyzing the cause of false positives caused by signal delivery. These
false positives might be caused by the way the Valgrind core delivers
signals to clients. I have written a small client program to analyze this
issue (see also attachment). Basically it does the following:
- main thread:
* install a signal handler for SIGALRM.
* create thread 2
* wait until thread 2 is running
* send SIGALRM to thread 2 (using pthread_kill).
* join thread 2
- thread 2:
* call clock_nanosleep() with an interval length of 10s. This call gets
interrupted by the pthread_kill() invoked by the main thread, and it is in
the context of this thread that the signal handler gets called.
When I run this client program through drd, a data race is reported on
the arguments passed to the signal handler, which I do not understand. I
started analyzing this and made the following observations:
1. The first time drd_trace_store() reports an access to location 0x4aa9dc0
Valgrind's core has set the thread ID to 1. This can't be correct -- if you
look at the call stack, you can see that this is a call stack of thread 2.
Further analysis has shown that tracing this store action is triggered by
vgPlain_sigframe_create() (I have inserted a tl_assert(0) in the store trace
code).
2. At the time the signal handler is called, apparently the thread ID is
(correctly) set to 2.
3. drd reports a false positive because Valgrind's core had told it that a
store and load have been performed on the same location but with different
thread IDs.
My questions to the other Valgrind developers are:
- Do you agree with my analysis?
- If so, would it be hard to make sure that the thread ID is set correctly
before vgPlain_sigframe_create() is called?
Full drd output:
./vg-in-place --tool=drd --trace-address=$(echo $((0x4aa9dc0)))
drd/tests/sigalrm
==20274== drd, a data race detector.
==20274== Copyright (C) 2006, and GNU GPL'd, by Bart Van Assche.
THIS SOFTWARE IS A PROTOTYPE, AND IS NOT YET RELEASED
==20274== Using LibVEX rev 1680, a library for dynamic binary translation.
==20274== Copyright (C) 2004-2007, and GNU GPL'd, by OpenWorks LLP.
==20274== Using valgrind-3.3.0.SVN, a dynamic binary instrumentation
framework.
==20274== Copyright (C) 2000-2007, and GNU GPL'd, by Julian Seward et al.
==20274== For more details, rerun with: -v
==20274==
main: kernel thread ID 20274 / Valgrind thread ID 1
thread: kernel thread ID 20275 / Valgrind thread ID 2
==20274== store 0x4aa9dc0 size 720 thread 1
==20274== at 0x405F536: clock_nanosleep (in /lib/librt-2.5.so)
==20274== by 0x8048BE2: thread_func(void*) (sigalrm.cpp:50)
==20274== by 0x4024E8C: vg_thread_wrapper (drd_preloaded.c:133)
==20274== by 0x4048111: start_thread (in /lib/libpthread-2.5.so)
==20274== by 0x423A2ED: clone (in /lib/libc-2.5.so)
==20274== load 0x4aa9dc0 size 4 thread 2
==20274== at 0x8048974: SignalHandler(int) (sigalrm.cpp:42)
==20274== by 0x8048BE2: thread_func(void*) (sigalrm.cpp:50)
==20274== by 0x4024E8C: vg_thread_wrapper (drd_preloaded.c:133)
==20274== by 0x4048111: start_thread (in /lib/libpthread-2.5.so)
==20274== by 0x423A2ED: clone (in /lib/libc-2.5.so)
==20274== Thread 2:
==20274== Conflicting load by thread 2 at 0x04aa9dc0 size 4
==20274== at 0x8048974: SignalHandler(int) (sigalrm.cpp:42)
==20274== by 0x8048BE2: thread_func(void*) (sigalrm.cpp:50)
==20274== by 0x4024E8C: vg_thread_wrapper (drd_preloaded.c:133)
==20274== by 0x4048111: start_thread (in /lib/libpthread-2.5.so)
==20274== by 0x423A2ED: clone (in /lib/libc-2.5.so)
==20274== Allocation context: stack of thread 2, offset -4672
==20274== Other segment start (thread 1)
==20274== at 0x423A2D8: clone (in /lib/libc-2.5.so)
==20274== by 0x4048818: pthread_create@@GLIBC_2.1 (in /lib/libpthread-2.5.so)
==20274== by 0x4024C5A: pthread_create@* (drd_preloaded.c:191)
==20274== by 0x8048AE8: main (sigalrm.cpp:69)
==20274== Other segment end (thread 1)
==20274== at 0x4049517: pthread_join (in /lib/libpthread-2.5.so)
==20274== by 0x4023FDE: pthread_join (drd_preloaded.c:220)
==20274== by 0x8048B3F: main (sigalrm.cpp:75)
==20274== end 0x4aa9dc0 size 4 thread 2
==20274== at 0x8048974: SignalHandler(int) (sigalrm.cpp:42)
==20274== by 0x8048BE2: thread_func(void*) (sigalrm.cpp:50)
==20274== by 0x4024E8C: vg_thread_wrapper (drd_preloaded.c:133)
==20274== by 0x4048111: start_thread (in /lib/libpthread-2.5.so)
==20274== by 0x423A2ED: clone (in /lib/libc-2.5.so)
==20274==
==20274== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 15 from 5)
This is the call stack at the time of the first "trace store" action in the
above output:
==19613== by 0x3800248F: drd_trace_store (drd_main.c:182)
==19613== by 0x38002576: drd_post_mem_write (drd_main.c:265)
==19613== by 0x3805F9F0: vgPlain_sigframe_create (sigframe-x86-linux.c:494)
==19613== by 0x380188B4: deliver_signal (m_signals.c:1053)
==19613== by 0x38018A79: async_signalhandler (m_signals.c:1605)
==19613== by 0x380176EF: (within /home/bart/software/valgrind-svn/drd/drd-x86-linux)
Regards,
Bart Van Assche.
From: <js...@ac...> - 2007-02-20 09:37:39
Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2007-02-20 09:00:02 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 219 tests, 10 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-fp (stderr)
none/tests/ppc32/round (stdout)
none/tests/ppc32/round (stderr)
none/tests/ppc32/test_fx (stdout)
none/tests/ppc32/test_fx (stderr)
none/tests/ppc32/test_gx (stdout)
From: <js...@ac...> - 2007-02-20 05:28:48
Nightly build on phoenix ( SuSE 10.0 ) started at 2007-02-20 04:55:01 GMT
Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 254 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
From: Tom H. <th...@cy...> - 2007-02-20 03:24:06
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2007-02-20 03:15:02 GMT
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Last 20 lines of verbose log follow
echo
/tmp/ccIi5AAE.s:4393: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccIi5AAE.s:4513: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccIi5AAE.s:4633: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccIi5AAE.s:4753: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccIi5AAE.s:4873: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccIi5AAE.s:4993: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccIi5AAE.s:5113: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/ccIi5AAE.s:5233: Error: no such instruction: `fisttpq -56(%ebp)'
make[5]: *** [insn_sse3.o] Error 1
rm insn_mmx.c insn_sse2.c insn_fpu.c insn_mmxext.c insn_sse.c insn_sse3.c insn_cmov.c insn_basic.c
make[5]: Leaving directory `/tmp/valgrind.11377/valgrind/none/tests/x86'
make[4]: *** [check-am] Error 2
make[4]: Leaving directory `/tmp/valgrind.11377/valgrind/none/tests/x86'
make[3]: *** [check-recursive] Error 1
make[3]: Leaving directory `/tmp/valgrind.11377/valgrind/none/tests'
make[2]: *** [check-recursive] Error 1
make[2]: Leaving directory `/tmp/valgrind.11377/valgrind/none'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.11377/valgrind'
make: *** [check] Error 2
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Last 20 lines of verbose log follow
echo
/tmp/cclJWL47.s:4393: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/cclJWL47.s:4513: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/cclJWL47.s:4633: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/cclJWL47.s:4753: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/cclJWL47.s:4873: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/cclJWL47.s:4993: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/cclJWL47.s:5113: Error: no such instruction: `fisttpq -56(%ebp)'
/tmp/cclJWL47.s:5233: Error: no such instruction: `fisttpq -56(%ebp)'
make[5]: *** [insn_sse3.o] Error 1
rm insn_mmx.c insn_sse2.c insn_fpu.c insn_mmxext.c insn_sse.c insn_sse3.c insn_cmov.c insn_basic.c
make[5]: Leaving directory `/tmp/valgrind.11377/valgrind/none/tests/x86'
make[4]: *** [check-am] Error 2
make[4]: Leaving directory `/tmp/valgrind.11377/valgrind/none/tests/x86'
make[3]: *** [check-recursive] Error 1
make[3]: Leaving directory `/tmp/valgrind.11377/valgrind/none/tests'
make[2]: *** [check-recursive] Error 1
make[2]: Leaving directory `/tmp/valgrind.11377/valgrind/none'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.11377/valgrind'
make: *** [check] Error 2
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Tue Feb 20 03:19:32 2007
--- new.short Tue Feb 20 03:23:56 2007
***************
*** 7,16 ****
Last 20 lines of verbose log follow
echo
! /tmp/cclJWL47.s:4393: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/cclJWL47.s:4513: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/cclJWL47.s:4633: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/cclJWL47.s:4753: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/cclJWL47.s:4873: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/cclJWL47.s:4993: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/cclJWL47.s:5113: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/cclJWL47.s:5233: Error: no such instruction: `fisttpq -56(%ebp)'
make[5]: *** [insn_sse3.o] Error 1
--- 7,16 ----
Last 20 lines of verbose log follow
echo
! /tmp/ccIi5AAE.s:4393: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccIi5AAE.s:4513: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccIi5AAE.s:4633: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccIi5AAE.s:4753: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccIi5AAE.s:4873: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccIi5AAE.s:4993: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccIi5AAE.s:5113: Error: no such instruction: `fisttpq -56(%ebp)'
! /tmp/ccIi5AAE.s:5233: Error: no such instruction: `fisttpq -56(%ebp)'
make[5]: *** [insn_sse3.o] Error 1
From: Tom H. <th...@cy...> - 2007-02-20 03:23:30
Nightly build on dellow ( x86_64, Fedora Core 6 ) started at 2007-02-20 03:10:06 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 288 tests, 4 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/pth_detached (stdout)
From: Tom H. <th...@cy...> - 2007-02-20 03:18:50
Nightly build on lloyd ( x86_64, Fedora Core 3 ) started at 2007-02-20 03:05:05 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 288 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
From: Tom H. <th...@cy...> - 2007-02-20 03:12:01
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-02-20 03:00:04 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 290 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
From: <js...@ac...> - 2007-02-20 01:17:21
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-02-20 02:00:01 CET
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 225 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)