From: Philippe W. <phi...@sk...> - 2012-02-19 18:30:03
While working on multi-threaded Valgrind, I am trying to run helgrind or drd
as an outer tool over an inner none tool. This does not work.
From my experiments, any outer tool that redirects
malloc/free/... will not work with an inner tool that does not
redirect malloc/free/...
Typically, an outer memcheck/helgrind/drd/... cannot run an
inner none/callgrind/cachegrind
(confirmed by trying all combinations: they all fail for similar
reasons; the discussion below covers helgrind+none in more detail).
The symptoms (for an outer helgrind and an inner none): a thread cannot be
created (see below). Investigating further, this is caused by malloc/...
not working: when run with outer helgrind + inner none, the malloc
calls return 0x0. The inner also complains that it cannot handle a helgrind
client request (see below).
After investigation, I have concluded that the problem is that the inner
tool (none in this case) is interpreting the redirection special symbols
found in the vgpreload of the outer tool.
So the none tool is "installing" a redirection, e.g. for malloc, to a
replacement function it does not have.
Extract of the trace showing the problem (see func=0x0):
--14266:2:transtab discard_translations(0x4e26d93, 1) req by redir_new_DebugInfo(to_addr)
--14266:2:transtab FAST, ec = 77
==14266== Adding active redirection:
--14266--     new: 0x052c9750 (memcpy ) R-> (0000.0) 0x04e26d93 memcpy
--14266-- REDIR: 0x52c08c0 (malloc) redirected to 0x4e25c68 (malloc)
--14266-- VG_USERREQ__CLIENT_CALL1: func=0x0
loops/sleep_ms/burn/threads_spec: 100000 0 10000 B-B-B-B-
--14266-- REDIR: 0x5034c30 (pthread_create@@GLIBC_2.2.5) redirected to 0x4e2ae52 (pthread_create@*)
==14266== Warning:
==14266==   unhandled client request: 0x48470127 (HG+0x127). Perhaps
==14266==   VG_(needs).client_requests should be set?
--14266-- REDIR: 0x52bfe50 (calloc) redirected to 0x4e24f2b (calloc)
--14266-- VG_USERREQ__CLIENT_CALL2: func=0x0
--14266-- REDIR: 0x5038660 (pthread_rwlock_rdlock) redirected to 0x4e280df (pthread_rwlock_rdlock)
--14266-- REDIR: 0x52c9750 (memcpy) redirected to 0x4e26d93 (memcpy)
--14266-- REDIR: 0x5038c70 (pthread_rwlock_unlock) redirected to 0x4e27c04 (pthread_rwlock_unlock)
--14266-- VG_USERREQ__CLIENT_CALL1: func=0x0
Unexpected error.
--14266:1:gdbsrv signal 6 tid 1
--14266:1:gdbsrv not connected => pass
==14266==
==14266== Process terminating with default action of signal 6 (SIGABRT)
--14266:1:mallocfr newSuperblock at 0x3F4254000 (pszB 1048544) owner VALGRIND/exectxt
==14266==    at 0x527C165: raise (raise.c:64)
==14266==    by 0x527EF6F: abort (abort.c:92)
==14266==    by 0x52752B0: __assert_fail (assert.c:81)
==14266==    by 0x503537C: pthread_create@@GLIBC_2.2.5 (allocatestack.c:573)
==14266==    by 0x4E2AD46: pthread_create_WRK (hg_intercepts.c:255)
==14266==    by 0x4E2AE5A: pthread_create@* (hg_intercepts.c:286)
==14266==    by 0x400F3A: main (parallel_sleepers.c:161)
--14266:1:syswrap- thread_wrapper(tid=1): exit
--14266:1:syswrap- run_a_thread_NORETURN(tid=1): post-thread_wrapper
--14266:1:syswrap- run_a_thread_NORETURN(tid=1): last one standing
I have somewhat bypassed the problem with the patch below,
which prevents the inner from executing the redirections found in a
vgpreload belonging to the outer.
An outer helgrind + inner cachegrind looks much better (still running).
Outer helgrind + inner none goes further, but then the outer generates a SEGV
when computing a backtrace while thread 1 is cloning thread 2.
Any idea whether the patch below is the way to go?
Or is there something I have not understood or did wrong?
Philippe
Index: coregrind/m_redir.c
===================================================================
--- coregrind/m_redir.c (revision 12391)
+++ coregrind/m_redir.c (working copy)
@@ -49,6 +49,7 @@
#include "pub_core_xarray.h"
#include "pub_core_clientstate.h" // VG_(client___libc_freeres_wrapper)
#include "pub_core_demangle.h" // VG_(maybe_Z_demangle)
+#include "pub_core_libcproc.h" // VG_(libdir)
#include "config.h" /* GLIBC_2_* */
@@ -389,6 +390,7 @@
Bool isText;
const UChar* newdi_soname;
+
# if defined(VG_PLAT_USES_PPCTOC)
check_ppcTOCs = True;
# endif
@@ -397,6 +399,23 @@
newdi_soname = VG_(DebugInfo_get_soname)(newdi);
vg_assert(newdi_soname != NULL);
+#ifdef ENABLE_INNER
+ {
+ const UChar* newdi_filename;
+ VG_(message)(Vg_DebugMsg, "VALGRIND_LIB %s\n", VG_(libdir));
+ newdi_filename = VG_(DebugInfo_get_filename)(newdi);
+ VG_(message)(Vg_DebugMsg, "checking ignoring redir in %s %s\n", newdi_soname, newdi_filename);
+ /* avoid reading the redirections which are for the outer. */
+ if (VG_(strstr)(newdi_filename, "/vgpreload")) {
+ VG_(message)(Vg_DebugMsg, "contains /vgpreload\n");
+ if( !VG_(strstr)(newdi_filename, (Char*) VG_(libdir))) {
+ VG_(message)(Vg_DebugMsg, "not containing inner VG_(libdir) => ignoring redir in %s\n", newdi_filename);
+ return;
+ }
+ }
+ }
+#endif
+
/* stay sane: we don't already have this. */
for (ts = topSpecs; ts; ts = ts->next)
vg_assert(ts->seginfo != newdi);
From: Philippe W. <phi...@sk...> - 2012-02-19 20:08:51
On Sun, 2012-02-19 at 19:30 +0100, Philippe Waroquiers wrote:
> Outer helgrind + inner none goes further, but then the outer generates a SEGV
> when computing a backtrace when the thread 1 is cloning a thread 2.
Adding --vex-iropt-precise-memory-exns=yes to the inner none tool
seems to solve the problem (at least, the 4 threads are starting up).
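For reference, an outer/inner invocation with this flag might look like the sketch below. The paths are placeholders for two separate Valgrind builds, and the exact set of auxiliary options (e.g. --trace-children=yes on the outer) depends on your self-hosting setup; the key point is that --vex-iropt-precise-memory-exns=yes goes on the INNER valgrind.

```shell
# Hypothetical invocation sketch: an "outer" helgrind running an
# "inner" none tool on the test program.
OUTER=/path/to/outer/install/bin/valgrind
INNER=/path/to/inner/install/bin/valgrind   # built with ENABLE_INNER

$OUTER --tool=helgrind --trace-children=yes \
   $INNER --tool=none --vex-iropt-precise-memory-exns=yes \
   ./parallel_sleepers
```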
>
> Any idea if the patch below is the way to go ?
> Or if there is something which I have not understood/did wrong ?
If there is positive feedback about the patch below, I will clean it up
and modify README_DEVELOPERS to mention
--vex-iropt-precise-memory-exns=yes
Philippe
>
> Philippe
>
> Index: coregrind/m_redir.c
> ===================================================================
> --- coregrind/m_redir.c (revision 12391)
> +++ coregrind/m_redir.c (working copy)
> @@ -49,6 +49,7 @@
> #include "pub_core_xarray.h"
> #include "pub_core_clientstate.h" // VG_(client___libc_freeres_wrapper)
> #include "pub_core_demangle.h" // VG_(maybe_Z_demangle)
> +#include "pub_core_libcproc.h" // VG_(libdir)
>
> #include "config.h" /* GLIBC_2_* */
>
> @@ -389,6 +390,7 @@
> Bool isText;
> const UChar* newdi_soname;
>
> +
> # if defined(VG_PLAT_USES_PPCTOC)
> check_ppcTOCs = True;
> # endif
> @@ -397,6 +399,23 @@
> newdi_soname = VG_(DebugInfo_get_soname)(newdi);
> vg_assert(newdi_soname != NULL);
>
> +#ifdef ENABLE_INNER
> + {
> + const UChar* newdi_filename;
> + VG_(message)(Vg_DebugMsg, "VALGRIND_LIB %s\n", VG_(libdir));
> + newdi_filename = VG_(DebugInfo_get_filename)(newdi);
> + VG_(message)(Vg_DebugMsg, "checking ignoring redir in %s %s\n", newdi_soname, newdi_filename);
> + /* avoid reading the redirections which are for the outer. */
> + if (VG_(strstr)(newdi_filename, "/vgpreload")) {
> + VG_(message)(Vg_DebugMsg, "contains /vgpreload\n");
> + if( !VG_(strstr)(newdi_filename, (Char*) VG_(libdir))) {
> + VG_(message)(Vg_DebugMsg, "not containing inner VG_(libdir) => ignoring redir in %s\n", newdi_filename);
> + return;
> + }
> + }
> + }
> +#endif
> +
> /* stay sane: we don't already have this. */
> for (ts = topSpecs; ts; ts = ts->next)
> vg_assert(ts->seginfo != newdi);
From: Julian S. <js...@ac...> - 2012-02-20 23:20:40
> Any idea if the patch below is the way to go ?
> Or if there is something which I have not understood/did wrong ?

Your analysis + patch sound plausible, is the best I can say.
You are in unexplored territory -- I don't think anybody has been
down this road before.

J
From: Philippe W. <phi...@sk...> - 2012-02-22 21:25:13
On Tue, 2012-02-21 at 00:19 +0100, Julian Seward wrote:
> > Any idea if the patch below is the way to go ?
> > Or if there is something which I have not understood/did wrong ?
>
> Your analysis + patch sound plausible, is the best I can say.
> You are in unexplored territory -- I don't think anybody has been
> down this road before.
Continuing based on this patch: the whole thing was crashing with
SIGSEGV during stack trace production.
After investigation, here is what I believe is happening:
The inner Valgrind has two stacks for each thread it is running:
   the stack used by Valgrind itself
   the stack used by the synthetic CPU (used by the guest code).
When the guest process is doing a system call, the inner Valgrind
switches to the synthetic stack (so as to have
the system call executed in the desired guest context).
The outer Valgrind detects this stack switch
(and reports a "client switching stacks?" warning).
This is all OK.
Except that when the outer Valgrind has to produce a stack trace, it
does not have correct values for the stack limits: the 'fp_min' value
is the current value of the stack pointer (this is correct),
but the 'fp_max' value is taken from the synthetic CPU's stack,
whereas fp_max should differ depending on which of the two stacks
the inner Valgrind is currently working on.
This means that the stack-unwinding code believes it can continue
to unwind past the end of the stack segment.
This then causes a SIGSEGV.
I have solved this problem with the patch below (currently
only for the amd64 stack unwinding, but I think it should be
generalised).
The patch is based on the assumption that, whichever stack is in use,
the fp_max value must lie in the segment containing the current
stack pointer.
With this patch, I was able to run (on amd64) helgrind over none,
helgrind over memcheck, memcheck over memcheck, ...
I have annotated the pipe lock and the futex lock with RWLOCK
annotations. With this, helgrind detects some (but not many) possible
data races (on the current trunk, i.e. the "single threaded" version).
(Without the annotations, helgrind detected thousands of problems,
which is not surprising.)
The patch below must be in the "outer" valgrind, so it cannot be made
conditional on ENABLE_INNER.
Feedback about this analysis and the patch below would be appreciated.
Note that the code below should probably be put in a place common
to all architectures (e.g. close to the VG_(stack_limits)
call, or maybe even inside VG_(stack_limits)):
effectively, whatever the architecture, it looks like the highest
word of the stack must be in the segment where the stack pointer
currently is.
If the below is OK, I will prepare 3 (cleaned-up) patches:
1. a patch with the change below
2. the patch to have the inner ignore the vgpreload
   of the outer (conditional on ENABLE_INNER)
3. a helgrind annotation patch (conditional on ENABLE_INNER)
Thanks
Philippe
Index: coregrind/m_stacktrace.c
===================================================================
--- coregrind/m_stacktrace.c (revision 12398)
+++ coregrind/m_stacktrace.c (working copy)
@@ -253,6 +253,12 @@
if (fp_max >= sizeof(Addr))
fp_max -= sizeof(Addr);
+ {
+ const NSegment * stack = VG_(am_find_nsegment) ( uregs.xsp );
+ if (fp_max > stack->end)
+ fp_max = stack->end;
+ }
+
if (debug)
VG_(printf)("max_n_ips=%d fp_min=0x%lx fp_max_orig=0x%lx, "
"fp_max=0x%lx ip=0x%lx fp=0x%lx\n",
From: Julian S. <js...@ac...> - 2012-02-22 22:07:04
Amazing stuff.

> I have annotated the pipe lock and the futex lock with RWLOCK
> annotations. With this, helgrind detects some (but not many) possible
> data races (on the current trunk i.e. the "single threaded" version).

Quick question -- can you post some of the races? I am interested
to see what it found.

J
From: Philippe W. <phi...@sk...> - 2012-02-25 15:16:26
On Wed, 2012-02-22 at 23:05 +0100, Julian Seward wrote:
> > I have annotated the pipe lock and the futex lock with RWLOCK
> > annotations. With this, helgrind detects some (but not many) possible
> > data races (on the current trunk i.e. the "single threaded" version).
>
> Quick question -- can you post some of the races? I am interested
> to see what it found.

Here is a summary of the results of running x86 helgrind and drd on the
none tool. Note that the program being run (parallel_sleepers.c) itself
has a race condition, which is detected by the outer Valgrind.
Quite a lot of the errors are in the locking primitives (e.g. sema_up,
sema_down).

trace_outer_drd_trace_inner_none_parallel_--fair-sched=no.txt:==4391== ERROR SUMMARY: 595 errors from 31 contexts (suppressed: 0 from 0)
trace_outer_drd_trace_inner_none_parallel_--fair-sched=yes.txt:==4383== ERROR SUMMARY: 778 errors from 23 contexts (suppressed: 0 from 0)
trace_outer_helgrind_trace_inner_none_parallel_--fair-sched=no.txt:==4376== ERROR SUMMARY: 487 errors from 25 contexts (suppressed: 0 from 0)
trace_outer_helgrind_trace_inner_none_parallel_--fair-sched=yes.txt:==4369== ERROR SUMMARY: 23 errors from 11 contexts (suppressed: 0 from 0)

I have not investigated the errors in detail. In particular, I have no
idea why drd finds more errors than helgrind (both with and without
fair-sched). At first sight, the helgrind errors look plausible.

Patch, result files, test program and launcher script are attached to
https://bugs.kde.org/show_bug.cgi?id=294812

Before committing, it would be nice to have a review of the patch.
(I will in parallel work on the stack segment patch.)

Philippe
From: Julian S. <js...@ac...> - 2012-02-23 09:25:24
> If the below is ok, I will prepare 3 (cleaned up) patches:
> 1. a patch with the change below
> 2. the patch to have the inner ignoring the vgpreload
>    of the outer (conditionalised on ENABLE_INNER)
> 3. an helgrind annotation patch (conditionalised on ENABLE_INNER)

Ok for 2 and 3, but please please add the description from your
previous email as a big comment in 2, else nobody will understand
the rationale.

For 1 we maybe should clean up the unwinders so that they are
allowed to travel inside the segment found by VG_(am_find_nsegment)
and nowhere else. This would maybe clean up some ad hoc checks.
Problem is VG_(am_find_nsegment) is expensive and Helgrind can ask
for an unwind very very often (100K+ times/sec), so I would like to
see if there is a way we can cache its results on a per-thread basis,
somehow (since they will change only rarely).

J
From: Philippe W. <phi...@sk...> - 2012-02-26 17:34:41
On Thu, 2012-02-23 at 10:24 +0100, Julian Seward wrote:
> Ok for 2 and 3, but please please add the description from your
> previous email as a big comment in 2, else nobody will understand
> the rationale.
>
> For 1 we maybe should clean up the unwinders so that they are
> allowed to travel inside the segment found by VG_(am_find_nsegment)
> and nowhere else. This would maybe clean up some ad hoc checks.
> Problem is VG_(am_find_nsegment) is expensive and Helgrind
> can ask for an unwind very very often (100K+ times/sec) so I
> would like to see if there is a way we can cache its results
> on a per thread basis, somehow (since they will change only rarely).

Attached in https://bugs.kde.org/show_bug.cgi?id=294812 is a new
version of the patch, which solves the unwinder problem by using
register/deregister client requests in the inner
(tested with an outer on an inner on x86/amd64/ppc64).

Philippe
From: Bart V. A. <bva...@ac...> - 2012-03-08 19:43:34
On 02/22/12 22:05, Julian Seward wrote:
>> I have annotated the pipe lock and the futex lock with RWLOCK
>> annotations. With this, helgrind detects some (but not many) possible
>> data races (on the current trunk i.e. the "single threaded" version).
> Quick question -- can you post some of the races? I am interested
> to see what it found.

The number of races reported should have been reduced significantly for
r12437. I haven't analyzed the remaining reports yet. Here is an example
of what is still reported with drd as outer and as inner tool and for
client program drd/tests/tsan_unittest 3:

==29763== Conflicting store by thread 1 at 0x00638664 size 4
==29763==    at 0x280C8945: ??? (syscall-amd64-linux.S:147)
==29763==    by 0x7: ???
==29763==    by 0x3F66D0E1F: ???
==29763==    by 0x3F66D0E2F: ???
==29763==    by 0x28C6E97F: ???
==29763==    by 0xC9: ???
==29763==    by 0xC9: ???
==29763==    by 0x2902D17F: ???
==29763==    by 0xAF: ???
==29763== Allocation context: BSS section of /home/bart/software/valgrind.git/drd/tests/tsan_unittest
==29763== Other segment start (thread 2)
==29763==    at 0x280C87E7: vgModuleLocal_sema_up (sema.c:144)
==29763==    by 0x2807A328: vgPlain_release_BigLock (scheduler.c:302)
==29763==    by 0x2807CDEB: vgPlain_client_syscall (syswrap-main.c:1470)
==29763==    by 0x28079CCF: handle_syscall (scheduler.c:957)
==29763==    by 0x2807AEC9: vgPlain_scheduler (scheduler.c:1179)
==29763==    by 0x2808AD0E: run_a_thread_NORETURN (syswrap-linux.c:102)
==29763==    by 0x2808B05A: vgModuleLocal_start_thread_NORETURN (syswrap-linux.c:290)
==29763==    by 0x280A8A7D: ??? (in /home/bart/software/valgrind/drd/drd-amd64-linux)

And this is what is reported for the same program with helgrind as outer
and none as inner tool:

==30505== ---Thread-Announcement------------------------------------------
==30505==
==30505== Thread #2 was created
==30505==    at 0x280878C2: ??? (in /home/bart/software/valgrind/none/none-amd64-linux)
==30505==    by 0x2808AD08: vgSysWrap_amd64_linux_sys_clone_before (syswrap-amd64-linux.c:306)
==30505==    by 0x2805BB57: vgPlain_client_syscall (syswrap-main.c:1382)
==30505==    by 0x28058B1F: handle_syscall (scheduler.c:957)
==30505==    by 0x28059D19: vgPlain_scheduler (scheduler.c:1179)
==30505==    by 0x28069B5E: run_a_thread_NORETURN (syswrap-linux.c:102)
==30505==
==30505== ---Thread-Announcement------------------------------------------
==30505==
==30505== Thread #1 is the program's root thread
==30505==
==30505== ----------------------------------------------------------------
==30505==
==30505== Lock at 0x3F18011B0 was first observed
==30505==    at 0x280B150E: vgModuleLocal_sema_init (sema.c:79)
==30505==    by 0x2805ACFF: create_sched_lock (sched-lock-generic.c:55)
==30505==    by 0x28058DA2: init_BigLock (scheduler.c:308)
==30505==    by 0x28059568: vgPlain_scheduler_init_phase1 (scheduler.c:566)
==30505==    by 0x2801DEBA: valgrind_main (m_main.c:2013)
==30505==    by 0x28021755: _start_in_C_linux (m_main.c:2799)
==30505==    by 0x2801C510: ??? (in /home/bart/software/valgrind/none/none-amd64-linux)
==30505==
==30505== Possible data race during write of size 4 at 0x638664 by thread #2
==30505== Locks held: 1, at address 0x3F18011B0
==30505==    at 0x3F65C8730: ???
==30505==    by 0x183CA: ???
==30505==    by 0x28C3C72F: ???
==30505==    by 0x3F800FF4F: ???
==30505==    by 0x3F800FEBF: ???
==30505==    by 0x28C3C71F: ???
==30505==    by 0xD968621: pthread_cond_signal@@GLIBC_2.3.2 (pthread_cond_signal.S:52)
==30505==
==30505== This conflicts with a previous read of size 4 by thread #1
==30505== Locks held: none
==30505==    at 0x280B1AF5: ??? (syscall-amd64-linux.S:147)
==30505==    by 0x7: ???
==30505==    by 0x3F653EE1F: ???
==30505==    by 0x3F653EE2F: ???
==30505==    by 0x28C3AE9F: ???
==30505==    by 0xC9: ???
==30505==    by 0xC9: ???
==30505==    by 0x28FF969F: ???

Bart.
From: Philippe W. <phi...@sk...> - 2012-03-08 20:13:23
On Thu, 2012-03-08 at 19:43 +0000, Bart Van Assche wrote:
> The number of races reported should have been reduced significantly for
> r12437. I haven't analyzed the remaining reports yet. Here is an example
> of what is still reported with drd as outer and as inner tool and for
> client program drd/tests/tsan_unittest 3:

Nice work.

In some cases, the outer Valgrind is detecting "bugs" in the program
executed by the inner Valgrind. At least, that was my conclusion after
analysing in depth a race condition reported while running the parallel
sleeper test: the address given in the race condition reported by the
outer Valgrind was the address of a variable of the program executed by
the inner Valgrind, and that variable was effectively not properly
protected. As the code JIT-ted by the inner Valgrind contains no debug
info, the outer Valgrind cannot produce a proper stack trace for it.
No idea whether the reports you posted are the same case.

A side note on the outer/inner activities I am working on: I am
currently having headaches running the regression tests in an
outer/inner setup. All the 32-bit tests fail when run on a 64-bit
bi-arch platform (something nasty in the aspacemgr; the same tests work
fine on a 32-bit Fedora x86). Apart from this, the 64-bit tests are
working reasonably well. If I cannot solve the 32-bits-on-64-bits
problem this weekend, I will commit in the current state.

Philippe