From: Philippe W. <phi...@sk...> - 2015-02-02 21:41:13
|
On Mon, 2015-02-02 at 07:51 +0100, Matthias Schwarzott wrote:
> Hi!
>
> Just out of interest: Is the new x32 ABI for amd64 already supported in
> valgrind? From looking at the source-code I would guess not.
No, it is not supported.
> What would be necessary to support this?
Good question; it is not clear to me.
> Ideas:
> * Support the missing syscalls.
Are there many x32-specific syscalls? Or would all (or most) of the wrappers
need rewriting? And what about the VEX lib, which I guess might be impacted
a lot?
> * Support for more "secondary" archs (Makefile cleanup)
I wonder if we shouldn't drop the idea of having primary and secondary
platforms, and instead have a kind of 'top' configure/make that would be
called once per arch. That might be simpler than a primary/secondary/tertiary
approach. At my work, for strange reasons, we have to use two different
compilers for 32 bits and 64 bits. The easiest solution was to configure/make
with one compiler, then with the other compiler in another directory, and
call make install twice (thereby adding files, or replacing files). The only
thing to pay attention to is that some programs (e.g. valgrind, vgdb, ...)
have to be 'multi-arch', i.e. support whatever is necessary, and that the
'richer' arch is installed last (richer = e.g. the 64-bit vgdb, which can
work with both 32-bit and 64-bit 'clients'). At least initially, adding x32
as a new primary arch would already be a gigantic first step; the concept of
a tertiary platform can for sure be done later :).
> * A valgrind-loader for x32
You mean the 'normal' executable that launches the tool? That one is pretty
independent of the arch, to my knowledge.
> * Making 32bit vs. 64bit memory management depend on ABI and not on cpu
> architecture.
Not clear what you mean here.
For sure, assuming x32 starts to be used significantly, adding x32 support
will be a nice thing to have. (Note, however, that I have seen no
request/demand for it up to now.)
Philippe |
|
From: Florian K. <fl...@ei...> - 2015-02-02 18:16:42
|
Consider this snippet from m_addrinfo.c:
const NSegment *seg = VG_(am_find_nsegment) (a);
...
if (seg->kind == SkFileC)
ai->Addr.SegmentKind.filename
= VG_(strdup)("mc.da.skfname", VG_(am_get_filename)(seg));
ai->Addr.SegmentKind.hasR = seg->hasR;
ai->Addr.SegmentKind.hasW = seg->hasW;
ai->Addr.SegmentKind.hasX = seg->hasX;
There is nothing wrong here. It is clear, though, that the code
assumes that strdup does not modify the memory pointed to by 'seg'.
But that is not necessarily true. The problem occurs when there is
no memory available to store the string. In that case we may get the
following call chain:
- strdup
- arena_malloc
- newSuperblock
- am_mmap_anon_float_valgrind
- add_segment
- split_nsegments_lo_and_hi
- split_nsegment_at
The last function in this chain may do this:
for (j = nsegments_used-1; j > i; j--)
nsegments[j+1] = nsegments[j];
So... with 'seg' being a pointer into the nsegments array, this means that
after the strdup the contents of the memory pointed to by seg
may have been changed by the above loop.
Not so good.
The pointers to NSegment that we return and deal with in the various
externally visible aspacemgr functions cannot be pointers into the
sorted array of nsegments if its contents get moved around.
I'm currently thinking of adding
NSegment *sorted_nsegments[VG_N_SEGMENTS]
as a fix. Basically, introducing another level of indirection. But I
haven't given it too much thought yet and am open for ideas.
I won't be getting to fixing this until sometime next week.
Florian
|
|
From: Florian K. <fl...@ei...> - 2015-02-02 17:21:53
|
On 31.01.2015 01:29, sv...@va... wrote:
>
> For a segment name to become unused there must be an assignment to
> NSegment::fnIdx which was previously assigned a return value from
> allocate_segname.
Yes.
> There is no such assignment.
Wrong. I missed this assignment (in pass #2 of preen_segments):
nsegments[w] = nsegments[r];
This happens when two named segments can be merged. One would expect (at
least I would) that only named segments with the same name can be merged
but that is not the case. See thread entitled "a question about
maybe_merge_segments" of Nov 11, 2014.
So with r14898 we could have an occasional segment name leak. I did a
few experiments. No named segments were merged (I have never seen that
happen).
              #segments   #segnames   SegName size   string table size
  abiword          1175         290         290580               14203
  vlc               960         665         666330               33171
  avidemux         1049         243         243486               12338
All sizes in bytes.
The SegName size is sizeof(SegName) * #segment names.
For all experiments the savings are approximately 95%.
That looks like a significant enough improvement to me to allow an
occasional, if any, leak of a segment name.
Florian
|
|
From: Matthias S. <zz...@ge...> - 2015-02-02 06:51:24
|
Hi!

Just out of interest: Is the new x32 ABI for amd64 already supported in
valgrind? From looking at the source-code I would guess not.

What would be necessary to support this? Ideas:
* Support the missing syscalls.
* Support for more "secondary" archs (Makefile cleanup)
* A valgrind-loader for x32
* Making 32bit vs. 64bit memory management depend on ABI and not on cpu
  architecture.

Regards
Matthias |
|
From: Mike L. <mik...@gm...> - 2015-02-01 21:24:02
|
Ah, I overlooked going through the Valgrind options first. Using
--time-stamp=yes along with VG_(message) should be both valid for what I
want, and will also abide by general Valgrind conventions. Thank you!

Mike

On Sun, Feb 1, 2015 at 4:22 PM, Philippe Waroquiers <phi...@sk...> wrote:
> On Sun, 2015-02-01 at 16:06 -0500, Mike Lui wrote:
> > Hi!
> >
> > I'm writing a logger for a profiler that attaches to Callgrind, and I
> > wanted to log the time for my error/log messages, along with the
> > message itself. I'm using Valgrind's VG_(message) system, and plan to
> > integrate Valgrind's errormgr at some point, but right now I just want
> > to deal with it myself.
> >
> > I didn't find Valgrind wrappers to deal with system time and wanted to
> > know if there's anything built in to request time? For now, I'm
> > including <time.h> and using the
> >   time(time_t* rawtime)
> >   strftime(char*, int, const char*, localtime(*time_t))
> > methods.
> >
> > Mainly I want to know if there's a built in Valgrind method for this,
> > or if I should just muck around with the Makefiles to include <time.h>
> > manually, since I get include errors with
> >   #include <time.h>
>
> Valgrind tools cannot make any use of glibc or any other library.
> So, you cannot use strftime.
>
> If you give option --time-stamp=yes, then user messages
> will contain an elapsed time since Valgrind startup.
>
> You will be able to call the __NR_time syscall in valgrind, but
> there is no function in valgrind that will print the result
> in a human readable form.
>
> Philippe
 |
|
From: Philippe W. <phi...@sk...> - 2015-02-01 21:21:05
|
On Sun, 2015-02-01 at 16:06 -0500, Mike Lui wrote:
> Hi!
>
> I'm writing a logger for a profiler that attaches to Callgrind, and I
> wanted to log the time for my error/log messages, along with the
> message itself. I'm using Valgrind's VG_(message) system, and plan to
> integrate Valgrind's errormgr at some point, but right now I just want
> to deal with it myself.
>
> I didn't find Valgrind wrappers to deal with system time and wanted to
> know if there's anything built in to request time? For now, I'm
> including <time.h> and using the
>   time(time_t* rawtime)
>   strftime(char*, int, const char*, localtime(*time_t))
> methods.
>
> Mainly I want to know if there's a built in Valgrind method for this,
> or if I should just muck around with the Makefiles to include <time.h>
> manually, since I get include errors with
>   #include <time.h>

Valgrind tools cannot make any use of glibc or any other library.
So, you cannot use strftime.

If you give option --time-stamp=yes, then user messages
will contain an elapsed time since Valgrind startup.

You will be able to call the __NR_time syscall in valgrind, but
there is no function in valgrind that will print the result
in a human readable form.

Philippe |
|
From: Mike L. <mik...@gm...> - 2015-02-01 21:06:37
|
Hi!

I'm writing a logger for a profiler that attaches to Callgrind, and I
wanted to log the time for my error/log messages, along with the message
itself. I'm using Valgrind's VG_(message) system, and plan to integrate
Valgrind's errormgr at some point, but right now I just want to deal with
it myself.

I didn't find Valgrind wrappers to deal with system time and wanted to
know if there's anything built in to request time? For now, I'm including
<time.h> and using the
  time(time_t* rawtime)
  strftime(char*, int, const char*, localtime(*time_t))
methods.

Mainly I want to know if there's a built in Valgrind method for this, or
if I should just muck around with the Makefiles to include <time.h>
manually, since I get include errors with
  #include <time.h>

Thanks,
Mike |
|
From: <ma...@bu...> - 2015-02-01 04:34:16
|
valgrind revision: 14898
VEX revision: 3080
C compiler: gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-7)
GDB: GNU gdb (GDB) Fedora 7.7.1-21.fc20
Assembler: GNU assembler version 2.23.2
C library: GNU C Library (GNU libc) stable release version 2.18
uname -mrs: Linux 3.17.7-200.fc20.s390x s390x
Vendor version: Fedora 20 (Heisenbug)
Nightly build on lfedora1 ( Fedora release 20 (Heisenbug), s390x )
Started at 2015-02-01 00:00:01 UTC
Ended at 2015-02-01 00:57:09 UTC
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 670 tests, 3 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
helgrind/tests/locked_vs_unlocked2 (stderr)
helgrind/tests/pth_cond_destroy_busy (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
=================================================
./valgrind-new/helgrind/tests/locked_vs_unlocked2.stderr.diff
=================================================
--- locked_vs_unlocked2.stderr.exp 2015-02-01 00:28:32.788994625 +0000
+++ locked_vs_unlocked2.stderr.out 2015-02-01 00:46:49.378994625 +0000
@@ -16,13 +16,13 @@
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: main (locked_vs_unlocked2.c:58)
- Address 0x........ is 0 bytes inside data symbol "mx2a"
+ by 0x........: main (locked_vs_unlocked2.c:59)
+ Address 0x........ is 0 bytes inside data symbol "mx2b"
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: main (locked_vs_unlocked2.c:59)
- Address 0x........ is 0 bytes inside data symbol "mx2b"
+ by 0x........: main (locked_vs_unlocked2.c:58)
+ Address 0x........ is 0 bytes inside data symbol "mx2a"
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
=================================================
./valgrind-new/helgrind/tests/pth_cond_destroy_busy.stderr.diff
=================================================
--- pth_cond_destroy_busy.stderr.exp 2015-02-01 00:28:32.748994625 +0000
+++ pth_cond_destroy_busy.stderr.out 2015-02-01 00:46:56.318994625 +0000
@@ -47,4 +47,4 @@
First pthread_cond_destroy() call returned EBUSY.
Second pthread_cond_destroy() call returned success.
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 6 errors from 3 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/helgrind/tests/tc22_exit_w_lock.stderr.diff
=================================================
--- tc22_exit_w_lock.stderr.exp 2015-02-01 00:28:32.738994625 +0000
+++ tc22_exit_w_lock.stderr.out 2015-02-01 00:48:17.378994625 +0000
@@ -13,6 +13,23 @@
---Thread-Announcement------------------------------------------
+Thread #x is the program's root thread
+
+----------------------------------------------------------------
+
+Possible data race during write of size 8 at 0x........ by thread #x
+Locks held: none
+ ...
+ by 0x........: pthread_create@* (hg_intercepts.c:...)
+ by 0x........: main (tc22_exit_w_lock.c:42)
+
+This conflicts with a previous read of size 8 by thread #x
+Locks held: none
+ ...
+ Address 0x........ is in a rw- anonymous segment
+
+---Thread-Announcement------------------------------------------
+
Thread #x was created
...
by 0x........: pthread_create@* (hg_intercepts.c:...)
@@ -23,10 +40,6 @@
Thread #x: Exiting thread still holds 1 lock
...
----Thread-Announcement------------------------------------------
-
-Thread #x is the program's root thread
-
----------------------------------------------------------------
Thread #x: Exiting thread still holds 1 lock
@@ -34,4 +47,4 @@
by 0x........: main (tc22_exit_w_lock.c:48)
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 5 errors from 4 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/helgrind/tests/tc22_exit_w_lock.stderr.diff-kfail-x86
=================================================
--- tc22_exit_w_lock.stderr.exp-kfail-x86 2015-02-01 00:28:32.748994625 +0000
+++ tc22_exit_w_lock.stderr.out 2015-02-01 00:48:17.378994625 +0000
@@ -3,7 +3,6 @@
Thread #x was created
...
- by 0x........: pthread_create_WRK (hg_intercepts.c:...)
by 0x........: pthread_create@* (hg_intercepts.c:...)
by 0x........: main (tc22_exit_w_lock.c:39)
@@ -14,9 +13,25 @@
---Thread-Announcement------------------------------------------
+Thread #x is the program's root thread
+
+----------------------------------------------------------------
+
+Possible data race during write of size 8 at 0x........ by thread #x
+Locks held: none
+ ...
+ by 0x........: pthread_create@* (hg_intercepts.c:...)
+ by 0x........: main (tc22_exit_w_lock.c:42)
+
+This conflicts with a previous read of size 8 by thread #x
+Locks held: none
+ ...
+ Address 0x........ is in a rw- anonymous segment
+
+---Thread-Announcement------------------------------------------
+
Thread #x was created
...
- by 0x........: pthread_create_WRK (hg_intercepts.c:...)
by 0x........: pthread_create@* (hg_intercepts.c:...)
by 0x........: main (tc22_exit_w_lock.c:42)
@@ -25,14 +40,11 @@
Thread #x: Exiting thread still holds 1 lock
...
----Thread-Announcement------------------------------------------
-
-Thread #x is the program's root thread
-
----------------------------------------------------------------
Thread #x: Exiting thread still holds 1 lock
...
+ by 0x........: main (tc22_exit_w_lock.c:48)
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 5 errors from 4 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/helgrind/tests/locked_vs_unlocked2.stderr.diff
=================================================
--- locked_vs_unlocked2.stderr.exp 2015-02-01 00:00:16.728994625 +0000
+++ locked_vs_unlocked2.stderr.out 2015-02-01 00:18:35.438994625 +0000
@@ -16,13 +16,13 @@
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: main (locked_vs_unlocked2.c:58)
- Address 0x........ is 0 bytes inside data symbol "mx2a"
+ by 0x........: main (locked_vs_unlocked2.c:59)
+ Address 0x........ is 0 bytes inside data symbol "mx2b"
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: main (locked_vs_unlocked2.c:59)
- Address 0x........ is 0 bytes inside data symbol "mx2b"
+ by 0x........: main (locked_vs_unlocked2.c:58)
+ Address 0x........ is 0 bytes inside data symbol "mx2a"
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
=================================================
./valgrind-old/helgrind/tests/pth_cond_destroy_busy.stderr.diff
=================================================
--- pth_cond_destroy_busy.stderr.exp 2015-02-01 00:00:16.688994625 +0000
+++ pth_cond_destroy_busy.stderr.out 2015-02-01 00:18:42.258994625 +0000
@@ -47,4 +47,4 @@
First pthread_cond_destroy() call returned EBUSY.
Second pthread_cond_destroy() call returned success.
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 6 errors from 3 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/helgrind/tests/tc22_exit_w_lock.stderr.diff
=================================================
--- tc22_exit_w_lock.stderr.exp 2015-02-01 00:00:16.668994625 +0000
+++ tc22_exit_w_lock.stderr.out 2015-02-01 00:20:00.788994625 +0000
@@ -13,6 +13,23 @@
---Thread-Announcement------------------------------------------
+Thread #x is the program's root thread
+
+----------------------------------------------------------------
+
+Possible data race during write of size 8 at 0x........ by thread #x
+Locks held: none
+ ...
+ by 0x........: pthread_create@* (hg_intercepts.c:...)
+ by 0x........: main (tc22_exit_w_lock.c:42)
+
+This conflicts with a previous read of size 8 by thread #x
+Locks held: none
+ ...
+ Address 0x........ is in a rw- anonymous segment
+
+---Thread-Announcement------------------------------------------
+
Thread #x was created
...
by 0x........: pthread_create@* (hg_intercepts.c:...)
@@ -23,10 +40,6 @@
Thread #x: Exiting thread still holds 1 lock
...
----Thread-Announcement------------------------------------------
-
-Thread #x is the program's root thread
-
----------------------------------------------------------------
Thread #x: Exiting thread still holds 1 lock
@@ -34,4 +47,4 @@
by 0x........: main (tc22_exit_w_lock.c:48)
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 5 errors from 4 contexts (suppressed: 0 from 0)
=================================================
./valgrind-old/helgrind/tests/tc22_exit_w_lock.stderr.diff-kfail-x86
=================================================
--- tc22_exit_w_lock.stderr.exp-kfail-x86 2015-02-01 00:00:16.688994625 +0000
+++ tc22_exit_w_lock.stderr.out 2015-02-01 00:20:00.788994625 +0000
@@ -3,7 +3,6 @@
Thread #x was created
...
- by 0x........: pthread_create_WRK (hg_intercepts.c:...)
by 0x........: pthread_create@* (hg_intercepts.c:...)
by 0x........: main (tc22_exit_w_lock.c:39)
@@ -14,9 +13,25 @@
---Thread-Announcement------------------------------------------
+Thread #x is the program's root thread
+
+----------------------------------------------------------------
+
+Possible data race during write of size 8 at 0x........ by thread #x
+Locks held: none
+ ...
+ by 0x........: pthread_create@* (hg_intercepts.c:...)
+ by 0x........: main (tc22_exit_w_lock.c:42)
+
+This conflicts with a previous read of size 8 by thread #x
+Locks held: none
+ ...
+ Address 0x........ is in a rw- anonymous segment
+
+---Thread-Announcement------------------------------------------
+
Thread #x was created
...
- by 0x........: pthread_create_WRK (hg_intercepts.c:...)
by 0x........: pthread_create@* (hg_intercepts.c:...)
by 0x........: main (tc22_exit_w_lock.c:42)
@@ -25,14 +40,11 @@
Thread #x: Exiting thread still holds 1 lock
...
----Thread-Announcement------------------------------------------
-
-Thread #x is the program's root thread
-
----------------------------------------------------------------
Thread #x: Exiting thread still holds 1 lock
...
+ by 0x........: main (tc22_exit_w_lock.c:48)
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 5 errors from 4 contexts (suppressed: 0 from 0)
|
|
From: <ma...@bu...> - 2015-02-01 02:16:24
|
valgrind revision: 14898
VEX revision: 3080
C compiler: gcc (Debian 4.7.2-5) 4.7.2
GDB: GNU gdb (GDB) 7.4.1-debian
Assembler: GNU assembler (GNU Binutils for Debian) 2.22
C library: GNU C Library (Debian EGLIBC 2.13-38+deb7u7) stable release version 2.13
uname -mrs: Linux 3.2.0-4-amd64 x86_64
Vendor version: Debian GNU/Linux 7 (wheezy)
Nightly build on wildebeest ( Debian 7.8 wheezy x86_64 )
Started at 2015-02-01 00:00:01 UTC
Ended at 2015-02-01 02:15:24 UTC
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 686 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
helgrind/tests/pth_destroy_cond (stderr)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... done
Regression test results follow
== 686 tests, 0 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short 2015-02-01 00:31:10.566260783 +0000
--- new.short 2015-02-01 00:58:43.333997170 +0000
***************
*** 4,6 ****
Building valgrind ... done
! Running regression tests ... done
--- 4,6 ----
Building valgrind ... done
! Running regression tests ... failed
***************
*** 8,10 ****
! == 686 tests, 0 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
--- 8,11 ----
! == 686 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
! helgrind/tests/pth_destroy_cond (stderr)
--tools=none,memcheck,callgrind,helgrind,cachegrind,drd,massif --reps=3 --vg=../valgrind-new --vg=../valgrind-old
-- Running tests in perf ----------------------------------------------
-- bigcode1 --
bigcode1 valgrind-new:0.14s no: 2.4s (16.9x, -----) me: 4.9s (35.3x, -----) ca:18.9s (134.7x, -----) he: 3.1s (21.9x, -----) ca: 5.5s (38.9x, -----) dr: 2.9s (20.9x, -----) ma: 2.7s (19.0x, -----)
bigcode1 valgrind-old:0.14s no: 2.4s (17.0x, -0.8%) me: 4.7s (33.6x, 4.9%) ca:18.8s (134.1x, 0.4%) he: 2.8s (20.1x, 7.8%) ca: 5.2s (36.9x, 5.1%) dr: 2.7s (19.3x, 7.5%) ma: 2.7s (19.0x, 0.0%)
-- bigcode2 --
bigcode2 valgrind-new:0.14s no: 5.6s (39.9x, -----) me:11.8s (84.0x, -----) ca:37.1s (265.1x, -----) he: 7.0s (50.1x, -----) ca:10.1s (71.8x, -----) dr: 6.6s (47.0x, -----) ma: 6.4s (45.5x, -----)
bigcode2 valgrind-old:0.14s no: 5.5s (39.3x, 1.4%) me:11.9s (85.0x, -1.2%) ca:36.5s (261.1x, 1.5%) he: 6.8s (48.6x, 3.1%) ca: 9.5s (67.9x, 5.5%) dr: 6.4s (46.0x, 2.1%) ma: 6.5s (46.5x, -2.2%)
-- bz2 --
bz2 valgrind-new:0.68s no: 2.4s ( 3.6x, -----) me: 8.7s (12.8x, -----) ca:15.9s (23.3x, -----) he:11.1s (16.2x, -----) ca:13.5s (19.9x, -----) dr:14.4s (21.2x, -----) ma: 2.2s ( 3.2x, -----)
bz2 valgrind-old:0.68s no: 2.3s ( 3.3x, 7.0%) me: 7.7s (11.3x, 11.6%) ca:15.9s (23.4x, -0.3%) he:11.6s (17.0x, -4.7%) ca:13.9s (20.5x, -3.2%) dr:15.0s (22.1x, -4.3%) ma: 2.3s ( 3.4x, -6.0%)
-- fbench --
fbench valgrind-new:0.29s no: 1.3s ( 4.6x, -----) me: 5.0s (17.3x, -----) ca: 7.3s (25.3x, -----) he: 3.9s (13.5x, -----) ca: 4.0s (13.8x, -----) dr: 3.3s (11.4x, -----) ma: 1.5s ( 5.0x, -----)
fbench valgrind-old:0.29s no: 1.4s ( 4.7x, -1.5%) me: 4.8s (16.7x, 3.6%) ca: 7.4s (25.4x, -0.5%) he: 3.9s (13.4x, 0.5%) ca: 4.0s (13.9x, -0.5%) dr: 3.3s (11.4x, 0.0%) ma: 1.5s ( 5.1x, -0.7%)
-- ffbench --
ffbench valgrind-new:0.28s no: 1.3s ( 4.6x, -----) me: 4.0s (14.1x, -----) ca: 2.4s ( 8.6x, -----) he:10.0s (35.6x, -----) ca: 5.2s (18.6x, -----) dr: 4.4s (15.7x, -----) ma: 1.2s ( 4.3x, -----)
ffbench valgrind-old:0.28s no: 1.3s ( 4.6x, 0.0%) me: 4.0s (14.2x, -1.0%) ca: 2.5s ( 9.0x, -4.1%) he: 9.7s (34.7x, 2.6%) ca: 5.6s (20.1x, -7.9%) dr: 4.5s (16.1x, -3.0%) ma: 1.2s ( 4.3x, 0.0%)
-- heap --
heap valgrind-new:0.12s no: 0.9s ( 7.7x, -----) me: 7.3s (60.6x, -----) ca: 7.8s (64.7x, -----) he: 9.2s (76.6x, -----) ca: 4.0s (33.8x, -----) dr: 5.7s (47.5x, -----) ma: 6.2s (52.0x, -----)
heap valgrind-old:0.12s no: 0.8s ( 7.1x, 7.6%) me: 7.1s (59.3x, 2.2%) ca: 7.8s (65.2x, -0.8%) he: 9.0s (75.2x, 1.7%) ca: 4.0s (33.5x, 0.7%) dr: 5.5s (46.2x, 2.8%) ma: 6.4s (53.1x, -2.1%)
-- heap_pdb4 --
heap_pdb4 valgrind-new:0.16s no: 1.0s ( 6.0x, -----) me:12.4s (77.5x, -----) ca: 8.8s (55.2x, -----) he:10.8s (67.4x, -----) ca: 4.5s (28.1x, -----) dr: 6.6s (41.5x, -----) ma: 6.6s (41.1x, -----)
heap_pdb4 valgrind-old:0.16s no: 0.9s ( 5.8x, 3.1%) me:11.5s (71.9x, 7.2%) ca: 9.0s (56.1x, -1.6%) he:10.5s (65.8x, 2.4%) ca: 4.6s (28.7x, -2.2%) dr: 6.5s (40.5x, 2.4%) ma: 6.5s (40.4x, 1.7%)
-- many-loss-records --
many-loss-records valgrind-new:0.01s no: 0.4s (37.0x, -----) me: 1.9s (191.0x, -----) ca: 1.4s (136.0x, -----) he: 1.7s (169.0x, -----) ca: 0.9s (90.0x, -----) dr: 1.5s (146.0x, -----) ma: 1.4s (137.0x, -----)
many-loss-records valgrind-old:0.01s no: 0.4s (44.0x,-18.9%) me: 1.9s (190.0x, 0.5%) ca: 1.3s (133.0x, 2.2%) he: 1.6s (165.0x, 2.4%) ca: 0.9s (88.0x, 2.2%) dr: 1.4s (143.0x, 2.1%) ma: 1.3s (133.0x, 2.9%)
-- many-xpts --
many-xpts valgrind-new:0.03s no: 0.4s (14.7x, -----) me: 2.5s (83.7x, -----) ca: 3.4s (112.3x, -----) he: 3.1s (104.7x, -----) ca: 1.3s (42.3x, -----) dr: 1.9s (65.0x, -----) ma: 2.0s (66.7x, -----)
many-xpts valgrind-old:0.03s no: 0.4s (14.3x, 2.3%) me: 2.5s (83.0x, 0.8%) ca: 3.4s (114.0x, -1.5%) he: 3.1s (103.3x, 1.3%) ca: 1.2s (41.0x, 3.1%) dr: 1.9s (63.7x, 2.1%) ma: 2.0s (68.0x, -2.0%)
-- sarp --
sarp valgrind-new:0.02s no: 0.4s (19.0x, -----) me: 3.2s (160.0x, -----) ca: 2.4s (117.5x, -----) he:10.1s (504.0x, -----) ca: 1.2s (62.0x, -----) dr: 1.3s (64.0x, -----) ma: 0.4s (21.0x, -----)
sarp valgrind-old:0.02s no: 0.4s (19.5x, -2.6%) me: 3.2s (161.0x, -0.6%) ca: 2.2s (111.5x, 5.1%) he: 9.9s (495.0x, 1.8%) ca: 1.2s (61.5x, 0.8%) dr: 1.3s (64.0x, 0.0%) ma: 0.4s (21.0x, 0.0%)
-- tinycc --
tinycc valgrind-new:0.24s no: 1.9s ( 8.1x, -----) me:11.5s (47.8x, -----) ca:14.0s (58.4x, -----) he:13.1s (54.4x, -----) ca:10.5s (43.6x, -----) dr:10.2s (42.4x, -----) ma: 3.3s (13.6x, -----)
tinycc valgrind-old:0.24s no: 1.9s ( 8.0x, 1.0%) me:11.0s (45.9x, 4.0%) ca:13.8s (57.4x, 1.7%) he:13.0s (54.2x, 0.4%) ca:10.4s (43.3x, 0.7%) dr:10.1s (42.2x, 0.4%) ma: 3.2s (13.5x, 0.6%)
-- Finished tests in perf ----------------------------------------------
== 11 programs, 154 timings =================
2776.25user 33.07system 1:16:40elapsed 61%CPU (0avgtext+0avgdata 499708maxresident)k
141528inputs+453632outputs (257major+11734856minor)pagefaults 0swaps
=================================================
./valgrind-new/helgrind/tests/pth_destroy_cond.stderr.diff
=================================================
--- pth_destroy_cond.stderr.exp 2015-02-01 00:31:17.258259607 +0000
+++ pth_destroy_cond.stderr.out 2015-02-01 00:47:54.894104707 +0000
@@ -5,6 +5,34 @@
by 0x........: pthread_create@* (hg_intercepts.c:...)
by 0x........: main (pth_destroy_cond.c:29)
+---Thread-Announcement------------------------------------------
+
+Thread #x is the program's root thread
+
+----------------------------------------------------------------
+
+ Lock at 0x........ was first observed
+ at 0x........: pthread_mutex_init (hg_intercepts.c:...)
+ by 0x........: main (pth_destroy_cond.c:25)
+ Address 0x........ is 0 bytes inside data symbol "mutex"
+
+Possible data race during read of size 1 at 0x........ by thread #x
+Locks held: 1, at address 0x........
+ at 0x........: my_memcmp (hg_intercepts.c:...)
+ by 0x........: pthread_cond_destroy_WRK (hg_intercepts.c:...)
+ by 0x........: pthread_cond_destroy@* (hg_intercepts.c:...)
+ by 0x........: ThreadFunction (pth_destroy_cond.c:18)
+ by 0x........: mythread_wrapper (hg_intercepts.c:...)
+ ...
+
+This conflicts with a previous write of size 4 by thread #x
+Locks held: none
+ ...
+ by 0x........: pthread_cond_wait_WRK (hg_intercepts.c:...)
+ by 0x........: pthread_cond_wait@* (hg_intercepts.c:...)
+ by 0x........: main (pth_destroy_cond.c:31)
+ Address 0x........ is 4 bytes inside data symbol "cond"
+
----------------------------------------------------------------
Thread #x: pthread_cond_destroy: destruction of condition variable being waited upon
|