From: Tom H. <to...@co...> - 2006-04-18 23:31:41
|
In message <200...@ac...> you wrote:

> > I seem to recall that I decided that the problem was a lack
> > of DWARF debug information for the system call frame when valgrind
> > replaces the vDSO routines with its own ones. The system ones
> > deliberately have DWARF unwind information and ours don't.
>
> That would make sense. From Joe's printout it seems like the
> unwinder manages to unwind 6 frames before going wrong, so it's
> not the signal delivery frame that's the problem.
>
> Uh .. but which of our routines did you mean? I'm unclear.
> Is it VG_(x86_linux_REDIR_FOR__dl_sysinfo_int80) ?

That's the routine I was thinking of, yes.

Tom

--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: <sv...@va...> - 2006-04-18 22:34:52
|
Author: njn
Date: 2006-04-18 23:34:48 +0100 (Tue, 18 Apr 2006)
New Revision: 5857

Log:
- Fix indentation in one section of Cachegrind
- In the same section, use VG_(percentify) to avoid overflow when computing
  information for -v printing.

Modified:
   trunk/cachegrind/cg_main.c

Modified: trunk/cachegrind/cg_main.c
===================================================================
--- trunk/cachegrind/cg_main.c	2006-04-18 02:04:52 UTC (rev 5856)
+++ trunk/cachegrind/cg_main.c	2006-04-18 22:34:48 UTC (rev 5857)
@@ -1105,7 +1105,7 @@
 
 static void cg_fini(Int exitcode)
 {
-   static Char buf1[128], buf2[128], buf3[128], fmt [128];
+   static Char buf1[128], buf2[128], buf3[128], buf4[123], fmt[128];
 
    CC D_total;
    ULong L2_total_m, L2_total_mr, L2_total_mw,
@@ -1195,29 +1195,35 @@
 
    // Various stats
    if (VG_(clo_verbosity) > 1) {
-       Int debug_lookups = full_debugs      + fn_debugs +
-                           file_line_debugs + no_debugs;
+      Int debug_lookups = full_debugs      + fn_debugs +
+                          file_line_debugs + no_debugs;
 
-       VG_(message)(Vg_DebugMsg, "");
-       VG_(message)(Vg_DebugMsg, "cachegrind: distinct files: %d", distinct_files);
-       VG_(message)(Vg_DebugMsg, "cachegrind: distinct fns:   %d", distinct_fns);
-       VG_(message)(Vg_DebugMsg, "cachegrind: distinct lines: %d", distinct_lines);
-       VG_(message)(Vg_DebugMsg, "cachegrind: distinct instrs:%d", distinct_instrs);
-       VG_(message)(Vg_DebugMsg, "cachegrind: debug lookups      : %d", debug_lookups);
-       VG_(message)(Vg_DebugMsg, "cachegrind: with full      info:%3d%% (%d)", 
-                    full_debugs * 100 / debug_lookups, full_debugs);
-       VG_(message)(Vg_DebugMsg, "cachegrind: with file/line info:%3d%% (%d)", 
-                    file_line_debugs * 100 / debug_lookups, file_line_debugs);
-       VG_(message)(Vg_DebugMsg, "cachegrind: with fn name   info:%3d%% (%d)", 
-                    fn_debugs * 100 / debug_lookups, fn_debugs);
-       VG_(message)(Vg_DebugMsg, "cachegrind: with zero      info:%3d%% (%d)", 
-                    no_debugs * 100 / debug_lookups, no_debugs);
-       VG_(message)(Vg_DebugMsg, "cachegrind: string table size: %u",
-                    VG_(OSet_Size)(stringTable));
-       VG_(message)(Vg_DebugMsg, "cachegrind: CC table size: %u",
-                    VG_(OSet_Size)(CC_table));
-       VG_(message)(Vg_DebugMsg, "cachegrind: InstrInfo table size: %u",
-                    VG_(OSet_Size)(instrInfoTable));
+      VG_(message)(Vg_DebugMsg, "");
+      VG_(message)(Vg_DebugMsg, "cachegrind: distinct files: %d", distinct_files);
+      VG_(message)(Vg_DebugMsg, "cachegrind: distinct fns:   %d", distinct_fns);
+      VG_(message)(Vg_DebugMsg, "cachegrind: distinct lines: %d", distinct_lines);
+      VG_(message)(Vg_DebugMsg, "cachegrind: distinct instrs:%d", distinct_instrs);
+      VG_(message)(Vg_DebugMsg, "cachegrind: debug lookups      : %d", debug_lookups);
+      
+      VG_(percentify)(full_debugs,      debug_lookups, 1, 6, buf1);
+      VG_(percentify)(file_line_debugs, debug_lookups, 1, 6, buf2);
+      VG_(percentify)(fn_debugs,        debug_lookups, 1, 6, buf3);
+      VG_(percentify)(no_debugs,        debug_lookups, 1, 6, buf4);
+      VG_(message)(Vg_DebugMsg, "cachegrind: with full      info:%s (%d)", 
+                   buf1, full_debugs);
+      VG_(message)(Vg_DebugMsg, "cachegrind: with file/line info:%s (%d)", 
+                   buf2, file_line_debugs);
+      VG_(message)(Vg_DebugMsg, "cachegrind: with fn name   info:%s (%d)", 
+                   buf3, fn_debugs);
+      VG_(message)(Vg_DebugMsg, "cachegrind: with zero      info:%s (%d)", 
+                   buf4, no_debugs);
+
+      VG_(message)(Vg_DebugMsg, "cachegrind: string table size: %u",
+                   VG_(OSet_Size)(stringTable));
+      VG_(message)(Vg_DebugMsg, "cachegrind: CC table size: %u",
+                   VG_(OSet_Size)(CC_table));
+      VG_(message)(Vg_DebugMsg, "cachegrind: InstrInfo table size: %u",
+                   VG_(OSet_Size)(instrInfoTable));
    }
 }
 
|
|
From: Joseph M L. <val...@jo...> - 2006-04-18 20:54:02
|
On Fedora Core 4, using the latest gcc and glibc rpms:

% rpm -q gcc glibc
gcc-4.0.2-8.fc4
glibc-2.3.6-3

I compile the attached program with:

% g++ -Wall -g -lpthread test.cc

And am able to reproduce the problem with both 2.4.1 and 3.1.1

Joe

Julian Seward wrote:
> Now I'm even more confused. I can't reproduce this hang using
> either the trunk, 3.1.1 or 3.0.1, using g++ 4.0.2 on SuSE 10.0,
> on x86.
>
> Can you clarify? What do I need to do to reproduce this with
> a recent Valgrind?
>
> J
>
> On Friday 14 April 2006 14:07, Joseph M Link wrote:
>> I have also reproduced this on FC4 with a highly threaded application
>> that uses pthread_cancel() and depends on pthread cleanup handlers.
>>
>> I am still using 2.4.1, and didn't see anything to indicate that it has
>> been fixed in more recent versions. Anyone have any luck with this issue?
>>
>> Thanks,
>> Joe
>>
>> -------------------------------------------------------
>> This SF.Net email is sponsored by xPML, a groundbreaking scripting language
>> that extends applications into web and mobile media. Attend the live
>> webcast and join the prime developer group breaking into this new coding
>> territory!
>> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642
>> _______________________________________________
>> Valgrind-developers mailing list
>> Val...@li...
>> https://lists.sourceforge.net/lists/listinfo/valgrind-developers
|
|
From: Julian S. <js...@ac...> - 2006-04-18 20:23:17
|
On Tuesday 18 April 2006 14:45, Ashley Pittman wrote:
> I've still got a script for merging multiple xml files into one output
> file I'm working on, along with some tidyups to the xml output itself,
> it would be good to get this merged for the next release.
>
> I need to tidy it up a bit but should be able to submit a patch next
> week sometime, is this OK?

That sounds good. Please send it along.

J
|
|
From: Julian S. <js...@ac...> - 2006-04-18 20:09:56
|
Now I'm even more confused. I can't reproduce this hang using
either the trunk, 3.1.1 or 3.0.1, using g++ 4.0.2 on SuSE 10.0,
on x86.

Can you clarify? What do I need to do to reproduce this with
a recent Valgrind?

J

On Friday 14 April 2006 14:07, Joseph M Link wrote:
> I have also reproduced this on FC4 with a highly threaded application
> that uses pthread_cancel() and depends on pthread cleanup handlers.
>
> I am still using 2.4.1, and didn't see anything to indicate that it has
> been fixed in more recent versions. Anyone have any luck with this issue?
>
> Thanks,
> Joe
|
|
From: Julian S. <js...@ac...> - 2006-04-18 19:57:10
|
> If valgrind has any frames on the stack (as distinguished from user
> frames on the stack) then the unwinder requires the DWARF2 description
> of those frames, or it will get lost. Using dl_iterate_phdr(), the
> runtime loader ld-linux.so.2 supplies the description for all the
> modules (main program, shared libraries, dlopen files) that it knows
> about; obviously this list [should] excludes anything from valgrind.

The implication is that even if we add unwind info to V-supplied frames,
it will not help unless ld-linux.so is aware of it. Which it probably
isn't.

J
|
|
From: Julian S. <js...@ac...> - 2006-04-18 19:54:30
|
> I seem to recall that I decided that the problem was a lack
> of DWARF debug information for the system call frame when valgrind
> replaces the vDSO routines with its own ones. The system ones
> deliberately have DWARF unwind information and ours don't.

That would make sense. From Joe's printout it seems like the
unwinder manages to unwind 6 frames before going wrong, so it's
not the signal delivery frame that's the problem.

Uh .. but which of our routines did you mean? I'm unclear.
Is it VG_(x86_linux_REDIR_FOR__dl_sysinfo_int80) ?

J
|
|
From: Tom H. <to...@co...> - 2006-04-18 16:27:23
|
In message <44450ECE.8000600@BitWagon.com>
John Reiser <jreiser@BitWagon.com> wrote:
> Joseph M Link wrote:
> > uw_frame_state_for() (gcc-4.0.2/gcc/unwind-dw2.c) is used by way of
> > _Unwind_ForcedUnwind() to determine end of stack.
> >
> > uw_frame_state_for() uses _Unwind_Find_FDE() which at some point returns
> > NULL. This leads to the end of stack indication in both native and
> > valgrind, it is just prematurely NULL under valgrind.
> >
> > _Unwind_Find_FDE() (gcc-4.0.2/gcc/unwind-dw2-fde-glibc.c) uses
> > dl_iterate_phdr() with its callback, _Unwind_IteratePhdrCallback().
> >
> > This gets to a point where I don't really know what I am looking at. The
> > gist is that _Unwind_IteratePhdrCallback() doesn't find the fde and
> > leaves data->ret NULL, which is what _Unwind_Find_FDE() returns,
> > signaling the premature end of stack.
>
> I ran into related problems with _Unwind_ForcedUnwind(), but with the
> kernel vDSO [virtual dynamic shared object "linux-gate.so.1"]:
> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=180351
>
> If valgrind has any frames on the stack (as distinguished from user
> frames on the stack) then the unwinder requires the DWARF2 description
> of those frames, or it will get lost. Using dl_iterate_phdr(), the
> runtime loader ld-linux.so.2 supplies the description for all the
> modules (main program, shared libraries, dlopen files) that it knows
> about; obviously this list [should] excludes anything from valgrind.
That rings a bell... I remember looking at this, but I don't seem
to have added my conclusions to the bug for some reason.
I seem to recall that I decided that the problem was a lack
of DWARF debug information for the system call frame when valgrind
replaces the vDSO routines with its own ones. The system ones
deliberately have DWARF unwind information and ours don't.
I thought I had tried to add DWARF unwind information to our
one but I can't find any trace of that now.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: John R.
|
Joseph M Link wrote:
> uw_frame_state_for() (gcc-4.0.2/gcc/unwind-dw2.c) is used by way of
> _Unwind_ForcedUnwind() to determine end of stack.
>
> uw_frame_state_for() uses _Unwind_Find_FDE() which at some point returns
> NULL. This leads to the end of stack indication in both native and
> valgrind, it is just prematurely NULL under valgrind.
>
> _Unwind_Find_FDE() (gcc-4.0.2/gcc/unwind-dw2-fde-glibc.c) uses
> dl_iterate_phdr() with its callback, _Unwind_IteratePhdrCallback().
>
> This gets to a point where I don't really know what I am looking at. The
> gist is that _Unwind_IteratePhdrCallback() doesn't find the fde and
> leaves data->ret NULL, which is what _Unwind_Find_FDE() returns,
> signaling the premature end of stack.

I ran into related problems with _Unwind_ForcedUnwind(), but with the
kernel vDSO [virtual dynamic shared object "linux-gate.so.1"]:
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=180351

If valgrind has any frames on the stack (as distinguished from user
frames on the stack) then the unwinder requires the DWARF2 description
of those frames, or it will get lost. Using dl_iterate_phdr(), the
runtime loader ld-linux.so.2 supplies the description for all the
modules (main program, shared libraries, dlopen files) that it knows
about; obviously this list [should] excludes anything from valgrind.

At one time I considered noticing that libgcc_s had been loaded, then
calling __register_frame* to inform the unwinder about the frame
descriptions for my "auditing" code. However, I managed to avoid this,
by using a kernel patch that used more general DWARF2 unwind info for
linux-gate.so.1. Search the LKML for my patch "i386 rt_sigframe
flexibility to virtualize signal delivery" (2006-02-15.)

In yet another project, I found it necessary to produce a libgcc_s with
modified _Unwind_GetIP() and _Unwind_SetIP() routines, and make sure
that these routines were called instead of the inline "(CONTEXT)->ra"
etc.

Hope this helps.

--
|
|
From: Joseph M L. <val...@jo...> - 2006-04-18 16:00:19
|
> What magic incantations have you used to compile glibc etc? What do
> we need to do to reproduce what you've done?

I am currently only building gcc-4.0.2 and instrumenting these calls.
I compile and use the test program reported with the bug.

> Can you print out the sequence of pc values presented as the first arg
> to _Unwind_Find_FDE? I suspect it's not that the relevant FDEs are not
> found; rather that the pc values for which FDEs are sought are bogus.

Running without valgrind gives the following output:

main: creating thread ...
main: waiting for thread to be ready ...
main: thread is ready
main: cancelling thread ...
main: waiting for thread to be clean...
PC = 0x268816
PC = 0xd96cfd
PC = 0xd94da6
PC = 0x36f214
PC = 0x343951
PC = 0x2f97de
PC = 0x2f9602
PC = 0x8048780
main: cleaning up
PC = 0x2689a6
PC = 0x80487a1
PC = 0xd90bd3

With valgrind:

laptop % valgrind -q --tool=none a.out
main: creating thread ...
main: waiting for thread to be ready ...
main: thread is ready
main: cancelling thread ...
main: waiting for thread to be clean...
PC = 0x3aab7816
PC = 0x3a9a6cfd
PC = 0x3a9a4da6
PC = 0x3a99f3a7
PC = 0x3a9a649f
PC = 0xafeff021
(hangs here)

Joe
|
|
From: Julian S. <js...@ac...> - 2006-04-18 15:36:45
|
> This gets to a point where I don't really know what I am looking at.
> The gist is that _Unwind_IteratePhdrCallback() doesn't find the fde and
> leaves data->ret NULL, which is what _Unwind_Find_FDE() returns,
> signaling the premature end of stack.

What magic incantations have you used to compile glibc etc? What do
we need to do to reproduce what you've done?

Can you print out the sequence of pc values presented as the first arg
to _Unwind_Find_FDE? I suspect it's not that the relevant FDEs are not
found; rather that the pc values for which FDEs are sought are bogus.

J
|
|
From: Joseph M L. <val...@jo...> - 2006-04-18 15:21:32
|
uw_frame_state_for() (gcc-4.0.2/gcc/unwind-dw2.c) is used by way of
_Unwind_ForcedUnwind() to determine end of stack.

uw_frame_state_for() uses _Unwind_Find_FDE() which at some point returns
NULL. This leads to the end of stack indication in both native and
valgrind, it is just prematurely NULL under valgrind.

_Unwind_Find_FDE() (gcc-4.0.2/gcc/unwind-dw2-fde-glibc.c) uses
dl_iterate_phdr() with its callback, _Unwind_IteratePhdrCallback().

This gets to a point where I don't really know what I am looking at. The
gist is that _Unwind_IteratePhdrCallback() doesn't find the fde and
leaves data->ret NULL, which is what _Unwind_Find_FDE() returns,
signaling the premature end of stack.

Can someone help from this point?

Joe

Joseph M Link wrote:
> It appears that unwind_stop() is doing the longjmp because it thinks it
> is at the end of the stack. It thinks this because gcc's
> _Unwind_ForcedUnwind() tells it that it is at the end of the stack.
>
> Moving my investigation to gcc.
>
> Joe
>
> Julian Seward wrote:
>> That's a good start. If you can figure out how to compile glibc so
>> as to get more details on what's going on inside unwind_stop(), we
>> might be in with a chance of at least understanding what the problem
>> is.
>>
>> J
>>
>> On Tuesday 18 April 2006 05:05, Joseph M Link wrote:
>>> So far, I have traced that pthread_cancel() basically hands off to gcc
>>> 4.0.2's _Unwind_ForcedUnwind(), passing it its unwind_stop() callback.
>>>
>>> unwind_stop() is defined in the pthread library,
>>> glibc-2.3.6/nptl/unwind.c. It is called, presumably, for each frame.
>>> When running natively, it always returns. When running under valgrind,
>>> after a certain number of iterations, it doesn't return. I assume it is
>>> doing the longjmp() at the end of the call. I've tried to instrument
>>> the call, but I am having trouble building the library (something about
>>> an undefined GLIBC_PRIVATE).
>>>
>>> Joe
>>>
>>> Julian Seward wrote:
>>>> On Friday 14 April 2006 14:07, Joseph M Link wrote:
>>>>> I have also reproduced this on FC4 with a highly threaded application
>>>>> that uses pthread_cancel() and depends on pthread cleanup handlers.
>>>>>
>>>>> I am still using 2.4.1, and didn't see anything to indicate that it
>>>>> has been fixed in more recent versions. Anyone have any luck with
>>>>> this issue?
>>>>
>>>> It would be nice to fix this, yes. Er .. no .. nobody afaik has chased
>>>> it any more. It's not an easy one. My belief is that pthread_cancel
>>>> throws a signal at the target thread, and the signal handler starts
>>>> unwinding the stack. This is not working because the unwinder is
>>>> seeing a signal frame which is different from what it expects, so it
>>>> gives up. At least, that's my theory. My first line of approach would
>>>> be to figure out if that's really what pthread_cancel does, and if so
>>>> what it expects the signal frame to look like.
>>>>
>>>> J
|
|
From: Joseph M L. <val...@jo...> - 2006-04-18 14:26:55
|
It appears that unwind_stop() is doing the longjmp because it thinks it
is at the end of the stack. It thinks this because gcc's
_Unwind_ForcedUnwind() tells it that it is at the end of the stack.

Moving my investigation to gcc.

Joe

Julian Seward wrote:
> That's a good start. If you can figure out how to compile glibc so
> as to get more details on what's going on inside unwind_stop(), we
> might be in with a chance of at least understanding what the problem
> is.
>
> J
>
> On Tuesday 18 April 2006 05:05, Joseph M Link wrote:
>> So far, I have traced that pthread_cancel() basically hands off to gcc
>> 4.0.2's _Unwind_ForcedUnwind(), passing it its unwind_stop() callback.
>>
>> unwind_stop() is defined in the pthread library,
>> glibc-2.3.6/nptl/unwind.c. It is called, presumably, for each frame.
>> When running natively, it always returns. When running under valgrind,
>> after a certain number of iterations, it doesn't return. I assume it is
>> doing the longjmp() at the end of the call. I've tried to instrument
>> the call, but I am having trouble building the library (something about
>> an undefined GLIBC_PRIVATE).
>>
>> Joe
>>
>> Julian Seward wrote:
>>> On Friday 14 April 2006 14:07, Joseph M Link wrote:
>>>> I have also reproduced this on FC4 with a highly threaded application
>>>> that uses pthread_cancel() and depends on pthread cleanup handlers.
>>>>
>>>> I am still using 2.4.1, and didn't see anything to indicate that it
>>>> has been fixed in more recent versions. Anyone have any luck with
>>>> this issue?
>>>
>>> It would be nice to fix this, yes. Er .. no .. nobody afaik has chased
>>> it any more. It's not an easy one. My belief is that pthread_cancel
>>> throws a signal at the target thread, and the signal handler starts
>>> unwinding the stack. This is not working because the unwinder is
>>> seeing a signal frame which is different from what it expects, so it
>>> gives up. At least, that's my theory. My first line of approach would
>>> be to figure out if that's really what pthread_cancel does, and if so
>>> what it expects the signal frame to look like.
>>>
>>> J
|
|
From: Ashley P. <as...@qu...> - 2006-04-18 13:45:59
|
On Mon, 2006-04-17 at 20:47 +0100, Julian Seward wrote:
> 3.2.0 will support {x86,amd64,ppc32,ppc64}-linux. It will contain
> the following major changes:
I've still got a script for merging multiple xml files into one output
file I'm working on, along with some tidyups to the xml output itself,
it would be good to get this merged for the next release.
I need to tidy it up a bit but should be able to submit a patch next
week sometime, is this OK?
It's a script that loads the memcheck xml output files and translates
them into the normal memcheck output we all know and love. The main use
of it is that it can load many xml input files and create one output
file, using the log-file-qualifier= option to tag errors against
specific input files. It's for use in parallel (MPI) programs where
there may be thousands of individual processes making up a single
application.
Going along with this are some changes to the xml output to add some
information which is in the normal output but missing from the xml.
Ashley,
|
|
From: <js...@ac...> - 2006-04-18 13:35:31
|
Nightly build on minnie (SuSE 10.0, ppc32) started at 2006-04-18 02:00:01 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 204 tests, 12 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigaltstack (stderr)
memcheck/tests/stack_changes (stdout)
memcheck/tests/stack_changes (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-fp (stderr)
none/tests/ppc32/round (stdout)
none/tests/ppc32/round (stderr)
none/tests/ppc32/test_fx (stdout)
none/tests/ppc32/test_fx (stderr)
none/tests/ppc32/test_gx (stdout)
|
|
From: Josef W. <Jos...@gm...> - 2006-04-18 12:14:46
|
On Tuesday 18 April 2006 13:12, Julian Seward wrote:
> Just 10 bbs account for 93% of all executed bbs. They are:
>
> Total score = 29991666790
>
> 0: (2797737062   9.32%)  2797737062  9.32%  0x4D3FAEF  QGList::next()+15
> 1: (5595474124  18.65%)  2797737062  9.32%  0x4D3FAE0  QGList::next()
> 2: (8392034273  27.98%)  2796560149  9.32%  0x4D3FAF6  QGList::next()+22

Ah, thanks. I should get rid of these linked lists...

> 3: (11187229411 37.30%)  2795195138  9.31%  0x806670C
> 4: (13982424549 46.62%)  2795195138  9.31%  0x8066716
> 5: (16777619687 55.94%)  2795195138  9.31%  0x806673A
> 6: (19572238018 65.25%)  2794618331  9.31%  0x8076CBE
> 7: (22366361173 74.57%)  2794123155  9.31%  0x8076CCA
> 8: (25160484328 83.89%)  2794123155  9.31%  0x8076CC2
> 9: (27954312092 93.20%)  2793827764  9.31%  0x8076CB0
>
> The 0x80xxxxx ones are from /opt/kde3/bin/kcachegrind itself, which
> is unfortunately stripped, so no symbols. I used objdump to get the
> bbs shown below for them. So maybe something in kcachegrind
> itself is doing many traversals of a very long QGList? Does this
> mean anything to you?

I think I check the list before inserting an object, so there is no
double insertion. Deep down in building the internal data model while
loading...

> 806670c: 8b 91 e0 00 00 00    mov    0xe0(%ecx),%edx
> 8066712: 85 d2                test   %edx,%edx
> 8066714: 74 24                je     806673a <_ZN8QPainter4saveEv@plt+0x7572>

Now that is about painting... Probably the progress bar. As far as I can
remember, I trigger repainting every second, so this is not really
friendly for running under valgrind, but should not be relevant in
reality.

Josef
|
|
From: Josef W. <Jos...@gm...> - 2006-04-18 12:02:42
|
On Tuesday 18 April 2006 03:34, Julian Seward wrote:
> This gets stranger all the time. On 'none' it dies at 7% loaded with
>
> ==31091== Process terminating with default action of signal 14 (SIGALRM)
> ...
> ==31091==    by 0x503E6C7: (within /lib/tls/libc-2.3.5.so)
> ==31091==    by 0x507BA49: _int_malloc (in /lib/tls/libc-2.3.5.so)
> ==31091==    by 0x507D5F2: malloc (in /lib/tls/libc-2.3.5.so)
> ==31091==    by 0x4FBA3C6: operator new(unsigned)
>              (in /usr/lib/libstdc++.so.6.0.6)
> ==31091==    by 0x80778A6: (within /opt/kde3/bin/kcachegrind)

Now that is really strange. Perhaps some KDE guru here has an idea why
a SIGALRM is raised?

> On memcheck it doesn't die at that point, and no errors are reported.

So this looks like some bug together with the native malloc
implementation (or a KDE replacement for this)?

> Me mystified.

Me too.

Josef
|
|
From: Julian S. <js...@ac...> - 2006-04-18 12:00:21
|
That's a good start. If you can figure out how to compile glibc so
as to get more details on what's going on inside unwind_stop(), we
might be in with a chance of at least understanding what the problem
is.

J

On Tuesday 18 April 2006 05:05, Joseph M Link wrote:
> So far, I have traced that pthread_cancel() basically hands off to gcc
> 4.0.2's _Unwind_ForcedUnwind(), passing it its unwind_stop() callback.
>
> unwind_stop() is defined in the pthread library,
> glibc-2.3.6/nptl/unwind.c. It is called, presumably, for each frame.
> When running natively, it always returns. When running under valgrind,
> after a certain number of iterations, it doesn't return. I assume it is
> doing the longjmp() at the end of the call. I've tried to instrument
> the call, but I am having trouble building the library (something about
> an undefined GLIBC_PRIVATE).
>
> Joe
>
> Julian Seward wrote:
>> On Friday 14 April 2006 14:07, Joseph M Link wrote:
>>> I have also reproduced this on FC4 with a highly threaded application
>>> that uses pthread_cancel() and depends on pthread cleanup handlers.
>>>
>>> I am still using 2.4.1, and didn't see anything to indicate that it
>>> has been fixed in more recent versions. Anyone have any luck with
>>> this issue?
>>
>> It would be nice to fix this, yes. Er .. no .. nobody afaik has chased
>> it any more. It's not an easy one. My belief is that pthread_cancel
>> throws a signal at the target thread, and the signal handler starts
>> unwinding the stack. This is not working because the unwinder is
>> seeing a signal frame which is different from what it expects, so it
>> gives up. At least, that's my theory. My first line of approach would
>> be to figure out if that's really what pthread_cancel does, and if so
>> what it expects the signal frame to look like.
>>
>> J
|
|
From: Josef W. <Jos...@gm...> - 2006-04-18 11:57:48
|
On Tuesday 18 April 2006 02:22, Julian Seward wrote:
> Obviously I should upgrade my P4 to a Pentium M :-)

Currently, I would recommend a Core Duo notebook ;-)

> Er, I don't know.  That really is very strange.

Probably KCachegrind is simply loading all parts, and not only the 1 MB
you sent me (see my other mail).

> kcachegrind's loading progress bar (bottom RHS) goes from
> 0% to 81% in about ~ 15 seconds, but then it stays at 81% for
> 15 minutes and then 92% for another 15 mins.  My guess is that
> it is somehow running slowly with some algorithm which analyses
> the call graph, and which goes really crazy due to the huge number
> of call points as mentioned in my previous message.

Hmm, no.  The progress bar is only visible while it is really loading
the data, which means allocating lots of small C++ objects for
KCachegrind's internal representation.  I would have to recheck this,
but for now I use a lot of linked lists for linking the internal data,
and this is really bad for the number of objects needed.  I would
suspect a net effect of around 10 mallocs per function.  The cycle
detection phase is done after the loading.

> I tried also on a PIII running the same SuSE 10.0, and it is
> also very slow.  So it's not caused by the Pentium-4 denormalised
> FP number performance disaster phenomenon.

Yeah.  It is loading more than this 1 MB file...

> Too difficult ..

Why is that?  Is it because of the root rights you need?  OProfile
really works out of the box on SuSE nowadays.

Josef

> I might try 'valgrind --tool=none --profile-flags=10000000 kcachegrind ...'
> to find the top 100 basic blocks.
>
> J
|
|
From: Josef W. <Jos...@gm...> - 2006-04-18 11:47:24
|
> > So I still do not understand
> > it. You would need a lot of functions (> 500 000), which is very
> > unusual.
>
> Ah .. but I'm profiling memcheck, and memcheck is producing
> code with many calls to MC_(helperc_{LOAD,STORE}V*. I know
> that memcheck (in the inner valgrind) produces about 80000
> translations and so if each translation has eg 5 such calls
> then there will be 400000 call points to these functions.
> I suspect this is why the profile data is very large and why
> kcachegrind is taking a long time.
>
> > Sure. If the function count is really huge, I can understand at least the
> > memory consumption. Can you check the number of functions in this 75 MB
> > profile with "grep ^fn callgrind.out.XXX | wc" ?
>
> sewardj@suse10:~/VgTRUNK$ grep ^fn koffice-started.1 | wc
> 80660 160583 1891572
OK, so there are around 80 thousand functions "detected" by callgrind
(i.e. targets of call instructions).  That is quite a lot, but I thought
KCachegrind should be able to cope with such a number, at least with
1 GB of physical memory...
> > Again: wow. I never thought of using callgrind with self-hosting
> > because there is still this design problem that profile counters are never
> > freed when discarding code.
>
> Well, self-hosting does not work when the inner valgrind starts to
> discard code, so I don't think this is a problem.
Ah, OK. So I suspect I should give this self-hosting a try myself.
> > BTW: how much memory is the callgrind process using itself?
>
> After about 10 CPU hours it was at 1600M virtual and about 670M
> resident, but my machine only has 1G memory so I had to stop it
Memcheck seems to push the limits of callgrind ;-)
Perhaps there should be a way to limit the code regions for which
callgrind stores full counters per client instruction.  Currently,
even with no debug info, the counters are stored at instruction
granularity and only aggregated at dump time.
> (so I
> first did 'callgrind_control -d ...' (very cool btw)).
I would really like to have a general way to interact with a running
valgrind tool.  My current approach, which checks for the existence of a
command file every time the valgrind scheduler runs, is really screwed
(have you ever done "strace callgrind ..." ?).
Some mini-telnet server taking text commands would be cool.
> To stop it I did control-C for the application (kpresenter) and so
> the outer valgrind (callgrind) finished normally, and produced a final
> dump which is very small compared to the two intermediate ones I did:
>
> -rw------- 1 sewardj users 55936636 2006-04-17 02:40 callgrind.out.24371.1
> -rw------- 1 sewardj users 63330227 2006-04-17 13:32 callgrind.out.24371.2
> -rw------- 1 sewardj users 1127604 2006-04-17 15:35 callgrind.out.24371
Looks really heavy...
> $ for f in callgrind.out.24371* ; do echo $f ; grep ^fn $f | wc ; done
> callgrind.out.24371
> 1736 3070 36549
> callgrind.out.24371.1
> 63941 127124 1483254
> callgrind.out.24371.2
> 61833 123046 1467148
The number of unique functions across multiple profile files loaded into
KCachegrind is what matters most for KCachegrind's memory consumption.
Still, there is a hit.  It would also be good to have a merge command
(another todo).
> This confused me. I was even more confused to find that kcachegrind
> still takes just as long to load the 1M file as the 63M file.
That is simple.  KCachegrind more or less automatically appends a "*"
wildcard when loading a profile file.  So when you do "kcachegrind
callgrind.out.24371", KCachegrind always loads all 3 files.  If you have
a look at the status bar, you should see which file is currently being
loaded.
Perhaps I should get rid of this unexpected behavior.  I did this to
make "trigger a new dump & reload the data with the new dump" simpler.
There is a toolbar button in KCachegrind to trigger this: if you press
the "Force Dump" button while callgrind is running in the current
working directory, you get this behavior, and reloading all profile
parts is part of it.
> Anyway, after all this, (1) from the intermediate dumps I found that
> MC_(realloc) might be a memcheck slow point in realloc-intensive code,
> and (2) I restarted the run using cachegrind instead of callgrind, to
> see if it can successfully complete the profiling run with the memory
> I have available. So far it has been running 6.6 hours.
I would suspect that the intermediate dump shows similar behavior
to the final result. If you think that this changes over time instead,
you could do some kind of manual "sampling":
Make a script which does something like this in a loop:

    while true ; do
        # switch off instrumentation and fast-forward for some time
        callgrind_control -i off
        sleep 60
        # switch instrumentation back on for cache simulation
        callgrind_control -i on
        # wait a moment to warm up the simulated cache
        sleep 1
        # get rid of the artificial cold misses from the warm-up phase
        callgrind_control -z
        # run some time with full cache simulation
        sleep 59
        # dump the profile data collected in this interval
        callgrind_control -d
    done
The net effect is that you do cache simulation only in small intervals,
and run in valgrind's "none" mode in-between: the above does 60 seconds
at "none" speed (let's say a slowdown of 3) and 60 seconds with full
simulation (perhaps a slowdown of 60), which gives you an overall
slowdown of around 6, with full simulation covering only about 1/20 of
the runtime.
Josef
|
|
From: Julian S. <js...@ac...> - 2006-04-18 11:12:47
|
> > > Can you check with OProfile what is taking your time?
> >
> > I might try 'valgrind --tool=none --profile-flags=10000000 kcachegrind
> > ...' to find the top 100 basic blocks.
>
> This gets stranger all the time.  On 'none' it dies at 7% loaded with
>
> ==31091== Process terminating with default action of signal 14 (SIGALRM)

It dies like that with all the tools that do not replace malloc (none,
lackey, cachegrind, callgrind).  Runs ok on massif and memcheck.

Eventually I got some info using massif.  Took about 5 cpu hours.
Just 10 bbs account for 93% of all executed bbs.  They are:

Total score = 29991666790

0: ( 2797737062   9.32%)  2797737062  9.32%  0x4D3FAEF  QGList::next()+15
1: ( 5595474124  18.65%)  2797737062  9.32%  0x4D3FAE0  QGList::next()
2: ( 8392034273  27.98%)  2796560149  9.32%  0x4D3FAF6  QGList::next()+22
3: (11187229411  37.30%)  2795195138  9.31%  0x806670C
4: (13982424549  46.62%)  2795195138  9.31%  0x8066716
5: (16777619687  55.94%)  2795195138  9.31%  0x806673A
6: (19572238018  65.25%)  2794618331  9.31%  0x8076CBE
7: (22366361173  74.57%)  2794123155  9.31%  0x8076CCA
8: (25160484328  83.89%)  2794123155  9.31%  0x8076CC2
9: (27954312092  93.20%)  2793827764  9.31%  0x8076CB0

The 0x80xxxxx ones are from /opt/kde3/bin/kcachegrind itself, which is
unfortunately stripped, so no symbols.  I used objdump to get the bbs
shown below for them.

So maybe something in kcachegrind itself is doing many traversals of a
very long QGList?  Does this mean anything to you?

J

 806670c:  8b 91 e0 00 00 00   mov    0xe0(%ecx),%edx
 8066712:  85 d2               test   %edx,%edx
 8066714:  74 24               je     806673a <_ZN8QPainter4saveEv@plt+0x7572>
 8066716:  8b 82 48 01 00 00   mov    0x148(%edx),%eax
 806671c:  85 c0               test   %eax,%eax
 806671e:  74 1a               je     806673a <_ZN8QPainter4saveEv@plt+0x7572>

 806673a:  5d                  pop    %ebp
 806673b:  89 d0               mov    %edx,%eax
 806673d:  c3                  ret

 8076cbe:  39 c7               cmp    %eax,%edi
 8076cc0:  74 52               je     8076d14 <_ZN8QPainter4saveEv@plt+0x17b4c>

 8076cca:  85 c0               test   %eax,%eax
 8076ccc:  89 c3               mov    %eax,%ebx
 8076cce:  75 e0               jne    8076cb0 <_ZN8QPainter4saveEv@plt+0x17ae8>

 8076cc2:  89 34 24            mov    %esi,(%esp)
 8076cc5:  e8 6e 5f fe ff      call   805cc38 <_ZN6QGList4nextEv@plt>

 8076cb0:  31 c0               xor    %eax,%eax
 8076cb2:  89 44 24 04         mov    %eax,0x4(%esp)
 8076cb6:  89 1c 24            mov    %ebx,(%esp)
 8076cb9:  e8 42 fa fe ff      call   8066700 <_ZN8QPainter4saveEv@plt+0x7538>
|
|
From: Joseph M L. <val...@jo...> - 2006-04-18 04:05:29
|
So far, I have traced that pthread_cancel() basically hands off to gcc
4.0.2's _Unwind_ForcedUnwind(), passing it its unwind_stop() callback.

unwind_stop() is defined in the pthread library,
glibc-2.3.6/nptl/unwind.c.  It is called, presumably, for each frame.
When running natively, it always returns.  When running under valgrind,
after a certain number of iterations, it doesn't return.  I assume it is
doing the longjmp() at the end of the call.  I've tried to instrument
the call, but I am having trouble building the library (something about
an undefined GLIBC_PRIVATE).

Joe

Julian Seward wrote:
> On Friday 14 April 2006 14:07, Joseph M Link wrote:
>> I have also reproduced this on FC4 with a highly threaded application
>> that uses pthread_cancel() and depends on pthread cleanup handlers.
>>
>> I am still using 2.4.1, and didn't see anything to indicate that it has
>> been fixed in more recent versions.  Anyone have any luck with this issue?
>
> It would be nice to fix this, yes.  Er .. no .. nobody afaik has chased
> it any more.  It's not an easy one.  My belief is that pthread_cancel
> throws a signal at the target thread, and the signal handler starts
> unwinding the stack.  This is not working because the unwinder is
> seeing a signal frame which is different from what it expects, so it
> gives up.  At least, that's my theory.  My first line of approach would
> be to figure out if that's really what pthread_cancel does, and if so
> what it expects the signal frame to look like.
>
> J
|
|
From: Tom H. <th...@cy...> - 2006-04-18 02:46:47
|
Nightly build on ford ( i686, Fedora Core 4 )
started at 2006-04-18 03:25:05 BST

Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 237 tests, 8 stderr failures, 0 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree        (stderr)
memcheck/tests/mempool          (stderr)
memcheck/tests/pointer-trace    (stderr)
memcheck/tests/stack_switch     (stderr)
memcheck/tests/x86/scalar       (stderr)
memcheck/tests/x86/scalar_supp  (stderr)
none/tests/x86/faultstatus      (stderr)
none/tests/x86/int              (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 237 tests, 7 stderr failures, 0 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree        (stderr)
memcheck/tests/pointer-trace    (stderr)
memcheck/tests/stack_switch     (stderr)
memcheck/tests/x86/scalar       (stderr)
memcheck/tests/x86/scalar_supp  (stderr)
none/tests/x86/faultstatus      (stderr)
none/tests/x86/int              (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short   Tue Apr 18 03:36:49 2006
--- new.short   Tue Apr 18 03:46:33 2006
***************
*** 8,11 ****
! == 237 tests, 7 stderr failures, 0 stdout failures, 0 posttest failures ==
  memcheck/tests/leak-tree        (stderr)
  memcheck/tests/pointer-trace    (stderr)
--- 8,12 ----
! == 237 tests, 8 stderr failures, 0 stdout failures, 0 posttest failures ==
  memcheck/tests/leak-tree        (stderr)
+ memcheck/tests/mempool          (stderr)
  memcheck/tests/pointer-trace    (stderr)
|
|
From: Tom H. <to...@co...> - 2006-04-18 02:45:21
|
Nightly build on dunsmere ( athlon, Fedora Core 5 )
started at 2006-04-18 03:30:05 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 236 tests, 7 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/mempool          (stderr)
memcheck/tests/pointer-trace    (stderr)
memcheck/tests/stack_switch     (stderr)
memcheck/tests/x86/scalar       (stderr)
memcheck/tests/x86/sse1_memory  (stdout)
memcheck/tests/xml1             (stderr)
none/tests/x86/faultstatus      (stderr)
none/tests/x86/int              (stderr)
|
|
From: Tom H. <th...@cy...> - 2006-04-18 02:35:07
|
Nightly build on alvis ( i686, Red Hat 7.3 )
started at 2006-04-18 03:15:04 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 235 tests, 21 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/addressable         (stderr)
memcheck/tests/badjump             (stderr)
memcheck/tests/describe-block      (stderr)
memcheck/tests/erringfds           (stderr)
memcheck/tests/leak-0              (stderr)
memcheck/tests/leak-cycle          (stderr)
memcheck/tests/leak-regroot        (stderr)
memcheck/tests/leak-tree           (stderr)
memcheck/tests/match-overrun       (stderr)
memcheck/tests/mempool             (stderr)
memcheck/tests/partial_load_dflt   (stderr)
memcheck/tests/partial_load_ok     (stderr)
memcheck/tests/partiallydefinedeq  (stderr)
memcheck/tests/pointer-trace       (stderr)
memcheck/tests/sigkill             (stderr)
memcheck/tests/stack_changes       (stderr)
memcheck/tests/x86/scalar          (stderr)
memcheck/tests/x86/scalar_supp     (stderr)
memcheck/tests/x86/sse1_memory     (stdout)
memcheck/tests/xml1                (stderr)
none/tests/x86/faultstatus         (stderr)
none/tests/x86/int                 (stderr)
|