From: Julian S. <js...@ac...> - 2007-03-25 20:59:49
|
> particularly: "It switches threads every 50000 basic blocks (on x86,
> typically around 300000 instructions)..."
>
> find some statistics about the size of basic blocks and instruction
> frequency. Does anyone have any information about this figure?

You mean the implied bb length of 6 (== 300000/50000)?  No idea, probably
got made up by the person writing the documentation :-)  I think it's the
right sort of ballpark though.

Wouldn't Hennessy and Patterson contain info on average bb lengths?  It's
stuffed full of that kind of info.  (and generally a great book)

In any case you can measure it directly.  Run with

  --tool=lackey --vex-guest-chase-thresh=no

and divide "guest instrs" by "SBs entered".  That gives me 4.9 for a
start/exit of konqueror.

Oh, in fact it even computes it for you.  What more can you ask:

  ==3134==   guest instrs : SB entered   = 49 : 10

J
|
|
From: Vince W. <vi...@cs...> - 2007-03-25 20:04:45
|
On Sun, 25 Mar 2007, Nicholas Nethercote wrote:

> On Sat, 24 Mar 2007, Vince Weaver wrote:
> > I do wonder if any of the compiler vendors noticed this problem with art..
> > you could in theory make your compiler look better on the FP score by
> > having art finish in half the time if you made sure it ran in 64-bit
> > rather than 80-bit mode on x86...
>
> Surely the SPEC output checking would catch this, if you are doing proper,
> reportable runs?

This is rapidly getting more and more off-topic, for which I apologize...

The output from the 'art' benchmark is identical in all cases... the
difference is that it converges twice as fast when using 64-bit math
rather than 80-bit math.

I only noticed this problem because the experiments I am doing depend on
the instructions_retired metric to be roughly the same across all the
tools I am testing.

Vince
|
|
From: Tom C. <tc...@cs...> - 2007-03-25 19:25:42
|
Dear all,

This may be off-topic, but I'd been pointed towards something in the
Valgrind Manual:

http://valgrind.org/docs/manual/manual-core.html#manual-core.pthreads

particularly: "It switches threads every 50000 basic blocks (on x86,
typically around 300000 instructions)..."

My question isn't about thread-handling, but about where this figure came
from - I'm working on code optimisation and have been trying to find some
statistics about the size of basic blocks and instruction frequency.
Does anyone have any information about this figure?

Thanks and regards,

Tom

--
Tom Crick
Mathematical Foundations Group
Department of Computer Science
University of Bath
tc...@cs...
http://www.cs.bath.ac.uk/tom/
|
|
From: <js...@ac...> - 2007-03-25 10:54:09
|
Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2007-03-25 09:00:02 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 219 tests, 10 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree        (stderr)
memcheck/tests/leakotron        (stdout)
memcheck/tests/pointer-trace    (stderr)
memcheck/tests/stack_changes    (stderr)
memcheck/tests/xml1             (stderr)
none/tests/faultstatus          (stderr)
none/tests/fdleak_cmsg          (stderr)
none/tests/mremap               (stderr)
none/tests/mremap2              (stdout)
none/tests/ppc32/jm-fp          (stdout)
none/tests/ppc32/jm-fp          (stderr)
none/tests/ppc32/round          (stdout)
none/tests/ppc32/round          (stderr)
none/tests/ppc32/test_fx        (stdout)
none/tests/ppc32/test_fx        (stderr)
none/tests/ppc32/test_gx        (stdout)
|
|
From: <sv...@va...> - 2007-03-25 08:34:40
|
Author: njn
Date: 2007-03-25 09:33:47 +0100 (Sun, 25 Mar 2007)
New Revision: 319
Log:
wibble
Modified:
trunk/docs/pubs.html
Modified: trunk/docs/pubs.html
===================================================================
--- trunk/docs/pubs.html 2007-03-25 08:32:56 UTC (rev 318)
+++ trunk/docs/pubs.html 2007-03-25 08:33:47 UTC (rev 319)
@@ -18,7 +18,8 @@
DBI frameworks such as Pin and DynamoRIO.
Please cite this paper when discussing Valgrind in general. However, if
you are discussing Valgrind specifically in relation to memory errors
- (i.e. the Memcheck tool), please cite the USENIX paper below as well.
+ (i.e. the Memcheck tool), please cite the USENIX paper below as well or
+ instead.
</p></li>
<li><p>
|
|
From: <sv...@va...> - 2007-03-25 08:32:56
|
Author: njn
Date: 2007-03-25 09:32:56 +0100 (Sun, 25 Mar 2007)
New Revision: 318
Log:
wibble
Modified:
trunk/docs/pubs.html
Modified: trunk/docs/pubs.html
===================================================================
--- trunk/docs/pubs.html 2007-03-25 08:23:00 UTC (rev 317)
+++ trunk/docs/pubs.html 2007-03-25 08:32:56 UTC (rev 318)
@@ -14,7 +14,6 @@
Dynamic Binary Instrumentation.</a><br>
Nicholas Nethercote and Julian Seward.<br>
Proceedings of PLDI 2007, San Diego, California, USA, June 2007.</b><br>
- (Final version not yet available)<br>
This paper describes how Valgrind works, and how it differs from other
DBI frameworks such as Pin and DynamoRIO.
Please cite this paper when discussing Valgrind in general. However, if
|
|
From: <sv...@va...> - 2007-03-25 08:23:04
|
Author: njn
Date: 2007-03-25 09:23:00 +0100 (Sun, 25 Mar 2007)
New Revision: 317
Log:
Add PLDI paper.
Added:
trunk/docs/valgrind2007.pdf
Modified:
trunk/docs/pubs.html
Modified: trunk/docs/pubs.html
===================================================================
--- trunk/docs/pubs.html 2007-03-20 22:32:07 UTC (rev 316)
+++ trunk/docs/pubs.html 2007-03-25 08:23:00 UTC (rev 317)
@@ -10,7 +10,8 @@
<ul>
<li><p>
- <b>Valgrind: A Framework for Heavyweight Dynamic Binary Instrumentation.<br>
+ <b><a href="/docs/valgrind2007.pdf">Valgrind: A Framework for Heavyweight
+ Dynamic Binary Instrumentation.</a><br>
Nicholas Nethercote and Julian Seward.<br>
Proceedings of PLDI 2007, San Diego, California, USA, June 2007.</b><br>
(Final version not yet available)<br>
Added: trunk/docs/valgrind2007.pdf
===================================================================
(Binary files differ)
Property changes on: trunk/docs/valgrind2007.pdf
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
|
|
From: <sv...@va...> - 2007-03-25 04:15:02
|
Author: sewardj
Date: 2007-03-25 05:14:58 +0100 (Sun, 25 Mar 2007)
New Revision: 1744
Log:
x86 back end: use 80-bit loads/stores for floating point spills rather
than 64-bit ones, to reduce accuracy loss. To support this, in
reg-alloc, allocate 2 64-bit spill slots for each HRcFlt64 vreg
instead of just 1.
Modified:
trunk/priv/host-generic/h_generic_regs.h
trunk/priv/host-generic/reg_alloc2.c
trunk/priv/host-x86/hdefs.c
Modified: trunk/priv/host-generic/h_generic_regs.h
===================================================================
--- trunk/priv/host-generic/h_generic_regs.h 2007-03-21 00:21:56 UTC (rev 1743)
+++ trunk/priv/host-generic/h_generic_regs.h 2007-03-25 04:14:58 UTC (rev 1744)
@@ -87,10 +87,17 @@
available on any specific host. For example on x86, the available
classes are: Int32, Flt64, Vec128 only.
- IMPORTANT NOTE: Vec128 is the only >= 128-bit-sized class, and
- reg_alloc2.c handles it specially when assigning spill slots. If
- you add another 128-bit or larger regclass, you must remember to
- update reg_alloc2.c accordingly.
+ IMPORTANT NOTE: reg_alloc2.c needs how much space is needed to spill
+ each class of register. It has the following knowledge hardwired in:
+
+ HRcInt32 32 bits
+ HRcInt64 64 bits
+ HRcFlt64 80 bits (on x86 these are spilled by fstpt/fldt)
+ HRcVec64 64 bits
+ HRcVec128 128 bits
+
+ If you add another regclass, you must remember to update
+ reg_alloc2.c accordingly.
*/
typedef
enum {
Modified: trunk/priv/host-generic/reg_alloc2.c
===================================================================
--- trunk/priv/host-generic/reg_alloc2.c 2007-03-21 00:21:56 UTC (rev 1743)
+++ trunk/priv/host-generic/reg_alloc2.c 2007-03-25 04:14:58 UTC (rev 1744)
@@ -778,8 +778,9 @@
/* --------- Stage 3: allocate spill slots. --------- */
- /* Each spill slot is 8 bytes long. For 128-bit vregs
- we have to allocate two spill slots.
+ /* Each spill slot is 8 bytes long. For vregs which take more than
+ 64 bits to spill (classes Flt64 and Vec128), we have to allocate
+ two spill slots.
Do a rank-based allocation of vregs to spill slot numbers. We
put as few values as possible in spill slows, but nevertheless
@@ -799,14 +800,31 @@
continue;
}
- /* The spill slots are 64 bits in size. That means, to spill a
- Vec128-class vreg, we'll need to find two adjacent spill
- slots to use. Note, this special-casing needs to happen for
- all 128-bit sized register classes. Currently though
- HRcVector is the only such class. */
+ /* The spill slots are 64 bits in size. As per the comment on
+ definition of HRegClass in h_generic_regs.h, that means, to
+ spill a vreg of class Flt64 or Vec128, we'll need to find two
+ adjacent spill slots to use. Note, this logic needs to kept
+ in sync with the size info on the definition of HRegClass. */
- if (vreg_lrs[j].reg_class != HRcVec128) {
+ if (vreg_lrs[j].reg_class == HRcVec128
+ || vreg_lrs[j].reg_class == HRcFlt64) {
+ /* Find two adjacent free slots in which between them provide
+ up to 128 bits in which to spill the vreg. */
+
+ for (k = 0; k < N_SPILL64S-1; k++)
+ if (ss_busy_until_before[k] <= vreg_lrs[j].live_after
+ && ss_busy_until_before[k+1] <= vreg_lrs[j].live_after)
+ break;
+ if (k == N_SPILL64S-1) {
+ vpanic("LibVEX_N_SPILL_BYTES is too low. "
+ "Increase and recompile.");
+ }
+ ss_busy_until_before[k+0] = vreg_lrs[j].dead_before;
+ ss_busy_until_before[k+1] = vreg_lrs[j].dead_before;
+
+ } else {
+
/* The ordinary case -- just find a single spill slot. */
/* Find the lowest-numbered spill slot which is available at
@@ -821,22 +839,6 @@
}
ss_busy_until_before[k] = vreg_lrs[j].dead_before;
- } else {
-
- /* Find two adjacent free slots in which to spill a 128-bit
- vreg. */
-
- for (k = 0; k < N_SPILL64S-1; k++)
- if (ss_busy_until_before[k] <= vreg_lrs[j].live_after
- && ss_busy_until_before[k+1] <= vreg_lrs[j].live_after)
- break;
- if (k == N_SPILL64S-1) {
- vpanic("LibVEX_N_SPILL_BYTES is too low. "
- "Increase and recompile.");
- }
- ss_busy_until_before[k+0] = vreg_lrs[j].dead_before;
- ss_busy_until_before[k+1] = vreg_lrs[j].dead_before;
-
}
/* This reflects LibVEX's hard-wired knowledge of the baseBlock
Modified: trunk/priv/host-x86/hdefs.c
===================================================================
--- trunk/priv/host-x86/hdefs.c 2007-03-21 00:21:56 UTC (rev 1743)
+++ trunk/priv/host-x86/hdefs.c 2007-03-25 04:14:58 UTC (rev 1744)
@@ -737,7 +737,7 @@
i->Xin.FpLdSt.sz = sz;
i->Xin.FpLdSt.reg = reg;
i->Xin.FpLdSt.addr = addr;
- vassert(sz == 4 || sz == 8);
+ vassert(sz == 4 || sz == 8 || sz == 10);
return i;
}
X86Instr* X86Instr_FpLdStI ( Bool isLoad, UChar sz,
@@ -1005,12 +1005,14 @@
break;
case Xin_FpLdSt:
if (i->Xin.FpLdSt.isLoad) {
- vex_printf("gld%c " , i->Xin.FpLdSt.sz==8 ? 'D' : 'F');
+ vex_printf("gld%c " , i->Xin.FpLdSt.sz==10 ? 'T'
+ : (i->Xin.FpLdSt.sz==8 ? 'D' : 'F'));
ppX86AMode(i->Xin.FpLdSt.addr);
vex_printf(", ");
ppHRegX86(i->Xin.FpLdSt.reg);
} else {
- vex_printf("gst%c " , i->Xin.FpLdSt.sz==8 ? 'D' : 'F');
+ vex_printf("gst%c " , i->Xin.FpLdSt.sz==10 ? 'T'
+ : (i->Xin.FpLdSt.sz==8 ? 'D' : 'F'));
ppHRegX86(i->Xin.FpLdSt.reg);
vex_printf(", ");
ppX86AMode(i->Xin.FpLdSt.addr);
@@ -1558,7 +1560,7 @@
case HRcInt32:
return X86Instr_Alu32M ( Xalu_MOV, X86RI_Reg(rreg), am );
case HRcFlt64:
- return X86Instr_FpLdSt ( False/*store*/, 8, rreg, am );
+ return X86Instr_FpLdSt ( False/*store*/, 10, rreg, am );
case HRcVec128:
return X86Instr_SseLdSt ( False/*store*/, rreg, am );
default:
@@ -1578,7 +1580,7 @@
case HRcInt32:
return X86Instr_Alu32R ( Xalu_MOV, X86RMI_Mem(am), rreg );
case HRcFlt64:
- return X86Instr_FpLdSt ( True/*load*/, 8, rreg, am );
+ return X86Instr_FpLdSt ( True/*load*/, 10, rreg, am );
case HRcVec128:
return X86Instr_SseLdSt ( True/*load*/, rreg, am );
default:
@@ -2497,14 +2499,27 @@
goto done;
case Xin_FpLdSt:
- vassert(i->Xin.FpLdSt.sz == 4 || i->Xin.FpLdSt.sz == 8);
if (i->Xin.FpLdSt.isLoad) {
/* Load from memory into %fakeN.
- --> ffree %st(7) ; fld{s/l} amode ; fstp st(N+1)
+ --> ffree %st(7) ; fld{s/l/t} amode ; fstp st(N+1)
*/
p = do_ffree_st7(p);
- *p++ = toUChar(i->Xin.FpLdSt.sz==4 ? 0xD9 : 0xDD);
- p = doAMode_M(p, fake(0)/*subopcode*/, i->Xin.FpLdSt.addr);
+ switch (i->Xin.FpLdSt.sz) {
+ case 4:
+ *p++ = 0xD9;
+ p = doAMode_M(p, fake(0)/*subopcode*/, i->Xin.FpLdSt.addr);
+ break;
+ case 8:
+ *p++ = 0xDD;
+ p = doAMode_M(p, fake(0)/*subopcode*/, i->Xin.FpLdSt.addr);
+ break;
+ case 10:
+ *p++ = 0xDB;
+ p = doAMode_M(p, fake(5)/*subopcode*/, i->Xin.FpLdSt.addr);
+ break;
+ default:
+ vpanic("emitX86Instr(FpLdSt,load)");
+ }
p = do_fstp_st(p, 1+hregNumber(i->Xin.FpLdSt.reg));
goto done;
} else {
@@ -2513,8 +2528,22 @@
*/
p = do_ffree_st7(p);
p = do_fld_st(p, 0+hregNumber(i->Xin.FpLdSt.reg));
- *p++ = toUChar(i->Xin.FpLdSt.sz==4 ? 0xD9 : 0xDD);
- p = doAMode_M(p, fake(3)/*subopcode*/, i->Xin.FpLdSt.addr);
+ switch (i->Xin.FpLdSt.sz) {
+ case 4:
+ *p++ = 0xD9;
+ p = doAMode_M(p, fake(3)/*subopcode*/, i->Xin.FpLdSt.addr);
+ break;
+ case 8:
+ *p++ = 0xDD;
+ p = doAMode_M(p, fake(3)/*subopcode*/, i->Xin.FpLdSt.addr);
+ break;
+ case 10:
+ *p++ = 0xDB;
+ p = doAMode_M(p, fake(7)/*subopcode*/, i->Xin.FpLdSt.addr);
+ break;
+ default:
+ vpanic("emitX86Instr(FpLdSt,store)");
+ }
goto done;
}
break;
|
|
From: Tom H. <th...@cy...> - 2007-03-25 02:31:09
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2007-03-25 03:15:01 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 256 tests, 27 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/addressable          (stderr)
memcheck/tests/badjump              (stderr)
memcheck/tests/describe-block       (stderr)
memcheck/tests/erringfds            (stderr)
memcheck/tests/leak-0               (stderr)
memcheck/tests/leak-cycle           (stderr)
memcheck/tests/leak-pool-0          (stderr)
memcheck/tests/leak-pool-1          (stderr)
memcheck/tests/leak-pool-2          (stderr)
memcheck/tests/leak-pool-3          (stderr)
memcheck/tests/leak-pool-4          (stderr)
memcheck/tests/leak-pool-5          (stderr)
memcheck/tests/leak-regroot         (stderr)
memcheck/tests/leak-tree            (stderr)
memcheck/tests/long_namespace_xml   (stderr)
memcheck/tests/match-overrun        (stderr)
memcheck/tests/partial_load_dflt    (stderr)
memcheck/tests/partial_load_ok      (stderr)
memcheck/tests/partiallydefinedeq   (stderr)
memcheck/tests/pointer-trace        (stderr)
memcheck/tests/sigkill              (stderr)
memcheck/tests/stack_changes        (stderr)
memcheck/tests/x86/scalar           (stderr)
memcheck/tests/x86/scalar_supp      (stderr)
memcheck/tests/x86/xor-undef-x86    (stderr)
memcheck/tests/xml1                 (stderr)
none/tests/mremap                   (stderr)
none/tests/mremap2                  (stdout)
|
|
From: Tom H. <th...@cy...> - 2007-03-25 02:23:42
|
Nightly build on dellow ( x86_64, Fedora Core 6 ) started at 2007-03-25 03:10:06 BST

Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 291 tests, 4 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/pointer-trace   (stderr)
memcheck/tests/x86/scalar      (stderr)
memcheck/tests/xml1            (stderr)
none/tests/mremap              (stderr)
none/tests/mremap2             (stdout)
none/tests/pth_detached        (stdout)

=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 291 tests, 4 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace   (stderr)
memcheck/tests/x86/scalar      (stderr)
memcheck/tests/xml1            (stderr)
none/tests/mremap              (stderr)
none/tests/mremap2             (stdout)

=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sun Mar 25 03:17:02 2007
--- new.short Sun Mar 25 03:23:35 2007
***************
*** 8,10 ****
! == 291 tests, 4 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
--- 8,10 ----
! == 291 tests, 4 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
***************
*** 14,15 ****
--- 14,16 ----
none/tests/mremap2 (stdout)
+ none/tests/pth_detached (stdout)
|
|
From: Tom H. <th...@cy...> - 2007-03-25 02:19:04
|
Nightly build on lloyd ( x86_64, Fedora Core 3 ) started at 2007-03-25 03:05:06 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 291 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace     (stderr)
memcheck/tests/stack_switch      (stderr)
memcheck/tests/x86/scalar        (stderr)
memcheck/tests/x86/scalar_supp   (stderr)
memcheck/tests/xml1              (stderr)
none/tests/mremap                (stderr)
none/tests/mremap2               (stdout)
|
|
From: Tom H. <th...@cy...> - 2007-03-25 02:12:11
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-03-25 03:00:02 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 293 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace     (stderr)
memcheck/tests/stack_switch      (stderr)
memcheck/tests/x86/scalar        (stderr)
memcheck/tests/x86/scalar_supp   (stderr)
none/tests/fdleak_fcntl          (stderr)
none/tests/mremap                (stderr)
none/tests/mremap2               (stdout)
|
|
From: <js...@ac...> - 2007-03-25 01:16:25
|
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-03-25 03:00:01 CEST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 226 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/deep_templates   (stdout)
memcheck/tests/leak-cycle       (stderr)
memcheck/tests/leak-tree        (stderr)
memcheck/tests/pointer-trace    (stderr)
none/tests/faultstatus          (stderr)
none/tests/fdleak_cmsg          (stderr)
none/tests/mremap               (stderr)
none/tests/mremap2              (stdout)
|
|
From: Julian S. <js...@ac...> - 2007-03-25 00:00:50
|
Did you make any changes in coregrind/ ?  If yes, what are they?

J

On Saturday 24 March 2007 19:53, xiaoming gu wrote:
> Hi, all.
>
> I get the following failed assertion problem when I'm trying to profile
> some program using a new tool based on valgrind.
> ===================
> valgrind: ../../valgrind-3.2.2-reda/coregrind/m_threadstate.c:63
> (vgPlain_get_ThreadState): Assertion 'VG_(threads)[tid].tid == tid' failed.
> In get_ThreadState.
> ===================
>
> I also get the following information from debugging it.
> ===================
> #0  vgPlain_get_ThreadState (tid=1)
>     at ../../valgrind-3.2.2-reda/coregrind/m_threadstate.c:63
> #1  0x38010aa4 in sync_signalhandler (sigNo=11, info=0x8907ab6c,
>     uc=0x8907abec)
>     at ../../valgrind-3.2.2-reda/coregrind/m_signals.c:1731
> #2  0x3800e500 in vgPlain_redir_notify_new_SegInfo ()
> Previous frame inner to this frame (corrupt stack?)
> ===================
>
> After trying my best, I still don't know where the bug is.  I use system
> calls provided by valgrind itself.  And when I try other programs with
> this new tool there is no such failed assertion.  Please give me some
> directions if you have seen the same problem before.
>
> Xiaoming Gu
|