From: Eric P. <eri...@wa...> - 2006-02-15 18:21:10

Tom Hughes wrote:
> In message <200...@kt...>
> Eric Pouech <eri...@wa...> wrote:
>
>> We do however have some (other) strange effect with stacks. When wine
>> starts, the first thread is given a new stack (different from the one
>> allocated by the OS). We need this because most windows executables
>> define the size of the stack, and we must create a stack of the right
>> size even for the first thread.
>>
>> So, the typical layout we end up with is:
>>
>> 0xBE?????? original stack
>> 0x20?????? new stack
>>
>> which is horrible for valgrind, as the range for stacks is seen as
>> between 0x20?????? and 0xBE??????, and in most cases we cannot get
>> a decent backtrace on this stack. The problem doesn't occur on any
>> other thread, because we can define the size of the stack at thread
>> creation time.
>
> Well valgrind itself provides the client program (wine in this case) with
> a stack that is not the original OS one, and it then emulates the stack
> growth behaviour that the kernel normally provides.
>
> There should be no problem with wine allocating its own stack - you
> will probably get a stack switch warning from valgrind when you jump
> on to it but valgrind should cope.
>
> Not sure why you aren't getting backtraces

It's because of this test in get_StackTrace2:

   if (fp_min + VG_(clo_max_stackframe) <= fp_max) {

What I currently get is that fp_min (on the new stack) is around
0x20??????, while fp_max, computed as the highest sp ever met, is
somewhere in 0xBE?????? (as the first stack has been used). This means
the test always fails, and we never get a backtrace.

Again, this only happens for the first thread of a program. The next
threads are created directly with the right stack, so this doesn't
happen for them.

I tried commenting out the test, and got the correct backtraces.
However, this is highly error prone, and in some other tests it
triggers sigsegv in get_StackTrace2 (because of the large span of
fp_min/fp_max).

> - one problem with
> backtraces in wine is obviously where you have things on the stack
> that only have PDB symbol tables/debugging which valgrind won't
> understand but it should still be able to unwind so long as there
> are normal x86 frames on the stack.

This is a different problem.

--
Eric Pouech
From: Tom H. <to...@co...> - 2006-02-15 17:54:09
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> > Not sure why you aren't getting backtraces - one problem with
> > backtraces in wine is obviously where you have things on the stack
> > that only have PDB symbol tables/debugging which valgrind won't
> > understand but it should still be able to unwind so long as there
> > are normal x86 frames on the stack.
>
> The symbol-table and line-number info and to some extent the stack
> unwind info is stored inside V in a target-neutral format. So at
least in principle, it should be possible to read info from PDBs
> (possibly using Adam Gundy's code) and store that; then V would
> be able to seamlessly display backtraces from combinations of
> ELF and PDB objects.
Absolutely. Adam's code was the wine code anyway. Adding PE executable
and PDB debugging support is certainly possible.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Julian S. <js...@ac...> - 2006-02-15 17:38:42

> Not sure why you aren't getting backtraces - one problem with
> backtraces in wine is obviously where you have things on the stack
> that only have PDB symbol tables/debugging which valgrind won't
> understand but it should still be able to unwind so long as there
> are normal x86 frames on the stack.

The symbol-table and line-number info, and to some extent the stack
unwind info, is stored inside V in a target-neutral format. So at
least in principle, it should be possible to read info from PDBs
(possibly using Adam Gundy's code) and store that; then V would
be able to seamlessly display backtraces from combinations of
ELF and PDB objects.

J
From: Tom H. <to...@co...> - 2006-02-15 17:18:42

In message <200...@kt...>
Eric Pouech <eri...@wa...> wrote:
> We do however have some (other) strange effect with stacks. When wine
> starts, the first thread is given a new stack (different from the one
> allocated by the OS). We need this because most windows executables
> define the size of the stack, and we must create a stack of the right
> size even for the first thread.
>
> So, the typical layout we end up with is:
>
> 0xBE?????? original stack
> 0x20?????? new stack
>
> which is horrible for valgrind, as the range for stacks is seen as
> between 0x20?????? and 0xBE??????, and in most cases we cannot get
> a decent backtrace on this stack. The problem doesn't occur on any
> other thread, because we can define the size of the stack at thread
> creation time.

Well valgrind itself provides the client program (wine in this case) with
a stack that is not the original OS one, and it then emulates the stack
growth behaviour that the kernel normally provides.

There should be no problem with wine allocating its own stack - you
will probably get a stack switch warning from valgrind when you jump
on to it but valgrind should cope.

Not sure why you aren't getting backtraces - one problem with
backtraces in wine is obviously where you have things on the stack
that only have PDB symbol tables/debugging which valgrind won't
understand but it should still be able to unwind so long as there
are normal x86 frames on the stack.

Tom

--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: <js...@ac...> - 2006-02-15 12:01:21

Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2006-02-15 05:00:01 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 192 tests, 11 stderr failures, 5 stdout failures =================
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigaltstack (stderr)
memcheck/tests/stack_changes (stdout)
memcheck/tests/stack_changes (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-fp (stderr)
none/tests/ppc32/test_fx (stdout)
none/tests/ppc32/test_fx (stderr)
none/tests/ppc32/test_gx (stdout)
From: <sv...@va...> - 2006-02-15 10:45:27

Author: tom
Date: 2006-02-15 10:45:18 +0000 (Wed, 15 Feb 2006)
New Revision: 5653

Log:
Update bug status.

Modified:
   trunk/docs/internals/3_1_BUGSTATUS.txt

Modified: trunk/docs/internals/3_1_BUGSTATUS.txt
===================================================================
--- trunk/docs/internals/3_1_BUGSTATUS.txt	2006-02-15 10:44:02 UTC (rev 5652)
+++ trunk/docs/internals/3_1_BUGSTATUS.txt	2006-02-15 10:45:18 UTC (rev 5653)
@@ -44,6 +44,8 @@
 v5633    pending   120728  TIOCSERGETLSR, TIOCGICOUNT, HDIO_GET_DMA ioctls
 vx1419   pending   120658  Build fixes for gcc 2.96
 v5593    pending   120658  Pass -Wdeclaration-after-statement to VEX build
+pending  pending   120732  Generating trapno for sigcontext (x86)
+v5641    pending   120734  Support for changing EIP in signal handler (x86)
 v5616    pending   n-i-bz  memcheck/tests/zeropage de-looping fix
 vx1569   pending   n-i-bz  x86 fxtract doesn't work reliably
 probably-wontfix   121029  std::pow returns different float values
@@ -51,6 +53,8 @@
 pending  pending   121662  x86: lock xadd (0xF0 0xF 0xC0 0x2)
 v5647    pending   121893  calloc does not always zero memory
 pending  pending   n-i-bz  XML output truncated (users, Jan 26 09:08:34 2006)
+pending  pending   121896  Handling ESP modification in ucontext from signal handlers
+v5651    pending   121901  no support for syscall tkill

 (next 3 are ppc32-specific FP problems)
 many     pending   n-i-bz  ppc32 rounding mode problems
From: <sv...@va...> - 2006-02-15 10:44:12

Author: tom
Date: 2006-02-15 10:44:02 +0000 (Wed, 15 Feb 2006)
New Revision: 5652

Log:
Restore RIP on return from a signal handler on amd64 - mirrors the
change in revision 5641 to restore EIP on x86.

Modified:
   trunk/coregrind/m_sigframe/sigframe-amd64-linux.c

Modified: trunk/coregrind/m_sigframe/sigframe-amd64-linux.c
===================================================================
--- trunk/coregrind/m_sigframe/sigframe-amd64-linux.c	2006-02-15 10:34:50 UTC (rev 5651)
+++ trunk/coregrind/m_sigframe/sigframe-amd64-linux.c	2006-02-15 10:44:02 UTC (rev 5652)
@@ -565,7 +565,7 @@
    tst->arch.vex.guest_R14 = sc->r14;
    tst->arch.vex.guest_R15 = sc->r15;
//::    tst->arch.vex.guest_rflags = sc->rflags;
-//::    tst->arch.vex.guest_RIP = sc->rip;
+   tst->arch.vex.guest_RIP = sc->rip;

//::    tst->arch.vex.guest_CS = sc->cs;
//::    tst->arch.vex.guest_FS = sc->fs;
From: <sv...@va...> - 2006-02-15 10:34:59

Author: tom
Date: 2006-02-15 10:34:50 +0000 (Wed, 15 Feb 2006)
New Revision: 5651

Log:
Fix the tkill system call wrapper and enable it on x86 and amd64.
Fixes bug #121901.

Modified:
   trunk/coregrind/m_syswrap/syswrap-amd64-linux.c
   trunk/coregrind/m_syswrap/syswrap-linux.c
   trunk/coregrind/m_syswrap/syswrap-x86-linux.c

Modified: trunk/coregrind/m_syswrap/syswrap-amd64-linux.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-amd64-linux.c	2006-02-14 21:55:11 UTC (rev 5650)
+++ trunk/coregrind/m_syswrap/syswrap-amd64-linux.c	2006-02-15 10:34:50 UTC (rev 5651)
@@ -1204,7 +1204,7 @@
    LINX_(__NR_lremovexattr, sys_lremovexattr), // 198
    LINX_(__NR_fremovexattr, sys_fremovexattr), // 199

-   // (__NR_tkill, sys_tkill), // 200
+   LINXY(__NR_tkill, sys_tkill), // 200
    GENXY(__NR_time, sys_time), /*was sys_time64*/ // 201
    LINXY(__NR_futex, sys_futex), // 202
    LINX_(__NR_sched_setaffinity, sys_sched_setaffinity), // 203

Modified: trunk/coregrind/m_syswrap/syswrap-linux.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-linux.c	2006-02-14 21:55:11 UTC (rev 5650)
+++ trunk/coregrind/m_syswrap/syswrap-linux.c	2006-02-15 10:34:50 UTC (rev 5651)
@@ -867,30 +867,47 @@
    PRE_REG_READ1(long, "set_tid_address", int *, tidptr);
 }

-//zz PRE(sys_tkill, Special)
-//zz {
-//zz    /* int tkill(pid_t tid, int sig); */
-//zz    PRINT("sys_tkill ( %d, %d )", ARG1,ARG2);
-//zz    PRE_REG_READ2(long, "tkill", int, tid, int, sig);
-//zz    if (!ML_(client_signal_OK)(ARG2)) {
-//zz       SET_STATUS_( -VKI_EINVAL );
-//zz       return;
-//zz    }
-//zz
-//zz    /* If we're sending SIGKILL, check to see if the target is one of
-//zz       our threads and handle it specially. */
-//zz    if (ARG2 == VKI_SIGKILL && ML_(do_sigkill)(ARG1, -1))
-//zz       SET_STATUS_(0);
-//zz    else
-//zz       SET_STATUS_(VG_(do_syscall2)(SYSNO, ARG1, ARG2));
-//zz
-//zz    if (VG_(clo_trace_signals))
-//zz       VG_(message)(Vg_DebugMsg, "tkill: sent signal %d to pid %d",
-//zz                    ARG2, ARG1);
-//zz    // Check to see if this kill gave us a pending signal
-//zz    XXX FIXME VG_(poll_signals)(tid);
-//zz }
+PRE(sys_tkill)
+{
+   PRINT("sys_tgkill ( %d, %d )", ARG1,ARG2);
+   PRE_REG_READ2(long, "tkill", int, tid, int, sig);
+   if (!ML_(client_signal_OK)(ARG2)) {
+      SET_STATUS_Failure( VKI_EINVAL );
+      return;
+   }
+
+   /* Check to see if this kill gave us a pending signal */
+   *flags |= SfPollAfter;

+   if (VG_(clo_trace_signals))
+      VG_(message)(Vg_DebugMsg, "tkill: sending signal %d to pid %d",
+                   ARG2, ARG1);
+
+   /* If we're sending SIGKILL, check to see if the target is one of
+      our threads and handle it specially. */
+   if (ARG2 == VKI_SIGKILL && ML_(do_sigkill)(ARG1, -1)) {
+      SET_STATUS_Success(0);
+      return;
+   }
+
+   /* Ask to handle this syscall via the slow route, since that's the
+      only one that sets tst->status to VgTs_WaitSys.  If the result
+      of doing the syscall is an immediate run of
+      async_signalhandler() in m_signals, then we need the thread to
+      be properly tidied away.  I have the impression the previous
+      version of this wrapper worked on x86/amd64 only because the
+      kernel did not immediately deliver the async signal to this
+      thread (on ppc it did, which broke the assertion re tst->status
+      at the top of async_signalhandler()). */
+   *flags |= SfMayBlock;
+}
+
+POST(sys_tkill)
+{
+   if (VG_(clo_trace_signals))
+      VG_(message)(Vg_DebugMsg, "tkill: sent signal %d to pid %d",
+                   ARG2, ARG1);
+}
+
 PRE(sys_tgkill)
 {
    PRINT("sys_tgkill ( %d, %d, %d )", ARG1,ARG2,ARG3);

Modified: trunk/coregrind/m_syswrap/syswrap-x86-linux.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-x86-linux.c	2006-02-14 21:55:11 UTC (rev 5650)
+++ trunk/coregrind/m_syswrap/syswrap-x86-linux.c	2006-02-15 10:34:50 UTC (rev 5651)
@@ -2050,7 +2050,7 @@
    LINX_(__NR_removexattr, sys_removexattr), // 235
    LINX_(__NR_lremovexattr, sys_lremovexattr), // 236
    LINX_(__NR_fremovexattr, sys_fremovexattr), // 237
-//zz LINX_(__NR_tkill, sys_tkill), // 238 */Linux
+   LINXY(__NR_tkill, sys_tkill), // 238 */Linux
    LINXY(__NR_sendfile64, sys_sendfile64), // 239

    LINXY(__NR_futex, sys_futex), // 240
From: <js...@ac...> - 2006-02-15 04:08:32

Nightly build on phoenix ( SuSE 10.0 ) started at 2006-02-15 03:30:01 GMT
Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 223 tests, 7 stderr failures, 0 stdout failures =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
From: <js...@ac...> - 2006-02-15 03:57:51

Nightly build on g5 ( YDL 4.0, ppc970 ) started at 2006-02-15 04:40:00 CET
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 197 tests, 6 stderr failures, 1 stdout failure =================
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/pointer-trace (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
From: Tom H. <to...@co...> - 2006-02-15 03:44:12

Nightly build on dunsmere ( athlon, Fedora Core 4 ) started at 2006-02-15 03:30:05 GMT
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 225 tests, 7 stderr failures, 1 stdout failure =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 225 tests, 8 stderr failures, 1 stdout failure =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short	Wed Feb 15 03:37:09 2006
--- new.short	Wed Feb 15 03:43:59 2006
***************
*** 8,12 ****
! == 225 tests, 8 stderr failures, 1 stdout failure =================
  memcheck/tests/leak-tree (stderr)
- memcheck/tests/mempool (stderr)
  memcheck/tests/pointer-trace (stderr)
--- 8,11 ----
! == 225 tests, 7 stderr failures, 1 stdout failure =================
  memcheck/tests/leak-tree (stderr)
  memcheck/tests/pointer-trace (stderr)
From: Tom H. <th...@cy...> - 2006-02-15 03:33:16

Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2006-02-15 03:15:03 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 224 tests, 21 stderr failures, 1 stdout failure =================
memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
memcheck/tests/xml1 (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
From: Tom H. <th...@cy...> - 2006-02-15 03:26:45

Nightly build on dellow ( x86_64, Fedora Core 4 ) started at 2006-02-15 03:10:07 GMT
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 245 tests, 6 stderr failures, 1 stdout failure =================
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
none/tests/amd64/faultstatus (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 245 tests, 5 stderr failures, 1 stdout failure =================
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
none/tests/amd64/faultstatus (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short	Wed Feb 15 03:19:29 2006
--- new.short	Wed Feb 15 03:26:37 2006
***************
*** 8,10 ****
! == 245 tests, 5 stderr failures, 1 stdout failure =================
  memcheck/tests/x86/scalar (stderr)
--- 8,11 ----
! == 245 tests, 6 stderr failures, 1 stdout failure =================
! memcheck/tests/pointer-trace (stderr)
  memcheck/tests/x86/scalar (stderr)
From: Tom H. <th...@cy...> - 2006-02-15 03:23:15

Nightly build on aston ( x86_64, Fedora Core 3 ) started at 2006-02-15 03:05:07 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 245 tests, 6 stderr failures, 1 stdout failure =================
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
none/tests/amd64/faultstatus (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
From: Tom H. <th...@cy...> - 2006-02-15 03:16:05

Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2006-02-15 03:00:05 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 245 tests, 7 stderr failures, 1 stdout failure =================
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
none/tests/amd64/faultstatus (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)