From: Tom H. <to...@co...> - 2005-08-07 16:03:14
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
>
> > > I think (unless I misunderstand) that this is a non-problem.
> > > Vex makes it easy to defer to the scheduler and simultaneously
> > > set the next-IP value to anything you like (any constant value
> > > known at translation-time). So for sysenter we'd merely need
> > > to tell vex what the where-next address is, and there's already
> > > a struct to tell Vex that kind of stuff (eg, for ppc32 'dcbz'
> > > it needs to know the cache line size of the machine being simulated,
> > > and only Valgrind knows that).
> > >
> > > Shall I hack this up?
> >
> > Might be any idea. I believe that Solaris needs sysenter support
> > anyway if I remember correctly.
>
> Done (1320/4337). Note, I realised later the above scheme is
> unnecessarily complicated. A sysenter now causes the calling thread
> to return to the scheduler with code VEX_TRC_JMP_SYSENTER_X86.
> It is the scheduler's problem to fill in the thread's guest_EIP
> with a valid restart address before letting it run again.
Thanks for that. I'll have to find an Intel box to try it on.
I was being a muppet earlier when I claimed that the amd64 in 32 bit
mode was using sysenter - it was actually using syscall so we will
need that as well... The 32 bit emulation on amd64 uses either syscall
or sysenter in the vdso depending on the processor.
It never uses int $80 although the kernel does seem to recognise and
handle it because after adding syscall support to VEX on x86 I now have
a 32 bit process running under valgrind with the vdso patch.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: <sv...@va...> - 2005-08-07 15:17:07
|
Author: tom
Date: 2005-08-07 16:16:59 +0100 (Sun, 07 Aug 2005)
New Revision: 4338
Log:
Check the fields of the new structure passed to sigaction individually
and only check sa_restorer if the SA_RESTORER flag is set.
Modified:
trunk/coregrind/m_syswrap/syswrap-generic.c
trunk/coregrind/m_syswrap/syswrap-x86-linux.c
Modified: trunk/coregrind/m_syswrap/syswrap-generic.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-generic.c	2005-08-07 14:49:27 UTC (rev 4337)
+++ trunk/coregrind/m_syswrap/syswrap-generic.c	2005-08-07 15:16:59 UTC (rev 4338)
@@ -5374,8 +5374,14 @@
                  int, signum, const struct sigaction *, act,
                  struct sigaction *, oldact, vki_size_t, sigsetsize);
 
-   if (ARG2 != 0)
-      PRE_MEM_READ( "rt_sigaction(act)", ARG2, sizeof(struct vki_sigaction));
+   if (ARG2 != 0) {
+      struct vki_sigaction *sa = (struct vki_sigaction *)ARG2;
+      PRE_MEM_READ( "rt_sigaction(act->sa_handler)", (Addr)&sa->ksa_handler, sizeof(sa->ksa_handler));
+      PRE_MEM_READ( "rt_sigaction(act->sa_mask)", (Addr)&sa->sa_mask, sizeof(sa->sa_mask));
+      PRE_MEM_READ( "rt_sigaction(act->sa_flags)", (Addr)&sa->sa_flags, sizeof(sa->sa_flags));
+      if (sa->sa_flags & VKI_SA_RESTORER)
+         PRE_MEM_READ( "rt_sigaction(act->sa_restorer)", (Addr)&sa->sa_restorer, sizeof(sa->sa_restorer));
+   }
    if (ARG3 != 0)
       PRE_MEM_WRITE( "rt_sigaction(oldact)", ARG3, sizeof(struct vki_sigaction));
 
Modified: trunk/coregrind/m_syswrap/syswrap-x86-linux.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-x86-linux.c	2005-08-07 14:49:27 UTC (rev 4337)
+++ trunk/coregrind/m_syswrap/syswrap-x86-linux.c	2005-08-07 15:16:59 UTC (rev 4338)
@@ -1871,8 +1871,14 @@
 
    newp = oldp = NULL;
 
-   if (ARG2 != 0)
-      PRE_MEM_READ( "sigaction(act)", ARG2, sizeof(struct vki_old_sigaction));
+   if (ARG2 != 0) {
+      struct vki_old_sigaction *sa = (struct vki_old_sigaction *)ARG2;
+      PRE_MEM_READ( "rt_sigaction(act->sa_handler)", (Addr)&sa->ksa_handler, sizeof(sa->ksa_handler));
+      PRE_MEM_READ( "rt_sigaction(act->sa_mask)", (Addr)&sa->sa_mask, sizeof(sa->sa_mask));
+      PRE_MEM_READ( "rt_sigaction(act->sa_flags)", (Addr)&sa->sa_flags, sizeof(sa->sa_flags));
+      if (sa->sa_flags & VKI_SA_RESTORER)
+         PRE_MEM_READ( "rt_sigaction(act->sa_restorer)", (Addr)&sa->sa_restorer, sizeof(sa->sa_restorer));
+   }
 
    if (ARG3 != 0) {
      PRE_MEM_WRITE( "sigaction(oldact)", ARG3, sizeof(struct vki_old_sigaction));
|
|
From: Kailash S. <hs...@gm...> - 2005-08-07 15:01:59
|
Hi all,

On 8/7/05, Julian Seward <js...@ac...> wrote:
>
> Here's a possible plan which might help NetBSD, Solaris 8
> and MacOSX. The idea is to get rid of stage 1 completely, and
> turn stage 2 into a statically linked executable which is
> completely standalone -- it does not depend on either libc or
> on any dynamic linking library (ld.so). stage 2 would also have
> a non-standard load address, as it does currently.
>
> Currently stage2 uses dlopen to load the tool. In this
> revised plan, tools would be statically linked to the core,
> so there would be a different stage2 for each tool
> (valgrind_memcheck, valgrind_cachegrind, etc).
>
> The result is (for each tool) a statically linked valgrind
> binary which can be started directly.

I think this is a good plan which would greatly help improve portability.

> So .. I already tried out most of this -- the removal of
> stage1 and the static linking work at least on x86 linux.

Do you have experimental patches for this? I could try this on NetBSD too.
The valgrind port of netbsd is based on a snapshot of valgrind-current
from a few weeks ago.

> Another advantage is that there no longer any need for address
> space padding games to get stage2 to load at the address we
> want it to. Overall it appears to remove some complexity and
> Linux-specificness.
>
> Once that's done, the next step is to rewrite the address space
> manager to be much more flexible about address space layout.
> We know from the FreeBSD folks that address space layout issues
> are at present a major problem in porting V to FreeBSD, and
> the same goes for MacOSX, so this seems like a good thing for
> lots of reasons.

Yes please :)

> J
>
> On Thursday 04 August 2005 20:07, Naveen Kumar wrote:
> > I had a similar problem(no auxv) in Solaris 8 when
> > stage1 was compiled as static. You just need to make
> > up the remaining auxv entries with known info about
> > the OS and stage2 exe and add it just like
> > AT_UME_PADFD or AT_UME_EXECFD.

I have tried this, and the NetBSD linker is still confused and crashes
before stage 2 can be loaded. Anyway we are working to debug this and
get a dynamic stage2 loaded for now.

Regards,
Kailash
|
|
From: Julian S. <js...@ac...> - 2005-08-07 14:52:53
|
> > I think (unless I misunderstand) that this is a non-problem.
> > Vex makes it easy to defer to the scheduler and simultaneously
> > set the next-IP value to anything you like (any constant value
> > known at translation-time). So for sysenter we'd merely need
> > to tell vex what the where-next address is, and there's already
> > a struct to tell Vex that kind of stuff (eg, for ppc32 'dcbz'
> > it needs to know the cache line size of the machine being simulated,
> > and only Valgrind knows that).
> >
> > Shall I hack this up?
>
> Might be any idea. I believe that Solaris needs sysenter support
> anyway if I remember correctly.

Done (1320/4337). Note, I realised later the above scheme is
unnecessarily complicated. A sysenter now causes the calling thread
to return to the scheduler with code VEX_TRC_JMP_SYSENTER_X86.
It is the scheduler's problem to fill in the thread's guest_EIP
with a valid restart address before letting it run again.

J
|
|
From: <sv...@va...> - 2005-08-07 14:49:28
|
Author: sewardj
Date: 2005-08-07 15:49:27 +0100 (Sun, 07 Aug 2005)
New Revision: 4337
Log:
Valgrind-side stub for dealing with x86 sysenter artefacts from Vex.
Does not do anything yet.
Modified:
trunk/coregrind/m_scheduler/scheduler.c
Modified: trunk/coregrind/m_scheduler/scheduler.c
===================================================================
--- trunk/coregrind/m_scheduler/scheduler.c	2005-08-06 18:07:17 UTC (rev 4336)
+++ trunk/coregrind/m_scheduler/scheduler.c	2005-08-07 14:49:27 UTC (rev 4337)
@@ -792,6 +792,24 @@
                        "run_innerloop detected host "
                        "state invariant failure", trc);
 
+      case VEX_TRC_JMP_SYSENTER_X86:
+         /* Do whatever simulation is appropriate for an x86 sysenter
+            instruction.  Note that it is critical to set this thread's
+            guest_EIP to point at the code to execute after the
+            sysenter, since Vex-generated code will not have set it --
+            vex does not know what it should be.  Vex sets the next
+            address to zero, so if you don't set guest_EIP, the thread
+            will jump to zero afterwards and probably die as a
+            result. */
+#        if defined(VGA_x86)
+         //FIXME: VG_(threads)[tid].arch.vex.guest_EIP = ....
+         //handle_sysenter_x86(tid);
+         vg_assert2(0, "VG_(scheduler), phase 3: "
+                       "sysenter_x86 on x86 not yet implemented");
+#        else
+         vg_assert2(0, "VG_(scheduler), phase 3: "
+                       "sysenter_x86 on non-x86 platform?!?!");
+#        endif
+
       default:
          vg_assert2(0, "VG_(scheduler), phase 3: "
                       "unexpected thread return code (%u)", trc);
|
|
From: <sv...@va...> - 2005-08-07 14:48:11
|
Author: sewardj
Date: 2005-08-07 15:48:03 +0100 (Sun, 07 Aug 2005)
New Revision: 1320
Log:
A minimal implementation of the x86 sysenter instruction
(experimental). Limitations as commented in the code.
Modified:
trunk/priv/guest-x86/toIR.c
trunk/priv/host-x86/hdefs.c
trunk/priv/ir/irdefs.c
trunk/pub/libvex_ir.h
trunk/pub/libvex_trc_values.h
Modified: trunk/priv/guest-x86/toIR.c
===================================================================
--- trunk/priv/guest-x86/toIR.c	2005-08-06 11:45:02 UTC (rev 1319)
+++ trunk/priv/guest-x86/toIR.c	2005-08-07 14:48:03 UTC (rev 1320)
@@ -88,6 +88,15 @@
    happen.  Programs that set it to 1 and then rely on the resulting
    SIGBUSs to inform them of misaligned accesses will not work.
 
+   The implementation of sysenter is necessarily partial.  sysenter is
+   a kind of system call entry.  When doing a sysenter, the return
+   address is not known -- that is something that is beyond Vex's
+   knowledge.  So the generated IR forces a return to the scheduler,
+   which can do what it likes to simulate the sysenter, but it MUST
+   set this thread's guest_EIP field with the continuation address
+   before resuming execution.  If that doesn't happen, the thread will
+   jump to address zero, which is probably fatal.
+
    This module uses global variables and so is not MT-safe (if that
    should ever become relevant).
 
@@ -11974,6 +11983,26 @@
                          "%cl", False );
          break;
 
+      /* =-=-=-=-=-=-=-=-=- SYSENTER -=-=-=-=-=-=-=-=-=-= */
+
+      case 0x34:
+         /* Simple implementation needing a long explanation.
+
+            sysenter is a kind of syscall entry.  The key thing here
+            is that the return address is not known -- that is
+            something that is beyond Vex's knowledge.  So this IR
+            forces a return to the scheduler, which can do what it
+            likes to simulate the sysenter, but it MUST set this
+            thread's guest_EIP field with the continuation address
+            before resuming execution.  If that doesn't happen, the
+            thread will jump to address zero, which is probably
+            fatal.
+         */
+         jmp_lit(Ijk_SysenterX86, 0/*bogus next EIP value*/);
+         dres.whatNext = Dis_StopHere;
+         DIP("sysenter");
+         break;
+
       /* =-=-=-=-=-=-=-=-=- XADD -=-=-=-=-=-=-=-=-=-= */
 
//--   case 0xC0: /* XADD Gb,Eb */
Modified: trunk/priv/host-x86/hdefs.c
===================================================================
--- trunk/priv/host-x86/hdefs.c	2005-08-06 11:45:02 UTC (rev 1319)
+++ trunk/priv/host-x86/hdefs.c	2005-08-07 14:48:03 UTC (rev 1320)
@@ -2165,6 +2165,9 @@
          case Ijk_TInval:
             *p++ = 0xBD;
             p = emit32(p, VEX_TRC_JMP_TINVAL); break;
+         case Ijk_SysenterX86:
+            *p++ = 0xBD;
+            p = emit32(p, VEX_TRC_JMP_SYSENTER_X86); break;
          case Ijk_Ret:
          case Ijk_Call:
          case Ijk_Boring:
Modified: trunk/priv/ir/irdefs.c
===================================================================
--- trunk/priv/ir/irdefs.c	2005-08-06 11:45:02 UTC (rev 1319)
+++ trunk/priv/ir/irdefs.c	2005-08-07 14:48:03 UTC (rev 1320)
@@ -599,17 +599,18 @@
 void ppIRJumpKind ( IRJumpKind kind )
 {
    switch (kind) {
-      case Ijk_Boring:      vex_printf("Boring"); break;
-      case Ijk_Call:        vex_printf("Call"); break;
-      case Ijk_Ret:         vex_printf("Return"); break;
-      case Ijk_ClientReq:   vex_printf("ClientReq"); break;
-      case Ijk_Syscall:     vex_printf("Syscall"); break;
-      case Ijk_Yield:       vex_printf("Yield"); break;
-      case Ijk_EmWarn:      vex_printf("EmWarn"); break;
-      case Ijk_NoDecode:    vex_printf("NoDecode"); break;
-      case Ijk_MapFail:     vex_printf("MapFail"); break;
-      case Ijk_TInval:      vex_printf("Invalidate"); break;
-      default:              vpanic("ppIRJumpKind");
+      case Ijk_Boring:      vex_printf("Boring"); break;
+      case Ijk_Call:        vex_printf("Call"); break;
+      case Ijk_Ret:         vex_printf("Return"); break;
+      case Ijk_ClientReq:   vex_printf("ClientReq"); break;
+      case Ijk_Syscall:     vex_printf("Syscall"); break;
+      case Ijk_Yield:       vex_printf("Yield"); break;
+      case Ijk_EmWarn:      vex_printf("EmWarn"); break;
+      case Ijk_NoDecode:    vex_printf("NoDecode"); break;
+      case Ijk_MapFail:     vex_printf("MapFail"); break;
+      case Ijk_TInval:      vex_printf("Invalidate"); break;
+      case Ijk_SysenterX86: vex_printf("SysenterX86"); break;
+      default:              vpanic("ppIRJumpKind");
    }
 }
 
Modified: trunk/pub/libvex_ir.h
===================================================================
--- trunk/pub/libvex_ir.h	2005-08-06 11:45:02 UTC (rev 1319)
+++ trunk/pub/libvex_ir.h	2005-08-07 14:48:03 UTC (rev 1320)
@@ -816,7 +816,9 @@
       Ijk_EmWarn,      /* report emulation warning before continuing */
       Ijk_NoDecode,    /* next instruction cannot be decoded */
       Ijk_MapFail,     /* Vex-provided address translation failed */
-      Ijk_TInval       /* Invalidate translations before continuing. */
+      Ijk_TInval,      /* Invalidate translations before continuing. */
+      Ijk_SysenterX86  /* X86 sysenter.  guest_EIP becomes invalid
+                          at the point this happens. */
    }
    IRJumpKind;
 
Modified: trunk/pub/libvex_trc_values.h
===================================================================
--- trunk/pub/libvex_trc_values.h	2005-08-06 11:45:02 UTC (rev 1319)
+++ trunk/pub/libvex_trc_values.h	2005-08-07 14:48:03 UTC (rev 1320)
@@ -67,7 +67,10 @@
 #define VEX_TRC_JMP_NODECODE  29  /* next instruction is not decodable */
 #define VEX_TRC_JMP_MAPFAIL   31  /* address translation failed */
 
+#define VEX_TRC_JMP_SYSENTER_X86 9  /* simulate X86 sysenter before
+                                       continuing */
 
+
 #endif /* ndef __LIBVEX_TRC_VALUES_H */
 
/*---------------------------------------------------------------*/
|
|
From: Tom H. <to...@co...> - 2005-08-07 11:56:15
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> > The next problem obviously is sysenter. Unfortunately supporting it
> > might be hard as the user program does not provide the return address
> > when it is in use - the kernel returns (using sysexit) to a fixed
> > address based on where it has mapped the vdso. That address is not
> > even the one after the sysenter instruction - there are some nops
> > and a jump in between.
> >
> > So if VEX caught sysenter and handed control back to valgrind and
> > valgrind then ran the system call and tried to return control to the
> > address after the sysenter instruction we would actually wind up
> > looping back and doing the system call again...
>
> I think (unless I misunderstand) that this is a non-problem.
> Vex makes it easy to defer to the scheduler and simultaneously
> set the next-IP value to anything you like (any constant value
> known at translation-time). So for sysenter we'd merely need
> to tell vex what the where-next address is, and there's already
> a struct to tell Vex that kind of stuff (eg, for ppc32 'dcbz'
> it needs to know the cache line size of the machine being simulated,
> and only Valgrind knows that).
>
> Shall I hack this up?
Might be any idea. I believe that Solaris needs sysenter support
anyway if I remember correctly.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2005-08-07 11:39:39
|
> The next problem obviously is sysenter. Unfortunately supporting it
> might be hard as the user program does not provide the return address
> when it is in use - the kernel returns (using sysexit) to a fixed
> address based on where it has mapped the vdso. That address is not
> even the one after the sysenter instruction - there are some nops
> and a jump in between.
>
> So if VEX caught sysenter and handed control back to valgrind and
> valgrind then ran the system call and tried to return control to the
> address after the sysenter instruction we would actually wind up
> looping back and doing the system call again...

I think (unless I misunderstand) that this is a non-problem.
Vex makes it easy to defer to the scheduler and simultaneously
set the next-IP value to anything you like (any constant value
known at translation-time). So for sysenter we'd merely need
to tell vex what the where-next address is, and there's already
a struct to tell Vex that kind of stuff (eg, for ppc32 'dcbz'
it needs to know the cache line size of the machine being simulated,
and only Valgrind knows that).

Shall I hack this up?

> Something else to beware of is that we wind up with stage2 and
> the client program both using the vdso (when stage2 uses glibc
> to do things) although as it has no state that is probably safe.

Yet another reason to be completely independent of glibc.

> On amd64 I don't believe the vdso is used in the same way - although
> there is a special page containing vsyscalls I don't believe that the
> auxv is used to announce it as it is mapped at a fixed address and
> glibc just assumes it can jump to addresses in that page for certain
> system calls.

That is my understanding too.

J
|
|
From: Tom H. <to...@co...> - 2005-08-07 10:53:03
|
While investigating bug #110205 the question arose as to why we hide
any kernel provided vdso from the client program and whether we could
allow the client program to use it instead. Julian wanted to know more
about vdsos, so here is a quick summary of where I'm at.

The code for the x86 vdso is in arch/i386/kernel in the kernel source
(the vsyscall*.S files). Current kernels contain two versions of the
vdso, one for systems supporting the sysenter instruction and one for
systems which do not support it.

Each vdso currently contains __kernel_vsyscall which is a routine to
make a system call, either using sysenter or using int $80 for older
machines. They also contain __kernel_sigreturn and
__kernel_rt_sigreturn which are the default return addresses for
signal handlers when the kernel builds a signal frame on the stack.

The vdso is a full ELF object and AT_SYSINFO_EHDR in the auxv will
point to it - in addition AT_SYSINFO will point at the system call
routine within that object as old glibcs only expect a simple system
call routine pointer.

On an athlon system that does not support sysenter we can get valgrind
to use the vdso - the patch to do so is attached. The first thing is
just to stop patching in AT_IGNORE over the auxv entries. The second
problem is that the vdso seems to be mapped fairly low down in what is
normally the client space so we normally accidentally unmap it when we
make the client hole. I have worked around that in the patch.

The next problem obviously is sysenter. Unfortunately supporting it
might be hard as the user program does not provide the return address
when it is in use - the kernel returns (using sysexit) to a fixed
address based on where it has mapped the vdso. That address is not
even the one after the sysenter instruction - there are some nops
and a jump in between.

So if VEX caught sysenter and handed control back to valgrind and
valgrind then ran the system call and tried to return control to the
address after the sysenter instruction we would actually wind up
looping back and doing the system call again...

Something else to beware of is that we wind up with stage2 and
the client program both using the vdso (when stage2 uses glibc
to do things) although as it has no state that is probably safe.

On amd64 I don't believe the vdso is used in the same way - although
there is a special page containing vsyscalls I don't believe that the
auxv is used to announce it as it is mapped at a fixed address and
glibc just assumes it can jump to addresses in that page for certain
system calls. The x86 on amd64 system call stuff does use it I believe
as when I was playing with a 32 bit build on an amd64 box on Friday I
managed to hit a sysenter instruction (Athlon 64 does support sysenter
for 32 bit programs). I haven't looked at that in detail though.
Likewise I haven't looked at ppc32 at all.

Tom

--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2005-08-07 10:24:33
|
Here's a possible plan which might help NetBSD, Solaris 8
and MacOSX. The idea is to get rid of stage 1 completely, and
turn stage 2 into a statically linked executable which is
completely standalone -- it does not depend on either libc or
on any dynamic linking library (ld.so). stage 2 would also have
a non-standard load address, as it does currently.

Currently stage2 uses dlopen to load the tool. In this
revised plan, tools would be statically linked to the core,
so there would be a different stage2 for each tool
(valgrind_memcheck, valgrind_cachegrind, etc).

The result is (for each tool) a statically linked valgrind
binary which can be started directly.

So .. I already tried out most of this -- the removal of
stage1 and the static linking work at least on x86 linux.

Another advantage is that there is no longer any need for address
space padding games to get stage2 to load at the address we
want it to. Overall it appears to remove some complexity and
Linux-specificness.

Once that's done, the next step is to rewrite the address space
manager to be much more flexible about address space layout.
We know from the FreeBSD folks that address space layout issues
are at present a major problem in porting V to FreeBSD, and
the same goes for MacOSX, so this seems like a good thing for
lots of reasons.

J

On Thursday 04 August 2005 20:07, Naveen Kumar wrote:
> > valgrind-current. If we do
> > this in netbsd, it seems that the auxv structures
> > that we look for in
> > fix_auxv() are not there. Only the first two we place
> > are located. If
> > stage1 is compiled as dynamic, the rest of the auxv
> > structures can be
> > found and fixed up.
>
> I had a similar problem(no auxv) in Solaris 8 when
> stage1 was compiled as static. You just need to make
> up the remaining auxv entries with known info about
> the OS and stage2 exe and add it just like
> AT_UME_PADFD or AT_UME_EXECFD.
>
> > My suspicion was that loading a dynamic stage2 when
> > stage1 is also
> > dynamic could result in some problems, so I tried a
> > static stage2,
> > which successfully loaded. The problem here in stage
> > 2 is that the
> > break() , used internally by malloc in libc syscalls
> > fail. I dont
> > think netbsd falls back to mmap like in glibc, I have
> > not investigated
> > this closely.
>
> I had the exact same problem on Solaris. I
> circumvented that by using the mapmalloc library which
> implements malloc using mmap. Is there such an
> alternative on netbsd ? Worst case you can try to hack
> mapmalloc on netbsd from solaris.
>
> http://cvs.opensolaris.org/source/xref/usr/src/lib/libmapmalloc/
>
> Naveen
>
> ____________________________________________________
> Start your day with Yahoo! - make it your home page
> http://www.yahoo.com/r/hs
>
> -------------------------------------------------------
> SF.Net email is Sponsored by the Better Software Conference & EXPO
> September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
> Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA
> Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf
> _______________________________________________
> Valgrind-developers mailing list
> Val...@li...
> https://lists.sourceforge.net/lists/listinfo/valgrind-developers
|
|
From: Julian S. <js...@ac...> - 2005-08-07 10:22:30
|
> I think you have sorted it - that is a diff from the old results
> to the new results, so it is no longer failing.

Um. Well, we'll see tonight. I'm a bit surprised though because I
introduced the problem with vex r1319 at 12:45 yesterday and committed
a corresponding regtest fix (r4334) about an hour later, and that
interval surely does not intersect with the time your overnight build
ran.

J
|
|
From: Tom H. <to...@co...> - 2005-08-07 10:14:02
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
>
> > On Sunday 07 August 2005 03:40, Tom Hughes wrote:
> > Nightly build on dunsmere ( athlon, Fedora Core 4 ) started at 2005-08-07
> > 03:30:04 BST
>
> > ! == 181 tests, 6 stderr failures, 0 stdout failures =================
> > memcheck/tests/leak-tree (stderr)
> > memcheck/tests/weirdioctl (stderr)
> > - memcheck/tests/x86/pushfpopf (stderr)
> > memcheck/tests/xml1 (stderr)
> > --- 8,12 ----
> >
> > ! == 181 tests, 5 stderr failures, 0 stdout failures =================
> > memcheck/tests/leak-tree (stderr)
> > memcheck/tests/weirdioctl (stderr)
> > memcheck/tests/xml1 (stderr)
>
> I thought I had this sorted out (r4334) and I can't repro this
> failure on my fc4 installation. Could you send
> memcheck/tests/x86/pushfpopf.stderr.out so I can see what happen?
I think you have sorted it - that is a diff from the old results
to the new results, so it is no longer failing.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2005-08-07 09:39:38
|
> On Sunday 07 August 2005 03:40, Tom Hughes wrote:
> Nightly build on dunsmere ( athlon, Fedora Core 4 ) started at 2005-08-07
> 03:30:04 BST
>
> ! == 181 tests, 6 stderr failures, 0 stdout failures =================
>   memcheck/tests/leak-tree (stderr)
>   memcheck/tests/weirdioctl (stderr)
> - memcheck/tests/x86/pushfpopf (stderr)
>   memcheck/tests/xml1 (stderr)
> --- 8,12 ----
>
> ! == 181 tests, 5 stderr failures, 0 stdout failures =================
>   memcheck/tests/leak-tree (stderr)
>   memcheck/tests/weirdioctl (stderr)
>   memcheck/tests/xml1 (stderr)

I thought I had this sorted out (r4334) and I can't repro this
failure on my fc4 installation. Could you send
memcheck/tests/x86/pushfpopf.stderr.out so I can see what happened?

J
|
|
From: Julian S. <js...@ac...> - 2005-08-07 09:15:41
|
> > none/tests/yield isn't stable in the sense that the outcome
> > can be scheduling dependent. This also happened on Tom's
> > RH73 machine a few nights back. I wonder what we should do
> > about it.
>
> Rewrite it so it is deterministic? :) I would if I understood how that
> test works... anyone want to volunteer?

The idea is to check that a thread doing P4 "rep nop" instructions
makes progress more slowly than one that doesn't. rep nop is a hint
for hyperthreaded P4s which tells the CPU to give priority to other
threads running on the same CPU at the same time. As far as I know the
hint is not observed or acted on at the OS level.

For Valgrind, when such a hint is executed, control returns to the
despatcher and hence to the scheduler, which does sys_yield for that
thread*, hence using the kernel to mimic the effect. Problem is that
this makes it impossible to reliably differentiate successful
simulation of this facility from arbitrary kernel-induced scheduling
weirdness.

J

* In fact there is more to it (see scheduler.c line 720 ish). Swapping
to another kernel thread was observed to create massive delays for
threads in spin-wait loops. Not surprisingly -- a rep nop on real
hardware might cause it to fall a few cycles behind, whereas a kernel
thread swap must cost in the thousands of cycles. So I kludged the
scheduler to instead keep running the same thread, but set its
remaining bb-quantum to 100 at most. The effect is that a thread now
has to execute 100 rep nops in short succession before a thread swap
actually occurs.
|
From: Tom H. <th...@cy...> - 2005-08-07 03:36:12
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2005-08-07 03:00:03 BST

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 159 tests, 7 stderr failures, 1 stdout failure =================
memcheck/tests/sigprocmask (stderr)
memcheck/tests/strchr (stderr)
memcheck/tests/vgtest_ume (stderr)
memcheck/tests/weirdioctl (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/tls (stdout)
From: <js...@ac...> - 2005-08-07 03:18:42
Nightly build on g5 ( YDL 4.0, ppc970 ) started at 2005-08-07 04:40:01 CEST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 154 tests, 90 stderr failures, 7 stdout failures =================
memcheck/tests/addressable (stderr)
memcheck/tests/badaddrvalue (stderr)
memcheck/tests/badfree-2trace (stderr)
memcheck/tests/badfree (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/badjump2 (stderr)
memcheck/tests/badloop (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/badrw (stderr)
memcheck/tests/brk (stderr)
memcheck/tests/brk2 (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/clientperm (stderr)
memcheck/tests/custom_alloc (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/doublefree (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/error_counts (stdout)
memcheck/tests/errs1 (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/exitprog (stderr)
memcheck/tests/fprw (stderr)
memcheck/tests/fwrite (stderr)
memcheck/tests/inits (stderr)
memcheck/tests/inline (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/malloc1 (stderr)
memcheck/tests/malloc2 (stderr)
memcheck/tests/malloc3 (stderr)
memcheck/tests/manuel1 (stderr)
memcheck/tests/manuel2 (stderr)
memcheck/tests/manuel3 (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/memalign2 (stderr)
memcheck/tests/memalign_test (stderr)
memcheck/tests/memcmptest (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/mismatches (stderr)
memcheck/tests/mmaptest (stderr)
memcheck/tests/nanoleak (stderr)
memcheck/tests/nanoleak_supp (stderr)
memcheck/tests/new_nothrow (stderr)
memcheck/tests/new_override (stderr)
memcheck/tests/null_socket (stderr)
memcheck/tests/overlap (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/post-syscall (stdout)
memcheck/tests/post-syscall (stderr)
memcheck/tests/realloc1 (stderr)
memcheck/tests/realloc2 (stderr)
memcheck/tests/realloc3 (stderr)
memcheck/tests/sigaltstack (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/signal2 (stderr)
memcheck/tests/sigprocmask (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/str_tester (stderr)
memcheck/tests/strchr (stderr)
memcheck/tests/supp1 (stderr)
memcheck/tests/supp2 (stderr)
memcheck/tests/suppfree (stderr)
memcheck/tests/toobig-allocs (stderr)
memcheck/tests/trivialleak (stderr)
memcheck/tests/vgtest_ume (stderr)
memcheck/tests/weirdioctl (stderr)
memcheck/tests/with-space (stderr)
memcheck/tests/writev (stderr)
memcheck/tests/xml1 (stderr)
memcheck/tests/zeropage (stderr)
cachegrind/tests/chdir (stderr)
cachegrind/tests/dlclose (stdout)
cachegrind/tests/dlclose (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/fdleak_creat (stderr)
none/tests/fdleak_dup (stderr)
none/tests/fdleak_dup2 (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/fdleak_ipv4 (stderr)
none/tests/fdleak_open (stderr)
none/tests/fdleak_pipe (stderr)
none/tests/fdleak_socketpair (stderr)
none/tests/manythreads (stdout)
none/tests/manythreads (stderr)
none/tests/pending (stdout)
none/tests/pending (stderr)
none/tests/pth_blockedsig (stderr)
none/tests/pth_cancel1 (stdout)
none/tests/pth_cancel1 (stderr)
none/tests/pth_cancel2 (stderr)
none/tests/thread-exits (stdout)
none/tests/thread-exits (stderr)
From: Tom H. <to...@co...> - 2005-08-07 02:40:48
Nightly build on dunsmere ( athlon, Fedora Core 4 ) started at 2005-08-07 03:30:04 BST

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 181 tests, 5 stderr failures, 0 stdout failures =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/weirdioctl (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 181 tests, 6 stderr failures, 0 stdout failures =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/weirdioctl (stderr)
memcheck/tests/x86/pushfpopf (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Sun Aug 7 03:35:24 2005
--- new.short Sun Aug 7 03:40:43 2005
***************
*** 8,13 ****
! == 181 tests, 6 stderr failures, 0 stdout failures =================
  memcheck/tests/leak-tree (stderr)
  memcheck/tests/weirdioctl (stderr)
- memcheck/tests/x86/pushfpopf (stderr)
  memcheck/tests/xml1 (stderr)
--- 8,12 ----
! == 181 tests, 5 stderr failures, 0 stdout failures =================
  memcheck/tests/leak-tree (stderr)
  memcheck/tests/weirdioctl (stderr)
  memcheck/tests/xml1 (stderr)
From: Tom H. <th...@cy...> - 2005-08-07 02:28:02
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2005-08-07 03:15:03 BST

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 180 tests, 14 stderr failures, 0 stdout failures =================
memcheck/tests/addressable (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 180 tests, 15 stderr failures, 1 stdout failure =================
memcheck/tests/addressable (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/x86/pushfpopf (stderr)
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)
none/tests/x86/yield (stdout)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Sun Aug 7 03:21:44 2005
--- new.short Sun Aug 7 03:27:55 2005
***************
*** 8,10 ****
! == 180 tests, 15 stderr failures, 1 stdout failure =================
  memcheck/tests/addressable (stderr)
--- 8,10 ----
! == 180 tests, 14 stderr failures, 0 stdout failures =================
  memcheck/tests/addressable (stderr)
***************
*** 21,26 ****
  memcheck/tests/stack_changes (stderr)
- memcheck/tests/x86/pushfpopf (stderr)
  none/tests/faultstatus (stderr)
  none/tests/x86/int (stderr)
- none/tests/x86/yield (stdout)
--- 21,24 ----
From: Tom H. <th...@cy...> - 2005-08-07 02:25:09
Nightly build on ginetta ( i686, Red Hat 8.0 ) started at 2005-08-07 03:10:07 BST

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 180 tests, 2 stderr failures, 0 stdout failures =================
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 180 tests, 3 stderr failures, 1 stdout failure =================
memcheck/tests/x86/pushfpopf (stderr)
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)
none/tests/x86/yield (stdout)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Sun Aug 7 03:18:52 2005
--- new.short Sun Aug 7 03:25:03 2005
***************
*** 8,14 ****
! == 180 tests, 3 stderr failures, 1 stdout failure =================
! memcheck/tests/x86/pushfpopf (stderr)
  none/tests/faultstatus (stderr)
  none/tests/x86/int (stderr)
- none/tests/x86/yield (stdout)
--- 8,12 ----
! == 180 tests, 2 stderr failures, 0 stdout failures =================
  none/tests/faultstatus (stderr)
  none/tests/x86/int (stderr)
From: Tom H. <th...@cy...> - 2005-08-07 02:20:23
Nightly build on dellow ( x86_64, Fedora Core 4 ) started at 2005-08-07 03:10:07 BST

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 159 tests, 6 stderr failures, 0 stdout failures =================
memcheck/tests/sigprocmask (stderr)
memcheck/tests/strchr (stderr)
memcheck/tests/vgtest_ume (stderr)
memcheck/tests/weirdioctl (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
From: Tom H. <th...@cy...> - 2005-08-07 02:16:48
Nightly build on aston ( x86_64, Fedora Core 3 ) started at 2005-08-07 03:05:10 BST

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 159 tests, 6 stderr failures, 0 stdout failures =================
memcheck/tests/sigprocmask (stderr)
memcheck/tests/strchr (stderr)
memcheck/tests/vgtest_ume (stderr)
memcheck/tests/weirdioctl (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)