From: Tom H. <to...@co...> - 2009-07-12 23:11:16

On 12/07/09 23:39, Nicholas Nethercote wrote:
> On Mon, Jul 6, 2009 at 7:48 AM, Julian Seward<js...@ac...> wrote:
>
>> In the diff there's many changes like this:
>>
>> Property changes on: exp-ptrcheck/h_intercepts.c
>> ___________________________________________________________________
>> Deleted: svn:mergeinfo
>
> I had lots of those when I did the Darwin merge... I just ignored
> them. Not sure what it means.

It's the new merge-tracking stuff in recent Subversion versions - that
property tracks which revisions have been merged, so that if you merge
again it knows it doesn't need to merge them again.

Tom

--
Tom Hughes (to...@co...)
http://www.compton.nu/
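For illustration, the property Tom describes can be inspected with `svn propget` (or dropped with `svn propdel` where it is just noise). The file path is from the diff above, but the branch name and revision range shown in the output are made up, not taken from the Valgrind repository:

```
$ svn propget svn:mergeinfo exp-ptrcheck/h_intercepts.c
/branches/SOMEBRANCH/exp-ptrcheck/h_intercepts.c:10001-10400

$ svn propdel svn:mergeinfo exp-ptrcheck/h_intercepts.c
property 'svn:mergeinfo' deleted from 'exp-ptrcheck/h_intercepts.c'.
```

Each line of the property records, for one merge source, which of its revisions have already been merged into this path.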
From: Nicholas N. <n.n...@gm...> - 2009-07-12 23:09:06

On Sun, Jul 12, 2009 at 4:05 AM, Bart Van Assche<bar...@gm...> wrote:
> Hello,
>
> Recently (r10156) an invocation of AC_PROG_OBJC has been added to
> Valgrind's configure.in. This macro is causing trouble on older Linux
> distros because older autoconf packages do not know this macro.

For which distro and autoconf version do you see it?

> However, as far as I can see none of Valgrind's makefiles are using
> the variables defined by this macro. Is it safe to remove this macro
> again from configure.in, or am I overlooking something?

I disabled it in r10436 and put in an explanatory comment. Thanks for
the report!

Nick
From: <sv...@va...> - 2009-07-12 23:07:22

Author: njn
Date: 2009-07-13 00:07:13 +0100 (Mon, 13 Jul 2009)
New Revision: 10436

Log:
Disable AC_PROG_OBJC -- it's currently not required and apparently
causes problems on older Linux distros.

Modified:
   trunk/configure.in

Modified: trunk/configure.in
===================================================================
--- trunk/configure.in	2009-07-12 22:58:26 UTC (rev 10435)
+++ trunk/configure.in	2009-07-12 23:07:13 UTC (rev 10436)
@@ -25,7 +25,18 @@
 AM_PROG_CC_C_O
 AC_PROG_CPP
 AC_PROG_CXX
-AC_PROG_OBJC
+# AC_PROG_OBJC apparently causes problems on older Linux distros.  If we
+# ever have any Objective-C code in the Valgrind code base (eg. most likely
+# as Darwin-specific tests) we'll need one of the following:
+# - put AC_PROG_OBJC in a Darwin-specific part of this file
+# - Use AC_PROG_OBJC here and up the minimum autoconf version
+# - Use the following, which is apparently equivalent:
+#     m4_ifdef([AC_PROG_OBJC],
+#       [AC_PROG_OBJC],
+#       [AC_CHECK_TOOL([OBJC], [gcc])
+#        AC_SUBST([OBJC])
+#        AC_SUBST([OBJCFLAGS])
+#       ])
 AC_PROG_RANLIB
 
 # If no AR variable was specified, look up the name of the archiver.  Otherwise
From: <sv...@va...> - 2009-07-12 22:58:34

Author: njn
Date: 2009-07-12 23:58:26 +0100 (Sun, 12 Jul 2009)
New Revision: 10435

Log:
Make atomic_incs.c build on Mac.

Modified:
   trunk/memcheck/tests/Makefile.am
   trunk/memcheck/tests/atomic_incs.c

Modified: trunk/memcheck/tests/Makefile.am
===================================================================
--- trunk/memcheck/tests/Makefile.am	2009-07-12 13:19:04 UTC (rev 10434)
+++ trunk/memcheck/tests/Makefile.am	2009-07-12 22:58:26 UTC (rev 10435)
@@ -235,6 +235,10 @@
 AM_CFLAGS   += $(AM_FLAG_M3264_PRI)
 AM_CXXFLAGS += $(AM_FLAG_M3264_PRI)
 
+if VGCONF_OS_IS_DARWIN
+atomic_incs_CFLAGS	= $(AM_CFLAGS) -mdynamic-no-pic
+endif
+
 deep_templates_SOURCES		= deep_templates.cpp
 deep_templates_CXXFLAGS		= $(AM_CFLAGS) -O -gstabs

Modified: trunk/memcheck/tests/atomic_incs.c
===================================================================
--- trunk/memcheck/tests/atomic_incs.c	2009-07-12 13:19:04 UTC (rev 10434)
+++ trunk/memcheck/tests/atomic_incs.c	2009-07-12 22:58:26 UTC (rev 10435)
@@ -11,7 +11,7 @@
 #include <assert.h>
 #include <unistd.h>
 #include <sys/wait.h>
-#include <sys/mman.h>
+#include "tests/sys_mman.h"
 
 #define NNN 3456987
From: Nicholas N. <n.n...@gm...> - 2009-07-12 22:46:25

On Mon, Jul 6, 2009 at 7:48 AM, Julian Seward<js...@ac...> wrote:
>
> * VG_(message)(Vg_UserMsg, ..args..) now has the abbreviated form
>   VG_(UMSG)(..args..); ditto VG_(DMSG) for Vg_DebugMsg and
>   VG_(EMSG) for Vg_DebugExtraMsg.  A couple of other useful
>   printf derivatives have been added to pub_tool_libcprint.h,
>   most particularly VG_(vcbprintf).

I know that these names exist because you converted the VG_UMSG/VG_DMSG
macros to functions. So VG_(umsg) and VG_(dmsg) would be better names
now.

> In the diff there's many changes like this:
>
> Property changes on: exp-ptrcheck/h_intercepts.c
> ___________________________________________________________________
> Deleted: svn:mergeinfo

I had lots of those when I did the Darwin merge... I just ignored
them. Not sure what it means.

Nick
From: Julian S. <js...@ac...> - 2009-07-12 13:21:40

I believe I've fixed all these failures now and so the next F7 test
should run clean. However, as we know, theory == practice only in
theory. Let's see.

J

On Sunday 12 July 2009, Tom Hughes wrote:
> Nightly build on lloyd (x86_64, Fedora 7)
> Started at 2009-07-12 03:05:07 BST
> Ended at 2009-07-12 03:50:59 BST
> Results unchanged from 24 hours ago
>
> Checking out valgrind source tree ... done
> Configuring valgrind ... done
> Building valgrind ... done
> Running regression tests ... failed
>
> Regression test results follow
>
> == 496 tests, 4 stderr failures, 1 stdout failure, 0 post failures ==
> memcheck/tests/x86-linux/scalar (stderr)
> memcheck/tests/x86-linux/scalar_exit_group (stderr)
> memcheck/tests/x86-linux/scalar_supp (stderr)
> none/tests/amd64/bug127521-64 (stdout)
> none/tests/amd64/bug127521-64 (stderr)
From: <sv...@va...> - 2009-07-12 13:19:15
Author: sewardj
Date: 2009-07-12 14:19:04 +0100 (Sun, 12 Jul 2009)
New Revision: 10434
Log:
Fix identification of sse3 on amd64s. Previously it was identifying
ssse3, not sse3 (sigh).
Modified:
trunk/coregrind/m_machine.c
Modified: trunk/coregrind/m_machine.c
===================================================================
--- trunk/coregrind/m_machine.c 2009-07-12 13:17:18 UTC (rev 10433)
+++ trunk/coregrind/m_machine.c 2009-07-12 13:19:04 UTC (rev 10434)
@@ -414,7 +414,7 @@
have_sse1 = (edx & (1<<25)) != 0; /* True => have sse insns */
have_sse2 = (edx & (1<<26)) != 0; /* True => have sse2 insns */
- have_sse3 = (ecx & (1<<9)) != 0; /* True => have sse3 insns */
+ have_sse3 = (ecx & (1<<0)) != 0; /* True => have sse3 insns */
/* cmpxchg8b is a minimum requirement now; if we don't have it we
must simply give up. But all CPUs since Pentium-I have it, so
From: <sv...@va...> - 2009-07-12 13:17:26
Author: sewardj
Date: 2009-07-12 14:17:18 +0100 (Sun, 12 Jul 2009)
New Revision: 10433
Log:
Only run none/tests/amd64/bug127521-64 on machines supporting cmpxchg16b.
Modified:
trunk/none/tests/amd64/bug127521-64.vgtest
trunk/tests/x86_amd64_features.c
Modified: trunk/none/tests/amd64/bug127521-64.vgtest
===================================================================
--- trunk/none/tests/amd64/bug127521-64.vgtest 2009-07-12 13:00:17 UTC (rev 10432)
+++ trunk/none/tests/amd64/bug127521-64.vgtest 2009-07-12 13:17:18 UTC (rev 10433)
@@ -1 +1,2 @@
prog: bug127521-64
+prereq: ../../../tests/x86_amd64_features amd64-cx16
Modified: trunk/tests/x86_amd64_features.c
===================================================================
--- trunk/tests/x86_amd64_features.c 2009-07-12 13:00:17 UTC (rev 10432)
+++ trunk/tests/x86_amd64_features.c 2009-07-12 13:17:18 UTC (rev 10433)
@@ -64,6 +64,9 @@
} else if ( strcmp( cpu, "amd64-ssse3" ) == 0 ) {
level = 1;
cmask = 1 << 9;
+ } else if ( strcmp( cpu, "amd64-cx16" ) == 0 ) {
+ level = 1;
+ cmask = 1 << 13;
#endif
} else {
return 2; // Unrecognised feature.
From: <sv...@va...> - 2009-07-12 13:01:28
Author: sewardj
Date: 2009-07-12 14:01:17 +0100 (Sun, 12 Jul 2009)
New Revision: 1908
Log:
Fix disassembly printing of cmpxchg insns (don't print "lock" twice).
Modified:
trunk/priv/guest_amd64_toIR.c
trunk/priv/guest_x86_toIR.c
Modified: trunk/priv/guest_amd64_toIR.c
===================================================================
--- trunk/priv/guest_amd64_toIR.c 2009-07-12 12:56:53 UTC (rev 1907)
+++ trunk/priv/guest_amd64_toIR.c 2009-07-12 13:01:17 UTC (rev 1908)
@@ -7605,8 +7605,8 @@
assign( cond8, unop(Iop_1Uto8, mk_amd64g_calculate_condition(AMD64CondZ)) );
assign( acc2, IRExpr_Mux0X(mkexpr(cond8), mkexpr(dest), mkexpr(acc)) );
putIRegRAX(size, mkexpr(acc2));
- DIP("lock cmpxchg%c %s,%s\n", nameISize(size),
- nameIRegG(size,pfx,rm), dis_buf);
+ DIP("cmpxchg%c %s,%s\n", nameISize(size),
+ nameIRegG(size,pfx,rm), dis_buf);
}
else vassert(0);
Modified: trunk/priv/guest_x86_toIR.c
===================================================================
--- trunk/priv/guest_x86_toIR.c 2009-07-12 12:56:53 UTC (rev 1907)
+++ trunk/priv/guest_x86_toIR.c 2009-07-12 13:01:17 UTC (rev 1908)
@@ -6529,8 +6529,8 @@
assign( cond8, unop(Iop_1Uto8, mk_x86g_calculate_condition(X86CondZ)) );
assign( acc2, IRExpr_Mux0X(mkexpr(cond8), mkexpr(dest), mkexpr(acc)) );
putIReg(size, R_EAX, mkexpr(acc2));
- DIP("lock cmpxchg%c %s,%s\n", nameISize(size),
- nameIReg(size,gregOfRM(rm)), dis_buf);
+ DIP("cmpxchg%c %s,%s\n", nameISize(size),
+ nameIReg(size,gregOfRM(rm)), dis_buf);
}
else vassert(0);
From: <sv...@va...> - 2009-07-12 13:00:25
Author: sewardj
Date: 2009-07-12 14:00:17 +0100 (Sun, 12 Jul 2009)
New Revision: 10432
Log:
Track vex r1907 (introduce Iop_CmpCas{EQ,NE}{8,16,32,64} and use them
for CAS-success? tests).
Detailed background and rationale in memcheck/mc_translate, comment
"COMMENT_ON_CasCmpEQ".
This commit changes the Memcheck instrumentation of IRCAS so as not to
do a definedness check on the success/failure indication. Also, by
being able to identify via the Iop_CasCmpEQ primitives any such checks
independently created by front ends, it can avoid instrumenting these
too.
All this is to avoid reporting new false positives observed on Fedora 7
(x86?) and openSUSE 10.2 (x86) following the recent merge of branches/DCAS.
Modified:
trunk/exp-ptrcheck/h_main.c
trunk/memcheck/mc_translate.c
Modified: trunk/exp-ptrcheck/h_main.c
===================================================================
--- trunk/exp-ptrcheck/h_main.c 2009-07-11 15:03:20 UTC (rev 10431)
+++ trunk/exp-ptrcheck/h_main.c 2009-07-12 13:00:17 UTC (rev 10432)
@@ -4315,6 +4315,13 @@
switch (st->tag) {
case Ist_CAS: {
+ /* In all these CAS cases, the did-we-succeed? comparison is
+ done using Iop_CasCmpEQ{8,16,32,64} rather than the plain
+ Iop_CmpEQ equivalents. This isn't actually necessary,
+ since the generated IR is not going to be subsequently
+ instrumented by Memcheck. But it's done for consistency.
+ See COMMENT_ON_CasCmpEQ in memcheck/mc_translate.c for
+ background/rationale. */
IRCAS* cas = st->Ist.CAS.details;
IRType elTy = typeOfIRExpr(pce->sb->tyenv, cas->expdLo);
if (cas->oldHi == IRTemp_INVALID) {
@@ -4327,11 +4334,11 @@
// 32 bit host translation scheme; 64-bit is analogous
// old# = check_load4_P(addr, addr#)
// old = CAS(addr:expd->new) [COPY]
- // success = CmpEQ32(old,expd)
+ // success = CasCmpEQ32(old,expd)
// if (success) do_shadow_store4_P(addr, new#)
IRTemp success;
Bool is64 = elTy == Ity_I64;
- IROp cmpEQ = is64 ? Iop_CmpEQ64 : Iop_CmpEQ32;
+ IROp cmpEQ = is64 ? Iop_CasCmpEQ64 : Iop_CasCmpEQ32;
void* r_fn = is64 ? &check_load8_P : &check_load4_P;
HChar* r_nm = is64 ? "check_load8_P" : "check_load4_P";
void* w_fn = is64 ? &do_shadow_store8_P : &do_shadow_store4_P;
@@ -4358,7 +4365,7 @@
// 8-bit translation scheme; 16-bit is analogous
// check_load1(addr, addr#)
// old = CAS(addr:expd->new) [COPY]
- // success = CmpEQ8(old,expd)
+ // success = CasCmpEQ8(old,expd)
// if (success) nonptr_or_unknown_range(addr, 1)
IRTemp success;
Bool is16 = elTy == Ity_I16;
@@ -4368,7 +4375,7 @@
IRExpr* expd = cas->expdLo;
void* h_fn = is16 ? &check_load2 : &check_load1;
HChar* h_nm = is16 ? "check_load2" : "check_load1";
- IROp cmpEQ = is16 ? Iop_CmpEQ16 : Iop_CmpEQ8;
+ IROp cmpEQ = is16 ? Iop_CasCmpEQ16 : Iop_CasCmpEQ8;
Int szB = is16 ? 2 : 1;
gen_dirty_v_WW( pce, NULL, h_fn, h_nm, addr, addrV );
stmt( 'C', pce, st );
@@ -4386,7 +4393,7 @@
// 8-bit translation scheme; 16/32-bit are analogous
// check_load1(addr, addr#)
// old = CAS(addr:expd->new) [COPY]
- // success = CmpEQ8(old,expd)
+ // success = CasCmpEQ8(old,expd)
// if (success) nonptr_or_unknown_range(addr, 1)
IRTemp success;
Bool is16 = elTy == Ity_I16;
@@ -4399,8 +4406,8 @@
: (is16 ? &check_load2 : &check_load1);
HChar* h_nm = is32 ? "check_load4"
: (is16 ? "check_load2" : "check_load1");
- IROp cmpEQ = is32 ? Iop_CmpEQ32
- : (is16 ? Iop_CmpEQ16 : Iop_CmpEQ8);
+ IROp cmpEQ = is32 ? Iop_CasCmpEQ32
+ : (is16 ? Iop_CasCmpEQ16 : Iop_CasCmpEQ8);
Int szB = is32 ? 4 : (is16 ? 2 : 1);
gen_dirty_v_WW( pce, NULL, h_fn, h_nm, addr, addrV );
stmt( 'C', pce, st );
@@ -4429,36 +4436,36 @@
// oldHi# = check_load4_P(addr+4, addr#)
// oldLo# = check_load4_P(addr+0, addr#)
// oldHi/Lo = DCAS(addr:expdHi/Lo->newHi/Lo) [COPY]
- // success = CmpEQ32(oldHi,expdHi) && CmpEQ32(oldLo,expdLo)
+ // success = CasCmpEQ32(oldHi,expdHi) && CasCmpEQ32(oldLo,expdLo)
// = ((oldHi ^ expdHi) | (oldLo ^ expdLo)) == 0
// if (success) do_shadow_store4_P(addr+4, newHi#)
// if (success) do_shadow_store4_P(addr+0, newLo#)
IRTemp diffHi, diffLo, diff, success, addrpp;
- Bool is64 = elTy == Ity_I64;
- void* r_fn = is64 ? &check_load8_P : &check_load4_P;
- HChar* r_nm = is64 ? "check_load8_P" : "check_load4_P";
- void* w_fn = is64 ? &do_shadow_store8_P
- : &do_shadow_store4_P;
- void* w_nm = is64 ? "do_shadow_store8_P"
- : "do_shadow_store4_P";
- IROp opADD = is64 ? Iop_Add64 : Iop_Add32;
- IROp opXOR = is64 ? Iop_Xor64 : Iop_Xor32;
- IROp opOR = is64 ? Iop_Or64 : Iop_Or32;
- IROp opCmpEQ = is64 ? Iop_CmpEQ64 : Iop_CmpEQ32;
- IRExpr* step = is64 ? mkU64(8) : mkU32(4);
- IRExpr* zero = is64 ? mkU64(0) : mkU32(0);
- IRExpr* addr = cas->addr;
- IRExpr* addrV = schemeEw_Atom(pce, addr);
- IRTemp oldLo = cas->oldLo;
- IRTemp oldLoV = newShadowTmp(pce, oldLo);
- IRTemp oldHi = cas->oldHi;
- IRTemp oldHiV = newShadowTmp(pce, oldHi);
- IRExpr* nyuLo = cas->dataLo;
- IRExpr* nyuLoV = schemeEw_Atom(pce, nyuLo);
- IRExpr* nyuHi = cas->dataHi;
- IRExpr* nyuHiV = schemeEw_Atom(pce, nyuHi);
- IRExpr* expdLo = cas->expdLo;
- IRExpr* expdHi = cas->expdHi;
+ Bool is64 = elTy == Ity_I64;
+ void* r_fn = is64 ? &check_load8_P : &check_load4_P;
+ HChar* r_nm = is64 ? "check_load8_P" : "check_load4_P";
+ void* w_fn = is64 ? &do_shadow_store8_P
+ : &do_shadow_store4_P;
+ void* w_nm = is64 ? "do_shadow_store8_P"
+ : "do_shadow_store4_P";
+ IROp opADD = is64 ? Iop_Add64 : Iop_Add32;
+ IROp opXOR = is64 ? Iop_Xor64 : Iop_Xor32;
+ IROp opOR = is64 ? Iop_Or64 : Iop_Or32;
+ IROp opCasCmpEQ = is64 ? Iop_CasCmpEQ64 : Iop_CasCmpEQ32;
+ IRExpr* step = is64 ? mkU64(8) : mkU32(4);
+ IRExpr* zero = is64 ? mkU64(0) : mkU32(0);
+ IRExpr* addr = cas->addr;
+ IRExpr* addrV = schemeEw_Atom(pce, addr);
+ IRTemp oldLo = cas->oldLo;
+ IRTemp oldLoV = newShadowTmp(pce, oldLo);
+ IRTemp oldHi = cas->oldHi;
+ IRTemp oldHiV = newShadowTmp(pce, oldHi);
+ IRExpr* nyuLo = cas->dataLo;
+ IRExpr* nyuLoV = schemeEw_Atom(pce, nyuLo);
+ IRExpr* nyuHi = cas->dataHi;
+ IRExpr* nyuHiV = schemeEw_Atom(pce, nyuHi);
+ IRExpr* expdLo = cas->expdLo;
+ IRExpr* expdHi = cas->expdHi;
tl_assert(elTy == Ity_I32 || elTy == Ity_I64);
tl_assert(pce->gWordTy == elTy);
addrpp = newTemp(pce, elTy, NonShad);
@@ -4483,7 +4490,7 @@
binop(opOR, mkexpr(diffHi), mkexpr(diffLo)));
success = newTemp(pce, Ity_I1, NonShad);
assign('I', pce, success,
- binop(opCmpEQ, mkexpr(diff), zero));
+ binop(opCasCmpEQ, mkexpr(diff), zero));
gen_dirty_v_WW( pce, mkexpr(success),
w_fn, w_nm, mkexpr(addrpp), nyuHiV );
gen_dirty_v_WW( pce, mkexpr(success),
@@ -4494,7 +4501,7 @@
if (pce->gWordTy == Ity_I64 && elTy == Ity_I32) {
// check_load8(addr, addr#)
// oldHi/Lo = DCAS(addr:expdHi/Lo->newHi/Lo) [COPY]
- // success = CmpEQ32(oldHi,expdHi) && CmpEQ32(oldLo,expdLo)
+ // success = CasCmpEQ32(oldHi,expdHi) && CasCmpEQ32(oldLo,expdLo)
// = ((oldHi ^ expdHi) | (oldLo ^ expdLo)) == 0
// if (success) nonptr_or_unknown_range(addr, 8)
IRTemp diffHi, diffLo, diff, success;
@@ -4518,7 +4525,7 @@
binop(Iop_Or32, mkexpr(diffHi), mkexpr(diffLo)));
success = newTemp(pce, Ity_I1, NonShad);
assign('I', pce, success,
- binop(Iop_CmpEQ32, mkexpr(diff), mkU32(0)));
+ binop(Iop_CasCmpEQ32, mkexpr(diff), mkU32(0)));
gen_call_nonptr_or_unknown_range( pce, mkexpr(success),
addr, mkU64(8) );
}
Modified: trunk/memcheck/mc_translate.c
===================================================================
--- trunk/memcheck/mc_translate.c 2009-07-11 15:03:20 UTC (rev 10431)
+++ trunk/memcheck/mc_translate.c 2009-07-12 13:00:17 UTC (rev 10432)
@@ -2390,7 +2390,6 @@
complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_V128, binop(op, vatom1, atom2));
-
/* I128-bit data-steering */
case Iop_64HLto128:
return assignNew('V', mce, Ity_I128, binop(op, vatom1, vatom2));
@@ -2549,6 +2548,14 @@
case Iop_CmpEQ8: case Iop_CmpNE8:
return mkPCastTo(mce, Ity_I1, mkUifU8(mce, vatom1,vatom2));
+ case Iop_CasCmpEQ8: case Iop_CasCmpNE8:
+ case Iop_CasCmpEQ16: case Iop_CasCmpNE16:
+ case Iop_CasCmpEQ32: case Iop_CasCmpNE32:
+ case Iop_CasCmpEQ64: case Iop_CasCmpNE64:
+ /* Just say these all produce a defined result, regardless
+ of their arguments. See COMMENT_ON_CasCmpEQ in this file. */
+ return assignNew('V', mce, Ity_I1, definedOfType(Ity_I1));
+
case Iop_Shl64: case Iop_Shr64: case Iop_Sar64:
return scalarShift( mce, Ity_I64, op, vatom1,vatom2, atom1,atom2 );
@@ -3488,23 +3495,15 @@
5. the CAS itself
- 6. complain if "expected == old" is undefined
+ 6. compute "expected == old". See COMMENT_ON_CasCmpEQ below.
- 7. if "expected == old"
+ 7. if "expected == old" (as computed by (6))
store data#,dataB to shadow memory
Note that 5 reads 'old' but 4 reads 'old#'. Similarly, 5 stores
'data' but 7 stores 'data#'. Hence it is possible for the
shadow data to be incorrectly checked and/or updated:
- * 6 could falsely complain if 4 read old# as undefined, but some
- other thread wrote a defined value to the location after 4 but
- before 5.
-
- * 6 could falsely not-complain if 4 read old# as defined, but
- some other thread wrote an undefined value to the location
- after 4 but before 5.
-
* 7 is at least gated correctly, since the 'expected == old'
condition is derived from outputs of 5. However, the shadow
write could happen too late: imagine after 5 we are
@@ -3528,6 +3527,83 @@
would't work properly since it only guards us against other
threads doing CASs on the same location, not against other
threads doing normal reads and writes.
+
+ ------------------------------------------------------------
+
+ COMMENT_ON_CasCmpEQ:
+
+ Note two things. Firstly, in the sequence above, we compute
+ "expected == old", but we don't check definedness of it. Why
+ not? Also, the x86 and amd64 front ends use
+ Iop_CmpCas{EQ,NE}{8,16,32,64} comparisons to make the equivalent
+ determination (expected == old ?) for themselves, and we also
+ don't check definedness for those primops; we just say that the
+ result is defined. Why? Details follow.
+
+ x86/amd64 contains various forms of locked insns:
+ * lock prefix before all basic arithmetic insn;
+ eg lock xorl %reg1,(%reg2)
+ * atomic exchange reg-mem
+ * compare-and-swaps
+
+ Rather than attempt to represent them all, which would be a
+ royal PITA, I used a result from Maurice Herlihy
+ (http://en.wikipedia.org/wiki/Maurice_Herlihy), in which he
+ demonstrates that compare-and-swap is a primitive more general
+ than the other two, and so can be used to represent all of them.
+ So the translation scheme for (eg) lock incl (%reg) is as
+ follows:
+
+ again:
+ old = * %reg
+ new = old + 1
+ atomically { if (* %reg == old) { * %reg = new } else { goto again } }
+
+ The "atomically" is the CAS bit. The scheme is always the same:
+ get old value from memory, compute new value, atomically stuff
+ new value back in memory iff the old value has not changed (iow,
+ no other thread modified it in the meantime). If it has changed
+ then we've been out-raced and we have to start over.
+
+ Now that's all very neat, but it has the bad side effect of
+ introducing an explicit equality test into the translation.
+ Consider the behaviour of said code on a memory location which
+ is uninitialised. We will wind up doing a comparison on
+ uninitialised data, and mc duly complains.
+
+ What's difficult about this is, the common case is that the
+ location is uncontended, and so we're usually comparing the same
+ value (* %reg) with itself. So we shouldn't complain even if it
+ is undefined. But mc doesn't know that.
+
+ My solution is to mark the == in the IR specially, so as to tell
+ mc that it almost certainly compares a value with itself, and we
+ should just regard the result as always defined. Rather than
+ add a bit to all IROps, I just cloned Iop_CmpEQ{8,16,32,64} into
+ Iop_CasCmpEQ{8,16,32,64} so as not to disturb anything else.
+
+ So there's always the question of, can this give a false
+ negative? eg, imagine that initially, * %reg is defined; and we
+ read that; but then in the gap between the read and the CAS, a
+ different thread writes an undefined (and different) value at
+ the location. Then the CAS in this thread will fail and we will
+ go back to "again:", but without knowing that the trip back
+ there was based on an undefined comparison. No matter; at least
+ the other thread won the race and the location is correctly
+ marked as undefined. What if it wrote an uninitialised version
+ of the same value that was there originally, though?
+
+ etc etc. Seems like there's a small corner case in which we
+ might lose the fact that something's defined -- we're out-raced
+ in between the "old = * reg" and the "atomically {", _and_ the
+ other thread is writing in an undefined version of what's
+ already there. Well, that seems pretty unlikely.
+
+ ---
+
+ If we ever need to reinstate it .. code which generates a
+ definedness test for "expected == old" was removed at r10432 of
+ this file.
*/
if (cas->oldHi == IRTemp_INVALID) {
do_shadow_CAS_single( mce, cas );
@@ -3542,9 +3618,8 @@
IRAtom *vdataLo = NULL, *bdataLo = NULL;
IRAtom *vexpdLo = NULL, *bexpdLo = NULL;
IRAtom *voldLo = NULL, *boldLo = NULL;
- IRAtom *expd_eq_old_V = NULL, *expd_eq_old_B = NULL;
- IRAtom *expd_eq_old = NULL;
- IROp opCmpEQ;
+ IRAtom *expd_eq_old = NULL;
+ IROp opCasCmpEQ;
Int elemSzB;
IRType elemTy;
Bool otrak = MC_(clo_mc_level) >= 3; /* a shorthand */
@@ -3556,10 +3631,10 @@
elemTy = typeOfIRExpr(mce->sb->tyenv, cas->expdLo);
switch (elemTy) {
- case Ity_I8: elemSzB = 1; opCmpEQ = Iop_CmpEQ8; break;
- case Ity_I16: elemSzB = 2; opCmpEQ = Iop_CmpEQ16; break;
- case Ity_I32: elemSzB = 4; opCmpEQ = Iop_CmpEQ32; break;
- case Ity_I64: elemSzB = 8; opCmpEQ = Iop_CmpEQ64; break;
+ case Ity_I8: elemSzB = 1; opCasCmpEQ = Iop_CasCmpEQ8; break;
+ case Ity_I16: elemSzB = 2; opCasCmpEQ = Iop_CasCmpEQ16; break;
+ case Ity_I32: elemSzB = 4; opCasCmpEQ = Iop_CasCmpEQ32; break;
+ case Ity_I64: elemSzB = 8; opCasCmpEQ = Iop_CasCmpEQ64; break;
default: tl_assert(0); /* IR defn disallows any other types */
}
@@ -3595,47 +3670,25 @@
mce,
cas->end, elemTy, cas->addr, 0/*Addr bias*/
));
+ bind_shadow_tmp_to_orig('V', mce, mkexpr(cas->oldLo), voldLo);
if (otrak) {
boldLo
= assignNew('B', mce, Ity_I32,
gen_load_b(mce, elemSzB, cas->addr, 0/*addr bias*/));
+ bind_shadow_tmp_to_orig('B', mce, mkexpr(cas->oldLo), boldLo);
}
/* 5. the CAS itself */
stmt( 'C', mce, IRStmt_CAS(cas) );
- /* 6. complain if "expected == old" is undefined */
- /* Doing this directly interacts in a complex way with origin
- tracking. Much easier to make up an expression tree and hand
- that off to expr2vbits_Binop. We will need the expression
- tree in any case in order to decide whether or not to do a
- shadow store. */
+ /* 6. compute "expected == old" */
+ /* See COMMENT_ON_CasCmpEQ in this file background/rationale. */
/* Note that 'C' is kinda faking it; it is indeed a non-shadow
tree, but it's not copied from the input block. */
expd_eq_old
= assignNew('C', mce, Ity_I1,
- binop(opCmpEQ, cas->expdLo, mkexpr(cas->oldLo)));
+ binop(opCasCmpEQ, cas->expdLo, mkexpr(cas->oldLo)));
- /* Compute into expd_eq_old_V the definedness for expd_eq_old.
- First we need to ensure that cas->oldLo's V-shadow is bound
- voldLo, since expr2vbits_Binop will generate a use of it. */
- bind_shadow_tmp_to_orig('V', mce, mkexpr(cas->oldLo), voldLo);
- expd_eq_old_V
- = expr2vbits_Binop( mce, opCmpEQ, cas->expdLo, mkexpr(cas->oldLo) );
- if (otrak) {
- bind_shadow_tmp_to_orig('B', mce, mkexpr(cas->oldLo), boldLo);
- expd_eq_old_B
- = gen_maxU32( mce, bexpdLo, boldLo );
- }
-
- /* Generate a complaint if expd_eq_old is undefined. As above,
- first force expd_eq_old's definedness to be bound to its
- V-shadow tmp. */
- bind_shadow_tmp_to_orig('V', mce, expd_eq_old, expd_eq_old_V);
- if (otrak)
- bind_shadow_tmp_to_orig('B', mce, expd_eq_old, expd_eq_old_B);
- complainIfUndefined(mce, expd_eq_old);
-
/* 7. if "expected == old"
store data# to shadow memory */
do_shadow_Store( mce, cas->end, cas->addr, 0/*bias*/,
@@ -3657,12 +3710,9 @@
IRAtom *vexpdLo = NULL, *bexpdLo = NULL;
IRAtom *voldHi = NULL, *boldHi = NULL;
IRAtom *voldLo = NULL, *boldLo = NULL;
- IRAtom *xHi = NULL, *xLo = NULL, *xHL = NULL;
- IRAtom *xHi_V = NULL, *xLo_V = NULL, *xHL_V = NULL;
- IRAtom *xHi_B = NULL, *xLo_B = NULL, *xHL_B = NULL;
- IRAtom *expd_eq_old_V = NULL, *expd_eq_old_B = NULL;
- IRAtom *expd_eq_old = NULL, *zero = NULL;
- IROp opCmpEQ, opOr, opXor;
+ IRAtom *xHi = NULL, *xLo = NULL, *xHL = NULL;
+ IRAtom *expd_eq_old = NULL, *zero = NULL;
+ IROp opCasCmpEQ, opOr, opXor;
Int elemSzB, memOffsLo, memOffsHi;
IRType elemTy;
Bool otrak = MC_(clo_mc_level) >= 3; /* a shorthand */
@@ -3675,19 +3725,19 @@
elemTy = typeOfIRExpr(mce->sb->tyenv, cas->expdLo);
switch (elemTy) {
case Ity_I8:
- opCmpEQ = Iop_CmpEQ8; opOr = Iop_Or8; opXor = Iop_Xor8;
+ opCasCmpEQ = Iop_CasCmpEQ8; opOr = Iop_Or8; opXor = Iop_Xor8;
elemSzB = 1; zero = mkU8(0);
break;
case Ity_I16:
- opCmpEQ = Iop_CmpEQ16; opOr = Iop_Or16; opXor = Iop_Xor16;
+ opCasCmpEQ = Iop_CasCmpEQ16; opOr = Iop_Or16; opXor = Iop_Xor16;
elemSzB = 2; zero = mkU16(0);
break;
case Ity_I32:
- opCmpEQ = Iop_CmpEQ32; opOr = Iop_Or32; opXor = Iop_Xor32;
+ opCasCmpEQ = Iop_CasCmpEQ32; opOr = Iop_Or32; opXor = Iop_Xor32;
elemSzB = 4; zero = mkU32(0);
break;
case Ity_I64:
- opCmpEQ = Iop_CmpEQ64; opOr = Iop_Or64; opXor = Iop_Xor64;
+ opCasCmpEQ = Iop_CasCmpEQ64; opOr = Iop_Or64; opXor = Iop_Xor64;
elemSzB = 8; zero = mkU64(0);
break;
default:
@@ -3755,6 +3805,8 @@
mce,
cas->end, elemTy, cas->addr, memOffsLo/*Addr bias*/
));
+ bind_shadow_tmp_to_orig('V', mce, mkexpr(cas->oldHi), voldHi);
+ bind_shadow_tmp_to_orig('V', mce, mkexpr(cas->oldLo), voldLo);
if (otrak) {
boldHi
= assignNew('B', mce, Ity_I32,
@@ -3764,17 +3816,15 @@
= assignNew('B', mce, Ity_I32,
gen_load_b(mce, elemSzB, cas->addr,
memOffsLo/*addr bias*/));
+ bind_shadow_tmp_to_orig('B', mce, mkexpr(cas->oldHi), boldHi);
+ bind_shadow_tmp_to_orig('B', mce, mkexpr(cas->oldLo), boldLo);
}
/* 5. the CAS itself */
stmt( 'C', mce, IRStmt_CAS(cas) );
- /* 6. complain if "expected == old" is undefined */
- /* Doing this directly interacts in a complex way with origin
- tracking. Much easier to make up an expression tree and hand
- that off to expr2vbits_Binop. We will need the expression
- tree in any case in order to decide whether or not to do a
- shadow store. */
+ /* 6. compute "expected == old" */
+ /* See COMMENT_ON_CasCmpEQ in this file background/rationale. */
/* Note that 'C' is kinda faking it; it is indeed a non-shadow
tree, but it's not copied from the input block. */
/*
@@ -3783,63 +3833,16 @@
xHL = xHi | xLo;
expd_eq_old = xHL == 0;
*/
-
- /* --- xHi = oldHi ^ expdHi --- */
xHi = assignNew('C', mce, elemTy,
binop(opXor, cas->expdHi, mkexpr(cas->oldHi)));
- bind_shadow_tmp_to_orig('V', mce, mkexpr(cas->oldHi), voldHi);
- xHi_V
- = expr2vbits_Binop( mce, opXor, cas->expdHi, mkexpr(cas->oldHi));
- if (otrak) {
- bind_shadow_tmp_to_orig('B', mce, mkexpr(cas->oldHi), boldHi);
- xHi_B = gen_maxU32( mce, bexpdHi, boldHi );
- }
-
- /* --- xLo = oldLo ^ expdLo --- */
xLo = assignNew('C', mce, elemTy,
binop(opXor, cas->expdLo, mkexpr(cas->oldLo)));
- bind_shadow_tmp_to_orig('V', mce, mkexpr(cas->oldLo), voldLo);
- xLo_V
- = expr2vbits_Binop( mce, opXor, cas->expdLo, mkexpr(cas->oldLo));
- if (otrak) {
- bind_shadow_tmp_to_orig('B', mce, mkexpr(cas->oldLo), boldLo);
- xLo_B = gen_maxU32( mce, bexpdLo, boldLo );
- }
-
- /* --- xHL = xHi | xLo --- */
xHL = assignNew('C', mce, elemTy,
binop(opOr, xHi, xLo));
- bind_shadow_tmp_to_orig('V', mce, xHi, xHi_V);
- bind_shadow_tmp_to_orig('V', mce, xLo, xLo_V);
- xHL_V
- = expr2vbits_Binop( mce, opOr, xHi, xLo );
- if (otrak) {
- bind_shadow_tmp_to_orig('B', mce, xHi, xHi_B);
- bind_shadow_tmp_to_orig('B', mce, xLo, xLo_B);
- xHL_B = gen_maxU32( mce, xHi_B, xLo_B );
- }
-
- /* --- expd_eq_old = xHL == 0 --- */
expd_eq_old
= assignNew('C', mce, Ity_I1,
- binop(opCmpEQ, xHL, zero));
- bind_shadow_tmp_to_orig('V', mce, xHL, xHL_V);
- expd_eq_old_V
- = expr2vbits_Binop( mce, opCmpEQ, xHL, zero);
- if (otrak) {
- expd_eq_old_B = xHL_B; /* since the zero literal isn't going to
- contribute any interesting origin */
- }
+ binop(opCasCmpEQ, xHL, zero));
- /* The backend's register allocator is probably on fire by now :-) */
- /* Generate a complaint if expd_eq_old is undefined. As above,
- first force expd_eq_old's definedness to be bound to its
- V-shadow tmp. */
- bind_shadow_tmp_to_orig('V', mce, expd_eq_old, expd_eq_old_V);
- if (otrak)
- bind_shadow_tmp_to_orig('B', mce, expd_eq_old, expd_eq_old_B);
- complainIfUndefined(mce, expd_eq_old);
-
/* 7. if "expected == old"
store data# to shadow memory */
do_shadow_Store( mce, cas->end, cas->addr, memOffsHi/*bias*/,
@@ -4682,9 +4685,23 @@
return gen_maxU32( mce, b1, gen_maxU32( mce, b2, b3 ) );
}
case Iex_Binop: {
- IRAtom* b1 = schemeE( mce, e->Iex.Binop.arg1 );
- IRAtom* b2 = schemeE( mce, e->Iex.Binop.arg2 );
- return gen_maxU32( mce, b1, b2 );
+ switch (e->Iex.Binop.op) {
+ case Iop_CasCmpEQ8: case Iop_CasCmpNE8:
+ case Iop_CasCmpEQ16: case Iop_CasCmpNE16:
+ case Iop_CasCmpEQ32: case Iop_CasCmpNE32:
+ case Iop_CasCmpEQ64: case Iop_CasCmpNE64:
+ /* Just say these all produce a defined result,
+ regardless of their arguments. See
+ COMMENT_ON_CasCmpEQ in this file. */
+ return mkU32(0);
+ default: {
+ IRAtom* b1 = schemeE( mce, e->Iex.Binop.arg1 );
+ IRAtom* b2 = schemeE( mce, e->Iex.Binop.arg2 );
+ return gen_maxU32( mce, b1, b2 );
+ }
+ }
+ tl_assert(0);
+ /*NOTREACHED*/
}
case Iex_Unop: {
IRAtom* b1 = schemeE( mce, e->Iex.Unop.arg );
From: <sv...@va...> - 2009-07-12 12:57:00
Author: sewardj
Date: 2009-07-12 13:56:53 +0100 (Sun, 12 Jul 2009)
New Revision: 1907
Log:
Add new integer comparison primitives Iop_CasCmp{EQ,NE}{8,16,32,64},
which are semantically identical to Iop_Cmp{EQ,NE}{8,16,32,64}. Use
these new primitives instead of the normal ones, in the tests
following IR-level compare-and-swap operations, which establish
whether or not the CAS succeeded. This is all for Memcheck's benefit,
as it really needs to be able to identify which comparisons are
CAS-success tests and which aren't. This is all described in great
detail in memcheck/mc_translate.c in the comment
"COMMENT_ON_CasCmpEQ".
Modified:
trunk/priv/guest_amd64_toIR.c
trunk/priv/guest_x86_toIR.c
trunk/priv/host_amd64_isel.c
trunk/priv/host_x86_isel.c
trunk/priv/ir_defs.c
trunk/pub/libvex_ir.h
Modified: trunk/priv/guest_amd64_toIR.c
===================================================================
--- trunk/priv/guest_amd64_toIR.c 2009-07-04 13:07:30 UTC (rev 1906)
+++ trunk/priv/guest_amd64_toIR.c 2009-07-12 12:56:53 UTC (rev 1907)
@@ -148,6 +148,13 @@
jump. It's not such a big deal with casLE since the side exit is
only taken if the CAS fails, that is, the location is contended,
which is relatively unlikely.
+
+ Note also, the test for CAS success vs failure is done using
+ Iop_CasCmp{EQ,NE}{8,16,32,64} rather than the ordinary
+ Iop_Cmp{EQ,NE} equivalents. This is so as to tell Memcheck that it
+ shouldn't definedness-check these comparisons. See
+ COMMENT_ON_CasCmpEQ in memcheck/mc_translate.c for
+ background/rationale.
*/
/* LOCK prefixed instructions. These are translated using IR-level
@@ -320,6 +327,7 @@
|| op8 == Iop_Or8 || op8 == Iop_And8 || op8 == Iop_Xor8
|| op8 == Iop_Shl8 || op8 == Iop_Shr8 || op8 == Iop_Sar8
|| op8 == Iop_CmpEQ8 || op8 == Iop_CmpNE8
+ || op8 == Iop_CasCmpNE8
|| op8 == Iop_Not8 );
switch (ty) {
case Ity_I8: return 0 +op8;
@@ -1436,7 +1444,8 @@
NULL, mkexpr(expTmp), NULL, newVal );
stmt( IRStmt_CAS(cas) );
stmt( IRStmt_Exit(
- binop( mkSizedOp(tyE,Iop_CmpNE8), mkexpr(oldTmp), mkexpr(expTmp) ),
+ binop( mkSizedOp(tyE,Iop_CasCmpNE8),
+ mkexpr(oldTmp), mkexpr(expTmp) ),
Ijk_Boring, /*Ijk_NoRedir*/
IRConst_U64( restart_point )
));
@@ -15499,22 +15508,22 @@
}
case 0xC7: { /* CMPXCHG8B Ev, CMPXCHG16B Ev */
- IRType elemTy = sz==4 ? Ity_I32 : Ity_I64;
- IRTemp expdHi = newTemp(elemTy);
- IRTemp expdLo = newTemp(elemTy);
- IRTemp dataHi = newTemp(elemTy);
- IRTemp dataLo = newTemp(elemTy);
- IRTemp oldHi = newTemp(elemTy);
- IRTemp oldLo = newTemp(elemTy);
- IRTemp flags_old = newTemp(Ity_I64);
- IRTemp flags_new = newTemp(Ity_I64);
- IRTemp success = newTemp(Ity_I1);
- IROp opOR = sz==4 ? Iop_Or32 : Iop_Or64;
- IROp opXOR = sz==4 ? Iop_Xor32 : Iop_Xor64;
- IROp opCmpEQ = sz==4 ? Iop_CmpEQ32 : Iop_CmpEQ64;
- IRExpr* zero = sz==4 ? mkU32(0) : mkU64(0);
- IRTemp expdHi64 = newTemp(Ity_I64);
- IRTemp expdLo64 = newTemp(Ity_I64);
+ IRType elemTy = sz==4 ? Ity_I32 : Ity_I64;
+ IRTemp expdHi = newTemp(elemTy);
+ IRTemp expdLo = newTemp(elemTy);
+ IRTemp dataHi = newTemp(elemTy);
+ IRTemp dataLo = newTemp(elemTy);
+ IRTemp oldHi = newTemp(elemTy);
+ IRTemp oldLo = newTemp(elemTy);
+ IRTemp flags_old = newTemp(Ity_I64);
+ IRTemp flags_new = newTemp(Ity_I64);
+ IRTemp success = newTemp(Ity_I1);
+ IROp opOR = sz==4 ? Iop_Or32 : Iop_Or64;
+ IROp opXOR = sz==4 ? Iop_Xor32 : Iop_Xor64;
+ IROp opCasCmpEQ = sz==4 ? Iop_CasCmpEQ32 : Iop_CasCmpEQ64;
+ IRExpr* zero = sz==4 ? mkU32(0) : mkU64(0);
+ IRTemp expdHi64 = newTemp(Ity_I64);
+ IRTemp expdLo64 = newTemp(Ity_I64);
/* Translate this using a DCAS, even if there is no LOCK
prefix. Life is too short to bother with generating two
@@ -15562,7 +15571,7 @@
/* success when oldHi:oldLo == expdHi:expdLo */
assign( success,
- binop(opCmpEQ,
+ binop(opCasCmpEQ,
binop(opOR,
binop(opXOR, mkexpr(oldHi), mkexpr(expdHi)),
binop(opXOR, mkexpr(oldLo), mkexpr(expdLo))
Modified: trunk/priv/guest_x86_toIR.c
===================================================================
--- trunk/priv/guest_x86_toIR.c 2009-07-04 13:07:30 UTC (rev 1906)
+++ trunk/priv/guest_x86_toIR.c 2009-07-12 12:56:53 UTC (rev 1907)
@@ -118,6 +118,13 @@
jump. It's not such a big deal with casLE since the side exit is
only taken if the CAS fails, that is, the location is contended,
which is relatively unlikely.
+
+ Note also, the test for CAS success vs failure is done using
+ Iop_CasCmp{EQ,NE}{8,16,32,64} rather than the ordinary
+ Iop_Cmp{EQ,NE} equivalents. This is so as to tell Memcheck that it
+ shouldn't definedness-check these comparisons. See
+ COMMENT_ON_CasCmpEQ in memcheck/mc_translate.c for
+ background/rationale.
*/
/* Performance holes:
@@ -715,6 +722,7 @@
|| op8 == Iop_Or8 || op8 == Iop_And8 || op8 == Iop_Xor8
|| op8 == Iop_Shl8 || op8 == Iop_Shr8 || op8 == Iop_Sar8
|| op8 == Iop_CmpEQ8 || op8 == Iop_CmpNE8
+ || op8 == Iop_CasCmpNE8
|| op8 == Iop_Not8);
adj = ty==Ity_I8 ? 0 : (ty==Ity_I16 ? 1 : 2);
return adj + op8;
@@ -765,7 +773,8 @@
NULL, mkexpr(expTmp), NULL, newVal );
stmt( IRStmt_CAS(cas) );
stmt( IRStmt_Exit(
- binop( mkSizedOp(tyE,Iop_CmpNE8), mkexpr(oldTmp), mkexpr(expTmp) ),
+ binop( mkSizedOp(tyE,Iop_CasCmpNE8),
+ mkexpr(oldTmp), mkexpr(expTmp) ),
Ijk_Boring, /*Ijk_NoRedir*/
IRConst_U32( restart_point )
));
@@ -13763,11 +13772,8 @@
/* ------------------------ XCHG ----------------------- */
/* XCHG reg,mem automatically asserts LOCK# even without a LOCK
- prefix. Therefore, surround it with a IRStmt_MBE(Imbe_BusLock)
- and IRStmt_MBE(Imbe_BusUnlock) pair. But be careful; if it is
- used with an explicit LOCK prefix, we don't want to end up with
- two IRStmt_MBE(Imbe_BusLock)s -- one made here and one made by
- the generic LOCK logic at the top of disInstr. */
+ prefix; hence it must be translated with an IRCAS (at least, the
+ memory variant). */
case 0x86: /* XCHG Gb,Eb */
sz = 1;
/* Fall through ... */
@@ -14216,7 +14222,7 @@
/* success when oldHi:oldLo == expdHi:expdLo */
assign( success,
- binop(Iop_CmpEQ32,
+ binop(Iop_CasCmpEQ32,
binop(Iop_Or32,
binop(Iop_Xor32, mkexpr(oldHi), mkexpr(expdHi)),
binop(Iop_Xor32, mkexpr(oldLo), mkexpr(expdLo))
Modified: trunk/priv/host_amd64_isel.c
===================================================================
--- trunk/priv/host_amd64_isel.c 2009-07-04 13:07:30 UTC (rev 1906)
+++ trunk/priv/host_amd64_isel.c 2009-07-12 12:56:53 UTC (rev 1907)
@@ -2197,7 +2197,9 @@
/* CmpEQ8 / CmpNE8 */
if (e->tag == Iex_Binop
&& (e->Iex.Binop.op == Iop_CmpEQ8
- || e->Iex.Binop.op == Iop_CmpNE8)) {
+ || e->Iex.Binop.op == Iop_CmpNE8
+ || e->Iex.Binop.op == Iop_CasCmpEQ8
+ || e->Iex.Binop.op == Iop_CasCmpNE8)) {
HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
AMD64RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
HReg r = newVRegI(env);
@@ -2205,8 +2207,8 @@
addInstr(env, AMD64Instr_Alu64R(Aalu_XOR,rmi2,r));
addInstr(env, AMD64Instr_Alu64R(Aalu_AND,AMD64RMI_Imm(0xFF),r));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ8: return Acc_Z;
- case Iop_CmpNE8: return Acc_NZ;
+ case Iop_CmpEQ8: case Iop_CasCmpEQ8: return Acc_Z;
+ case Iop_CmpNE8: case Iop_CasCmpNE8: return Acc_NZ;
default: vpanic("iselCondCode(amd64): CmpXX8");
}
}
@@ -2214,7 +2216,9 @@
/* CmpEQ16 / CmpNE16 */
if (e->tag == Iex_Binop
&& (e->Iex.Binop.op == Iop_CmpEQ16
- || e->Iex.Binop.op == Iop_CmpNE16)) {
+ || e->Iex.Binop.op == Iop_CmpNE16
+ || e->Iex.Binop.op == Iop_CasCmpEQ16
+ || e->Iex.Binop.op == Iop_CasCmpNE16)) {
HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
AMD64RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
HReg r = newVRegI(env);
@@ -2222,8 +2226,8 @@
addInstr(env, AMD64Instr_Alu64R(Aalu_XOR,rmi2,r));
addInstr(env, AMD64Instr_Alu64R(Aalu_AND,AMD64RMI_Imm(0xFFFF),r));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ16: return Acc_Z;
- case Iop_CmpNE16: return Acc_NZ;
+ case Iop_CmpEQ16: case Iop_CasCmpEQ16: return Acc_Z;
+ case Iop_CmpNE16: case Iop_CasCmpNE16: return Acc_NZ;
default: vpanic("iselCondCode(amd64): CmpXX16");
}
}
@@ -2231,7 +2235,9 @@
/* CmpEQ32 / CmpNE32 */
if (e->tag == Iex_Binop
&& (e->Iex.Binop.op == Iop_CmpEQ32
- || e->Iex.Binop.op == Iop_CmpNE32)) {
+ || e->Iex.Binop.op == Iop_CmpNE32
+ || e->Iex.Binop.op == Iop_CasCmpEQ32
+ || e->Iex.Binop.op == Iop_CasCmpNE32)) {
HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
AMD64RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
HReg r = newVRegI(env);
@@ -2239,8 +2245,8 @@
addInstr(env, AMD64Instr_Alu64R(Aalu_XOR,rmi2,r));
addInstr(env, AMD64Instr_Sh64(Ash_SHL, 32, r));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ32: return Acc_Z;
- case Iop_CmpNE32: return Acc_NZ;
+ case Iop_CmpEQ32: case Iop_CasCmpEQ32: return Acc_Z;
+ case Iop_CmpNE32: case Iop_CasCmpNE32: return Acc_NZ;
default: vpanic("iselCondCode(amd64): CmpXX32");
}
}
@@ -2253,13 +2259,14 @@
|| e->Iex.Binop.op == Iop_CmpLT64U
|| e->Iex.Binop.op == Iop_CmpLE64S
|| e->Iex.Binop.op == Iop_CmpLE64U
- )) {
+ || e->Iex.Binop.op == Iop_CasCmpEQ64
+ || e->Iex.Binop.op == Iop_CasCmpNE64)) {
HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
AMD64RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
addInstr(env, AMD64Instr_Alu64R(Aalu_CMP,rmi2,r1));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ64: return Acc_Z;
- case Iop_CmpNE64: return Acc_NZ;
+ case Iop_CmpEQ64: case Iop_CasCmpEQ64: return Acc_Z;
+ case Iop_CmpNE64: case Iop_CasCmpNE64: return Acc_NZ;
case Iop_CmpLT64S: return Acc_L;
case Iop_CmpLT64U: return Acc_B;
case Iop_CmpLE64S: return Acc_LE;
Modified: trunk/priv/host_x86_isel.c
===================================================================
--- trunk/priv/host_x86_isel.c 2009-07-04 13:07:30 UTC (rev 1906)
+++ trunk/priv/host_x86_isel.c 2009-07-12 12:56:53 UTC (rev 1907)
@@ -1802,13 +1802,15 @@
/* CmpEQ8 / CmpNE8 */
if (e->tag == Iex_Binop
&& (e->Iex.Binop.op == Iop_CmpEQ8
- || e->Iex.Binop.op == Iop_CmpNE8)) {
+ || e->Iex.Binop.op == Iop_CmpNE8
+ || e->Iex.Binop.op == Iop_CasCmpEQ8
+ || e->Iex.Binop.op == Iop_CasCmpNE8)) {
if (isZeroU8(e->Iex.Binop.arg2)) {
HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
addInstr(env, X86Instr_Test32(0xFF,X86RM_Reg(r1)));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ8: return Xcc_Z;
- case Iop_CmpNE8: return Xcc_NZ;
+ case Iop_CmpEQ8: case Iop_CasCmpEQ8: return Xcc_Z;
+ case Iop_CmpNE8: case Iop_CasCmpNE8: return Xcc_NZ;
default: vpanic("iselCondCode(x86): CmpXX8(expr,0:I8)");
}
} else {
@@ -1819,8 +1821,8 @@
addInstr(env, X86Instr_Alu32R(Xalu_XOR,rmi2,r));
addInstr(env, X86Instr_Test32(0xFF,X86RM_Reg(r)));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ8: return Xcc_Z;
- case Iop_CmpNE8: return Xcc_NZ;
+ case Iop_CmpEQ8: case Iop_CasCmpEQ8: return Xcc_Z;
+ case Iop_CmpNE8: case Iop_CasCmpNE8: return Xcc_NZ;
default: vpanic("iselCondCode(x86): CmpXX8(expr,expr)");
}
}
@@ -1829,7 +1831,9 @@
/* CmpEQ16 / CmpNE16 */
if (e->tag == Iex_Binop
&& (e->Iex.Binop.op == Iop_CmpEQ16
- || e->Iex.Binop.op == Iop_CmpNE16)) {
+ || e->Iex.Binop.op == Iop_CmpNE16
+ || e->Iex.Binop.op == Iop_CasCmpEQ16
+ || e->Iex.Binop.op == Iop_CasCmpNE16)) {
HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
X86RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
HReg r = newVRegI(env);
@@ -1837,8 +1841,8 @@
addInstr(env, X86Instr_Alu32R(Xalu_XOR,rmi2,r));
addInstr(env, X86Instr_Test32(0xFFFF,X86RM_Reg(r)));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ16: return Xcc_Z;
- case Iop_CmpNE16: return Xcc_NZ;
+ case Iop_CmpEQ16: case Iop_CasCmpEQ16: return Xcc_Z;
+ case Iop_CmpNE16: case Iop_CasCmpNE16: return Xcc_NZ;
default: vpanic("iselCondCode(x86): CmpXX16");
}
}
@@ -1850,13 +1854,15 @@
|| e->Iex.Binop.op == Iop_CmpLT32S
|| e->Iex.Binop.op == Iop_CmpLT32U
|| e->Iex.Binop.op == Iop_CmpLE32S
- || e->Iex.Binop.op == Iop_CmpLE32U)) {
+ || e->Iex.Binop.op == Iop_CmpLE32U
+ || e->Iex.Binop.op == Iop_CasCmpEQ32
+ || e->Iex.Binop.op == Iop_CasCmpNE32)) {
HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
X86RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
addInstr(env, X86Instr_Alu32R(Xalu_CMP,rmi2,r1));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ32: return Xcc_Z;
- case Iop_CmpNE32: return Xcc_NZ;
+ case Iop_CmpEQ32: case Iop_CasCmpEQ32: return Xcc_Z;
+ case Iop_CmpNE32: case Iop_CasCmpNE32: return Xcc_NZ;
case Iop_CmpLT32S: return Xcc_L;
case Iop_CmpLT32U: return Xcc_B;
case Iop_CmpLE32S: return Xcc_LE;
@@ -1880,8 +1886,8 @@
addInstr(env, X86Instr_Alu32R(Xalu_XOR,X86RMI_Reg(lo2), tLo));
addInstr(env, X86Instr_Alu32R(Xalu_OR,X86RMI_Reg(tHi), tLo));
switch (e->Iex.Binop.op) {
- case Iop_CmpNE64: return Xcc_NZ;
- case Iop_CmpEQ64: return Xcc_Z;
+ case Iop_CmpNE64: return Xcc_NZ;
+ case Iop_CmpEQ64: return Xcc_Z;
default: vpanic("iselCondCode(x86): CmpXX64");
}
}
Modified: trunk/priv/ir_defs.c
===================================================================
--- trunk/priv/ir_defs.c 2009-07-04 13:07:30 UTC (rev 1906)
+++ trunk/priv/ir_defs.c 2009-07-12 12:56:53 UTC (rev 1907)
@@ -144,6 +144,10 @@
str = "CmpEQ"; base = Iop_CmpEQ8; break;
case Iop_CmpNE8 ... Iop_CmpNE64:
str = "CmpNE"; base = Iop_CmpNE8; break;
+ case Iop_CasCmpEQ8 ... Iop_CasCmpEQ64:
+ str = "CasCmpEQ"; base = Iop_CasCmpEQ8; break;
+ case Iop_CasCmpNE8 ... Iop_CasCmpNE64:
+ str = "CasCmpNE"; base = Iop_CasCmpNE8; break;
case Iop_Not8 ... Iop_Not64:
str = "Not"; base = Iop_Not8; break;
/* other cases must explicitly "return;" */
@@ -574,7 +578,8 @@
default: vpanic("ppIROp(1)");
}
-
+
+ vassert(str);
switch (op - base) {
case 0: vex_printf("%s",str); vex_printf("8"); break;
case 1: vex_printf("%s",str); vex_printf("16"); break;
@@ -1642,14 +1647,18 @@
UNARY(Ity_I64, Ity_I64);
case Iop_CmpEQ8: case Iop_CmpNE8:
+ case Iop_CasCmpEQ8: case Iop_CasCmpNE8:
COMPARISON(Ity_I8);
case Iop_CmpEQ16: case Iop_CmpNE16:
+ case Iop_CasCmpEQ16: case Iop_CasCmpNE16:
COMPARISON(Ity_I16);
case Iop_CmpEQ32: case Iop_CmpNE32:
+ case Iop_CasCmpEQ32: case Iop_CasCmpNE32:
case Iop_CmpLT32S: case Iop_CmpLE32S:
case Iop_CmpLT32U: case Iop_CmpLE32U:
COMPARISON(Ity_I32);
case Iop_CmpEQ64: case Iop_CmpNE64:
+ case Iop_CasCmpEQ64: case Iop_CasCmpNE64:
case Iop_CmpLT64S: case Iop_CmpLE64S:
case Iop_CmpLT64U: case Iop_CmpLE64U:
COMPARISON(Ity_I64);
Modified: trunk/pub/libvex_ir.h
===================================================================
--- trunk/pub/libvex_ir.h 2009-07-04 13:07:30 UTC (rev 1906)
+++ trunk/pub/libvex_ir.h 2009-07-12 12:56:53 UTC (rev 1907)
@@ -423,6 +423,14 @@
/* Tags for unary ops */
Iop_Not8, Iop_Not16, Iop_Not32, Iop_Not64,
+ /* Exactly like CmpEQ8/16/32/64, but carrying the additional
+ hint that these compute the success/failure of a CAS
+ operation, and hence are almost certainly applied to two
+ copies of the same value, which in turn has implications for
+ Memcheck's instrumentation. */
+ Iop_CasCmpEQ8, Iop_CasCmpEQ16, Iop_CasCmpEQ32, Iop_CasCmpEQ64,
+ Iop_CasCmpNE8, Iop_CasCmpNE16, Iop_CasCmpNE32, Iop_CasCmpNE64,
+
/* -- Ordering not important after here. -- */
/* Widening multiplies */
From: Bart V. A. <bar...@gm...> - 2009-07-12 07:48:51
Nightly build on georgia-tech-cellbuzz-native ( cellbuzz, ppc64, Fedora 7, native )
Started at 2009-07-12 02:00:05 EDT
Ended at 2009-07-12 03:48:32 EDT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... done
Regression test results follow
== 422 tests, 41 stderr failures, 13 stdout failures, 0 post failures ==
memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cases-full (stderr)
memcheck/tests/leak-cases-summary (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/origin5-bz2 (stderr)
memcheck/tests/varinfo1 (stderr)
memcheck/tests/varinfo2 (stderr)
memcheck/tests/varinfo3 (stderr)
memcheck/tests/varinfo4 (stderr)
memcheck/tests/varinfo5 (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/wrap8 (stderr)
none/tests/empty-exe (stderr)
none/tests/linux/mremap (stderr)
none/tests/linux/mremap2 (stdout)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-vmx (stdout)
none/tests/ppc32/round (stdout)
none/tests/ppc32/test_gx (stdout)
none/tests/ppc64/jm-fp (stdout)
none/tests/ppc64/jm-vmx (stdout)
none/tests/ppc64/round (stdout)
none/tests/shell (stdout)
none/tests/shell (stderr)
none/tests/shell_valid1 (stderr)
none/tests/shell_valid2 (stderr)
none/tests/shell_valid3 (stderr)
none/tests/shell_zerolength (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
exp-ptrcheck/tests/bad_percentify (stdout)
exp-ptrcheck/tests/bad_percentify (stderr)
exp-ptrcheck/tests/base (stderr)
exp-ptrcheck/tests/ccc (stderr)
exp-ptrcheck/tests/fp (stderr)
exp-ptrcheck/tests/globalerr (stderr)
exp-ptrcheck/tests/hackedbz2 (stdout)
exp-ptrcheck/tests/hackedbz2 (stderr)
exp-ptrcheck/tests/hp_bounds (stderr)
exp-ptrcheck/tests/hp_dangle (stderr)
exp-ptrcheck/tests/justify (stderr)
exp-ptrcheck/tests/partial_bad (stderr)
exp-ptrcheck/tests/partial_good (stderr)
exp-ptrcheck/tests/preen_invars (stdout)
exp-ptrcheck/tests/preen_invars (stderr)
exp-ptrcheck/tests/pth_create (stderr)
exp-ptrcheck/tests/pth_specific (stderr)
exp-ptrcheck/tests/realloc (stderr)
exp-ptrcheck/tests/stackerr (stderr)
exp-ptrcheck/tests/strcpy (stderr)
exp-ptrcheck/tests/supp (stderr)
exp-ptrcheck/tests/tricky (stderr)
exp-ptrcheck/tests/unaligned (stderr)
exp-ptrcheck/tests/zero (stderr)
From: Tom H. <th...@cy...> - 2009-07-12 02:51:16
Nightly build on lloyd ( x86_64, Fedora 7 )
Started at 2009-07-12 03:05:07 BST
Ended at 2009-07-12 03:50:59 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 496 tests, 4 stderr failures, 1 stdout failure, 0 post failures ==
memcheck/tests/x86-linux/scalar (stderr)
memcheck/tests/x86-linux/scalar_exit_group (stderr)
memcheck/tests/x86-linux/scalar_supp (stderr)
none/tests/amd64/bug127521-64 (stdout)
none/tests/amd64/bug127521-64 (stderr)
From: Tom H. <th...@cy...> - 2009-07-12 02:31:35
|
Nightly build on mg ( x86_64, Fedora 9 )
Started at 2009-07-12 03:10:05 BST
Ended at 2009-07-12 03:31:21 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 502 tests, 0 stderr failures, 1 stdout failure, 0 post failures ==
none/tests/linux/mremap2 (stdout)