From: <sv...@va...> - 2013-01-17 23:57:49
philippe 2013-01-17 23:57:35 +0000 (Thu, 17 Jan 2013)
New Revision: 13237
Log:
Change the size of the hash table used to cache IP -> debuginfo to a prime nr
This change is based on rumours/legends/oral transmission of experience/...
that prime nrs are good to use for hash table size :).
If someone has a (short) explanation about why this is useful,
that will be welcome.
Modified files:
trunk/coregrind/m_debuginfo/debuginfo.c
Modified: trunk/coregrind/m_debuginfo/debuginfo.c (+2 -1)
===================================================================
--- trunk/coregrind/m_debuginfo/debuginfo.c 2013-01-17 14:24:35 +00:00 (rev 13236)
+++ trunk/coregrind/m_debuginfo/debuginfo.c 2013-01-17 23:57:35 +00:00 (rev 13237)
@@ -2245,7 +2245,8 @@
records are added all at once, when the debuginfo for an object is
read, and is not changed ever thereafter. */
-#define N_CFSI_CACHE 511
+// Prime number, giving about 3K cache on 32 bits, 6K cache on 64 bits.
+#define N_CFSI_CACHE 509
typedef
struct { Addr ip; DebugInfo* di; Word ix; }
From: Philippe W. <phi...@sk...> - 2013-01-17 20:19:04
On Thu, 2013-01-17 at 20:02 +0530, Muthumeenal Natarajan wrote:
> Hi Rich,
>
> Sure, thanks for your help.
> Pls do let us know if this is something to do with C++ (when using static
> variables, if we remove the static variables from the class, it gets
> postponed to another class down the line which has static variables...).
> If so, is there a workaround or patch? We can't remove these static
> variables from these classes, so pls help us to run valgrind with our code.
>
> Also, if this is a problem with C++, you must already have received such
> error reports. We remember seeing some blogs with patches for the same,
> but we couldn't download them, so pls let us know if there are patches on
> the official valgrind site which can be used.

The previous logs you have captured have quite clearly shown that the
problem is *not* a lack of memory.

The advice of Rich to create a small set of sources allowing others to
reproduce the problem is a good way to work.

If that is not possible (closed sources) or too difficult, I suggest you
then follow my previous advice: have two GDBs, one debugging a native
(working) execution, and another GDB debugging the execution under
Valgrind. Of course, do not let it run till it crashes. Instead, put
breakpoints before the crash (for example at NewDelete.cpp:64 or at
LtePdcpConstants.cpp:7) and then use next/step/... in parallel in both
GDBs. At some point the behaviour will diverge, which might give some
idea of what is going wrong.

Philippe
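The two-GDB setup Philippe describes relies on Valgrind's embedded gdbserver. A rough session transcript (the binary name `./myapp` and the breakpoint locations are just the ones from this thread; exact behaviour depends on your Valgrind and GDB versions):

```shell
# Terminal 1: native (working) execution under plain GDB.
gdb ./myapp
# (gdb) break NewDelete.cpp:64
# (gdb) run

# Terminal 2: the same binary under Valgrind, with its embedded
# gdbserver enabled and stopping before the first instruction.
valgrind --vgdb=yes --vgdb-error=0 ./myapp

# Terminal 3: a second GDB attached to the Valgrind-controlled process.
gdb ./myapp
# (gdb) target remote | vgdb
# (gdb) break NewDelete.cpp:64
# (gdb) continue

# Now single-step (next/step) both GDBs in parallel and watch for the
# first point where their behaviour diverges.
```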
From: galap <art...@gm...> - 2013-01-17 15:49:00
My bad. svn port was blocked. Sorry for the panic.

--
View this message in context: http://valgrind.10908.n7.nabble.com/svn-trouble-tp23414p43721.html
Sent from the Valgrind - Dev mailing list archive at Nabble.com.

From: galap <art...@gm...> - 2013-01-17 14:51:33
Problem seems to be up again:

svn: E000111: Can't connect to host 'svn.valgrind.org': Connection refused

Ping is successful.

--
View this message in context: http://valgrind.10908.n7.nabble.com/svn-trouble-tp23414p43720.html
Sent from the Valgrind - Dev mailing list archive at Nabble.com.

From: <sv...@va...> - 2013-01-17 14:24:44
sewardj 2013-01-17 14:24:35 +0000 (Thu, 17 Jan 2013)
New Revision: 13236
Log:
Merge, from branches/COMEM, revisions 13139 to 13235.
Modified directories:
trunk/
Modified files:
trunk/cachegrind/cg_main.c
trunk/callgrind/main.c
trunk/callgrind/sim.c
trunk/drd/drd_load_store.c
trunk/helgrind/hg_main.c
trunk/lackey/lk_main.c
trunk/memcheck/mc_translate.c
Modified: trunk/
Modified: trunk/memcheck/mc_translate.c (+429 -121)
===================================================================
--- trunk/memcheck/mc_translate.c 2013-01-16 22:07:02 +00:00 (rev 13235)
+++ trunk/memcheck/mc_translate.c 2013-01-17 14:24:35 +00:00 (rev 13236)
@@ -43,6 +43,56 @@
#include "mc_include.h"
+/* Comments re guarded loads and stores, and conditional dirty calls
+ that access memory, JRS 2012-Dec-14, for branches/COMEM, r13180.
+
+ Currently Memcheck generates code that checks the definedness of
+ addresses in such cases, regardless of the what the guard value is
+ (at runtime). This could potentially lead to false positives if we
+ ever construct IR in which a guarded memory access happens, and the
+ address is undefined when the guard is false. However, at the
+ moment I am not aware of any situations where such IR is generated.
+
+ The obvious thing to do is generate conditionalised checking code
+ in such cases. However:
+
+ * it's more complex to verify
+
+ * it is cheaper to always do the check -- basically a check if
+ the shadow value is nonzero, and conditional call to report
+ an error if so -- than it is to conditionalise the check.
+
+ * currently the implementation is incomplete. complainIfUndefined
+ can correctly conditionalise the check and complaint as per its
+ third argument. However, the part of it that then sets the
+ shadow to 'defined' (see comments at the top of said fn) ignores
+ the guard.
+
+ Therefore, removing this functionality in r13181 until we know we
+ need it. To reinstate, do the following:
+
+ * back out r13181 (which also adds this comment)
+
+ * undo (== reinstate the non-NULL 3rd args) in the following two
+ chunks, which were removed in r13142. These are the only two
+ places where complainIfUndefined is actually used with a guard.
+
+ // First, emit a definedness test for the address. This also sets
+ // the address (shadow) to 'defined' following the test.
+ - complainIfUndefined( mce, addr, guard );
+ + complainIfUndefined( mce, addr, NULL );
+
+ and
+
+ IRType tyAddr;
+ tl_assert(d->mAddr);
+ - complainIfUndefined(mce, d->mAddr, d->guard);
+ + complainIfUndefined(mce, d->mAddr, NULL);
+
+ * fix complainIfUndefined to conditionalise setting the shadow temp
+ to 'defined', as described above.
+*/
+
/* FIXMEs JRS 2011-June-16.
Check the interpretation for vector narrowing and widening ops,
@@ -1090,17 +1140,20 @@
}
-/* Check the supplied **original** atom for undefinedness, and emit a
+/* Check the supplied *original* |atom| for undefinedness, and emit a
complaint if so. Once that happens, mark it as defined. This is
possible because the atom is either a tmp or literal. If it's a
tmp, it will be shadowed by a tmp, and so we can set the shadow to
be defined. In fact as mentioned above, we will have to allocate a
new tmp to carry the new 'defined' shadow value, and update the
original->tmp mapping accordingly; we cannot simply assign a new
- value to an existing shadow tmp as this breaks SSAness -- resulting
- in the post-instrumentation sanity checker spluttering in disapproval.
+ value to an existing shadow tmp as this breaks SSAness.
+
+ It may be that any resulting complaint should only be emitted
+ conditionally, as defined by |guard|. If |guard| is NULL then it
+ is assumed to be always-true.
*/
-static void complainIfUndefined ( MCEnv* mce, IRAtom* atom, IRExpr *guard )
+static void complainIfUndefined ( MCEnv* mce, IRAtom* atom )
{
IRAtom* vatom;
IRType ty;
@@ -1233,16 +1286,6 @@
VG_(fnptr_to_fnentry)( fn ), args );
di->guard = cond;
- /* If the complaint is to be issued under a guard condition, AND that
- guard condition. */
- if (guard) {
- IRAtom *g1 = assignNew('V', mce, Ity_I32, unop(Iop_1Uto32, di->guard));
- IRAtom *g2 = assignNew('V', mce, Ity_I32, unop(Iop_1Uto32, guard));
- IRAtom *e = assignNew('V', mce, Ity_I32, binop(Iop_And32, g1, g2));
-
- di->guard = assignNew('V', mce, Ity_I1, unop(Iop_32to1, e));
- }
-
setHelperAnns( mce, di );
stmt( 'V', mce, IRStmt_Dirty(di));
@@ -1374,7 +1417,7 @@
arrSize = descr->nElems * sizeofIRType(ty);
tl_assert(ty != Ity_I1);
tl_assert(isOriginalAtom(mce,ix));
- complainIfUndefined(mce, ix, NULL);
+ complainIfUndefined(mce, ix);
if (isAlwaysDefd(mce, descr->base, arrSize)) {
/* later: no ... */
/* emit code to emit a complaint if any of the vbits are 1. */
@@ -1423,7 +1466,7 @@
Int arrSize = descr->nElems * sizeofIRType(ty);
tl_assert(ty != Ity_I1);
tl_assert(isOriginalAtom(mce,ix));
- complainIfUndefined(mce, ix, NULL);
+ complainIfUndefined(mce, ix);
if (isAlwaysDefd(mce, descr->base, arrSize)) {
/* Always defined, return all zeroes of the relevant type */
return definedOfType(tyS);
@@ -2569,15 +2612,15 @@
/* IRRoundingModeDFP(I32) x I8 x D128 -> D128 */
return mkLazy3(mce, Ity_I128, vatom1, vatom2, vatom3);
case Iop_ExtractV128:
- complainIfUndefined(mce, atom3, NULL);
+ complainIfUndefined(mce, atom3);
return assignNew('V', mce, Ity_V128, triop(op, vatom1, vatom2, atom3));
case Iop_Extract64:
- complainIfUndefined(mce, atom3, NULL);
+ complainIfUndefined(mce, atom3);
return assignNew('V', mce, Ity_I64, triop(op, vatom1, vatom2, atom3));
case Iop_SetElem8x8:
case Iop_SetElem16x4:
case Iop_SetElem32x2:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_I64, triop(op, vatom1, atom2, vatom3));
default:
ppIROp(op);
@@ -2644,7 +2687,7 @@
case Iop_ShlN32x2:
case Iop_ShlN8x8:
/* Same scheme as with all other shifts. */
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_I64, binop(op, vatom1, atom2));
case Iop_QNarrowBin32Sto16Sx4:
@@ -2727,25 +2770,25 @@
case Iop_QShlN8Sx8:
case Iop_QShlN8x8:
case Iop_QSalN8x8:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast8x8(mce, vatom1);
case Iop_QShlN16Sx4:
case Iop_QShlN16x4:
case Iop_QSalN16x4:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast16x4(mce, vatom1);
case Iop_QShlN32Sx2:
case Iop_QShlN32x2:
case Iop_QSalN32x2:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast32x2(mce, vatom1);
case Iop_QShlN64Sx1:
case Iop_QShlN64x1:
case Iop_QSalN64x1:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast32x2(mce, vatom1);
case Iop_PwMax32Sx2:
@@ -2842,13 +2885,13 @@
return assignNew('V', mce, Ity_I64, binop(op, vatom1, vatom2));
case Iop_GetElem8x8:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_I8, binop(op, vatom1, atom2));
case Iop_GetElem16x4:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_I16, binop(op, vatom1, atom2));
case Iop_GetElem32x2:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_I32, binop(op, vatom1, atom2));
/* Perm8x8: rearrange values in left arg using steering values
@@ -2878,7 +2921,7 @@
/* Same scheme as with all other shifts. Note: 22 Oct 05:
this is wrong now, scalar shifts are done properly lazily.
Vector shifts should be fixed too. */
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_V128, binop(op, vatom1, atom2));
/* V x V shifts/rotates are done using the standard lazy scheme. */
@@ -2925,14 +2968,14 @@
case Iop_F32ToFixed32Sx4_RZ:
case Iop_Fixed32UToF32x4_RN:
case Iop_Fixed32SToF32x4_RN:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast32x4(mce, vatom1);
case Iop_F32ToFixed32Ux2_RZ:
case Iop_F32ToFixed32Sx2_RZ:
case Iop_Fixed32UToF32x2_RN:
case Iop_Fixed32SToF32x2_RN:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast32x2(mce, vatom1);
case Iop_QSub8Ux16:
@@ -3089,25 +3132,25 @@
case Iop_QShlN8Sx16:
case Iop_QShlN8x16:
case Iop_QSalN8x16:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast8x16(mce, vatom1);
case Iop_QShlN16Sx8:
case Iop_QShlN16x8:
case Iop_QSalN16x8:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast16x8(mce, vatom1);
case Iop_QShlN32Sx4:
case Iop_QShlN32x4:
case Iop_QSalN32x4:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast32x4(mce, vatom1);
case Iop_QShlN64Sx2:
case Iop_QShlN64x2:
case Iop_QSalN64x2:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return mkPCast32x4(mce, vatom1);
case Iop_Mull32Sx2:
@@ -3170,16 +3213,16 @@
return assignNew('V', mce, Ity_V128, binop(op, vatom1, vatom2));
case Iop_GetElem8x16:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_I8, binop(op, vatom1, atom2));
case Iop_GetElem16x8:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_I16, binop(op, vatom1, atom2));
case Iop_GetElem32x4:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_I32, binop(op, vatom1, atom2));
case Iop_GetElem64x2:
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_I64, binop(op, vatom1, atom2));
/* Perm8x16: rearrange values in left arg using steering values
@@ -3238,7 +3281,7 @@
/* Same scheme as with all other shifts. Note: 10 Nov 05:
this is wrong now, scalar shifts are done properly lazily.
Vector shifts should be fixed too. */
- complainIfUndefined(mce, atom2, NULL);
+ complainIfUndefined(mce, atom2);
return assignNew('V', mce, Ity_V128, binop(op, vatom1, atom2));
/* I128-bit data-steering */
@@ -3626,6 +3669,11 @@
static
IRExpr* expr2vbits_Unop ( MCEnv* mce, IROp op, IRAtom* atom )
{
+ /* For the widening operations {8,16,32}{U,S}to{16,32,64}, the
+ selection of shadow operation implicitly duplicates the logic in
+ do_shadow_LoadG and should be kept in sync (in the very unlikely
+ event that the interpretation of such widening ops changes in
+ future). See comment in do_shadow_LoadG. */
IRAtom* vatom = expr2vbits( mce, atom );
tl_assert(isOriginalAtom(mce,atom));
switch (op) {
@@ -3932,11 +3980,13 @@
}
-/* Worker function; do not call directly. */
+/* Worker function; do not call directly. See comments on
+ expr2vbits_Load for the meaning of 'guard'. If 'guard' evaluates
+ to False at run time, the returned value is all-ones. */
static
IRAtom* expr2vbits_Load_WRK ( MCEnv* mce,
IREndness end, IRType ty,
- IRAtom* addr, UInt bias )
+ IRAtom* addr, UInt bias, IRAtom* guard )
{
void* helper;
const HChar* hname;
@@ -3949,7 +3999,7 @@
/* First, emit a definedness test for the address. This also sets
the address (shadow) to 'defined' following the test. */
- complainIfUndefined( mce, addr, NULL );
+ complainIfUndefined( mce, addr );
/* Now cook up a call to the relevant helper function, to read the
data V bits from shadow memory. */
@@ -4012,16 +4062,34 @@
hname, VG_(fnptr_to_fnentry)( helper ),
mkIRExprVec_1( addrAct ));
setHelperAnns( mce, di );
+ if (guard) {
+ di->guard = guard;
+ /* Ideally the didn't-happen return value here would be all-ones
+ (all-undefined), so it'd be obvious if it got used
+ inadvertantly. We can get by with the IR-mandated default
+ value (0b01 repeating, 0x55 etc) as that'll still look pretty
+ undefined if it ever leaks out. */
+ }
stmt( 'V', mce, IRStmt_Dirty(di) );
return mkexpr(datavbits);
}
+/* Generate IR to do a shadow load. The helper is expected to check
+ the validity of the address and return the V bits for that address.
+ This can optionally be controlled by a guard, which is assumed to
+ be True if NULL. In the case where the guard is False at runtime,
+ the helper will return the didn't-do-the-call value of all-ones.
+ Since all ones means "completely undefined result", the caller of
+ this function will need to fix up the result somehow in that
+ case.
+*/
static
IRAtom* expr2vbits_Load ( MCEnv* mce,
IREndness end, IRType ty,
- IRAtom* addr, UInt bias )
+ IRAtom* addr, UInt bias,
+ IRAtom* guard )
{
tl_assert(end == Iend_LE || end == Iend_BE);
switch (shadowTypeV(ty)) {
@@ -4029,15 +4097,15 @@
case Ity_I16:
case Ity_I32:
case Ity_I64:
- return expr2vbits_Load_WRK(mce, end, ty, addr, bias);
+ return expr2vbits_Load_WRK(mce, end, ty, addr, bias, guard);
case Ity_V128: {
IRAtom *v64hi, *v64lo;
if (end == Iend_LE) {
- v64lo = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+0);
- v64hi = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+8);
+ v64lo = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+0, guard);
+ v64hi = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+8, guard);
} else {
- v64hi = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+0);
- v64lo = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+8);
+ v64hi = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+0, guard);
+ v64lo = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+8, guard);
}
return assignNew( 'V', mce,
Ity_V128,
@@ -4047,10 +4115,14 @@
/* V256-bit case -- phrased in terms of 64 bit units (Qs),
with Q3 being the most significant lane. */
if (end == Iend_BE) goto unhandled;
- IRAtom* v64Q0 = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+0);
- IRAtom* v64Q1 = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+8);
- IRAtom* v64Q2 = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+16);
- IRAtom* v64Q3 = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+24);
+ IRAtom* v64Q0
+ = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+0, guard);
+ IRAtom* v64Q1
+ = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+8, guard);
+ IRAtom* v64Q2
+ = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+16, guard);
+ IRAtom* v64Q3
+ = expr2vbits_Load_WRK(mce, end, Ity_I64, addr, bias+24, guard);
return assignNew( 'V', mce,
Ity_V256,
IRExpr_Qop(Iop_64x4toV256,
@@ -4063,29 +4135,86 @@
}
-/* If there is no guard expression or the guard is always TRUE this function
- behaves like expr2vbits_Load. If the guard is not true at runtime, an
- all-bits-defined bit pattern will be returned.
- It is assumed that definedness of GUARD has already been checked at the call
- site. */
+/* The most general handler for guarded loads. Assumes the
+ definedness of GUARD and ADDR have already been checked by the
+ caller. A GUARD of NULL is assumed to mean "always True".
+
+ Generate IR to do a shadow load from ADDR and return the V bits.
+ The loaded type is TY. The loaded data is then (shadow) widened by
+ using VWIDEN, which can be Iop_INVALID to denote a no-op. If GUARD
+ evaluates to False at run time then the returned Vbits are simply
+ VALT instead. Note therefore that the argument type of VWIDEN must
+ be TY and the result type of VWIDEN must equal the type of VALT.
+*/
static
-IRAtom* expr2vbits_guarded_Load ( MCEnv* mce,
- IREndness end, IRType ty,
- IRAtom* addr, UInt bias, IRAtom *guard )
+IRAtom* expr2vbits_Load_guarded_General ( MCEnv* mce,
+ IREndness end, IRType ty,
+ IRAtom* addr, UInt bias,
+ IRAtom* guard,
+ IROp vwiden, IRAtom* valt )
{
- if (guard) {
- IRAtom *cond, *iffalse, *iftrue;
+ /* Sanity check the conversion operation, and also set TYWIDE. */
+ IRType tyWide = Ity_INVALID;
+ switch (vwiden) {
+ case Iop_INVALID:
+ tyWide = ty;
+ break;
+ case Iop_16Uto32: case Iop_16Sto32: case Iop_8Uto32: case Iop_8Sto32:
+ tyWide = Ity_I32;
+ break;
+ default:
+ VG_(tool_panic)("memcheck:expr2vbits_Load_guarded_General");
+ }
- cond = assignNew('V', mce, Ity_I8, unop(Iop_1Uto8, guard));
- iftrue = assignNew('V', mce, ty,
- expr2vbits_Load(mce, end, ty, addr, bias));
- iffalse = assignNew('V', mce, ty, definedOfType(ty));
+ /* If the guard evaluates to True, this will hold the loaded V bits
+ at TY. If the guard evaluates to False, this will be all
+ ones, meaning "all undefined", in which case we will have to
+ replace it using a Mux0X below. */
+ IRAtom* iftrue1
+ = assignNew('V', mce, ty,
+ expr2vbits_Load(mce, end, ty, addr, bias, guard));
+ /* Now (shadow-) widen the loaded V bits to the desired width. In
+ the guard-is-False case, the allowable widening operators will
+ in the worst case (unsigned widening) at least leave the
+ pre-widened part as being marked all-undefined, and in the best
+ case (signed widening) mark the whole widened result as
+ undefined. Anyway, it doesn't matter really, since in this case
+ we will replace said value with the default value |valt| using a
+ Mux0X. */
+ IRAtom* iftrue2
+ = vwiden == Iop_INVALID
+ ? iftrue1
+ : assignNew('V', mce, tyWide, unop(vwiden, iftrue1));
+ /* These are the V bits we will return if the load doesn't take
+ place. */
+ IRAtom* iffalse
+ = valt;
+ /* Prepare the cond for the Mux0X. Convert a NULL cond into
+ something that iropt knows how to fold out later. */
+ IRAtom* cond
+ = guard == NULL
+ ? mkU8(1)
+ : assignNew('V', mce, Ity_I8, unop(Iop_1Uto8, guard));
+ /* And assemble the final result. */
+ return assignNew('V', mce, tyWide, IRExpr_Mux0X(cond, iffalse, iftrue2));
+}
- return assignNew('V', mce, ty, IRExpr_Mux0X(cond, iffalse, iftrue));
- }
- /* No guard expression or unconditional load */
- return expr2vbits_Load(mce, end, ty, addr, bias);
+/* A simpler handler for guarded loads, in which there is no
+ conversion operation, and the default V bit return (when the guard
+ evaluates to False at runtime) is "all defined". If there is no
+ guard expression or the guard is always TRUE this function behaves
+ like expr2vbits_Load. It is assumed that definedness of GUARD and
+ ADDR has already been checked at the call site. */
+static
+IRAtom* expr2vbits_Load_guarded_Simple ( MCEnv* mce,
+ IREndness end, IRType ty,
+ IRAtom* addr, UInt bias,
+ IRAtom *guard )
+{
+ return expr2vbits_Load_guarded_General(
+ mce, end, ty, addr, bias, guard, Iop_INVALID, definedOfType(ty)
+ );
}
@@ -4164,7 +4293,8 @@
case Iex_Load:
return expr2vbits_Load( mce, e->Iex.Load.end,
e->Iex.Load.ty,
- e->Iex.Load.addr, 0/*addr bias*/ );
+ e->Iex.Load.addr, 0/*addr bias*/,
+ NULL/* guard == "always True"*/ );
case Iex_CCall:
return mkLazyN( mce, e->Iex.CCall.args,
@@ -4234,13 +4364,17 @@
}
-/* Generate a shadow store. addr is always the original address atom.
- You can pass in either originals or V-bits for the data atom, but
- obviously not both. guard :: Ity_I1 controls whether the store
- really happens; NULL means it unconditionally does. Note that
- guard itself is not checked for definedness; the caller of this
- function must do that if necessary. */
+/* Generate a shadow store. |addr| is always the original address
+ atom. You can pass in either originals or V-bits for the data
+ atom, but obviously not both. This function generates a check for
+ the definedness of |addr|. That check is performed regardless of
+ whether |guard| is true or not.
+ |guard| :: Ity_I1 controls whether the store really happens; NULL
+ means it unconditionally does. Note that |guard| itself is not
+ checked for definedness; the caller of this function must do that
+ if necessary.
+*/
static
void do_shadow_Store ( MCEnv* mce,
IREndness end,
@@ -4298,7 +4432,7 @@
/* First, emit a definedness test for the address. This also sets
the address (shadow) to 'defined' following the test. */
- complainIfUndefined( mce, addr, guard );
+ complainIfUndefined( mce, addr );
/* Now decide which helper function to call to write the data V
bits into shadow memory. */
@@ -4525,7 +4659,7 @@
# endif
/* First check the guard. */
- complainIfUndefined(mce, d->guard, NULL);
+ complainIfUndefined(mce, d->guard);
/* Now round up all inputs and PCast over them. */
curr = definedOfType(Ity_I32);
@@ -4599,7 +4733,7 @@
should remove all but this test. */
IRType tyAddr;
tl_assert(d->mAddr);
- complainIfUndefined(mce, d->mAddr, d->guard);
+ complainIfUndefined(mce, d->mAddr);
tyAddr = typeOfIRExpr(mce->sb->tyenv, d->mAddr);
tl_assert(tyAddr == Ity_I32 || tyAddr == Ity_I64);
@@ -4616,8 +4750,8 @@
while (toDo >= 4) {
here = mkPCastTo(
mce, Ity_I32,
- expr2vbits_guarded_Load ( mce, end, Ity_I32, d->mAddr,
- d->mSize - toDo, d->guard )
+ expr2vbits_Load_guarded_Simple(
+ mce, end, Ity_I32, d->mAddr, d->mSize - toDo, d->guard )
);
curr = mkUifU32(mce, here, curr);
toDo -= 4;
@@ -4626,8 +4760,8 @@
while (toDo >= 2) {
here = mkPCastTo(
mce, Ity_I32,
- expr2vbits_guarded_Load ( mce, end, Ity_I16, d->mAddr,
- d->mSize - toDo, d->guard )
+ expr2vbits_Load_guarded_Simple(
+ mce, end, Ity_I16, d->mAddr, d->mSize - toDo, d->guard )
);
curr = mkUifU32(mce, here, curr);
toDo -= 2;
@@ -4636,8 +4770,8 @@
if (toDo == 1) {
here = mkPCastTo(
mce, Ity_I32,
- expr2vbits_guarded_Load ( mce, end, Ity_I8, d->mAddr,
- d->mSize - toDo, d->guard )
+ expr2vbits_Load_guarded_Simple(
+ mce, end, Ity_I8, d->mAddr, d->mSize - toDo, d->guard )
);
curr = mkUifU32(mce, here, curr);
toDo -= 1;
@@ -5007,7 +5141,8 @@
'V', mce, elemTy,
expr2vbits_Load(
mce,
- cas->end, elemTy, cas->addr, 0/*Addr bias*/
+ cas->end, elemTy, cas->addr, 0/*Addr bias*/,
+ NULL/*always happens*/
));
bind_shadow_tmp_to_orig('V', mce, mkexpr(cas->oldLo), voldLo);
if (otrak) {
@@ -5135,14 +5270,16 @@
'V', mce, elemTy,
expr2vbits_Load(
mce,
- cas->end, elemTy, cas->addr, memOffsHi/*Addr bias*/
+ cas->end, elemTy, cas->addr, memOffsHi/*Addr bias*/,
+ NULL/*always happens*/
));
voldLo
= assignNew(
'V', mce, elemTy,
expr2vbits_Load(
mce,
- cas->end, elemTy, cas->addr, memOffsLo/*Addr bias*/
+ cas->end, elemTy, cas->addr, memOffsLo/*Addr bias*/,
+ NULL/*always happens*/
));
bind_shadow_tmp_to_orig('V', mce, mkexpr(cas->oldHi), voldHi);
bind_shadow_tmp_to_orig('V', mce, mkexpr(cas->oldLo), voldLo);
@@ -5229,7 +5366,8 @@
|| resTy == Ity_I16 || resTy == Ity_I8);
assign( 'V', mce, resTmp,
expr2vbits_Load(
- mce, stEnd, resTy, stAddr, 0/*addr bias*/));
+ mce, stEnd, resTy, stAddr, 0/*addr bias*/,
+ NULL/*always happens*/) );
} else {
/* Store Conditional */
/* Stay sane */
@@ -5263,6 +5401,58 @@
}
+/* ---- Dealing with LoadG/StoreG (not entirely simple) ---- */
+
+static void do_shadow_StoreG ( MCEnv* mce, IRStoreG* sg )
+{
+ if (0) VG_(printf)("XXXX StoreG\n");
+ complainIfUndefined(mce, sg->guard);
+ /* do_shadow_Store will check the definedness of sg->addr. */
+ do_shadow_Store( mce, sg->end,
+ sg->addr, 0/* addr bias */,
+ sg->data,
+ NULL /* shadow data */,
+ sg->guard );
+}
+
+static void do_shadow_LoadG ( MCEnv* mce, IRLoadG* lg )
+{
+ if (0) VG_(printf)("XXXX LoadG\n");
+ complainIfUndefined(mce, lg->guard);
+ /* expr2vbits_Load_guarded_General will check the definedness of
+ lg->addr. */
+
+ /* Look at the LoadG's built-in conversion operation, to determine
+ the source (actual loaded data) type, and the equivalent IROp.
+ NOTE that implicitly we are taking a widening operation to be
+ applied to original atoms and producing one that applies to V
+ bits. Since signed and unsigned widening are self-shadowing,
+ this is a straight copy of the op (modulo swapping from the
+ IRLoadGOp form to the IROp form). Note also therefore that this
+ implicitly duplicates the logic to do with said widening ops in
+ expr2vbits_Unop. See comment at the start of expr2vbits_Unop. */
+ IROp vwiden = Iop_INVALID;
+ IRType loadedTy = Ity_INVALID;
+ switch (lg->cvt) {
+ case ILGop_Ident32: loadedTy = Ity_I32; vwiden = Iop_INVALID; break;
+ case ILGop_16Uto32: loadedTy = Ity_I16; vwiden = Iop_16Uto32; break;
+ case ILGop_16Sto32: loadedTy = Ity_I16; vwiden = Iop_16Sto32; break;
+ case ILGop_8Uto32: loadedTy = Ity_I8; vwiden = Iop_8Uto32; break;
+ case ILGop_8Sto32: loadedTy = Ity_I8; vwiden = Iop_8Sto32; break;
+ default: VG_(tool_panic)("do_shadow_LoadG");
+ }
+
+ IRAtom* vbits_alt
+ = expr2vbits( mce, lg->alt );
+ IRAtom* vbits_final
+ = expr2vbits_Load_guarded_General(mce, lg->end, loadedTy,
+ lg->addr, 0/*addr bias*/,
+ lg->guard, vwiden, vbits_alt );
+ /* And finally, bind the V bits to the destination temporary. */
+ assign( 'V', mce, findShadowTmpV(mce, lg->dst), vbits_final );
+}
+
+
/*------------------------------------------------------------*/
/*--- Memcheck main ---*/
/*------------------------------------------------------------*/
@@ -5365,6 +5555,16 @@
case Ist_Store:
return isBogusAtom(st->Ist.Store.addr)
|| isBogusAtom(st->Ist.Store.data);
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ return isBogusAtom(sg->addr) || isBogusAtom(sg->data)
+ || isBogusAtom(sg->guard);
+ }
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ return isBogusAtom(lg->addr) || isBogusAtom(lg->alt)
+ || isBogusAtom(lg->guard);
+ }
case Ist_Exit:
return isBogusAtom(st->Ist.Exit.guard);
case Ist_AbiHint:
@@ -5602,8 +5802,16 @@
NULL/*guard*/ );
break;
+ case Ist_StoreG:
+ do_shadow_StoreG( &mce, st->Ist.StoreG.details );
+ break;
+
+ case Ist_LoadG:
+ do_shadow_LoadG( &mce, st->Ist.LoadG.details );
+ break;
+
case Ist_Exit:
- complainIfUndefined( &mce, st->Ist.Exit.guard, NULL );
+ complainIfUndefined( &mce, st->Ist.Exit.guard );
break;
case Ist_IMark:
@@ -5674,7 +5882,7 @@
VG_(printf)("\n\n");
}
- complainIfUndefined( &mce, sb_in->next, NULL );
+ complainIfUndefined( &mce, sb_in->next );
if (0 && verboze) {
for (j = first_stmt; j < sb_out->stmts_used; j++) {
@@ -5872,8 +6080,18 @@
return assignNew( 'B', mce, Ity_I32, binop(Iop_Max32U, b1, b2) );
}
-static IRAtom* gen_load_b ( MCEnv* mce, Int szB,
- IRAtom* baseaddr, Int offset )
+
+/* Make a guarded origin load, with no special handling in the
+ didn't-happen case. A GUARD of NULL is assumed to mean "always
+ True".
+
+ Generate IR to do a shadow origins load from BASEADDR+OFFSET and
+ return the otag. The loaded size is SZB. If GUARD evaluates to
+ False at run time then the returned otag is zero.
+*/
+static IRAtom* gen_guarded_load_b ( MCEnv* mce, Int szB,
+ IRAtom* baseaddr,
+ Int offset, IRExpr* guard )
{
void* hFun;
const HChar* hName;
@@ -5916,6 +6134,15 @@
bTmp, 1/*regparms*/, hName, VG_(fnptr_to_fnentry)( hFun ),
mkIRExprVec_1( ea )
);
+ if (guard) {
+ di->guard = guard;
+ /* Ideally the didn't-happen return value here would be
+ all-zeroes (unknown-origin), so it'd be harmless if it got
+ used inadvertently. We slum it out with the IR-mandated
+ default value (0b01 repeating, 0x55 etc) as that'll probably
+ trump all legitimate otags via Max32, and it's pretty
+ obviously bogus. */
+ }
/* no need to mess with any annotations. This call accesses
neither guest state nor guest memory. */
stmt( 'B', mce, IRStmt_Dirty(di) );
@@ -5930,25 +6157,56 @@
}
}
-static IRAtom* gen_guarded_load_b ( MCEnv* mce, Int szB, IRAtom* baseaddr,
- Int offset, IRAtom* guard )
+
+/* Generate IR to do a shadow origins load from BASEADDR+OFFSET. The
+ loaded size is SZB. The load is regarded as unconditional (always
+ happens).
+*/
+static IRAtom* gen_load_b ( MCEnv* mce, Int szB, IRAtom* baseaddr,
+ Int offset )
{
- if (guard) {
- IRAtom *cond, *iffalse, *iftrue;
+ return gen_guarded_load_b(mce, szB, baseaddr, offset, NULL/*guard*/);
+}
- cond = assignNew('B', mce, Ity_I8, unop(Iop_1Uto8, guard));
- iftrue = assignNew('B', mce, Ity_I32,
- gen_load_b(mce, szB, baseaddr, offset));
- iffalse = mkU32(0);
- return assignNew('B', mce, Ity_I32, IRExpr_Mux0X(cond, iffalse, iftrue));
- }
+/* The most general handler for guarded origin loads. A GUARD of NULL
+ is assumed to mean "always True".
- return gen_load_b(mce, szB, baseaddr, offset);
+ Generate IR to do a shadow origin load from ADDR+BIAS and return
+ the B bits. The loaded type is TY. If GUARD evaluates to False at
+ run time then the returned B bits are simply BALT instead.
+*/
+static
+IRAtom* expr2ori_Load_guarded_General ( MCEnv* mce,
+ IRType ty,
+ IRAtom* addr, UInt bias,
+ IRAtom* guard, IRAtom* balt )
+{
+ /* If the guard evaluates to True, this will hold the loaded
+ origin. If the guard evaluates to False, this will be zero,
+ meaning "unknown origin", in which case we will have to replace
+ it using a Mux0X below. */
+ IRAtom* iftrue
+ = assignNew('B', mce, Ity_I32,
+ gen_guarded_load_b(mce, sizeofIRType(ty),
+ addr, bias, guard));
+ /* These are the bits we will return if the load doesn't take
+ place. */
+ IRAtom* iffalse
+ = balt;
+ /* Prepare the cond for the Mux0X. Convert a NULL cond into
+ something that iropt knows how to fold out later. */
+ IRAtom* cond
+ = guard == NULL
+ ? mkU8(1)
+ : assignNew('B', mce, Ity_I8, unop(Iop_1Uto8, guard));
+ /* And assemble the final result. */
+ return assignNew('B', mce, Ity_I32, IRExpr_Mux0X(cond, iffalse, iftrue));
}
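The net effect of the guarded origin load plus Mux0X above can be sketched as a scalar model: when the guard is false the dirty call does not run, so its result temp holds the IR-mandated default (0x55 repeating), and the Mux0X then discards that in favour of BALT. Plain C stand-in, not VEX IR.

```c
#include <assert.h>

/* Scalar model of expr2ori_Load_guarded_General's selection logic. */
static unsigned model_guarded_origin_load(int guard, unsigned mem_otag,
                                          unsigned balt)
{
   /* If the guard is false the helper doesn't run and its result temp
      holds the IR-mandated didn't-happen default, 0x55555555. */
   unsigned iftrue = guard ? mem_otag : 0x55555555u;
   /* The Mux0X then selects: guard true -> loaded otag, else balt. */
   return guard ? iftrue : balt;
}
```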
-/* Generate a shadow store. guard :: Ity_I1 controls whether the
- store really happens; NULL means it unconditionally does. */
+
+/* Generate a shadow origins store. guard :: Ity_I1 controls whether
+ the store really happens; NULL means it unconditionally does. */
static void gen_store_b ( MCEnv* mce, Int szB,
IRAtom* baseaddr, Int offset, IRAtom* dataB,
IRAtom* guard )
@@ -6345,8 +6603,8 @@
}
/* handle possible 16-bit excess */
while (toDo >= 2) {
- gen_store_b( mce, 2, d->mAddr, d->mSize - toDo, curr,
- d->guard );
+ gen_store_b( mce, 2, d->mAddr, d->mSize - toDo, curr,
+ d->guard );
toDo -= 2;
}
/* chew off the remaining 8-bit chunk, if any */
@@ -6360,10 +6618,12 @@
}
-static void do_origins_Store ( MCEnv* mce,
- IREndness stEnd,
- IRExpr* stAddr,
- IRExpr* stData )
+/* Generate IR for origin shadowing for a general guarded store. */
+static void do_origins_Store_guarded ( MCEnv* mce,
+ IREndness stEnd,
+ IRExpr* stAddr,
+ IRExpr* stData,
+ IRExpr* guard )
{
Int dszB;
IRAtom* dataB;
@@ -6374,11 +6634,51 @@
tl_assert(isIRAtom(stData));
dszB = sizeofIRType( typeOfIRExpr(mce->sb->tyenv, stData ) );
dataB = schemeE( mce, stData );
- gen_store_b( mce, dszB, stAddr, 0/*offset*/, dataB,
- NULL/*guard*/ );
+ gen_store_b( mce, dszB, stAddr, 0/*offset*/, dataB, guard );
}
+/* Generate IR for origin shadowing for a plain store. */
+static void do_origins_Store_plain ( MCEnv* mce,
+ IREndness stEnd,
+ IRExpr* stAddr,
+ IRExpr* stData )
+{
+ do_origins_Store_guarded ( mce, stEnd, stAddr, stData,
+ NULL/*guard*/ );
+}
+
+
+/* ---- Dealing with LoadG/StoreG (not entirely simple) ---- */
+
+static void do_origins_StoreG ( MCEnv* mce, IRStoreG* sg )
+{
+ do_origins_Store_guarded( mce, sg->end, sg->addr,
+ sg->data, sg->guard );
+}
+
+static void do_origins_LoadG ( MCEnv* mce, IRLoadG* lg )
+{
+ IRType loadedTy = Ity_INVALID;
+ switch (lg->cvt) {
+ case ILGop_Ident32: loadedTy = Ity_I32; break;
+ case ILGop_16Uto32: loadedTy = Ity_I16; break;
+ case ILGop_16Sto32: loadedTy = Ity_I16; break;
+ case ILGop_8Uto32: loadedTy = Ity_I8; break;
+ case ILGop_8Sto32: loadedTy = Ity_I8; break;
+ default: VG_(tool_panic)("schemeS.IRLoadG");
+ }
+ IRAtom* ori_alt
+ = schemeE( mce,lg->alt );
+ IRAtom* ori_final
+ = expr2ori_Load_guarded_General(mce, loadedTy,
+ lg->addr, 0/*addr bias*/,
+ lg->guard, ori_alt );
+ /* And finally, bind the origin to the destination temporary. */
+ assign( 'B', mce, findShadowTmpB(mce, lg->dst), ori_final );
+}
+
+
static void schemeS ( MCEnv* mce, IRStmt* st )
{
tl_assert(MC_(clo_mc_level) == 3);
@@ -6426,11 +6726,19 @@
break;
case Ist_Store:
- do_origins_Store( mce, st->Ist.Store.end,
- st->Ist.Store.addr,
- st->Ist.Store.data );
+ do_origins_Store_plain( mce, st->Ist.Store.end,
+ st->Ist.Store.addr,
+ st->Ist.Store.data );
break;
+ case Ist_StoreG:
+ do_origins_StoreG( mce, st->Ist.StoreG.details );
+ break;
+
+ case Ist_LoadG:
+ do_origins_LoadG( mce, st->Ist.LoadG.details );
+ break;
+
case Ist_LLSC: {
/* In short: treat a load-linked like a normal load followed
by an assignment of the loaded (shadow) data the result
@@ -6448,9 +6756,9 @@
schemeE(mce, vanillaLoad));
} else {
/* Store conditional */
- do_origins_Store( mce, st->Ist.LLSC.end,
- st->Ist.LLSC.addr,
- st->Ist.LLSC.storedata );
+ do_origins_Store_plain( mce, st->Ist.LLSC.end,
+ st->Ist.LLSC.addr,
+ st->Ist.LLSC.storedata );
/* For the rationale behind this, see comments at the
place where the V-shadow for .result is constructed, in
do_shadow_LLSC. In short, we regard .result as
Property changed: trunk (+0 -0)
___________________________________________________________________
Name: svn:mergeinfo
- /branches/TCHAIN:12477-12516
+ /branches/COMEM:13142-13235
/branches/TCHAIN:12477-12516
Modified: trunk/callgrind/main.c (+75 -6)
===================================================================
--- trunk/callgrind/main.c 2013-01-16 22:07:02 +00:00 (rev 13235)
+++ trunk/callgrind/main.c 2013-01-17 14:24:35 +00:00 (rev 13236)
@@ -669,6 +669,50 @@
}
static
+void addEvent_D_guarded ( ClgState* clgs, InstrInfo* inode,
+ Int datasize, IRAtom* ea, IRAtom* guard,
+ Bool isWrite )
+{
+ tl_assert(isIRAtom(ea));
+ tl_assert(guard);
+ tl_assert(isIRAtom(guard));
+ tl_assert(datasize >= 1);
+ if (!CLG_(clo).simulate_cache) return;
+ tl_assert(datasize <= CLG_(min_line_size));
+
+ /* Adding guarded memory actions and merging them with the existing
+ queue is too complex. Simply flush the queue and add this
+ action immediately. Since guarded loads and stores are pretty
+ rare, this is not thought likely to cause any noticeable
+ performance loss as a result of the loss of event-merging
+ opportunities. */
+ tl_assert(clgs->events_used >= 0);
+ flushEvents(clgs);
+ tl_assert(clgs->events_used == 0);
+ /* Same as case Ev_Dw / case Ev_Dr in flushEvents, except with guard */
+ IRExpr* i_node_expr;
+ const HChar* helperName;
+ void* helperAddr;
+ IRExpr** argv;
+ Int regparms;
+ IRDirty* di;
+ i_node_expr = mkIRExpr_HWord( (HWord)inode );
+ helperName = isWrite ? CLG_(cachesim).log_0I1Dw_name
+ : CLG_(cachesim).log_0I1Dr_name;
+ helperAddr = isWrite ? CLG_(cachesim).log_0I1Dw
+ : CLG_(cachesim).log_0I1Dr;
+ argv = mkIRExprVec_3( i_node_expr,
+ ea, mkIRExpr_HWord( datasize ) );
+ regparms = 3;
+ di = unsafeIRDirty_0_N(
+ regparms,
+ helperName, VG_(fnptr_to_fnentry)( helperAddr ),
+ argv );
+ di->guard = guard;
+ addStmtToIRSB( clgs->sbOut, IRStmt_Dirty(di) );
+}
+
+static
void addEvent_Bc ( ClgState* clgs, InstrInfo* inode, IRAtom* guard )
{
Event* evt;
@@ -912,14 +956,14 @@
VexArchInfo* archinfo_host,
IRType gWordTy, IRType hWordTy )
{
- Int i;
- IRStmt* st;
- Addr origAddr;
+ Int i;
+ IRStmt* st;
+ Addr origAddr;
InstrInfo* curr_inode = NULL;
- ClgState clgs;
- UInt cJumps = 0;
+ ClgState clgs;
+ UInt cJumps = 0;
+ IRTypeEnv* tyenv = sbIn->tyenv;
-
if (gWordTy != hWordTy) {
/* We don't currently support this case. */
VG_(tool_panic)("host/guest word size mismatch");
@@ -1022,6 +1066,31 @@
break;
}
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ IRExpr* data = sg->data;
+ IRExpr* addr = sg->addr;
+ IRType type = typeOfIRExpr(tyenv, data);
+ tl_assert(type != Ity_INVALID);
+ addEvent_D_guarded( &clgs, curr_inode,
+ sizeofIRType(type), addr, sg->guard,
+ True/*isWrite*/ );
+ break;
+ }
+
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ IRType type = Ity_INVALID; /* loaded type */
+ IRType typeWide = Ity_INVALID; /* after implicit widening */
+ IRExpr* addr = lg->addr;
+ typeOfIRLoadGOp(lg->cvt, &typeWide, &type);
+ tl_assert(type != Ity_INVALID);
+ addEvent_D_guarded( &clgs, curr_inode,
+ sizeofIRType(type), addr, lg->guard,
+ False/*!isWrite*/ );
+ break;
+ }
+
case Ist_Dirty: {
Int dataSize;
IRDirty* d = st->Ist.Dirty.details;
Modified: trunk/callgrind/sim.c (+4 -0)
===================================================================
--- trunk/callgrind/sim.c 2013-01-16 22:07:02 +00:00 (rev 13235)
+++ trunk/callgrind/sim.c 2013-01-17 14:24:35 +00:00 (rev 13236)
@@ -1189,6 +1189,9 @@
}
+/* Note that addEvent_D_guarded assumes that log_0I1Dr and log_0I1Dw
+ have exactly the same prototype. If you change them, you must
+ change addEvent_D_guarded too. */
VG_REGPARM(3)
static void log_0I1Dr(InstrInfo* ii, Addr data_addr, Word data_size)
{
@@ -1248,6 +1251,7 @@
}
}
+/* See comment on log_0I1Dr. */
VG_REGPARM(3)
static void log_0I1Dw(InstrInfo* ii, Addr data_addr, Word data_size)
{
Modified: trunk/helgrind/hg_main.c (+71 -14)
===================================================================
--- trunk/helgrind/hg_main.c 2013-01-16 22:07:02 +00:00 (rev 13235)
+++ trunk/helgrind/hg_main.c 2013-01-17 14:24:35 +00:00 (rev 13236)
@@ -4126,18 +4126,40 @@
/*--- Instrumentation ---*/
/*--------------------------------------------------------------*/
+#define unop(_op, _arg1) IRExpr_Unop((_op),(_arg1))
#define binop(_op, _arg1, _arg2) IRExpr_Binop((_op),(_arg1),(_arg2))
#define mkexpr(_tmp) IRExpr_RdTmp((_tmp))
#define mkU32(_n) IRExpr_Const(IRConst_U32(_n))
#define mkU64(_n) IRExpr_Const(IRConst_U64(_n))
#define assign(_t, _e) IRStmt_WrTmp((_t), (_e))
+/* This takes and returns atoms, of course. Not full IRExprs. */
+static IRExpr* mk_And1 ( IRSB* sbOut, IRExpr* arg1, IRExpr* arg2 )
+{
+ tl_assert(arg1 && arg2);
+ tl_assert(isIRAtom(arg1));
+ tl_assert(isIRAtom(arg2));
+ /* Generate 32to1(And32(1Uto32(arg1), 1Uto32(arg2))). Appalling
+ code, I know. */
+ IRTemp wide1 = newIRTemp(sbOut->tyenv, Ity_I32);
+ IRTemp wide2 = newIRTemp(sbOut->tyenv, Ity_I32);
+ IRTemp anded = newIRTemp(sbOut->tyenv, Ity_I32);
+ IRTemp res = newIRTemp(sbOut->tyenv, Ity_I1);
+ addStmtToIRSB(sbOut, assign(wide1, unop(Iop_1Uto32, arg1)));
+ addStmtToIRSB(sbOut, assign(wide2, unop(Iop_1Uto32, arg2)));
+ addStmtToIRSB(sbOut, assign(anded, binop(Iop_And32, mkexpr(wide1),
+ mkexpr(wide2))));
+ addStmtToIRSB(sbOut, assign(res, unop(Iop_32to1, mkexpr(anded))));
+ return mkexpr(res);
+}
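The widen-and-narrow trick mk_And1 generates can be checked with a scalar analogue, using plain integers in place of IR temps (illustrative only, not the helgrind code):

```c
#include <assert.h>

/* Scalar analogue of mk_And1's 32to1(And32(1Uto32(a), 1Uto32(b))). */
static int and1_via_32(int a, int b)
{
   unsigned wide1 = (unsigned)(a & 1);   /* 1Uto32 */
   unsigned wide2 = (unsigned)(b & 1);   /* 1Uto32 */
   unsigned anded = wide1 & wide2;       /* And32  */
   return (int)(anded & 1);              /* 32to1  */
}
```

The detour through 32 bits exists because VEX has no 1-bit And op; the round-trip is semantically just logical AND, as the assertions confirm.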
+
static void instrument_mem_access ( IRSB* sbOut,
IRExpr* addr,
Int szB,
Bool isStore,
Int hWordTy_szB,
- Int goff_sp )
+ Int goff_sp,
+ IRExpr* guard ) /* NULL => True */
{
IRType tyAddr = Ity_INVALID;
const HChar* hName = NULL;
@@ -4273,17 +4295,25 @@
: binop(Iop_Add64, mkexpr(addr_minus_sp), mkU64(rz_szB)))
);
- IRTemp guard = newIRTemp(sbOut->tyenv, Ity_I1);
+ /* guardA == "guard on the address" */
+ IRTemp guardA = newIRTemp(sbOut->tyenv, Ity_I1);
addStmtToIRSB(
sbOut,
- assign(guard,
+ assign(guardA,
tyAddr == Ity_I32
? binop(Iop_CmpLT32U, mkU32(THRESH), mkexpr(diff))
: binop(Iop_CmpLT64U, mkU64(THRESH), mkexpr(diff)))
);
- di->guard = mkexpr(guard);
+ di->guard = mkexpr(guardA);
}
+ /* If there's a guard on the access itself (as supplied by the
+ caller of this routine), we need to AND that in to any guard we
+ might already have. */
+ if (guard) {
+ di->guard = mk_And1(sbOut, di->guard, guard);
+ }
+
/* Add the helper. */
addStmtToIRSB( sbOut, IRStmt_Dirty(di) );
}
@@ -4428,7 +4458,8 @@
(isDCAS ? 2 : 1)
* sizeofIRType(typeOfIRExpr(bbIn->tyenv, cas->dataLo)),
False/*!isStore*/,
- sizeofIRType(hWordTy), goff_sp
+ sizeofIRType(hWordTy), goff_sp,
+ NULL/*no-guard*/
);
}
break;
@@ -4448,7 +4479,8 @@
st->Ist.LLSC.addr,
sizeofIRType(dataTy),
False/*!isStore*/,
- sizeofIRType(hWordTy), goff_sp
+ sizeofIRType(hWordTy), goff_sp,
+ NULL/*no-guard*/
);
}
} else {
@@ -4459,22 +4491,46 @@
}
case Ist_Store:
- /* It seems we pretend that store-conditionals don't
- exist, viz, just ignore them ... */
if (!inLDSO) {
instrument_mem_access(
bbOut,
st->Ist.Store.addr,
sizeofIRType(typeOfIRExpr(bbIn->tyenv, st->Ist.Store.data)),
True/*isStore*/,
- sizeofIRType(hWordTy), goff_sp
+ sizeofIRType(hWordTy), goff_sp,
+ NULL/*no-guard*/
);
}
break;
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ IRExpr* data = sg->data;
+ IRExpr* addr = sg->addr;
+ IRType type = typeOfIRExpr(bbIn->tyenv, data);
+ tl_assert(type != Ity_INVALID);
+ instrument_mem_access( bbOut, addr, sizeofIRType(type),
+ True/*isStore*/,
+ sizeofIRType(hWordTy),
+ goff_sp, sg->guard );
+ break;
+ }
+
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ IRType type = Ity_INVALID; /* loaded type */
+ IRType typeWide = Ity_INVALID; /* after implicit widening */
+ IRExpr* addr = lg->addr;
+ typeOfIRLoadGOp(lg->cvt, &typeWide, &type);
+ tl_assert(type != Ity_INVALID);
+ instrument_mem_access( bbOut, addr, sizeofIRType(type),
+ False/*!isStore*/,
+ sizeofIRType(hWordTy),
+ goff_sp, lg->guard );
+ break;
+ }
+
case Ist_WrTmp: {
- /* ... whereas here we don't care whether a load is a
- vanilla one or a load-linked. */
IRExpr* data = st->Ist.WrTmp.data;
if (data->tag == Iex_Load) {
if (!inLDSO) {
@@ -4483,7 +4539,8 @@
data->Iex.Load.addr,
sizeofIRType(data->Iex.Load.ty),
False/*!isStore*/,
- sizeofIRType(hWordTy), goff_sp
+ sizeofIRType(hWordTy), goff_sp,
+ NULL/*no-guard*/
);
}
}
@@ -4503,7 +4560,7 @@
if (!inLDSO) {
instrument_mem_access(
bbOut, d->mAddr, dataSize, False/*!isStore*/,
- sizeofIRType(hWordTy), goff_sp
+ sizeofIRType(hWordTy), goff_sp, NULL/*no-guard*/
);
}
}
@@ -4511,7 +4568,7 @@
if (!inLDSO) {
instrument_mem_access(
bbOut, d->mAddr, dataSize, True/*isStore*/,
- sizeofIRType(hWordTy), goff_sp
+ sizeofIRType(hWordTy), goff_sp, NULL/*no-guard*/
);
}
}
Modified: trunk/lackey/lk_main.c (+102 -26)
===================================================================
--- trunk/lackey/lk_main.c 2013-01-16 22:07:02 +00:00 (rev 13235)
+++ trunk/lackey/lk_main.c 2013-01-17 14:24:35 +00:00 (rev 13236)
@@ -292,6 +292,10 @@
/*--- Stuff for --detailed-counts ---*/
/*------------------------------------------------------------*/
+typedef
+ IRExpr
+ IRAtom;
+
/* --- Operations --- */
typedef enum { OpLoad=0, OpStore=1, OpAlu=2 } Op;
@@ -351,8 +355,10 @@
(*detail)++;
}
-/* A helper that adds the instrumentation for a detail. */
-static void instrument_detail(IRSB* sb, Op op, IRType type)
+/* A helper that adds the instrumentation for a detail. guard ::
+ Ity_I1 is the guarding condition for the event. If NULL it is
+ assumed to mean "always True". */
+static void instrument_detail(IRSB* sb, Op op, IRType type, IRAtom* guard)
{
IRDirty* di;
IRExpr** argv;
@@ -365,6 +371,7 @@
di = unsafeIRDirty_0_N( 1, "increment_detail",
VG_(fnptr_to_fnentry)( &increment_detail ),
argv);
+ if (guard) di->guard = guard;
addStmtToIRSB( sb, IRStmt_Dirty(di) );
}
@@ -391,10 +398,6 @@
#define MAX_DSIZE 512
-typedef
- IRExpr
- IRAtom;
-
typedef
enum { Event_Ir, Event_Dr, Event_Dw, Event_Dm }
EventKind;
@@ -404,6 +407,7 @@
EventKind ekind;
IRAtom* addr;
Int size;
+ IRAtom* guard; /* :: Ity_I1, or NULL=="always True" */
}
Event;
@@ -500,6 +504,9 @@
di = unsafeIRDirty_0_N( /*regparms*/2,
helperName, VG_(fnptr_to_fnentry)( helperAddr ),
argv );
+ if (ev->guard) {
+ di->guard = ev->guard;
+ }
addStmtToIRSB( sb, IRStmt_Dirty(di) );
}
@@ -524,11 +531,13 @@
evt->ekind = Event_Ir;
evt->addr = iaddr;
evt->size = isize;
+ evt->guard = NULL;
events_used++;
}
+/* Add a guarded read event. */
static
-void addEvent_Dr ( IRSB* sb, IRAtom* daddr, Int dsize )
+void addEvent_Dr_guarded ( IRSB* sb, IRAtom* daddr, Int dsize, IRAtom* guard )
{
Event* evt;
tl_assert(clo_trace_mem);
@@ -541,10 +550,41 @@
evt->ekind = Event_Dr;
evt->addr = daddr;
evt->size = dsize;
+ evt->guard = guard;
events_used++;
}
+/* Add an ordinary read event, by adding a guarded read event with an
+ always-true guard. */
static
+void addEvent_Dr ( IRSB* sb, IRAtom* daddr, Int dsize )
+{
+ addEvent_Dr_guarded(sb, daddr, dsize, NULL);
+}
+
+/* Add a guarded write event. */
+static
+void addEvent_Dw_guarded ( IRSB* sb, IRAtom* daddr, Int dsize, IRAtom* guard )
+{
+ Event* evt;
+ tl_assert(clo_trace_mem);
+ tl_assert(isIRAtom(daddr));
+ tl_assert(dsize >= 1 && dsize <= MAX_DSIZE);
+ if (events_used == N_EVENTS)
+ flushEvents(sb);
+ tl_assert(events_used >= 0 && events_used < N_EVENTS);
+ evt = &events[events_used];
+ evt->ekind = Event_Dw;
+ evt->addr = daddr;
+ evt->size = dsize;
+ evt->guard = guard;
+ events_used++;
+}
+
+/* Add an ordinary write event. Try to merge it with an immediately
+ preceding ordinary read event of the same size to the same
+ address. */
+static
void addEvent_Dw ( IRSB* sb, IRAtom* daddr, Int dsize )
{
Event* lastEvt;
@@ -556,9 +596,10 @@
// Is it possible to merge this write with the preceding read?
lastEvt = &events[events_used-1];
if (events_used > 0
- && lastEvt->ekind == Event_Dr
- && lastEvt->size == dsize
- && eqIRAtom(lastEvt->addr, daddr))
+ && lastEvt->ekind == Event_Dr
+ && lastEvt->size == dsize
+ && lastEvt->guard == NULL
+ && eqIRAtom(lastEvt->addr, daddr))
{
lastEvt->ekind = Event_Dm;
return;
@@ -572,6 +613,7 @@
evt->ekind = Event_Dw;
evt->size = dsize;
evt->addr = daddr;
+ evt->guard = NULL;
events_used++;
}
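The merge condition above (an ordinary write directly after an unguarded read of the same size and address becomes a single "modify" event) can be sketched as a toy model. The struct and names are stand-ins, not the real lackey types.

```c
#include <assert.h>

typedef enum { EV_DR, EV_DW, EV_DM } EvKind;            /* stand-ins */
typedef struct { EvKind kind; long addr; int size; int guarded; } Ev;

/* Append a write event, merging with an immediately preceding
   unguarded read of the same size/address into a modify event. */
static int add_write(Ev* evs, int used, long addr, int size)
{
   if (used > 0) {
      Ev* last = &evs[used - 1];
      if (last->kind == EV_DR && last->size == size
          && !last->guarded && last->addr == addr) {
         last->kind = EV_DM;   /* read + write == modify */
         return used;
      }
   }
   evs[used].kind = EV_DW;
   evs[used].addr = addr;
   evs[used].size = size;
   evs[used].guarded = 0;
   return used + 1;
}
```

Note the `guard == NULL` check in the real code: a guarded read must never be merged, since the write is unconditional while the read might not have happened.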
@@ -613,7 +655,6 @@
Int i;
IRSB* sbOut;
HChar fnname[100];
- IRType type;
IRTypeEnv* tyenv = sbIn->tyenv;
Addr iaddr = 0, dst;
UInt ilen = 0;
@@ -734,18 +775,18 @@
}
if (clo_detailed_counts) {
IRExpr* expr = st->Ist.WrTmp.data;
- type = typeOfIRExpr(sbOut->tyenv, expr);
+ IRType type = typeOfIRExpr(sbOut->tyenv, expr);
tl_assert(type != Ity_INVALID);
switch (expr->tag) {
case Iex_Load:
- instrument_detail( sbOut, OpLoad, type );
+ instrument_detail( sbOut, OpLoad, type, NULL/*guard*/ );
break;
case Iex_Unop:
case Iex_Binop:
case Iex_Triop:
case Iex_Qop:
case Iex_Mux0X:
- instrument_detail( sbOut, OpAlu, type );
+ instrument_detail( sbOut, OpAlu, type, NULL/*guard*/ );
break;
default:
break;
@@ -754,20 +795,54 @@
addStmtToIRSB( sbOut, st );
break;
- case Ist_Store:
+ case Ist_Store: {
+ IRExpr* data = st->Ist.Store.data;
+ IRType type = typeOfIRExpr(tyenv, data);
+ tl_assert(type != Ity_INVALID);
if (clo_trace_mem) {
- IRExpr* data = st->Ist.Store.data;
addEvent_Dw( sbOut, st->Ist.Store.addr,
- sizeofIRType(typeOfIRExpr(tyenv, data)) );
+ sizeofIRType(type) );
}
if (clo_detailed_counts) {
- type = typeOfIRExpr(sbOut->tyenv, st->Ist.Store.data);
- tl_assert(type != Ity_INVALID);
- instrument_detail( sbOut, OpStore, type );
+ instrument_detail( sbOut, OpStore, type, NULL/*guard*/ );
}
addStmtToIRSB( sbOut, st );
break;
+ }
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ IRExpr* data = sg->data;
+ IRType type = typeOfIRExpr(tyenv, data);
+ tl_assert(type != Ity_INVALID);
+ if (clo_trace_mem) {
+ addEvent_Dw_guarded( sbOut, sg->addr,
+ sizeofIRType(type), sg->guard );
+ }
+ if (clo_detailed_counts) {
+ instrument_detail( sbOut, OpStore, type, sg->guard );
+ }
+ addStmtToIRSB( sbOut, st );
+ break;
+ }
+
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ IRType type = Ity_INVALID; /* loaded type */
+ IRType typeWide = Ity_INVALID; /* after implicit widening */
+ typeOfIRLoadGOp(lg->cvt, &typeWide, &type);
+ tl_assert(type != Ity_INVALID);
+ if (clo_trace_mem) {
+ addEvent_Dr_guarded( sbOut, lg->addr,
+ sizeofIRType(type), lg->guard );
+ }
+ if (clo_detailed_counts) {
+ instrument_detail( sbOut, OpLoad, type, lg->guard );
+ }
+ addStmtToIRSB( sbOut, st );
+ break;
+ }
+
case Ist_Dirty: {
if (clo_trace_mem) {
Int dsize;
@@ -810,12 +885,12 @@
addEvent_Dw( sbOut, cas->addr, dataSize );
}
if (clo_detailed_counts) {
- instrument_detail( sbOut, OpLoad, dataTy );
+ instrument_detail( sbOut, OpLoad, dataTy, NULL/*guard*/ );
if (cas->dataHi != NULL) /* dcas */
- instrument_detail( sbOut, OpLoad, dataTy );
- instrument_detail( sbOut, OpStore, dataTy );
+ instrument_detail( sbOut, OpLoad, dataTy, NULL/*guard*/ );
+ instrument_detail( sbOut, OpStore, dataTy, NULL/*guard*/ );
if (cas->dataHi != NULL) /* dcas */
- instrument_detail( sbOut, OpStore, dataTy );
+ instrument_detail( sbOut, OpStore, dataTy, NULL/*guard*/ );
}
addStmtToIRSB( sbOut, st );
break;
@@ -833,7 +908,7 @@
flushEvents(sbOut);
}
if (clo_detailed_counts)
- instrument_detail( sbOut, OpLoad, dataTy );
+ instrument_detail( sbOut, OpLoad, dataTy, NULL/*guard*/ );
} else {
/* SC */
dataTy = typeOfIRExpr(tyenv, st->Ist.LLSC.storedata);
@@ -841,7 +916,7 @@
addEvent_Dw( sbOut, st->Ist.LLSC.addr,
sizeofIRType(dataTy) );
if (clo_detailed_counts)
- instrument_detail( sbOut, OpStore, dataTy );
+ instrument_detail( sbOut, OpStore, dataTy, NULL/*guard*/ );
}
addStmtToIRSB( sbOut, st );
break;
@@ -893,6 +968,7 @@
break;
default:
+ ppIRStmt(st);
tl_assert(0);
}
}
Modified: trunk/drd/drd_load_store.c (+70 -29)
===================================================================
--- trunk/drd/drd_load_store.c 2013-01-16 22:07:02 +00:00 (rev 13235)
+++ trunk/drd/drd_load_store.c 2013-01-17 14:24:35 +00:00 (rev 13236)
@@ -344,22 +344,23 @@
* Instrument the client code to trace a memory load (--trace-addr).
*/
static IRExpr* instr_trace_mem_load(IRSB* const bb, IRExpr* addr_expr,
- ...
[truncated message content]

From: <sv...@va...> - 2013-01-17 14:24:05

sewardj 2013-01-17 14:23:53 +0000 (Thu, 17 Jan 2013)
New Revision: 2642
Log:
Merge, from branches/COMEM, revisions 2568 to 2641.
Modified directories:
trunk/
Modified files:
trunk/priv/guest_arm_toIR.c
trunk/priv/host_amd64_defs.c
trunk/priv/host_amd64_defs.h
trunk/priv/host_amd64_isel.c
trunk/priv/host_arm_defs.c
trunk/priv/host_arm_defs.h
trunk/priv/host_arm_isel.c
trunk/priv/host_generic_regs.c
trunk/priv/host_generic_regs.h
trunk/priv/host_mips_defs.c
trunk/priv/host_mips_defs.h
trunk/priv/host_mips_isel.c
trunk/priv/host_ppc_defs.c
trunk/priv/host_ppc_defs.h
trunk/priv/host_ppc_isel.c
trunk/priv/host_s390_defs.c
trunk/priv/host_x86_defs.c
trunk/priv/host_x86_defs.h
trunk/priv/host_x86_isel.c
trunk/priv/ir_defs.c
trunk/priv/ir_opt.c
trunk/pub/libvex_ir.h
Modified: trunk/
Modified: trunk/priv/host_s390_defs.c (+15 -0)
===================================================================
--- trunk/priv/host_s390_defs.c 2013-01-16 22:11:13 +00:00 (rev 2641)
+++ trunk/priv/host_s390_defs.c 2013-01-17 14:23:53 +00:00 (rev 2642)
@@ -8291,16 +8291,29 @@
}
+/* Returns a value == BUF to denote failure, != BUF to denote success. */
static UChar *
s390_insn_helper_call_emit(UChar *buf, const s390_insn *insn)
{
s390_cc_t cond;
ULong target;
UChar *ptmp = buf;
+ UChar *bufIN = buf;
cond = insn->variant.helper_call.cond;
target = insn->variant.helper_call.target;
+ if (cond != S390_CC_ALWAYS
+ && insn->variant.helper_call.dst != INVALID_HREG) {
+ /* The call might not happen (it isn't unconditional) and it
+ returns a result. In this case we will need to generate a
+ control flow diamond to put 0x555..555 in the return
+ register(s) in the case where the call doesn't happen. If
+ this ever becomes necessary, maybe copy code from the ARM
+ equivalent. Until that day, just give up. */
+ return bufIN; /* To denote failure. */
+ }
+
if (cond != S390_CC_ALWAYS) {
/* So we have something like this
if (cond) call X;
@@ -9473,6 +9486,7 @@
case S390_INSN_HELPER_CALL:
end = s390_insn_helper_call_emit(buf, insn);
+ if (end == buf) goto fail;
break;
case S390_INSN_BFP_TRIOP:
@@ -9559,6 +9573,7 @@
end = s390_insn_xassisted_emit(buf, insn, disp_cp_xassisted);
break;
+ fail:
default:
vpanic("emit_S390Instr");
}
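The failure convention introduced above (return the input buffer pointer unchanged to mean "cannot encode", anything else to mean success) can be shown with a hypothetical stand-in emitter, not the real s390 routine:

```c
#include <assert.h>

/* Model of the emit-routine convention: == buf denotes failure,
   != buf denotes success (bytes were emitted). */
static unsigned char* emit_or_fail(unsigned char* buf, int encodable)
{
   unsigned char* bufIN = buf;
   if (!encodable)
      return bufIN;          /* == buf: to denote failure */
   *buf++ = 0x07;            /* pretend to emit one instruction byte */
   return buf;               /* != buf: success */
}
```

The caller's side is the `if (end == buf) goto fail;` check added in the dispatch switch: since a successful emit always advances the pointer, pointer equality is an unambiguous in-band failure signal.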
Modified: trunk/priv/host_mips_isel.c (+48 -11)
===================================================================
--- trunk/priv/host_mips_isel.c 2013-01-16 22:11:13 +00:00 (rev 2641)
+++ trunk/priv/host_mips_isel.c 2013-01-17 14:23:53 +00:00 (rev 2642)
@@ -362,7 +362,7 @@
call is unconditional. */
static void doHelperCall(ISelEnv * env, Bool passBBP, IRExpr * guard,
- IRCallee * cee, IRExpr ** args)
+ IRCallee * cee, IRExpr ** args, RetLoc rloc)
{
MIPSCondCode cc;
HReg argregs[MIPS_N_REGPARMS];
@@ -526,13 +526,13 @@
/* Finally, the call itself. */
if (mode64)
if (cc == MIPScc_AL) {
- addInstr(env, MIPSInstr_CallAlways(cc, target, argiregs));
+ addInstr(env, MIPSInstr_CallAlways(cc, target, argiregs, rloc));
} else {
- addInstr(env, MIPSInstr_Call(cc, target, argiregs, src));
+ addInstr(env, MIPSInstr_Call(cc, target, argiregs, src, rloc));
} else if (cc == MIPScc_AL) {
- addInstr(env, MIPSInstr_CallAlways(cc, (Addr32) target, argiregs));
+ addInstr(env, MIPSInstr_CallAlways(cc, (Addr32) target, argiregs, rloc));
} else {
- addInstr(env, MIPSInstr_Call(cc, (Addr32) target, argiregs, src));
+ addInstr(env, MIPSInstr_Call(cc, (Addr32) target, argiregs, src, rloc));
}
/* restore GuestStatePointer */
addInstr(env, MIPSInstr_Load(4, GuestStatePointer(mode64),
@@ -1485,13 +1485,26 @@
HReg r_dst = newVRegI(env);
vassert(ty == e->Iex.CCall.retty);
- /* be very restrictive for now. Only 32/64-bit ints allowed
- for args, and 32 bits for return type. */
+ /* be very restrictive for now. Only 32/64-bit ints allowed for
+ args, and 32 bits for return type. Don't forget to change
+ the RetLoc if more return types are allowed in future. */
if (e->Iex.CCall.retty != Ity_I32 && !mode64)
goto irreducible;
+ /* What's the retloc? */
+ RetLoc rloc = RetLocINVALID;
+ if (ty == Ity_I32) {
+ rloc = RetLocInt;
+ }
+ else if (ty == Ity_I64) {
+ rloc = mode64 ? RetLocInt : RetLoc2Int;
+ }
+ else {
+ goto irreducible;
+ }
+
/* Marshal args, do the call, clear stack. */
- doHelperCall(env, False, NULL, e->Iex.CCall.cee, e->Iex.CCall.args);
+ doHelperCall(env, False, NULL, e->Iex.CCall.cee, e->Iex.CCall.args, rloc);
addInstr(env, mk_iMOVds_RR(r_dst, hregMIPS_GPR2(mode64)));
return r_dst;
}
@@ -2895,23 +2908,47 @@
/* --------- Call to DIRTY helper --------- */
case Ist_Dirty: {
- IRType retty;
IRDirty *d = stmt->Ist.Dirty.details;
Bool passBBP = False;
if (d->nFxState == 0)
vassert(!d->needsBBP);
+
passBBP = toBool(d->nFxState > 0 && d->needsBBP);
+ /* Figure out the return type, if any. */
+ IRType retty = Ity_INVALID;
+ if (d->tmp != IRTemp_INVALID)
+ retty = typeOfIRTemp(env->type_env, d->tmp);
+
+ /* Marshal args, do the call, clear stack, set the return
+ value to 0x555..555 if this is a conditional call that
+ returns a value and the call is skipped. We need to set
+ the ret-loc correctly in order to implement the IRDirty
+ semantics that the return value is 0x555..555 if the call
+ doesn't happen. */
+ RetLoc rloc = RetLocINVALID;
+ switch (retty) {
+ case Ity_INVALID: /* function doesn't return anything */
+ rloc = RetLocNone; break;
+ case Ity_I64:
+ rloc = mode64 ? RetLocInt : RetLoc2Int; break;
+ case Ity_I32: case Ity_I16: case Ity_I8:
+ rloc = RetLocInt; break;
+ default:
+ break;
+ }
+ if (rloc == RetLocINVALID)
+ break; /* will go to stmt_fail: */
+
/* Marshal args, do the call, clear stack. */
- doHelperCall(env, passBBP, d->guard, d->cee, d->args);
+ doHelperCall(env, passBBP, d->guard, d->cee, d->args, rloc);
/* Now figure out what to do with the returned value, if any. */
if (d->tmp == IRTemp_INVALID)
/* No return value. Nothing to do. */
return;
- retty = typeOfIRTemp(env->type_env, d->tmp);
if (retty == Ity_I64 && !mode64) {
HReg rHi = newVRegI(env);
HReg rLo = newVRegI(env);
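The retty-to-RetLoc selection used in the MIPS Ist_Dirty case (and repeated in the PPC backend below) can be modelled with stand-in enums; the real `IRType`/`RetLoc` definitions live in the VEX headers.

```c
#include <assert.h>

typedef enum { TY_INVALID, TY_I8, TY_I16, TY_I32, TY_I64 } Ty;   /* stand-ins */
typedef enum { RL_INVALID, RL_None, RL_Int, RL_2Int } RetLoc;

/* Pick the return location for a dirty-helper result of type retty. */
static RetLoc retloc_for(Ty retty, int mode64)
{
   switch (retty) {
      case TY_INVALID:                  /* helper returns nothing */
         return RL_None;
      case TY_I64:                      /* needs a register pair in 32-bit mode */
         return mode64 ? RL_Int : RL_2Int;
      case TY_I32: case TY_I16: case TY_I8:
         return RL_Int;
      default:
         return RL_INVALID;             /* caller must bail out */
   }
}
```

Getting the RetLoc right matters for conditional calls: it tells the backend which register(s) must be set to the 0x555..555 didn't-happen value when the guard is false.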
Modified: trunk/priv/host_ppc_isel.c (+48 -13)
===================================================================
--- trunk/priv/host_ppc_isel.c 2013-01-16 22:11:13 +00:00 (rev 2641)
+++ trunk/priv/host_ppc_isel.c 2013-01-17 14:23:53 +00:00 (rev 2642)
@@ -1,5 +1,4 @@
-
/*---------------------------------------------------------------*/
/*--- begin host_ppc_isel.c ---*/
/*---------------------------------------------------------------*/
@@ -672,7 +671,8 @@
static
void doHelperCall ( ISelEnv* env,
Bool passBBP,
- IRExpr* guard, IRCallee* cee, IRExpr** args )
+ IRExpr* guard, IRCallee* cee, IRExpr** args,
+ RetLoc rloc )
{
PPCCondCode cc;
HReg argregs[PPC_N_REGPARMS];
@@ -902,7 +902,7 @@
toUInt(Ptr_to_ULong(cee->addr));
/* Finally, the call itself. */
- addInstr(env, PPCInstr_Call( cc, (Addr64)target, argiregs ));
+ addInstr(env, PPCInstr_Call( cc, (Addr64)target, argiregs, rloc ));
}
@@ -2008,6 +2008,9 @@
break;
case Iop_BCDtoDPB: {
+ /* the following is only valid in 64 bit mode */
+ if (!mode64) break;
+
PPCCondCode cc;
UInt argiregs;
HReg argregs[1];
@@ -2026,13 +2029,17 @@
cc = mk_PPCCondCode( Pct_ALWAYS, Pcf_NONE );
fdescr = (HWord*)h_BCDtoDPB;
- addInstr(env, PPCInstr_Call( cc, (Addr64)(fdescr[0]), argiregs ) );
+ addInstr(env, PPCInstr_Call( cc, (Addr64)(fdescr[0]),
+ argiregs, RetLocInt) );
addInstr(env, mk_iMOVds_RR(r_dst, argregs[0]));
return r_dst;
}
case Iop_DPBtoBCD: {
+ /* the following is only valid in 64 bit mode */
+ if (!mode64) break;
+
PPCCondCode cc;
UInt argiregs;
HReg argregs[1];
@@ -2051,7 +2058,8 @@
cc = mk_PPCCondCode( Pct_ALWAYS, Pcf_NONE );
fdescr = (HWord*)h_DPBtoBCD;
- addInstr(env, PPCInstr_Call( cc, (Addr64)(fdescr[0]), argiregs ) );
+ addInstr(env, PPCInstr_Call( cc, (Addr64)(fdescr[0]),
+ argiregs, RetLocInt ) );
addInstr(env, mk_iMOVds_RR(r_dst, argregs[0]));
return r_dst;
@@ -2100,14 +2108,15 @@
HReg r_dst = newVRegI(env);
vassert(ty == Ity_I32);
- /* be very restrictive for now. Only 32/64-bit ints allowed
- for args, and 32 bits for return type. */
+ /* be very restrictive for now. Only 32/64-bit ints allowed for
+ args, and 32 bits for return type. Don't forget to change
+ the RetLoc if more return types are allowed in future. */
if (e->Iex.CCall.retty != Ity_I32)
goto irreducible;
/* Marshal args, do the call, clear stack. */
doHelperCall( env, False, NULL,
- e->Iex.CCall.cee, e->Iex.CCall.args );
+ e->Iex.CCall.cee, e->Iex.CCall.args, RetLocInt );
/* GPR3 now holds the destination address from Pin_Goto */
addInstr(env, mk_iMOVds_RR(r_dst, hregPPC_GPR3(mode64)));
@@ -3262,7 +3271,8 @@
cc = mk_PPCCondCode( Pct_ALWAYS, Pcf_NONE );
target = toUInt( Ptr_to_ULong(h_BCDtoDPB ) );
- addInstr( env, PPCInstr_Call( cc, (Addr64)target, argiregs ) );
+ addInstr( env, PPCInstr_Call( cc, (Addr64)target,
+ argiregs, RetLoc2Int ) );
addInstr( env, mk_iMOVds_RR( tHi, argregs[argreg-1] ) );
addInstr( env, mk_iMOVds_RR( tLo, argregs[argreg] ) );
@@ -3301,7 +3311,8 @@
target = toUInt( Ptr_to_ULong( h_DPBtoBCD ) );
- addInstr(env, PPCInstr_Call( cc, (Addr64)target, argiregs ) );
+ addInstr(env, PPCInstr_Call( cc, (Addr64)target,
+ argiregs, RetLoc2Int ) );
addInstr(env, mk_iMOVds_RR(tHi, argregs[argreg-1]));
addInstr(env, mk_iMOVds_RR(tLo, argregs[argreg]));
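In the two hunks above, the 32-bit PPC path receives the helper's 64-bit result in a register pair (hence RetLoc2Int) and copies it into tHi/tLo. As a plain-C illustration of that split — a hypothetical stand-alone sketch, not VEX code:

```c
#include <assert.h>
#include <stdint.h>

/* Model of RetLoc2Int on 32-bit PPC: a 64-bit helper result comes
   back as two 32-bit halves (r3 = high word, r4 = low word), which
   the isel code then parks in the tHi/tLo temporaries. */
static void retloc2int_split(uint64_t result, uint32_t* tHi, uint32_t* tLo)
{
    *tHi = (uint32_t)(result >> 32);  /* high word, as in r3 */
    *tLo = (uint32_t)result;          /* low word, as in r4 */
}
```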
@@ -4973,7 +4984,6 @@
/* --------- Call to DIRTY helper --------- */
case Ist_Dirty: {
- IRType retty;
IRDirty* d = stmt->Ist.Dirty.details;
Bool passBBP = False;
@@ -4981,15 +4991,38 @@
vassert(!d->needsBBP);
passBBP = toBool(d->nFxState > 0 && d->needsBBP);
+ /* Figure out the return type, if any. */
+ IRType retty = Ity_INVALID;
+ if (d->tmp != IRTemp_INVALID)
+ retty = typeOfIRTemp(env->type_env, d->tmp);
+
+ /* Marshal args, do the call, clear stack, set the return value
+ to 0x555..555 if this is a conditional call that returns a
+ value and the call is skipped. We need to set the ret-loc
+ correctly in order to implement the IRDirty semantics that
+ the return value is 0x555..555 if the call doesn't happen. */
+ RetLoc rloc = RetLocINVALID;
+ switch (retty) {
+ case Ity_INVALID: /* function doesn't return anything */
+ rloc = RetLocNone; break;
+ case Ity_I64:
+ rloc = mode64 ? RetLocInt : RetLoc2Int; break;
+ case Ity_I32: case Ity_I16: case Ity_I8:
+ rloc = RetLocInt; break;
+ default:
+ break;
+ }
+ if (rloc == RetLocINVALID)
+ break; /* will go to stmt_fail: */
+
/* Marshal args, do the call, clear stack. */
- doHelperCall( env, passBBP, d->guard, d->cee, d->args );
+ doHelperCall( env, passBBP, d->guard, d->cee, d->args, rloc );
/* Now figure out what to do with the returned value, if any. */
if (d->tmp == IRTemp_INVALID)
/* No return value. Nothing to do. */
return;
- retty = typeOfIRTemp(env->type_env, d->tmp);
if (!mode64 && retty == Ity_I64) {
HReg r_dstHi, r_dstLo;
/* The returned value is in %r3:%r4. Park it in the
@@ -4997,6 +5030,7 @@
lookupIRTempPair( &r_dstHi, &r_dstLo, env, d->tmp);
addInstr(env, mk_iMOVds_RR(r_dstHi, hregPPC_GPR3(mode64)));
addInstr(env, mk_iMOVds_RR(r_dstLo, hregPPC_GPR4(mode64)));
+ vassert(rloc == RetLoc2Int);
return;
}
if (retty == Ity_I8 || retty == Ity_I16 ||
@@ -5005,6 +5039,7 @@
associated with tmp. */
HReg r_dst = lookupIRTemp(env, d->tmp);
addInstr(env, mk_iMOVds_RR(r_dst, hregPPC_GPR3(mode64)));
+ vassert(rloc == RetLocInt);
return;
}
break;
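The Ist_Dirty hunk above maps the dirty helper's IR return type onto a RetLoc before calling doHelperCall. A minimal sketch of that mapping, using hypothetical stand-in enums rather than the real VEX definitions:

```c
#include <assert.h>

/* Hypothetical stand-ins for the VEX enums; the real definitions
   live in the VEX private headers. */
typedef enum { Ity_INVALID, Ity_I8, Ity_I16, Ity_I32, Ity_I64 } IRType;
typedef enum { RetLocINVALID, RetLocNone, RetLocInt, RetLoc2Int } RetLoc;

/* Mirror of the retty -> rloc switch in the PPC Ist_Dirty case:
   no return value -> RetLocNone; an I64 result needs two registers
   (r3:r4) in 32-bit mode but only one in 64-bit mode; smaller ints
   fit in one register.  Anything else is RetLocINVALID, which makes
   the caller fall through to stmt_fail. */
static RetLoc retLocFor(IRType retty, int mode64)
{
    switch (retty) {
        case Ity_INVALID: return RetLocNone;
        case Ity_I64:     return mode64 ? RetLocInt : RetLoc2Int;
        case Ity_I32: case Ity_I16: case Ity_I8:
                          return RetLocInt;
        default:          return RetLocINVALID;
    }
}
```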
Modified: trunk/pub/libvex_ir.h (+139 -28)
===================================================================
--- trunk/pub/libvex_ir.h 2013-01-16 22:11:13 +00:00 (rev 2641)
+++ trunk/pub/libvex_ir.h 2013-01-17 14:23:53 +00:00 (rev 2642)
@@ -218,7 +218,7 @@
float, or a vector (SIMD) value. */
typedef
enum {
- Ity_INVALID=0x11000,
+ Ity_INVALID=0x1100,
Ity_I1,
Ity_I8,
Ity_I16,
@@ -248,7 +248,7 @@
/* IREndness is used in load IRExprs and store IRStmts. */
typedef
enum {
- Iend_LE=0x12000, /* little endian */
+ Iend_LE=0x1200, /* little endian */
Iend_BE /* big endian */
}
IREndness;
@@ -261,7 +261,7 @@
/* The various kinds of constant. */
typedef
enum {
- Ico_U1=0x13000,
+ Ico_U1=0x1300,
Ico_U8,
Ico_U16,
Ico_U32,
@@ -412,7 +412,7 @@
/* -- Do not change this ordering. The IR generators rely on
(eg) Iop_Add64 == IopAdd8 + 3. -- */
- Iop_INVALID=0x14000,
+ Iop_INVALID=0x1400,
Iop_Add8, Iop_Add16, Iop_Add32, Iop_Add64,
Iop_Sub8, Iop_Sub16, Iop_Sub32, Iop_Sub64,
/* Signless mul. MullS/MullU is elsewhere. */
@@ -1604,7 +1604,7 @@
in the comments for IRExpr. */
typedef
enum {
- Iex_Binder=0x15000,
+ Iex_Binder=0x1900,
Iex_Get,
Iex_GetI,
Iex_RdTmp,
@@ -1813,7 +1813,7 @@
} Iex;
};
-/* ------------------ A ternary expression ---------------------- */
+/* Expression auxiliaries: a ternary expression. */
struct _IRTriop {
IROp op; /* op-code */
IRExpr* arg1; /* operand 1 */
@@ -1821,7 +1821,7 @@
IRExpr* arg3; /* operand 3 */
};
-/* ------------------ A quarternary expression ------------------ */
+/* Expression auxiliaries: a quaternary expression. */
struct _IRQop {
IROp op; /* op-code */
IRExpr* arg1; /* operand 1 */
@@ -1926,7 +1926,7 @@
*/
typedef
enum {
- Ijk_INVALID=0x16000,
+ Ijk_INVALID=0x1A00,
Ijk_Boring, /* not interesting; just goto next */
Ijk_Call, /* guest is doing a call */
Ijk_Ret, /* guest is doing a return */
@@ -1973,19 +1973,20 @@
Dirty calls are statements rather than expressions for obvious
reasons. If a dirty call is marked as writing guest state, any
- values derived from the written parts of the guest state are
- invalid. Similarly, if the dirty call is stated as writing
- memory, any loaded values are invalidated by it.
+ pre-existing values derived from the written parts of the guest
+ state are invalid. Similarly, if the dirty call is stated as
+ writing memory, any pre-existing loaded values are invalidated by
+ it.
In order that instrumentation is possible, the call must state, and
state correctly:
- * whether it reads, writes or modifies memory, and if so where
- (only one chunk can be stated)
+ * Whether it reads, writes or modifies memory, and if so where.
- * whether it reads, writes or modifies guest state, and if so which
- pieces (several pieces may be stated, and currently their extents
- must be known at translation-time).
+ * Whether it reads, writes or modifies guest state, and if so which
+ pieces. Several pieces may be stated, and their extents must be
+ known at translation-time. Each piece is allowed to repeat some
+ number of times at a fixed interval, if required.
Normally, code is generated to pass just the args to the helper.
However, if .needsBBP is set, then an extra first argument is
@@ -1995,9 +1996,11 @@
call does not access guest state.
IMPORTANT NOTE re GUARDS: Dirty calls are strict, very strict. The
- arguments are evaluated REGARDLESS of the guard value. The order of
- argument evaluation is unspecified. The guard expression is evaluated
- AFTER the arguments have been evaluated.
+ arguments and 'mFx' are evaluated REGARDLESS of the guard value.
+ The order of argument evaluation is unspecified. The guard
+ expression is evaluated AFTER the arguments and 'mFx' have been
+ evaluated. 'mFx' is expected (by Memcheck) to be a defined value
+ even if the guard evaluates to false.
*/
#define VEX_N_FXSTATE 7 /* enough for FXSAVE/FXRSTOR on x86 */
@@ -2005,7 +2008,7 @@
/* Effects on resources (eg. registers, memory locations) */
typedef
enum {
- Ifx_None = 0x1700, /* no effect */
+ Ifx_None=0x1B00, /* no effect */
Ifx_Read, /* reads the resource */
Ifx_Write, /* writes the resource */
Ifx_Modify, /* modifies the resource */
@@ -2015,14 +2018,13 @@
/* Pretty-print an IREffect */
extern void ppIREffect ( IREffect );
-
typedef
struct _IRDirty {
/* What to call, and details of args/results. .guard must be
- non-NULL. If .tmp is not IRTemp_INVALID (that is, the call
- returns a result) then .guard must be demonstrably (at
- JIT-time) always true, that is, the call must be
- unconditional. Conditional calls that assign .tmp are not
+ non-NULL. If .tmp is not IRTemp_INVALID, then the call
+ returns a result which is placed in .tmp. If at runtime the
+ guard evaluates to false, .tmp has a 0x555..555 bit pattern
+ written to it. Hence conditional calls that assign .tmp are
allowed. */
IRCallee* cee; /* where to call */
IRExpr* guard; /* :: Ity_Bit. Controls whether call happens */
@@ -2092,7 +2094,7 @@
typedef
enum {
- Imbe_Fence=0x18000,
+ Imbe_Fence=0x1C00,
/* Needed only on ARM. It cancels a reservation made by a
preceding Linked-Load, and needs to be handed through to the
back end, just as LL and SC themselves are. */
@@ -2192,6 +2194,7 @@
/* ------------------ Circular Array Put ------------------ */
+
typedef
struct {
IRRegArray* descr; /* Part of guest state treated as circular */
@@ -2208,6 +2211,85 @@
extern IRPutI* deepCopyIRPutI ( IRPutI* );
+/* --------------- Guarded loads and stores --------------- */
+
+/* Conditional stores are straightforward. They are the same as
+ normal stores, with an extra 'guard' field :: Ity_I1 that
+ determines whether or not the store actually happens. If not,
+ memory is unmodified.
+
+ The semantics of this is that 'addr' and 'data' are fully evaluated
+ even in the case where 'guard' evaluates to zero (false).
+*/
+typedef
+ struct {
+ IREndness end; /* Endianness of the store */
+ IRExpr* addr; /* store address */
+ IRExpr* data; /* value to write */
+ IRExpr* guard; /* Guarding value */
+ }
+ IRStoreG;
+
+/* Conditional loads are a little more complex. 'addr' is the
+ address, 'guard' is the guarding condition. If the load takes
+ place, the loaded value is placed in 'dst'. If it does not take
+ place, 'alt' is copied to 'dst'. However, the loaded value is not
+ placed directly in 'dst' -- it is first subjected to the conversion
+ specified by 'cvt'.
+
+ For example, imagine doing a conditional 8-bit load, in which the
+ loaded value is zero extended to 32 bits. Hence:
+ * 'dst' and 'alt' must have type I32
+ * 'cvt' must be a unary op which converts I8 to I32. In this
+ example, it would be ILGop_8Uto32.
+
+ There is no explicit indication of the type at which the load is
+ done, since that is inferrable from the arg type of 'cvt'. Note
+ that the types of 'alt' and 'dst' and the result type of 'cvt' must
+ all be the same.
+
+ Semantically, 'addr' is evaluated even in the case where 'guard'
+ evaluates to zero (false), and 'alt' is evaluated even when 'guard'
+ evaluates to one (true). That is, 'addr' and 'alt' are always
+ evaluated.
+*/
+typedef
+ enum {
+ ILGop_INVALID=0x1D00,
+ ILGop_Ident32, /* 32 bit, no conversion */
+ ILGop_16Uto32, /* 16 bit load, Z-widen to 32 */
+ ILGop_16Sto32, /* 16 bit load, S-widen to 32 */
+ ILGop_8Uto32, /* 8 bit load, Z-widen to 32 */
+ ILGop_8Sto32 /* 8 bit load, S-widen to 32 */
+ }
+ IRLoadGOp;
+
+typedef
+ struct {
+ IREndness end; /* Endianness of the load */
+ IRLoadGOp cvt; /* Conversion to apply to the loaded value */
+ IRTemp dst; /* Destination (LHS) of assignment */
+ IRExpr* addr; /* Address being loaded from */
+ IRExpr* alt; /* Value if load is not done. */
+ IRExpr* guard; /* Guarding value */
+ }
+ IRLoadG;
+
+extern void ppIRStoreG ( IRStoreG* sg );
+
+extern void ppIRLoadGOp ( IRLoadGOp cvt );
+
+extern void ppIRLoadG ( IRLoadG* lg );
+
+extern IRStoreG* mkIRStoreG ( IREndness end,
+ IRExpr* addr, IRExpr* data,
+ IRExpr* guard );
+
+extern IRLoadG* mkIRLoadG ( IREndness end, IRLoadGOp cvt,
+ IRTemp dst, IRExpr* addr, IRExpr* alt,
+ IRExpr* guard );
+
+
/* ------------------ Statements ------------------ */
/* The different kinds of statements. Their meaning is explained
@@ -2222,17 +2304,19 @@
typedef
enum {
- Ist_NoOp=0x19000,
+ Ist_NoOp=0x1E00,
Ist_IMark, /* META */
Ist_AbiHint, /* META */
Ist_Put,
Ist_PutI,
Ist_WrTmp,
Ist_Store,
+ Ist_LoadG,
+ Ist_StoreG,
Ist_CAS,
Ist_LLSC,
Ist_Dirty,
- Ist_MBE, /* META (maybe) */
+ Ist_MBE,
Ist_Exit
}
IRStmtTag;
@@ -2349,6 +2433,24 @@
IRExpr* data; /* value to write */
} Store;
+ /* Guarded store. Note that this is defined to evaluate all
+ expression fields (addr, data) even if the guard evaluates
+ to false.
+ ppIRStmt output:
+ if (<guard>) ST<end>(<addr>) = <data> */
+ struct {
+ IRStoreG* details;
+ } StoreG;
+
+ /* Guarded load. Note that this is defined to evaluate all
+ expression fields (addr, alt) even if the guard evaluates
+ to false.
+ ppIRStmt output:
+ t<tmp> = if (<guard>) <cvt>(LD<end>(<addr>)) else <alt> */
+ struct {
+ IRLoadG* details;
+ } LoadG;
+
/* Do an atomic compare-and-swap operation. Semantics are
described above on a comment at the definition of IRCAS.
@@ -2468,6 +2570,10 @@
extern IRStmt* IRStmt_PutI ( IRPutI* details );
extern IRStmt* IRStmt_WrTmp ( IRTemp tmp, IRExpr* data );
extern IRStmt* IRStmt_Store ( IREndness end, IRExpr* addr, IRExpr* data );
+extern IRStmt* IRStmt_StoreG ( IREndness end, IRExpr* addr, IRExpr* data,
+ IRExpr* guard );
+extern IRStmt* IRStmt_LoadG ( IREndness end, IRLoadGOp cvt, IRTemp dst,
+ IRExpr* addr, IRExpr* alt, IRExpr* guard );
extern IRStmt* IRStmt_CAS ( IRCAS* details );
extern IRStmt* IRStmt_LLSC ( IREndness end, IRTemp result,
IRExpr* addr, IRExpr* storedata );
@@ -2565,6 +2671,11 @@
extern IRType typeOfIRTemp ( IRTypeEnv*, IRTemp );
extern IRType typeOfIRExpr ( IRTypeEnv*, IRExpr* );
+/* What are the arg and result type for this IRLoadGOp? */
+extern void typeOfIRLoadGOp ( IRLoadGOp cvt,
+ /*OUT*/IRType* t_res,
+ /*OUT*/IRType* t_arg );
+
/* Sanity check a BB of IR */
extern void sanityCheckIRSB ( IRSB* bb,
const HChar* caller,
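The libvex_ir.h hunk above defines the semantics of a guarded load: the address and 'alt' expressions are always evaluated, but the memory access itself only happens when the guard is true, and the loaded value passes through the 'cvt' widening before reaching 'dst'. A plain-C model of ILGop_8Sto32 under those rules — an illustrative sketch, not VEX code:

```c
#include <assert.h>
#include <stdint.h>

/* Model of IRStmt_LoadG with cvt = ILGop_8Sto32.  The caller has
   already evaluated 'addr' and 'alt' unconditionally (as libvex_ir.h
   requires); the memory access itself is suppressed when the guard
   is false, in which case 'alt' is returned unchanged. */
static int32_t loadG_8Sto32(const int8_t* addr, int32_t alt, int guard)
{
    if (!guard)
        return alt;            /* load suppressed; dst := alt */
    return (int32_t)*addr;     /* load 8 bits, sign-extend to 32 */
}
```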
Property changed: trunk (+0 -0)
___________________________________________________________________
Name: svn:mergeinfo
- /branches/TCHAIN:2272-2295
+ /branches/COMEM:2570-2641
/branches/TCHAIN:2272-2295
Modified: trunk/priv/ir_opt.c (+274 -27)
===================================================================
--- trunk/priv/ir_opt.c 2013-01-16 22:11:13 +00:00 (rev 2641)
+++ trunk/priv/ir_opt.c 2013-01-17 14:23:53 +00:00 (rev 2642)
@@ -400,10 +400,12 @@
static void flatten_Stmt ( IRSB* bb, IRStmt* st )
{
Int i;
- IRExpr *e1, *e2, *e3, *e4, *e5;
- IRDirty *d, *d2;
- IRCAS *cas, *cas2;
- IRPutI *puti, *puti2;
+ IRExpr *e1, *e2, *e3, *e4, *e5;
+ IRDirty *d, *d2;
+ IRCAS *cas, *cas2;
+ IRPutI *puti, *puti2;
+ IRLoadG *lg;
+ IRStoreG *sg;
switch (st->tag) {
case Ist_Put:
if (isIRAtom(st->Ist.Put.data)) {
@@ -439,6 +441,21 @@
e2 = flatten_Expr(bb, st->Ist.Store.data);
addStmtToIRSB(bb, IRStmt_Store(st->Ist.Store.end, e1,e2));
break;
+ case Ist_StoreG:
+ sg = st->Ist.StoreG.details;
+ e1 = flatten_Expr(bb, sg->addr);
+ e2 = flatten_Expr(bb, sg->data);
+ e3 = flatten_Expr(bb, sg->guard);
+ addStmtToIRSB(bb, IRStmt_StoreG(sg->end, e1, e2, e3));
+ break;
+ case Ist_LoadG:
+ lg = st->Ist.LoadG.details;
+ e1 = flatten_Expr(bb, lg->addr);
+ e2 = flatten_Expr(bb, lg->alt);
+ e3 = flatten_Expr(bb, lg->guard);
+ addStmtToIRSB(bb, IRStmt_LoadG(lg->end, lg->cvt, lg->dst,
+ e1, e2, e3));
+ break;
case Ist_CAS:
cas = st->Ist.CAS.details;
e1 = flatten_Expr(bb, cas->addr);
@@ -763,7 +780,22 @@
vassert(isIRAtom(st->Ist.Store.data));
memRW = True;
break;
-
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ vassert(isIRAtom(sg->addr));
+ vassert(isIRAtom(sg->data));
+ vassert(isIRAtom(sg->guard));
+ memRW = True;
+ break;
+ }
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ vassert(isIRAtom(lg->addr));
+ vassert(isIRAtom(lg->alt));
+ vassert(isIRAtom(lg->guard));
+ memRW = True;
+ break;
+ }
case Ist_Exit:
vassert(isIRAtom(st->Ist.Exit.guard));
break;
@@ -2399,6 +2431,62 @@
fold_Expr(env, subst_Expr(env, st->Ist.Store.data))
);
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ vassert(isIRAtom(sg->addr));
+ vassert(isIRAtom(sg->data));
+ vassert(isIRAtom(sg->guard));
+ IRExpr* faddr = fold_Expr(env, subst_Expr(env, sg->addr));
+ IRExpr* fdata = fold_Expr(env, subst_Expr(env, sg->data));
+ IRExpr* fguard = fold_Expr(env, subst_Expr(env, sg->guard));
+ if (fguard->tag == Iex_Const) {
+ /* The condition on this store has folded down to a constant. */
+ vassert(fguard->Iex.Const.con->tag == Ico_U1);
+ if (fguard->Iex.Const.con->Ico.U1 == False) {
+ return IRStmt_NoOp();
+ } else {
+ vassert(fguard->Iex.Const.con->Ico.U1 == True);
+ return IRStmt_Store(sg->end, faddr, fdata);
+ }
+ }
+ return IRStmt_StoreG(sg->end, faddr, fdata, fguard);
+ }
+
+ case Ist_LoadG: {
+ /* This is complicated. If the guard folds down to 'false',
+ we can replace it with an assignment 'dst := alt', but if
+ the guard folds down to 'true', we can't conveniently
+ replace it with an unconditional load, because doing so
+ requires generating a new temporary, and that is not easy
+ to do at this point. */
+ IRLoadG* lg = st->Ist.LoadG.details;
+ vassert(isIRAtom(lg->addr));
+ vassert(isIRAtom(lg->alt));
+ vassert(isIRAtom(lg->guard));
+ IRExpr* faddr = fold_Expr(env, subst_Expr(env, lg->addr));
+ IRExpr* falt = fold_Expr(env, subst_Expr(env, lg->alt));
+ IRExpr* fguard = fold_Expr(env, subst_Expr(env, lg->guard));
+ if (fguard->tag == Iex_Const) {
+ /* The condition on this load has folded down to a constant. */
+ vassert(fguard->Iex.Const.con->tag == Ico_U1);
+ if (fguard->Iex.Const.con->Ico.U1 == False) {
+ /* The load is not going to happen -- instead 'alt' is
+ assigned to 'dst'. */
+ return IRStmt_WrTmp(lg->dst, falt);
+ } else {
+ vassert(fguard->Iex.Const.con->Ico.U1 == True);
+ /* The load is always going to happen. We want to
+ convert to an unconditional load and assign to 'dst'
+ (IRStmt_WrTmp). Problem is we need an extra temp to
+ hold the loaded value, but none is available.
+ Instead, reconstitute the conditional load (with
+ folded args, of course) and let the caller of this
+ routine deal with the problem. */
+ }
+ }
+ return IRStmt_LoadG(lg->end, lg->cvt, lg->dst, faddr, falt, fguard);
+ }
+
case Ist_CAS: {
IRCAS *cas, *cas2;
cas = st->Ist.CAS.details;
@@ -2472,8 +2560,6 @@
/* Interesting. The condition on this exit has folded down to
a constant. */
vassert(fcond->Iex.Const.con->tag == Ico_U1);
- vassert(fcond->Iex.Const.con->Ico.U1 == False
- || fcond->Iex.Const.con->Ico.U1 == True);
if (fcond->Iex.Const.con->Ico.U1 == False) {
/* exit is never going to happen, so dump the statement. */
return IRStmt_NoOp();
@@ -2507,6 +2593,11 @@
IRStmt* st2;
Int n_tmps = in->tyenv->types_used;
IRExpr** env = LibVEX_Alloc(n_tmps * sizeof(IRExpr*));
+ /* Keep track of IRStmt_LoadGs that we need to revisit after
+ processing all the other statements. */
+ const Int N_FIXUPS = 16;
+ Int fixups[N_FIXUPS]; /* indices in the stmt array of 'out' */
+ Int n_fixups = 0;
out = emptyIRSB();
out->tyenv = deepCopyIRTypeEnv( in->tyenv );
@@ -2534,40 +2625,124 @@
st2 = subst_and_fold_Stmt( env, st2 );
- /* If the statement has been folded into a no-op, forget it. */
- if (st2->tag == Ist_NoOp) continue;
+ /* Deal with some post-folding special cases. */
+ switch (st2->tag) {
- /* If the statement assigns to an IRTemp add it to the running
- environment. This is for the benefit of copy propagation
- and to allow sameIRExpr look through IRTemps. */
- if (st2->tag == Ist_WrTmp) {
- vassert(env[(Int)(st2->Ist.WrTmp.tmp)] == NULL);
- env[(Int)(st2->Ist.WrTmp.tmp)] = st2->Ist.WrTmp.data;
+ /* If the statement has been folded into a no-op, forget
+ it. */
+ case Ist_NoOp:
+ continue;
- /* 't1 = t2' -- don't add to BB; will be optimized out */
- if (st2->Ist.WrTmp.data->tag == Iex_RdTmp) continue;
+ /* If the statement assigns to an IRTemp add it to the
+ running environment. This is for the benefit of copy
+ propagation and to allow sameIRExpr to look through
+ IRTemps. */
+ case Ist_WrTmp: {
+ vassert(env[(Int)(st2->Ist.WrTmp.tmp)] == NULL);
+ env[(Int)(st2->Ist.WrTmp.tmp)] = st2->Ist.WrTmp.data;
- /* 't = const' && 'const != F64i' -- don't add to BB
- Note, we choose not to propagate const when const is an
- F64i, so that F64i literals can be CSE'd later. This helps
- x86 floating point code generation. */
- if (st2->Ist.WrTmp.data->tag == Iex_Const
- && st2->Ist.WrTmp.data->Iex.Const.con->tag != Ico_F64i) continue;
+ /* 't1 = t2' -- don't add to BB; will be optimized out */
+ if (st2->Ist.WrTmp.data->tag == Iex_RdTmp)
+ continue;
+
+ /* 't = const' && 'const != F64i' -- don't add to BB
+ Note, we choose not to propagate const when const is an
+ F64i, so that F64i literals can be CSE'd later. This
+ helps x86 floating point code generation. */
+ if (st2->Ist.WrTmp.data->tag == Iex_Const
+ && st2->Ist.WrTmp.data->Iex.Const.con->tag != Ico_F64i) {
+ continue;
+ }
+ /* else add it to the output, as normal */
+ break;
+ }
+
+ case Ist_LoadG: {
+ IRLoadG* lg = st2->Ist.LoadG.details;
+ IRExpr* guard = lg->guard;
+ if (guard->tag == Iex_Const) {
+ /* The guard has folded to a constant, and that
+ constant must be 1:I1, since subst_and_fold_Stmt
+ folds out the case 0:I1 by itself. */
+ vassert(guard->Iex.Const.con->tag == Ico_U1);
+ vassert(guard->Iex.Const.con->Ico.U1 == True);
+ /* Add a NoOp here as a placeholder, and make a note of
+ where it is in the output block. Afterwards we'll
+ come back here and transform the NoOp and the LoadG
+ into a load-convert pair. The fixups[] entry
+ refers to the inserted NoOp, and we expect to find
+ the relevant LoadG immediately after it. */
+ vassert(n_fixups >= 0 && n_fixups <= N_FIXUPS);
+ if (n_fixups < N_FIXUPS) {
+ fixups[n_fixups++] = out->stmts_used;
+ addStmtToIRSB( out, IRStmt_NoOp() );
+ }
+ }
+ /* And always add the LoadG to the output, regardless. */
+ break;
+ }
+
+ default:
+ break;
}
/* Not interesting, copy st2 into the output block. */
addStmtToIRSB( out, st2 );
}
-#if STATS_IROPT
+# if STATS_IROPT
vex_printf("sameIRExpr: invoked = %u/%u equal = %u/%u max_nodes = %u\n",
invocation_count, recursion_count, success_count,
recursion_success_count, max_nodes_visited);
-#endif
+# endif
out->next = subst_Expr( env, in->next );
out->jumpkind = in->jumpkind;
out->offsIP = in->offsIP;
+
+ /* Process any leftover unconditional LoadGs that we noticed
+ in the main pass. */
+ vassert(n_fixups >= 0 && n_fixups <= N_FIXUPS);
+ for (i = 0; i < n_fixups; i++) {
+ Int ix = fixups[i];
+ /* Carefully verify that the LoadG has the expected form. */
+ vassert(ix >= 0 && ix+1 < out->stmts_used);
+ IRStmt* nop = out->stmts[ix];
+ IRStmt* lgu = out->stmts[ix+1];
+ vassert(nop->tag == Ist_NoOp);
+ vassert(lgu->tag == Ist_LoadG);
+ IRLoadG* lg = lgu->Ist.LoadG.details;
+ IRExpr* guard = lg->guard;
+ vassert(guard->Iex.Const.con->tag == Ico_U1);
+ vassert(guard->Iex.Const.con->Ico.U1 == True);
+ /* Figure out the load and result types, and the implied
+ conversion operation. */
+ IRType cvtRes = Ity_INVALID, cvtArg = Ity_INVALID;
+ typeOfIRLoadGOp(lg->cvt, &cvtRes, &cvtArg);
+ IROp cvtOp = Iop_INVALID;
+ switch (lg->cvt) {
+ case ILGop_Ident32: break;
+ case ILGop_8Uto32: cvtOp = Iop_8Uto32; break;
+ case ILGop_8Sto32: cvtOp = Iop_8Sto32; break;
+ case ILGop_16Uto32: cvtOp = Iop_16Uto32; break;
+ case ILGop_16Sto32: cvtOp = Iop_16Sto32; break;
+ default: vpanic("cprop_BB: unhandled ILGOp");
+ }
+ /* Replace the placeholder NoOp by the required unconditional
+ load. */
+ IRTemp tLoaded = newIRTemp(out->tyenv, cvtArg);
+ out->stmts[ix]
+ = IRStmt_WrTmp(tLoaded,
+ IRExpr_Load(lg->end, cvtArg, lg->addr));
+ /* Replace the LoadG by a conversion from the loaded value's
+ type to the required result type. */
+ out->stmts[ix+1]
+ = IRStmt_WrTmp(
+ lg->dst, cvtOp == Iop_INVALID
+ ? IRExpr_RdTmp(tLoaded)
+ : IRExpr_Unop(cvtOp, IRExpr_RdTmp(tLoaded)));
+ }
+
return out;
}
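The fixup loop above relies on typeOfIRLoadGOp to recover the load type and result type implied by each conversion op, so that the placeholder NoOp can become a plain load at the argument type followed by a widening to the result type. A hypothetical sketch of what that helper presumably computes, with bit widths standing in for IRTypes:

```c
#include <assert.h>

/* Stand-in for the IRLoadGOp enum declared in the patch above. */
typedef enum { ILGop_Ident32, ILGop_16Uto32, ILGop_16Sto32,
               ILGop_8Uto32, ILGop_8Sto32 } IRLoadGOp;

/* All five conversions produce a 32-bit result; only the width of
   the underlying load (the "arg" type) varies.  This mirrors the
   typeOfIRLoadGOp interface, with widths instead of IRTypes. */
static void typeOfLoadGOp(IRLoadGOp cvt, int* resBits, int* argBits)
{
    *resBits = 32;
    switch (cvt) {
        case ILGop_Ident32:                     *argBits = 32; break;
        case ILGop_16Uto32: case ILGop_16Sto32: *argBits = 16; break;
        case ILGop_8Uto32:  case ILGop_8Sto32:  *argBits = 8;  break;
        default:                                *argBits = 0;  break;
    }
}
```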
@@ -2663,6 +2838,20 @@
addUses_Expr(set, st->Ist.Store.addr);
addUses_Expr(set, st->Ist.Store.data);
return;
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ addUses_Expr(set, sg->addr);
+ addUses_Expr(set, sg->data);
+ addUses_Expr(set, sg->guard);
+ return;
+ }
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ addUses_Expr(set, lg->addr);
+ addUses_Expr(set, lg->alt);
+ addUses_Expr(set, lg->guard);
+ return;
+ }
case Ist_CAS:
cas = st->Ist.CAS.details;
addUses_Expr(set, cas->addr);
@@ -3469,7 +3658,7 @@
if (0) { ppIRSB(bb); vex_printf("\n\n"); }
/* Iterate forwards over the stmts.
- On seeing "t = E", where E is one of the 5 AvailExpr forms:
+ On seeing "t = E", where E is one of the AvailExpr forms:
let E' = apply tenv substitution to E
search aenv for E'
if a mapping E' -> q is found,
@@ -3499,11 +3688,12 @@
switch (st->tag) {
case Ist_Dirty: case Ist_Store: case Ist_MBE:
case Ist_CAS: case Ist_LLSC:
+ case Ist_StoreG:
paranoia = 2; break;
case Ist_Put: case Ist_PutI:
paranoia = 1; break;
case Ist_NoOp: case Ist_IMark: case Ist_AbiHint:
- case Ist_WrTmp: case Ist_Exit:
+ case Ist_WrTmp: case Ist_Exit: case Ist_LoadG:
paranoia = 0; break;
default:
vpanic("do_cse_BB(1)");
@@ -4204,6 +4394,21 @@
deltaIRExpr(st->Ist.Store.addr, delta);
deltaIRExpr(st->Ist.Store.data, delta);
break;
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ deltaIRExpr(sg->addr, delta);
+ deltaIRExpr(sg->data, delta);
+ deltaIRExpr(sg->guard, delta);
+ break;
+ }
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ lg->dst += delta;
+ deltaIRExpr(lg->addr, delta);
+ deltaIRExpr(lg->alt, delta);
+ deltaIRExpr(lg->guard, delta);
+ break;
+ }
case Ist_CAS:
if (st->Ist.CAS.details->oldHi != IRTemp_INVALID)
st->Ist.CAS.details->oldHi += delta;
@@ -4676,6 +4881,20 @@
aoccCount_Expr(uses, st->Ist.Store.addr);
aoccCount_Expr(uses, st->Ist.Store.data);
return;
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ aoccCount_Expr(uses, sg->addr);
+ aoccCount_Expr(uses, sg->data);
+ aoccCount_Expr(uses, sg->guard);
+ return;
+ }
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ aoccCount_Expr(uses, lg->addr);
+ aoccCount_Expr(uses, lg->alt);
+ aoccCount_Expr(uses, lg->guard);
+ return;
+ }
case Ist_CAS:
cas = st->Ist.CAS.details;
aoccCount_Expr(uses, cas->addr);
@@ -4992,6 +5211,20 @@
atbSubst_Expr(env, st->Ist.Store.addr),
atbSubst_Expr(env, st->Ist.Store.data)
);
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ return IRStmt_StoreG(sg->end,
+ atbSubst_Expr(env, sg->addr),
+ atbSubst_Expr(env, sg->data),
+ atbSubst_Expr(env, sg->guard));
+ }
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ return IRStmt_LoadG(lg->end, lg->cvt, lg->dst,
+ atbSubst_Expr(env, lg->addr),
+ atbSubst_Expr(env, lg->alt),
+ atbSubst_Expr(env, lg->guard));
+ }
case Ist_WrTmp:
return IRStmt_WrTmp(
st->Ist.WrTmp.tmp,
@@ -5429,6 +5662,20 @@
vassert(isIRAtom(st->Ist.Store.addr));
vassert(isIRAtom(st->Ist.Store.data));
break;
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ vassert(isIRAtom(sg->addr));
+ vassert(isIRAtom(sg->data));
+ vassert(isIRAtom(sg->guard));
+ break;
+ }
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ vassert(isIRAtom(lg->addr));
+ vassert(isIRAtom(lg->alt));
+ vassert(isIRAtom(lg->guard));
+ break;
+ }
case Ist_CAS:
cas = st->Ist.CAS.details;
vassert(isIRAtom(cas->addr));
Modified: trunk/priv/host_amd64_isel.c (+44 -15)
===================================================================
--- trunk/priv/host_amd64_isel.c 2013-01-16 22:11:13 +00:00 (rev 2641)
+++ trunk/priv/host_amd64_isel.c 2013-01-17 14:23:53 +00:00 (rev 2642)
@@ -407,7 +407,8 @@
static
void doHelperCall ( ISelEnv* env,
Bool passBBP,
- IRExpr* guard, IRCallee* cee, IRExpr** args )
+ IRExpr* guard, IRCallee* cee, IRExpr** args,
+ RetLoc rloc )
{
AMD64CondCode cc;
HReg argregs[6];
@@ -587,7 +588,7 @@
addInstr(env, AMD64Instr_Call(
cc,
Ptr_to_ULong(cee->addr),
- n_args + (passBBP ? 1 : 0)
+ n_args + (passBBP ? 1 : 0), rloc
)
);
}
@@ -1126,7 +1127,7 @@
addInstr(env, AMD64Instr_MovxLQ(False, argR, argR));
addInstr(env, mk_iMOVsd_RR(argL, hregAMD64_RDI()) );
addInstr(env, mk_iMOVsd_RR(argR, hregAMD64_RSI()) );
- addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn, 2 ));
+ addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn, 2, RetLocInt ));
addInstr(env, mk_iMOVsd_RR(hregAMD64_RAX(), dst));
return dst;
}
@@ -1595,7 +1596,8 @@
HReg arg = iselIntExpr_R(env, e->Iex.Unop.arg);
fn = (HWord)h_generic_calc_GetMSBs8x8;
addInstr(env, mk_iMOVsd_RR(arg, hregAMD64_RDI()) );
- addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn, 1 ));
+ addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn,
+ 1, RetLocInt ));
/* MovxLQ is not exactly the right thing here. We just
need to get the bottom 8 bits of RAX into dst, and zero
out everything else. Assuming that the helper returns
@@ -1625,7 +1627,8 @@
addInstr(env, AMD64Instr_Alu64R( Aalu_MOV,
AMD64RMI_Mem(m16_rsp),
hregAMD64_RSI() )); /* 2nd arg */
- addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn, 2 ));
+ addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn,
+ 2, RetLocInt ));
/* MovxLQ is not exactly the right thing here. We just
need to get the bottom 16 bits of RAX into dst, and zero
out everything else. Assuming that the helper returns
@@ -1659,7 +1662,7 @@
HReg dst = newVRegI(env);
HReg arg = iselIntExpr_R(env, e->Iex.Unop.arg);
addInstr(env, mk_iMOVsd_RR(arg, hregAMD64_RDI()) );
- addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn, 1 ));
+ addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn, 1, RetLocInt ));
addInstr(env, mk_iMOVsd_RR(hregAMD64_RAX(), dst));
return dst;
}
@@ -1713,13 +1716,15 @@
HReg dst = newVRegI(env);
vassert(ty == e->Iex.CCall.retty);
- /* be very restrictive for now. Only 64-bit ints allowed
- for args, and 64 or 32 bits for return type. */
+ /* be very restrictive for now. Only 64-bit ints allowed for
+ args, and 64 or 32 bits for return type. Don't forget to
+ change the RetLoc if more types are allowed in future. */
if (e->Iex.CCall.retty != Ity_I64 && e->Iex.CCall.retty != Ity_I32)
goto irreducible;
/* Marshal args, do the call. */
- doHelperCall( env, False, NULL, e->Iex.CCall.cee, e->Iex.CCall.args );
+ doHelperCall( env, False, NULL, e->Iex.CCall.cee, e->Iex.CCall.args,
+ RetLocInt );
/* Move to dst, and zero out the top 32 bits if the result type is
Ity_I32. Probably overkill, but still .. */
@@ -2256,7 +2261,8 @@
vassert(cal->Iex.CCall.retty == Ity_I64); /* else ill-typed IR */
vassert(con->Iex.Const.con->tag == Ico_U64);
/* Marshal args, do the call. */
- doHelperCall( env, False, NULL, cal->Iex.CCall.cee, cal->Iex.CCall.args );
+ doHelperCall( env, False, NULL, cal->Iex.CCall.cee, cal->Iex.CCall.args,
+ RetLocInt );
addInstr(env, AMD64Instr_Imm64(con->Iex.Const.con->Ico.U64, tmp));
addInstr(env, AMD64Instr_Alu64R(Aalu_CMP,
AMD64RMI_Reg(hregAMD64_RAX()), tmp));
@@ -3351,7 +3357,8 @@
addInstr(env, AMD64Instr_SseLdSt(False/*!isLoad*/, 16, argR,
AMD64AMode_IR(0, hregAMD64_RDX())));
/* call the helper */
- addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn, 3 ));
+ addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn,
+ 3, RetLocNone ));
/* fetch the result from memory, using %r_argp, which the
register allocator will keep alive across the call. */
addInstr(env, AMD64Instr_SseLdSt(True/*isLoad*/, 16, dst,
@@ -3399,7 +3406,8 @@
addInstr(env, mk_iMOVsd_RR(argR, hregAMD64_RDX()));
/* call the helper */
- addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn, 3 ));
+ addInstr(env, AMD64Instr_Call( Acc_ALWAYS, (ULong)fn,
+ 3, RetLocNone ));
/* fetch the result from memory, using %r_argp, which the
register allocator will keep alive across the call. */
addInstr(env, AMD64Instr_SseLdSt(True/*isLoad*/, 16, dst,
@@ -3945,7 +3953,6 @@
/* --------- Call to DIRTY helper --------- */
case Ist_Dirty: {
- IRType retty;
IRDirty* d = stmt->Ist.Dirty.details;
Bool passBBP = False;
@@ -3954,15 +3961,37 @@
passBBP = toBool(d->nFxState > 0 && d->needsBBP);
+ /* Figure out the return type, if any. */
+ IRType retty = Ity_INVALID;
+ if (d->tmp != IRTemp_INVALID)
+ retty = typeOfIRTemp(env->type_env, d->tmp);
+
+ /* Marshal args, do the call, clear stack, set the return value
+ to 0x555..555 if this is a conditional call that returns a
+ value and the call is skipped. We need to set the ret-loc
+ correctly in order to implement the IRDirty semantics that
+ the return value is 0x555..555 if the call doesn't happen. */
+ RetLoc rloc = RetLocINVALID;
+ switch (retty) {
+ case Ity_INVALID: /* function doesn't return anything */
+ rloc = RetLocNone; break;
+ case Ity_I64:
+ case Ity_I32: case Ity_I16: case Ity_I8:
+ rloc = RetLocInt; break;
+ default:
+ break;
+ }
+ if (rloc == RetLocINVALID)
+ break; /* will go to stmt_fail: */
+
/* Marshal args, do the call, clear stack. */
- doHelperCall( env, passBBP, d->guard, d->cee, d->args );
+ doHelperCall( env, passBBP, d->guard, d->cee, d->args, rloc );
/* Now figure out what to do with the returned value, if any. */
if (d->tmp == IRTemp_INVALID)
/* No return value. Nothing to do. */
return;
- retty = typeOfIRTemp(env->type_env, d->tmp);
if (retty == Ity_I64 || retty == Ity_I32
|| retty == Ity_I16 || retty == Ity_I8) {
/* The returned value is in %rax. Park it in the register
Modified: trunk/priv/host_amd64_defs.h (+2 -1)
===================================================================
--- trunk/priv/host_amd64_defs.h 2013-01-16 22:11:13 +00:00 (rev 2641)
+++ trunk/priv/host_amd64_defs.h 2013-01-17 14:23:53 +00:00 (rev 2642)
@@ -473,6 +473,7 @@
AMD64CondCode cond;
Addr64 target;
Int regparms; /* 0 .. 6 */
+ RetLoc rloc; /* where the return value will be */
} Call;
/* Update the guest RIP value, then exit requesting to chain
to it. May be conditional. */
@@ -701,7 +702,7 @@
extern AMD64Instr* AMD64Instr_MulL ( Bool syned, AMD64RM* );
extern AMD64Instr* AMD64Instr_Div ( Bool syned, Int sz, AMD64RM* );
extern AMD64Instr* AMD64Instr_Push ( AMD64RMI* );
-extern AMD64Instr* AMD64Instr_Call ( AMD64CondCode, Addr64, Int );
+extern AMD64Instr* AMD64Instr_Call ( AMD64CondCode, Addr64, Int, RetLoc );
extern AMD64Instr* AMD64Instr_XDirect ( Addr64 dstGA, AMD64AMode* amRIP,
AMD64CondCode cond, Bool toFastEP );
extern AMD64Instr* AMD64Instr_XIndir ( HReg dstGA, AMD64AMode* amRIP,
Modified: trunk/priv/host_amd64_defs.c (+16 -3)
===================================================================
--- trunk/priv/host_amd64_defs.c 2013-01-16 22:11:13 +00:00 (rev 2641)
+++ trunk/priv/host_amd64_defs.c 2013-01-17 14:23:53 +00:00 (rev 2642)
@@ -693,13 +693,16 @@
i->Ain.Push.src = src;
return i;
}
-AMD64Instr* AMD64Instr_Call ( AMD64CondCode cond, Addr64 target, Int regparms ) {
+AMD64Instr* AMD64Instr_Call ( AMD64CondCode cond, Addr64 target, Int regparms,
+ RetLoc rloc ) {
AMD64Instr* i = LibVEX_Alloc(sizeof(AMD64Instr));
i->tag = Ain_Call;
i->Ain.Call.cond = cond;
i->Ain.Call.target = target;
i->Ain.Call.regparms = regparms;
+ i->Ain.Call.rloc = rloc;
vassert(regparms >= 0 && regparms <= 6);
+ vassert(rloc != RetLocINVALID);
return i;
}
@@ -1070,11 +1073,12 @@
ppAMD64RMI(i->Ain.Push.src);
return;
case Ain_Call:
- vex_printf("call%s[%d] ",
+ vex_printf("call%s[%d,",
i->Ain.Call.cond==Acc_ALWAYS
? "" : showAMD64CondCode(i->Ain.Call.cond),
i->Ain.Call.regparms );
- vex_printf("0x%llx", i->Ain.Call.target);
+ ppRetLoc(i->Ain.Call.rloc);
+ vex_printf("] 0x%llx", i->Ain.Call.target);
break;
case Ain_XDirect:
@@ -2663,6 +2667,15 @@
}
case Ain_Call: {
+ if (i->Ain.Call.cond != Acc_ALWAYS && i->Ain.Call.rloc != RetLocNone) {
+ /* The call might not happen (it isn't unconditional) and it
+ returns a result. In this case we will need to generate a
+ control flow diamond to put 0x555..555 in the return
+ register(s) in the case where the call doesn't happen. If
+ this ever becomes necessary, maybe copy code from the ARM
+ equivalent. Until that day, just give up. */
+ goto bad;
+ }
/* As per detailed comment for Ain_Call in
getRegUsage_AMD64Instr above, %r11 is used as an address
temporary. */
Modified: trunk/priv/ir_defs.c (+192 -20)
===================================================================
--- trunk/priv/ir_defs.c 2013-01-16 22:11:13 +00:00 (rev 2641)
+++ trunk/priv/ir_defs.c 2013-01-17 14:23:53 +00:00 (rev 2642)
@@ -1234,6 +1234,42 @@
ppIRExpr(puti->data);
}
+void ppIRStoreG ( IRStoreG* sg )
+{
+ vex_printf("if (");
+ ppIRExpr(sg->guard);
+ vex_printf(") ST%s(", sg->end==Iend_LE ? "le" : "be");
+ ppIRExpr(sg->addr);
+ vex_printf(") = ");
+ ppIRExpr(sg->data);
+}
+
+void ppIRLoadGOp ( IRLoadGOp cvt )
+{
+ switch (cvt) {
+ case ILGop_INVALID: vex_printf("ILGop_INVALID"); break;
+ case ILGop_Ident32: vex_printf("Ident32"); break;
+ case ILGop_16Uto32: vex_printf("16Uto32"); break;
+ case ILGop_16Sto32: vex_printf("16Sto32"); break;
+ case ILGop_8Uto32: vex_printf("8Uto32"); break;
+ case ILGop_8Sto32: vex_printf("8Sto32"); break;
+ default: vpanic("ppIRLoadGOp");
+ }
+}
+
+void ppIRLoadG ( IRLoadG* lg )
+{
+ ppIRTemp(lg->dst);
+ vex_printf(" = if-strict (");
+ ppIRExpr(lg->guard);
+ vex_printf(") ");
+ ppIRLoadGOp(lg->cvt);
+ vex_printf("(LD%s(", lg->end==Iend_LE ? "le" : "be");
+ ppIRExpr(lg->addr);
+ vex_printf(")) else ");
+ ppIRExpr(lg->alt);
+}
+
void ppIRJumpKind ( IRJumpKind kind )
{
switch (kind) {
@@ -1315,6 +1351,12 @@
vex_printf( ") = ");
ppIRExpr(s->Ist.Store.data);
break;
+ case Ist_StoreG:
+ ppIRStoreG(s->Ist.StoreG.details);
+ break;
+ case Ist_LoadG:
+ ppIRLoadG(s->Ist.LoadG.details);
+ break;
case Ist_CAS:
ppIRCAS(s->Ist.CAS.details);
break;
@@ -1755,6 +1797,33 @@
}
+/* Constructors -- IRStoreG and IRLoadG */
+
+IRStoreG* mkIRStoreG ( IREndness end,
+ IRExpr* addr, IRExpr* data, IRExpr* guard )
+{
+ IRStoreG* sg = LibVEX_Alloc(sizeof(IRStoreG));
+ sg->end = end;
+ sg->addr = addr;
+ sg->data = data;
+ sg->guard = guard;
+ return sg;
+}
+
+IRLoadG* mkIRLoadG ( IREndness end, IRLoadGOp cvt,
+ IRTemp dst, IRExpr* addr, IRExpr* alt, IRExpr* guard )
+{
+ IRLoadG* lg = LibVEX_Alloc(sizeof(IRLoadG));
+ lg->end = end;
+ lg->cvt = cvt;
+ lg->dst = dst;
+ lg->addr = addr;
+ lg->alt = alt;
+ lg->guard = guard;
+ return lg;
+}
+
+
/* Constructors -- IRStmt */
IRStmt* IRStmt_NoOp ( void )
@@ -1809,6 +1878,21 @@
vassert(end == Iend_LE || end == Iend_BE);
return s;
}
+IRStmt* IRStmt_StoreG ( IREndness end, IRExpr* addr, IRExpr* data,
+ IRExpr* guard ) {
+ IRStmt* s = LibVEX_Alloc(sizeof(IRStmt));
+ s->tag = Ist_StoreG;
+ s->Ist.StoreG.details = mkIRStoreG(end, addr, data, guard);
+ vassert(end == Iend_LE || end == Iend_BE);
+ return s;
+}
+IRStmt* IRStmt_LoadG ( IREndness end, IRLoadGOp cvt, IRTemp dst,
+ IRExpr* addr, IRExpr* alt, IRExpr* guard ) {
+ IRStmt* s = LibVEX_Alloc(sizeof(IRStmt));
+ s->tag = Ist_LoadG;
+ s->Ist.LoadG.details = mkIRLoadG(end, cvt, dst, addr, alt, guard);
+ return s;
+}
IRStmt* IRStmt_CAS ( IRCAS* cas ) {
IRStmt* s = LibVEX_Alloc(sizeof(IRStmt));
s->tag = Ist_CAS;
@@ -2060,6 +2144,20 @@
return IRStmt_Store(s->Ist.Store.end,
deepCopyIRExpr(s->Ist.Store.addr),
deepCopyIRExpr(s->Ist.Store.data));
+ case Ist_StoreG: {
+ IRStoreG* sg = s->Ist.StoreG.details;
+ return IRStmt_StoreG(sg->end,
+ deepCopyIRExpr(sg->addr),
+ deepCopyIRExpr(sg->data),
+ deepCopyIRExpr(sg->guard));
+ }
+ case Ist_LoadG: {
+ IRLoadG* lg = s->Ist.LoadG.details;
+ return IRStmt_LoadG(lg->end, lg->cvt, lg->dst,
+ deepCopyIRExpr(lg->addr),
+ deepCopyIRExpr(lg->alt),
+ deepCopyIRExpr(lg->guard));
+ }
case Ist_CAS:
return IRStmt_CAS(deepCopyIRCAS(s->Ist.CAS.details));
case Ist_LLSC:
@@ -2988,7 +3086,6 @@
return env->types[tmp];
}
-
IRType typeOfIRConst ( IRConst* con )
{
switch (con->tag) {
@@ -3007,6 +3104,21 @@
}
}
+void typeOfIRLoadGOp ( IRLoadGOp cvt,
+ /*OUT*/IRType* t_res, /*OUT*/IRType* t_arg )
+{
+ switch (cvt) {
+ case ILGop_Ident32:
+ *t_res = Ity_I32; *t_arg = Ity_I32; break;
+ case ILGop_16Uto32: case ILGop_16Sto32:
+ *t_res = Ity_I32; *t_arg = Ity_I16; break;
+ case ILGop_8Uto32: case ILGop_8Sto32:
+ *t_res = Ity_I32; *t_arg = Ity_I8; break;
+ default:
+ vpanic("typeOfIRLoadGOp");
+ }
+}
+
IRType typeOfIRExpr ( IRTypeEnv* tyenv, IRExpr* e )
{
IRType t_dst, t_arg1, t_arg2, t_arg3, t_arg4;
@@ -3145,6 +3257,16 @@
case Ist_Store:
return toBool( isIRAtom(st->Ist.Store.addr)
&& isIRAtom(st->Ist.Store.data) );
+ case Ist_StoreG: {
+ IRStoreG* sg = st->Ist.StoreG.details;
+ return toBool( isIRAtom(sg->addr)
+ && isIRAtom(sg->data) && isIRAtom(sg->guard) );
+ }
+ case Ist_LoadG: {
+ IRLoadG* lg = st->Ist.LoadG.details;
+ return toBool( isIRAtom(lg->addr)
+ && isIRAtom(lg->alt) && isIRAtom(lg->guard) );
+ }
case Ist_CAS:
cas = st->Ist.CAS.details;
return toBool( isIRAtom(cas->addr)
@@ -3317,10 +3439,12 @@
static
void useBeforeDef_Stmt ( IRSB* bb, IRStmt* stmt, Int* def_counts )
{
- Int i;
- IRDirty* d;
- IRCAS* cas;
- IRPutI* puti;
+ Int i;
+ IRDirty* d;
+ IRCAS* cas;
+ IRPutI* puti;
+ IRLoadG* lg;
+ IRStoreG* sg;
switch (stmt->tag) {
case Ist_IMark:
break;
@@ -3343,6 +3467,18 @@
useBeforeDef_Expr(bb,stmt,stmt->Ist.Store.addr,def_counts);
useBeforeDef_Expr(bb,stmt,stmt->Ist.Store.data,def_counts);
break;
+ case Ist_StoreG:
+ sg = stmt->Ist.StoreG.details;
+ useBeforeDef_Expr(bb,stmt,sg->addr,def_counts);
+ useBeforeDef_Expr(bb,stmt,sg->data,def_counts);
+ useBeforeDef_Expr(bb,stmt,sg->guard,def_counts);
+ break;
+ case Ist_LoadG:
+ lg = stmt->Ist.LoadG.details;
+ useBeforeDef_Expr(bb,stmt,lg->addr,def_counts);
+ useBeforeDef_Expr(bb,stmt,lg->alt,def_counts);
+ useBeforeDef_Expr(bb,stmt,lg->guard,def_counts);
+ break;
case Ist_CAS:
cas = stmt->Ist.CAS.details;
useBeforeDef_Expr(bb,stmt,cas->addr,def_counts);
@@ -3630,18 +3766,54 @@
tcExpr( bb, stmt, stmt->Ist.WrTmp.data, gWordTy );
if (typeOfIRTemp(tyenv, stmt->Ist.WrTmp.tmp)
!= typeOfIRExpr(tyenv, stmt->Ist.WrTmp.data))
- sanityCheckFail(bb,stmt,"IRStmt.Put.Tmp: tmp and expr do not match");
+ sanityCheckFail(bb,stmt,
+ "IRStmt.Put.Tmp: tmp and expr do not match");
break;
case Ist_Store:
tcExpr( bb, stmt, stmt->Ist.Store.addr, gWordTy );
tcExpr( bb, stmt, stmt->Ist.Store.data, gWordTy );
if (typeOfIRExpr(tyenv, stmt->Ist.Store.addr) != gWordTy)
- sanityCheckFail(bb,stmt,"IRStmt.Store.addr: not :: guest word type");
+ sanityCheckFail(bb,stmt,
+ "IRStmt.Store.addr: not :: guest word type");
if (typeOfIRExpr(tyenv, stmt->Ist.Store.data) == Ity_I1)
- ...
[truncated message content] |
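The guarded-load machinery added to `ir_defs.c` above (IRStmt_LoadG with an ILGop conversion, per `typeOfIRLoadGOp`) can be simulated in plain C. This is a sketch of the intended value transforms under stubbed types, not VEX code:

```c
#include <assert.h>
#include <stdint.h>

/* Stubbed stand-in for the conversion ops named in the diff. */
typedef enum { ILGop_INVALID, ILGop_Ident32,
               ILGop_16Uto32, ILGop_16Sto32,
               ILGop_8Uto32,  ILGop_8Sto32 } IRLoadGOp;

/* Apply an ILGop conversion: the low 8/16/32 bits of the loaded
   value are zero- or sign-extended to 32 bits, matching the
   result/argument types reported by typeOfIRLoadGOp. */
static uint32_t applyILGop(IRLoadGOp cvt, uint32_t loaded)
{
   switch (cvt) {
      case ILGop_Ident32: return loaded;
      case ILGop_16Uto32: return (uint32_t)(uint16_t)loaded;
      case ILGop_16Sto32: return (uint32_t)(int32_t)(int16_t)loaded;
      case ILGop_8Uto32:  return (uint32_t)(uint8_t)loaded;
      case ILGop_8Sto32:  return (uint32_t)(int32_t)(int8_t)loaded;
      default:            return 0;  /* ILGop_INVALID: caller's bug */
   }
}

/* IRStmt_LoadG semantics: if the guard is false the destination
   receives 'alt' unchanged; otherwise the converted loaded value. */
static uint32_t loadG(int guard, IRLoadGOp cvt,
                      const uint32_t* addr, uint32_t alt)
{
   if (!guard)
      return alt;
   return applyILGop(cvt, *addr);
}
```

The strictness noted by the `if-strict` pretty-printing above means `addr` is still evaluated (and may still fault) regardless of the guard; only the architectural effect is conditional.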
|
From: Rich C. <rc...@wi...> - 2013-01-17 13:44:43
|
I would take the file Heap.cpp and construct a small test case, main.cpp, that initializes Heap exactly like your failing program. Cut out everything that is not relevant to the problem. Get this new code to fail with valgrind. Then send the source code to the list and the steps to reproduce the problem, so we can see what you see.

Rich

On Fri, 11 Jan 2013 12:56:53 +0530 Muthumeenal Natarajan <mna...@ai...> wrote:
> Hi,
>
> Please find attached the Mem stats output from valgrind; I have attached both versions (one without the patch and one with the patch that helps to increase the memory, from http://sourceforge.net/mailarchive/message.php?msg_id=30299697).
>
> The memory stats look the same in both cases, and it is not about increasing the memory, as we allocate very little (22MB), which terminates the process while running with valgrind.
>
> Can you please redirect us if you have already faced similar issues, or if there is any setting which can bypass this, because we may not be able to reduce this size, as we need it for our project.
>
> Thanks,
> Meenal
>
> From: Kalaivani R [mailto:kal...@gm...]
> Sent: Friday, January 11, 2013 11:05 AM
> To: Muthumeenal Natarajan
> Subject: Fwd: [Valgrind-developers] Need help : to prevent valgrind from terminating due to SIGSEGV
>
> ---------- Forwarded message ----------
> From: Philippe Waroquiers <phi...@sk...<mailto:phi...@sk...>>
> Date: Fri, Jan 11, 2013 at 1:02 AM
> Subject: Re: [Valgrind-developers] Need help : to prevent valgrind from terminating due to SIGSEGV
> To: Kalaivani R <kal...@gm...<mailto:kal...@gm...>>
>
> On Fri, 2013-01-11 at 00:47 +0530, Kalaivani R wrote:
> > Today we tried to reduce the amount of memory allocated from HEAP for
> > our binary by reducing the allocations, and then valgrind seems to
> > work fine with the same executable.
> So, this seems to point at using a lot of memory.
>
> Which version of Valgrind are you using?
> Better to use the last released version (3.8.1), as there were
> some memory-use improvements in recent versions.
>
> > Is there a way to figure out how much memory valgrind is consuming
> > while running with our exe?
> To see what is going on, restart your program giving the flags
> --stats=yes --profile-heap=yes to Valgrind.
> With this, Valgrind will report various detailed statistics about
> memory usage (for the client process, and for the Valgrind internals).
> The memory of Valgrind is divided in "arenas".
> Post the last statistics for each arena.
> Typically, one arena stat looks like:
> -------- Arena "core": 1048576/1048576 max/curr mmap'd, 0/0 unsplit/split sb unmmap'd, 112800/112800 max/curr on_loan 4 rzB --------
> 16 in 1: stacks.rs.1
> 40 in 1: gdbserved_watches
> 72 in 1: main.mpclo.3
> 1,008 in 71: errormgr.sLTy.1
> 2,592 in 81: errormgr.losf.1
> 2,880 in 81: errormgr.losf.2
> 3,216 in 81: errormgr.losf.4
> 4,440 in 175: errormgr.sLTy.2
> 33,000 in 6: gdbsrv
> 65,536 in 1: di.syswrap-x86.azxG.1
>
> > Is this patch required if more heap memory is consumed?
> It should (or could?) help. To be confirmed based on the above stats.
>
> Philippe
>
> --
> Cheers,
> Kalai

--
Rich Coe
rc...@wi...
|
|
From: Philippe W. <phi...@sk...> - 2013-01-17 05:03:09
|
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2)
Assembler: GNU assembler version 2.21.53.0.1-6.fc16 20110716
C library: GNU C Library development release version 2.14.90
uname -mrs: Linux 3.3.1-3.fc16.ppc64 ppc64
Vendor version: Fedora release 16 (Verne)
Nightly build on gcc110 ( Fedora release 16 (Verne), ppc64 )
Started at 2013-01-16 20:00:33 PST
Ended at 2013-01-16 21:01:20 PST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 547 tests, 10 stderr failures, 5 stdout failures, 1 stderrB failure, 1 stdoutB failure, 2 post failures ==
gdbserver_tests/mcmain_pic (stdout)
gdbserver_tests/mcmain_pic (stderr)
gdbserver_tests/mcmain_pic (stdoutB)
gdbserver_tests/mcmain_pic (stderrB)
memcheck/tests/linux/getregset (stdout)
memcheck/tests/linux/getregset (stderr)
memcheck/tests/supp_unknown (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/vbit-test/vbit-test (stderr)
memcheck/tests/wrap8 (stdout)
memcheck/tests/wrap8 (stderr)
massif/tests/big-alloc (post)
massif/tests/deep-D (post)
none/tests/ppc32/test_dfp2 (stdout)
none/tests/ppc32/test_dfp2 (stderr)
none/tests/ppc64/test_dfp2 (stdout)
none/tests/ppc64/test_dfp2 (stderr)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
|
|
From: Tom H. <to...@co...> - 2013-01-17 04:29:26
|
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.3.0 20080428 (Red Hat 4.3.0-8)
Assembler: GNU assembler version 2.18.50.0.6-2 20080403
C library: GNU C Library stable release version 2.8
uname -mrs: Linux 3.7.1-5.fc18.x86_64 x86_64
Vendor version: Fedora release 9 (Sulphur)
Nightly build on bristol ( x86_64, Fedora 9 )
Started at 2013-01-17 03:42:05 GMT
Ended at 2013-01-17 04:29:12 GMT
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 622 tests, 1 stderr failure, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/amd64/insn-pcmpistri (stderr)
none/tests/amd64/sse4-64 (stdout)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 622 tests, 2 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/amd64/insn-pcmpistri (stderr)
memcheck/tests/leak-pool-2 (stderr)
none/tests/amd64/sse4-64 (stdout)
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short 2013-01-17 04:07:42.120518240 +0000
--- new.short 2013-01-17 04:29:12.986908631 +0000
***************
*** 8,12 ****
! == 622 tests, 2 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
  memcheck/tests/amd64/insn-pcmpistri (stderr)
- memcheck/tests/leak-pool-2 (stderr)
  none/tests/amd64/sse4-64 (stdout)
--- 8,11 ----
! == 622 tests, 1 stderr failure, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
  memcheck/tests/amd64/insn-pcmpistri (stderr)
  none/tests/amd64/sse4-64 (stdout)
|
|
From: Rich C. <rc...@wi...> - 2013-01-17 04:18:55
|
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (SUSE Linux) 4.7.1 20120723 [gcc-4_7-branch revision 189773]
Assembler: GNU assembler (GNU Binutils; openSUSE 12.2) 2.22
C library: GNU C Library stable release version 2.15 (20120628)
uname -mrs: Linux 3.4.11-2.16-desktop x86_64
Vendor version: Welcome to openSUSE 12.2 "Mantis" - Kernel %r (%t).
Nightly build on ultra ( gcc 4.5.1 Linux 3.4.11-2.16-desktop x86_64 )
Started at 2013-01-16 21:30:01 CST
Ended at 2013-01-16 22:18:44 CST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 639 tests, 3 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
gdbserver_tests/mcinfcallRU (stderr)
gdbserver_tests/mcinfcallWSRU (stderr)
memcheck/tests/origin5-bz2 (stderr)
=================================================
./valgrind-new/gdbserver_tests/mcinfcallRU.stderr.diff
=================================================
--- mcinfcallRU.stderr.exp 2013-01-16 22:00:11.160368401 -0600
+++ mcinfcallRU.stderr.out 2013-01-16 22:09:46.467225108 -0600
@@ -1,4 +1,12 @@
loops/sleep_ms/burn/threads_spec: 1 0 2000000000 ------B-
main ready to sleep and/or burn
+vex amd64->IR: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........
+vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
+vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x........ ESC=NONE
+vex amd64->IR: PFX.66=0 PFX.F2=0 PFX.F3=0
pid .... Thread .... inferior call pushed from gdb in mcinfcallRU.stdinB.gdb
+vex amd64->IR: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........
+vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
+vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x........ ESC=NONE
+vex amd64->IR: PFX.66=0 PFX.F2=0 PFX.F3=0
Reset valgrind output to log (orderly_finish)
=================================================
./valgrind-new/gdbserver_tests/mcinfcallWSRU.stderr.diff
=================================================
--- mcinfcallWSRU.stderr.exp 2013-01-16 22:00:11.162368299 -0600
+++ mcinfcallWSRU.stderr.out 2013-01-16 22:09:48.978154544 -0600
@@ -3,5 +3,13 @@
London ready to sleep and/or burn
Petaouchnok ready to sleep and/or burn
main ready to sleep and/or burn
+vex amd64->IR: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........
+vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
+vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x........ ESC=NONE
+vex amd64->IR: PFX.66=0 PFX.F2=0 PFX.F3=0
pid .... Thread .... thread 1 inferior call pushed from gdb in mcinfcallWSRU.stdinB.gdb
+vex amd64->IR: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........
+vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
+vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x........ ESC=NONE
+vex amd64->IR: PFX.66=0 PFX.F2=0 PFX.F3=0
Reset valgrind output to log (orderly_finish)
=================================================
./valgrind-new/memcheck/tests/origin5-bz2.stderr.diff-glibc212-s390x
=================================================
--- origin5-bz2.stderr.exp-glibc212-s390x 2013-01-16 22:01:22.193377879 -0600
+++ origin5-bz2.stderr.out 2013-01-16 22:11:40.602017752 -0600
@@ -75,17 +75,6 @@
at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
- at 0x........: mainSort (origin5-bz2.c:2859)
- by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
- by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
- by 0x........: handle_compress (origin5-bz2.c:4753)
- by 0x........: BZ2_bzCompress (origin5-bz2.c:4822)
- by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
- by 0x........: main (origin5-bz2.c:6484)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
-
-Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2963)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -131,6 +120,12 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: g_serviceFn (origin5-bz2.c:6429)
+ by 0x........: default_bzalloc (origin5-bz2.c:4470)
+ by 0x........: BZ2_decompress (origin5-bz2.c:1578)
+ by 0x........: BZ2_bzDecompress (origin5-bz2.c:5192)
+ by 0x........: BZ2_bzBuffToBuffDecompress (origin5-bz2.c:5678)
+ by 0x........: main (origin5-bz2.c:6498)
=================================================
./valgrind-new/memcheck/tests/origin5-bz2.stderr.diff-glibc234-s390x
=================================================
--- origin5-bz2.stderr.exp-glibc234-s390x 2013-01-16 22:01:12.868639201 -0600
+++ origin5-bz2.stderr.out 2013-01-16 22:11:40.602017752 -0600
@@ -120,6 +120,12 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: g_serviceFn (origin5-bz2.c:6429)
+ by 0x........: default_bzalloc (origin5-bz2.c:4470)
+ by 0x........: BZ2_decompress (origin5-bz2.c:1578)
+ by 0x........: BZ2_bzDecompress (origin5-bz2.c:5192)
+ by 0x........: BZ2_bzBuffToBuffDecompress (origin5-bz2.c:5678)
+ by 0x........: main (origin5-bz2.c:6498)
=================================================
./valgrind-new/memcheck/tests/origin5-bz2.stderr.diff-glibc25-amd64
=================================================
--- origin5-bz2.stderr.exp-glibc25-amd64 2013-01-16 22:00:19.052146989 -0600
+++ origin5-bz2.stderr.out 2013-01-16 22:11:40.602017752 -0600
@@ -120,6 +120,12 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: g_serviceFn (origin5-bz2.c:6429)
+ by 0x........: default_bzalloc (origin5-bz2.c:4470)
+ by 0x........: BZ2_decompress (origin5-bz2.c:1578)
+ by 0x........: BZ2_bzDecompress (origin5-bz2.c:5192)
+ by 0x........: BZ2_bzBuffToBuffDecompress (origin5-bz2.c:5678)
+ by 0x........: main (origin5-bz2.c:6498)
=================================================
./valgrind-new/memcheck/tests/origin5-bz2.stderr.diff-glibc25-x86
=================================================
--- origin5-bz2.stderr.exp-glibc25-x86 2013-01-16 22:00:50.276272933 -0600
+++ origin5-bz2.stderr.out 2013-01-16 22:11:40.602017752 -0600
@@ -12,7 +12,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
by 0x........: handle_compress (origin5-bz2.c:4750)
by 0x........: BZ2_bzCompress (origin5-bz2.c:4822)
@@ -21,7 +21,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
by 0x........: handle_compress (origin5-bz2.c:4750)
by 0x........: BZ2_bzCompress (origin5-bz2.c:4822)
@@ -30,7 +30,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2820)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -41,7 +41,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2823)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -52,7 +52,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2854)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -63,7 +63,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2858)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -74,7 +74,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2963)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -85,7 +85,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2964)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -96,7 +96,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: fallbackSort (origin5-bz2.c:2269)
by 0x........: BZ2_blockSort (origin5-bz2.c:3116)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -107,7 +107,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: fallbackSort (origin5-bz2.c:2275)
by 0x........: BZ2_blockSort (origin5-bz2.c:3116)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -120,6 +120,12 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
<truncated beyond 100 lines>
=================================================
./valgrind-new/memcheck/tests/origin5-bz2.stderr.diff-glibc27-ppc64
=================================================
--- origin5-bz2.stderr.exp-glibc27-ppc64 2013-01-16 22:00:59.127024203 -0600
+++ origin5-bz2.stderr.out 2013-01-16 22:11:40.602017752 -0600
@@ -1,7 +1,7 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6481)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Conditional jump or move depends on uninitialised value(s)
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
@@ -10,7 +10,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
@@ -19,7 +19,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
@@ -28,7 +28,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2820)
@@ -39,7 +39,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2823)
@@ -50,7 +50,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2854)
@@ -61,7 +61,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2858)
@@ -72,7 +72,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2963)
@@ -83,7 +83,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2964)
@@ -94,7 +94,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: fallbackSort (origin5-bz2.c:2269)
@@ -105,7 +105,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
<truncated beyond 100 lines>
=================================================
./valgrind-old/gdbserver_tests/mcinfcallRU.stderr.diff
=================================================
--- mcinfcallRU.stderr.exp 2013-01-16 21:33:27.954237937 -0600
+++ mcinfcallRU.stderr.out 2013-01-16 21:49:36.852144440 -0600
@@ -1,4 +1,12 @@
loops/sleep_ms/burn/threads_spec: 1 0 2000000000 ------B-
main ready to sleep and/or burn
+vex amd64->IR: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........
+vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
+vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x........ ESC=NONE
+vex amd64->IR: PFX.66=0 PFX.F2=0 PFX.F3=0
pid .... Thread .... inferior call pushed from gdb in mcinfcallRU.stdinB.gdb
+vex amd64->IR: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........
+vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
+vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x........ ESC=NONE
+vex amd64->IR: PFX.66=0 PFX.F2=0 PFX.F3=0
Reset valgrind output to log (orderly_finish)
=================================================
./valgrind-old/gdbserver_tests/mcinfcallWSRU.stderr.diff
=================================================
--- mcinfcallWSRU.stderr.exp 2013-01-16 21:33:27.956237881 -0600
+++ mcinfcallWSRU.stderr.out 2013-01-16 21:49:39.433072093 -0600
@@ -3,5 +3,13 @@
London ready to sleep and/or burn
Petaouchnok ready to sleep and/or burn
main ready to sleep and/or burn
+vex amd64->IR: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........
+vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
+vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x........ ESC=NONE
+vex amd64->IR: PFX.66=0 PFX.F2=0 PFX.F3=0
pid .... Thread .... thread 1 inferior call pushed from gdb in mcinfcallWSRU.stdinB.gdb
+vex amd64->IR: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........ 0x........
+vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
+vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x........ ESC=NONE
+vex amd64->IR: PFX.66=0 PFX.F2=0 PFX.F3=0
Reset valgrind output to log (orderly_finish)
=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc212-s390x
=================================================
--- origin5-bz2.stderr.exp-glibc212-s390x 2013-01-16 21:38:37.808576378 -0600
+++ origin5-bz2.stderr.out 2013-01-16 21:51:28.690009589 -0600
@@ -75,17 +75,6 @@
at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
- at 0x........: mainSort (origin5-bz2.c:2859)
- by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
- by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
- by 0x........: handle_compress (origin5-bz2.c:4753)
- by 0x........: BZ2_bzCompress (origin5-bz2.c:4822)
- by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
- by 0x........: main (origin5-bz2.c:6484)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
-
-Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2963)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -131,6 +120,12 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: g_serviceFn (origin5-bz2.c:6429)
+ by 0x........: default_bzalloc (origin5-bz2.c:4470)
+ by 0x........: BZ2_decompress (origin5-bz2.c:1578)
+ by 0x........: BZ2_bzDecompress (origin5-bz2.c:5192)
+ by 0x........: BZ2_bzBuffToBuffDecompress (origin5-bz2.c:5678)
+ by 0x........: main (origin5-bz2.c:6498)
=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc234-s390x
=================================================
--- origin5-bz2.stderr.exp-glibc234-s390x 2013-01-16 21:38:21.991018803 -0600
+++ origin5-bz2.stderr.out 2013-01-16 21:51:28.690009589 -0600
@@ -120,6 +120,12 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: g_serviceFn (origin5-bz2.c:6429)
+ by 0x........: default_bzalloc (origin5-bz2.c:4470)
+ by 0x........: BZ2_decompress (origin5-bz2.c:1578)
+ by 0x........: BZ2_bzDecompress (origin5-bz2.c:5192)
+ by 0x........: BZ2_bzBuffToBuffDecompress (origin5-bz2.c:5678)
+ by 0x........: main (origin5-bz2.c:6498)
=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc25-amd64
=================================================
--- origin5-bz2.stderr.exp-glibc25-amd64 2013-01-16 21:34:10.167057862 -0600
+++ origin5-bz2.stderr.out 2013-01-16 21:51:28.690009589 -0600
@@ -120,6 +120,12 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: g_serviceFn (origin5-bz2.c:6429)
+ by 0x........: default_bzalloc (origin5-bz2.c:4470)
+ by 0x........: BZ2_decompress (origin5-bz2.c:1578)
+ by 0x........: BZ2_bzDecompress (origin5-bz2.c:5192)
+ by 0x........: BZ2_bzBuffToBuffDecompress (origin5-bz2.c:5678)
+ by 0x........: main (origin5-bz2.c:6498)
=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc25-x86
=================================================
--- origin5-bz2.stderr.exp-glibc25-x86 2013-01-16 21:37:39.827196695 -0600
+++ origin5-bz2.stderr.out 2013-01-16 21:51:28.690009589 -0600
@@ -12,7 +12,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
by 0x........: handle_compress (origin5-bz2.c:4750)
by 0x........: BZ2_bzCompress (origin5-bz2.c:4822)
@@ -21,7 +21,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
by 0x........: handle_compress (origin5-bz2.c:4750)
by 0x........: BZ2_bzCompress (origin5-bz2.c:4822)
@@ -30,7 +30,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2820)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -41,7 +41,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2823)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -52,7 +52,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2854)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -63,7 +63,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2858)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -74,7 +74,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2963)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -85,7 +85,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2964)
by 0x........: BZ2_blockSort (origin5-bz2.c:3105)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -96,7 +96,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: fallbackSort (origin5-bz2.c:2269)
by 0x........: BZ2_blockSort (origin5-bz2.c:3116)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -107,7 +107,7 @@
Uninitialised value was created by a client request
at 0x........: main (origin5-bz2.c:6479)
-Use of uninitialised value of size 4
+Use of uninitialised value of size 8
at 0x........: fallbackSort (origin5-bz2.c:2275)
by 0x........: BZ2_blockSort (origin5-bz2.c:3116)
by 0x........: BZ2_compressBlock (origin5-bz2.c:4034)
@@ -120,6 +120,12 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6512)
- Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6479)
+ Uninitialised value was created by a heap allocation
+ at 0x........: malloc (vg_replace_malloc.c:...)
<truncated beyond 100 lines>
=================================================
./valgrind-old/memcheck/tests/origin5-bz2.stderr.diff-glibc27-ppc64
=================================================
--- origin5-bz2.stderr.exp-glibc27-ppc64 2013-01-16 21:38:07.478424177 -0600
+++ origin5-bz2.stderr.out 2013-01-16 21:51:28.690009589 -0600
@@ -1,7 +1,7 @@
Conditional jump or move depends on uninitialised value(s)
at 0x........: main (origin5-bz2.c:6481)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Conditional jump or move depends on uninitialised value(s)
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
@@ -10,7 +10,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
@@ -19,7 +19,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: copy_input_until_stop (origin5-bz2.c:4686)
@@ -28,7 +28,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2820)
@@ -39,7 +39,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2823)
@@ -50,7 +50,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2854)
@@ -61,7 +61,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2858)
@@ -72,7 +72,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2963)
@@ -83,7 +83,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: mainSort (origin5-bz2.c:2964)
@@ -94,7 +94,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
at 0x........: fallbackSort (origin5-bz2.c:2269)
@@ -105,7 +105,7 @@
by 0x........: BZ2_bzBuffToBuffCompress (origin5-bz2.c:5630)
by 0x........: main (origin5-bz2.c:6484)
Uninitialised value was created by a client request
- at 0x........: main (origin5-bz2.c:6481)
+ at 0x........: main (origin5-bz2.c:6479)
Use of uninitialised value of size 8
<truncated beyond 100 lines>
From: Tom H. <to...@co...> - 2013-01-17 04:13:31
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2)
Assembler: GNU assembler version 2.19.51.0.14-3.fc11 20090722
C library: GNU C Library stable release version 2.10.2
uname -mrs: Linux 3.7.1-5.fc18.x86_64 x86_64
Vendor version: Fedora release 11 (Leonidas)

Nightly build on bristol ( x86_64, Fedora 11 )
Started at 2013-01-17 03:31:18 GMT
Ended at 2013-01-17 04:13:18 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 626 tests, 1 stderr failure, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/long_namespace_xml (stderr)
none/tests/amd64/sse4-64 (stdout)
From: Tom H. <to...@co...> - 2013-01-17 04:09:01
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4)
Assembler: GNU assembler version 2.20.51.0.7-8.fc14 20100318
C library: GNU C Library stable release version 2.13
uname -mrs: Linux 3.7.1-5.fc18.x86_64 x86_64
Vendor version: Fedora release 14 (Laughlin)

Nightly build on bristol ( x86_64, Fedora 14 )
Started at 2013-01-17 03:12:40 GMT
Ended at 2013-01-17 04:08:49 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 644 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/origin5-bz2 (stderr)
From: Tom H. <to...@co...> - 2013-01-17 04:08:55
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.4.5 20101112 (Red Hat 4.4.5-2)
Assembler: GNU assembler version 2.20.51.0.2-20.fc13 20091009
C library: GNU C Library stable release version 2.12.2
uname -mrs: Linux 3.7.1-5.fc18.x86_64 x86_64
Vendor version: Fedora release 13 (Goddard)

Nightly build on bristol ( x86_64, Fedora 13 )
Started at 2013-01-17 03:21:41 GMT
Ended at 2013-01-17 04:08:45 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 626 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
helgrind/tests/pth_barrier3 (stderr)
From: Tom H. <to...@co...> - 2013-01-17 03:48:47
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2)
Assembler: GNU assembler version 2.21.51.0.6-6.fc15 20110118
C library: GNU C Library stable release version 2.14.1
uname -mrs: Linux 3.7.1-5.fc18.x86_64 x86_64
Vendor version: Fedora release 15 (Lovelock)

Nightly build on bristol ( x86_64, Fedora 15 )
Started at 2013-01-17 03:03:14 GMT
Ended at 2013-01-17 03:48:34 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 646 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/origin5-bz2 (stderr)
From: Tom H. <to...@co...> - 2013-01-17 03:44:02
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2)
Assembler: GNU assembler version 2.21.53.0.1-6.fc16 20110716
C library: GNU C Library development release version 2.14.90
uname -mrs: Linux 3.7.1-5.fc18.x86_64 x86_64
Vendor version: Fedora release 16 (Verne)

Nightly build on bristol ( x86_64, Fedora 16 )
Started at 2013-01-17 02:52:11 GMT
Ended at 2013-01-17 03:43:49 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 646 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/origin5-bz2 (stderr)
From: Christian B. <bor...@de...> - 2013-01-17 03:13:41
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973]
Assembler: GNU assembler (GNU Binutils; SUSE Linux Enterprise 11) 2.21.1
C library: GNU C Library stable release version 2.11.3 (20110527)
uname -mrs: Linux 3.0.42-0.7-default s390x
Vendor version: Welcome to SUSE Linux Enterprise Server 11 SP2 (s390x) - Kernel %r (%t).

Nightly build on sless390 ( SUSE Linux Enterprise Server 11 SP1 gcc 4.3.4 on z196 (s390x) )
Started at 2013-01-17 03:45:01 CET
Ended at 2013-01-17 04:13:31 CET
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... done

Regression test results follow
== 624 tests, 0 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
From: Tom H. <to...@co...> - 2013-01-17 03:11:59
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2)
Assembler: GNU assembler version 2.22.52.0.1-10.fc17 20120131
C library: GNU C Library stable release version 2.15
uname -mrs: Linux 3.7.1-5.fc18.x86_64 x86_64
Vendor version: Fedora release 17 (Beefy Miracle)

Nightly build on bristol ( x86_64, Fedora 17 (Beefy Miracle) )
Started at 2013-01-17 02:41:24 GMT
Ended at 2013-01-17 03:11:44 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 646 tests, 5 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
gdbserver_tests/mcinfcallRU (stderr)
gdbserver_tests/mcinfcallWSRU (stderr)
gdbserver_tests/mcmain_pic (stderr)
memcheck/tests/origin5-bz2 (stderr)
exp-sgcheck/tests/preen_invars (stdout)
exp-sgcheck/tests/preen_invars (stderr)
From: Christian B. <bor...@de...> - 2013-01-17 03:06:33
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.6.1 20110908 (Red Hat 4.6.1-9bb4)
Assembler: GNU assembler version 2.21.51.0.6-6bb6.fc15 20110118
C library: GNU C Library stable release version 2.14.1
uname -mrs: Linux 3.7.2-57.x.20130114-s390xperformance s390x
Vendor version: unknown

Nightly build on fedora390 ( Fedora 15 with devel libc/toolchain on z196 (s390x) )
Started at 2013-01-17 03:45:01 CET
Ended at 2013-01-17 04:06:39 CET
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 625 tests, 2 stderr failures, 0 stdout failures, 6 stderrB failures, 0 stdoutB failures, 0 post failures ==
gdbserver_tests/mcbreak (stderrB)
gdbserver_tests/mcclean_after_fork (stderrB)
gdbserver_tests/mcleak (stderrB)
gdbserver_tests/mcmain_pic (stderrB)
gdbserver_tests/mcvabits (stderrB)
gdbserver_tests/mssnapshot (stderrB)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
From: Tom H. <to...@co...> - 2013-01-17 03:03:26
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.7.2 20121109 (Red Hat 4.7.2-8)
Assembler: GNU assembler version 2.23.51.0.1-3.fc18 20120806
C library: GNU C Library stable release version 2.16
uname -mrs: Linux 3.7.1-5.fc18.x86_64 x86_64
Vendor version: Fedora release 18 (Spherical Cow)

Nightly build on bristol ( x86_64, Fedora 18 (Spherical Cow) )
Started at 2013-01-17 02:32:19 GMT
Ended at 2013-01-17 03:03:09 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 646 tests, 2 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/origin5-bz2 (stderr)
exp-sgcheck/tests/preen_invars (stdout)
exp-sgcheck/tests/preen_invars (stderr)
From: Tom H. <to...@co...> - 2013-01-17 02:47:16
valgrind revision: 13235
VEX revision: 2641
C compiler: gcc (GCC) 4.7.2 20121109 (Red Hat 4.7.2-9)
Assembler: GNU assembler version 2.23.51.0.8-2.fc19 20121218
C library: GNU C Library (GNU libc) stable release version 2.17
uname -mrs: Linux 3.7.1-5.fc18.x86_64 x86_64
Vendor version: Fedora release 19 (Rawhide)

Nightly build on bristol ( x86_64, Fedora 19 )
Started at 2013-01-17 02:21:43 GMT
Ended at 2013-01-17 02:47:01 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 646 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==
memcheck/tests/origin5-bz2 (stderr)