You can subscribe to this list here.
| 2002 |
Jan
|
Feb
|
Mar
|
Apr
|
May
|
Jun
|
Jul
|
Aug
|
Sep
(1) |
Oct
(122) |
Nov
(152) |
Dec
(69) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2003 |
Jan
(6) |
Feb
(25) |
Mar
(73) |
Apr
(82) |
May
(24) |
Jun
(25) |
Jul
(10) |
Aug
(11) |
Sep
(10) |
Oct
(54) |
Nov
(203) |
Dec
(182) |
| 2004 |
Jan
(307) |
Feb
(305) |
Mar
(430) |
Apr
(312) |
May
(187) |
Jun
(342) |
Jul
(487) |
Aug
(637) |
Sep
(336) |
Oct
(373) |
Nov
(441) |
Dec
(210) |
| 2005 |
Jan
(385) |
Feb
(480) |
Mar
(636) |
Apr
(544) |
May
(679) |
Jun
(625) |
Jul
(810) |
Aug
(838) |
Sep
(634) |
Oct
(521) |
Nov
(965) |
Dec
(543) |
| 2006 |
Jan
(494) |
Feb
(431) |
Mar
(546) |
Apr
(411) |
May
(406) |
Jun
(322) |
Jul
(256) |
Aug
(401) |
Sep
(345) |
Oct
(542) |
Nov
(308) |
Dec
(481) |
| 2007 |
Jan
(427) |
Feb
(326) |
Mar
(367) |
Apr
(255) |
May
(244) |
Jun
(204) |
Jul
(223) |
Aug
(231) |
Sep
(354) |
Oct
(374) |
Nov
(497) |
Dec
(362) |
| 2008 |
Jan
(322) |
Feb
(482) |
Mar
(658) |
Apr
(422) |
May
(476) |
Jun
(396) |
Jul
(455) |
Aug
(267) |
Sep
(280) |
Oct
(253) |
Nov
(232) |
Dec
(304) |
| 2009 |
Jan
(486) |
Feb
(470) |
Mar
(458) |
Apr
(423) |
May
(696) |
Jun
(461) |
Jul
(551) |
Aug
(575) |
Sep
(134) |
Oct
(110) |
Nov
(157) |
Dec
(102) |
| 2010 |
Jan
(226) |
Feb
(86) |
Mar
(147) |
Apr
(117) |
May
(107) |
Jun
(203) |
Jul
(193) |
Aug
(238) |
Sep
(300) |
Oct
(246) |
Nov
(23) |
Dec
(75) |
| 2011 |
Jan
(133) |
Feb
(195) |
Mar
(315) |
Apr
(200) |
May
(267) |
Jun
(293) |
Jul
(353) |
Aug
(237) |
Sep
(278) |
Oct
(611) |
Nov
(274) |
Dec
(260) |
| 2012 |
Jan
(303) |
Feb
(391) |
Mar
(417) |
Apr
(441) |
May
(488) |
Jun
(655) |
Jul
(590) |
Aug
(610) |
Sep
(526) |
Oct
(478) |
Nov
(359) |
Dec
(372) |
| 2013 |
Jan
(467) |
Feb
(226) |
Mar
(391) |
Apr
(281) |
May
(299) |
Jun
(252) |
Jul
(311) |
Aug
(352) |
Sep
(481) |
Oct
(571) |
Nov
(222) |
Dec
(231) |
| 2014 |
Jan
(185) |
Feb
(329) |
Mar
(245) |
Apr
(238) |
May
(281) |
Jun
(399) |
Jul
(382) |
Aug
(500) |
Sep
(579) |
Oct
(435) |
Nov
(487) |
Dec
(256) |
| 2015 |
Jan
(338) |
Feb
(357) |
Mar
(330) |
Apr
(294) |
May
(191) |
Jun
(108) |
Jul
(142) |
Aug
(261) |
Sep
(190) |
Oct
(54) |
Nov
(83) |
Dec
(22) |
| 2016 |
Jan
(49) |
Feb
(89) |
Mar
(33) |
Apr
(50) |
May
(27) |
Jun
(34) |
Jul
(53) |
Aug
(53) |
Sep
(98) |
Oct
(206) |
Nov
(93) |
Dec
(53) |
| 2017 |
Jan
(65) |
Feb
(82) |
Mar
(102) |
Apr
(86) |
May
(187) |
Jun
(67) |
Jul
(23) |
Aug
(93) |
Sep
(65) |
Oct
(45) |
Nov
(35) |
Dec
(17) |
| 2018 |
Jan
(26) |
Feb
(35) |
Mar
(38) |
Apr
(32) |
May
(8) |
Jun
(43) |
Jul
(27) |
Aug
(30) |
Sep
(43) |
Oct
(42) |
Nov
(38) |
Dec
(67) |
| 2019 |
Jan
(32) |
Feb
(37) |
Mar
(53) |
Apr
(64) |
May
(49) |
Jun
(18) |
Jul
(14) |
Aug
(53) |
Sep
(25) |
Oct
(30) |
Nov
(49) |
Dec
(31) |
| 2020 |
Jan
(87) |
Feb
(45) |
Mar
(37) |
Apr
(51) |
May
(99) |
Jun
(36) |
Jul
(11) |
Aug
(14) |
Sep
(20) |
Oct
(24) |
Nov
(40) |
Dec
(23) |
| 2021 |
Jan
(14) |
Feb
(53) |
Mar
(85) |
Apr
(15) |
May
(19) |
Jun
(3) |
Jul
(14) |
Aug
(1) |
Sep
(57) |
Oct
(73) |
Nov
(56) |
Dec
(22) |
| 2022 |
Jan
(3) |
Feb
(22) |
Mar
(6) |
Apr
(55) |
May
(46) |
Jun
(39) |
Jul
(15) |
Aug
(9) |
Sep
(11) |
Oct
(34) |
Nov
(20) |
Dec
(36) |
| 2023 |
Jan
(79) |
Feb
(41) |
Mar
(99) |
Apr
(169) |
May
(48) |
Jun
(16) |
Jul
(16) |
Aug
(57) |
Sep
(19) |
Oct
|
Nov
|
Dec
|
| S | M | T | W | T | F | S |
|---|---|---|---|---|---|---|
|
|
|
|
|
|
1
(1) |
2
|
|
3
|
4
|
5
(5) |
6
|
7
(3) |
8
|
9
|
|
10
|
11
|
12
(4) |
13
(1) |
14
|
15
|
16
|
|
17
|
18
|
19
|
20
|
21
(3) |
22
|
23
|
|
24
|
25
|
26
|
27
|
28
|
29
|
30
|
|
31
|
|
|
|
|
|
|
|
From: Mineeva, T. A <tat...@in...> - 2017-12-05 14:37:33
|
Ivo, Thank you, it resolved the issue. -----Original Message----- From: Ivo Raisr [mailto:iv...@iv...] Sent: Tuesday, December 5, 2017 5:28 PM To: Mineeva, Tatyana A <tat...@in...> Cc: phi...@sk...; val...@li... Subject: Re: [Valgrind-developers] Vgdb sometimes shows unexpected values for vector registers 2017-12-05 15:16 GMT+01:00 Mineeva, Tatyana A <tat...@in...>: > Hello Philippe, > > (Sorry for a duplicate mail, the first one got rejected by the mailing > list). > > > > I have got an issue with vgdb: sometimes, it seems to show the values > of vector registers incorrectly. > > > > Please find a small reproducer (vgdb_ymm_test.c) attached. The > reproducer loads the data into register, sums it up with itself, and > stores the result back to memory. > > Steps I did to reproduce the issue: > > - Compile the reproducer with flags “-O0 -g” > > - Run the reproducer under vgdb (valgrind --tool=none --vgdb=yes > --vgdb-error=0 ./a.out); from another shell, start gdb and attach to > vgdb (target remote | vgdb) Please re-run with '--vgdb=full' instead of '--vgdb=yes'. See http://valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.gdbserver-limitations I. -------------------------------------------------------------------- Joint Stock Company Intel A/O Registered legal address: Krylatsky Hills Business Park, 17 Krylatskaya Str., Bldg 4, Moscow 121614, Russian Federation This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. |
|
From: Ivo R. <iv...@iv...> - 2017-12-05 14:28:02
|
2017-12-05 15:16 GMT+01:00 Mineeva, Tatyana A <tat...@in...>: > Hello Philippe, > > (Sorry for a duplicate mail, the first one got rejected by the mailing > list). > > > > I have got an issue with vgdb: sometimes, it seems to show the values of > vector registers incorrectly. > > > > Please find a small reproducer (vgdb_ymm_test.c) attached. The reproducer > loads the data into register, sums it up with itself, and stores the result > back to memory. > > Steps I did to reproduce the issue: > > - Compile the reproducer with flags “-O0 -g” > > - Run the reproducer under vgdb (valgrind --tool=none --vgdb=yes > --vgdb-error=0 ./a.out); from another shell, start gdb and attach to vgdb > (target remote | vgdb) Please re-run with '--vgdb=full' instead of '--vgdb=yes'. See http://valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.gdbserver-limitations I. |
|
From: Mineeva, T. A <tat...@in...> - 2017-12-05 14:17:34
|
Hello Philippe,
(Sorry for a duplicate mail, the first one got rejected by the mailing list).
I have got an issue with vgdb: sometimes, it seems to show the values of vector registers incorrectly.
Please find a small reproducer (vgdb_ymm_test.c) attached. The reproducer loads the data into register, sums it up with itself, and stores the result back to memory.
Steps I did to reproduce the issue:
- Compile the reproducer with flags "-O0 -g"
- Run the reproducer under vgdb (valgrind --tool=none --vgdb=yes --vgdb-error=0 ./a.out); from another shell, start gdb and attach to vgdb (target remote | vgdb)
- Set breakpoint at line 4, continue to the breakpoint
- Check ymm15. The register is expected to contain the values at this point, but "i r ymm15" returns "v4_int64 = {0x0, 0x0, 0x0, 0x0}"
- Go to the next instruction, check ymm15. vgdb shows the expected values, v4_int64 = {0x0, 0x2, 0x4, 0x6}
Is "v4_int64 = {0x0, 0x0, 0x0, 0x0}" a bug or expected vgdb behavior?
Valgrind version is 3.13.0 (also reproducible on valgrind-3.14.0.GIT), GDB version 8.0.1, GCC version 7.1.0; OS - CentOS Linux release 7.2.1511.
Thank you,
Tanya
--------------------------------------------------------------------
Joint Stock Company Intel A/O
Registered legal address: Krylatsky Hills Business Park,
17 Krylatskaya Str., Bldg 4, Moscow 121614,
Russian Federation
This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
|
|
From: Julian S. <se...@so...> - 2017-12-05 11:39:29
|
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=40f0364e1e4c9d1d4d34c99a22ff775b5199974b commit 40f0364e1e4c9d1d4d34c99a22ff775b5199974b Author: Julian Seward <js...@ac...> Date: Tue Dec 5 12:35:09 2017 +0100 amd64: Add a new spec rule for SUBL then Cond{B,NB} in the case where the RHS is a constant power of two. LLVM 5.0 appears to have started generating such constructions in order to find out whether the top N bits of a value are all zero. This currently generates Iop_CmpLE32U on partially uninitialised data, causing false positives in Memcheck. It seems simplest and most efficient to remove such constructions at this point. Diff: --- VEX/priv/guest_amd64_helpers.c | 66 +++++++++++++++++++++++++++++++++++++++--- 1 file changed, 62 insertions(+), 4 deletions(-) diff --git a/VEX/priv/guest_amd64_helpers.c b/VEX/priv/guest_amd64_helpers.c index d69d110..e3bfffa 100644 --- a/VEX/priv/guest_amd64_helpers.c +++ b/VEX/priv/guest_amd64_helpers.c @@ -1006,11 +1006,34 @@ LibVEX_GuestAMD64_put_rflag_c ( ULong new_carry_flag, /* Used by the optimiser to try specialisations. Returns an equivalent expression, or NULL if none. */ -static Bool isU64 ( IRExpr* e, ULong n ) +static inline Bool isU64 ( IRExpr* e, ULong n ) { - return toBool( e->tag == Iex_Const - && e->Iex.Const.con->tag == Ico_U64 - && e->Iex.Const.con->Ico.U64 == n ); + return e->tag == Iex_Const + && e->Iex.Const.con->tag == Ico_U64 + && e->Iex.Const.con->Ico.U64 == n; +} + +/* Returns N if E is an immediate of the form 1 << N for N in 1 to 31, + and zero in any other case. */ +static Int isU64_1_shl_N ( IRExpr* e ) +{ + if (e->tag != Iex_Const || e->Iex.Const.con->tag != Ico_U64) + return 0; + ULong w64 = e->Iex.Const.con->Ico.U64; + if (w64 < (1ULL << 1) || w64 > (1ULL << 31)) + return 0; + if ((w64 & (w64 - 1)) != 0) + return 0; + /* At this point, we know w64 is a power of two in the range 2^1 .. 2^31, + and we only need to find out which one it is. */ + for (Int n = 1; n <= 31; n++) { + if (w64 == (1ULL << n)) + return n; + } + /* Consequently we should never get here. */ + /*UNREACHED*/ + vassert(0); + return 0; } IRExpr* guest_amd64_spechelper ( const HChar* function_name, @@ -1231,6 +1254,41 @@ IRExpr* guest_amd64_spechelper ( const HChar* function_name, } /* 2, 3 */ + { + /* It appears that LLVM 5.0 and later have a new way to find out + whether the top N bits of a word W are all zero, by computing + + W <u 0---(N-1)---0 1 0---0 + + In particular, the result will be defined if the top N bits of W + are defined, even if the trailing bits -- those corresponding to + the 0---0 section -- are undefined. Rather than make Memcheck + more complex, we detect this case where we can and shift out the + irrelevant and potentially undefined bits. */ + Int n = 0; + if (isU64(cc_op, AMD64G_CC_OP_SUBL) + && (isU64(cond, AMD64CondB) || isU64(cond, AMD64CondNB)) + && (n = isU64_1_shl_N(cc_dep2)) > 0) { + /* long sub/cmp, then B (unsigned less than), + where dep2 is a power of 2: + -> CmpLT32(dep1, 1 << N) + -> CmpEQ32(dep1 >>u N, 0) + and + long sub/cmp, then NB (unsigned greater than or equal), + where dep2 is a power of 2: + -> CmpGE32(dep1, 1 << N) + -> CmpNE32(dep1 >>u N, 0) + This avoids CmpLT32U/CmpGE32U being applied to potentially + uninitialised bits in the area being shifted out. */ + vassert(n >= 1 && n <= 31); + Bool isNB = isU64(cond, AMD64CondNB); + return unop(Iop_1Uto64, + binop(isNB ? Iop_CmpNE32 : Iop_CmpEQ32, + binop(Iop_Shr32, unop(Iop_64to32, cc_dep1), + mkU8(n)), + mkU32(0))); + } + } if (isU64(cc_op, AMD64G_CC_OP_SUBL) && isU64(cond, AMD64CondB)) { /* long sub/cmp, then B (unsigned less than) --> test dst <u src */ |
|
From: Julian S. <se...@so...> - 2017-12-05 11:08:33
|
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=ad92845f6b1c4524ca2d20a58ad589e530bd5239 commit ad92845f6b1c4524ca2d20a58ad589e530bd5239 Author: Julian Seward <js...@ac...> Date: Tue Dec 5 12:04:17 2017 +0100 Rearrange sections in mc_translate.c. No functional change. Rearrange big sections in mc_translate.c, so that the "main" instrumentation function is at the end of the file rather than in the middle. The previous layout never made much sense. The new layout is, roughly: * stuff for baseline (level 2, non-origin tracking) instrumentation * stuff for origin tracking (level 3) instrumentation * the "final tidying" pass * the main instrumentation function (and soon, a new pre-instrumentation analysis pass) Diff: --- memcheck/mc_translate.c | 2291 ++++++++++++++++++++++++----------------------- 1 file changed, 1146 insertions(+), 1145 deletions(-) diff --git a/memcheck/mc_translate.c b/memcheck/mc_translate.c index 9d4f651..4fcf34b 100644 --- a/memcheck/mc_translate.c +++ b/memcheck/mc_translate.c @@ -131,6 +131,7 @@ Ist_Store, IRLoadG, IRStoreG, LLSC, CAS and Dirty memory loads/stores) was re-checked 11 May 2013. */ + /*------------------------------------------------------------*/ /*--- Forward decls ---*/ /*------------------------------------------------------------*/ @@ -143,6 +144,7 @@ static IRTemp findShadowTmpB ( struct _MCEnv* mce, IRTemp orig ); static IRExpr *i128_const_zero(void); + /*------------------------------------------------------------*/ /*--- Memcheck running state, and tmp management. ---*/ /*------------------------------------------------------------*/ @@ -5131,6 +5133,7 @@ IRExpr* expr2vbits ( MCEnv* mce, IRExpr* e ) } } + /*------------------------------------------------------------*/ /*--- Generate shadow stmts from all kinds of IRStmts. ---*/ /*------------------------------------------------------------*/ @@ -6294,463 +6297,756 @@ static void do_shadow_LoadG ( MCEnv* mce, IRLoadG* lg ) /*------------------------------------------------------------*/ -/*--- Memcheck main ---*/ +/*--- Origin tracking stuff ---*/ /*------------------------------------------------------------*/ -static void schemeS ( MCEnv* mce, IRStmt* st ); - -static Bool isBogusAtom ( IRAtom* at ) +/* Almost identical to findShadowTmpV. */ +static IRTemp findShadowTmpB ( MCEnv* mce, IRTemp orig ) { - ULong n = 0; - IRConst* con; - tl_assert(isIRAtom(at)); - if (at->tag == Iex_RdTmp) - return False; - tl_assert(at->tag == Iex_Const); - con = at->Iex.Const.con; - switch (con->tag) { - case Ico_U1: return False; - case Ico_U8: n = (ULong)con->Ico.U8; break; - case Ico_U16: n = (ULong)con->Ico.U16; break; - case Ico_U32: n = (ULong)con->Ico.U32; break; - case Ico_U64: n = (ULong)con->Ico.U64; break; - case Ico_F32: return False; - case Ico_F64: return False; - case Ico_F32i: return False; - case Ico_F64i: return False; - case Ico_V128: return False; - case Ico_V256: return False; - default: ppIRExpr(at); tl_assert(0); + TempMapEnt* ent; + /* VG_(indexXA) range-checks 'orig', hence no need to check + here. */ + ent = (TempMapEnt*)VG_(indexXA)( mce->tmpMap, (Word)orig ); + tl_assert(ent->kind == Orig); + if (ent->shadowB == IRTemp_INVALID) { + IRTemp tmpB + = newTemp( mce, Ity_I32, BSh ); + /* newTemp may cause mce->tmpMap to resize, hence previous results + from VG_(indexXA) are invalid. */ + ent = (TempMapEnt*)VG_(indexXA)( mce->tmpMap, (Word)orig ); + tl_assert(ent->kind == Orig); + tl_assert(ent->shadowB == IRTemp_INVALID); + ent->shadowB = tmpB; } - /* VG_(printf)("%llx\n", n); */ - return (/*32*/ n == 0xFEFEFEFFULL - /*32*/ || n == 0x80808080ULL - /*32*/ || n == 0x7F7F7F7FULL - /*32*/ || n == 0x7EFEFEFFULL - /*32*/ || n == 0x81010100ULL - /*64*/ || n == 0xFFFFFFFFFEFEFEFFULL - /*64*/ || n == 0xFEFEFEFEFEFEFEFFULL - /*64*/ || n == 0x0000000000008080ULL - /*64*/ || n == 0x8080808080808080ULL - /*64*/ || n == 0x0101010101010101ULL - ); + return ent->shadowB; } -static Bool checkForBogusLiterals ( /*FLAT*/ IRStmt* st ) +static IRAtom* gen_maxU32 ( MCEnv* mce, IRAtom* b1, IRAtom* b2 ) { - Int i; - IRExpr* e; - IRDirty* d; - IRCAS* cas; - switch (st->tag) { - case Ist_WrTmp: - e = st->Ist.WrTmp.data; - switch (e->tag) { - case Iex_Get: - case Iex_RdTmp: - return False; - case Iex_Const: - return isBogusAtom(e); - case Iex_Unop: - return isBogusAtom(e->Iex.Unop.arg) - || e->Iex.Unop.op == Iop_GetMSBs8x16; - case Iex_GetI: - return isBogusAtom(e->Iex.GetI.ix); - case Iex_Binop: - return isBogusAtom(e->Iex.Binop.arg1) - || isBogusAtom(e->Iex.Binop.arg2); - case Iex_Triop: - return isBogusAtom(e->Iex.Triop.details->arg1) - || isBogusAtom(e->Iex.Triop.details->arg2) - || isBogusAtom(e->Iex.Triop.details->arg3); - case Iex_Qop: - return isBogusAtom(e->Iex.Qop.details->arg1) - || isBogusAtom(e->Iex.Qop.details->arg2) - || isBogusAtom(e->Iex.Qop.details->arg3) - || isBogusAtom(e->Iex.Qop.details->arg4); - case Iex_ITE: - return isBogusAtom(e->Iex.ITE.cond) - || isBogusAtom(e->Iex.ITE.iftrue) - || isBogusAtom(e->Iex.ITE.iffalse); - case Iex_Load: - return isBogusAtom(e->Iex.Load.addr); - case Iex_CCall: - for (i = 0; e->Iex.CCall.args[i]; i++) - if (isBogusAtom(e->Iex.CCall.args[i])) - return True; - return False; - default: - goto unhandled; - } - case Ist_Dirty: - d = st->Ist.Dirty.details; - for (i = 0; d->args[i]; i++) { - IRAtom* atom = d->args[i]; - if (LIKELY(!is_IRExpr_VECRET_or_GSPTR(atom))) { - if (isBogusAtom(atom)) - return True; - } - } - if (isBogusAtom(d->guard)) - return True; - if (d->mAddr && isBogusAtom(d->mAddr)) - return True; - return False; - case Ist_Put: - return isBogusAtom(st->Ist.Put.data); - case Ist_PutI: - return isBogusAtom(st->Ist.PutI.details->ix) - || isBogusAtom(st->Ist.PutI.details->data); - case Ist_Store: - return isBogusAtom(st->Ist.Store.addr) - || isBogusAtom(st->Ist.Store.data); - case Ist_StoreG: { - IRStoreG* sg = st->Ist.StoreG.details; - return isBogusAtom(sg->addr) || isBogusAtom(sg->data) - || isBogusAtom(sg->guard); - } - case Ist_LoadG: { - IRLoadG* lg = st->Ist.LoadG.details; - return isBogusAtom(lg->addr) || isBogusAtom(lg->alt) - || isBogusAtom(lg->guard); - } - case Ist_Exit: - return isBogusAtom(st->Ist.Exit.guard); - case Ist_AbiHint: - return isBogusAtom(st->Ist.AbiHint.base) - || isBogusAtom(st->Ist.AbiHint.nia); - case Ist_NoOp: - case Ist_IMark: - case Ist_MBE: - return False; - case Ist_CAS: - cas = st->Ist.CAS.details; - return isBogusAtom(cas->addr) - || (cas->expdHi ? isBogusAtom(cas->expdHi) : False) - || isBogusAtom(cas->expdLo) - || (cas->dataHi ? isBogusAtom(cas->dataHi) : False) - || isBogusAtom(cas->dataLo); - case Ist_LLSC: - return isBogusAtom(st->Ist.LLSC.addr) - || (st->Ist.LLSC.storedata - ? isBogusAtom(st->Ist.LLSC.storedata) - : False); - default: - unhandled: - ppIRStmt(st); - VG_(tool_panic)("hasBogusLiterals"); - } + return assignNew( 'B', mce, Ity_I32, binop(Iop_Max32U, b1, b2) ); } -IRSB* MC_(instrument) ( VgCallbackClosure* closure, - IRSB* sb_in, - const VexGuestLayout* layout, - const VexGuestExtents* vge, - const VexArchInfo* archinfo_host, - IRType gWordTy, IRType hWordTy ) +/* Make a guarded origin load, with no special handling in the + didn't-happen case. A GUARD of NULL is assumed to mean "always + True". + + Generate IR to do a shadow origins load from BASEADDR+OFFSET and + return the otag. The loaded size is SZB. If GUARD evaluates to + False at run time then the returned otag is zero. +*/ +static IRAtom* gen_guarded_load_b ( MCEnv* mce, Int szB, + IRAtom* baseaddr, + Int offset, IRExpr* guard ) { - Bool verboze = 0||False; - Int i, j, first_stmt; - IRStmt* st; - MCEnv mce; - IRSB* sb_out; + void* hFun; + const HChar* hName; + IRTemp bTmp; + IRDirty* di; + IRType aTy = typeOfIRExpr( mce->sb->tyenv, baseaddr ); + IROp opAdd = aTy == Ity_I32 ? Iop_Add32 : Iop_Add64; + IRAtom* ea = baseaddr; + if (offset != 0) { + IRAtom* off = aTy == Ity_I32 ? mkU32( offset ) + : mkU64( (Long)(Int)offset ); + ea = assignNew( 'B', mce, aTy, binop(opAdd, ea, off)); + } + bTmp = newTemp(mce, mce->hWordTy, BSh); - if (gWordTy != hWordTy) { - /* We don't currently support this case. */ - VG_(tool_panic)("host/guest word size mismatch"); + switch (szB) { + case 1: hFun = (void*)&MC_(helperc_b_load1); + hName = "MC_(helperc_b_load1)"; + break; + case 2: hFun = (void*)&MC_(helperc_b_load2); + hName = "MC_(helperc_b_load2)"; + break; + case 4: hFun = (void*)&MC_(helperc_b_load4); + hName = "MC_(helperc_b_load4)"; + break; + case 8: hFun = (void*)&MC_(helperc_b_load8); + hName = "MC_(helperc_b_load8)"; + break; + case 16: hFun = (void*)&MC_(helperc_b_load16); + hName = "MC_(helperc_b_load16)"; + break; + case 32: hFun = (void*)&MC_(helperc_b_load32); + hName = "MC_(helperc_b_load32)"; + break; + default: + VG_(printf)("mc_translate.c: gen_load_b: unhandled szB == %d\n", szB); + tl_assert(0); + } + di = unsafeIRDirty_1_N( + bTmp, 1/*regparms*/, hName, VG_(fnptr_to_fnentry)( hFun ), + mkIRExprVec_1( ea ) + ); + if (guard) { + di->guard = guard; + /* Ideally the didn't-happen return value here would be + all-zeroes (unknown-origin), so it'd be harmless if it got + used inadvertently. We slum it out with the IR-mandated + default value (0b01 repeating, 0x55 etc) as that'll probably + trump all legitimate otags via Max32, and it's pretty + obviously bogus. */ + } + /* no need to mess with any annotations. This call accesses + neither guest state nor guest memory. */ + stmt( 'B', mce, IRStmt_Dirty(di) ); + if (mce->hWordTy == Ity_I64) { + /* 64-bit host */ + IRTemp bTmp32 = newTemp(mce, Ity_I32, BSh); + assign( 'B', mce, bTmp32, unop(Iop_64to32, mkexpr(bTmp)) ); + return mkexpr(bTmp32); + } else { + /* 32-bit host */ + return mkexpr(bTmp); } +} - /* Check we're not completely nuts */ - tl_assert(sizeof(UWord) == sizeof(void*)); - tl_assert(sizeof(Word) == sizeof(void*)); - tl_assert(sizeof(Addr) == sizeof(void*)); - tl_assert(sizeof(ULong) == 8); - tl_assert(sizeof(Long) == 8); - tl_assert(sizeof(UInt) == 4); - tl_assert(sizeof(Int) == 4); - tl_assert(MC_(clo_mc_level) >= 1 && MC_(clo_mc_level) <= 3); - - /* Set up SB */ - sb_out = deepCopyIRSBExceptStmts(sb_in); - - /* Set up the running environment. Both .sb and .tmpMap are - modified as we go along. Note that tmps are added to both - .sb->tyenv and .tmpMap together, so the valid index-set for - those two arrays should always be identical. */ - VG_(memset)(&mce, 0, sizeof(mce)); - mce.sb = sb_out; - mce.trace = verboze; - mce.layout = layout; - mce.hWordTy = hWordTy; - mce.bogusLiterals = False; +/* Generate IR to do a shadow origins load from BASEADDR+OFFSET. The + loaded size is SZB. The load is regarded as unconditional (always + happens). +*/ +static IRAtom* gen_load_b ( MCEnv* mce, Int szB, IRAtom* baseaddr, + Int offset ) +{ + return gen_guarded_load_b(mce, szB, baseaddr, offset, NULL/*guard*/); +} - /* Do expensive interpretation for Iop_Add32 and Iop_Add64 on - Darwin. 10.7 is mostly built with LLVM, which uses these for - bitfield inserts, and we get a lot of false errors if the cheap - interpretation is used, alas. Could solve this much better if - we knew which of such adds came from x86/amd64 LEA instructions, - since these are the only ones really needing the expensive - interpretation, but that would require some way to tag them in - the _toIR.c front ends, which is a lot of faffing around. So - for now just use the slow and blunt-instrument solution. */ - mce.useLLVMworkarounds = False; -# if defined(VGO_darwin) - mce.useLLVMworkarounds = True; -# endif - mce.tmpMap = VG_(newXA)( VG_(malloc), "mc.MC_(instrument).1", VG_(free), - sizeof(TempMapEnt)); - VG_(hintSizeXA) (mce.tmpMap, sb_in->tyenv->types_used); - for (i = 0; i < sb_in->tyenv->types_used; i++) { - TempMapEnt ent; - ent.kind = Orig; - ent.shadowV = IRTemp_INVALID; - ent.shadowB = IRTemp_INVALID; - VG_(addToXA)( mce.tmpMap, &ent ); - } - tl_assert( VG_(sizeXA)( mce.tmpMap ) == sb_in->tyenv->types_used ); +/* The most general handler for guarded origin loads. A GUARD of NULL + is assumed to mean "always True". - if (MC_(clo_expensive_definedness_checks)) { - /* For expensive definedness checking skip looking for bogus - literals. */ - mce.bogusLiterals = True; - } else { - /* Make a preliminary inspection of the statements, to see if there - are any dodgy-looking literals. If there are, we generate - extra-detailed (hence extra-expensive) instrumentation in - places. Scan the whole bb even if dodgyness is found earlier, - so that the flatness assertion is applied to all stmts. */ - Bool bogus = False; + Generate IR to do a shadow origin load from ADDR+BIAS and return + the B bits. The loaded type is TY. If GUARD evaluates to False at + run time then the returned B bits are simply BALT instead. +*/ +static +IRAtom* expr2ori_Load_guarded_General ( MCEnv* mce, + IRType ty, + IRAtom* addr, UInt bias, + IRAtom* guard, IRAtom* balt ) +{ + /* If the guard evaluates to True, this will hold the loaded + origin. If the guard evaluates to False, this will be zero, + meaning "unknown origin", in which case we will have to replace + it using an ITE below. */ + IRAtom* iftrue + = assignNew('B', mce, Ity_I32, + gen_guarded_load_b(mce, sizeofIRType(ty), + addr, bias, guard)); + /* These are the bits we will return if the load doesn't take + place. */ + IRAtom* iffalse + = balt; + /* Prepare the cond for the ITE. Convert a NULL cond into + something that iropt knows how to fold out later. */ + IRAtom* cond + = guard == NULL ? mkU1(1) : guard; + /* And assemble the final result. */ + return assignNew('B', mce, Ity_I32, IRExpr_ITE(cond, iftrue, iffalse)); +} - for (i = 0; i < sb_in->stmts_used; i++) { - st = sb_in->stmts[i]; - tl_assert(st); - tl_assert(isFlatIRStmt(st)); - if (!bogus) { - bogus = checkForBogusLiterals(st); - if (0 && bogus) { - VG_(printf)("bogus: "); - ppIRStmt(st); - VG_(printf)("\n"); - } - if (bogus) break; - } - } - mce.bogusLiterals = bogus; +/* Generate a shadow origins store. guard :: Ity_I1 controls whether + the store really happens; NULL means it unconditionally does. */ +static void gen_store_b ( MCEnv* mce, Int szB, + IRAtom* baseaddr, Int offset, IRAtom* dataB, + IRAtom* guard ) +{ + void* hFun; + const HChar* hName; + IRDirty* di; + IRType aTy = typeOfIRExpr( mce->sb->tyenv, baseaddr ); + IROp opAdd = aTy == Ity_I32 ? Iop_Add32 : Iop_Add64; + IRAtom* ea = baseaddr; + if (guard) { + tl_assert(isOriginalAtom(mce, guard)); + tl_assert(typeOfIRExpr(mce->sb->tyenv, guard) == Ity_I1); } - - /* Copy verbatim any IR preamble preceding the first IMark */ - - tl_assert(mce.sb == sb_out); - tl_assert(mce.sb != sb_in); - - i = 0; - while (i < sb_in->stmts_used && sb_in->stmts[i]->tag != Ist_IMark) { - - st = sb_in->stmts[i]; - tl_assert(st); - tl_assert(isFlatIRStmt(st)); - - stmt( 'C', &mce, sb_in->stmts[i] ); - i++; + if (offset != 0) { + IRAtom* off = aTy == Ity_I32 ? mkU32( offset ) + : mkU64( (Long)(Int)offset ); + ea = assignNew( 'B', mce, aTy, binop(opAdd, ea, off)); } + if (mce->hWordTy == Ity_I64) + dataB = assignNew( 'B', mce, Ity_I64, unop(Iop_32Uto64, dataB)); - /* Nasty problem. IR optimisation of the pre-instrumented IR may - cause the IR following the preamble to contain references to IR - temporaries defined in the preamble. Because the preamble isn't - instrumented, these temporaries don't have any shadows. - Nevertheless uses of them following the preamble will cause - memcheck to generate references to their shadows. End effect is - to cause IR sanity check failures, due to references to - non-existent shadows. This is only evident for the complex - preambles used for function wrapping on TOC-afflicted platforms - (ppc64-linux). - - The following loop therefore scans the preamble looking for - assignments to temporaries. For each one found it creates an - assignment to the corresponding (V) shadow temp, marking it as - 'defined'. This is the same resulting IR as if the main - instrumentation loop before had been applied to the statement - 'tmp = CONSTANT'. - - Similarly, if origin tracking is enabled, we must generate an - assignment for the corresponding origin (B) shadow, claiming - no-origin, as appropriate for a defined value. - */ - for (j = 0; j < i; j++) { - if (sb_in->stmts[j]->tag == Ist_WrTmp) { - /* findShadowTmpV checks its arg is an original tmp; - no need to assert that here. */ - IRTemp tmp_o = sb_in->stmts[j]->Ist.WrTmp.tmp; - IRTemp tmp_v = findShadowTmpV(&mce, tmp_o); - IRType ty_v = typeOfIRTemp(sb_out->tyenv, tmp_v); - assign( 'V', &mce, tmp_v, definedOfType( ty_v ) ); - if (MC_(clo_mc_level) == 3) { - IRTemp tmp_b = findShadowTmpB(&mce, tmp_o); - tl_assert(typeOfIRTemp(sb_out->tyenv, tmp_b) == Ity_I32); - assign( 'B', &mce, tmp_b, mkU32(0)/* UNKNOWN ORIGIN */); - } - if (0) { - VG_(printf)("create shadow tmp(s) for preamble tmp [%d] ty ", j); - ppIRType( ty_v ); - VG_(printf)("\n"); - } - } + switch (szB) { + case 1: hFun = (void*)&MC_(helperc_b_store1); + hName = "MC_(helperc_b_store1)"; + break; + case 2: hFun = (void*)&MC_(helperc_b_store2); + hName = "MC_(helperc_b_store2)"; + break; + case 4: hFun = (void*)&MC_(helperc_b_store4); + hName = "MC_(helperc_b_store4)"; + break; + case 8: hFun = (void*)&MC_(helperc_b_store8); + hName = "MC_(helperc_b_store8)"; + break; + case 16: hFun = (void*)&MC_(helperc_b_store16); + hName = "MC_(helperc_b_store16)"; + break; + case 32: hFun = (void*)&MC_(helperc_b_store32); + hName = "MC_(helperc_b_store32)"; + break; + default: + tl_assert(0); } + di = unsafeIRDirty_0_N( 2/*regparms*/, + hName, VG_(fnptr_to_fnentry)( hFun ), + mkIRExprVec_2( ea, dataB ) + ); + /* no need to mess with any annotations. This call accesses + neither guest state nor guest memory. */ + if (guard) di->guard = guard; + stmt( 'B', mce, IRStmt_Dirty(di) ); +} - /* Iterate over the remaining stmts to generate instrumentation. */ - - tl_assert(sb_in->stmts_used > 0); - tl_assert(i >= 0); - tl_assert(i < sb_in->stmts_used); - tl_assert(sb_in->stmts[i]->tag == Ist_IMark); - - for (/* use current i*/; i < sb_in->stmts_used; i++) { - - st = sb_in->stmts[i]; - first_stmt = sb_out->stmts_used; +static IRAtom* narrowTo32 ( MCEnv* mce, IRAtom* e ) { + IRType eTy = typeOfIRExpr(mce->sb->tyenv, e); + if (eTy == Ity_I64) + return assignNew( 'B', mce, Ity_I32, unop(Iop_64to32, e) ); + if (eTy == Ity_I32) + return e; + tl_assert(0); +} - if (verboze) { - VG_(printf)("\n"); - ppIRStmt(st); - VG_(printf)("\n"); - } +static IRAtom* zWidenFrom32 ( MCEnv* mce, IRType dstTy, IRAtom* e ) { + IRType eTy = typeOfIRExpr(mce->sb->tyenv, e); + tl_assert(eTy == Ity_I32); + if (dstTy == Ity_I64) + return assignNew( 'B', mce, Ity_I64, unop(Iop_32Uto64, e) ); + tl_assert(0); +} - if (MC_(clo_mc_level) == 3) { - /* See comments on case Ist_CAS below. */ - if (st->tag != Ist_CAS) - schemeS( &mce, st ); - } - /* Generate instrumentation code for each stmt ... */ +static IRAtom* schemeE ( MCEnv* mce, IRExpr* e ) +{ + tl_assert(MC_(clo_mc_level) == 3); - switch (st->tag) { + switch (e->tag) { - case Ist_WrTmp: - assign( 'V', &mce, findShadowTmpV(&mce, st->Ist.WrTmp.tmp), - expr2vbits( &mce, st->Ist.WrTmp.data) ); - break; + case Iex_GetI: { + IRRegArray* descr_b; + IRAtom *t1, *t2, *t3, *t4; + IRRegArray* descr = e->Iex.GetI.descr; + IRType equivIntTy + = MC_(get_otrack_reg_array_equiv_int_type)(descr); + /* If this array is unshadowable for whatever reason, use the + usual approximation. */ + if (equivIntTy == Ity_INVALID) + return mkU32(0); + tl_assert(sizeofIRType(equivIntTy) >= 4); + tl_assert(sizeofIRType(equivIntTy) == sizeofIRType(descr->elemTy)); + descr_b = mkIRRegArray( descr->base + 2*mce->layout->total_sizeB, + equivIntTy, descr->nElems ); + /* Do a shadow indexed get of the same size, giving t1. Take + the bottom 32 bits of it, giving t2. Compute into t3 the + origin for the index (almost certainly zero, but there's + no harm in being completely general here, since iropt will + remove any useless code), and fold it in, giving a final + value t4. */ + t1 = assignNew( 'B', mce, equivIntTy, + IRExpr_GetI( descr_b, e->Iex.GetI.ix, + e->Iex.GetI.bias )); + t2 = narrowTo32( mce, t1 ); + t3 = schemeE( mce, e->Iex.GetI.ix ); + t4 = gen_maxU32( mce, t2, t3 ); + return t4; + } + case Iex_CCall: { + Int i; + IRAtom* here; + IRExpr** args = e->Iex.CCall.args; + IRAtom* curr = mkU32(0); + for (i = 0; args[i]; i++) { + tl_assert(i < 32); + tl_assert(isOriginalAtom(mce, args[i])); + /* Only take notice of this arg if the callee's + mc-exclusion mask does not say it is to be excluded. */ + if (e->Iex.CCall.cee->mcx_mask & (1<<i)) { + /* the arg is to be excluded from definedness checking. + Do nothing. */ + if (0) VG_(printf)("excluding %s(%d)\n", + e->Iex.CCall.cee->name, i); + } else { + /* calculate the arg's definedness, and pessimistically + merge it in. */ + here = schemeE( mce, args[i] ); + curr = gen_maxU32( mce, curr, here ); + } + } + return curr; + } + case Iex_Load: { + Int dszB; + dszB = sizeofIRType(e->Iex.Load.ty); + /* assert that the B value for the address is already + available (somewhere) */ + tl_assert(isIRAtom(e->Iex.Load.addr)); + tl_assert(mce->hWordTy == Ity_I32 || mce->hWordTy == Ity_I64); + return gen_load_b( mce, dszB, e->Iex.Load.addr, 0 ); + } + case Iex_ITE: { + IRAtom* b1 = schemeE( mce, e->Iex.ITE.cond ); + IRAtom* b3 = schemeE( mce, e->Iex.ITE.iftrue ); + IRAtom* b2 = schemeE( mce, e->Iex.ITE.iffalse ); + return gen_maxU32( mce, b1, gen_maxU32( mce, b2, b3 )); + } + case Iex_Qop: { + IRAtom* b1 = schemeE( mce, e->Iex.Qop.details->arg1 ); + IRAtom* b2 = schemeE( mce, e->Iex.Qop.details->arg2 ); + IRAtom* b3 = schemeE( mce, e->Iex.Qop.details->arg3 ); + IRAtom* b4 = schemeE( mce, e->Iex.Qop.details->arg4 ); + return gen_maxU32( mce, gen_maxU32( mce, b1, b2 ), + gen_maxU32( mce, b3, b4 ) ); + } + case Iex_Triop: { + IRAtom* b1 = schemeE( mce, e->Iex.Triop.details->arg1 ); + IRAtom* b2 = schemeE( mce, e->Iex.Triop.details->arg2 ); + IRAtom* b3 = schemeE( mce, e->Iex.Triop.details->arg3 ); + return gen_maxU32( mce, b1, gen_maxU32( mce, b2, b3 ) ); + } + case Iex_Binop: { + switch (e->Iex.Binop.op) { + case Iop_CasCmpEQ8: case Iop_CasCmpNE8: + case Iop_CasCmpEQ16: case Iop_CasCmpNE16: + case Iop_CasCmpEQ32: case Iop_CasCmpNE32: + case Iop_CasCmpEQ64: case Iop_CasCmpNE64: + /* Just say these all produce a defined result, + regardless of their arguments. See + COMMENT_ON_CasCmpEQ in this file. */ + return mkU32(0); + default: { + IRAtom* b1 = schemeE( mce, e->Iex.Binop.arg1 ); + IRAtom* b2 = schemeE( mce, e->Iex.Binop.arg2 ); + return gen_maxU32( mce, b1, b2 ); + } + } + tl_assert(0); + /*NOTREACHED*/ + } + case Iex_Unop: { + IRAtom* b1 = schemeE( mce, e->Iex.Unop.arg ); + return b1; + } + case Iex_Const: + return mkU32(0); + case Iex_RdTmp: + return mkexpr( findShadowTmpB( mce, e->Iex.RdTmp.tmp )); + case Iex_Get: { + Int b_offset = MC_(get_otrack_shadow_offset)( + e->Iex.Get.offset, + sizeofIRType(e->Iex.Get.ty) + ); + tl_assert(b_offset >= -1 + && b_offset <= mce->layout->total_sizeB -4); + if (b_offset >= 0) { + /* FIXME: this isn't an atom! */ + return IRExpr_Get( b_offset + 2*mce->layout->total_sizeB, + Ity_I32 ); + } + return mkU32(0); + } + default: + VG_(printf)("mc_translate.c: schemeE: unhandled: "); + ppIRExpr(e); + VG_(tool_panic)("memcheck:schemeE"); + } +} - case Ist_Put: - do_shadow_PUT( &mce, - st->Ist.Put.offset, - st->Ist.Put.data, - NULL /* shadow atom */, NULL /* guard */ ); - break; - case Ist_PutI: - do_shadow_PUTI( &mce, st->Ist.PutI.details); - break; +static void do_origins_Dirty ( MCEnv* mce, IRDirty* d ) +{ + // This is a hacked version of do_shadow_Dirty + Int i, k, n, toDo, gSz, gOff; + IRAtom *here, *curr; + IRTemp dst; - case Ist_Store: - do_shadow_Store( &mce, st->Ist.Store.end, - st->Ist.Store.addr, 0/* addr bias */, - st->Ist.Store.data, - NULL /* shadow data */, - NULL/*guard*/ ); - break; + /* First check the guard. */ + curr = schemeE( mce, d->guard ); - case Ist_StoreG: - do_shadow_StoreG( &mce, st->Ist.StoreG.details ); - break; + /* Now round up all inputs and maxU32 over them. */ - case Ist_LoadG: - do_shadow_LoadG( &mce, st->Ist.LoadG.details ); - break; + /* Inputs: unmasked args + Note: arguments are evaluated REGARDLESS of the guard expression */ + for (i = 0; d->args[i]; i++) { + IRAtom* arg = d->args[i]; + if ( (d->cee->mcx_mask & (1<<i)) + || UNLIKELY(is_IRExpr_VECRET_or_GSPTR(arg)) ) { + /* ignore this arg */ + } else { + here = schemeE( mce, arg ); + curr = gen_maxU32( mce, curr, here ); + } + } - case Ist_Exit: - complainIfUndefined( &mce, st->Ist.Exit.guard, NULL ); - break; + /* Inputs: guest state that we read. */ + for (i = 0; i < d->nFxState; i++) { + tl_assert(d->fxState[i].fx != Ifx_None); + if (d->fxState[i].fx == Ifx_Write) + continue; - case Ist_IMark: - break; + /* Enumerate the described state segments */ + for (k = 0; k < 1 + d->fxState[i].nRepeats; k++) { + gOff = d->fxState[i].offset + k * d->fxState[i].repeatLen; + gSz = d->fxState[i].size; - case Ist_NoOp: - case Ist_MBE: - break; + /* Ignore any sections marked as 'always defined'. */ + if (isAlwaysDefd(mce, gOff, gSz)) { + if (0) + VG_(printf)("memcheck: Dirty gst: ignored off %d, sz %d\n", + gOff, gSz); + continue; + } - case Ist_Dirty: - do_shadow_Dirty( &mce, st->Ist.Dirty.details ); - break; + /* This state element is read or modified. So we need to + consider it. If larger than 4 bytes, deal with it in + 4-byte chunks. */ + while (True) { + Int b_offset; + tl_assert(gSz >= 0); + if (gSz == 0) break; + n = gSz <= 4 ? gSz : 4; + /* update 'curr' with maxU32 of the state slice + gOff .. gOff+n-1 */ + b_offset = MC_(get_otrack_shadow_offset)(gOff, 4); + if (b_offset != -1) { + /* Observe the guard expression. If it is false use 0, i.e. + nothing is known about the origin */ + IRAtom *cond, *iffalse, *iftrue; - case Ist_AbiHint: - do_AbiHint( &mce, st->Ist.AbiHint.base, - st->Ist.AbiHint.len, - st->Ist.AbiHint.nia ); - break; + cond = assignNew( 'B', mce, Ity_I1, d->guard); + iffalse = mkU32(0); + iftrue = assignNew( 'B', mce, Ity_I32, + IRExpr_Get(b_offset + + 2*mce->layout->total_sizeB, + Ity_I32)); + here = assignNew( 'B', mce, Ity_I32, + IRExpr_ITE(cond, iftrue, iffalse)); + curr = gen_maxU32( mce, curr, here ); + } + gSz -= n; + gOff += n; + } + } + } - case Ist_CAS: - do_shadow_CAS( &mce, st->Ist.CAS.details ); - /* Note, do_shadow_CAS copies the CAS itself to the output - block, because it needs to add instrumentation both - before and after it. Hence skip the copy below. Also - skip the origin-tracking stuff (call to schemeS) above, - since that's all tangled up with it too; do_shadow_CAS - does it all. */ - break; + /* Inputs: memory */ - case Ist_LLSC: - do_shadow_LLSC( &mce, - st->Ist.LLSC.end, - st->Ist.LLSC.result, - st->Ist.LLSC.addr, - st->Ist.LLSC.storedata ); + if (d->mFx != Ifx_None) { + /* Because we may do multiple shadow loads/stores from the same + base address, it's best to do a single test of its + definedness right now. Post-instrumentation optimisation + should remove all but this test. */ + tl_assert(d->mAddr); + here = schemeE( mce, d->mAddr ); + curr = gen_maxU32( mce, curr, here ); + } + + /* Deal with memory inputs (reads or modifies) */ + if (d->mFx == Ifx_Read || d->mFx == Ifx_Modify) { + toDo = d->mSize; + /* chew off 32-bit chunks. We don't care about the endianness + since it's all going to be condensed down to a single bit, + but nevertheless choose an endianness which is hopefully + native to the platform. */ + while (toDo >= 4) { + here = gen_guarded_load_b( mce, 4, d->mAddr, d->mSize - toDo, + d->guard ); + curr = gen_maxU32( mce, curr, here ); + toDo -= 4; + } + /* handle possible 16-bit excess */ + while (toDo >= 2) { + here = gen_guarded_load_b( mce, 2, d->mAddr, d->mSize - toDo, + d->guard ); + curr = gen_maxU32( mce, curr, here ); + toDo -= 2; + } + /* chew off the remaining 8-bit chunk, if any */ + if (toDo == 1) { + here = gen_guarded_load_b( mce, 1, d->mAddr, d->mSize - toDo, + d->guard ); + curr = gen_maxU32( mce, curr, here ); + toDo -= 1; + } + tl_assert(toDo == 0); + } + + /* Whew! So curr is a 32-bit B-value which should give an origin + of some use if any of the inputs to the helper are undefined. + Now we need to re-distribute the results to all destinations. */ + + /* Outputs: the destination temporary, if there is one. */ + if (d->tmp != IRTemp_INVALID) { + dst = findShadowTmpB(mce, d->tmp); + assign( 'V', mce, dst, curr ); + } + + /* Outputs: guest state that we write or modify. */ + for (i = 0; i < d->nFxState; i++) { + tl_assert(d->fxState[i].fx != Ifx_None); + if (d->fxState[i].fx == Ifx_Read) + continue; + + /* Enumerate the described state segments */ + for (k = 0; k < 1 + d->fxState[i].nRepeats; k++) { + gOff = d->fxState[i].offset + k * d->fxState[i].repeatLen; + gSz = d->fxState[i].size; + + /* Ignore any sections marked as 'always defined'. */ + if (isAlwaysDefd(mce, gOff, gSz)) + continue; + + /* This state element is written or modified. So we need to + consider it. If larger than 4 bytes, deal with it in + 4-byte chunks. */ + while (True) { + Int b_offset; + tl_assert(gSz >= 0); + if (gSz == 0) break; + n = gSz <= 4 ? gSz : 4; + /* Write 'curr' to the state slice gOff .. gOff+n-1 */ + b_offset = MC_(get_otrack_shadow_offset)(gOff, 4); + if (b_offset != -1) { + + /* If the guard expression evaluates to false we simply Put + the value that is already stored in the guest state slot */ + IRAtom *cond, *iffalse; + + cond = assignNew('B', mce, Ity_I1, + d->guard); + iffalse = assignNew('B', mce, Ity_I32, + IRExpr_Get(b_offset + + 2*mce->layout->total_sizeB, + Ity_I32)); + curr = assignNew('V', mce, Ity_I32, + IRExpr_ITE(cond, curr, iffalse)); + + stmt( 'B', mce, IRStmt_Put(b_offset + + 2*mce->layout->total_sizeB, + curr )); + } + gSz -= n; + gOff += n; + } + } + } + + /* Outputs: memory that we write or modify. Same comments about + endianness as above apply. */ + if (d->mFx == Ifx_Write || d->mFx == Ifx_Modify) { + toDo = d->mSize; + /* chew off 32-bit chunks */ + while (toDo >= 4) { + gen_store_b( mce, 4, d->mAddr, d->mSize - toDo, curr, + d->guard ); + toDo -= 4; + } + /* handle possible 16-bit excess */ + while (toDo >= 2) { + gen_store_b( mce, 2, d->mAddr, d->mSize - toDo, curr, + d->guard ); + toDo -= 2; + } + /* chew off the remaining 8-bit chunk, if any */ + if (toDo == 1) { + gen_store_b( mce, 1, d->mAddr, d->mSize - toDo, curr, + d->guard ); + toDo -= 1; + } + tl_assert(toDo == 0); + } +} + + +/* Generate IR for origin shadowing for a general guarded store. */ +static void do_origins_Store_guarded ( MCEnv* mce, + IREndness stEnd, + IRExpr* stAddr, + IRExpr* stData, + IRExpr* guard ) +{ + Int dszB; + IRAtom* dataB; + /* assert that the B value for the address is already available + (somewhere), since the call to schemeE will want to see it. + XXXX how does this actually ensure that?? */ + tl_assert(isIRAtom(stAddr)); + tl_assert(isIRAtom(stData)); + dszB = sizeofIRType( typeOfIRExpr(mce->sb->tyenv, stData ) ); + dataB = schemeE( mce, stData ); + gen_store_b( mce, dszB, stAddr, 0/*offset*/, dataB, guard ); +} + + +/* Generate IR for origin shadowing for a plain store. */ +static void do_origins_Store_plain ( MCEnv* mce, + IREndness stEnd, + IRExpr* stAddr, + IRExpr* stData ) +{ + do_origins_Store_guarded ( mce, stEnd, stAddr, stData, + NULL/*guard*/ ); +} + + +/* ---- Dealing with LoadG/StoreG (not entirely simple) ---- */ + +static void do_origins_StoreG ( MCEnv* mce, IRStoreG* sg ) +{ + do_origins_Store_guarded( mce, sg->end, sg->addr, + sg->data, sg->guard ); +} + +static void do_origins_LoadG ( MCEnv* mce, IRLoadG* lg ) +{ + IRType loadedTy = Ity_INVALID; + switch (lg->cvt) { + case ILGop_IdentV128: loadedTy = Ity_V128; break; + case ILGop_Ident64: loadedTy = Ity_I64; break; + case ILGop_Ident32: loadedTy = Ity_I32; break; + case ILGop_16Uto32: loadedTy = Ity_I16; break; + case ILGop_16Sto32: loadedTy = Ity_I16; break; + case ILGop_8Uto32: loadedTy = Ity_I8; break; + case ILGop_8Sto32: loadedTy = Ity_I8; break; + default: VG_(tool_panic)("schemeS.IRLoadG"); + } + IRAtom* ori_alt + = schemeE( mce,lg->alt ); + IRAtom* ori_final + = expr2ori_Load_guarded_General(mce, loadedTy, + lg->addr, 0/*addr bias*/, + lg->guard, ori_alt ); + /* And finally, bind the origin to the destination temporary. */ + assign( 'B', mce, findShadowTmpB(mce, lg->dst), ori_final ); +} + + +static void schemeS ( MCEnv* mce, IRStmt* st ) +{ + tl_assert(MC_(clo_mc_level) == 3); + + switch (st->tag) { + + case Ist_AbiHint: + /* The value-check instrumenter handles this - by arranging + to pass the address of the next instruction to + MC_(helperc_MAKE_STACK_UNINIT). This is all that needs to + happen for origin tracking w.r.t. AbiHints. So there is + nothing to do here. */ + break; + + case Ist_PutI: { + IRPutI *puti = st->Ist.PutI.details; + IRRegArray* descr_b; + IRAtom *t1, *t2, *t3, *t4; + IRRegArray* descr = puti->descr; + IRType equivIntTy + = MC_(get_otrack_reg_array_equiv_int_type)(descr); + /* If this array is unshadowable for whatever reason, + generate no code. */ + if (equivIntTy == Ity_INVALID) break; + tl_assert(sizeofIRType(equivIntTy) >= 4); + tl_assert(sizeofIRType(equivIntTy) == sizeofIRType(descr->elemTy)); + descr_b + = mkIRRegArray( descr->base + 2*mce->layout->total_sizeB, + equivIntTy, descr->nElems ); + /* Compute a value to Put - the conjoinment of the origin for + the data to be Put-ted (obviously) and of the index value + (not so obviously). */ + t1 = schemeE( mce, puti->data ); + t2 = schemeE( mce, puti->ix ); + t3 = gen_maxU32( mce, t1, t2 ); + t4 = zWidenFrom32( mce, equivIntTy, t3 ); + stmt( 'B', mce, IRStmt_PutI( mkIRPutI(descr_b, puti->ix, + puti->bias, t4) )); + break; + } - default: - VG_(printf)("\n"); - ppIRStmt(st); - VG_(printf)("\n"); - VG_(tool_panic)("memcheck: unhandled IRStmt"); + case Ist_Dirty: + do_origins_Dirty( mce, st->Ist.Dirty.details ); + break; + + case Ist_Store: + do_origins_Store_plain( mce, st->Ist.Store.end, + st->Ist.Store.addr, + st->Ist.Store.data ); + break; + + case Ist_StoreG: + do_origins_StoreG( mce, st->Ist.StoreG.details ); + break; - } /* switch (st->tag) */ + case Ist_LoadG: + do_origins_LoadG( mce, st->Ist.LoadG.details ); + break; - if (0 && verboze) { - for (j = first_stmt; j < sb_out->stmts_used; j++) { - VG_(printf)(" "); - ppIRStmt(sb_out->stmts[j]); - VG_(printf)("\n"); + case Ist_LLSC: { + /* In short: treat a load-linked like a normal load followed + by an assignment of the loaded (shadow) data the result + temporary. Treat a store-conditional like a normal store, + and mark the result temporary as defined. */ + if (st->Ist.LLSC.storedata == NULL) { + /* Load Linked */ + IRType resTy + = typeOfIRTemp(mce->sb->tyenv, st->Ist.LLSC.result); + IRExpr* vanillaLoad + = IRExpr_Load(st->Ist.LLSC.end, resTy, st->Ist.LLSC.addr); + tl_assert(resTy == Ity_I64 || resTy == Ity_I32 + || resTy == Ity_I16 || resTy == Ity_I8); + assign( 'B', mce, findShadowTmpB(mce, st->Ist.LLSC.result), + schemeE(mce, vanillaLoad)); + } else { + /* Store conditional */ + do_origins_Store_plain( mce, st->Ist.LLSC.end, + st->Ist.LLSC.addr, + st->Ist.LLSC.storedata ); + /* For the rationale behind this, see comments at the + place where the V-shadow for .result is constructed, in + do_shadow_LLSC. In short, we regard .result as + always-defined. */ + assign( 'B', mce, findShadowTmpB(mce, st->Ist.LLSC.result), + mkU32(0) ); } - VG_(printf)("\n"); + break; } - /* ... and finally copy the stmt itself to the output. Except, - skip the copy of IRCASs; see comments on case Ist_CAS - above. */ - if (st->tag != Ist_CAS) - stmt('C', &mce, st); - } - - /* Now we need to complain if the jump target is undefined. */ - first_stmt = sb_out->stmts_used; + case Ist_Put: { + Int b_offset + = MC_(get_otrack_shadow_offset)( + st->Ist.Put.offset, + sizeofIRType(typeOfIRExpr(mce->sb->tyenv, st->Ist.Put.data)) + ); + if (b_offset >= 0) { + /* FIXME: this isn't an atom! */ + stmt( 'B', mce, IRStmt_Put(b_offset + 2*mce->layout->total_sizeB, + schemeE( mce, st->Ist.Put.data )) ); + } + break; + } - if (verboze) { - VG_(printf)("sb_in->next = "); - ppIRExpr(sb_in->next); - VG_(printf)("\n\n"); - } + case Ist_WrTmp: + assign( 'B', mce, findShadowTmpB(mce, st->Ist.WrTmp.tmp), + schemeE(mce, st->Ist.WrTmp.data) ); + break; - complainIfUndefined( &mce, sb_in->next, NULL ); + case Ist_MBE: + case Ist_NoOp: + case Ist_Exit: + case Ist_IMark: + break; - if (0 && verboze) { - for (j = first_stmt; j < sb_out->stmts_used; j++) { - VG_(printf)(" "); - ppIRStmt(sb_out->stmts[j]); - VG_(printf)("\n"); - } - VG_(printf)("\n"); + default: + VG_(printf)("mc_translate.c: schemeS: unhandled: "); + ppIRStmt(st); + VG_(tool_panic)("memcheck:schemeS"); } - - /* If this fails, there's been some serious snafu with tmp management, - that should be investigated. */ - tl_assert( VG_(sizeXA)( mce.tmpMap ) == mce.sb->tyenv->types_used ); - VG_(deleteXA)( mce.tmpMap ); - - tl_assert(mce.sb == sb_out); - return sb_out; } @@ -6998,826 +7294,531 @@ IRSB* MC_(final_tidy) ( IRSB* sb_in ) alreadyPresent = check_or_add( &pairs, guard, cee->addr ); if (alreadyPresent) { sb_in->stmts[i] = IRStmt_NoOp(); - if (0) VG_(printf)("XX\n"); - } - } - - tl_assert(pairs.pairs[N_TIDYING_PAIRS].entry == (void*)0x123); - tl_assert(pairs.pairs[N_TIDYING_PAIRS].guard == (IRExpr*)0x456); - - return sb_in; -} - -#undef N_TIDYING_PAIRS - - -/*------------------------------------------------------------*/ -/*--- Origin tracking stuff ---*/ -/*------------------------------------------------------------*/ - -/* Almost identical to findShadowTmpV. */ -static IRTemp findShadowTmpB ( MCEnv* mce, IRTemp orig ) -{ - TempMapEnt* ent; - /* VG_(indexXA) range-checks 'orig', hence no need to check - here. */ - ent = (TempMapEnt*)VG_(indexXA)( mce->tmpMap, (Word)orig ); - tl_assert(ent->kind == Orig); - if (ent->shadowB == IRTemp_INVALID) { - IRTemp tmpB - = newTemp( mce, Ity_I32, BSh ); - /* newTemp may cause mce->tmpMap to resize, hence previous results - from VG_(indexXA) are invalid. */ - ent = (TempMapEnt*)VG_(indexXA)( mce->tmpMap, (Word)orig ); - tl_assert(ent->kind == Orig); - tl_assert(ent->shadowB == IRTemp_INVALID); - ent->shadowB = tmpB; - } - return ent->shadowB; -} - -static IRAtom* gen_maxU32 ( MCEnv* mce, IRAtom* b1, IRAtom* b2 ) -{ - return assignNew( 'B', mce, Ity_I32, binop(Iop_Max32U, b1, b2) ); -} - - -/* Make a guarded origin load, with no special handling in the - didn't-happen case. A GUARD of NULL is assumed to mean "always - True". - - Generate IR to do a shadow origins load from BASEADDR+OFFSET and - return the otag. The loaded size is SZB. If GUARD evaluates to - False at run time then the returned otag is zero. -*/ -static IRAtom* gen_guarded_load_b ( MCEnv* mce, Int szB, - IRAtom* baseaddr, - Int offset, IRExpr* guard ) -{ - void* hFun; - const HChar* hName; - IRTemp bTmp; - IRDirty* di; - IRType aTy = typeOfIRExpr( mce->sb->tyenv, baseaddr ); - IROp opAdd = aTy == Ity_I32 ? Iop_Add32 : Iop_Add64; - IRAtom* ea = baseaddr; - if (offset != 0) { - IRAtom* off = aTy == Ity_I32 ? mkU32( offset ) - : mkU64( (Long)(Int)offset ); - ea = assignNew( 'B', mce, aTy, binop(opAdd, ea, off)); - } - bTmp = newTemp(mce, mce->hWordTy, BSh); - - switch (szB) { - case 1: hFun = (void*)&MC_(helperc_b_load1); - hName = "MC_(helperc_b_load1)"; - break; - case 2: hFun = (void*)&MC_(helperc_b_load2); - hName = "MC_(helperc_b_load2)"; - break; - case 4: hFun = (void*)&MC_(helperc_b_load4); - hName = "MC_(helperc_b_load4)"; - break; - case 8: hFun = (void*)&MC_(helperc_b_load8); - hName = "MC_(helperc_b_load8)"; - break; - case 16: hFun = (void*)&MC_(helperc_b_load16); - hName = "MC_(helperc_b_load16)"; - break; - case 32: hFun = (void*)&MC_(helperc_b_load32); - hName = "MC_(helperc_b_load32)"; - break; - default: - VG_(printf)("mc_translate.c: gen_load_b: unhandled szB == %d\n", szB); - tl_assert(0); - } - di = unsafeIRDirty_1_N( - bTmp, 1/*regparms*/, hName, VG_(fnptr_to_fnentry)( hFun ), - mkIRExprVec_1( ea ) - ); - if (guard) { - di->guard = guard; - /* Ideally the didn't-happen return value here would be - all-zeroes (unknown-origin), so it'd be harmless if it got - used inadvertently. We slum it out with the IR-mandated - default value (0b01 repeating, 0x55 etc) as that'll probably - trump all legitimate otags via Max32, and it's pretty - obviously bogus. */ - } - /* no need to mess with any annotations. This call accesses - neither guest state nor guest memory. */ - stmt( 'B', mce, IRStmt_Dirty(di) ); - if (mce->hWordTy == Ity_I64) { - /* 64-bit host */ - IRTemp bTmp32 = newTemp(mce, Ity_I32, BSh); - assign( 'B', mce, bTmp32, unop(Iop_64to32, mkexpr(bTmp)) ); - return mkexpr(bTmp32); - } else { - /* 32-bit host */ - return mkexpr(bTmp); - } -} - - -/* Generate IR to do a shadow origins load from BASEADDR+OFFSET. The - loaded size is SZB. The load is regarded as unconditional (always - happens). -*/ -static IRAtom* gen_load_b ( MCEnv* mce, Int szB, IRAtom* baseaddr, - Int offset ) -{ - return gen_guarded_load_b(mce, szB, baseaddr, offset, NULL/*guard*/); -} - - -/* The most general handler for guarded origin loads. A GUARD of NULL - is assumed to mean "always True". - - Generate IR to do a shadow origin load from ADDR+BIAS and return - the B bits. The loaded type is TY. If GUARD evaluates to False at - run time then the returned B bits are simply BALT instead. -*/ -static -IRAtom* expr2ori_Load_guarded_General ( MCEnv* mce, - IRType ty, - IRAtom* addr, UInt bias, - IRAtom* guard, IRAtom* balt ) -{ - /* If the guard evaluates to True, this will hold the loaded - origin. If the guard evaluates to False, this will be zero, - meaning "unknown origin", in which case we will have to replace - it using an ITE below. */ - IRAtom* iftrue - = assignNew('B', mce, Ity_I32, - gen_guarded_load_b(mce, sizeofIRType(ty), - addr, bias, guard)); - /* These are the bits we will return if the load doesn't take - place. */ - IRAtom* iffalse - = balt; - /* Prepare the cond for the ITE. Convert a NULL cond into - something that iropt knows how to fold out later. */ - IRAtom* cond - = guard == NULL ? mkU1(1) : guard; - /* And assemble the final result. */ - return assignNew('B', mce, Ity_I32, IRExpr_ITE(cond, iftrue, iffalse)); + if (0) VG_(printf)("XX\n"); + } + } + + tl_assert(pairs.pairs[N_TIDYING_PAIRS].entry == (void*)0x123); + tl_assert(pairs.pairs[N_TIDYING_PAIRS].guard == (IRExpr*)0x456); + + return sb_in; } +#undef N_TIDYING_PAIRS -/* Generate a shadow origins store. guard :: Ity_I1 controls whether - the store really happens; NULL means it unconditionally does. */ -static void gen_store_b ( MCEnv* mce, Int szB, - IRAtom* baseaddr, Int offset, IRAtom* dataB, - IRAtom* guard ) + +/*------------------------------------------------------------*/ +/*--- Startup assertion checking ---*/ +/*------------------------------------------------------------*/ + +void MC_(do_instrumentation_startup_checks)( void ) { - void* hFun; - const HChar* hName; - IRDirty* di; - IRType aTy = typeOfIRExpr( mce->sb->tyenv, baseaddr ); - IROp opAdd = aTy == Ity_I32 ? Iop_Add32 : Iop_Add64; - IRAtom* ea = baseaddr; - if (guard) { - tl_assert(isOriginalAtom(mce, guard)); - tl_assert(typeOfIRExpr(mce->sb->tyenv, guard) == Ity_I1); - } - if (offset != 0) { - IRAtom* off = aTy == Ity_I32 ? mkU32( offset ) - : mkU64( (Long)(Int)offset ); - ea = assignNew( 'B', mce, aTy, binop(opAdd, ea, off)); - } - if (mce->hWordTy == Ity_I64) - dataB = assignNew( 'B', mce, Ity_I64, unop(Iop_32Uto64, dataB)); + /* Make a best-effort check to see that is_helperc_value_checkN_fail + is working as we expect. */ - switch (szB) { - case 1: hFun = (void*)&MC_(helperc_b_store1); - hName = "MC_(helperc_b_store1)"; - break; - case 2: hFun = (void*)&MC_(helperc_b_store2); - hName = "MC_(helperc_b_store2)"; - break; - case 4: hFun = (void*)&MC_(helperc_b_store4); - hName = "MC_(helperc_b_store4)"; - break; - case 8: hFun = (void*)&MC_(helperc_b_store8); - hName = "MC_(helperc_b_store8)"; - break; - case 16: hFun = (void*)&MC_(helperc_b_store16); - hName = "MC_(helperc_b_store16)"; - break; - case 32: hFun = (void*)&MC_(helperc_b_store32); - hName = "MC_(helperc_b_store32)"; - break; - default: - tl_assert(0); - } - di = unsafeIRDirty_0_N( 2/*regparms*/, - hName, VG_(fnptr_to_fnentry)( hFun ), - mkIRExprVec_2( ea, dataB ) - ); - /* no need to mess with any annotations. This call accesses - neither guest state nor guest memory. */ - if (guard) di->guard = guard; - stmt( 'B', mce, IRStmt_Dirty(di) ); -} +# define CHECK(_expected, _string) \ + tl_assert((_expected) == is_helperc_value_checkN_fail(_string)) -static IRAtom* narrowTo32 ( MCEnv* mce, IRAtom* e ) { - IRType eTy = typeOfIRExpr(mce->sb->tyenv, e); - if (eTy == Ity_I64) - return assignNew( 'B', mce, Ity_I32, unop(Iop_64to32, e) ); - if (eTy == Ity_I32) - return e; - tl_assert(0); -} + /* It should identify these 8, and no others, as targets. */ + CHECK(True, "MC_(helperc_value_check8_fail_no_o)"); + CHECK(True, "MC_(helperc_value_check4_fail_no_o)"); + CHECK(True, "MC_(helperc_value_check0_fail_no_o)"); + CHECK(True, "MC_(helperc_value_check1_fail_no_o)"); + CHECK(True, "MC_(helperc_value_check8_fail_w_o)"); + CHECK(True, "MC_(helperc_value_check0_fail_w_o)"); + CHECK(True, "MC_(helperc_value_check1_fail_w_o)"); + CHECK(True, "MC_(helperc_value_check4_fail_w_o)"); -static IRAtom* zWidenFrom32 ( MCEnv* mce, IRType dstTy, IRAtom* e ) { - IRType eTy = typeOfIRExpr(mce->sb->tyenv, e); - tl_assert(eTy == Ity_I32); - if (dstTy == Ity_I64) - return assignNew( 'B', mce, Ity_I64, unop(Iop_32Uto64, e) ); - tl_assert(0); + /* Ad-hoc selection of other strings gathered via a quick test. */ + CHECK(False, "amd64g_dirtyhelper_CPUID_avx2"); + CHECK(False, "amd64g_dirtyhelper_RDTSC"); + CHECK(False, "MC_(helperc_b_load1)"); + CHECK(False, "MC_(helperc_b_load2)"); + CHECK(False, "MC_(helperc_b_load4)"); + CHECK(False, "MC_(helperc_b_load8)"); + CHECK(False, "MC_(helperc_b_load16)"); + CHECK(False, "MC_(helperc_b_load32)"); + CHECK(False, "MC_(helperc_b_store1)"); + CHECK(False, "MC_(helperc_b_store2)"); + CHECK(False, "MC_(helperc_b_store4)"); + CHECK(False, "MC_(helperc_b_store8)"); + CHECK(False, "MC_(helperc_b_store16)"); + CHECK(False, "MC_(helperc_b_store32)"); + CHECK(False, "MC_(helperc_LOADV8)"); + CHECK(False, "MC_(helperc_LOADV16le)"); + CHECK(False, "MC_(helperc_LOADV32le)"); + CHECK(False, "MC_(helperc_LOADV64le)"); + CHECK(False, "MC_(helperc_LOADV128le)"); + CHECK(False, "MC_(helperc_LOADV256le)"); + CHECK(False, "MC_(helperc_STOREV16le)"); + CHECK(False, "MC_(helperc_STOREV32le)"); + CHECK(False, "MC_(helperc_STOREV64le)"); + CHECK(False, "MC_(helperc_STOREV8)"); + CHECK(False, "track_die_mem_stack_8"); + CHECK(False, "track_new_mem_stack_8_w_ECU"); + CHECK(False, "MC_(helperc_MAKE_STACK_UNINIT_w_o)"); + CHECK(False, "VG_(unknown_SP_update_w_ECU)"); + +# undef CHECK } -static IRAtom* schemeE ( MCEnv* mce, IRExpr* e ) -{ - tl_assert(MC_(clo_mc_level) == 3); +/*------------------------------------------------------------*/ +/*--- Memcheck main ---*/ +/*------------------------------------------------------------*/ - switch (e->tag) { +static Bool isBogusAtom ( IRAtom* at ) +{ + ULong n = 0; + IRConst* con; + tl_assert(isIRAtom(at)); + if (at->tag == Iex_RdTmp) + return False; + tl_assert(at->tag == Iex_Const); + con = at->Iex.Const.con; + switch (con->tag) { + case Ico_U1: return False; + case Ico_U8: n = (ULong)con->Ico.U8; break; + case Ico_U16: n = (ULong)con->Ico.U16; break; + case Ico_U32: n = (ULong)con->Ico.U32; break; + case Ico_U64: n = (ULong)con->Ico.U64; break; + case Ico_F32: return False; + case Ico_F64: return False; + case Ico_F32i: return False; + case Ico_F64i: return False; + case Ico_V128: return False; + case Ico_V256: return False; + default: ppIRExpr(at); tl_assert(0); + } + /* VG_(printf)("%llx\n", n); */ + return (/*32*/ n == 0xFEFEFEFFULL + /*32*/ || n == 0x80808080ULL + /*32*/ || n == 0x7F7F7F7FULL + /*32*/ || n == 0x7EFEFEFFULL + /*32*/ || n == 0x81010100ULL + /*64*/ || n == 0xFFFFFFFFFEFEFEFFULL + /*64*/ || n == 0xFEFEFEFEFEFEFEFFULL + /*64*/ || n == 0x0000000000008080ULL + /*64*/ || n == 0x8080808080808080ULL + /*64*/ || n == 0x0101010101010101ULL + ); +} - case Iex_GetI: { - IRRegArray* descr_b; - IRAtom *t1, *t2, *t3, *t4; - IRRegArray* descr = e->Iex.GetI.descr; - IRType equivIntTy - = MC_(get_otrack_reg_array_equiv_int_type)(descr); - /* If this array is unshadowable for whatever reason, use the - usual approximation. */ - if (equivIntTy == Ity_INVALID) - return mkU32(0); - tl_assert(sizeofIRType(equivIntTy) >= 4); - tl_assert(sizeofIRType(equivIntTy) == sizeofIRType(descr->elemTy)); - descr_b = mkIRRegArray( descr->base + 2*mce->layout->total_sizeB, - equivIntTy, descr->nElems ); - /* Do a shadow indexed get of the same size, giving t1. Take - the bottom 32 bits of it, giving t2. Compute into t3 the - origin for the index (almost certainly zero, but there's - no harm in being complete... [truncated message content] |