From: <sv...@va...> - 2017-04-04 12:02:26
Author: mjw
Date: Tue Apr 4 13:02:14 2017
New Revision: 3344
Log:
Initialize s390_host_hwcaps early in LibVEX_FrontEnd.
VEX svn r3341 split LibVEX_Translate into LibVEX_FrontEnd and
LibVEX_BackEnd. The s390_host_hwcaps (KLUDGE) needs to be initialized
early in LibVEX_FrontEnd.
Modified:
trunk/priv/main_main.c
Modified: trunk/priv/main_main.c
==============================================================================
--- trunk/priv/main_main.c (original)
+++ trunk/priv/main_main.c Tue Apr 4 13:02:14 2017
@@ -438,6 +438,9 @@
break;
case VexArchS390X:
+ /* KLUDGE: export hwcaps. */
+ s390_host_hwcaps = vta->archinfo_host.hwcaps;
+
preciseMemExnsFn
= S390FN(guest_s390x_state_requires_precise_mem_exns);
disInstrFn = S390FN(disInstr_S390);
@@ -951,8 +954,6 @@
case VexArchS390X:
mode64 = True;
- /* KLUDGE: export hwcaps. */
- s390_host_hwcaps = vta->archinfo_host.hwcaps;
rRegUniv = S390FN(getRRegUniverse_S390());
isMove = CAST_TO_TYPEOF(isMove) S390FN(isMove_S390Instr);
getRegUsage
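The hunks above move the hwcaps export from the back end to the front end. A minimal sketch of the ordering problem this fixes, with hypothetical stand-in names (not the real VEX API): once translation is split into two halves, a global consulted during front-end disassembly must be initialized before the front end runs, not in the back end.

```c
#include <assert.h>

/* Illustrative stand-ins for s390_host_hwcaps and the two pipeline
   halves introduced by r3341; names are hypothetical. */
static unsigned s390_host_hwcaps_demo = 0;

static int front_end_uses_hwcaps(void)
{
   /* The front end consults the global while disassembling, so it
      must already be initialized by the time this runs. */
   return s390_host_hwcaps_demo != 0;
}

static int translate(unsigned hwcaps)
{
   /* Initialize the exported global *before* front-end work begins,
      mirroring what r3344 does at the top of LibVEX_FrontEnd. */
   s390_host_hwcaps_demo = hwcaps;
   return front_end_uses_hwcaps();
}
```

Had the assignment stayed in the back end, `front_end_uses_hwcaps` would observe a stale (zero) value.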
From: <sv...@va...> - 2017-04-04 11:09:06
Author: petarj
Date: Tue Apr 4 12:09:00 2017
New Revision: 16294
Log:
mips: update the list of fixed bugs for 3.12 release
For the record, bug 348924 has been fixed with VEX r3219.
This fix is already available in 3.12 release.
Modified:
trunk/NEWS
Modified: trunk/NEWS
==============================================================================
--- trunk/NEWS (original)
+++ trunk/NEWS Tue Apr 4 12:09:00 2017
@@ -274,6 +274,7 @@
303877 valgrind doesn't support compressed debuginfo sections.
345307 Warning about "still reachable" memory when using libstdc++ from gcc 5
348345 Assertion fails for negative lineno
+348924 MIPS: Load doubles through memory so the code compiles with the FPXX ABI
351282 V 3.10.1 MIPS softfloat build broken with GCC 4.9.3 / binutils 2.25.1
351692 Dumps created by valgrind are not readable by gdb (mips32 specific)
351804 Crash on generating suppressions for "printf" call on OS X 10.10
From: <sv...@va...> - 2017-04-04 10:40:33
Author: petarj
Date: Tue Apr 4 11:40:22 2017
New Revision: 16293
Log:
mips: update the list of fixed bugs
Bug 340777 has been resolved by several changes over the last two years.
The last important commit is r16261.
Modified:
trunk/NEWS
Modified: trunk/NEWS
==============================================================================
--- trunk/NEWS (original)
+++ trunk/NEWS Tue Apr 4 11:40:22 2017
@@ -93,6 +93,7 @@
where XXXXXX is the bug number as listed below.
162848 --log-file output isn't split when a program forks
+340777 Illegal instruction on mips (ar71xx)
341481 MIPS64: Iop_CmpNE32 triggers false warning on MIPS64 platforms
342040 Valgrind mishandles clone with CLONE_VFORK | CLONE_VM that clones
to a different stack.
From: <sv...@va...> - 2017-04-03 20:30:55
Author: iraisr
Date: Mon Apr 3 21:30:46 2017
New Revision: 3343
Log:
Implement support for traversing IfThenElse statements
in the IR tree-building pass.
Modified:
branches/VEX_JIT_HACKS/priv/ir_defs.c
branches/VEX_JIT_HACKS/priv/ir_opt.c
Modified: branches/VEX_JIT_HACKS/priv/ir_defs.c
==============================================================================
--- branches/VEX_JIT_HACKS/priv/ir_defs.c (original)
+++ branches/VEX_JIT_HACKS/priv/ir_defs.c Mon Apr 3 21:30:46 2017
@@ -1590,7 +1590,9 @@
for (UInt i = 0; i < phis->phis_used; i++) {
print_depth(depth);
ppIRPhi(phis->phis[i]);
- vex_printf("\n");
+ if (i < phis->phis_used - 1) {
+ vex_printf("\n");
+ }
}
}
Modified: branches/VEX_JIT_HACKS/priv/ir_opt.c
==============================================================================
--- branches/VEX_JIT_HACKS/priv/ir_opt.c (original)
+++ branches/VEX_JIT_HACKS/priv/ir_opt.c Mon Apr 3 21:30:46 2017
@@ -4062,7 +4062,7 @@
Bool invalidate;
Bool anyDone = False;
- HashHW* tenv = newHHW(); /* :: IRTemp.index -> IRTemp */
+ HashHW* tenv = newHHW(); /* :: IRTemp -> IRTemp */
HashHW* aenv = newHHW(); /* :: AvailExpr* -> IRTemp */
/* Iterate forwards over the stmts.
@@ -5261,6 +5261,14 @@
}
}
+static void initAEnv(ATmpInfo env[], ATmpInfo parent[])
+{
+ for (UInt i = 0; i < A_NENV; i++) {
+ env[i].bindee = (parent != NULL) ? parent[i].bindee : NULL;
+ env[i].binder = (parent != NULL) ? parent[i].binder : IRTemp_INVALID;
+ }
+}
+
/* --- Tree-traversal fns --- */
/* Traverse an expr, and detect if any part of it reads memory or does
@@ -5343,11 +5351,10 @@
env[0].getInterval.high = -1; /* filled in later */
}
-/* Given uses :: array of UShort, indexed by IRTemp.index.
+/* Given uses :: array of UShort, indexed by IRTemp.
Add the use-occurrences of temps in this expression
- to the env.
-*/
-static void aoccCount_Expr ( UShort* uses, IRExpr* e )
+ to the env. */
+static void aoccCount_Expr(UShort uses[], const IRExpr* e)
{
Int i;
@@ -5408,12 +5415,14 @@
}
}
+static void aoccCount_IRStmtVec(UShort uses[], const IRStmtVec* stmts,
+ Bool* max_ga_known, Addr* max_ga);
-/* Given uses :: array of UShort, indexed by IRTemp.index.
+/* Given uses :: array of UShort, indexed by IRTemp.
Add the use-occurrences of temps in this statement
- to the env.
-*/
-static void aoccCount_Stmt ( UShort* uses, IRStmt* st )
+ to the env. */
+static void aoccCount_Stmt(UShort uses[], const IRStmt* st, Bool* max_ga_known,
+ Addr* max_ga)
{
IRDirty* d;
IRCAS* cas;
@@ -5485,9 +5494,14 @@
return;
case Ist_IfThenElse: {
aoccCount_Expr(uses, st->Ist.IfThenElse.cond);
+ aoccCount_IRStmtVec(uses, st->Ist.IfThenElse.then_leg,
+ max_ga_known, max_ga);
+ aoccCount_IRStmtVec(uses, st->Ist.IfThenElse.else_leg,
+ max_ga_known, max_ga);
IRPhiVec* phi_nodes = st->Ist.IfThenElse.phi_nodes;
for (UInt i = 0; i < phi_nodes->phis_used; i++) {
- uses[phi_nodes->phis[i]->dst]++;
+ uses[phi_nodes->phis[i]->srcThen]++;
+ uses[phi_nodes->phis[i]->srcElse]++;
}
return;
}
@@ -5497,6 +5511,30 @@
}
}
+static void aoccCount_IRStmtVec(UShort uses[], const IRStmtVec* stmts,
+ Bool* max_ga_known, Addr* max_ga)
+{
+ for (UInt i = 0; i < stmts->stmts_used; i++) {
+ const IRStmt* st = stmts->stmts[i];
+ switch (st->tag) {
+ case Ist_NoOp:
+ continue;
+ case Ist_IMark: {
+ UInt len = st->Ist.IMark.len;
+ Addr mga = st->Ist.IMark.addr + (len < 1 ? 1 : len) - 1;
+ *max_ga_known = True;
+ if (mga > *max_ga)
+ *max_ga = mga;
+ break;
+ }
+ default:
+ break;
+ }
+ aoccCount_Stmt(uses, st, max_ga_known, max_ga);
+ }
+}
+
+
/* Look up a binding for tmp in the env. If found, return the bound
expression, and set the env's binding to NULL so it is marked as
unused. If not found, return NULL. */
@@ -5773,11 +5811,16 @@
}
}
-/* Same deal as atbSubst_Expr, except for stmts. */
+static IRStmtVec* atbSubst_StmtVec(
+ const IRTypeEnv* tyenv, ATmpInfo* env, IRStmtVec* stmts,
+ Bool (*preciseMemExnsFn)(Int, Int, VexRegisterUpdates),
+ VexRegisterUpdates pxControl, UShort uses[]);
-static IRStmt* atbSubst_Stmt ( ATmpInfo* env, IRStmt* st )
+/* Same deal as atbSubst_Expr, except for stmts. */
+static IRStmt* atbSubst_Stmt(const IRTypeEnv* tyenv, ATmpInfo* env, IRStmt* st,
+ Bool (*preciseMemExnsFn)(Int, Int, VexRegisterUpdates),
+ VexRegisterUpdates pxControl, UShort uses[])
{
- Int i;
IRDirty *d, *d2;
IRCAS *cas, *cas2;
IRPutI *puti, *puti2;
@@ -5868,15 +5911,51 @@
if (d2->mFx != Ifx_None)
d2->mAddr = atbSubst_Expr(env, d2->mAddr);
d2->guard = atbSubst_Expr(env, d2->guard);
- for (i = 0; d2->args[i]; i++) {
+ for (UInt i = 0; d2->args[i]; i++) {
IRExpr* arg = d2->args[i];
if (LIKELY(!is_IRExpr_VECRET_or_GSPTR(arg)))
d2->args[i] = atbSubst_Expr(env, arg);
}
return IRStmt_Dirty(d2);
- case Ist_IfThenElse:
- vpanic("TODO-JIT: what to do?");
- break;
+ case Ist_IfThenElse: {
+ ATmpInfo then_env[A_NENV];
+ initAEnv(then_env, env);
+ IRStmtVec* then_leg
+ = atbSubst_StmtVec(tyenv, then_env, st->Ist.IfThenElse.then_leg,
+ preciseMemExnsFn, pxControl, uses);
+
+ ATmpInfo else_env[A_NENV];
+ initAEnv(else_env, env);
+ IRStmtVec* else_leg
+ = atbSubst_StmtVec(tyenv, else_env, st->Ist.IfThenElse.else_leg,
+ preciseMemExnsFn, pxControl, uses);
+
+ const IRPhiVec* phi_nodes = st->Ist.IfThenElse.phi_nodes;
+ IRPhiVec* out = emptyIRPhiVec();
+ for (UInt i = 0; i < phi_nodes->phis_used; i++) {
+ IRPhi* phi = phi_nodes->phis[i];
+ if (uses[phi->dst] == 0) {
+ if (0) vex_printf("DEAD phi node\n");
+ continue; /* for (UInt i = 0; i < phi_nodes->phis_used; i++) */
+ }
+ addIRPhiToIRPhiVec(out, phi);
+
+ /* Dump bindings in env (if any) which lead to
+ phi->srcThen or phi->srcElse. */
+ IRExpr* e = atbSubst_Temp(then_env, phi->srcThen);
+ if (e != NULL) {
+ addStmtToIRStmtVec(then_leg, IRStmt_WrTmp(phi->srcThen, e));
+ }
+
+ e = atbSubst_Temp(else_env, phi->srcElse);
+ if (e != NULL) {
+ addStmtToIRStmtVec(else_leg, IRStmt_WrTmp(phi->srcElse, e));
+ }
+ }
+
+ return IRStmt_IfThenElse(atbSubst_Expr(env, st->Ist.IfThenElse.cond),
+ then_leg, else_leg, out);
+ }
default:
vex_printf("\n"); ppIRStmt(st, NULL, 0); vex_printf("\n");
vpanic("atbSubst_Stmt");
@@ -5943,7 +6022,7 @@
requiresPreciseMemExns return whether or not that modification
requires precise exceptions. */
static Interval stmt_modifies_guest_state (
- IRSB *bb, const IRStmt *st,
+ const IRTypeEnv *tyenv, const IRStmt *st,
Bool (*preciseMemExnsFn)(Int,Int,VexRegisterUpdates),
VexRegisterUpdates pxControl,
/*OUT*/Bool *requiresPreciseMemExns
@@ -5954,7 +6033,7 @@
switch (st->tag) {
case Ist_Put: {
Int offset = st->Ist.Put.offset;
- Int size = sizeofIRType(typeOfIRExpr(bb->tyenv, st->Ist.Put.data));
+ Int size = sizeofIRType(typeOfIRExpr(tyenv, st->Ist.Put.data));
*requiresPreciseMemExns
= preciseMemExnsFn(offset, offset + size - 1, pxControl);
@@ -5996,138 +6075,57 @@
}
}
-/* notstatic */ Addr ado_treebuild_BB (
- IRSB* bb,
- Bool (*preciseMemExnsFn)(Int,Int,VexRegisterUpdates),
- VexRegisterUpdates pxControl
- )
-{
- Int j, k, m;
- Bool stmtStores, invalidateMe;
- Interval putInterval;
- IRStmt* st2;
- ATmpInfo env[A_NENV];
-
- Bool max_ga_known = False;
- Addr max_ga = 0;
-
- Int n_tmps = bb->tyenv->used;
- UShort* uses = LibVEX_Alloc_inline(n_tmps * sizeof(UShort));
-
- /* Phase 1. Scan forwards in bb, counting use occurrences of each
- temp. Also count occurrences in the bb->next field. Take the
- opportunity to also find the maximum guest address in the block,
- since that will be needed later for deciding when we can safely
- elide event checks. */
-
- for (UInt i = 0; i < n_tmps; i++)
- uses[i] = 0;
-
- for (UInt i = 0; i < bb->stmts->stmts_used; i++) {
- IRStmt* st = bb->stmts->stmts[i];
- switch (st->tag) {
- case Ist_NoOp:
- continue;
- case Ist_IMark: {
- UInt len = st->Ist.IMark.len;
- Addr mga = st->Ist.IMark.addr + (len < 1 ? 1 : len) - 1;
- max_ga_known = True;
- if (mga > max_ga)
- max_ga = mga;
- break;
- }
- default:
- break;
- }
- aoccCount_Stmt( uses, st );
- }
- aoccCount_Expr(uses, bb->next );
-
-# if 0
- for (i = 0; i < n_tmps; i++) {
- if (uses[i] == 0)
- continue;
- ppIRTemp( (IRTemp)i );
- vex_printf(" used %d\n", (Int)uses[i] );
- }
-# endif
-
- /* Phase 2. Scan forwards in bb. For each statement in turn:
-
- If the env is full, emit the end element. This guarantees
- there is at least one free slot in the following.
-
- On seeing 't = E', occ(t)==1,
- let E'=env(E)
- delete this stmt
- add t -> E' to the front of the env
- Examine E' and set the hints for E' appropriately
- (doesLoad? doesGet?)
-
- On seeing any other stmt,
- let stmt' = env(stmt)
- remove from env any 't=E' binds invalidated by stmt
- emit the invalidated stmts
- emit stmt'
- compact any holes in env
- by sliding entries towards the front
-
- Finally, apply env to bb->next.
- */
-
- for (UInt i = 0; i < A_NENV; i++) {
- env[i].bindee = NULL;
- env[i].binder = IRTemp_INVALID;
- }
-
- /* The stmts in bb are being reordered, and we are guaranteed to
- end up with no more than the number we started with. Use i to
- be the cursor of the current stmt examined and j <= i to be that
- for the current stmt being written.
- */
- j = 0;
- for (UInt i = 0; i < bb->stmts->stmts_used; i++) {
+static IRStmtVec* atbSubst_StmtVec(
+ const IRTypeEnv* tyenv, ATmpInfo* env, IRStmtVec* stmts,
+ Bool (*preciseMemExnsFn)(Int, Int, VexRegisterUpdates),
+ VexRegisterUpdates pxControl, UShort uses[])
+{
+ /* The stmts are being reordered, and we are guaranteed to end up with no
+ more than the number we started with. Use i to be the cursor of the
+ current stmt examined and j <= i to be that for the current stmt being
+ written. */
+ UInt j = 0;
+ for (UInt i = 0; i < stmts->stmts_used; i++) {
- IRStmt* st = bb->stmts->stmts[i];
+ IRStmt* st = stmts->stmts[i];
if (st->tag == Ist_NoOp)
continue;
/* Ensure there's at least one space in the env, by emitting
the oldest binding if necessary. */
- if (env[A_NENV-1].bindee != NULL) {
- bb->stmts->stmts[j] = IRStmt_WrTmp( env[A_NENV-1].binder,
- env[A_NENV-1].bindee );
+ if (env[A_NENV - 1].bindee != NULL) {
+ stmts->stmts[j] = IRStmt_WrTmp(env[A_NENV - 1].binder,
+ env[A_NENV - 1].bindee);
j++;
vassert(j <= i);
- env[A_NENV-1].bindee = NULL;
+ env[A_NENV - 1].bindee = NULL;
}
/* Consider current stmt. */
if (st->tag == Ist_WrTmp && uses[st->Ist.WrTmp.tmp] <= 1) {
- IRExpr *e, *e2;
-
/* optional extra: dump dead bindings as we find them.
Removes the need for a prior dead-code removal pass. */
if (uses[st->Ist.WrTmp.tmp] == 0) {
if (0) vex_printf("DEAD binding\n");
- continue; /* for (i = 0; i < bb->stmts->stmts_used; i++) loop */
+ continue; /* for (UInt i = 0; i < stmts->stmts_used; i++) loop */
}
vassert(uses[st->Ist.WrTmp.tmp] == 1);
/* ok, we have 't = E', occ(t)==1. Do the abovementioned
actions. */
- e = st->Ist.WrTmp.data;
- e2 = atbSubst_Expr(env, e);
+ IRExpr* e = st->Ist.WrTmp.data;
+ IRExpr* e2 = atbSubst_Expr(env, e);
addToEnvFront(env, st->Ist.WrTmp.tmp, e2);
setHints_Expr(&env[0].doesLoad, &env[0].getInterval, e2);
/* don't advance j, as we are deleting this stmt and instead
holding it temporarily in the env. */
- continue; /* for (i = 0; i < bb->stmts_used; i++) loop */
+ continue; /* for (UInt i = 0; i < stmts->stmts_used; i++) loop */
}
/* we get here for any other kind of statement. */
/* 'use up' any bindings required by the current statement. */
- st2 = atbSubst_Stmt(env, st);
+ IRStmt* st2 = atbSubst_Stmt(tyenv, env, st, preciseMemExnsFn, pxControl,
+ uses);
/* Now, before this stmt, dump any bindings in env that it
invalidates. These need to be dumped in the order in which
@@ -6138,29 +6136,28 @@
consideration does, or might do (sidely safe @ True). */
Bool putRequiresPreciseMemExns;
- putInterval = stmt_modifies_guest_state(
- bb, st, preciseMemExnsFn, pxControl,
- &putRequiresPreciseMemExns
- );
+ Interval putInterval = stmt_modifies_guest_state(
+ tyenv, st, preciseMemExnsFn, pxControl,
+ &putRequiresPreciseMemExns);
/* be True if this stmt writes memory or might do (==> we don't
want to reorder other loads or stores relative to it). Also,
both LL and SC fall under this classification, since we
really ought to be conservative and not reorder any other
memory transactions relative to them. */
- stmtStores
+ Bool stmtStores
= toBool( st->tag == Ist_Store
|| (st->tag == Ist_Dirty
&& dirty_helper_stores(st->Ist.Dirty.details))
|| st->tag == Ist_LLSC
|| st->tag == Ist_CAS );
- for (k = A_NENV-1; k >= 0; k--) {
+ for (Int k = A_NENV - 1; k >= 0; k--) {
if (env[k].bindee == NULL)
continue;
/* Compare the actions of this stmt with the actions of
binding 'k', to see if they invalidate the binding. */
- invalidateMe
+ Bool invalidateMe
= toBool(
/* a store invalidates loaded data */
(env[k].doesLoad && stmtStores)
@@ -6170,7 +6167,7 @@
/* a put invalidates loaded data. That means, in essense, that
a load expression cannot be substituted into a statement
that follows the put. But there is nothing wrong doing so
- except when the put statement requries precise exceptions.
+ except when the put statement requires precise exceptions.
Think of a load that is moved past a put where the put
updates the IP in the guest state. If the load generates
a segfault, the wrong address (line number) would be
@@ -6186,7 +6183,7 @@
|| st->tag == Ist_AbiHint
);
if (invalidateMe) {
- bb->stmts->stmts[j] = IRStmt_WrTmp(env[k].binder, env[k].bindee);
+ stmts->stmts[j] = IRStmt_WrTmp(env[k].binder, env[k].bindee);
j++;
vassert(j <= i);
env[k].bindee = NULL;
@@ -6194,8 +6191,8 @@
}
/* Slide in-use entries in env up to the front */
- m = 0;
- for (k = 0; k < A_NENV; k++) {
+ UInt m = 0;
+ for (UInt k = 0; k < A_NENV; k++) {
if (env[k].bindee != NULL) {
env[m] = env[k];
m++;
@@ -6206,19 +6203,83 @@
}
/* finally, emit the substituted statement */
- bb->stmts->stmts[j] = st2;
- /* vex_printf("**2 "); ppIRStmt(bb->stmts->stmts[j]); vex_printf("\n");*/
+ stmts->stmts[j] = st2;
+ /* vex_printf("**2 "); ppIRStmt(stmts->stmts[j]); vex_printf("\n");*/
j++;
vassert(j <= i+1);
} /* for each stmt in the original bb ... */
- /* Finally ... substitute the ->next field as much as possible, and
- dump any left-over bindings. Hmm. Perhaps there should be no
- left over bindings? Or any left-over bindings are
- by definition dead? */
+ /* Finally ... dump any left-over bindings. Hmm. Perhaps there should be no
+ left over bindings? Or any left-over bindings are by definition dead? */
+ stmts->stmts_used = j;
+ return stmts;
+}
+
+/* notstatic */ Addr ado_treebuild_BB (
+ IRSB* bb,
+ Bool (*preciseMemExnsFn)(Int,Int,VexRegisterUpdates),
+ VexRegisterUpdates pxControl
+ )
+{
+ ATmpInfo env[A_NENV];
+
+ Bool max_ga_known = False;
+ Addr max_ga = 0;
+
+ Int n_tmps = bb->tyenv->used;
+ UShort* uses = LibVEX_Alloc_inline(n_tmps * sizeof(UShort));
+
+ /* Phase 1. Scan forwards in bb, counting use occurrences of each
+ temp. Also count occurrences in the bb->next field. Take the
+ opportunity to also find the maximum guest address in the block,
+ since that will be needed later for deciding when we can safely
+ elide event checks. */
+
+ for (UInt i = 0; i < n_tmps; i++)
+ uses[i] = 0;
+
+ aoccCount_IRStmtVec(uses, bb->stmts, &max_ga_known, &max_ga);
+ aoccCount_Expr(uses, bb->next);
+
+# if 0
+ for (i = 0; i < n_tmps; i++) {
+ if (uses[i] == 0)
+ continue;
+ ppIRTemp( (IRTemp)i );
+ vex_printf(" used %d\n", (Int)uses[i] );
+ }
+# endif
+
+ /* Phase 2. Scan forwards in bb. For each statement in turn:
+
+ If the env is full, emit the end element. This guarantees
+ there is at least one free slot in the following.
+
+ On seeing 't = E', occ(t)==1,
+ let E'=env(E)
+ delete this stmt
+ add t -> E' to the front of the env
+ Examine E' and set the hints for E' appropriately
+ (doesLoad? doesGet?)
+
+ On seeing any other stmt,
+ let stmt' = env(stmt)
+ remove from env any 't=E' binds invalidated by stmt
+ emit the invalidated stmts
+ emit stmt'
+ compact any holes in env
+ by sliding entries towards the front
+
+ Finally, apply env to bb->next.
+ */
+
+ initAEnv(env, NULL);
+ bb->stmts = atbSubst_StmtVec(bb->tyenv, env, bb->stmts, preciseMemExnsFn,
+ pxControl, uses);
+
+ /* Finally ... substitute the ->next field as much as possible. */
bb->next = atbSubst_Expr(env, bb->next);
- bb->stmts->stmts_used = j;
return max_ga_known ? max_ga : ~(Addr)0;
}
@@ -6231,12 +6292,12 @@
/* This isn't part of IR optimisation however this pass is needed before IRSB
is handed to instruction selection phase. Deconstructs all phi nodes.
Consider this example:
- t0:2 = phi(t1:0,t2:1)
+ t2 = phi(t1,t0)
which gets trivially deconstructed into statements appended to:
- then leg:
- t0:2 = t1:0
+ t2 = t1
- else leg:
- t0:2 = t2:1
+ t2 = t0
Such an IRSB no longer holds SSA property after this pass but subsequent
phases do no require it. */
@@ -6252,7 +6313,7 @@
IRStmtVec* else_leg = st->Ist.IfThenElse.else_leg;
IRPhiVec* phi_nodes = st->Ist.IfThenElse.phi_nodes;
for (UInt j = 0; j < phi_nodes->phis_used; j++) {
- IRPhi* phi = phi_nodes->phis[j];
+ const IRPhi* phi = phi_nodes->phis[j];
addStmtToIRStmtVec(then_leg, IRStmt_WrTmp(phi->dst,
IRExpr_RdTmp(phi->srcThen)));
addStmtToIRStmtVec(else_leg, IRStmt_WrTmp(phi->dst,
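The deconstruction pass described in this patch turns `t2 = phi(t1,t0)` into a plain move appended to each leg. A simplified model of that step, using illustrative types rather than the real VEX IR structures:

```c
#include <assert.h>

/* Simplified model of deconstructing a phi node
      t_dst = phi(t_then, t_else)
   into plain moves appended to the two legs, as the pass above does.
   Types and names here are illustrative, not the real VEX IR. */
typedef struct { int dst, srcThen, srcElse; } Phi;
typedef struct { int dst, src; int used; } Move;

static void deconstruct_phi(const Phi* phi, Move* then_leg, Move* else_leg)
{
   /* then leg gets: t_dst = t_then */
   then_leg->dst = phi->dst;
   then_leg->src = phi->srcThen;
   then_leg->used = 1;
   /* else leg gets: t_dst = t_else */
   else_leg->dst = phi->dst;
   else_leg->src = phi->srcElse;
   else_leg->used = 1;
}
```

After this rewrite the block is no longer in SSA form (the same temp is written on both legs), which is exactly the property change the commit message notes.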
From: <sv...@va...> - 2017-04-03 14:30:21
Author: petarj
Date: Mon Apr 3 15:30:13 2017
New Revision: 3342
Log:
mips64: sign-extend results from dirty helper
Values returned from the dirty helper may not be sign-extended, so let's
make sure the values get passed as sign-extended for Ity_I32, Ity_I16, and
Ity_I8 cases.
At the same time, we can remove now redundant sign-extensions introduced in
VEX r3304.
This fixes memcheck/test/bug340392 on some MIPS64 boards.
Patch by Aleksandar Rikalo.
Modified:
trunk/priv/host_mips_defs.c
trunk/priv/host_mips_isel.c
Modified: trunk/priv/host_mips_defs.c
==============================================================================
--- trunk/priv/host_mips_defs.c (original)
+++ trunk/priv/host_mips_defs.c Mon Apr 3 15:30:13 2017
@@ -1584,15 +1584,8 @@
addHRegUse(u, HRmWrite, i->Min.Shft.dst);
return;
case Min_Cmp:
- if (i->Min.Cmp.sz32 && mode64 &&
- (i->Min.Cmp.cond != MIPScc_EQ) &&
- (i->Min.Cmp.cond != MIPScc_NE)) {
- addHRegUse(u, HRmModify, i->Min.Cmp.srcL);
- addHRegUse(u, HRmModify, i->Min.Cmp.srcR);
- } else {
- addHRegUse(u, HRmRead, i->Min.Cmp.srcL);
- addHRegUse(u, HRmRead, i->Min.Cmp.srcR);
- }
+ addHRegUse(u, HRmRead, i->Min.Cmp.srcL);
+ addHRegUse(u, HRmRead, i->Min.Cmp.srcR);
addHRegUse(u, HRmWrite, i->Min.Cmp.dst);
return;
case Min_Unary:
@@ -2761,68 +2754,35 @@
UInt r_srcL = iregNo(i->Min.Cmp.srcL, mode64);
UInt r_srcR = iregNo(i->Min.Cmp.srcR, mode64);
UInt r_dst = iregNo(i->Min.Cmp.dst, mode64);
- Bool sz32 = i->Min.Cmp.sz32;
switch (i->Min.Cmp.cond) {
case MIPScc_EQ:
/* xor r_dst, r_srcL, r_srcR
sltiu r_dst, r_dst, 1 */
p = mkFormR(p, 0, r_srcL, r_srcR, r_dst, 0, 38);
- if (mode64 && sz32) {
- /* sll r_dst, r_dst, 0 */
- p = mkFormS(p, 0, r_dst, 0, r_dst, 0, 0);
- }
p = mkFormI(p, 11, r_dst, r_dst, 1);
break;
case MIPScc_NE:
/* xor r_dst, r_srcL, r_srcR
sltu r_dst, zero, r_dst */
p = mkFormR(p, 0, r_srcL, r_srcR, r_dst, 0, 38);
- if (mode64 && sz32) {
- /* sll r_dst, r_dst, 0 */
- p = mkFormS(p, 0, r_dst, 0, r_dst, 0, 0);
- }
p = mkFormR(p, 0, 0, r_dst, r_dst, 0, 43);
break;
case MIPScc_LT:
- if (mode64 && sz32) {
- /* sll r_srcL, r_srcL, 0
- sll r_srcR, r_srcR, 0 */
- p = mkFormS(p, 0, r_srcL, 0, r_srcL, 0, 0);
- p = mkFormS(p, 0, r_srcR, 0, r_srcR, 0, 0);
- }
/* slt r_dst, r_srcL, r_srcR */
p = mkFormR(p, 0, r_srcL, r_srcR, r_dst, 0, 42);
break;
case MIPScc_LO:
- if (mode64 && sz32) {
- /* sll r_srcL, r_srcL, 0
- sll r_srcR, r_srcR, 0 */
- p = mkFormS(p, 0, r_srcL, 0, r_srcL, 0, 0);
- p = mkFormS(p, 0, r_srcR, 0, r_srcR, 0, 0);
- }
/* sltu r_dst, r_srcL, r_srcR */
p = mkFormR(p, 0, r_srcL, r_srcR, r_dst, 0, 43);
break;
case MIPScc_LE:
- if (mode64 && sz32) {
- /* sll r_srcL, r_srcL, 0
- sll r_srcR, r_srcR, 0 */
- p = mkFormS(p, 0, r_srcL, 0, r_srcL, 0, 0);
- p = mkFormS(p, 0, r_srcR, 0, r_srcR, 0, 0);
- }
/* slt r_dst, r_srcR, r_srcL
xori r_dst, r_dst, 1 */
p = mkFormR(p, 0, r_srcR, r_srcL, r_dst, 0, 42);
p = mkFormI(p, 14, r_dst, r_dst, 1);
break;
case MIPScc_LS:
- if (mode64 && sz32) {
- /* sll r_srcL, r_srcL, 0
- sll r_srcR, r_srcR, 0 */
- p = mkFormS(p, 0, r_srcL, 0, r_srcL, 0, 0);
- p = mkFormS(p, 0, r_srcR, 0, r_srcR, 0, 0);
- }
/* sltu r_dst, rsrcR, r_srcL
xori r_dsr, r_dst, 1 */
p = mkFormR(p, 0, r_srcR, r_srcL, r_dst, 0, 43);
Modified: trunk/priv/host_mips_isel.c
==============================================================================
--- trunk/priv/host_mips_isel.c (original)
+++ trunk/priv/host_mips_isel.c Mon Apr 3 15:30:13 2017
@@ -3843,7 +3843,9 @@
/* The returned value is in $v0. Park it in the register
associated with tmp. */
HReg r_dst = lookupIRTemp(env, d->tmp);
- addInstr(env, mk_iMOVds_RR(r_dst, hregMIPS_GPR2(mode64)));
+ addInstr(env, MIPSInstr_Shft(Mshft_SLL, True, r_dst,
+ hregMIPS_GPR2(mode64),
+ MIPSRH_Imm(False, 0)));
vassert(rloc.pri == RLPri_Int);
vassert(addToSp == 0);
return;
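The isel hunk above replaces a plain register move from $v0 with a shift-left-by-zero. On MIPS64, `sll rd, rs, 0` sign-extends the low 32 bits of `rs` into the full 64-bit `rd`; in C the equivalent is a cast through `int32_t`. A small sketch of that semantics (the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Models the effect of "sll rd, rs, 0" on MIPS64: the low 32 bits of
   the source are sign-extended into the 64-bit destination. This is
   why the patch uses the shift instead of a plain move when parking
   the dirty-helper result from $v0. */
static int64_t sext32(int64_t v0)
{
   return (int64_t)(int32_t)v0;   /* keep low 32 bits, sign-extend */
}
```

A value with bit 31 set thus becomes negative in the 64-bit register, matching what callers of the dirty helper expect for Ity_I32 results.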
From: <sv...@va...> - 2017-04-03 13:24:15
Author: sewardj
Date: Mon Apr 3 14:24:05 2017
New Revision: 3341
Log:
Split LibVEX_Translate into front- and back-end parts. Also remove the use
of __typeof__ when building with MSVC. A combination of parts of two patches
from Andrew Dutcher <and...@gm...>.
Modified:
trunk/priv/main_main.c
trunk/priv/main_util.h
trunk/pub/libvex.h
Modified: trunk/priv/main_main.c
==============================================================================
--- trunk/priv/main_main.c (original)
+++ trunk/priv/main_main.c Mon Apr 3 14:24:05 2017
@@ -173,7 +173,7 @@
static void check_hwcaps ( VexArch arch, UInt hwcaps );
static const HChar* show_hwcaps ( VexArch arch, UInt hwcaps );
-
+static IRType arch_word_size ( VexArch arch );
/* --------- helpers --------- */
@@ -306,6 +306,7 @@
/* --------- Make a translation. --------- */
+
/* KLUDGE: S390 need to know the hwcaps of the host when generating
code. But that info is not passed to emit_S390Instr. Only mode64 is
being passed. So, ideally, we want this passed as an argument, too.
@@ -318,72 +319,31 @@
/* Exported to library client. */
-VexTranslateResult LibVEX_Translate ( VexTranslateArgs* vta )
+IRSB* LibVEX_FrontEnd ( /*MOD*/ VexTranslateArgs* vta,
+ /*OUT*/ VexTranslateResult* res,
+ /*OUT*/ VexRegisterUpdates* pxControl)
{
- /* This the bundle of functions we need to do the back-end stuff
- (insn selection, reg-alloc, assembly) whilst being insulated
- from the target instruction set. */
- Bool (*isMove) ( const HInstr*, HReg*, HReg* );
- void (*getRegUsage) ( HRegUsage*, const HInstr*, Bool );
- void (*mapRegs) ( HRegRemap*, HInstr*, Bool );
- void (*genSpill) ( HInstr**, HInstr**, HReg, Int, Bool );
- void (*genReload) ( HInstr**, HInstr**, HReg, Int, Bool );
- HInstr* (*directReload) ( HInstr*, HReg, Short );
- void (*ppInstr) ( const HInstr*, Bool );
- void (*ppReg) ( HReg );
- HInstrArray* (*iselSB) ( const IRSB*, VexArch, const VexArchInfo*,
- const VexAbiInfo*, Int, Int, Bool, Bool,
- Addr );
- Int (*emit) ( /*MB_MOD*/Bool*,
- UChar*, Int, const HInstr*, Bool, VexEndness,
- const void*, const void*, const void*,
- const void* );
IRExpr* (*specHelper) ( const HChar*, IRExpr**, IRStmt**, Int );
- Bool (*preciseMemExnsFn) ( Int, Int, VexRegisterUpdates );
-
- const RRegUniverse* rRegUniv = NULL;
-
+ Bool (*preciseMemExnsFn) ( Int, Int, VexRegisterUpdates );
DisOneInstrFn disInstrFn;
VexGuestLayout* guest_layout;
IRSB* irsb;
- HInstrArray* vcode;
- HInstrArray* rcode;
- Int i, j, k, out_used, guest_sizeB;
+ Int i;
Int offB_CMSTART, offB_CMLEN, offB_GUEST_IP, szB_GUEST_IP;
- Int offB_HOST_EvC_COUNTER, offB_HOST_EvC_FAILADDR;
- UChar insn_bytes[128];
IRType guest_word_type;
IRType host_word_type;
- Bool mode64, chainingAllowed;
- Addr max_ga;
- guest_layout = NULL;
- isMove = NULL;
- getRegUsage = NULL;
- mapRegs = NULL;
- genSpill = NULL;
- genReload = NULL;
- directReload = NULL;
- ppInstr = NULL;
- ppReg = NULL;
- iselSB = NULL;
- emit = NULL;
- specHelper = NULL;
- preciseMemExnsFn = NULL;
- disInstrFn = NULL;
- guest_word_type = Ity_INVALID;
- host_word_type = Ity_INVALID;
- offB_CMSTART = 0;
- offB_CMLEN = 0;
- offB_GUEST_IP = 0;
- szB_GUEST_IP = 0;
- offB_HOST_EvC_COUNTER = 0;
- offB_HOST_EvC_FAILADDR = 0;
- mode64 = False;
- chainingAllowed = False;
-
- vex_traceflags = vta->traceflags;
+ guest_layout = NULL;
+ specHelper = NULL;
+ disInstrFn = NULL;
+ preciseMemExnsFn = NULL;
+ guest_word_type = arch_word_size(vta->arch_guest);
+ host_word_type = arch_word_size(vta->arch_host);
+ offB_CMSTART = 0;
+ offB_CMLEN = 0;
+ offB_GUEST_IP = 0;
+ szB_GUEST_IP = 0;
vassert(vex_initdone);
vassert(vta->needs_self_check != NULL);
@@ -392,7 +352,6 @@
if (vta->disp_cp_chain_me_to_slowEP != NULL) {
vassert(vta->disp_cp_chain_me_to_fastEP != NULL);
vassert(vta->disp_cp_xindir != NULL);
- chainingAllowed = True;
} else {
vassert(vta->disp_cp_chain_me_to_fastEP == NULL);
vassert(vta->disp_cp_xindir == NULL);
@@ -401,213 +360,23 @@
vexSetAllocModeTEMP_and_clear();
vexAllocSanityCheck();
+ vex_traceflags = vta->traceflags;
+
/* First off, check that the guest and host insn sets
are supported. */
- switch (vta->arch_host) {
-
- case VexArchX86:
- mode64 = False;
- rRegUniv = X86FN(getRRegUniverse_X86());
- isMove = (__typeof__(isMove)) X86FN(isMove_X86Instr);
- getRegUsage
- = (__typeof__(getRegUsage)) X86FN(getRegUsage_X86Instr);
- mapRegs = (__typeof__(mapRegs)) X86FN(mapRegs_X86Instr);
- genSpill = (__typeof__(genSpill)) X86FN(genSpill_X86);
- genReload = (__typeof__(genReload)) X86FN(genReload_X86);
- directReload = (__typeof__(directReload)) X86FN(directReload_X86);
- ppInstr = (__typeof__(ppInstr)) X86FN(ppX86Instr);
- ppReg = (__typeof__(ppReg)) X86FN(ppHRegX86);
- iselSB = X86FN(iselSB_X86);
- emit = (__typeof__(emit)) X86FN(emit_X86Instr);
- host_word_type = Ity_I32;
- vassert(vta->archinfo_host.endness == VexEndnessLE);
- break;
-
- case VexArchAMD64:
- mode64 = True;
- rRegUniv = AMD64FN(getRRegUniverse_AMD64());
- isMove = (__typeof__(isMove)) AMD64FN(isMove_AMD64Instr);
- getRegUsage
- = (__typeof__(getRegUsage)) AMD64FN(getRegUsage_AMD64Instr);
- mapRegs = (__typeof__(mapRegs)) AMD64FN(mapRegs_AMD64Instr);
- genSpill = (__typeof__(genSpill)) AMD64FN(genSpill_AMD64);
- genReload = (__typeof__(genReload)) AMD64FN(genReload_AMD64);
- directReload = (__typeof__(directReload)) AMD64FN(directReload_AMD64);
- ppInstr = (__typeof__(ppInstr)) AMD64FN(ppAMD64Instr);
- ppReg = (__typeof__(ppReg)) AMD64FN(ppHRegAMD64);
- iselSB = AMD64FN(iselSB_AMD64);
- emit = (__typeof__(emit)) AMD64FN(emit_AMD64Instr);
- host_word_type = Ity_I64;
- vassert(vta->archinfo_host.endness == VexEndnessLE);
- break;
-
- case VexArchPPC32:
- mode64 = False;
- rRegUniv = PPC32FN(getRRegUniverse_PPC(mode64));
- isMove = (__typeof__(isMove)) PPC32FN(isMove_PPCInstr);
- getRegUsage
- = (__typeof__(getRegUsage)) PPC32FN(getRegUsage_PPCInstr);
- mapRegs = (__typeof__(mapRegs)) PPC32FN(mapRegs_PPCInstr);
- genSpill = (__typeof__(genSpill)) PPC32FN(genSpill_PPC);
- genReload = (__typeof__(genReload)) PPC32FN(genReload_PPC);
- ppInstr = (__typeof__(ppInstr)) PPC32FN(ppPPCInstr);
- ppReg = (__typeof__(ppReg)) PPC32FN(ppHRegPPC);
- iselSB = PPC32FN(iselSB_PPC);
- emit = (__typeof__(emit)) PPC32FN(emit_PPCInstr);
- host_word_type = Ity_I32;
- vassert(vta->archinfo_host.endness == VexEndnessBE);
- break;
-
- case VexArchPPC64:
- mode64 = True;
- rRegUniv = PPC64FN(getRRegUniverse_PPC(mode64));
- isMove = (__typeof__(isMove)) PPC64FN(isMove_PPCInstr);
- getRegUsage
- = (__typeof__(getRegUsage)) PPC64FN(getRegUsage_PPCInstr);
- mapRegs = (__typeof__(mapRegs)) PPC64FN(mapRegs_PPCInstr);
- genSpill = (__typeof__(genSpill)) PPC64FN(genSpill_PPC);
- genReload = (__typeof__(genReload)) PPC64FN(genReload_PPC);
- ppInstr = (__typeof__(ppInstr)) PPC64FN(ppPPCInstr);
- ppReg = (__typeof__(ppReg)) PPC64FN(ppHRegPPC);
- iselSB = PPC64FN(iselSB_PPC);
- emit = (__typeof__(emit)) PPC64FN(emit_PPCInstr);
- host_word_type = Ity_I64;
- vassert(vta->archinfo_host.endness == VexEndnessBE ||
- vta->archinfo_host.endness == VexEndnessLE );
- break;
-
- case VexArchS390X:
- mode64 = True;
- /* KLUDGE: export hwcaps. */
- s390_host_hwcaps = vta->archinfo_host.hwcaps;
- rRegUniv = S390FN(getRRegUniverse_S390());
- isMove = (__typeof__(isMove)) S390FN(isMove_S390Instr);
- getRegUsage
- = (__typeof__(getRegUsage)) S390FN(getRegUsage_S390Instr);
- mapRegs = (__typeof__(mapRegs)) S390FN(mapRegs_S390Instr);
- genSpill = (__typeof__(genSpill)) S390FN(genSpill_S390);
- genReload = (__typeof__(genReload)) S390FN(genReload_S390);
- // fixs390: consider implementing directReload_S390
- ppInstr = (__typeof__(ppInstr)) S390FN(ppS390Instr);
- ppReg = (__typeof__(ppReg)) S390FN(ppHRegS390);
- iselSB = S390FN(iselSB_S390);
- emit = (__typeof__(emit)) S390FN(emit_S390Instr);
- host_word_type = Ity_I64;
- vassert(vta->archinfo_host.endness == VexEndnessBE);
- break;
-
- case VexArchARM:
- mode64 = False;
- rRegUniv = ARMFN(getRRegUniverse_ARM());
- isMove = (__typeof__(isMove)) ARMFN(isMove_ARMInstr);
- getRegUsage
- = (__typeof__(getRegUsage)) ARMFN(getRegUsage_ARMInstr);
- mapRegs = (__typeof__(mapRegs)) ARMFN(mapRegs_ARMInstr);
- genSpill = (__typeof__(genSpill)) ARMFN(genSpill_ARM);
- genReload = (__typeof__(genReload)) ARMFN(genReload_ARM);
- ppInstr = (__typeof__(ppInstr)) ARMFN(ppARMInstr);
- ppReg = (__typeof__(ppReg)) ARMFN(ppHRegARM);
- iselSB = ARMFN(iselSB_ARM);
- emit = (__typeof__(emit)) ARMFN(emit_ARMInstr);
- host_word_type = Ity_I32;
- vassert(vta->archinfo_host.endness == VexEndnessLE);
- break;
-
- case VexArchARM64:
- mode64 = True;
- rRegUniv = ARM64FN(getRRegUniverse_ARM64());
- isMove = (__typeof__(isMove)) ARM64FN(isMove_ARM64Instr);
- getRegUsage
- = (__typeof__(getRegUsage)) ARM64FN(getRegUsage_ARM64Instr);
- mapRegs = (__typeof__(mapRegs)) ARM64FN(mapRegs_ARM64Instr);
- genSpill = (__typeof__(genSpill)) ARM64FN(genSpill_ARM64);
- genReload = (__typeof__(genReload)) ARM64FN(genReload_ARM64);
- ppInstr = (__typeof__(ppInstr)) ARM64FN(ppARM64Instr);
- ppReg = (__typeof__(ppReg)) ARM64FN(ppHRegARM64);
- iselSB = ARM64FN(iselSB_ARM64);
- emit = (__typeof__(emit)) ARM64FN(emit_ARM64Instr);
- host_word_type = Ity_I64;
- vassert(vta->archinfo_host.endness == VexEndnessLE);
- break;
-
- case VexArchMIPS32:
- mode64 = False;
- rRegUniv = MIPS32FN(getRRegUniverse_MIPS(mode64));
- isMove = (__typeof__(isMove)) MIPS32FN(isMove_MIPSInstr);
- getRegUsage
- = (__typeof__(getRegUsage)) MIPS32FN(getRegUsage_MIPSInstr);
- mapRegs = (__typeof__(mapRegs)) MIPS32FN(mapRegs_MIPSInstr);
- genSpill = (__typeof__(genSpill)) MIPS32FN(genSpill_MIPS);
- genReload = (__typeof__(genReload)) MIPS32FN(genReload_MIPS);
- ppInstr = (__typeof__(ppInstr)) MIPS32FN(ppMIPSInstr);
- ppReg = (__typeof__(ppReg)) MIPS32FN(ppHRegMIPS);
- iselSB = MIPS32FN(iselSB_MIPS);
- emit = (__typeof__(emit)) MIPS32FN(emit_MIPSInstr);
- host_word_type = Ity_I32;
- vassert(vta->archinfo_host.endness == VexEndnessLE
- || vta->archinfo_host.endness == VexEndnessBE);
- break;
-
- case VexArchMIPS64:
- mode64 = True;
- rRegUniv = MIPS64FN(getRRegUniverse_MIPS(mode64));
- isMove = (__typeof__(isMove)) MIPS64FN(isMove_MIPSInstr);
- getRegUsage
- = (__typeof__(getRegUsage)) MIPS64FN(getRegUsage_MIPSInstr);
- mapRegs = (__typeof__(mapRegs)) MIPS64FN(mapRegs_MIPSInstr);
- genSpill = (__typeof__(genSpill)) MIPS64FN(genSpill_MIPS);
- genReload = (__typeof__(genReload)) MIPS64FN(genReload_MIPS);
- ppInstr = (__typeof__(ppInstr)) MIPS64FN(ppMIPSInstr);
- ppReg = (__typeof__(ppReg)) MIPS64FN(ppHRegMIPS);
- iselSB = MIPS64FN(iselSB_MIPS);
- emit = (__typeof__(emit)) MIPS64FN(emit_MIPSInstr);
- host_word_type = Ity_I64;
- vassert(vta->archinfo_host.endness == VexEndnessLE
- || vta->archinfo_host.endness == VexEndnessBE);
- break;
-
- case VexArchTILEGX:
- mode64 = True;
- rRegUniv = TILEGXFN(getRRegUniverse_TILEGX());
- isMove = (__typeof__(isMove)) TILEGXFN(isMove_TILEGXInstr);
- getRegUsage =
- (__typeof__(getRegUsage)) TILEGXFN(getRegUsage_TILEGXInstr);
- mapRegs = (__typeof__(mapRegs)) TILEGXFN(mapRegs_TILEGXInstr);
- genSpill = (__typeof__(genSpill)) TILEGXFN(genSpill_TILEGX);
- genReload = (__typeof__(genReload)) TILEGXFN(genReload_TILEGX);
- ppInstr = (__typeof__(ppInstr)) TILEGXFN(ppTILEGXInstr);
- ppReg = (__typeof__(ppReg)) TILEGXFN(ppHRegTILEGX);
- iselSB = TILEGXFN(iselSB_TILEGX);
- emit = (__typeof__(emit)) TILEGXFN(emit_TILEGXInstr);
- host_word_type = Ity_I64;
- vassert(vta->archinfo_host.endness == VexEndnessLE);
- break;
-
- default:
- vpanic("LibVEX_Translate: unsupported host insn set");
- }
-
- // Are the host's hardware capabilities feasible. The function will
- // not return if hwcaps are infeasible in some sense.
- check_hwcaps(vta->arch_host, vta->archinfo_host.hwcaps);
-
switch (vta->arch_guest) {
case VexArchX86:
preciseMemExnsFn
= X86FN(guest_x86_state_requires_precise_mem_exns);
- disInstrFn = X86FN(disInstr_X86);
- specHelper = X86FN(guest_x86_spechelper);
- guest_sizeB = sizeof(VexGuestX86State);
- guest_word_type = Ity_I32;
- guest_layout = X86FN(&x86guest_layout);
- offB_CMSTART = offsetof(VexGuestX86State,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestX86State,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestX86State,guest_EIP);
- szB_GUEST_IP = sizeof( ((VexGuestX86State*)0)->guest_EIP );
- offB_HOST_EvC_COUNTER = offsetof(VexGuestX86State,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestX86State,host_EvC_FAILADDR);
+ disInstrFn = X86FN(disInstr_X86);
+ specHelper = X86FN(guest_x86_spechelper);
+ guest_layout = X86FN(&x86guest_layout);
+ offB_CMSTART = offsetof(VexGuestX86State,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestX86State,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestX86State,guest_EIP);
+ szB_GUEST_IP = sizeof( ((VexGuestX86State*)0)->guest_EIP );
vassert(vta->archinfo_guest.endness == VexEndnessLE);
vassert(0 == sizeof(VexGuestX86State) % LibVEX_GUEST_STATE_ALIGN);
vassert(sizeof( ((VexGuestX86State*)0)->guest_CMSTART) == 4);
@@ -618,17 +387,13 @@
case VexArchAMD64:
preciseMemExnsFn
= AMD64FN(guest_amd64_state_requires_precise_mem_exns);
- disInstrFn = AMD64FN(disInstr_AMD64);
- specHelper = AMD64FN(guest_amd64_spechelper);
- guest_sizeB = sizeof(VexGuestAMD64State);
- guest_word_type = Ity_I64;
- guest_layout = AMD64FN(&amd64guest_layout);
- offB_CMSTART = offsetof(VexGuestAMD64State,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestAMD64State,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestAMD64State,guest_RIP);
- szB_GUEST_IP = sizeof( ((VexGuestAMD64State*)0)->guest_RIP );
- offB_HOST_EvC_COUNTER = offsetof(VexGuestAMD64State,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestAMD64State,host_EvC_FAILADDR);
+ disInstrFn = AMD64FN(disInstr_AMD64);
+ specHelper = AMD64FN(guest_amd64_spechelper);
+ guest_layout = AMD64FN(&amd64guest_layout);
+ offB_CMSTART = offsetof(VexGuestAMD64State,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestAMD64State,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestAMD64State,guest_RIP);
+ szB_GUEST_IP = sizeof( ((VexGuestAMD64State*)0)->guest_RIP );
vassert(vta->archinfo_guest.endness == VexEndnessLE);
vassert(0 == sizeof(VexGuestAMD64State) % LibVEX_GUEST_STATE_ALIGN);
vassert(sizeof( ((VexGuestAMD64State*)0)->guest_CMSTART ) == 8);
@@ -639,17 +404,13 @@
case VexArchPPC32:
preciseMemExnsFn
= PPC32FN(guest_ppc32_state_requires_precise_mem_exns);
- disInstrFn = PPC32FN(disInstr_PPC);
- specHelper = PPC32FN(guest_ppc32_spechelper);
- guest_sizeB = sizeof(VexGuestPPC32State);
- guest_word_type = Ity_I32;
- guest_layout = PPC32FN(&ppc32Guest_layout);
- offB_CMSTART = offsetof(VexGuestPPC32State,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestPPC32State,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestPPC32State,guest_CIA);
- szB_GUEST_IP = sizeof( ((VexGuestPPC32State*)0)->guest_CIA );
- offB_HOST_EvC_COUNTER = offsetof(VexGuestPPC32State,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestPPC32State,host_EvC_FAILADDR);
+ disInstrFn = PPC32FN(disInstr_PPC);
+ specHelper = PPC32FN(guest_ppc32_spechelper);
+ guest_layout = PPC32FN(&ppc32Guest_layout);
+ offB_CMSTART = offsetof(VexGuestPPC32State,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestPPC32State,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestPPC32State,guest_CIA);
+ szB_GUEST_IP = sizeof( ((VexGuestPPC32State*)0)->guest_CIA );
vassert(vta->archinfo_guest.endness == VexEndnessBE);
vassert(0 == sizeof(VexGuestPPC32State) % LibVEX_GUEST_STATE_ALIGN);
vassert(sizeof( ((VexGuestPPC32State*)0)->guest_CMSTART ) == 4);
@@ -660,17 +421,13 @@
case VexArchPPC64:
preciseMemExnsFn
= PPC64FN(guest_ppc64_state_requires_precise_mem_exns);
- disInstrFn = PPC64FN(disInstr_PPC);
- specHelper = PPC64FN(guest_ppc64_spechelper);
- guest_sizeB = sizeof(VexGuestPPC64State);
- guest_word_type = Ity_I64;
- guest_layout = PPC64FN(&ppc64Guest_layout);
- offB_CMSTART = offsetof(VexGuestPPC64State,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestPPC64State,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestPPC64State,guest_CIA);
- szB_GUEST_IP = sizeof( ((VexGuestPPC64State*)0)->guest_CIA );
- offB_HOST_EvC_COUNTER = offsetof(VexGuestPPC64State,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestPPC64State,host_EvC_FAILADDR);
+ disInstrFn = PPC64FN(disInstr_PPC);
+ specHelper = PPC64FN(guest_ppc64_spechelper);
+ guest_layout = PPC64FN(&ppc64Guest_layout);
+ offB_CMSTART = offsetof(VexGuestPPC64State,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestPPC64State,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestPPC64State,guest_CIA);
+ szB_GUEST_IP = sizeof( ((VexGuestPPC64State*)0)->guest_CIA );
vassert(vta->archinfo_guest.endness == VexEndnessBE ||
vta->archinfo_guest.endness == VexEndnessLE );
vassert(0 == sizeof(VexGuestPPC64State) % LibVEX_GUEST_STATE_ALIGN);
@@ -683,17 +440,13 @@
case VexArchS390X:
preciseMemExnsFn
= S390FN(guest_s390x_state_requires_precise_mem_exns);
- disInstrFn = S390FN(disInstr_S390);
- specHelper = S390FN(guest_s390x_spechelper);
- guest_sizeB = sizeof(VexGuestS390XState);
- guest_word_type = Ity_I64;
- guest_layout = S390FN(&s390xGuest_layout);
- offB_CMSTART = offsetof(VexGuestS390XState,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestS390XState,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestS390XState,guest_IA);
- szB_GUEST_IP = sizeof( ((VexGuestS390XState*)0)->guest_IA);
- offB_HOST_EvC_COUNTER = offsetof(VexGuestS390XState,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestS390XState,host_EvC_FAILADDR);
+ disInstrFn = S390FN(disInstr_S390);
+ specHelper = S390FN(guest_s390x_spechelper);
+ guest_layout = S390FN(&s390xGuest_layout);
+ offB_CMSTART = offsetof(VexGuestS390XState,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestS390XState,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestS390XState,guest_IA);
+ szB_GUEST_IP = sizeof( ((VexGuestS390XState*)0)->guest_IA);
vassert(vta->archinfo_guest.endness == VexEndnessBE);
vassert(0 == sizeof(VexGuestS390XState) % LibVEX_GUEST_STATE_ALIGN);
vassert(sizeof( ((VexGuestS390XState*)0)->guest_CMSTART ) == 8);
@@ -704,17 +457,13 @@
case VexArchARM:
preciseMemExnsFn
= ARMFN(guest_arm_state_requires_precise_mem_exns);
- disInstrFn = ARMFN(disInstr_ARM);
- specHelper = ARMFN(guest_arm_spechelper);
- guest_sizeB = sizeof(VexGuestARMState);
- guest_word_type = Ity_I32;
- guest_layout = ARMFN(&armGuest_layout);
- offB_CMSTART = offsetof(VexGuestARMState,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestARMState,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestARMState,guest_R15T);
- szB_GUEST_IP = sizeof( ((VexGuestARMState*)0)->guest_R15T );
- offB_HOST_EvC_COUNTER = offsetof(VexGuestARMState,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestARMState,host_EvC_FAILADDR);
+ disInstrFn = ARMFN(disInstr_ARM);
+ specHelper = ARMFN(guest_arm_spechelper);
+ guest_layout = ARMFN(&armGuest_layout);
+ offB_CMSTART = offsetof(VexGuestARMState,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestARMState,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestARMState,guest_R15T);
+ szB_GUEST_IP = sizeof( ((VexGuestARMState*)0)->guest_R15T );
vassert(vta->archinfo_guest.endness == VexEndnessLE);
vassert(0 == sizeof(VexGuestARMState) % LibVEX_GUEST_STATE_ALIGN);
vassert(sizeof( ((VexGuestARMState*)0)->guest_CMSTART) == 4);
@@ -725,17 +474,13 @@
case VexArchARM64:
preciseMemExnsFn
= ARM64FN(guest_arm64_state_requires_precise_mem_exns);
- disInstrFn = ARM64FN(disInstr_ARM64);
- specHelper = ARM64FN(guest_arm64_spechelper);
- guest_sizeB = sizeof(VexGuestARM64State);
- guest_word_type = Ity_I64;
- guest_layout = ARM64FN(&arm64Guest_layout);
- offB_CMSTART = offsetof(VexGuestARM64State,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestARM64State,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestARM64State,guest_PC);
- szB_GUEST_IP = sizeof( ((VexGuestARM64State*)0)->guest_PC );
- offB_HOST_EvC_COUNTER = offsetof(VexGuestARM64State,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestARM64State,host_EvC_FAILADDR);
+ disInstrFn = ARM64FN(disInstr_ARM64);
+ specHelper = ARM64FN(guest_arm64_spechelper);
+ guest_layout = ARM64FN(&arm64Guest_layout);
+ offB_CMSTART = offsetof(VexGuestARM64State,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestARM64State,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestARM64State,guest_PC);
+ szB_GUEST_IP = sizeof( ((VexGuestARM64State*)0)->guest_PC );
vassert(vta->archinfo_guest.endness == VexEndnessLE);
vassert(0 == sizeof(VexGuestARM64State) % LibVEX_GUEST_STATE_ALIGN);
vassert(sizeof( ((VexGuestARM64State*)0)->guest_CMSTART) == 8);
@@ -746,17 +491,13 @@
case VexArchMIPS32:
preciseMemExnsFn
= MIPS32FN(guest_mips32_state_requires_precise_mem_exns);
- disInstrFn = MIPS32FN(disInstr_MIPS);
- specHelper = MIPS32FN(guest_mips32_spechelper);
- guest_sizeB = sizeof(VexGuestMIPS32State);
- guest_word_type = Ity_I32;
- guest_layout = MIPS32FN(&mips32Guest_layout);
- offB_CMSTART = offsetof(VexGuestMIPS32State,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestMIPS32State,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestMIPS32State,guest_PC);
- szB_GUEST_IP = sizeof( ((VexGuestMIPS32State*)0)->guest_PC );
- offB_HOST_EvC_COUNTER = offsetof(VexGuestMIPS32State,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestMIPS32State,host_EvC_FAILADDR);
+ disInstrFn = MIPS32FN(disInstr_MIPS);
+ specHelper = MIPS32FN(guest_mips32_spechelper);
+ guest_layout = MIPS32FN(&mips32Guest_layout);
+ offB_CMSTART = offsetof(VexGuestMIPS32State,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestMIPS32State,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestMIPS32State,guest_PC);
+ szB_GUEST_IP = sizeof( ((VexGuestMIPS32State*)0)->guest_PC );
vassert(vta->archinfo_guest.endness == VexEndnessLE
|| vta->archinfo_guest.endness == VexEndnessBE);
vassert(0 == sizeof(VexGuestMIPS32State) % LibVEX_GUEST_STATE_ALIGN);
@@ -768,17 +509,13 @@
case VexArchMIPS64:
preciseMemExnsFn
= MIPS64FN(guest_mips64_state_requires_precise_mem_exns);
- disInstrFn = MIPS64FN(disInstr_MIPS);
- specHelper = MIPS64FN(guest_mips64_spechelper);
- guest_sizeB = sizeof(VexGuestMIPS64State);
- guest_word_type = Ity_I64;
- guest_layout = MIPS64FN(&mips64Guest_layout);
- offB_CMSTART = offsetof(VexGuestMIPS64State,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestMIPS64State,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestMIPS64State,guest_PC);
- szB_GUEST_IP = sizeof( ((VexGuestMIPS64State*)0)->guest_PC );
- offB_HOST_EvC_COUNTER = offsetof(VexGuestMIPS64State,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestMIPS64State,host_EvC_FAILADDR);
+ disInstrFn = MIPS64FN(disInstr_MIPS);
+ specHelper = MIPS64FN(guest_mips64_spechelper);
+ guest_layout = MIPS64FN(&mips64Guest_layout);
+ offB_CMSTART = offsetof(VexGuestMIPS64State,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestMIPS64State,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestMIPS64State,guest_PC);
+ szB_GUEST_IP = sizeof( ((VexGuestMIPS64State*)0)->guest_PC );
vassert(vta->archinfo_guest.endness == VexEndnessLE
|| vta->archinfo_guest.endness == VexEndnessBE);
vassert(0 == sizeof(VexGuestMIPS64State) % LibVEX_GUEST_STATE_ALIGN);
@@ -790,17 +527,13 @@
case VexArchTILEGX:
preciseMemExnsFn =
TILEGXFN(guest_tilegx_state_requires_precise_mem_exns);
- disInstrFn = TILEGXFN(disInstr_TILEGX);
- specHelper = TILEGXFN(guest_tilegx_spechelper);
- guest_sizeB = sizeof(VexGuestTILEGXState);
- guest_word_type = Ity_I64;
- guest_layout = TILEGXFN(&tilegxGuest_layout);
- offB_CMSTART = offsetof(VexGuestTILEGXState,guest_CMSTART);
- offB_CMLEN = offsetof(VexGuestTILEGXState,guest_CMLEN);
- offB_GUEST_IP = offsetof(VexGuestTILEGXState,guest_pc);
- szB_GUEST_IP = sizeof( ((VexGuestTILEGXState*)0)->guest_pc );
- offB_HOST_EvC_COUNTER = offsetof(VexGuestTILEGXState,host_EvC_COUNTER);
- offB_HOST_EvC_FAILADDR = offsetof(VexGuestTILEGXState,host_EvC_FAILADDR);
+ disInstrFn = TILEGXFN(disInstr_TILEGX);
+ specHelper = TILEGXFN(guest_tilegx_spechelper);
+ guest_layout = TILEGXFN(&tilegxGuest_layout);
+ offB_CMSTART = offsetof(VexGuestTILEGXState,guest_CMSTART);
+ offB_CMLEN = offsetof(VexGuestTILEGXState,guest_CMLEN);
+ offB_GUEST_IP = offsetof(VexGuestTILEGXState,guest_pc);
+ szB_GUEST_IP = sizeof( ((VexGuestTILEGXState*)0)->guest_pc );
vassert(vta->archinfo_guest.endness == VexEndnessLE);
vassert(0 ==
sizeof(VexGuestTILEGXState) % LibVEX_GUEST_STATE_ALIGN);
@@ -818,13 +551,12 @@
// FIXME: how can we know the guest's hardware capabilities?
check_hwcaps(vta->arch_guest, vta->archinfo_guest.hwcaps);
- /* Set up result struct. */
- VexTranslateResult res;
- res.status = VexTransOK;
- res.n_sc_extents = 0;
- res.offs_profInc = -1;
- res.n_guest_instrs = 0;
+ res->status = VexTransOK;
+ res->n_sc_extents = 0;
+ res->offs_profInc = -1;
+ res->n_guest_instrs = 0;
+#ifndef VEXMULTIARCH
/* yet more sanity checks ... */
if (vta->arch_guest == vta->arch_host) {
/* doesn't necessarily have to be true, but if it isn't it means
@@ -834,6 +566,7 @@
/* ditto */
vassert(vta->archinfo_guest.endness == vta->archinfo_host.endness);
}
+#endif
vexAllocSanityCheck();
@@ -842,14 +575,14 @@
" Front end "
"------------------------\n\n");
- VexRegisterUpdates pxControl = vex_control.iropt_register_updates_default;
- vassert(pxControl >= VexRegUpdSpAtMemAccess
- && pxControl <= VexRegUpdAllregsAtEachInsn);
+ *pxControl = vex_control.iropt_register_updates_default;
+ vassert(*pxControl >= VexRegUpdSpAtMemAccess
+ && *pxControl <= VexRegUpdAllregsAtEachInsn);
irsb = bb_to_IR ( vta->guest_extents,
- &res.n_sc_extents,
- &res.n_guest_instrs,
- &pxControl,
+ &res->n_sc_extents,
+ &res->n_guest_instrs,
+ pxControl,
vta->callback_opaque,
disInstrFn,
vta->guest_bytes,
@@ -873,8 +606,7 @@
if (irsb == NULL) {
/* Access failure. */
vexSetAllocModeTEMP_and_clear();
- vex_traceflags = 0;
- res.status = VexTransAccessFail; return res;
+ return NULL;
}
vassert(vta->guest_extents->n_used >= 1 && vta->guest_extents->n_used <= 3);
@@ -884,8 +616,8 @@
}
/* bb_to_IR() could have caused pxControl to change. */
- vassert(pxControl >= VexRegUpdSpAtMemAccess
- && pxControl <= VexRegUpdAllregsAtEachInsn);
+ vassert(*pxControl >= VexRegUpdSpAtMemAccess
+ && *pxControl <= VexRegUpdAllregsAtEachInsn);
/* If debugging, show the raw guest bytes for this bb. */
if (0 || (vex_traceflags & VEX_TRACE_FE)) {
@@ -914,7 +646,7 @@
vexAllocSanityCheck();
/* Clean it up, hopefully a lot. */
- irsb = do_iropt_BB ( irsb, specHelper, preciseMemExnsFn, pxControl,
+ irsb = do_iropt_BB ( irsb, specHelper, preciseMemExnsFn, *pxControl,
vta->guest_bytes_addr,
vta->arch_guest );
@@ -985,6 +717,348 @@
vex_printf("\n");
}
+ return irsb;
+}
+
+
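The libvex_BackEnd function added below insulates itself from the host instruction set by binding a bundle of per-arch function pointers (isMove, getRegUsage, emit, ...) in one switch, after which the rest of the back end only ever calls through the pointers. Stripped of the VEX specifics, the pattern is roughly the following sketch; the arch names and emit functions here are illustrative stand-ins, not real VEX symbols:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for per-arch back-end entry points.  The
   real bundle carries isMove/getRegUsage/mapRegs/genSpill/emit/etc. */
static int emit_ArchA(int insn) { return insn + 0xA000; }
static int emit_ArchB(int insn) { return insn + 0xB000; }

typedef enum { ArchA, ArchB } Arch;

int run_backend(Arch host, int insn)
{
   /* Bind the bundle once, per host arch; everything after the
      switch is host-independent. */
   int (*emit)(int) = NULL;
   switch (host) {
      case ArchA: emit = emit_ArchA; break;
      case ArchB: emit = emit_ArchB; break;
      default:    assert(0);
   }
   return emit(insn);
}
```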
+/* Back end of the compilation pipeline. Is not exported. */
+
+static void libvex_BackEnd ( const VexTranslateArgs *vta,
+ /*MOD*/ VexTranslateResult* res,
+ /*MOD*/ IRSB* irsb,
+ VexRegisterUpdates pxControl )
+{
+   /* This is the bundle of functions we need to do the back-end stuff
+ (insn selection, reg-alloc, assembly) whilst being insulated
+ from the target instruction set. */
+ Bool (*isMove) ( const HInstr*, HReg*, HReg* );
+ void (*getRegUsage) ( HRegUsage*, const HInstr*, Bool );
+ void (*mapRegs) ( HRegRemap*, HInstr*, Bool );
+ void (*genSpill) ( HInstr**, HInstr**, HReg, Int, Bool );
+ void (*genReload) ( HInstr**, HInstr**, HReg, Int, Bool );
+ HInstr* (*directReload) ( HInstr*, HReg, Short );
+ void (*ppInstr) ( const HInstr*, Bool );
+ void (*ppReg) ( HReg );
+ HInstrArray* (*iselSB) ( const IRSB*, VexArch, const VexArchInfo*,
+ const VexAbiInfo*, Int, Int, Bool, Bool,
+ Addr );
+ Int (*emit) ( /*MB_MOD*/Bool*,
+ UChar*, Int, const HInstr*, Bool, VexEndness,
+ const void*, const void*, const void*,
+ const void* );
+ Bool (*preciseMemExnsFn) ( Int, Int, VexRegisterUpdates );
+
+ const RRegUniverse* rRegUniv = NULL;
+
+ Bool mode64, chainingAllowed;
+ Int i, j, k, out_used;
+ Int guest_sizeB;
+ Int offB_HOST_EvC_COUNTER;
+ Int offB_HOST_EvC_FAILADDR;
+ Addr max_ga;
+ UChar insn_bytes[128];
+ HInstrArray* vcode;
+ HInstrArray* rcode;
+
+ isMove = NULL;
+ getRegUsage = NULL;
+ mapRegs = NULL;
+ genSpill = NULL;
+ genReload = NULL;
+ directReload = NULL;
+ ppInstr = NULL;
+ ppReg = NULL;
+ iselSB = NULL;
+ emit = NULL;
+
+ mode64 = False;
+ chainingAllowed = False;
+ guest_sizeB = 0;
+ offB_HOST_EvC_COUNTER = 0;
+ offB_HOST_EvC_FAILADDR = 0;
+ preciseMemExnsFn = NULL;
+
+ vassert(vex_initdone);
+ vassert(vta->disp_cp_xassisted != NULL);
+
+ vex_traceflags = vta->traceflags;
+
+ /* Both the chainers and the indir are either NULL or non-NULL. */
+ if (vta->disp_cp_chain_me_to_slowEP != NULL) {
+ vassert(vta->disp_cp_chain_me_to_fastEP != NULL);
+ vassert(vta->disp_cp_xindir != NULL);
+ chainingAllowed = True;
+ } else {
+ vassert(vta->disp_cp_chain_me_to_fastEP == NULL);
+ vassert(vta->disp_cp_xindir == NULL);
+ }
+
+ switch (vta->arch_guest) {
+
+ case VexArchX86:
+ preciseMemExnsFn
+ = X86FN(guest_x86_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestX86State);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestX86State,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestX86State,host_EvC_FAILADDR);
+ break;
+
+ case VexArchAMD64:
+ preciseMemExnsFn
+ = AMD64FN(guest_amd64_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestAMD64State);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestAMD64State,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestAMD64State,host_EvC_FAILADDR);
+ break;
+
+ case VexArchPPC32:
+ preciseMemExnsFn
+ = PPC32FN(guest_ppc32_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestPPC32State);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestPPC32State,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestPPC32State,host_EvC_FAILADDR);
+ break;
+
+ case VexArchPPC64:
+ preciseMemExnsFn
+ = PPC64FN(guest_ppc64_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestPPC64State);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestPPC64State,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestPPC64State,host_EvC_FAILADDR);
+ break;
+
+ case VexArchS390X:
+ preciseMemExnsFn
+ = S390FN(guest_s390x_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestS390XState);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestS390XState,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestS390XState,host_EvC_FAILADDR);
+ break;
+
+ case VexArchARM:
+ preciseMemExnsFn
+ = ARMFN(guest_arm_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestARMState);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestARMState,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestARMState,host_EvC_FAILADDR);
+ break;
+
+ case VexArchARM64:
+ preciseMemExnsFn
+ = ARM64FN(guest_arm64_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestARM64State);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestARM64State,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestARM64State,host_EvC_FAILADDR);
+ break;
+
+ case VexArchMIPS32:
+ preciseMemExnsFn
+ = MIPS32FN(guest_mips32_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestMIPS32State);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestMIPS32State,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestMIPS32State,host_EvC_FAILADDR);
+ break;
+
+ case VexArchMIPS64:
+ preciseMemExnsFn
+ = MIPS64FN(guest_mips64_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestMIPS64State);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestMIPS64State,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestMIPS64State,host_EvC_FAILADDR);
+ break;
+
+ case VexArchTILEGX:
+ preciseMemExnsFn =
+ TILEGXFN(guest_tilegx_state_requires_precise_mem_exns);
+ guest_sizeB = sizeof(VexGuestTILEGXState);
+ offB_HOST_EvC_COUNTER = offsetof(VexGuestTILEGXState,host_EvC_COUNTER);
+ offB_HOST_EvC_FAILADDR = offsetof(VexGuestTILEGXState,host_EvC_FAILADDR);
+ break;
+
+ default:
+ vpanic("LibVEX_Codegen: unsupported guest insn set");
+ }
+
+
+ switch (vta->arch_host) {
+
+ case VexArchX86:
+ mode64 = False;
+ rRegUniv = X86FN(getRRegUniverse_X86());
+ isMove = CAST_TO_TYPEOF(isMove) X86FN(isMove_X86Instr);
+ getRegUsage
+ = CAST_TO_TYPEOF(getRegUsage) X86FN(getRegUsage_X86Instr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) X86FN(mapRegs_X86Instr);
+ genSpill = CAST_TO_TYPEOF(genSpill) X86FN(genSpill_X86);
+ genReload = CAST_TO_TYPEOF(genReload) X86FN(genReload_X86);
+ directReload = CAST_TO_TYPEOF(directReload) X86FN(directReload_X86);
+ ppInstr = CAST_TO_TYPEOF(ppInstr) X86FN(ppX86Instr);
+ ppReg = CAST_TO_TYPEOF(ppReg) X86FN(ppHRegX86);
+ iselSB = X86FN(iselSB_X86);
+ emit = CAST_TO_TYPEOF(emit) X86FN(emit_X86Instr);
+ vassert(vta->archinfo_host.endness == VexEndnessLE);
+ break;
+
+ case VexArchAMD64:
+ mode64 = True;
+ rRegUniv = AMD64FN(getRRegUniverse_AMD64());
+ isMove = CAST_TO_TYPEOF(isMove) AMD64FN(isMove_AMD64Instr);
+ getRegUsage
+ = CAST_TO_TYPEOF(getRegUsage) AMD64FN(getRegUsage_AMD64Instr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) AMD64FN(mapRegs_AMD64Instr);
+ genSpill = CAST_TO_TYPEOF(genSpill) AMD64FN(genSpill_AMD64);
+ genReload = CAST_TO_TYPEOF(genReload) AMD64FN(genReload_AMD64);
+ directReload = CAST_TO_TYPEOF(directReload) AMD64FN(directReload_AMD64);
+ ppInstr = CAST_TO_TYPEOF(ppInstr) AMD64FN(ppAMD64Instr);
+ ppReg = CAST_TO_TYPEOF(ppReg) AMD64FN(ppHRegAMD64);
+ iselSB = AMD64FN(iselSB_AMD64);
+ emit = CAST_TO_TYPEOF(emit) AMD64FN(emit_AMD64Instr);
+ vassert(vta->archinfo_host.endness == VexEndnessLE);
+ break;
+
+ case VexArchPPC32:
+ mode64 = False;
+ rRegUniv = PPC32FN(getRRegUniverse_PPC(mode64));
+ isMove = CAST_TO_TYPEOF(isMove) PPC32FN(isMove_PPCInstr);
+ getRegUsage
+ = CAST_TO_TYPEOF(getRegUsage) PPC32FN(getRegUsage_PPCInstr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) PPC32FN(mapRegs_PPCInstr);
+ genSpill = CAST_TO_TYPEOF(genSpill) PPC32FN(genSpill_PPC);
+ genReload = CAST_TO_TYPEOF(genReload) PPC32FN(genReload_PPC);
+ ppInstr = CAST_TO_TYPEOF(ppInstr) PPC32FN(ppPPCInstr);
+ ppReg = CAST_TO_TYPEOF(ppReg) PPC32FN(ppHRegPPC);
+ iselSB = PPC32FN(iselSB_PPC);
+ emit = CAST_TO_TYPEOF(emit) PPC32FN(emit_PPCInstr);
+ vassert(vta->archinfo_host.endness == VexEndnessBE);
+ break;
+
+ case VexArchPPC64:
+ mode64 = True;
+ rRegUniv = PPC64FN(getRRegUniverse_PPC(mode64));
+ isMove = CAST_TO_TYPEOF(isMove) PPC64FN(isMove_PPCInstr);
+ getRegUsage
+ = CAST_TO_TYPEOF(getRegUsage) PPC64FN(getRegUsage_PPCInstr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) PPC64FN(mapRegs_PPCInstr);
+ genSpill = CAST_TO_TYPEOF(genSpill) PPC64FN(genSpill_PPC);
+ genReload = CAST_TO_TYPEOF(genReload) PPC64FN(genReload_PPC);
+ ppInstr = CAST_TO_TYPEOF(ppInstr) PPC64FN(ppPPCInstr);
+ ppReg = CAST_TO_TYPEOF(ppReg) PPC64FN(ppHRegPPC);
+ iselSB = PPC64FN(iselSB_PPC);
+ emit = CAST_TO_TYPEOF(emit) PPC64FN(emit_PPCInstr);
+ vassert(vta->archinfo_host.endness == VexEndnessBE ||
+ vta->archinfo_host.endness == VexEndnessLE );
+ break;
+
+ case VexArchS390X:
+ mode64 = True;
+ /* KLUDGE: export hwcaps. */
+ s390_host_hwcaps = vta->archinfo_host.hwcaps;
+ rRegUniv = S390FN(getRRegUniverse_S390());
+ isMove = CAST_TO_TYPEOF(isMove) S390FN(isMove_S390Instr);
+ getRegUsage
+ = CAST_TO_TYPEOF(getRegUsage) S390FN(getRegUsage_S390Instr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) S390FN(mapRegs_S390Instr);
+ genSpill = CAST_TO_TYPEOF(genSpill) S390FN(genSpill_S390);
+ genReload = CAST_TO_TYPEOF(genReload) S390FN(genReload_S390);
+ // fixs390: consider implementing directReload_S390
+ ppInstr = CAST_TO_TYPEOF(ppInstr) S390FN(ppS390Instr);
+ ppReg = CAST_TO_TYPEOF(ppReg) S390FN(ppHRegS390);
+ iselSB = S390FN(iselSB_S390);
+ emit = CAST_TO_TYPEOF(emit) S390FN(emit_S390Instr);
+ vassert(vta->archinfo_host.endness == VexEndnessBE);
+ break;
+
+ case VexArchARM:
+ mode64 = False;
+ rRegUniv = ARMFN(getRRegUniverse_ARM());
+ isMove = CAST_TO_TYPEOF(isMove) ARMFN(isMove_ARMInstr);
+ getRegUsage
+ = CAST_TO_TYPEOF(getRegUsage) ARMFN(getRegUsage_ARMInstr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) ARMFN(mapRegs_ARMInstr);
+ genSpill = CAST_TO_TYPEOF(genSpill) ARMFN(genSpill_ARM);
+ genReload = CAST_TO_TYPEOF(genReload) ARMFN(genReload_ARM);
+ ppInstr = CAST_TO_TYPEOF(ppInstr) ARMFN(ppARMInstr);
+ ppReg = CAST_TO_TYPEOF(ppReg) ARMFN(ppHRegARM);
+ iselSB = ARMFN(iselSB_ARM);
+ emit = CAST_TO_TYPEOF(emit) ARMFN(emit_ARMInstr);
+ vassert(vta->archinfo_host.endness == VexEndnessLE);
+ break;
+
+ case VexArchARM64:
+ mode64 = True;
+ rRegUniv = ARM64FN(getRRegUniverse_ARM64());
+ isMove = CAST_TO_TYPEOF(isMove) ARM64FN(isMove_ARM64Instr);
+ getRegUsage
+ = CAST_TO_TYPEOF(getRegUsage) ARM64FN(getRegUsage_ARM64Instr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) ARM64FN(mapRegs_ARM64Instr);
+ genSpill = CAST_TO_TYPEOF(genSpill) ARM64FN(genSpill_ARM64);
+ genReload = CAST_TO_TYPEOF(genReload) ARM64FN(genReload_ARM64);
+ ppInstr = CAST_TO_TYPEOF(ppInstr) ARM64FN(ppARM64Instr);
+ ppReg = CAST_TO_TYPEOF(ppReg) ARM64FN(ppHRegARM64);
+ iselSB = ARM64FN(iselSB_ARM64);
+ emit = CAST_TO_TYPEOF(emit) ARM64FN(emit_ARM64Instr);
+ vassert(vta->archinfo_host.endness == VexEndnessLE);
+ break;
+
+ case VexArchMIPS32:
+ mode64 = False;
+ rRegUniv = MIPS32FN(getRRegUniverse_MIPS(mode64));
+ isMove = CAST_TO_TYPEOF(isMove) MIPS32FN(isMove_MIPSInstr);
+ getRegUsage
+ = CAST_TO_TYPEOF(getRegUsage) MIPS32FN(getRegUsage_MIPSInstr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) MIPS32FN(mapRegs_MIPSInstr);
+ genSpill = CAST_TO_TYPEOF(genSpill) MIPS32FN(genSpill_MIPS);
+ genReload = CAST_TO_TYPEOF(genReload) MIPS32FN(genReload_MIPS);
+ ppInstr = CAST_TO_TYPEOF(ppInstr) MIPS32FN(ppMIPSInstr);
+ ppReg = CAST_TO_TYPEOF(ppReg) MIPS32FN(ppHRegMIPS);
+ iselSB = MIPS32FN(iselSB_MIPS);
+ emit = CAST_TO_TYPEOF(emit) MIPS32FN(emit_MIPSInstr);
+ vassert(vta->archinfo_host.endness == VexEndnessLE
+ || vta->archinfo_host.endness == VexEndnessBE);
+ break;
+
+ case VexArchMIPS64:
+ mode64 = True;
+ rRegUniv = MIPS64FN(getRRegUniverse_MIPS(mode64));
+ isMove = CAST_TO_TYPEOF(isMove) MIPS64FN(isMove_MIPSInstr);
+ getRegUsage
+ = CAST_TO_TYPEOF(getRegUsage) MIPS64FN(getRegUsage_MIPSInstr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) MIPS64FN(mapRegs_MIPSInstr);
+ genSpill = CAST_TO_TYPEOF(genSpill) MIPS64FN(genSpill_MIPS);
+ genReload = CAST_TO_TYPEOF(genReload) MIPS64FN(genReload_MIPS);
+ ppInstr = CAST_TO_TYPEOF(ppInstr) MIPS64FN(ppMIPSInstr);
+ ppReg = CAST_TO_TYPEOF(ppReg) MIPS64FN(ppHRegMIPS);
+ iselSB = MIPS64FN(iselSB_MIPS);
+ emit = CAST_TO_TYPEOF(emit) MIPS64FN(emit_MIPSInstr);
+ vassert(vta->archinfo_host.endness == VexEndnessLE
+ || vta->archinfo_host.endness == VexEndnessBE);
+ break;
+
+ case VexArchTILEGX:
+ mode64 = True;
+ rRegUniv = TILEGXFN(getRRegUniverse_TILEGX());
+ isMove = CAST_TO_TYPEOF(isMove) TILEGXFN(isMove_TILEGXInstr);
+ getRegUsage =
+ CAST_TO_TYPEOF(getRegUsage) TILEGXFN(getRegUsage_TILEGXInstr);
+ mapRegs = CAST_TO_TYPEOF(mapRegs) TILEGXFN(mapRegs_TILEGXInstr);
+ genSpill = CAST_TO_TYPEOF(genSpill) TILEGXFN(genSpill_TILEGX);
+ genReload = CAST_TO_TYPEOF(genReload) TILEGXFN(genReload_TILEGX);
+ ppInstr = CAST_TO_TYPEOF(ppInstr) TILEGXFN(ppTILEGXInstr);
+ ppReg = CAST_TO_TYPEOF(ppReg) TILEGXFN(ppHRegTILEGX);
+ iselSB = TILEGXFN(iselSB_TILEGX);
+ emit = CAST_TO_TYPEOF(emit) TILEGXFN(emit_TILEGXInstr);
+ vassert(vta->archinfo_host.endness == VexEndnessLE);
+ break;
+
+ default:
+ vpanic("LibVEX_Translate: unsupported host insn set");
+ }
+
+   // Check that the host's hardware capabilities are feasible. The
+   // function does not return if the hwcaps are infeasible in some sense.
+ check_hwcaps(vta->arch_host, vta->archinfo_host.hwcaps);
+
+
/* Turn it into virtual-registerised code. Build trees -- this
also throws away any dead bindings. */
max_ga = ado_treebuild_BB( irsb, preciseMemExnsFn, pxControl );
@@ -1006,7 +1080,7 @@
/* HACK */
if (0) {
*(vta->host_bytes_used) = 0;
- res.status = VexTransOK; return res;
+ res->status = VexTransOK; return;
}
/* end HACK */
@@ -1067,7 +1141,7 @@
/* HACK */
if (0) {
*(vta->host_bytes_used) = 0;
- res.status = VexTransOK; return res;
+ res->status = VexTransOK; return;
}
/* end HACK */
@@ -1101,14 +1175,14 @@
if (UNLIKELY(out_used + j > vta->host_bytes_size)) {
vexSetAllocModeTEMP_and_clear();
vex_traceflags = 0;
- res.status = VexTransOutputFull;
- return res;
+ res->status = VexTransOutputFull;
+ return;
}
if (UNLIKELY(hi_isProfInc)) {
vassert(vta->addProfInc); /* else where did it come from? */
- vassert(res.offs_profInc == -1); /* there can be only one (tm) */
+ vassert(res->offs_profInc == -1); /* there can be only one (tm) */
vassert(out_used >= 0);
- res.offs_profInc = out_used;
+ res->offs_profInc = out_used;
}
{ UChar* dst = &vta->host_bytes[out_used];
for (k = 0; k < j; k++) {
@@ -1134,7 +1208,20 @@
}
vex_traceflags = 0;
- res.status = VexTransOK;
+ res->status = VexTransOK;
+ return;
+}
+
+
+/* Exported to library client. */
+
+VexTranslateResult LibVEX_Translate ( /*MOD*/ VexTranslateArgs* vta )
+{
+ VexTranslateResult res = { 0 };
+ VexRegisterUpdates pxControl = VexRegUpd_INVALID;
+
+ IRSB* irsb = LibVEX_FrontEnd(vta, &res, &pxControl);
+ libvex_BackEnd(vta, &res, irsb, pxControl);
return res;
}
@@ -1470,6 +1557,29 @@
}
+static IRType arch_word_size (VexArch arch) {
+ switch (arch) {
+ case VexArchX86:
+ case VexArchARM:
+ case VexArchMIPS32:
+ case VexArchPPC32:
+ return Ity_I32;
+
+ case VexArchAMD64:
+ case VexArchARM64:
+ case VexArchMIPS64:
+ case VexArchPPC64:
+ case VexArchS390X:
+ case VexArchTILEGX:
+ return Ity_I64;
+
+ default:
+ vex_printf("Fatal: unknown arch in arch_word_size\n");
+ vassert(0);
+ }
+}
+
+
/* Convenience macro to be used in show_hwcaps_ARCH functions */
#define NUM_HWCAPS (sizeof hwcaps_list / sizeof hwcaps_list[0])
Modified: trunk/priv/main_util.h
==============================================================================
--- trunk/priv/main_util.h (original)
+++ trunk/priv/main_util.h Mon Apr 3 14:24:05 2017
@@ -43,8 +43,15 @@
#define NULL ((void*)0)
-#define LIKELY(x) __builtin_expect(!!(x), 1)
-#define UNLIKELY(x) __builtin_expect(!!(x), 0)
+#if defined(_MSC_VER) // building with MSVC
+# define LIKELY(x) (x)
+# define UNLIKELY(x) (x)
+# define CAST_TO_TYPEOF(x) /**/
+#else
+# define LIKELY(x) __builtin_expect(!!(x), 1)
+# define UNLIKELY(x) __builtin_expect(!!(x), 0)
+# define CAST_TO_TYPEOF(x) (__typeof__(x))
+#endif // defined(_MSC_VER)
#if !defined(offsetof)
# define offsetof(type,memb) ((SizeT)(HWord)&((type*)0)->memb)
Modified: trunk/pub/libvex.h
==============================================================================
--- trunk/pub/libvex.h (original)
+++ trunk/pub/libvex.h Mon Apr 3 14:24:05 2017
@@ -776,8 +776,17 @@
VexTranslateArgs;
+/* Runs the entire compilation pipeline. */
extern
-VexTranslateResult LibVEX_Translate ( VexTranslateArgs* );
+VexTranslateResult LibVEX_Translate ( /*MOD*/ VexTranslateArgs* );
+
+/* Runs the first half of the compilation pipeline: lifts guest code to IR,
+ optimises, instruments and optimises it some more. */
+extern
+IRSB* LibVEX_FrontEnd ( /*MOD*/ VexTranslateArgs*,
+ /*OUT*/ VexTranslateResult* res,
+ /*OUT*/ VexRegisterUpdates* pxControl );
+
/* A subtlety re interaction between self-checking translations and
bb-chasing. The supplied chase_into_ok function should say NO
|
|
From: <sv...@va...> - 2017-04-03 12:27:11
|
Author: iraisr
Date: Mon Apr 3 13:27:00 2017
New Revision: 16292
Log:
Follow up to SVN r16291.
Fix compilation warnings in coregrind/m_syswrap/syswrap-x86-solaris.c.
Modified:
trunk/coregrind/m_syswrap/syswrap-x86-solaris.c
Modified: trunk/coregrind/m_syswrap/syswrap-x86-solaris.c
==============================================================================
--- trunk/coregrind/m_syswrap/syswrap-x86-solaris.c (original)
+++ trunk/coregrind/m_syswrap/syswrap-x86-solaris.c Mon Apr 3 13:27:00 2017
@@ -576,7 +576,7 @@
{
if (!vex->guest_GDT)
return;
- VG_(free)((void*)vex->guest_GDT);
+ VG_(free)((void *) (HWord) vex->guest_GDT);
vex->guest_GDT = 0;
}
@@ -586,7 +586,8 @@
{
ThreadState *tst = VG_(get_ThreadState)(tid);
Addr base = tst->os_state.thrptr;
- VexGuestX86SegDescr *gdt = (VexGuestX86SegDescr*)tst->arch.vex.guest_GDT;
+ VexGuestX86SegDescr *gdt
+ = (VexGuestX86SegDescr *) (HWord) tst->arch.vex.guest_GDT;
VexGuestX86SegDescr desc;
vg_assert(gdt);
|
|
From: <sv...@va...> - 2017-04-03 10:20:19
|
Author: sewardj
Date: Mon Apr 3 11:20:11 2017
New Revision: 16291
Log:
Fix compilation warnings about pointer size conversions following vex r3340
(x86 guest: switch descriptor table registers to ULong type).
Modified:
trunk/coregrind/m_syswrap/syswrap-x86-linux.c
Modified: trunk/coregrind/m_syswrap/syswrap-x86-linux.c
==============================================================================
--- trunk/coregrind/m_syswrap/syswrap-x86-linux.c (original)
+++ trunk/coregrind/m_syswrap/syswrap-x86-linux.c Mon Apr 3 11:20:11 2017
@@ -370,16 +370,16 @@
if (0)
VG_(printf)("deallocate_LGDTs_for_thread: "
- "ldt = 0x%lx, gdt = 0x%lx\n",
+ "ldt = 0x%llx, gdt = 0x%llx\n",
vex->guest_LDT, vex->guest_GDT );
if (vex->guest_LDT != (HWord)NULL) {
- free_LDT_or_GDT( (VexGuestX86SegDescr*)vex->guest_LDT );
+ free_LDT_or_GDT( (VexGuestX86SegDescr*)(HWord)vex->guest_LDT );
vex->guest_LDT = (HWord)NULL;
}
if (vex->guest_GDT != (HWord)NULL) {
- free_LDT_or_GDT( (VexGuestX86SegDescr*)vex->guest_GDT );
+ free_LDT_or_GDT( (VexGuestX86SegDescr*)(HWord)vex->guest_GDT );
vex->guest_GDT = (HWord)NULL;
}
}
@@ -412,7 +412,7 @@
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
vg_assert(8 == sizeof(VexGuestX86SegDescr));
- ldt = (UChar*)(VG_(threads)[tid].arch.vex.guest_LDT);
+ ldt = (UChar*)(HWord)(VG_(threads)[tid].arch.vex.guest_LDT);
res = VG_(mk_SysRes_Success)( 0 );
if (ldt == NULL)
/* LDT not allocated, meaning all entries are null */
@@ -446,7 +446,7 @@
vg_assert(8 == sizeof(VexGuestX86SegDescr));
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
- ldt = (VexGuestX86SegDescr*)VG_(threads)[tid].arch.vex.guest_LDT;
+ ldt = (VexGuestX86SegDescr*)(HWord)VG_(threads)[tid].arch.vex.guest_LDT;
ldt_info = (vki_modify_ldt_t*)ptr;
res = VG_(mk_SysRes_Error)( VKI_EINVAL );
@@ -527,7 +527,7 @@
return VG_(mk_SysRes_Error)( VKI_EFAULT );
}
- gdt = (VexGuestX86SegDescr*)VG_(threads)[tid].arch.vex.guest_GDT;
+ gdt = (VexGuestX86SegDescr*)(HWord)VG_(threads)[tid].arch.vex.guest_GDT;
/* If the thread doesn't have a GDT, allocate it now. */
if (!gdt) {
@@ -586,7 +586,7 @@
if (idx < 0 || idx >= VEX_GUEST_X86_GDT_NENT)
return VG_(mk_SysRes_Error)( VKI_EINVAL );
- gdt = (VexGuestX86SegDescr*)VG_(threads)[tid].arch.vex.guest_GDT;
+ gdt = (VexGuestX86SegDescr*)(HWord)VG_(threads)[tid].arch.vex.guest_GDT;
/* If the thread doesn't have a GDT, allocate it now. */
if (!gdt) {
@@ -632,8 +632,8 @@
} else {
/* No luck .. we have to take a copy of the parent's. */
child->vex.guest_LDT = (HWord)alloc_zeroed_x86_LDT();
- copy_LDT_from_to( (VexGuestX86SegDescr*)parent->vex.guest_LDT,
- (VexGuestX86SegDescr*)child->vex.guest_LDT );
+ copy_LDT_from_to( (VexGuestX86SegDescr*)(HWord)parent->vex.guest_LDT,
+ (VexGuestX86SegDescr*)(HWord)child->vex.guest_LDT );
}
/* Either we start with an empty GDT (the usual case) or inherit a
@@ -643,8 +643,8 @@
if (parent->vex.guest_GDT != (HWord)NULL) {
child->vex.guest_GDT = (HWord)alloc_system_x86_GDT();
- copy_GDT_from_to( (VexGuestX86SegDescr*)parent->vex.guest_GDT,
- (VexGuestX86SegDescr*)child->vex.guest_GDT );
+ copy_GDT_from_to( (VexGuestX86SegDescr*)(HWord)parent->vex.guest_GDT,
+ (VexGuestX86SegDescr*)(HWord)child->vex.guest_GDT );
}
}
|
|
From: <sv...@va...> - 2017-04-03 10:19:21
|
Author: sewardj
Date: Mon Apr 3 11:19:13 2017
New Revision: 3340
Log:
x86 guest: switch descriptor table registers to ULong type so they will take up
a consistent amount of space (VEX side). Andrew Dutcher <and...@gm...>.
Modified:
trunk/priv/guest_x86_helpers.c
trunk/pub/libvex_guest_x86.h
Modified: trunk/priv/guest_x86_helpers.c
==============================================================================
--- trunk/priv/guest_x86_helpers.c (original)
+++ trunk/priv/guest_x86_helpers.c Mon Apr 3 11:19:13 2017
@@ -2879,6 +2879,8 @@
vex_state->guest_IP_AT_SYSCALL = 0;
vex_state->padding1 = 0;
+ vex_state->padding2 = 0;
+ vex_state->padding3 = 0;
}
Modified: trunk/pub/libvex_guest_x86.h
==============================================================================
--- trunk/pub/libvex_guest_x86.h (original)
+++ trunk/pub/libvex_guest_x86.h Mon Apr 3 11:19:13 2017
@@ -194,8 +194,8 @@
UShort guest_GS;
UShort guest_SS;
/* LDT/GDT stuff. */
- HWord guest_LDT; /* host addr, a VexGuestX86SegDescr* */
- HWord guest_GDT; /* host addr, a VexGuestX86SegDescr* */
+ ULong guest_LDT; /* host addr, a VexGuestX86SegDescr* */
+ ULong guest_GDT; /* host addr, a VexGuestX86SegDescr* */
/* Emulation notes */
UInt guest_EMNOTE;
@@ -223,6 +223,8 @@
/* Padding to make it have an 16-aligned size */
UInt padding1;
+ UInt padding2;
+ UInt padding3;
}
VexGuestX86State;
|
|
From: zboom <net...@fo...> - 2017-04-01 07:02:36
|
Valgrind can now run on my Cavium Octeon3 board! It turns out my Yocto build environment was caching the 3.12.0 build files yesterday. Today I used the latest SVN code and deleted the cache, and finally it works. Thanks a lot! -- View this message in context: http://valgrind.10908.n7.nabble.com/Why-valgrind-could-not-be-used-on-cavium-octeon3-tp57531p57555.html Sent from the Valgrind - Dev mailing list archive at Nabble.com. |