From: Philippe W. <phi...@sk...> - 2015-03-31 23:24:33
On Mon, 2015-03-30 at 23:13 +0200, Philippe Waroquiers wrote:
> An advantage of option 2.5 is that main_main.c will always be compiled
> in both VEXMULTIARCH and single arch mode, to help catch an unlikely
> compilation bug in VEXMULTIARCH not detected otherwise.
>
> A user that needs multi arch can then have it just by giving 2 libs
> to link (in the right order):
>   libvexmultiarch-x86-linux.a libvex-x86-linux.a
> where the first lib only contains main_main.c compiled with VEXMULTIARCH
> defined. The 2nd lib contains main_main.c compiled in single arch mode,
> plus all the other VEX object files, and is the one to use by default.

Option '2.5' committed as vex r3113. So, the VEX user can choose at link
time whether the single-arch or multi-arch VEX is linked in. Example:

# this produces t1single (calling LibVEX_Translate), that is single arch:
gcc -o t1single t1.o -LInst/lib/valgrind -lvex-x86-linux -lgcc
# this produces t1multi, that is multi arch:
gcc -o t1multi t1.o -LInst/lib/valgrind -lvexmultiarch-x86-linux -lvex-x86-linux -lgcc

Philippe
From: <sv...@va...> - 2015-03-31 23:02:07
Author: philippe
Date: Wed Apr 1 00:01:57 2015
New Revision: 3113
Log:
This patch reduces the size of all tools by about 2MB of text
(depending on the arch).
This has as advantages:
1. somewhat faster build/link time (very probably negligible)
2. somewhat faster tool startup (probably negligible for most users,
but regression tests are helped by this)
3. a gain in memory of about 10MB
The Valgrind tools assume that host and guest are the same, so there
is no need to drag in the full set of archs when linking a tool.
The VEX library is nicely split into arch-independent and arch-dependent
objects. Only main_main.c drags in the various arch-specific files.
So, main_main.c (the main entry point of the VEX library) is compiled
only for the current guest/host arch.
The disadvantage of the above is that the VEX lib can no longer be used
with host and guest different, even though VEX itself supports that
(i.e. it does not assume that host and guest are the same).
So, to still allow a VEX user to use the VEX lib in a multi-arch setup,
main_main.c is compiled twice:
1. in 'single arch mode', going into libvex-<arch>-<os>
2. in 'multi arch mode', going into a new lib libvexmultiarch-<arch>-<os>
A VEX user can choose at link time to link with the main_main
that is multi-arch, by linking with both libs (the multi-arch one
being first).
Here is a small (rubbish, crashing) standalone usage of the VEX lib,
first linked single-arch, then multi-arch:
// file t1.c
#include <stdio.h>
#include <libvex.h>
void main()
{
(void)LibVEX_Translate(NULL);
}
$ gcc -I Inst/include/valgrind -c -g t1.c
$ gcc -o t1 t1.o -LInst/lib/valgrind -lvex-x86-linux -lgcc
$ gcc -o t1multi t1.o -LInst/lib/valgrind -lvexmultiarch-x86-linux -lvex-x86-linux -lgcc
$ size t1 t1multi
text data bss dec hex filename
519393 556 5012188 5532137 5469e9 t1
2295717 1740 5015144 7312601 6f94d9 t1multi
In a following commit, some regtests will be added to validate that the
two libs work properly (and that no arch-specific symbol is missing when
linking multi-arch).
Added:
trunk/priv/multiarch_main_main.c
Modified:
trunk/priv/host_s390_defs.c
trunk/priv/main_main.c
Modified: trunk/priv/host_s390_defs.c
==============================================================================
--- trunk/priv/host_s390_defs.c (original)
+++ trunk/priv/host_s390_defs.c Wed Apr 1 00:01:57 2015
@@ -44,14 +44,6 @@
#include "guest_s390_defs.h" /* S390X_GUEST_OFFSET */
#include <stdarg.h>
-/* KLUDGE: We need to know the hwcaps of the host when generating
- code. But that info is not passed to emit_S390Instr. Only mode64 is
- being passed. So, ideally, we want this passed as an argument, too.
- Until then, we use a global variable. This variable is set as a side
- effect of LibVEX_Translate. */
-UInt s390_host_hwcaps;
-
-
/*------------------------------------------------------------*/
/*--- Forward declarations ---*/
/*------------------------------------------------------------*/
Modified: trunk/priv/main_main.c
==============================================================================
--- trunk/priv/main_main.c (original)
+++ trunk/priv/main_main.c Wed Apr 1 00:01:57 2015
@@ -70,6 +70,90 @@
#include "host_generic_simd128.h"
+/* For each architecture <arch>, we define 2 macros:
+ <arch>FN that has as argument a pointer (typically to a function
+ or the return value of a function).
+ <arch>ST that has as argument a statement.
+ If main_main.c is compiled for <arch>, then these macros just expand
+ their arg.
+ Otherwise, the macros expand to respectively NULL and vassert(0).
+ These macros are used to avoid introducing dependencies to object
+ files not needed for the (only) architecture we are compiling for.
+
+ To still compile the below for all supported architectures, define
+ VEXMULTIARCH. This is used by the file multiarch_main_main.c */
+
+#if defined(VGA_x86) || defined(VEXMULTIARCH)
+#define X86FN(f) f
+#define X86ST(f) f
+#else
+#define X86FN(f) NULL
+#define X86ST(f) vassert(0)
+#endif
+
+#if defined(VGA_amd64) || defined(VEXMULTIARCH)
+#define AMD64FN(f) f
+#define AMD64ST(f) f
+#else
+#define AMD64FN(f) NULL
+#define AMD64ST(f) vassert(0)
+#endif
+
+#if defined(VGA_ppc32) || defined(VEXMULTIARCH)
+#define PPC32FN(f) f
+#define PPC32ST(f) f
+#else
+#define PPC32FN(f) NULL
+#define PPC32ST(f) vassert(0)
+#endif
+
+#if defined(VGA_ppc64be) || defined(VGA_ppc64le) || defined(VEXMULTIARCH)
+#define PPC64FN(f) f
+#define PPC64ST(f) f
+#else
+#define PPC64FN(f) NULL
+#define PPC64ST(f) vassert(0)
+#endif
+
+#if defined(VGA_s390x) || defined(VEXMULTIARCH)
+#define S390FN(f) f
+#define S390ST(f) f
+#else
+#define S390FN(f) NULL
+#define S390ST(f) vassert(0)
+#endif
+
+#if defined(VGA_arm) || defined(VEXMULTIARCH)
+#define ARMFN(f) f
+#define ARMST(f) f
+#else
+#define ARMFN(f) NULL
+#define ARMST(f) vassert(0)
+#endif
+
+#if defined(VGA_arm64) || defined(VEXMULTIARCH)
+#define ARM64FN(f) f
+#define ARM64ST(f) f
+#else
+#define ARM64FN(f) NULL
+#define ARM64ST(f) vassert(0)
+#endif
+
+#if defined(VGA_mips32) || defined(VEXMULTIARCH)
+#define MIPS32FN(f) f
+#define MIPS32ST(f) f
+#else
+#define MIPS32FN(f) NULL
+#define MIPS32ST(f) vassert(0)
+#endif
+
+#if defined(VGA_mips64) || defined(VEXMULTIARCH)
+#define MIPS64FN(f) f
+#define MIPS64ST(f) f
+#else
+#define MIPS64FN(f) NULL
+#define MIPS64ST(f) vassert(0)
+#endif
/* This file contains the top level interface to the library. */
@@ -210,6 +294,15 @@
/* --------- Make a translation. --------- */
+/* KLUDGE: S390 need to know the hwcaps of the host when generating
+ code. But that info is not passed to emit_S390Instr. Only mode64 is
+ being passed. So, ideally, we want this passed as an argument, too.
+ Until then, we use a global variable. This variable is set as a side
+ effect of LibVEX_Translate. The variable is defined here rather than
+ in host_s390_defs.c to avoid having main_main.c dragging S390
+ object files in non VEXMULTIARCH. */
+UInt s390_host_hwcaps;
+
/* Exported to library client. */
@@ -303,65 +396,69 @@
case VexArchX86:
mode64 = False;
- rRegUniv = getRRegUniverse_X86();
- isMove = (__typeof__(isMove)) isMove_X86Instr;
- getRegUsage = (__typeof__(getRegUsage)) getRegUsage_X86Instr;
- mapRegs = (__typeof__(mapRegs)) mapRegs_X86Instr;
- genSpill = (__typeof__(genSpill)) genSpill_X86;
- genReload = (__typeof__(genReload)) genReload_X86;
- directReload = (__typeof__(directReload)) directReload_X86;
- ppInstr = (__typeof__(ppInstr)) ppX86Instr;
- ppReg = (__typeof__(ppReg)) ppHRegX86;
- iselSB = iselSB_X86;
- emit = (__typeof__(emit)) emit_X86Instr;
+ rRegUniv = X86FN(getRRegUniverse_X86());
+ isMove = (__typeof__(isMove)) X86FN(isMove_X86Instr);
+ getRegUsage
+ = (__typeof__(getRegUsage)) X86FN(getRegUsage_X86Instr);
+ mapRegs = (__typeof__(mapRegs)) X86FN(mapRegs_X86Instr);
+ genSpill = (__typeof__(genSpill)) X86FN(genSpill_X86);
+ genReload = (__typeof__(genReload)) X86FN(genReload_X86);
+ directReload = (__typeof__(directReload)) X86FN(directReload_X86);
+ ppInstr = (__typeof__(ppInstr)) X86FN(ppX86Instr);
+ ppReg = (__typeof__(ppReg)) X86FN(ppHRegX86);
+ iselSB = X86FN(iselSB_X86);
+ emit = (__typeof__(emit)) X86FN(emit_X86Instr);
host_word_type = Ity_I32;
vassert(vta->archinfo_host.endness == VexEndnessLE);
break;
case VexArchAMD64:
mode64 = True;
- rRegUniv = getRRegUniverse_AMD64();
- isMove = (__typeof__(isMove)) isMove_AMD64Instr;
- getRegUsage = (__typeof__(getRegUsage)) getRegUsage_AMD64Instr;
- mapRegs = (__typeof__(mapRegs)) mapRegs_AMD64Instr;
- genSpill = (__typeof__(genSpill)) genSpill_AMD64;
- genReload = (__typeof__(genReload)) genReload_AMD64;
- ppInstr = (__typeof__(ppInstr)) ppAMD64Instr;
- ppReg = (__typeof__(ppReg)) ppHRegAMD64;
- iselSB = iselSB_AMD64;
- emit = (__typeof__(emit)) emit_AMD64Instr;
+ rRegUniv = AMD64FN(getRRegUniverse_AMD64());
+ isMove = (__typeof__(isMove)) AMD64FN(isMove_AMD64Instr);
+ getRegUsage
+ = (__typeof__(getRegUsage)) AMD64FN(getRegUsage_AMD64Instr);
+ mapRegs = (__typeof__(mapRegs)) AMD64FN(mapRegs_AMD64Instr);
+ genSpill = (__typeof__(genSpill)) AMD64FN(genSpill_AMD64);
+ genReload = (__typeof__(genReload)) AMD64FN(genReload_AMD64);
+ ppInstr = (__typeof__(ppInstr)) AMD64FN(ppAMD64Instr);
+ ppReg = (__typeof__(ppReg)) AMD64FN(ppHRegAMD64);
+ iselSB = AMD64FN(iselSB_AMD64);
+ emit = (__typeof__(emit)) AMD64FN(emit_AMD64Instr);
host_word_type = Ity_I64;
vassert(vta->archinfo_host.endness == VexEndnessLE);
break;
case VexArchPPC32:
mode64 = False;
- rRegUniv = getRRegUniverse_PPC(mode64);
- isMove = (__typeof__(isMove)) isMove_PPCInstr;
- getRegUsage = (__typeof__(getRegUsage)) getRegUsage_PPCInstr;
- mapRegs = (__typeof__(mapRegs)) mapRegs_PPCInstr;
- genSpill = (__typeof__(genSpill)) genSpill_PPC;
- genReload = (__typeof__(genReload)) genReload_PPC;
- ppInstr = (__typeof__(ppInstr)) ppPPCInstr;
- ppReg = (__typeof__(ppReg)) ppHRegPPC;
- iselSB = iselSB_PPC;
- emit = (__typeof__(emit)) emit_PPCInstr;
+ rRegUniv = PPC32FN(getRRegUniverse_PPC(mode64));
+ isMove = (__typeof__(isMove)) PPC32FN(isMove_PPCInstr);
+ getRegUsage
+ = (__typeof__(getRegUsage)) PPC32FN(getRegUsage_PPCInstr);
+ mapRegs = (__typeof__(mapRegs)) PPC32FN(mapRegs_PPCInstr);
+ genSpill = (__typeof__(genSpill)) PPC32FN(genSpill_PPC);
+ genReload = (__typeof__(genReload)) PPC32FN(genReload_PPC);
+ ppInstr = (__typeof__(ppInstr)) PPC32FN(ppPPCInstr);
+ ppReg = (__typeof__(ppReg)) PPC32FN(ppHRegPPC);
+ iselSB = PPC32FN(iselSB_PPC);
+ emit = (__typeof__(emit)) PPC32FN(emit_PPCInstr);
host_word_type = Ity_I32;
vassert(vta->archinfo_host.endness == VexEndnessBE);
break;
case VexArchPPC64:
mode64 = True;
- rRegUniv = getRRegUniverse_PPC(mode64);
- isMove = (__typeof__(isMove)) isMove_PPCInstr;
- getRegUsage = (__typeof__(getRegUsage)) getRegUsage_PPCInstr;
- mapRegs = (__typeof__(mapRegs)) mapRegs_PPCInstr;
- genSpill = (__typeof__(genSpill)) genSpill_PPC;
- genReload = (__typeof__(genReload)) genReload_PPC;
- ppInstr = (__typeof__(ppInstr)) ppPPCInstr;
- ppReg = (__typeof__(ppReg)) ppHRegPPC;
- iselSB = iselSB_PPC;
- emit = (__typeof__(emit)) emit_PPCInstr;
+ rRegUniv = PPC64FN(getRRegUniverse_PPC(mode64));
+ isMove = (__typeof__(isMove)) PPC64FN(isMove_PPCInstr);
+ getRegUsage
+ = (__typeof__(getRegUsage)) PPC64FN(getRegUsage_PPCInstr);
+ mapRegs = (__typeof__(mapRegs)) PPC64FN(mapRegs_PPCInstr);
+ genSpill = (__typeof__(genSpill)) PPC64FN(genSpill_PPC);
+ genReload = (__typeof__(genReload)) PPC64FN(genReload_PPC);
+ ppInstr = (__typeof__(ppInstr)) PPC64FN(ppPPCInstr);
+ ppReg = (__typeof__(ppReg)) PPC64FN(ppHRegPPC);
+ iselSB = PPC64FN(iselSB_PPC);
+ emit = (__typeof__(emit)) PPC64FN(emit_PPCInstr);
host_word_type = Ity_I64;
vassert(vta->archinfo_host.endness == VexEndnessBE ||
vta->archinfo_host.endness == VexEndnessLE );
@@ -371,65 +468,69 @@
mode64 = True;
/* KLUDGE: export hwcaps. */
s390_host_hwcaps = vta->archinfo_host.hwcaps;
- rRegUniv = getRRegUniverse_S390();
- isMove = (__typeof__(isMove)) isMove_S390Instr;
- getRegUsage = (__typeof__(getRegUsage)) getRegUsage_S390Instr;
- mapRegs = (__typeof__(mapRegs)) mapRegs_S390Instr;
- genSpill = (__typeof__(genSpill)) genSpill_S390;
- genReload = (__typeof__(genReload)) genReload_S390;
+ rRegUniv = S390FN(getRRegUniverse_S390());
+ isMove = (__typeof__(isMove)) S390FN(isMove_S390Instr);
+ getRegUsage
+ = (__typeof__(getRegUsage)) S390FN(getRegUsage_S390Instr);
+ mapRegs = (__typeof__(mapRegs)) S390FN(mapRegs_S390Instr);
+ genSpill = (__typeof__(genSpill)) S390FN(genSpill_S390);
+ genReload = (__typeof__(genReload)) S390FN(genReload_S390);
// fixs390: consider implementing directReload_S390
- ppInstr = (__typeof__(ppInstr)) ppS390Instr;
- ppReg = (__typeof__(ppReg)) ppHRegS390;
- iselSB = iselSB_S390;
- emit = (__typeof__(emit)) emit_S390Instr;
+ ppInstr = (__typeof__(ppInstr)) S390FN(ppS390Instr);
+ ppReg = (__typeof__(ppReg)) S390FN(ppHRegS390);
+ iselSB = S390FN(iselSB_S390);
+ emit = (__typeof__(emit)) S390FN(emit_S390Instr);
host_word_type = Ity_I64;
vassert(vta->archinfo_host.endness == VexEndnessBE);
break;
case VexArchARM:
mode64 = False;
- rRegUniv = getRRegUniverse_ARM();
- isMove = (__typeof__(isMove)) isMove_ARMInstr;
- getRegUsage = (__typeof__(getRegUsage)) getRegUsage_ARMInstr;
- mapRegs = (__typeof__(mapRegs)) mapRegs_ARMInstr;
- genSpill = (__typeof__(genSpill)) genSpill_ARM;
- genReload = (__typeof__(genReload)) genReload_ARM;
- ppInstr = (__typeof__(ppInstr)) ppARMInstr;
- ppReg = (__typeof__(ppReg)) ppHRegARM;
- iselSB = iselSB_ARM;
- emit = (__typeof__(emit)) emit_ARMInstr;
+ rRegUniv = ARMFN(getRRegUniverse_ARM());
+ isMove = (__typeof__(isMove)) ARMFN(isMove_ARMInstr);
+ getRegUsage
+ = (__typeof__(getRegUsage)) ARMFN(getRegUsage_ARMInstr);
+ mapRegs = (__typeof__(mapRegs)) ARMFN(mapRegs_ARMInstr);
+ genSpill = (__typeof__(genSpill)) ARMFN(genSpill_ARM);
+ genReload = (__typeof__(genReload)) ARMFN(genReload_ARM);
+ ppInstr = (__typeof__(ppInstr)) ARMFN(ppARMInstr);
+ ppReg = (__typeof__(ppReg)) ARMFN(ppHRegARM);
+ iselSB = ARMFN(iselSB_ARM);
+ emit = (__typeof__(emit)) ARMFN(emit_ARMInstr);
host_word_type = Ity_I32;
vassert(vta->archinfo_host.endness == VexEndnessLE);
break;
case VexArchARM64:
mode64 = True;
- rRegUniv = getRRegUniverse_ARM64();
- isMove = (__typeof__(isMove)) isMove_ARM64Instr;
- getRegUsage = (__typeof__(getRegUsage)) getRegUsage_ARM64Instr;
- mapRegs = (__typeof__(mapRegs)) mapRegs_ARM64Instr;
- genSpill = (__typeof__(genSpill)) genSpill_ARM64;
- genReload = (__typeof__(genReload)) genReload_ARM64;
- ppInstr = (__typeof__(ppInstr)) ppARM64Instr;
- ppReg = (__typeof__(ppReg)) ppHRegARM64;
- iselSB = iselSB_ARM64;
- emit = (__typeof__(emit)) emit_ARM64Instr;
+ rRegUniv = ARM64FN(getRRegUniverse_ARM64());
+ isMove = (__typeof__(isMove)) ARM64FN(isMove_ARM64Instr);
+ getRegUsage
+ = (__typeof__(getRegUsage)) ARM64FN(getRegUsage_ARM64Instr);
+ mapRegs = (__typeof__(mapRegs)) ARM64FN(mapRegs_ARM64Instr);
+ genSpill = (__typeof__(genSpill)) ARM64FN(genSpill_ARM64);
+ genReload = (__typeof__(genReload)) ARM64FN(genReload_ARM64);
+ ppInstr = (__typeof__(ppInstr)) ARM64FN(ppARM64Instr);
+ ppReg = (__typeof__(ppReg)) ARM64FN(ppHRegARM64);
+ iselSB = ARM64FN(iselSB_ARM64);
+ emit = (__typeof__(emit)) ARM64FN(emit_ARM64Instr);
host_word_type = Ity_I64;
vassert(vta->archinfo_host.endness == VexEndnessLE);
break;
case VexArchMIPS32:
mode64 = False;
- rRegUniv = getRRegUniverse_MIPS(mode64);
- isMove = (__typeof__(isMove)) isMove_MIPSInstr;
- getRegUsage = (__typeof__(getRegUsage)) getRegUsage_MIPSInstr;
- mapRegs = (__typeof__(mapRegs)) mapRegs_MIPSInstr;
- genSpill = (__typeof__(genSpill)) genSpill_MIPS;
- genReload = (__typeof__(genReload)) genReload_MIPS;
- ppInstr = (__typeof__(ppInstr)) ppMIPSInstr;
- ppReg = (__typeof__(ppReg)) ppHRegMIPS;
- iselSB = iselSB_MIPS;
- emit = (__typeof__(emit)) emit_MIPSInstr;
+ rRegUniv = MIPS32FN(getRRegUniverse_MIPS(mode64));
+ isMove = (__typeof__(isMove)) MIPS32FN(isMove_MIPSInstr);
+ getRegUsage
+ = (__typeof__(getRegUsage)) MIPS32FN(getRegUsage_MIPSInstr);
+ mapRegs = (__typeof__(mapRegs)) MIPS32FN(mapRegs_MIPSInstr);
+ genSpill = (__typeof__(genSpill)) MIPS32FN(genSpill_MIPS);
+ genReload = (__typeof__(genReload)) MIPS32FN(genReload_MIPS);
+ ppInstr = (__typeof__(ppInstr)) MIPS32FN(ppMIPSInstr);
+ ppReg = (__typeof__(ppReg)) MIPS32FN(ppHRegMIPS);
+ iselSB = MIPS32FN(iselSB_MIPS);
+ emit = (__typeof__(emit)) MIPS32FN(emit_MIPSInstr);
host_word_type = Ity_I32;
vassert(vta->archinfo_host.endness == VexEndnessLE
|| vta->archinfo_host.endness == VexEndnessBE);
@@ -437,16 +538,17 @@
case VexArchMIPS64:
mode64 = True;
- rRegUniv = getRRegUniverse_MIPS(mode64);
- isMove = (__typeof__(isMove)) isMove_MIPSInstr;
- getRegUsage = (__typeof__(getRegUsage)) getRegUsage_MIPSInstr;
- mapRegs = (__typeof__(mapRegs)) mapRegs_MIPSInstr;
- genSpill = (__typeof__(genSpill)) genSpill_MIPS;
- genReload = (__typeof__(genReload)) genReload_MIPS;
- ppInstr = (__typeof__(ppInstr)) ppMIPSInstr;
- ppReg = (__typeof__(ppReg)) ppHRegMIPS;
- iselSB = iselSB_MIPS;
- emit = (__typeof__(emit)) emit_MIPSInstr;
+ rRegUniv = MIPS64FN(getRRegUniverse_MIPS(mode64));
+ isMove = (__typeof__(isMove)) MIPS64FN(isMove_MIPSInstr);
+ getRegUsage
+ = (__typeof__(getRegUsage)) MIPS64FN(getRegUsage_MIPSInstr);
+ mapRegs = (__typeof__(mapRegs)) MIPS64FN(mapRegs_MIPSInstr);
+ genSpill = (__typeof__(genSpill)) MIPS64FN(genSpill_MIPS);
+ genReload = (__typeof__(genReload)) MIPS64FN(genReload_MIPS);
+ ppInstr = (__typeof__(ppInstr)) MIPS64FN(ppMIPSInstr);
+ ppReg = (__typeof__(ppReg)) MIPS64FN(ppHRegMIPS);
+ iselSB = MIPS64FN(iselSB_MIPS);
+ emit = (__typeof__(emit)) MIPS64FN(emit_MIPSInstr);
host_word_type = Ity_I64;
vassert(vta->archinfo_host.endness == VexEndnessLE
|| vta->archinfo_host.endness == VexEndnessBE);
@@ -463,12 +565,13 @@
switch (vta->arch_guest) {
case VexArchX86:
- preciseMemExnsFn = guest_x86_state_requires_precise_mem_exns;
- disInstrFn = disInstr_X86;
- specHelper = guest_x86_spechelper;
+ preciseMemExnsFn
+ = X86FN(guest_x86_state_requires_precise_mem_exns);
+ disInstrFn = X86FN(disInstr_X86);
+ specHelper = X86FN(guest_x86_spechelper);
guest_sizeB = sizeof(VexGuestX86State);
guest_word_type = Ity_I32;
- guest_layout = &x86guest_layout;
+ guest_layout = X86FN(&x86guest_layout);
offB_CMSTART = offsetof(VexGuestX86State,guest_CMSTART);
offB_CMLEN = offsetof(VexGuestX86State,guest_CMLEN);
offB_GUEST_IP = offsetof(VexGuestX86State,guest_EIP);
@@ -483,12 +586,13 @@
break;
case VexArchAMD64:
- preciseMemExnsFn = guest_amd64_state_requires_precise_mem_exns;
- disInstrFn = disInstr_AMD64;
- specHelper = guest_amd64_spechelper;
+ preciseMemExnsFn
+ = AMD64FN(guest_amd64_state_requires_precise_mem_exns);
+ disInstrFn = AMD64FN(disInstr_AMD64);
+ specHelper = AMD64FN(guest_amd64_spechelper);
guest_sizeB = sizeof(VexGuestAMD64State);
guest_word_type = Ity_I64;
- guest_layout = &amd64guest_layout;
+ guest_layout = AMD64FN(&amd64guest_layout);
offB_CMSTART = offsetof(VexGuestAMD64State,guest_CMSTART);
offB_CMLEN = offsetof(VexGuestAMD64State,guest_CMLEN);
offB_GUEST_IP = offsetof(VexGuestAMD64State,guest_RIP);
@@ -503,12 +607,13 @@
break;
case VexArchPPC32:
- preciseMemExnsFn = guest_ppc32_state_requires_precise_mem_exns;
- disInstrFn = disInstr_PPC;
- specHelper = guest_ppc32_spechelper;
+ preciseMemExnsFn
+ = PPC32FN(guest_ppc32_state_requires_precise_mem_exns);
+ disInstrFn = PPC32FN(disInstr_PPC);
+ specHelper = PPC32FN(guest_ppc32_spechelper);
guest_sizeB = sizeof(VexGuestPPC32State);
guest_word_type = Ity_I32;
- guest_layout = &ppc32Guest_layout;
+ guest_layout = PPC32FN(&ppc32Guest_layout);
offB_CMSTART = offsetof(VexGuestPPC32State,guest_CMSTART);
offB_CMLEN = offsetof(VexGuestPPC32State,guest_CMLEN);
offB_GUEST_IP = offsetof(VexGuestPPC32State,guest_CIA);
@@ -523,12 +628,13 @@
break;
case VexArchPPC64:
- preciseMemExnsFn = guest_ppc64_state_requires_precise_mem_exns;
- disInstrFn = disInstr_PPC;
- specHelper = guest_ppc64_spechelper;
+ preciseMemExnsFn
+ = PPC64FN(guest_ppc64_state_requires_precise_mem_exns);
+ disInstrFn = PPC64FN(disInstr_PPC);
+ specHelper = PPC64FN(guest_ppc64_spechelper);
guest_sizeB = sizeof(VexGuestPPC64State);
guest_word_type = Ity_I64;
- guest_layout = &ppc64Guest_layout;
+ guest_layout = PPC64FN(&ppc64Guest_layout);
offB_CMSTART = offsetof(VexGuestPPC64State,guest_CMSTART);
offB_CMLEN = offsetof(VexGuestPPC64State,guest_CMLEN);
offB_GUEST_IP = offsetof(VexGuestPPC64State,guest_CIA);
@@ -545,12 +651,13 @@
break;
case VexArchS390X:
- preciseMemExnsFn = guest_s390x_state_requires_precise_mem_exns;
- disInstrFn = disInstr_S390;
- specHelper = guest_s390x_spechelper;
+ preciseMemExnsFn
+ = S390FN(guest_s390x_state_requires_precise_mem_exns);
+ disInstrFn = S390FN(disInstr_S390);
+ specHelper = S390FN(guest_s390x_spechelper);
guest_sizeB = sizeof(VexGuestS390XState);
guest_word_type = Ity_I64;
- guest_layout = &s390xGuest_layout;
+ guest_layout = S390FN(&s390xGuest_layout);
offB_CMSTART = offsetof(VexGuestS390XState,guest_CMSTART);
offB_CMLEN = offsetof(VexGuestS390XState,guest_CMLEN);
offB_GUEST_IP = offsetof(VexGuestS390XState,guest_IA);
@@ -565,12 +672,13 @@
break;
case VexArchARM:
- preciseMemExnsFn = guest_arm_state_requires_precise_mem_exns;
- disInstrFn = disInstr_ARM;
- specHelper = guest_arm_spechelper;
+ preciseMemExnsFn
+ = ARMFN(guest_arm_state_requires_precise_mem_exns);
+ disInstrFn = ARMFN(disInstr_ARM);
+ specHelper = ARMFN(guest_arm_spechelper);
guest_sizeB = sizeof(VexGuestARMState);
guest_word_type = Ity_I32;
- guest_layout = &armGuest_layout;
+ guest_layout = ARMFN(&armGuest_layout);
offB_CMSTART = offsetof(VexGuestARMState,guest_CMSTART);
offB_CMLEN = offsetof(VexGuestARMState,guest_CMLEN);
offB_GUEST_IP = offsetof(VexGuestARMState,guest_R15T);
@@ -585,12 +693,13 @@
break;
case VexArchARM64:
- preciseMemExnsFn = guest_arm64_state_requires_precise_mem_exns;
- disInstrFn = disInstr_ARM64;
- specHelper = guest_arm64_spechelper;
+ preciseMemExnsFn
+ = ARM64FN(guest_arm64_state_requires_precise_mem_exns);
+ disInstrFn = ARM64FN(disInstr_ARM64);
+ specHelper = ARM64FN(guest_arm64_spechelper);
guest_sizeB = sizeof(VexGuestARM64State);
guest_word_type = Ity_I64;
- guest_layout = &arm64Guest_layout;
+ guest_layout = ARM64FN(&arm64Guest_layout);
offB_CMSTART = offsetof(VexGuestARM64State,guest_CMSTART);
offB_CMLEN = offsetof(VexGuestARM64State,guest_CMLEN);
offB_GUEST_IP = offsetof(VexGuestARM64State,guest_PC);
@@ -605,12 +714,13 @@
break;
case VexArchMIPS32:
- preciseMemExnsFn = guest_mips32_state_requires_precise_mem_exns;
- disInstrFn = disInstr_MIPS;
- specHelper = guest_mips32_spechelper;
+ preciseMemExnsFn
+ = MIPS32FN(guest_mips32_state_requires_precise_mem_exns);
+ disInstrFn = MIPS32FN(disInstr_MIPS);
+ specHelper = MIPS32FN(guest_mips32_spechelper);
guest_sizeB = sizeof(VexGuestMIPS32State);
guest_word_type = Ity_I32;
- guest_layout = &mips32Guest_layout;
+ guest_layout = MIPS32FN(&mips32Guest_layout);
offB_CMSTART = offsetof(VexGuestMIPS32State,guest_CMSTART);
offB_CMLEN = offsetof(VexGuestMIPS32State,guest_CMLEN);
offB_GUEST_IP = offsetof(VexGuestMIPS32State,guest_PC);
@@ -626,12 +736,13 @@
break;
case VexArchMIPS64:
- preciseMemExnsFn = guest_mips64_state_requires_precise_mem_exns;
- disInstrFn = disInstr_MIPS;
- specHelper = guest_mips64_spechelper;
+ preciseMemExnsFn
+ = MIPS64FN(guest_mips64_state_requires_precise_mem_exns);
+ disInstrFn = MIPS64FN(disInstr_MIPS);
+ specHelper = MIPS64FN(guest_mips64_spechelper);
guest_sizeB = sizeof(VexGuestMIPS64State);
guest_word_type = Ity_I64;
- guest_layout = &mips64Guest_layout;
+ guest_layout = MIPS64FN(&mips64Guest_layout);
offB_CMSTART = offsetof(VexGuestMIPS64State,guest_CMSTART);
offB_CMLEN = offsetof(VexGuestMIPS64State,guest_CMLEN);
offB_GUEST_IP = offsetof(VexGuestMIPS64State,guest_PC);
@@ -978,50 +1089,50 @@
{
switch (arch_host) {
case VexArchX86:
- return chainXDirect_X86(endness_host,
- place_to_chain,
- disp_cp_chain_me_EXPECTED,
- place_to_jump_to);
+ X86ST(return chainXDirect_X86(endness_host,
+ place_to_chain,
+ disp_cp_chain_me_EXPECTED,
+ place_to_jump_to));
case VexArchAMD64:
- return chainXDirect_AMD64(endness_host,
- place_to_chain,
- disp_cp_chain_me_EXPECTED,
- place_to_jump_to);
+ AMD64ST(return chainXDirect_AMD64(endness_host,
+ place_to_chain,
+ disp_cp_chain_me_EXPECTED,
+ place_to_jump_to));
case VexArchARM:
- return chainXDirect_ARM(endness_host,
- place_to_chain,
- disp_cp_chain_me_EXPECTED,
- place_to_jump_to);
+ ARMST(return chainXDirect_ARM(endness_host,
+ place_to_chain,
+ disp_cp_chain_me_EXPECTED,
+ place_to_jump_to));
case VexArchARM64:
- return chainXDirect_ARM64(endness_host,
- place_to_chain,
- disp_cp_chain_me_EXPECTED,
- place_to_jump_to);
+ ARM64ST(return chainXDirect_ARM64(endness_host,
+ place_to_chain,
+ disp_cp_chain_me_EXPECTED,
+ place_to_jump_to));
case VexArchS390X:
- return chainXDirect_S390(endness_host,
- place_to_chain,
- disp_cp_chain_me_EXPECTED,
- place_to_jump_to);
+ S390ST(return chainXDirect_S390(endness_host,
+ place_to_chain,
+ disp_cp_chain_me_EXPECTED,
+ place_to_jump_to));
case VexArchPPC32:
- return chainXDirect_PPC(endness_host,
- place_to_chain,
- disp_cp_chain_me_EXPECTED,
- place_to_jump_to, False/*!mode64*/);
+ PPC32ST(return chainXDirect_PPC(endness_host,
+ place_to_chain,
+ disp_cp_chain_me_EXPECTED,
+ place_to_jump_to, False/*!mode64*/));
case VexArchPPC64:
- return chainXDirect_PPC(endness_host,
- place_to_chain,
- disp_cp_chain_me_EXPECTED,
- place_to_jump_to, True/*mode64*/);
+ PPC64ST(return chainXDirect_PPC(endness_host,
+ place_to_chain,
+ disp_cp_chain_me_EXPECTED,
+ place_to_jump_to, True/*mode64*/));
case VexArchMIPS32:
- return chainXDirect_MIPS(endness_host,
- place_to_chain,
- disp_cp_chain_me_EXPECTED,
- place_to_jump_to, False/*!mode64*/);
+ MIPS32ST(return chainXDirect_MIPS(endness_host,
+ place_to_chain,
+ disp_cp_chain_me_EXPECTED,
+ place_to_jump_to, False/*!mode64*/));
case VexArchMIPS64:
- return chainXDirect_MIPS(endness_host,
- place_to_chain,
- disp_cp_chain_me_EXPECTED,
- place_to_jump_to, True/*!mode64*/);
+ MIPS64ST(return chainXDirect_MIPS(endness_host,
+ place_to_chain,
+ disp_cp_chain_me_EXPECTED,
+ place_to_jump_to, True/*!mode64*/));
default:
vassert(0);
}
@@ -1035,50 +1146,50 @@
{
switch (arch_host) {
case VexArchX86:
- return unchainXDirect_X86(endness_host,
- place_to_unchain,
- place_to_jump_to_EXPECTED,
- disp_cp_chain_me);
+ X86ST(return unchainXDirect_X86(endness_host,
+ place_to_unchain,
+ place_to_jump_to_EXPECTED,
+ disp_cp_chain_me));
case VexArchAMD64:
- return unchainXDirect_AMD64(endness_host,
- place_to_unchain,
- place_to_jump_to_EXPECTED,
- disp_cp_chain_me);
+ AMD64ST(return unchainXDirect_AMD64(endness_host,
+ place_to_unchain,
+ place_to_jump_to_EXPECTED,
+ disp_cp_chain_me));
case VexArchARM:
- return unchainXDirect_ARM(endness_host,
- place_to_unchain,
- place_to_jump_to_EXPECTED,
- disp_cp_chain_me);
+ ARMST(return unchainXDirect_ARM(endness_host,
+ place_to_unchain,
+ place_to_jump_to_EXPECTED,
+ disp_cp_chain_me));
case VexArchARM64:
- return unchainXDirect_ARM64(endness_host,
- place_to_unchain,
- place_to_jump_to_EXPECTED,
- disp_cp_chain_me);
+ ARM64ST(return unchainXDirect_ARM64(endness_host,
+ place_to_unchain,
+ place_to_jump_to_EXPECTED,
+ disp_cp_chain_me));
case VexArchS390X:
- return unchainXDirect_S390(endness_host,
- place_to_unchain,
- place_to_jump_to_EXPECTED,
- disp_cp_chain_me);
+ S390ST(return unchainXDirect_S390(endness_host,
+ place_to_unchain,
+ place_to_jump_to_EXPECTED,
+ disp_cp_chain_me));
case VexArchPPC32:
- return unchainXDirect_PPC(endness_host,
- place_to_unchain,
- place_to_jump_to_EXPECTED,
- disp_cp_chain_me, False/*!mode64*/);
+ PPC32ST(return unchainXDirect_PPC(endness_host,
+ place_to_unchain,
+ place_to_jump_to_EXPECTED,
+ disp_cp_chain_me, False/*!mode64*/));
case VexArchPPC64:
- return unchainXDirect_PPC(endness_host,
- place_to_unchain,
- place_to_jump_to_EXPECTED,
- disp_cp_chain_me, True/*mode64*/);
+ PPC64ST(return unchainXDirect_PPC(endness_host,
+ place_to_unchain,
+ place_to_jump_to_EXPECTED,
+ disp_cp_chain_me, True/*mode64*/));
case VexArchMIPS32:
- return unchainXDirect_MIPS(endness_host,
- place_to_unchain,
- place_to_jump_to_EXPECTED,
- disp_cp_chain_me, False/*!mode64*/);
+ MIPS32ST(return unchainXDirect_MIPS(endness_host,
+ place_to_unchain,
+ place_to_jump_to_EXPECTED,
+ disp_cp_chain_me, False/*!mode64*/));
case VexArchMIPS64:
- return unchainXDirect_MIPS(endness_host,
- place_to_unchain,
- place_to_jump_to_EXPECTED,
- disp_cp_chain_me, True/*!mode64*/);
+ MIPS64ST(return unchainXDirect_MIPS(endness_host,
+ place_to_unchain,
+ place_to_jump_to_EXPECTED,
+ disp_cp_chain_me, True/*mode64*/));
default:
vassert(0);
}
@@ -1090,21 +1201,23 @@
if (UNLIKELY(cached == 0)) {
switch (arch_host) {
case VexArchX86:
- cached = evCheckSzB_X86(); break;
+ X86ST(cached = evCheckSzB_X86()); break;
case VexArchAMD64:
- cached = evCheckSzB_AMD64(); break;
+ AMD64ST(cached = evCheckSzB_AMD64()); break;
case VexArchARM:
- cached = evCheckSzB_ARM(); break;
+ ARMST(cached = evCheckSzB_ARM()); break;
case VexArchARM64:
- cached = evCheckSzB_ARM64(); break;
+ ARM64ST(cached = evCheckSzB_ARM64()); break;
case VexArchS390X:
- cached = evCheckSzB_S390(); break;
+ S390ST(cached = evCheckSzB_S390()); break;
case VexArchPPC32:
+ PPC32ST(cached = evCheckSzB_PPC()); break;
case VexArchPPC64:
- cached = evCheckSzB_PPC(); break;
+ PPC64ST(cached = evCheckSzB_PPC()); break;
case VexArchMIPS32:
+ MIPS32ST(cached = evCheckSzB_MIPS()); break;
case VexArchMIPS64:
- cached = evCheckSzB_MIPS(); break;
+ MIPS64ST(cached = evCheckSzB_MIPS()); break;
default:
vassert(0);
}
@@ -1119,32 +1232,32 @@
{
switch (arch_host) {
case VexArchX86:
- return patchProfInc_X86(endness_host, place_to_patch,
- location_of_counter);
+ X86ST(return patchProfInc_X86(endness_host, place_to_patch,
+ location_of_counter));
case VexArchAMD64:
- return patchProfInc_AMD64(endness_host, place_to_patch,
- location_of_counter);
+ AMD64ST(return patchProfInc_AMD64(endness_host, place_to_patch,
+ location_of_counter));
case VexArchARM:
- return patchProfInc_ARM(endness_host, place_to_patch,
- location_of_counter);
+ ARMST(return patchProfInc_ARM(endness_host, place_to_patch,
+ location_of_counter));
case VexArchARM64:
- return patchProfInc_ARM64(endness_host, place_to_patch,
- location_of_counter);
+ ARM64ST(return patchProfInc_ARM64(endness_host, place_to_patch,
+ location_of_counter));
case VexArchS390X:
- return patchProfInc_S390(endness_host, place_to_patch,
- location_of_counter);
+ S390ST(return patchProfInc_S390(endness_host, place_to_patch,
+ location_of_counter));
case VexArchPPC32:
- return patchProfInc_PPC(endness_host, place_to_patch,
- location_of_counter, False/*!mode64*/);
+ PPC32ST(return patchProfInc_PPC(endness_host, place_to_patch,
+ location_of_counter, False/*!mode64*/));
case VexArchPPC64:
- return patchProfInc_PPC(endness_host, place_to_patch,
- location_of_counter, True/*mode64*/);
+ PPC64ST(return patchProfInc_PPC(endness_host, place_to_patch,
+ location_of_counter, True/*mode64*/));
case VexArchMIPS32:
- return patchProfInc_MIPS(endness_host, place_to_patch,
- location_of_counter, False/*!mode64*/);
+ MIPS32ST(return patchProfInc_MIPS(endness_host, place_to_patch,
+ location_of_counter, False/*!mode64*/));
case VexArchMIPS64:
- return patchProfInc_MIPS(endness_host, place_to_patch,
- location_of_counter, True/*!mode64*/);
+ MIPS64ST(return patchProfInc_MIPS(endness_host, place_to_patch,
+ location_of_counter, True/*!mode64*/));
default:
vassert(0);
}
Added: trunk/priv/multiarch_main_main.c
==============================================================================
--- trunk/priv/multiarch_main_main.c (added)
+++ trunk/priv/multiarch_main_main.c Wed Apr 1 00:01:57 2015
@@ -0,0 +1,2 @@
+#define VEXMULTIARCH 1
+#include "main_main.c"
From: <sv...@va...> - 2015-03-31 22:19:31
Author: florian
Date: Tue Mar 31 23:19:23 2015
New Revision: 15055
Log:
Update list of ignored files.
Modified:
trunk/memcheck/tests/ (props changed)
From: <sv...@va...> - 2015-03-31 20:39:00
Author: philippe
Date: Tue Mar 31 21:38:52 2015
New Revision: 15054
Log:
Further reduction of the size of the sector TTE tables
For the default memcheck configuration (32 bits), this patch
decreases the size by 13.6 MB, i.e. from 89945856 to 76317696 bytes.
Note that the type EClassNo is introduced only for readability
(and to avoid some casts). It does not change the size
of the TTEntry.
The TTEntry size is reduced by using unions and/or Bool on 1 bit.
No performance impact detected (outer callgrind/inner memcheck bz2
on x86 shows a small improvement).
Modified:
trunk/coregrind/m_transtab.c
Modified: trunk/coregrind/m_transtab.c
==============================================================================
--- trunk/coregrind/m_transtab.c (original)
+++ trunk/coregrind/m_transtab.c Tue Mar 31 21:38:52 2015
@@ -92,13 +92,15 @@
/* Equivalence classes for fast address range deletion. There are 1 +
2^ECLASS_WIDTH bins. The highest one, ECLASS_MISC, describes an
address range which does not fall cleanly within any specific bin.
- Note that ECLASS_SHIFT + ECLASS_WIDTH must be < 32. */
+ Note that ECLASS_SHIFT + ECLASS_WIDTH must be < 32.
+ ECLASS_N must fit in an EClassNo. */
#define ECLASS_SHIFT 11
#define ECLASS_WIDTH 8
#define ECLASS_MISC (1 << ECLASS_WIDTH)
#define ECLASS_N (1 + ECLASS_MISC)
+STATIC_ASSERT(ECLASS_SHIFT + ECLASS_WIDTH < 32);
-
+typedef UShort EClassNo;
/*------------------ TYPES ------------------*/
@@ -107,8 +109,9 @@
struct {
SECno from_sNo; /* sector number */
TTEno from_tteNo; /* TTE number in given sector */
- UInt from_offs; /* code offset from TCEntry::tcptr where the patch is */
- Bool to_fastEP; /* Is the patch to a fast or slow entry point? */
+ UInt from_offs: (sizeof(UInt)*8)-1; /* code offset from TCEntry::tcptr
+ where the patch is */
+ Bool to_fastEP:1; /* Is the patch to a fast or slow entry point? */
}
InEdge;
@@ -126,18 +129,24 @@
#define N_FIXED_IN_EDGE_ARR 3
typedef
struct {
- UInt n_fixed; /* 0 .. N_FIXED_IN_EDGE_ARR */
- InEdge fixed[N_FIXED_IN_EDGE_ARR];
- XArray* var; /* XArray* of InEdgeArr */
+ Bool has_var:1; /* True if var is used (then n_fixed must be 0) */
+ UInt n_fixed: (sizeof(UInt)*8)-1; /* 0 .. N_FIXED_IN_EDGE_ARR */
+ union {
+ InEdge fixed[N_FIXED_IN_EDGE_ARR]; /* if !has_var */
+ XArray* var; /* XArray* of InEdgeArr */ /* if has_var */
+ } edges;
}
InEdgeArr;
#define N_FIXED_OUT_EDGE_ARR 2
typedef
struct {
- UInt n_fixed; /* 0 .. N_FIXED_OUT_EDGE_ARR */
- OutEdge fixed[N_FIXED_OUT_EDGE_ARR];
- XArray* var; /* XArray* of OutEdgeArr */
+ Bool has_var:1; /* True if var is used (then n_fixed must be 0) */
+ UInt n_fixed: (sizeof(UInt)*8)-1; /* 0 .. N_FIXED_OUT_EDGE_ARR */
+ union {
+ OutEdge fixed[N_FIXED_OUT_EDGE_ARR]; /* if !has_var */
+ XArray* var; /* XArray* of OutEdgeArr */ /* if has_var */
+ } edges;
}
OutEdgeArr;
@@ -193,8 +202,8 @@
The eclass info is similar to, and derived from, this entry's
'vge' field, but it is not the same */
UShort n_tte2ec; // # tte2ec pointers (1 to 3)
- UShort tte2ec_ec[3]; // for each, the eclass #
- UInt tte2ec_ix[3]; // and the index within the eclass.
+ EClassNo tte2ec_ec[3]; // for each, the eclass #
+ UInt tte2ec_ix[3]; // and the index within the eclass.
// for i in 0 .. n_tte2ec-1
// sec->ec2tte[ tte2ec_ec[i] ][ tte2ec_ix[i] ]
// should be the index
@@ -497,9 +506,9 @@
static UWord InEdgeArr__size ( const InEdgeArr* iea )
{
- if (iea->var) {
+ if (iea->has_var) {
vg_assert(iea->n_fixed == 0);
- return VG_(sizeXA)(iea->var);
+ return VG_(sizeXA)(iea->edges.var);
} else {
vg_assert(iea->n_fixed <= N_FIXED_IN_EDGE_ARR);
return iea->n_fixed;
@@ -508,10 +517,11 @@
static void InEdgeArr__makeEmpty ( InEdgeArr* iea )
{
- if (iea->var) {
+ if (iea->has_var) {
vg_assert(iea->n_fixed == 0);
- VG_(deleteXA)(iea->var);
- iea->var = NULL;
+ VG_(deleteXA)(iea->edges.var);
+ iea->edges.var = NULL;
+ iea->has_var = False;
} else {
vg_assert(iea->n_fixed <= N_FIXED_IN_EDGE_ARR);
iea->n_fixed = 0;
@@ -521,25 +531,25 @@
static
InEdge* InEdgeArr__index ( InEdgeArr* iea, UWord i )
{
- if (iea->var) {
+ if (iea->has_var) {
vg_assert(iea->n_fixed == 0);
- return (InEdge*)VG_(indexXA)(iea->var, i);
+ return (InEdge*)VG_(indexXA)(iea->edges.var, i);
} else {
vg_assert(i < iea->n_fixed);
- return &iea->fixed[i];
+ return &iea->edges.fixed[i];
}
}
static
void InEdgeArr__deleteIndex ( InEdgeArr* iea, UWord i )
{
- if (iea->var) {
+ if (iea->has_var) {
vg_assert(iea->n_fixed == 0);
- VG_(removeIndexXA)(iea->var, i);
+ VG_(removeIndexXA)(iea->edges.var, i);
} else {
vg_assert(i < iea->n_fixed);
for (; i+1 < iea->n_fixed; i++) {
- iea->fixed[i] = iea->fixed[i+1];
+ iea->edges.fixed[i] = iea->edges.fixed[i+1];
}
iea->n_fixed--;
}
@@ -548,35 +558,37 @@
static
void InEdgeArr__add ( InEdgeArr* iea, InEdge* ie )
{
- if (iea->var) {
+ if (iea->has_var) {
vg_assert(iea->n_fixed == 0);
- VG_(addToXA)(iea->var, ie);
+ VG_(addToXA)(iea->edges.var, ie);
} else {
vg_assert(iea->n_fixed <= N_FIXED_IN_EDGE_ARR);
if (iea->n_fixed == N_FIXED_IN_EDGE_ARR) {
/* The fixed array is full, so we have to initialise an
XArray and copy the fixed array into it. */
- iea->var = VG_(newXA)(ttaux_malloc, "transtab.IEA__add",
- ttaux_free,
- sizeof(InEdge));
+ XArray *var = VG_(newXA)(ttaux_malloc, "transtab.IEA__add",
+ ttaux_free,
+ sizeof(InEdge));
UWord i;
for (i = 0; i < iea->n_fixed; i++) {
- VG_(addToXA)(iea->var, &iea->fixed[i]);
+ VG_(addToXA)(var, &iea->edges.fixed[i]);
}
- VG_(addToXA)(iea->var, ie);
+ VG_(addToXA)(var, ie);
iea->n_fixed = 0;
+ iea->has_var = True;
+ iea->edges.var = var;
} else {
/* Just add to the fixed array. */
- iea->fixed[iea->n_fixed++] = *ie;
+ iea->edges.fixed[iea->n_fixed++] = *ie;
}
}
}
static UWord OutEdgeArr__size ( const OutEdgeArr* oea )
{
- if (oea->var) {
+ if (oea->has_var) {
vg_assert(oea->n_fixed == 0);
- return VG_(sizeXA)(oea->var);
+ return VG_(sizeXA)(oea->edges.var);
} else {
vg_assert(oea->n_fixed <= N_FIXED_OUT_EDGE_ARR);
return oea->n_fixed;
@@ -585,10 +597,11 @@
static void OutEdgeArr__makeEmpty ( OutEdgeArr* oea )
{
- if (oea->var) {
+ if (oea->has_var) {
vg_assert(oea->n_fixed == 0);
- VG_(deleteXA)(oea->var);
- oea->var = NULL;
+ VG_(deleteXA)(oea->edges.var);
+ oea->edges.var = NULL;
+ oea->has_var = False;
} else {
vg_assert(oea->n_fixed <= N_FIXED_OUT_EDGE_ARR);
oea->n_fixed = 0;
@@ -598,25 +611,25 @@
static
OutEdge* OutEdgeArr__index ( OutEdgeArr* oea, UWord i )
{
- if (oea->var) {
+ if (oea->has_var) {
vg_assert(oea->n_fixed == 0);
- return (OutEdge*)VG_(indexXA)(oea->var, i);
+ return (OutEdge*)VG_(indexXA)(oea->edges.var, i);
} else {
vg_assert(i < oea->n_fixed);
- return &oea->fixed[i];
+ return &oea->edges.fixed[i];
}
}
static
void OutEdgeArr__deleteIndex ( OutEdgeArr* oea, UWord i )
{
- if (oea->var) {
+ if (oea->has_var) {
vg_assert(oea->n_fixed == 0);
- VG_(removeIndexXA)(oea->var, i);
+ VG_(removeIndexXA)(oea->edges.var, i);
} else {
vg_assert(i < oea->n_fixed);
for (; i+1 < oea->n_fixed; i++) {
- oea->fixed[i] = oea->fixed[i+1];
+ oea->edges.fixed[i] = oea->edges.fixed[i+1];
}
oea->n_fixed--;
}
@@ -625,26 +638,28 @@
static
void OutEdgeArr__add ( OutEdgeArr* oea, OutEdge* oe )
{
- if (oea->var) {
+ if (oea->has_var) {
vg_assert(oea->n_fixed == 0);
- VG_(addToXA)(oea->var, oe);
+ VG_(addToXA)(oea->edges.var, oe);
} else {
vg_assert(oea->n_fixed <= N_FIXED_OUT_EDGE_ARR);
if (oea->n_fixed == N_FIXED_OUT_EDGE_ARR) {
/* The fixed array is full, so we have to initialise an
XArray and copy the fixed array into it. */
- oea->var = VG_(newXA)(ttaux_malloc, "transtab.OEA__add",
- ttaux_free,
- sizeof(OutEdge));
+ XArray *var = VG_(newXA)(ttaux_malloc, "transtab.OEA__add",
+ ttaux_free,
+ sizeof(OutEdge));
UWord i;
for (i = 0; i < oea->n_fixed; i++) {
- VG_(addToXA)(oea->var, &oea->fixed[i]);
+ VG_(addToXA)(var, &oea->edges.fixed[i]);
}
- VG_(addToXA)(oea->var, oe);
+ VG_(addToXA)(var, oe);
oea->n_fixed = 0;
+ oea->has_var = True;
+ oea->edges.var = var;
} else {
/* Just add to the fixed array. */
- oea->fixed[oea->n_fixed++] = *oe;
+ oea->edges.fixed[oea->n_fixed++] = *oe;
}
}
}
@@ -964,7 +979,7 @@
/* Return equivalence class number for a range. */
-static Int range_to_eclass ( Addr start, UInt len )
+static EClassNo range_to_eclass ( Addr start, UInt len )
{
UInt mask = (1 << ECLASS_WIDTH) - 1;
UInt lo = (UInt)start;
@@ -987,14 +1002,15 @@
*/
static
-Int vexGuestExtents_to_eclasses ( /*OUT*/Int* eclasses,
+Int vexGuestExtents_to_eclasses ( /*OUT*/EClassNo* eclasses,
const VexGuestExtents* vge )
{
# define SWAP(_lv1,_lv2) \
do { Int t = _lv1; _lv1 = _lv2; _lv2 = t; } while (0)
- Int i, j, n_ec, r;
+ Int i, j, n_ec;
+ EClassNo r;
vg_assert(vge->n_used >= 1 && vge->n_used <= 3);
@@ -1047,7 +1063,7 @@
this sector. Returns used location in eclass array. */
static
-UInt addEClassNo ( /*MOD*/Sector* sec, Int ec, TTEno tteno )
+UInt addEClassNo ( /*MOD*/Sector* sec, EClassNo ec, TTEno tteno )
{
Int old_sz, new_sz, i, r;
TTEno *old_ar, *new_ar;
@@ -1090,7 +1106,8 @@
static
void upd_eclasses_after_add ( /*MOD*/Sector* sec, TTEno tteno )
{
- Int i, r, eclasses[3];
+ Int i, r;
+ EClassNo eclasses[3];
TTEntry* tte;
vg_assert(tteno >= 0 && tteno < N_TTES_PER_SECTOR);
@@ -1115,7 +1132,9 @@
# define BAD(_str) do { whassup = (_str); goto bad; } while (0)
const HChar* whassup = NULL;
- Int i, j, k, n, ec_num, ec_idx;
+ Int j, k, n, ec_idx;
+ EClassNo i;
+ EClassNo ec_num;
TTEntry* tte;
TTEno tteno;
ULong* tce;
@@ -1417,10 +1436,10 @@
vg_assert(sec->tt == NULL);
vg_assert(sec->tc_next == NULL);
vg_assert(sec->tt_n_inuse == 0);
- for (i = 0; i < ECLASS_N; i++) {
- vg_assert(sec->ec2tte_size[i] == 0);
- vg_assert(sec->ec2tte_used[i] == 0);
- vg_assert(sec->ec2tte[i] == NULL);
+ for (EClassNo e = 0; e < ECLASS_N; e++) {
+ vg_assert(sec->ec2tte_size[e] == 0);
+ vg_assert(sec->ec2tte_used[e] == 0);
+ vg_assert(sec->ec2tte[e] == NULL);
}
vg_assert(sec->host_extents == NULL);
@@ -1526,16 +1545,16 @@
sno);
/* Free up the eclass structures. */
- for (i = 0; i < ECLASS_N; i++) {
- if (sec->ec2tte_size[i] == 0) {
- vg_assert(sec->ec2tte_used[i] == 0);
- vg_assert(sec->ec2tte[i] == NULL);
+ for (EClassNo e = 0; e < ECLASS_N; e++) {
+ if (sec->ec2tte_size[e] == 0) {
+ vg_assert(sec->ec2tte_used[e] == 0);
+ vg_assert(sec->ec2tte[e] == NULL);
} else {
- vg_assert(sec->ec2tte[i] != NULL);
- ttaux_free(sec->ec2tte[i]);
- sec->ec2tte[i] = NULL;
- sec->ec2tte_size[i] = 0;
- sec->ec2tte_used[i] = 0;
+ vg_assert(sec->ec2tte[e] != NULL);
+ ttaux_free(sec->ec2tte[e]);
+ sec->ec2tte[e] = NULL;
+ sec->ec2tte_size[e] = 0;
+ sec->ec2tte_used[e] = 0;
}
}
@@ -1856,7 +1875,8 @@
static void delete_tte ( /*MOD*/Sector* sec, SECno secNo, TTEno tteno,
VexArch arch_host, VexEndness endness_host )
{
- Int i, ec_num, ec_idx;
+ Int i, ec_idx;
+ EClassNo ec_num;
TTEntry* tte;
/* sec and secNo are mutually redundant; cross-check. */
@@ -1872,7 +1892,7 @@
/* Deal with the ec-to-tte links first. */
for (i = 0; i < tte->n_tte2ec; i++) {
- ec_num = (Int)tte->tte2ec_ec[i];
+ ec_num = tte->tte2ec_ec[i];
ec_idx = tte->tte2ec_ix[i];
vg_assert(ec_num >= 0 && ec_num < ECLASS_N);
vg_assert(ec_idx >= 0);
@@ -1923,7 +1943,7 @@
static
Bool delete_translations_in_sector_eclass ( /*MOD*/Sector* sec, SECno secNo,
Addr guest_start, ULong range,
- Int ec,
+ EClassNo ec,
VexArch arch_host,
VexEndness endness_host )
{
@@ -1987,7 +2007,7 @@
{
Sector* sec;
SECno sno;
- Int ec;
+ EClassNo ec;
Bool anyDeleted = False;
vg_assert(init_done);
@@ -2420,11 +2440,10 @@
n_disc_count, n_disc_osize );
if (DEBUG_TRANSTAB) {
- Int i;
VG_(printf)("\n");
- for (i = 0; i < ECLASS_N; i++) {
- VG_(printf)(" %4d", sectors[0].ec2tte_used[i]);
- if (i % 16 == 15)
+ for (EClassNo e = 0; e < ECLASS_N; e++) {
+ VG_(printf)(" %4d", sectors[0].ec2tte_used[e]);
+ if (e % 16 == 15)
VG_(printf)("\n");
}
VG_(printf)("\n\n");
From: Julian S. <js...@ac...> - 2015-03-31 18:31:55
On 28/03/15 19:46, Florian Krohm wrote:
> So I ended up using
> +// Poor man's static assert
> +#define STATIC_ASSERT(x) extern int VG_(unused)[(x) ? 1 : -1]

Excellent. There are a whole bunch of asserts, particularly to do with
structure sizes, that we could pull out and turn into static asserts.
If anyone has enthusiasm :)

J
From: Andres T. <and...@ta...> - 2015-03-31 18:28:48
But if you want to replace functions from pthread, as helgrind and drd do,
you have to replace the client malloc/free. I don't know why.

2015-03-30 15:26 GMT-03:00 Philippe Waroquiers <phi...@sk...>:
> On Mon, 2015-03-30 at 15:10 -0300, Andres Tiraboschi wrote:
>> I was making a tool for valgrind for measuring things related with threads.
>> I made it work but for that, surprisingly, I had to replace mallocs and
>> frees with VG_(needs_malloc_replacement).
>> If I don't do this I get a segmentation fault.
>>
>> In drd, if I make the tool not replace the mallocs and frees,
>> there is the same segmentation fault.
>>
>> I couldn't find any documentation about this, so I don't know if this
>> is supposed to be like this or is a bug.
> No, it is not mandatory to replace (client) malloc/free.
>
> There are several tools that do not replace malloc/free.
> E.g. cachegrind and callgrind are not replacing client malloc/free.
>
> Philippe
From: Julian S. <js...@ac...> - 2015-03-31 18:24:57
On 29/03/15 18:03, Philippe Waroquiers wrote:
> When linking a tool for a certain architecture (e.g. x86), the resulting
> executable contains a significant proportion of the VEX library for
> other architectures (amd64, arm, ppc, mips, s390).

There are two reasons why it is like it currently is:

1. To retain the possibility of people using Vex for binary translation,
and being able to do this simply by passing suitable parameters to
LibVEX_Translate. That implies linking in all of the front ends and back
ends. We still have some folks who need that ability, for example Yan
(for binary analysis) and the recent post "Guest ARM, Host X86".

2. To avoid the code falling into a mass of ifdefs, and so having the
possibility of build failures on some targets but not others. The
simplest way to avoid that was simply to compile and link all files on
all targets.

I can see that it might be useful to reduce the binary sizes, but I'd
still want a way to retain both (1) and (2). I would prefer the proposed
configure-time scheme, so that by default, say, only the front- and
back-end objects for the primary and secondary arch get linked in, while
if --all-vex-archs is given it is built as at present, so that
requirement (2) above is still easily satisfied on developer machines
and nightly builders.

J
From: Julian S. <js...@ac...> - 2015-03-31 17:45:43
On 30/03/15 17:59, 林作健 wrote:
> After my job is done, should I post my patch to libVEX? I don't think
> anyone will like it, because C++11 unordered_map is involved.
> Implementing a C hash table? Maybe glib GHashtable? Maybe an option.

I think you should make your patch available. Even if not all parts are
good, maybe there might be something useful to take. I would be
interested to hear about your experiences using VEX for binary
translation.

> What is an arm-linux-androideabi vm? Look at Intel's libhoudini. That
> is what it is. Just Google it.

Are you trying to make a replacement for libhoudini, or something else?
I didn't understand what you are trying to do here.

J
From: Julian S. <js...@ac...> - 2015-03-31 17:41:03
On 30/03/15 11:14, Ivo Raisr wrote:
> Please could you shed some light on why setHelperAnns()
> in mc_translate.c annotates all memcheck's dirty helpers
> to indicate that IP and SP could be read?
>
> I guess the reason is that IP and SP are required for
> stack unwinding in case one of these dirty helpers
> detects a problem which needs to be reported.

Yes, that is correct.

> But if this is so, then why is FP not also present in the annotations?

Hmm. I don't know. My first impression is that this is a bug. How did
you detect this? By code inspection, or did you see some unwind
failures -- that is, direct evidence of unwind failures?

J