From: <sv...@va...> - 2010-08-22 22:21:27
Author: sewardj
Date: 2010-08-22 23:21:19 +0100 (Sun, 22 Aug 2010)
New Revision: 2019
Log:
Handle "Special" instructions in Thumb mode: "R3 = guest_NRADDR" and
"branch-and-link-to-noredir R4". This makes function wrapping work in
Thumb mode.
Modified:
trunk/priv/guest_arm_toIR.c
Modified: trunk/priv/guest_arm_toIR.c
===================================================================
--- trunk/priv/guest_arm_toIR.c 2010-08-22 18:47:30 UTC (rev 2018)
+++ trunk/priv/guest_arm_toIR.c 2010-08-22 22:21:19 UTC (rev 2019)
@@ -11757,10 +11757,9 @@
dres.whatNext = Dis_StopHere;
goto decode_success;
}
-#if 0
else
// 0x 0B 0B EA 4B
- if (getUIntLittleEndianly(code+16) == 0xE18BB00B
+ if (getUIntLittleEndianly(code+16) == 0x0B0BEA4B
/* orr r11,r11,r11 */) {
/* R3 = guest_NRADDR */
DIP("r3 = guest_NRADDR\n");
@@ -11770,17 +11769,16 @@
}
else
// 0x 0C 0C EA 4C
- if (getUIntLittleEndianly(code+16) == 0xE18CC00C
+ if (getUIntLittleEndianly(code+16) == 0x0C0CEA4C
/* orr r12,r12,r12 */) {
/* branch-and-link-to-noredir R4 */
DIP("branch-and-link-to-noredir r4\n");
- llPutIReg(14, mkU32( guest_R15_curr_instr_notENC + 20) );
+ llPutIReg(14, mkU32( (guest_R15_curr_instr_notENC + 20) | 1 ));
irsb->next = getIRegT(4);
irsb->jumpkind = Ijk_NoRedir;
dres.whatNext = Dis_StopHere;
goto decode_success;
}
-#endif
/* We don't know what it is. Set insn0 so decode_failure
can print the insn following the Special-insn preamble. */
insn0 = getUShortLittleEndianly(code+16);
@@ -15180,7 +15178,7 @@
/* All decode successes end up here. */
DIP("\n");
- vassert(dres.len == 2 || dres.len == 4);
+ vassert(dres.len == 2 || dres.len == 4 || dres.len == 20);
#if 0
// XXX is this necessary on Thumb?
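The patch above keys its detection off little-endian 32-bit reads of the preamble bytes, and sets bit 0 of the return address so execution resumes in Thumb state. A minimal sketch of both checks, with made-up helper names rather than VEX's internals:

```c
#include <stdint.h>

/* Sketch of the byte-matching the patch performs; helper names are
   illustrative, not VEX's.  getUIntLittleEndianly reads four bytes as
   a little-endian 32-bit value, so the Thumb encoding of
   "orr r11,r11,r11" (halfwords 0xEA4B 0x0B0B) reads back as
   0x0B0BEA4B. */
static uint32_t get_u32_le(const uint8_t* p)
{
    return (uint32_t)p[0]         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Marker for "R3 = guest_NRADDR" in Thumb mode. */
static int is_thumb_nraddr_marker(const uint8_t* code)
{
    return get_u32_le(code) == 0x0B0BEA4BU;
}

/* The patched LR value: the return address with bit 0 set, so the
   no-redir call returns to Thumb state, matching the
   (guest_R15_curr_instr_notENC + 20) | 1 change in the diff. */
static uint32_t thumb_return_addr(uint32_t insn_addr)
{
    return (insn_addr + 20u) | 1u;
}
```

The `| 1` is the interesting part: without it, returning from the wrapped call would silently switch the guest back to ARM state.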
From: <sv...@va...> - 2010-08-22 22:18:43
Author: sewardj
Date: 2010-08-22 23:18:31 +0100 (Sun, 22 Aug 2010)
New Revision: 11287
Log:
arm-linux: make restarting interrupted syscalls work in Thumb mode.
This isn't exactly right, in the sense that if the SVC instruction
was conditional, then it will be restarted with the condition for the
following instruction. IOW we should back up ITSTATE too, but don't.
This doesn't happen in glibc, though, afaics.
Also tighten up the checks for restarting in ARM mode.
Modified:
trunk/coregrind/m_syswrap/syswrap-main.c
Modified: trunk/coregrind/m_syswrap/syswrap-main.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-main.c 2010-08-22 18:23:29 UTC (rev 11286)
+++ trunk/coregrind/m_syswrap/syswrap-main.c 2010-08-22 22:18:31 UTC (rev 11287)
@@ -1864,18 +1864,42 @@
}
#elif defined(VGP_arm_linux)
- // INTERWORKING FIXME. This is certainly wrong. Need to look at
- // R15T to determine current mode, then back up accordingly.
- arch->vex.guest_R15T -= 4; // sizeof(arm instr)
- {
- UChar *p = (UChar*)arch->vex.guest_R15T;
-
- if ((p[3] & 0xF) != 0xF)
+ if (arch->vex.guest_R15T & 1) {
+ // Thumb mode. SVC is encoded as
+ // 1101 1111 imm8
+ // where imm8 is the SVC number, and we only accept 0.
+ arch->vex.guest_R15T -= 2; // sizeof(thumb 16 bit insn)
+ UChar* p = (UChar*)(arch->vex.guest_R15T - 1);
+ Bool valid = p[0] == 0 && p[1] == 0xDF;
+ if (!valid) {
VG_(message)(Vg_DebugMsg,
- "?! restarting over syscall that is not syscall at %#llx %02x %02x %02x %02x\n",
+ "?! restarting over (Thumb) syscall that is not syscall "
+ "at %#llx %02x %02x\n",
+ arch->vex.guest_R15T - 1ULL, p[0], p[1]);
+ }
+ vg_assert(valid);
+ // FIXME: NOTE, this really isn't right. We need to back up
+ // ITSTATE to what it was before the SVC instruction, but we
+ // don't know what it was. At least assert that it is now
+ // zero, because if it is nonzero then it must also have
+ // been nonzero for the SVC itself, which means it was
+ // conditional. Urk.
+ vg_assert(arch->vex.guest_ITSTATE == 0);
+ } else {
+ // ARM mode. SVC is encoded as
+ // cond 1111 imm24
+ // where imm24 is the SVC number, and we only accept 0.
+ arch->vex.guest_R15T -= 4; // sizeof(arm instr)
+ UChar* p = (UChar*)arch->vex.guest_R15T;
+ Bool valid = p[0] == 0 && p[1] == 0 && p[2] == 0
+ && (p[3] & 0xF) == 0xF;
+ if (!valid) {
+ VG_(message)(Vg_DebugMsg,
+ "?! restarting over (ARM) syscall that is not syscall "
+ "at %#llx %02x %02x %02x %02x\n",
arch->vex.guest_R15T + 0ULL, p[0], p[1], p[2], p[3]);
-
- vg_assert((p[3] & 0xF) == 0xF);
+ }
+ vg_assert(valid);
}
#elif defined(VGP_ppc32_aix5) || defined(VGP_ppc64_aix5)
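The two encodings this commit checks for can be exercised in isolation. A sketch of the SVC #0 recognisers, assuming the convention the patch describes (bit 0 of R15T signals Thumb state); helper names here are illustrative:

```c
#include <stdint.h>

/* Sketch of the patch's SVC #0 checks; names are illustrative.
   Thumb SVC #0 is the 16-bit insn 0xDF00, i.e. bytes 0x00 0xDF in
   memory.  ARM SVC #0 is cond:1111:imm24 with imm24 == 0, i.e. three
   zero bytes followed by a byte whose low nibble is 0xF. */
static int in_thumb_mode(uint32_t r15t)
{
    return (r15t & 1u) != 0;
}

static int is_thumb_svc0(const uint8_t* p)
{
    return p[0] == 0x00 && p[1] == 0xDF;
}

static int is_arm_svc0(const uint8_t* p)
{
    return p[0] == 0 && p[1] == 0 && p[2] == 0 && (p[3] & 0xF) == 0xF;
}
```

Note the asymmetry the commit message apologises for: backing up the PC is easy, but a conditional Thumb SVC would also need ITSTATE restored, which is why the patch asserts ITSTATE is zero rather than attempting it.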
From: <sv...@va...> - 2010-08-22 18:47:38
Author: sewardj
Date: 2010-08-22 19:47:30 +0100 (Sun, 22 Aug 2010)
New Revision: 2018
Log:
Fix some compiler complaints when building on 64-bit platforms.
Modified:
trunk/priv/guest_arm_toIR.c
Modified: trunk/priv/guest_arm_toIR.c
===================================================================
--- trunk/priv/guest_arm_toIR.c 2010-08-22 18:24:51 UTC (rev 2017)
+++ trunk/priv/guest_arm_toIR.c 2010-08-22 18:47:30 UTC (rev 2018)
@@ -11847,7 +11847,7 @@
insn. So, have a look at them. */
forceZ = True; /* assume no 'it' insn found, till we do */
- UShort* hwp = (UShort*)pc;
+ UShort* hwp = (UShort*)(HWord)pc;
Int i;
for (i = -1; i >= -9; i--) {
/* We're in the same page. (True, but commented out due
@@ -12863,7 +12863,7 @@
UInt rD = INSN0(10,8);
UInt imm8 = INSN0(7,0);
putIRegT(rD, binop(Iop_Add32,
- binop(Iop_And32, getIRegT(15), mkU32(~3UL)),
+ binop(Iop_And32, getIRegT(15), mkU32(~3U)),
mkU32(imm8 * 4)),
condT);
DIP("add r%u, pc, #%u\n", rD, imm8 * 4);
@@ -12923,7 +12923,7 @@
// now uncond
assign(ea, binop(Iop_Add32,
- binop(Iop_And32, getIRegT(15), mkU32(~3UL)),
+ binop(Iop_And32, getIRegT(15), mkU32(~3U)),
mkU32(imm8 * 4)));
put_ITSTATE(old_itstate); // backout
putIRegT(rD, loadLE(Ity_I32, mkexpr(ea)),
@@ -13058,7 +13058,7 @@
IRTemp oldRn = newTemp(Ity_I32);
IRTemp base = newTemp(Ity_I32);
assign(oldRn, getIRegT(rN));
- assign(base, binop(Iop_And32, mkexpr(oldRn), mkU32(~3UL)));
+ assign(base, binop(Iop_And32, mkexpr(oldRn), mkU32(~3U)));
for (i = 0; i < 8; i++) {
if (0 == (list & (1 << i)))
continue;
@@ -13112,7 +13112,7 @@
IRTemp oldRn = newTemp(Ity_I32);
IRTemp base = newTemp(Ity_I32);
assign(oldRn, getIRegT(rN));
- assign(base, binop(Iop_And32, mkexpr(oldRn), mkU32(~3UL)));
+ assign(base, binop(Iop_And32, mkexpr(oldRn), mkU32(~3U)));
for (i = 0; i < 8; i++) {
if (0 == (list & (1 << i)))
continue;
@@ -14865,7 +14865,7 @@
UInt imm32 = (INSN0(10,10) << 11)
| (INSN1(14,12) << 8) | INSN1(7,0);
putIRegT(rD, binop(Iop_Add32,
- binop(Iop_And32, getIRegT(15), mkU32(~3UL)),
+ binop(Iop_And32, getIRegT(15), mkU32(~3U)),
mkU32(imm32)),
condT);
DIP("add r%u, pc, #%u\n", rD, imm32);
@@ -14917,7 +14917,7 @@
UInt imm32 = (INSN0(10,10) << 11)
| (INSN1(14,12) << 8) | INSN1(7,0);
putIRegT(rD, binop(Iop_Sub32,
- binop(Iop_And32, getIRegT(15), mkU32(~3UL)),
+ binop(Iop_And32, getIRegT(15), mkU32(~3U)),
mkU32(imm32)),
condT);
DIP("sub r%u, pc, #%u\n", rD, imm32);
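The recurring `~3UL` → `~3U` change is the substance of this commit: on an LP64 host, `unsigned long` is 64 bits, so `~3UL` is 0xFFFFFFFFFFFFFFFC and must be truncated to fit the 32-bit constant `mkU32` expects, which draws compiler warnings; `~3U` is already the intended 32-bit mask. A quick sanity check, relying only on standard C integer rules:

```c
#include <stdint.h>

/* Word-align a 32-bit PC value, as the mkU32(~3U) operand in the diff
   does.  ~3U is 0xFFFFFFFC, exactly the mask a 32-bit IR constant
   needs; on an LP64 host ~3UL would instead be the 64-bit value
   0xFFFFFFFFFFFFFFFC. */
static uint32_t align_word(uint32_t pc)
{
    return pc & ~3u;
}
```

The same reasoning explains the `(UShort*)(HWord)pc` change: casting a 32-bit value straight to a pointer on a 64-bit host triggers a cast-size warning, so it is widened to the host word type first.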
From: <sv...@va...> - 2010-08-22 18:24:59
Author: sewardj
Date: 2010-08-22 19:24:51 +0100 (Sun, 22 Aug 2010)
New Revision: 2017
Log:
Fix various compiler warnings and remove an unused function.
Modified:
trunk/priv/guest_arm_helpers.c
trunk/priv/guest_arm_toIR.c
trunk/priv/host_arm_isel.c
Modified: trunk/priv/guest_arm_helpers.c
===================================================================
--- trunk/priv/guest_arm_helpers.c 2010-08-22 12:59:02 UTC (rev 2016)
+++ trunk/priv/guest_arm_helpers.c 2010-08-22 18:24:51 UTC (rev 2017)
@@ -205,8 +205,9 @@
}
/* CALLED FROM GENERATED CODE: CLEAN HELPER */
-/* Calculate the QC flag from the thunk components, in the lowest bit
- of the word (bit 0). */
+/* Calculate the QC flag from the arguments, in the lowest bit
+ of the word (bit 0). Urr, having this out of line is bizarre.
+ Push back inline. */
UInt armg_calculate_flag_qc ( UInt resL1, UInt resL2,
UInt resR1, UInt resR2 )
{
@@ -216,20 +217,6 @@
return 0;
}
-UInt armg_calculate_flag_idc ( UInt res1, UInt res2,
- UInt res3, UInt res4 )
-{
- UInt exp1 = (res1 >> 23) & 0xff;
- UInt exp2 = (res2 >> 23) & 0xff;
- UInt exp3 = (res3 >> 23) & 0xff;
- UInt exp4 = (res4 >> 23) & 0xff;
- if ((exp1 == 0) || (exp2 == 0) || (exp3 == 0) || (exp3 == 0))
- return 1;
- else
- return 0;
-}
-
-
/* CALLED FROM GENERATED CODE: CLEAN HELPER */
/* Calculate the specified condition from the thunk components, in the
lowest bit of the word (bit 0). */
Modified: trunk/priv/guest_arm_toIR.c
===================================================================
--- trunk/priv/guest_arm_toIR.c 2010-08-22 12:59:02 UTC (rev 2016)
+++ trunk/priv/guest_arm_toIR.c 2010-08-22 18:24:51 UTC (rev 2017)
@@ -1186,6 +1186,8 @@
return res;
}
+// FIXME: this is named wrongly .. looks like a sticky set of
+// QC, not a write to it.
static void setFlag_QC ( IRExpr* resL, IRExpr* resR, Bool Q,
IRTemp condT )
{
@@ -2570,6 +2572,8 @@
op2 = Iop_GetElem32x2;
index = imm4 >> 3;
size = 32;
+ } else {
+ return False; // can this ever happen?
}
assign(res, unop(op, binop(op2, mkexpr(arg_m), mkU8(index))));
if (Q) {
Modified: trunk/priv/host_arm_isel.c
===================================================================
--- trunk/priv/host_arm_isel.c 2010-08-22 12:59:02 UTC (rev 2016)
+++ trunk/priv/host_arm_isel.c 2010-08-22 18:24:51 UTC (rev 2017)
@@ -182,37 +182,12 @@
return IRExpr_Binop(op, a1, a2);
}
-static IRExpr* triop ( IROp op, IRExpr* a1, IRExpr* a2, IRExpr* a3 )
-{
- return IRExpr_Triop(op, a1, a2, a3);
-}
-
static IRExpr* bind ( Int binder )
{
return IRExpr_Binder(binder);
}
-static IRExpr* mkU64 ( ULong i )
-{
- return IRExpr_Const(IRConst_U64(i));
-}
-static IRExpr* mkU32 ( UInt i )
-{
- return IRExpr_Const(IRConst_U32(i));
-}
-
-static IRExpr* mkU8 ( UInt i )
-{
- vassert(i < 256);
- return IRExpr_Const(IRConst_U8( (UChar)i ));
-}
-
-static IRExpr* mkU128 ( ULong i )
-{
- return binop(Iop_64HLtoV128, mkU64(i), mkU64(i));
-}
-
/*---------------------------------------------------------*/
/*--- ISEL: Forward declarations ---*/
/*---------------------------------------------------------*/
From: <sv...@va...> - 2010-08-22 18:23:38
Author: sewardj
Date: 2010-08-22 19:23:29 +0100 (Sun, 22 Aug 2010)
New Revision: 11286
Log:
Add tests for all {ARM,Thumb} x {integer,NEON} instructions.
Added:
trunk/none/tests/arm/neon128.stderr.exp
trunk/none/tests/arm/neon128.stdout.exp
trunk/none/tests/arm/neon128.vgtest
trunk/none/tests/arm/neon64.stderr.exp
trunk/none/tests/arm/neon64.stdout.exp
trunk/none/tests/arm/neon64.vgtest
trunk/none/tests/arm/v6intARM.c
trunk/none/tests/arm/v6intARM.stderr.exp
trunk/none/tests/arm/v6intARM.stdout.exp
trunk/none/tests/arm/v6intARM.vgtest
trunk/none/tests/arm/v6intThumb.stderr.exp
trunk/none/tests/arm/v6intThumb.stdout.exp
trunk/none/tests/arm/v6intThumb.vgtest
Removed:
trunk/none/tests/arm/v6int.c
trunk/none/tests/arm/v6int.stderr.exp
trunk/none/tests/arm/v6int.stdout.exp
trunk/none/tests/arm/v6int.vgtest
Modified:
trunk/none/tests/arm/Makefile.am
[... diff too large to include ...]
From: <sv...@va...> - 2010-08-22 12:59:11
Author: sewardj
Date: 2010-08-22 13:59:02 +0100 (Sun, 22 Aug 2010)
New Revision: 2016
Log:
Merge from branches/THUMB: new IR primops and associated
infrastructure, needed to represent NEON instructions. Way more new
ones than I would like, but I can't see a way to avoid having them.
Modified:
trunk/priv/ir_defs.c
trunk/pub/libvex_ir.h
trunk/test_main.c
Modified: trunk/priv/ir_defs.c
===================================================================
--- trunk/priv/ir_defs.c 2010-08-22 12:54:56 UTC (rev 2015)
+++ trunk/priv/ir_defs.c 2010-08-22 12:59:02 UTC (rev 2016)
@@ -313,48 +313,125 @@
case Iop_I32UtoFx4: vex_printf("I32UtoFx4"); return;
case Iop_I32StoFx4: vex_printf("I32StoFx4"); return;
+ case Iop_F32toF16x4: vex_printf("F32toF16x4"); return;
+ case Iop_F16toF32x4: vex_printf("F16toF32x4"); return;
+
+ case Iop_Rsqrte32Fx4: vex_printf("VRsqrte32Fx4"); return;
+ case Iop_Rsqrte32x4: vex_printf("VRsqrte32x4"); return;
+ case Iop_Rsqrte32Fx2: vex_printf("VRsqrte32Fx2"); return;
+ case Iop_Rsqrte32x2: vex_printf("VRsqrte32x2"); return;
+
case Iop_QFtoI32Ux4_RZ: vex_printf("QFtoI32Ux4_RZ"); return;
case Iop_QFtoI32Sx4_RZ: vex_printf("QFtoI32Sx4_RZ"); return;
+ case Iop_FtoI32Ux4_RZ: vex_printf("FtoI32Ux4_RZ"); return;
+ case Iop_FtoI32Sx4_RZ: vex_printf("FtoI32Sx4_RZ"); return;
+
+ case Iop_I32UtoFx2: vex_printf("I32UtoFx2"); return;
+ case Iop_I32StoFx2: vex_printf("I32StoFx2"); return;
+
+ case Iop_FtoI32Ux2_RZ: vex_printf("FtoI32Ux2_RZ"); return;
+ case Iop_FtoI32Sx2_RZ: vex_printf("FtoI32Sx2_RZ"); return;
+
case Iop_RoundF32x4_RM: vex_printf("RoundF32x4_RM"); return;
case Iop_RoundF32x4_RP: vex_printf("RoundF32x4_RP"); return;
case Iop_RoundF32x4_RN: vex_printf("RoundF32x4_RN"); return;
case Iop_RoundF32x4_RZ: vex_printf("RoundF32x4_RZ"); return;
+ case Iop_Abs8x8: vex_printf("Abs8x8"); return;
+ case Iop_Abs16x4: vex_printf("Abs16x4"); return;
+ case Iop_Abs32x2: vex_printf("Abs32x2"); return;
case Iop_Add8x8: vex_printf("Add8x8"); return;
case Iop_Add16x4: vex_printf("Add16x4"); return;
case Iop_Add32x2: vex_printf("Add32x2"); return;
case Iop_QAdd8Ux8: vex_printf("QAdd8Ux8"); return;
case Iop_QAdd16Ux4: vex_printf("QAdd16Ux4"); return;
+ case Iop_QAdd32Ux2: vex_printf("QAdd32Ux2"); return;
+ case Iop_QAdd64Ux1: vex_printf("QAdd64Ux1"); return;
case Iop_QAdd8Sx8: vex_printf("QAdd8Sx8"); return;
case Iop_QAdd16Sx4: vex_printf("QAdd16Sx4"); return;
+ case Iop_QAdd32Sx2: vex_printf("QAdd32Sx2"); return;
+ case Iop_QAdd64Sx1: vex_printf("QAdd64Sx1"); return;
+ case Iop_PwAdd8x8: vex_printf("PwAdd8x8"); return;
+ case Iop_PwAdd16x4: vex_printf("PwAdd16x4"); return;
+ case Iop_PwAdd32x2: vex_printf("PwAdd32x2"); return;
+ case Iop_PwAdd32Fx2: vex_printf("PwAdd32Fx2"); return;
+ case Iop_PwAddL8Ux8: vex_printf("PwAddL8Ux8"); return;
+ case Iop_PwAddL16Ux4: vex_printf("PwAddL16Ux4"); return;
+ case Iop_PwAddL32Ux2: vex_printf("PwAddL32Ux2"); return;
+ case Iop_PwAddL8Sx8: vex_printf("PwAddL8Sx8"); return;
+ case Iop_PwAddL16Sx4: vex_printf("PwAddL16Sx4"); return;
+ case Iop_PwAddL32Sx2: vex_printf("PwAddL32Sx2"); return;
case Iop_Sub8x8: vex_printf("Sub8x8"); return;
case Iop_Sub16x4: vex_printf("Sub16x4"); return;
case Iop_Sub32x2: vex_printf("Sub32x2"); return;
case Iop_QSub8Ux8: vex_printf("QSub8Ux8"); return;
case Iop_QSub16Ux4: vex_printf("QSub16Ux4"); return;
+ case Iop_QSub32Ux2: vex_printf("QSub32Ux2"); return;
+ case Iop_QSub64Ux1: vex_printf("QSub64Ux1"); return;
case Iop_QSub8Sx8: vex_printf("QSub8Sx8"); return;
case Iop_QSub16Sx4: vex_printf("QSub16Sx4"); return;
+ case Iop_QSub32Sx2: vex_printf("QSub32Sx2"); return;
+ case Iop_QSub64Sx1: vex_printf("QSub64Sx1"); return;
+ case Iop_Mul8x8: vex_printf("Mul8x8"); return;
case Iop_Mul16x4: vex_printf("Mul16x4"); return;
case Iop_Mul32x2: vex_printf("Mul32x2"); return;
- case Iop_Mul32x4: vex_printf("Mul32x4"); return;
+ case Iop_Mul32Fx2: vex_printf("Mul32Fx2"); return;
+ case Iop_PolynomialMul8x8: vex_printf("PolynomialMul8x8"); return;
case Iop_MulHi16Ux4: vex_printf("MulHi16Ux4"); return;
case Iop_MulHi16Sx4: vex_printf("MulHi16Sx4"); return;
+ case Iop_QDMulHi16Sx4: vex_printf("QDMulHi16Sx4"); return;
+ case Iop_QDMulHi32Sx2: vex_printf("QDMulHi32Sx2"); return;
+ case Iop_QRDMulHi16Sx4: vex_printf("QRDMulHi16Sx4"); return;
+ case Iop_QRDMulHi32Sx2: vex_printf("QRDMulHi32Sx2"); return;
+ case Iop_QDMulLong16Sx4: vex_printf("QDMulLong16Sx4"); return;
+ case Iop_QDMulLong32Sx2: vex_printf("QDMulLong32Sx2"); return;
case Iop_Avg8Ux8: vex_printf("Avg8Ux8"); return;
case Iop_Avg16Ux4: vex_printf("Avg16Ux4"); return;
+ case Iop_Max8Sx8: vex_printf("Max8Sx8"); return;
case Iop_Max16Sx4: vex_printf("Max16Sx4"); return;
+ case Iop_Max32Sx2: vex_printf("Max32Sx2"); return;
case Iop_Max8Ux8: vex_printf("Max8Ux8"); return;
+ case Iop_Max16Ux4: vex_printf("Max16Ux4"); return;
+ case Iop_Max32Ux2: vex_printf("Max32Ux2"); return;
+ case Iop_Min8Sx8: vex_printf("Min8Sx8"); return;
case Iop_Min16Sx4: vex_printf("Min16Sx4"); return;
+ case Iop_Min32Sx2: vex_printf("Min32Sx2"); return;
case Iop_Min8Ux8: vex_printf("Min8Ux8"); return;
+ case Iop_Min16Ux4: vex_printf("Min16Ux4"); return;
+ case Iop_Min32Ux2: vex_printf("Min32Ux2"); return;
+ case Iop_PwMax8Sx8: vex_printf("PwMax8Sx8"); return;
+ case Iop_PwMax16Sx4: vex_printf("PwMax16Sx4"); return;
+ case Iop_PwMax32Sx2: vex_printf("PwMax32Sx2"); return;
+ case Iop_PwMax8Ux8: vex_printf("PwMax8Ux8"); return;
+ case Iop_PwMax16Ux4: vex_printf("PwMax16Ux4"); return;
+ case Iop_PwMax32Ux2: vex_printf("PwMax32Ux2"); return;
+ case Iop_PwMin8Sx8: vex_printf("PwMin8Sx8"); return;
+ case Iop_PwMin16Sx4: vex_printf("PwMin16Sx4"); return;
+ case Iop_PwMin32Sx2: vex_printf("PwMin32Sx2"); return;
+ case Iop_PwMin8Ux8: vex_printf("PwMin8Ux8"); return;
+ case Iop_PwMin16Ux4: vex_printf("PwMin16Ux4"); return;
+ case Iop_PwMin32Ux2: vex_printf("PwMin32Ux2"); return;
case Iop_CmpEQ8x8: vex_printf("CmpEQ8x8"); return;
case Iop_CmpEQ16x4: vex_printf("CmpEQ16x4"); return;
case Iop_CmpEQ32x2: vex_printf("CmpEQ32x2"); return;
+ case Iop_CmpGT8Ux8: vex_printf("CmpGT8Ux8"); return;
+ case Iop_CmpGT16Ux4: vex_printf("CmpGT16Ux4"); return;
+ case Iop_CmpGT32Ux2: vex_printf("CmpGT32Ux2"); return;
case Iop_CmpGT8Sx8: vex_printf("CmpGT8Sx8"); return;
case Iop_CmpGT16Sx4: vex_printf("CmpGT16Sx4"); return;
case Iop_CmpGT32Sx2: vex_printf("CmpGT32Sx2"); return;
+ case Iop_Cnt8x8: vex_printf("Cnt8x8"); return;
+ case Iop_Clz8Sx8: vex_printf("Clz8Sx8"); return;
+ case Iop_Clz16Sx4: vex_printf("Clz16Sx4"); return;
+ case Iop_Clz32Sx2: vex_printf("Clz32Sx2"); return;
+ case Iop_Cls8Sx8: vex_printf("Cls8Sx8"); return;
+ case Iop_Cls16Sx4: vex_printf("Cls16Sx4"); return;
+ case Iop_Cls32Sx2: vex_printf("Cls32Sx2"); return;
case Iop_ShlN8x8: vex_printf("ShlN8x8"); return;
case Iop_ShlN16x4: vex_printf("ShlN16x4"); return;
case Iop_ShlN32x2: vex_printf("ShlN32x2"); return;
+ case Iop_ShrN8x8: vex_printf("ShrN8x8"); return;
case Iop_ShrN16x4: vex_printf("ShrN16x4"); return;
case Iop_ShrN32x2: vex_printf("ShrN32x2"); return;
case Iop_SarN8x8: vex_printf("SarN8x8"); return;
@@ -369,15 +446,62 @@
case Iop_InterleaveLO8x8: vex_printf("InterleaveLO8x8"); return;
case Iop_InterleaveLO16x4: vex_printf("InterleaveLO16x4"); return;
case Iop_InterleaveLO32x2: vex_printf("InterleaveLO32x2"); return;
+ case Iop_CatOddLanes8x8: vex_printf("CatOddLanes8x8"); return;
case Iop_CatOddLanes16x4: vex_printf("CatOddLanes16x4"); return;
+ case Iop_CatEvenLanes8x8: vex_printf("CatEvenLanes8x8"); return;
case Iop_CatEvenLanes16x4: vex_printf("CatEvenLanes16x4"); return;
+ case Iop_InterleaveOddLanes8x8: vex_printf("InterleaveOddLanes8x8"); return;
+ case Iop_InterleaveOddLanes16x4: vex_printf("InterleaveOddLanes16x4"); return;
+ case Iop_InterleaveEvenLanes8x8: vex_printf("InterleaveEvenLanes8x8"); return;
+ case Iop_InterleaveEvenLanes16x4: vex_printf("InterleaveEvenLanes16x4"); return;
+ case Iop_Shl8x8: vex_printf("Shl8x8"); return;
+ case Iop_Shl16x4: vex_printf("Shl16x4"); return;
+ case Iop_Shl32x2: vex_printf("Shl32x2"); return;
+ case Iop_Shr8x8: vex_printf("Shr8x8"); return;
+ case Iop_Shr16x4: vex_printf("Shr16x4"); return;
+ case Iop_Shr32x2: vex_printf("Shr32x2"); return;
+ case Iop_QShl8x8: vex_printf("QShl8x8"); return;
+ case Iop_QShl16x4: vex_printf("QShl16x4"); return;
+ case Iop_QShl32x2: vex_printf("QShl32x2"); return;
+ case Iop_QShl64x1: vex_printf("QShl64x1"); return;
+ case Iop_QSal8x8: vex_printf("QSal8x8"); return;
+ case Iop_QSal16x4: vex_printf("QSal16x4"); return;
+ case Iop_QSal32x2: vex_printf("QSal32x2"); return;
+ case Iop_QSal64x1: vex_printf("QSal64x1"); return;
+ case Iop_QShlN8x8: vex_printf("QShlN8x8"); return;
+ case Iop_QShlN16x4: vex_printf("QShlN16x4"); return;
+ case Iop_QShlN32x2: vex_printf("QShlN32x2"); return;
+ case Iop_QShlN64x1: vex_printf("QShlN64x1"); return;
+ case Iop_QShlN8Sx8: vex_printf("QShlN8Sx8"); return;
+ case Iop_QShlN16Sx4: vex_printf("QShlN16Sx4"); return;
+ case Iop_QShlN32Sx2: vex_printf("QShlN32Sx2"); return;
+ case Iop_QShlN64Sx1: vex_printf("QShlN64Sx1"); return;
+ case Iop_QSalN8x8: vex_printf("QSalN8x8"); return;
+ case Iop_QSalN16x4: vex_printf("QSalN16x4"); return;
+ case Iop_QSalN32x2: vex_printf("QSalN32x2"); return;
+ case Iop_QSalN64x1: vex_printf("QSalN64x1"); return;
+ case Iop_Sar8x8: vex_printf("Sar8x8"); return;
+ case Iop_Sar16x4: vex_printf("Sar16x4"); return;
+ case Iop_Sar32x2: vex_printf("Sar32x2"); return;
+ case Iop_Sal8x8: vex_printf("Sal8x8"); return;
+ case Iop_Sal16x4: vex_printf("Sal16x4"); return;
+ case Iop_Sal32x2: vex_printf("Sal32x2"); return;
+ case Iop_Sal64x1: vex_printf("Sal64x1"); return;
case Iop_Perm8x8: vex_printf("Perm8x8"); return;
+ case Iop_Reverse16_8x8: vex_printf("Reverse16_8x8"); return;
+ case Iop_Reverse32_8x8: vex_printf("Reverse32_8x8"); return;
+ case Iop_Reverse32_16x4: vex_printf("Reverse32_16x4"); return;
+ case Iop_Reverse64_8x8: vex_printf("Reverse64_8x8"); return;
+ case Iop_Reverse64_16x4: vex_printf("Reverse64_16x4"); return;
+ case Iop_Reverse64_32x2: vex_printf("Reverse64_32x2"); return;
+ case Iop_Abs32Fx2: vex_printf("Abs32Fx2"); return;
case Iop_CmpNEZ32x2: vex_printf("CmpNEZ32x2"); return;
case Iop_CmpNEZ16x4: vex_printf("CmpNEZ16x4"); return;
case Iop_CmpNEZ8x8: vex_printf("CmpNEZ8x8"); return;
case Iop_Add32Fx4: vex_printf("Add32Fx4"); return;
+ case Iop_Add32Fx2: vex_printf("Add32Fx2"); return;
case Iop_Add32F0x4: vex_printf("Add32F0x4"); return;
case Iop_Add64Fx2: vex_printf("Add64Fx2"); return;
case Iop_Add64F0x2: vex_printf("Add64F0x2"); return;
@@ -388,11 +512,17 @@
case Iop_Div64F0x2: vex_printf("Div64F0x2"); return;
case Iop_Max32Fx4: vex_printf("Max32Fx4"); return;
+ case Iop_Max32Fx2: vex_printf("Max32Fx2"); return;
+ case Iop_PwMax32Fx4: vex_printf("PwMax32Fx4"); return;
+ case Iop_PwMax32Fx2: vex_printf("PwMax32Fx2"); return;
case Iop_Max32F0x4: vex_printf("Max32F0x4"); return;
case Iop_Max64Fx2: vex_printf("Max64Fx2"); return;
case Iop_Max64F0x2: vex_printf("Max64F0x2"); return;
case Iop_Min32Fx4: vex_printf("Min32Fx4"); return;
+ case Iop_Min32Fx2: vex_printf("Min32Fx2"); return;
+ case Iop_PwMin32Fx4: vex_printf("PwMin32Fx4"); return;
+ case Iop_PwMin32Fx2: vex_printf("PwMin32Fx2"); return;
case Iop_Min32F0x4: vex_printf("Min32F0x4"); return;
case Iop_Min64Fx2: vex_printf("Min64Fx2"); return;
case Iop_Min64F0x2: vex_printf("Min64F0x2"); return;
@@ -402,10 +532,18 @@
case Iop_Mul64Fx2: vex_printf("Mul64Fx2"); return;
case Iop_Mul64F0x2: vex_printf("Mul64F0x2"); return;
+ case Iop_Recip32x2: vex_printf("Recip32x2"); return;
+ case Iop_Recip32Fx2: vex_printf("Recip32Fx2"); return;
case Iop_Recip32Fx4: vex_printf("Recip32Fx4"); return;
+ case Iop_Recip32x4: vex_printf("Recip32x4"); return;
case Iop_Recip32F0x4: vex_printf("Recip32F0x4"); return;
case Iop_Recip64Fx2: vex_printf("Recip64Fx2"); return;
case Iop_Recip64F0x2: vex_printf("Recip64F0x2"); return;
+ case Iop_Recps32Fx2: vex_printf("VRecps32Fx2"); return;
+ case Iop_Recps32Fx4: vex_printf("VRecps32Fx4"); return;
+ case Iop_Abs32Fx4: vex_printf("Abs32Fx4"); return;
+ case Iop_Rsqrts32Fx4: vex_printf("VRsqrts32Fx4"); return;
+ case Iop_Rsqrts32Fx2: vex_printf("VRsqrts32Fx2"); return;
case Iop_RSqrt32Fx4: vex_printf("RSqrt32Fx4"); return;
case Iop_RSqrt32F0x4: vex_printf("RSqrt32F0x4"); return;
@@ -418,6 +556,7 @@
case Iop_Sqrt64F0x2: vex_printf("Sqrt64F0x2"); return;
case Iop_Sub32Fx4: vex_printf("Sub32Fx4"); return;
+ case Iop_Sub32Fx2: vex_printf("Sub32Fx2"); return;
case Iop_Sub32F0x4: vex_printf("Sub32F0x4"); return;
case Iop_Sub64Fx2: vex_printf("Sub64Fx2"); return;
case Iop_Sub64F0x2: vex_printf("Sub64F0x2"); return;
@@ -432,6 +571,9 @@
case Iop_CmpLT64Fx2: vex_printf("CmpLT64Fx2"); return;
case Iop_CmpLE64Fx2: vex_printf("CmpLE64Fx2"); return;
case Iop_CmpUN64Fx2: vex_printf("CmpUN64Fx2"); return;
+ case Iop_CmpGT32Fx2: vex_printf("CmpGT32Fx2"); return;
+ case Iop_CmpEQ32Fx2: vex_printf("CmpEQ32Fx2"); return;
+ case Iop_CmpGE32Fx2: vex_printf("CmpGE32Fx2"); return;
case Iop_CmpEQ32F0x4: vex_printf("CmpEQ32F0x4"); return;
case Iop_CmpLT32F0x4: vex_printf("CmpLT32F0x4"); return;
@@ -442,6 +584,9 @@
case Iop_CmpLE64F0x2: vex_printf("CmpLE64F0x2"); return;
case Iop_CmpUN64F0x2: vex_printf("CmpUN64F0x2"); return;
+ case Iop_Neg32Fx4: vex_printf("Neg32Fx4"); return;
+ case Iop_Neg32Fx2: vex_printf("Neg32Fx2"); return;
+
case Iop_V128to64: vex_printf("V128to64"); return;
case Iop_V128HIto64: vex_printf("V128HIto64"); return;
case Iop_64HLtoV128: vex_printf("64HLtoV128"); return;
@@ -456,6 +601,9 @@
case Iop_Dup8x16: vex_printf("Dup8x16"); return;
case Iop_Dup16x8: vex_printf("Dup16x8"); return;
case Iop_Dup32x4: vex_printf("Dup32x4"); return;
+ case Iop_Dup8x8: vex_printf("Dup8x8"); return;
+ case Iop_Dup16x4: vex_printf("Dup16x4"); return;
+ case Iop_Dup32x2: vex_printf("Dup32x2"); return;
case Iop_NotV128: vex_printf("NotV128"); return;
case Iop_AndV128: vex_printf("AndV128"); return;
@@ -467,6 +615,10 @@
case Iop_CmpNEZ32x4: vex_printf("CmpNEZ32x4"); return;
case Iop_CmpNEZ64x2: vex_printf("CmpNEZ64x2"); return;
+ case Iop_Abs8x16: vex_printf("Abs8x16"); return;
+ case Iop_Abs16x8: vex_printf("Abs16x8"); return;
+ case Iop_Abs32x4: vex_printf("Abs32x4"); return;
+
case Iop_Add8x16: vex_printf("Add8x16"); return;
case Iop_Add16x8: vex_printf("Add16x8"); return;
case Iop_Add32x4: vex_printf("Add32x4"); return;
@@ -477,6 +629,17 @@
case Iop_QAdd8Sx16: vex_printf("QAdd8Sx16"); return;
case Iop_QAdd16Sx8: vex_printf("QAdd16Sx8"); return;
case Iop_QAdd32Sx4: vex_printf("QAdd32Sx4"); return;
+ case Iop_QAdd64Ux2: vex_printf("QAdd64Ux2"); return;
+ case Iop_QAdd64Sx2: vex_printf("QAdd64Sx2"); return;
+ case Iop_PwAdd8x16: vex_printf("PwAdd8x16"); return;
+ case Iop_PwAdd16x8: vex_printf("PwAdd16x8"); return;
+ case Iop_PwAdd32x4: vex_printf("PwAdd32x4"); return;
+ case Iop_PwAddL8Ux16: vex_printf("PwAddL8Ux16"); return;
+ case Iop_PwAddL16Ux8: vex_printf("PwAddL16Ux8"); return;
+ case Iop_PwAddL32Ux4: vex_printf("PwAddL32Ux4"); return;
+ case Iop_PwAddL8Sx16: vex_printf("PwAddL8Sx16"); return;
+ case Iop_PwAddL16Sx8: vex_printf("PwAddL16Sx8"); return;
+ case Iop_PwAddL32Sx4: vex_printf("PwAddL32Sx4"); return;
case Iop_Sub8x16: vex_printf("Sub8x16"); return;
case Iop_Sub16x8: vex_printf("Sub16x8"); return;
@@ -488,12 +651,28 @@
case Iop_QSub8Sx16: vex_printf("QSub8Sx16"); return;
case Iop_QSub16Sx8: vex_printf("QSub16Sx8"); return;
case Iop_QSub32Sx4: vex_printf("QSub32Sx4"); return;
+ case Iop_QSub64Ux2: vex_printf("QSub64Ux2"); return;
+ case Iop_QSub64Sx2: vex_printf("QSub64Sx2"); return;
+ case Iop_Mul8x16: vex_printf("Mul8x16"); return;
case Iop_Mul16x8: vex_printf("Mul16x8"); return;
+ case Iop_Mul32x4: vex_printf("Mul32x4"); return;
+ case Iop_Mull8Ux8: vex_printf("Mull8Ux8"); return;
+ case Iop_Mull8Sx8: vex_printf("Mull8Sx8"); return;
+ case Iop_Mull16Ux4: vex_printf("Mull16Ux4"); return;
+ case Iop_Mull16Sx4: vex_printf("Mull16Sx4"); return;
+ case Iop_Mull32Ux2: vex_printf("Mull32Ux2"); return;
+ case Iop_Mull32Sx2: vex_printf("Mull32Sx2"); return;
+ case Iop_PolynomialMul8x16: vex_printf("PolynomialMul8x16"); return;
+ case Iop_PolynomialMull8x8: vex_printf("PolynomialMull8x8"); return;
case Iop_MulHi16Ux8: vex_printf("MulHi16Ux8"); return;
case Iop_MulHi32Ux4: vex_printf("MulHi32Ux4"); return;
case Iop_MulHi16Sx8: vex_printf("MulHi16Sx8"); return;
case Iop_MulHi32Sx4: vex_printf("MulHi32Sx4"); return;
+ case Iop_QDMulHi16Sx8: vex_printf("QDMulHi16Sx8"); return;
+ case Iop_QDMulHi32Sx4: vex_printf("QDMulHi32Sx4"); return;
+ case Iop_QRDMulHi16Sx8: vex_printf("QRDMulHi16Sx8"); return;
+ case Iop_QRDMulHi32Sx4: vex_printf("QRDMulHi32Sx4"); return;
case Iop_MullEven8Ux16: vex_printf("MullEven8Ux16"); return;
case Iop_MullEven16Ux8: vex_printf("MullEven16Ux8"); return;
@@ -532,6 +711,14 @@
case Iop_CmpGT16Ux8: vex_printf("CmpGT16Ux8"); return;
case Iop_CmpGT32Ux4: vex_printf("CmpGT32Ux4"); return;
+ case Iop_Cnt8x16: vex_printf("Cnt8x16"); return;
+ case Iop_Clz8Sx16: vex_printf("Clz8Sx16"); return;
+ case Iop_Clz16Sx8: vex_printf("Clz16Sx8"); return;
+ case Iop_Clz32Sx4: vex_printf("Clz32Sx4"); return;
+ case Iop_Cls8Sx16: vex_printf("Cls8Sx16"); return;
+ case Iop_Cls16Sx8: vex_printf("Cls16Sx8"); return;
+ case Iop_Cls32Sx4: vex_printf("Cls32Sx4"); return;
+
case Iop_ShlV128: vex_printf("ShlV128"); return;
case Iop_ShrV128: vex_printf("ShrV128"); return;
@@ -546,16 +733,44 @@
case Iop_SarN8x16: vex_printf("SarN8x16"); return;
case Iop_SarN16x8: vex_printf("SarN16x8"); return;
case Iop_SarN32x4: vex_printf("SarN32x4"); return;
+ case Iop_SarN64x2: vex_printf("SarN64x2"); return;
case Iop_Shl8x16: vex_printf("Shl8x16"); return;
case Iop_Shl16x8: vex_printf("Shl16x8"); return;
case Iop_Shl32x4: vex_printf("Shl32x4"); return;
+ case Iop_Shl64x2: vex_printf("Shl64x2"); return;
+ case Iop_QSal8x16: vex_printf("QSal8x16"); return;
+ case Iop_QSal16x8: vex_printf("QSal16x8"); return;
+ case Iop_QSal32x4: vex_printf("QSal32x4"); return;
+ case Iop_QSal64x2: vex_printf("QSal64x2"); return;
+ case Iop_QShl8x16: vex_printf("QShl8x16"); return;
+ case Iop_QShl16x8: vex_printf("QShl16x8"); return;
+ case Iop_QShl32x4: vex_printf("QShl32x4"); return;
+ case Iop_QShl64x2: vex_printf("QShl64x2"); return;
+ case Iop_QSalN8x16: vex_printf("QSalN8x16"); return;
+ case Iop_QSalN16x8: vex_printf("QSalN16x8"); return;
+ case Iop_QSalN32x4: vex_printf("QSalN32x4"); return;
+ case Iop_QSalN64x2: vex_printf("QSalN64x2"); return;
+ case Iop_QShlN8x16: vex_printf("QShlN8x16"); return;
+ case Iop_QShlN16x8: vex_printf("QShlN16x8"); return;
+ case Iop_QShlN32x4: vex_printf("QShlN32x4"); return;
+ case Iop_QShlN64x2: vex_printf("QShlN64x2"); return;
+ case Iop_QShlN8Sx16: vex_printf("QShlN8Sx16"); return;
+ case Iop_QShlN16Sx8: vex_printf("QShlN16Sx8"); return;
+ case Iop_QShlN32Sx4: vex_printf("QShlN32Sx4"); return;
+ case Iop_QShlN64Sx2: vex_printf("QShlN64Sx2"); return;
case Iop_Shr8x16: vex_printf("Shr8x16"); return;
case Iop_Shr16x8: vex_printf("Shr16x8"); return;
case Iop_Shr32x4: vex_printf("Shr32x4"); return;
+ case Iop_Shr64x2: vex_printf("Shr64x2"); return;
case Iop_Sar8x16: vex_printf("Sar8x16"); return;
case Iop_Sar16x8: vex_printf("Sar16x8"); return;
case Iop_Sar32x4: vex_printf("Sar32x4"); return;
+ case Iop_Sar64x2: vex_printf("Sar64x2"); return;
+ case Iop_Sal8x16: vex_printf("Sal8x16"); return;
+ case Iop_Sal16x8: vex_printf("Sal16x8"); return;
+ case Iop_Sal32x4: vex_printf("Sal32x4"); return;
+ case Iop_Sal64x2: vex_printf("Sal64x2"); return;
case Iop_Rol8x16: vex_printf("Rol8x16"); return;
case Iop_Rol16x8: vex_printf("Rol16x8"); return;
case Iop_Rol32x4: vex_printf("Rol32x4"); return;
@@ -566,6 +781,24 @@
case Iop_QNarrow32Ux4: vex_printf("QNarrow32Ux4"); return;
case Iop_QNarrow16Sx8: vex_printf("QNarrow16Sx8"); return;
case Iop_QNarrow32Sx4: vex_printf("QNarrow32Sx4"); return;
+ case Iop_Shorten16x8: vex_printf("Shorten16x8"); return;
+ case Iop_Shorten32x4: vex_printf("Shorten32x4"); return;
+ case Iop_Shorten64x2: vex_printf("Shorten64x2"); return;
+ case Iop_QShortenU16Ux8: vex_printf("QShortenU16Ux8"); return;
+ case Iop_QShortenU32Ux4: vex_printf("QShortenU32Ux4"); return;
+ case Iop_QShortenU64Ux2: vex_printf("QShortenU64Ux2"); return;
+ case Iop_QShortenS16Sx8: vex_printf("QShortenS16Sx8"); return;
+ case Iop_QShortenS32Sx4: vex_printf("QShortenS32Sx4"); return;
+ case Iop_QShortenS64Sx2: vex_printf("QShortenS64Sx2"); return;
+ case Iop_QShortenU16Sx8: vex_printf("QShortenU16Sx8"); return;
+ case Iop_QShortenU32Sx4: vex_printf("QShortenU32Sx4"); return;
+ case Iop_QShortenU64Sx2: vex_printf("QShortenU64Sx2"); return;
+ case Iop_Longen8Ux8: vex_printf("Longen8Ux8"); return;
+ case Iop_Longen16Ux4: vex_printf("Longen16Ux4"); return;
+ case Iop_Longen32Ux2: vex_printf("Longen32Ux2"); return;
+ case Iop_Longen8Sx8: vex_printf("Longen8Sx8"); return;
+ case Iop_Longen16Sx4: vex_printf("Longen16Sx4"); return;
+ case Iop_Longen32Sx2: vex_printf("Longen32Sx2"); return;
case Iop_InterleaveHI8x16: vex_printf("InterleaveHI8x16"); return;
case Iop_InterleaveHI16x8: vex_printf("InterleaveHI16x8"); return;
@@ -576,8 +809,52 @@
case Iop_InterleaveLO32x4: vex_printf("InterleaveLO32x4"); return;
case Iop_InterleaveLO64x2: vex_printf("InterleaveLO64x2"); return;
+ case Iop_CatOddLanes8x16: vex_printf("CatOddLanes8x16"); return;
+ case Iop_CatOddLanes16x8: vex_printf("CatOddLanes16x8"); return;
+ case Iop_CatOddLanes32x4: vex_printf("CatOddLanes32x4"); return;
+ case Iop_CatEvenLanes8x16: vex_printf("CatEvenLanes8x16"); return;
+ case Iop_CatEvenLanes16x8: vex_printf("CatEvenLanes16x8"); return;
+ case Iop_CatEvenLanes32x4: vex_printf("CatEvenLanes32x4"); return;
+
+ case Iop_InterleaveOddLanes8x16: vex_printf("InterleaveOddLanes8x16"); return;
+ case Iop_InterleaveOddLanes16x8: vex_printf("InterleaveOddLanes16x8"); return;
+ case Iop_InterleaveOddLanes32x4: vex_printf("InterleaveOddLanes32x4"); return;
+ case Iop_InterleaveEvenLanes8x16: vex_printf("InterleaveEvenLanes8x16"); return;
+ case Iop_InterleaveEvenLanes16x8: vex_printf("InterleaveEvenLanes16x8"); return;
+ case Iop_InterleaveEvenLanes32x4: vex_printf("InterleaveEvenLanes32x4"); return;
+
+ case Iop_GetElem8x16: vex_printf("GetElem8x16"); return;
+ case Iop_GetElem16x8: vex_printf("GetElem16x8"); return;
+ case Iop_GetElem32x4: vex_printf("GetElem32x4"); return;
+ case Iop_GetElem64x2: vex_printf("GetElem64x2"); return;
+
+ case Iop_GetElem8x8: vex_printf("GetElem8x8"); return;
+ case Iop_GetElem16x4: vex_printf("GetElem16x4"); return;
+ case Iop_GetElem32x2: vex_printf("GetElem32x2"); return;
+ case Iop_SetElem8x8: vex_printf("SetElem8x8"); return;
+ case Iop_SetElem16x4: vex_printf("SetElem16x4"); return;
+ case Iop_SetElem32x2: vex_printf("SetElem32x2"); return;
+
+ case Iop_Extract64: vex_printf("Extract64"); return;
+ case Iop_ExtractV128: vex_printf("ExtractV128"); return;
+
case Iop_Perm8x16: vex_printf("Perm8x16"); return;
+ case Iop_Reverse16_8x16: vex_printf("Reverse16_8x16"); return;
+ case Iop_Reverse32_8x16: vex_printf("Reverse32_8x16"); return;
+ case Iop_Reverse32_16x8: vex_printf("Reverse32_16x8"); return;
+ case Iop_Reverse64_8x16: vex_printf("Reverse64_8x16"); return;
+ case Iop_Reverse64_16x8: vex_printf("Reverse64_16x8"); return;
+ case Iop_Reverse64_32x4: vex_printf("Reverse64_32x4"); return;
+ case Iop_F32ToFixed32Ux4_RZ: vex_printf("F32ToFixed32Ux4_RZ"); return;
+ case Iop_F32ToFixed32Sx4_RZ: vex_printf("F32ToFixed32Sx4_RZ"); return;
+ case Iop_Fixed32UToF32x4_RN: vex_printf("Fixed32UToF32x4_RN"); return;
+ case Iop_Fixed32SToF32x4_RN: vex_printf("Fixed32SToF32x4_RN"); return;
+ case Iop_F32ToFixed32Ux2_RZ: vex_printf("F32ToFixed32Ux2_RZ"); return;
+ case Iop_F32ToFixed32Sx2_RZ: vex_printf("F32ToFixed32Sx2_RZ"); return;
+ case Iop_Fixed32UToF32x2_RN: vex_printf("Fixed32UToF32x2_RN"); return;
+ case Iop_Fixed32SToF32x2_RN: vex_printf("Fixed32SToF32x2_RN"); return;
+
default: vpanic("ppIROp(1)");
}
@@ -1182,6 +1459,21 @@
vec[7] = NULL;
return vec;
}
+IRExpr** mkIRExprVec_8 ( IRExpr* arg1, IRExpr* arg2, IRExpr* arg3,
+ IRExpr* arg4, IRExpr* arg5, IRExpr* arg6,
+ IRExpr* arg7, IRExpr* arg8 ) {
+ IRExpr** vec = LibVEX_Alloc(9 * sizeof(IRExpr*));
+ vec[0] = arg1;
+ vec[1] = arg2;
+ vec[2] = arg3;
+ vec[3] = arg4;
+ vec[4] = arg5;
+ vec[5] = arg6;
+ vec[6] = arg7;
+ vec[7] = arg8;
+ vec[8] = NULL;
+ return vec;
+}
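The new mkIRExprVec_8 follows the library's NULL-terminated argument-vector convention: N payload slots plus one extra slot holding a NULL sentinel, so consumers can walk the vector without a separate length field. A minimal standalone sketch of the same pattern, with malloc and void* standing in for LibVEX_Alloc and IRExpr* (both substitutions are assumptions for illustration):

```c
#include <assert.h>
#include <stdlib.h>

/* Build a NULL-terminated vector of 8 pointers, mirroring
   mkIRExprVec_8: 8 payload slots plus one sentinel slot. */
static void** mk_vec_8(void* a1, void* a2, void* a3, void* a4,
                       void* a5, void* a6, void* a7, void* a8)
{
   void** vec = malloc(9 * sizeof(void*));  /* 8 args + NULL */
   vec[0] = a1; vec[1] = a2; vec[2] = a3; vec[3] = a4;
   vec[4] = a5; vec[5] = a6; vec[6] = a7; vec[7] = a8;
   vec[8] = NULL;                           /* sentinel */
   return vec;
}

/* Count entries by walking to the sentinel, as consumers of the
   vector would. */
static int vec_len(void** vec)
{
   int n = 0;
   while (vec[n] != NULL) n++;
   return n;
}
```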
/* Constructors -- IRDirty */
@@ -1625,29 +1917,67 @@
case Iop_CmpORD64S:
case Iop_Avg8Ux8: case Iop_Avg16Ux4:
case Iop_Add8x8: case Iop_Add16x4: case Iop_Add32x2:
+ case Iop_Add32Fx2: case Iop_Sub32Fx2:
case Iop_CmpEQ8x8: case Iop_CmpEQ16x4: case Iop_CmpEQ32x2:
case Iop_CmpGT8Sx8: case Iop_CmpGT16Sx4: case Iop_CmpGT32Sx2:
+ case Iop_CmpGT8Ux8: case Iop_CmpGT16Ux4: case Iop_CmpGT32Ux2:
+ case Iop_CmpGT32Fx2: case Iop_CmpEQ32Fx2: case Iop_CmpGE32Fx2:
case Iop_InterleaveHI8x8: case Iop_InterleaveLO8x8:
case Iop_InterleaveHI16x4: case Iop_InterleaveLO16x4:
case Iop_InterleaveHI32x2: case Iop_InterleaveLO32x2:
+ case Iop_CatOddLanes8x8: case Iop_CatEvenLanes8x8:
case Iop_CatOddLanes16x4: case Iop_CatEvenLanes16x4:
+ case Iop_InterleaveOddLanes8x8: case Iop_InterleaveEvenLanes8x8:
+ case Iop_InterleaveOddLanes16x4: case Iop_InterleaveEvenLanes16x4:
case Iop_Perm8x8:
- case Iop_Max8Ux8: case Iop_Max16Sx4:
- case Iop_Min8Ux8: case Iop_Min16Sx4:
- case Iop_Mul16x4: case Iop_Mul32x2:
+ case Iop_Max8Ux8: case Iop_Max16Ux4: case Iop_Max32Ux2:
+ case Iop_Max8Sx8: case Iop_Max16Sx4: case Iop_Max32Sx2:
+ case Iop_Max32Fx2: case Iop_Min32Fx2:
+ case Iop_PwMax32Fx2: case Iop_PwMin32Fx2:
+ case Iop_Min8Ux8: case Iop_Min16Ux4: case Iop_Min32Ux2:
+ case Iop_Min8Sx8: case Iop_Min16Sx4: case Iop_Min32Sx2:
+ case Iop_PwMax8Ux8: case Iop_PwMax16Ux4: case Iop_PwMax32Ux2:
+ case Iop_PwMax8Sx8: case Iop_PwMax16Sx4: case Iop_PwMax32Sx2:
+ case Iop_PwMin8Ux8: case Iop_PwMin16Ux4: case Iop_PwMin32Ux2:
+ case Iop_PwMin8Sx8: case Iop_PwMin16Sx4: case Iop_PwMin32Sx2:
+ case Iop_Mul8x8: case Iop_Mul16x4: case Iop_Mul32x2:
+ case Iop_Mul32Fx2:
+ case Iop_PolynomialMul8x8:
case Iop_MulHi16Sx4: case Iop_MulHi16Ux4:
+ case Iop_QDMulHi16Sx4: case Iop_QDMulHi32Sx2:
+ case Iop_QRDMulHi16Sx4: case Iop_QRDMulHi32Sx2:
case Iop_QAdd8Sx8: case Iop_QAdd16Sx4:
+ case Iop_QAdd32Sx2: case Iop_QAdd64Sx1:
case Iop_QAdd8Ux8: case Iop_QAdd16Ux4:
+ case Iop_QAdd32Ux2: case Iop_QAdd64Ux1:
+ case Iop_PwAdd8x8: case Iop_PwAdd16x4: case Iop_PwAdd32x2:
+ case Iop_PwAdd32Fx2:
case Iop_QNarrow32Sx2:
case Iop_QNarrow16Sx4: case Iop_QNarrow16Ux4:
case Iop_Sub8x8: case Iop_Sub16x4: case Iop_Sub32x2:
case Iop_QSub8Sx8: case Iop_QSub16Sx4:
+ case Iop_QSub32Sx2: case Iop_QSub64Sx1:
case Iop_QSub8Ux8: case Iop_QSub16Ux4:
+ case Iop_QSub32Ux2: case Iop_QSub64Ux1:
+ case Iop_Shl8x8: case Iop_Shl16x4: case Iop_Shl32x2:
+ case Iop_Shr8x8: case Iop_Shr16x4: case Iop_Shr32x2:
+ case Iop_Sar8x8: case Iop_Sar16x4: case Iop_Sar32x2:
+ case Iop_Sal8x8: case Iop_Sal16x4: case Iop_Sal32x2: case Iop_Sal64x1:
+ case Iop_QShl8x8: case Iop_QShl16x4: case Iop_QShl32x2: case Iop_QShl64x1:
+ case Iop_QSal8x8: case Iop_QSal16x4: case Iop_QSal32x2: case Iop_QSal64x1:
+ case Iop_Recps32Fx2:
+ case Iop_Rsqrts32Fx2:
BINARY(Ity_I64,Ity_I64, Ity_I64);
case Iop_ShlN32x2: case Iop_ShlN16x4: case Iop_ShlN8x8:
- case Iop_ShrN32x2: case Iop_ShrN16x4:
+ case Iop_ShrN32x2: case Iop_ShrN16x4: case Iop_ShrN8x8:
case Iop_SarN32x2: case Iop_SarN16x4: case Iop_SarN8x8:
+ case Iop_QShlN8x8: case Iop_QShlN16x4:
+ case Iop_QShlN32x2: case Iop_QShlN64x1:
+ case Iop_QShlN8Sx8: case Iop_QShlN16Sx4:
+ case Iop_QShlN32Sx2: case Iop_QShlN64Sx1:
+ case Iop_QSalN8x8: case Iop_QSalN16x4:
+ case Iop_QSalN32x2: case Iop_QSalN64x1:
BINARY(Ity_I64,Ity_I8, Ity_I64);
case Iop_Shl8: case Iop_Shr8: case Iop_Sar8:
@@ -1668,6 +1998,22 @@
case Iop_Not64:
case Iop_CmpNEZ32x2: case Iop_CmpNEZ16x4: case Iop_CmpNEZ8x8:
+ case Iop_Cnt8x8:
+ case Iop_Clz8Sx8: case Iop_Clz16Sx4: case Iop_Clz32Sx2:
+ case Iop_Cls8Sx8: case Iop_Cls16Sx4: case Iop_Cls32Sx2:
+ case Iop_PwAddL8Ux8: case Iop_PwAddL16Ux4: case Iop_PwAddL32Ux2:
+ case Iop_PwAddL8Sx8: case Iop_PwAddL16Sx4: case Iop_PwAddL32Sx2:
+ case Iop_Reverse64_8x8: case Iop_Reverse64_16x4: case Iop_Reverse64_32x2:
+ case Iop_Reverse32_8x8: case Iop_Reverse32_16x4:
+ case Iop_Reverse16_8x8:
+ case Iop_FtoI32Sx2_RZ: case Iop_FtoI32Ux2_RZ:
+ case Iop_I32StoFx2: case Iop_I32UtoFx2:
+ case Iop_Recip32x2: case Iop_Recip32Fx2:
+ case Iop_Abs32Fx2:
+ case Iop_Rsqrte32Fx2:
+ case Iop_Rsqrte32x2:
+ case Iop_Neg32Fx2:
+ case Iop_Abs8x8: case Iop_Abs16x4: case Iop_Abs32x2:
UNARY(Ity_I64, Ity_I64);
case Iop_CmpEQ8: case Iop_CmpNE8:
@@ -1853,16 +2199,31 @@
case Iop_I32StoFx4:
case Iop_QFtoI32Ux4_RZ:
case Iop_QFtoI32Sx4_RZ:
+ case Iop_FtoI32Ux4_RZ:
+ case Iop_FtoI32Sx4_RZ:
case Iop_RoundF32x4_RM:
case Iop_RoundF32x4_RP:
case Iop_RoundF32x4_RN:
case Iop_RoundF32x4_RZ:
+ case Iop_Abs32Fx4:
+ case Iop_Rsqrte32Fx4:
+ case Iop_Rsqrte32x4:
UNARY(Ity_V128, Ity_V128);
case Iop_64HLtoV128: BINARY(Ity_I64,Ity_I64, Ity_V128);
- case Iop_V128to64: case Iop_V128HIto64:
+ case Iop_V128to64: case Iop_V128HIto64:
+ case Iop_Shorten16x8: case Iop_Shorten32x4: case Iop_Shorten64x2:
+ case Iop_QShortenU16Ux8: case Iop_QShortenU32Ux4: case Iop_QShortenU64Ux2:
+ case Iop_QShortenS16Sx8: case Iop_QShortenS32Sx4: case Iop_QShortenS64Sx2:
+ case Iop_QShortenU16Sx8: case Iop_QShortenU32Sx4: case Iop_QShortenU64Sx2:
+ case Iop_F32toF16x4:
UNARY(Ity_V128, Ity_I64);
+ case Iop_Longen8Ux8: case Iop_Longen16Ux4: case Iop_Longen32Ux2:
+ case Iop_Longen8Sx8: case Iop_Longen16Sx4: case Iop_Longen32Sx2:
+ case Iop_F16toF32x4:
+ UNARY(Ity_I64, Ity_V128);
+
case Iop_V128to32: UNARY(Ity_V128, Ity_I32);
case Iop_32UtoV128: UNARY(Ity_I32, Ity_V128);
case Iop_64UtoV128: UNARY(Ity_I64, Ity_V128);
@@ -1872,6 +2233,9 @@
case Iop_Dup8x16: UNARY(Ity_I8, Ity_V128);
case Iop_Dup16x8: UNARY(Ity_I16, Ity_V128);
case Iop_Dup32x4: UNARY(Ity_I32, Ity_V128);
+ case Iop_Dup8x8: UNARY(Ity_I8, Ity_I64);
+ case Iop_Dup16x4: UNARY(Ity_I16, Ity_I64);
+ case Iop_Dup32x2: UNARY(Ity_I32, Ity_I64);
case Iop_CmpEQ32Fx4: case Iop_CmpLT32Fx4:
case Iop_CmpEQ64Fx2: case Iop_CmpLT64Fx2:
@@ -1887,6 +2251,7 @@
case Iop_Div32Fx4: case Iop_Div32F0x4:
case Iop_Div64Fx2: case Iop_Div64F0x2:
case Iop_Max32Fx4: case Iop_Max32F0x4:
+ case Iop_PwMax32Fx4: case Iop_PwMin32Fx4:
case Iop_Max64Fx2: case Iop_Max64F0x2:
case Iop_Min32Fx4: case Iop_Min32F0x4:
case Iop_Min64Fx2: case Iop_Min64F0x2:
@@ -1897,15 +2262,23 @@
case Iop_AndV128: case Iop_OrV128: case Iop_XorV128:
case Iop_Add8x16: case Iop_Add16x8:
case Iop_Add32x4: case Iop_Add64x2:
- case Iop_QAdd8Ux16: case Iop_QAdd16Ux8: case Iop_QAdd32Ux4:
- case Iop_QAdd8Sx16: case Iop_QAdd16Sx8: case Iop_QAdd32Sx4:
+ case Iop_QAdd8Ux16: case Iop_QAdd16Ux8:
+ case Iop_QAdd32Ux4: //case Iop_QAdd64Ux2:
+ case Iop_QAdd8Sx16: case Iop_QAdd16Sx8:
+ case Iop_QAdd32Sx4: case Iop_QAdd64Sx2:
+ case Iop_PwAdd8x16: case Iop_PwAdd16x8: case Iop_PwAdd32x4:
case Iop_Sub8x16: case Iop_Sub16x8:
case Iop_Sub32x4: case Iop_Sub64x2:
- case Iop_QSub8Ux16: case Iop_QSub16Ux8: case Iop_QSub32Ux4:
- case Iop_QSub8Sx16: case Iop_QSub16Sx8: case Iop_QSub32Sx4:
- case Iop_Mul16x8: case Iop_Mul32x4:
+ case Iop_QSub8Ux16: case Iop_QSub16Ux8:
+ case Iop_QSub32Ux4: //case Iop_QSub64Ux2:
+ case Iop_QSub8Sx16: case Iop_QSub16Sx8:
+ case Iop_QSub32Sx4: case Iop_QSub64Sx2:
+ case Iop_Mul8x16: case Iop_Mul16x8: case Iop_Mul32x4:
+ case Iop_PolynomialMul8x16:
case Iop_MulHi16Ux8: case Iop_MulHi32Ux4:
case Iop_MulHi16Sx8: case Iop_MulHi32Sx4:
+ case Iop_QDMulHi16Sx8: case Iop_QDMulHi32Sx4:
+ case Iop_QRDMulHi16Sx8: case Iop_QRDMulHi32Sx4:
case Iop_MullEven8Ux16: case Iop_MullEven16Ux8:
case Iop_MullEven8Sx16: case Iop_MullEven16Sx8:
case Iop_Avg8Ux16: case Iop_Avg16Ux8: case Iop_Avg32Ux4:
@@ -1918,22 +2291,40 @@
case Iop_CmpGT8Sx16: case Iop_CmpGT16Sx8: case Iop_CmpGT32Sx4:
case Iop_CmpGT64Sx2:
case Iop_CmpGT8Ux16: case Iop_CmpGT16Ux8: case Iop_CmpGT32Ux4:
- case Iop_Shl8x16: case Iop_Shl16x8: case Iop_Shl32x4:
- case Iop_Shr8x16: case Iop_Shr16x8: case Iop_Shr32x4:
- case Iop_Sar8x16: case Iop_Sar16x8: case Iop_Sar32x4:
+ case Iop_Shl8x16: case Iop_Shl16x8: case Iop_Shl32x4: case Iop_Shl64x2:
+ case Iop_QShl8x16: case Iop_QShl16x8: case Iop_QShl32x4: case Iop_QShl64x2:
+ case Iop_QSal8x16: case Iop_QSal16x8: case Iop_QSal32x4: case Iop_QSal64x2:
+ case Iop_Shr8x16: case Iop_Shr16x8: case Iop_Shr32x4: case Iop_Shr64x2:
+ case Iop_Sar8x16: case Iop_Sar16x8: case Iop_Sar32x4: case Iop_Sar64x2:
+ case Iop_Sal8x16: case Iop_Sal16x8: case Iop_Sal32x4: case Iop_Sal64x2:
case Iop_Rol8x16: case Iop_Rol16x8: case Iop_Rol32x4:
case Iop_QNarrow16Ux8: case Iop_QNarrow32Ux4:
case Iop_QNarrow16Sx8: case Iop_QNarrow32Sx4:
case Iop_Narrow16x8: case Iop_Narrow32x4:
case Iop_InterleaveHI8x16: case Iop_InterleaveHI16x8:
case Iop_InterleaveHI32x4: case Iop_InterleaveHI64x2:
- case Iop_InterleaveLO8x16: case Iop_InterleaveLO16x8:
+ case Iop_InterleaveLO8x16: case Iop_InterleaveLO16x8:
case Iop_InterleaveLO32x4: case Iop_InterleaveLO64x2:
+ case Iop_CatOddLanes8x16: case Iop_CatEvenLanes8x16:
+ case Iop_CatOddLanes16x8: case Iop_CatEvenLanes16x8:
+ case Iop_CatOddLanes32x4: case Iop_CatEvenLanes32x4:
+ case Iop_InterleaveOddLanes8x16: case Iop_InterleaveEvenLanes8x16:
+ case Iop_InterleaveOddLanes16x8: case Iop_InterleaveEvenLanes16x8:
+ case Iop_InterleaveOddLanes32x4: case Iop_InterleaveEvenLanes32x4:
case Iop_Perm8x16:
+ case Iop_Recps32Fx4:
+ case Iop_Rsqrts32Fx4:
BINARY(Ity_V128,Ity_V128, Ity_V128);
+ case Iop_PolynomialMull8x8:
+ case Iop_Mull8Ux8: case Iop_Mull8Sx8:
+ case Iop_Mull16Ux4: case Iop_Mull16Sx4:
+ case Iop_Mull32Ux2: case Iop_Mull32Sx2:
+ BINARY(Ity_I64, Ity_I64, Ity_V128);
+
case Iop_NotV128:
case Iop_Recip32Fx4: case Iop_Recip32F0x4:
+ case Iop_Recip32x4:
case Iop_Recip64Fx2: case Iop_Recip64F0x2:
case Iop_RSqrt32Fx4: case Iop_RSqrt32F0x4:
case Iop_RSqrt64Fx2: case Iop_RSqrt64F0x2:
@@ -1941,6 +2332,16 @@
case Iop_Sqrt64Fx2: case Iop_Sqrt64F0x2:
case Iop_CmpNEZ8x16: case Iop_CmpNEZ16x8:
case Iop_CmpNEZ32x4: case Iop_CmpNEZ64x2:
+ case Iop_Cnt8x16:
+ case Iop_Clz8Sx16: case Iop_Clz16Sx8: case Iop_Clz32Sx4:
+ case Iop_Cls8Sx16: case Iop_Cls16Sx8: case Iop_Cls32Sx4:
+ case Iop_PwAddL8Ux16: case Iop_PwAddL16Ux8: case Iop_PwAddL32Ux4:
+ case Iop_PwAddL8Sx16: case Iop_PwAddL16Sx8: case Iop_PwAddL32Sx4:
+ case Iop_Reverse64_8x16: case Iop_Reverse64_16x8: case Iop_Reverse64_32x4:
+ case Iop_Reverse32_8x16: case Iop_Reverse32_16x8:
+ case Iop_Reverse16_8x16:
+ case Iop_Neg32Fx4:
+ case Iop_Abs8x16: case Iop_Abs16x8: case Iop_Abs32x4:
UNARY(Ity_V128, Ity_V128);
case Iop_ShlV128: case Iop_ShrV128:
@@ -1948,9 +2349,57 @@
case Iop_ShlN32x4: case Iop_ShlN64x2:
case Iop_ShrN8x16: case Iop_ShrN16x8:
case Iop_ShrN32x4: case Iop_ShrN64x2:
- case Iop_SarN8x16: case Iop_SarN16x8: case Iop_SarN32x4:
+ case Iop_SarN8x16: case Iop_SarN16x8:
+ case Iop_SarN32x4: case Iop_SarN64x2:
+ case Iop_QShlN8x16: case Iop_QShlN16x8:
+ case Iop_QShlN32x4: case Iop_QShlN64x2:
+ case Iop_QShlN8Sx16: case Iop_QShlN16Sx8:
+ case Iop_QShlN32Sx4: case Iop_QShlN64Sx2:
+ case Iop_QSalN8x16: case Iop_QSalN16x8:
+ case Iop_QSalN32x4: case Iop_QSalN64x2:
BINARY(Ity_V128,Ity_I8, Ity_V128);
+ case Iop_F32ToFixed32Ux4_RZ:
+ case Iop_F32ToFixed32Sx4_RZ:
+ case Iop_Fixed32UToF32x4_RN:
+ case Iop_Fixed32SToF32x4_RN:
+ BINARY(Ity_V128, Ity_I8, Ity_V128);
+
+ case Iop_F32ToFixed32Ux2_RZ:
+ case Iop_F32ToFixed32Sx2_RZ:
+ case Iop_Fixed32UToF32x2_RN:
+ case Iop_Fixed32SToF32x2_RN:
+ BINARY(Ity_I64, Ity_I8, Ity_I64);
+
+ case Iop_GetElem8x16:
+ BINARY(Ity_V128, Ity_I8, Ity_I8);
+ case Iop_GetElem16x8:
+ BINARY(Ity_V128, Ity_I8, Ity_I16);
+ case Iop_GetElem32x4:
+ BINARY(Ity_V128, Ity_I8, Ity_I32);
+ case Iop_GetElem64x2:
+ BINARY(Ity_V128, Ity_I8, Ity_I64);
+ case Iop_GetElem8x8:
+ BINARY(Ity_I64, Ity_I8, Ity_I8);
+ case Iop_GetElem16x4:
+ BINARY(Ity_I64, Ity_I8, Ity_I16);
+ case Iop_GetElem32x2:
+ BINARY(Ity_I64, Ity_I8, Ity_I32);
+ case Iop_SetElem8x8:
+ TERNARY(Ity_I64, Ity_I8, Ity_I8, Ity_I64);
+ case Iop_SetElem16x4:
+ TERNARY(Ity_I64, Ity_I8, Ity_I16, Ity_I64);
+ case Iop_SetElem32x2:
+ TERNARY(Ity_I64, Ity_I8, Ity_I32, Ity_I64);
+
+ case Iop_Extract64:
+ TERNARY(Ity_I64, Ity_I64, Ity_I8, Ity_I64);
+ case Iop_ExtractV128:
+ TERNARY(Ity_V128, Ity_V128, Ity_I8, Ity_V128);
+
+ case Iop_QDMulLong16Sx4: case Iop_QDMulLong32Sx2:
+ BINARY(Ity_I64, Ity_I64, Ity_V128);
+
default:
ppIROp(op);
vpanic("typeOfPrimop");
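typeOfPrimop above registers types for the new saturating doubling multiply-high ops (Iop_QDMulHi*, Iop_QRDMulHi*, modelling NEON VQDMULH/VQRDMULH). Their per-lane behaviour on a 16-bit lane can be sketched in scalar C as follows; this is an illustrative model of the documented semantics, not VEX code, and it assumes arithmetic right shift of negative values:

```c
#include <assert.h>
#include <stdint.h>

/* Saturating doubling multiply returning the high half (VQDMULH-
   style): (2*a*b) >> 16, saturated.  The only overflow case is
   a == b == INT16_MIN, where 2*a*b = 2^31 does not fit in int32. */
static int16_t qdmulh16(int16_t a, int16_t b)
{
   int32_t prod = (int32_t)a * (int32_t)b;
   if (prod == 0x40000000)               /* doubling would overflow */
      return INT16_MAX;                  /* saturate */
   return (int16_t)((2 * prod) >> 16);   /* truncating form */
}

/* Rounding variant (VQRDMULH-style): add 2^15 before the shift. */
static int16_t qrdmulh16(int16_t a, int16_t b)
{
   int32_t prod = (int32_t)a * (int32_t)b;
   if (prod == 0x40000000)
      return INT16_MAX;
   return (int16_t)((2 * prod + (1 << 15)) >> 16);
}
```

For example, with Q15 fixed-point operands, 0.5 * 0.5 (16384 * 16384) yields 8192, i.e. 0.25.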
Modified: trunk/pub/libvex_ir.h
===================================================================
--- trunk/pub/libvex_ir.h 2010-08-22 12:54:56 UTC (rev 2015)
+++ trunk/pub/libvex_ir.h 2010-08-22 12:59:02 UTC (rev 2016)
@@ -675,6 +675,49 @@
Iop_CalcFPRF, /* Calc 5 fpscr[FPRF] bits (Class, <, =, >, Unord)
from FP result */
+ /* ------------------ 64-bit SIMD FP ------------------------ */
+
+ /* Conversion to/from int */
+ Iop_I32UtoFx2, Iop_I32StoFx2, /* I32x2 -> F32x2 */
+ Iop_FtoI32Ux2_RZ, Iop_FtoI32Sx2_RZ, /* F32x2 -> I32x2 */
+ /* The Fixed32 format is a fixed-point number with a given number of
+ fraction bits. The number of fraction bits is passed as a second
+ argument of type I8. */
+ Iop_F32ToFixed32Ux2_RZ, Iop_F32ToFixed32Sx2_RZ, /* fp -> fixed-point */
+ Iop_Fixed32UToF32x2_RN, Iop_Fixed32SToF32x2_RN, /* fixed-point -> fp */
+
+ /* Binary operations */
+ Iop_Max32Fx2, Iop_Min32Fx2,
+ /* Pairwise Min and Max. See integer pairwise operations for more
+ details. */
+ Iop_PwMax32Fx2, Iop_PwMin32Fx2,
+ /* Note: For the following compares, the arm front-end assumes a
+ nan in a lane of either argument returns zero for that lane. */
+ Iop_CmpEQ32Fx2, Iop_CmpGT32Fx2, Iop_CmpGE32Fx2,
+
+ /* Vector Reciprocal Estimate finds an approximate reciprocal of each
+ element in the operand vector, and places the results in the destination
+ vector. */
+ Iop_Recip32Fx2,
+
+ /* Vector Reciprocal Step computes (2.0 - arg1 * arg2).
+ Note that if one of the arguments is zero and the other is an
+ infinity of either sign, the result of the operation is 2.0. */
+ Iop_Recps32Fx2,
+
+ /* Vector Reciprocal Square Root Estimate finds an approximate reciprocal
+ square root of each element in the operand vector. */
+ Iop_Rsqrte32Fx2,
+
+ /* Vector Reciprocal Square Root Step computes (3.0 - arg1 * arg2) / 2.0.
+ Note that if one of the arguments is zero and the other is an
+ infinity of either sign, the result of the operation is 1.5. */
+ Iop_Rsqrts32Fx2,
+
+ /* Unary */
+ Iop_Neg32Fx2, Iop_Abs32Fx2,
+
+
/* ------------------ 64-bit SIMD Integer. ------------------ */
/* MISC (vector integer cmp != 0) */
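The reciprocal-step and reciprocal-square-root-step formulas documented above are the Newton-Raphson refinement factors used by NEON's VRECPS/VRSQRTS; the IROps apply them per 32-bit float lane. A scalar sketch of how a caller would iterate them (an illustration only, not the front-end's implementation, and ignoring the zero-times-infinity special cases):

```c
#include <assert.h>
#include <math.h>

/* Iop_Recps32Fx2/4 factor: refining x ~ 1/d via
   x' = x * (2.0 - d * x). */
static float recip_step(float d, float x) { return 2.0f - d * x; }

/* Iop_Rsqrts32Fx2/4 factor: refining x ~ 1/sqrt(d) via
   x' = x * (3.0 - (d*x) * x) / 2.0. */
static float rsqrt_step(float a, float b) { return (3.0f - a * b) / 2.0f; }
```

At the fixed point the factor is exactly 1.0, so iteration converges quadratically to the reciprocal (or reciprocal square root).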
@@ -682,55 +725,143 @@
/* ADDITION (normal / unsigned sat / signed sat) */
Iop_Add8x8, Iop_Add16x4, Iop_Add32x2,
- Iop_QAdd8Ux8, Iop_QAdd16Ux4,
- Iop_QAdd8Sx8, Iop_QAdd16Sx4,
+ Iop_QAdd8Ux8, Iop_QAdd16Ux4, Iop_QAdd32Ux2, Iop_QAdd64Ux1,
+ Iop_QAdd8Sx8, Iop_QAdd16Sx4, Iop_QAdd32Sx2, Iop_QAdd64Sx1,
+ /* PAIRWISE operations */
+ /* Iop_PwFoo16x4( [a,b,c,d], [e,f,g,h] ) =
+ [Foo16(a,b), Foo16(c,d), Foo16(e,f), Foo16(g,h)] */
+ Iop_PwAdd8x8, Iop_PwAdd16x4, Iop_PwAdd32x2,
+ Iop_PwMax8Sx8, Iop_PwMax16Sx4, Iop_PwMax32Sx2,
+ Iop_PwMax8Ux8, Iop_PwMax16Ux4, Iop_PwMax32Ux2,
+ Iop_PwMin8Sx8, Iop_PwMin16Sx4, Iop_PwMin32Sx2,
+ Iop_PwMin8Ux8, Iop_PwMin16Ux4, Iop_PwMin32Ux2,
+ /* The lengthening variant is unary: the result vector contains half
+ as many elements as the operand, but each element is twice as wide.
+ Example:
+ Iop_PwAddL16Ux4( [a,b,c,d] ) = [a+b,c+d]
+ where a+b and c+d are unsigned 32-bit values. */
+ Iop_PwAddL8Ux8, Iop_PwAddL16Ux4, Iop_PwAddL32Ux2,
+ Iop_PwAddL8Sx8, Iop_PwAddL16Sx4, Iop_PwAddL32Sx2,
+
/* SUBTRACTION (normal / unsigned sat / signed sat) */
Iop_Sub8x8, Iop_Sub16x4, Iop_Sub32x2,
- Iop_QSub8Ux8, Iop_QSub16Ux4,
- Iop_QSub8Sx8, Iop_QSub16Sx4,
+ Iop_QSub8Ux8, Iop_QSub16Ux4, Iop_QSub32Ux2, Iop_QSub64Ux1,
+ Iop_QSub8Sx8, Iop_QSub16Sx4, Iop_QSub32Sx2, Iop_QSub64Sx1,
- /* MULTIPLICATION (normal / high half of signed/unsigned) */
- Iop_Mul16x4, Iop_Mul32x2,
+ /* ABSOLUTE VALUE */
+ Iop_Abs8x8, Iop_Abs16x4, Iop_Abs32x2,
+
+ /* MULTIPLICATION (normal / high half of signed/unsigned / polynomial) */
+ Iop_Mul8x8, Iop_Mul16x4, Iop_Mul32x2,
+ Iop_Mul32Fx2,
Iop_MulHi16Ux4,
Iop_MulHi16Sx4,
+ /* Polynomial multiplication treats its arguments as coefficients of
+ polynomials over {0, 1}. */
+ Iop_PolynomialMul8x8,
+ /* Vector Saturating Doubling Multiply Returning High Half and
+ Vector Saturating Rounding Doubling Multiply Returning High Half */
+ /* These IROps multiply corresponding elements in two vectors, double
+ the results, and place the most significant half of the final results
+ in the destination vector. The results are truncated or rounded. If
+ any of the results overflow, they are saturated. */
+ Iop_QDMulHi16Sx4, Iop_QDMulHi32Sx2,
+ Iop_QRDMulHi16Sx4, Iop_QRDMulHi32Sx2,
+
/* AVERAGING: note: (arg1 + arg2 + 1) >>u 1 */
Iop_Avg8Ux8,
Iop_Avg16Ux4,
/* MIN/MAX */
- Iop_Max16Sx4,
- Iop_Max8Ux8,
- Iop_Min16Sx4,
- Iop_Min8Ux8,
+ Iop_Max8Sx8, Iop_Max16Sx4, Iop_Max32Sx2,
+ Iop_Max8Ux8, Iop_Max16Ux4, Iop_Max32Ux2,
+ Iop_Min8Sx8, Iop_Min16Sx4, Iop_Min32Sx2,
+ Iop_Min8Ux8, Iop_Min16Ux4, Iop_Min32Ux2,
/* COMPARISON */
Iop_CmpEQ8x8, Iop_CmpEQ16x4, Iop_CmpEQ32x2,
+ Iop_CmpGT8Ux8, Iop_CmpGT16Ux4, Iop_CmpGT32Ux2,
Iop_CmpGT8Sx8, Iop_CmpGT16Sx4, Iop_CmpGT32Sx2,
+ /* COUNT ones / leading zeroes / leading sign bits (not including topmost
+ bit) */
+ Iop_Cnt8x8,
+ Iop_Clz8Sx8, Iop_Clz16Sx4, Iop_Clz32Sx2,
+ Iop_Cls8Sx8, Iop_Cls16Sx4, Iop_Cls32Sx2,
+
+ /* VECTOR x VECTOR SHIFT / ROTATE */
+ Iop_Shl8x8, Iop_Shl16x4, Iop_Shl32x2,
+ Iop_Shr8x8, Iop_Shr16x4, Iop_Shr32x2,
+ Iop_Sar8x8, Iop_Sar16x4, Iop_Sar32x2,
+ Iop_Sal8x8, Iop_Sal16x4, Iop_Sal32x2, Iop_Sal64x1,
+
/* VECTOR x SCALAR SHIFT (shift amt :: Ity_I8) */
Iop_ShlN8x8, Iop_ShlN16x4, Iop_ShlN32x2,
- Iop_ShrN16x4, Iop_ShrN32x2,
+ Iop_ShrN8x8, Iop_ShrN16x4, Iop_ShrN32x2,
Iop_SarN8x8, Iop_SarN16x4, Iop_SarN32x2,
+ /* VECTOR x VECTOR SATURATING SHIFT */
+ Iop_QShl8x8, Iop_QShl16x4, Iop_QShl32x2, Iop_QShl64x1,
+ Iop_QSal8x8, Iop_QSal16x4, Iop_QSal32x2, Iop_QSal64x1,
+ /* VECTOR x INTEGER SATURATING SHIFT */
+ Iop_QShlN8Sx8, Iop_QShlN16Sx4, Iop_QShlN32Sx2, Iop_QShlN64Sx1,
+ Iop_QShlN8x8, Iop_QShlN16x4, Iop_QShlN32x2, Iop_QShlN64x1,
+ Iop_QSalN8x8, Iop_QSalN16x4, Iop_QSalN32x2, Iop_QSalN64x1,
+
/* NARROWING -- narrow 2xI64 into 1xI64, hi half from left arg */
Iop_QNarrow16Ux4,
Iop_QNarrow16Sx4,
Iop_QNarrow32Sx2,
- /* INTERLEAVING -- interleave lanes from low or high halves of
+ /* INTERLEAVING */
+ /* Interleave lanes from low or high halves of
operands. Most-significant result lane is from the left
arg. */
Iop_InterleaveHI8x8, Iop_InterleaveHI16x4, Iop_InterleaveHI32x2,
Iop_InterleaveLO8x8, Iop_InterleaveLO16x4, Iop_InterleaveLO32x2,
+ /* Interleave odd/even lanes of operands. Most-significant result lane
+ is from the left arg. Note that Interleave{Odd,Even}Lanes32x2 are
+ identical to Interleave{HI,LO}32x2 and so are omitted. */
+ Iop_InterleaveOddLanes8x8, Iop_InterleaveEvenLanes8x8,
+ Iop_InterleaveOddLanes16x4, Iop_InterleaveEvenLanes16x4,
+
/* CONCATENATION -- build a new value by concatenating either
the even or odd lanes of both operands. Note that
Cat{Odd,Even}Lanes32x2 are identical to Interleave{HI,LO}32x2
and so are omitted. */
- Iop_CatOddLanes16x4, Iop_CatEvenLanes16x4,
+ Iop_CatOddLanes8x8, Iop_CatOddLanes16x4,
+ Iop_CatEvenLanes8x8, Iop_CatEvenLanes16x4,
+ /* GET / SET elements of VECTOR
+ GET is binop (I64, I8) -> I<elem_size>
+ SET is triop (I64, I8, I<elem_size>) -> I64 */
+ /* Note: the arm back-end handles only a constant second argument. */
+ Iop_GetElem8x8, Iop_GetElem16x4, Iop_GetElem32x2,
+ Iop_SetElem8x8, Iop_SetElem16x4, Iop_SetElem32x2,
+
+ /* DUPLICATING -- copy value to all lanes */
+ Iop_Dup8x8, Iop_Dup16x4, Iop_Dup32x2,
+
+ /* EXTRACT -- copy 8-arg3 highest bytes from arg1 to 8-arg3 lowest bytes
+ of result and arg3 lowest bytes of arg2 to arg3 highest bytes of
+ result.
+ It is a triop: (I64, I64, I8) -> I64 */
+ /* Note: the arm back-end handles only a constant third argument. */
+ Iop_Extract64,
+
+ /* REVERSE the order of elements within each half-word, word, or
+ double-word */
+ /* Examples:
+ Reverse16_8x8([a,b,c,d,e,f,g,h]) = [b,a,d,c,f,e,h,g]
+ Reverse32_8x8([a,b,c,d,e,f,g,h]) = [d,c,b,a,h,g,f,e]
+ Reverse64_8x8([a,b,c,d,e,f,g,h]) = [h,g,f,e,d,c,b,a] */
+ Iop_Reverse16_8x8,
+ Iop_Reverse32_8x8, Iop_Reverse32_16x4,
+ Iop_Reverse64_8x8, Iop_Reverse64_16x4, Iop_Reverse64_32x2,
+
/* PERMUTING -- copy src bytes to dst,
as indexed by control vector bytes:
for i in 0 .. 7 . result[i] = argL[ argR[i] ]
@@ -738,6 +869,10 @@
is undefined. */
Iop_Perm8x8,
+ /* Vector Reciprocal Estimate and Vector Reciprocal Square Root Estimate
+ See the floating-point equivalents for details. */
+ Iop_Recip32x2, Iop_Rsqrte32x2,
+
/* ------------------ 128-bit SIMD FP. ------------------ */
/* --- 32x4 vector FP --- */
@@ -745,23 +880,60 @@
/* binary */
Iop_Add32Fx4, Iop_Sub32Fx4, Iop_Mul32Fx4, Iop_Div32Fx4,
Iop_Max32Fx4, Iop_Min32Fx4,
- /* Note: For the following compares, the ppc front-end assumes a
+ Iop_Add32Fx2, Iop_Sub32Fx2,
+ /* Note: For the following compares, the ppc and arm front-ends assume a
nan in a lane of either argument returns zero for that lane. */
- Iop_CmpEQ32Fx4, Iop_CmpLT32Fx4, Iop_CmpLE32Fx4, Iop_CmpUN32Fx4,
+ Iop_CmpEQ32Fx4, Iop_CmpLT32Fx4, Iop_CmpLE32Fx4, Iop_CmpUN32Fx4,
Iop_CmpGT32Fx4, Iop_CmpGE32Fx4,
+ /* Vector Absolute */
+ Iop_Abs32Fx4,
+
+ /* Pairwise Max and Min. See integer pairwise operations for details. */
+ Iop_PwMax32Fx4, Iop_PwMin32Fx4,
+
/* unary */
- Iop_Recip32Fx4, Iop_Sqrt32Fx4, Iop_RSqrt32Fx4,
+ Iop_Sqrt32Fx4, Iop_RSqrt32Fx4,
+ Iop_Neg32Fx4,
+ /* Vector Reciprocal Estimate finds an approximate reciprocal of each
+ element in the operand vector, and places the results in the destination
+ vector. */
+ Iop_Recip32Fx4,
+
+ /* Vector Reciprocal Step computes (2.0 - arg1 * arg2).
+ Note that if one of the arguments is zero and the other is an
+ infinity of either sign, the result of the operation is 2.0. */
+ Iop_Recps32Fx4,
+
+ /* Vector Reciprocal Square Root Estimate finds an approximate reciprocal
+ square root of each element in the operand vector. */
+ Iop_Rsqrte32Fx4,
+
+ /* Vector Reciprocal Square Root Step computes (3.0 - arg1 * arg2) / 2.0.
+ Note that if one of the arguments is zero and the other is an
+ infinity of either sign, the result of the operation is 1.5. */
+ Iop_Rsqrts32Fx4,
+
+
/* --- Int to/from FP conversion --- */
/* Unlike the standard fp conversions, these irops take no
rounding mode argument. Instead the irop trailers _R{M,P,N,Z}
indicate the mode: {-inf, +inf, nearest, zero} respectively. */
- Iop_I32UtoFx4, Iop_I32StoFx4, /* I32x4 -> F32x4 */
- Iop_QFtoI32Ux4_RZ, Iop_QFtoI32Sx4_RZ, /* F32x4 -> I32x4 */
+ Iop_I32UtoFx4, Iop_I32StoFx4, /* I32x4 -> F32x4 */
+ Iop_FtoI32Ux4_RZ, Iop_FtoI32Sx4_RZ, /* F32x4 -> I32x4 */
+ Iop_QFtoI32Ux4_RZ, Iop_QFtoI32Sx4_RZ, /* F32x4 -> I32x4 (with saturation) */
Iop_RoundF32x4_RM, Iop_RoundF32x4_RP, /* round to fp integer */
Iop_RoundF32x4_RN, Iop_RoundF32x4_RZ, /* round to fp integer */
+ /* The Fixed32 format is a fixed-point number with a given number of
+ fraction bits. The number of fraction bits is passed as a second
+ argument of type I8. */
+ Iop_F32ToFixed32Ux4_RZ, Iop_F32ToFixed32Sx4_RZ, /* fp -> fixed-point */
+ Iop_Fixed32UToF32x4_RN, Iop_Fixed32SToF32x4_RN, /* fixed-point -> fp */
+ /* --- Single to/from half conversion --- */
+ Iop_F32toF16x4, Iop_F16toF32x4, /* F32x4 <-> F16x4 */
+
/* --- 32x4 lowest-lane-only scalar FP --- */
/* In binary cases, upper 3/4 is copied from first operand. In
@@ -826,23 +998,57 @@
Iop_CmpNEZ8x16, Iop_CmpNEZ16x8, Iop_CmpNEZ32x4, Iop_CmpNEZ64x2,
/* ADDITION (normal / unsigned sat / signed sat) */
- Iop_Add8x16, Iop_Add16x8, Iop_Add32x4, Iop_Add64x2,
- Iop_QAdd8Ux16, Iop_QAdd16Ux8, Iop_QAdd32Ux4,
- Iop_QAdd8Sx16, Iop_QAdd16Sx8, Iop_QAdd32Sx4,
+ Iop_Add8x16, Iop_Add16x8, Iop_Add32x4, Iop_Add64x2,
+ Iop_QAdd8Ux16, Iop_QAdd16Ux8, Iop_QAdd32Ux4, Iop_QAdd64Ux2,
+ Iop_QAdd8Sx16, Iop_QAdd16Sx8, Iop_QAdd32Sx4, Iop_QAdd64Sx2,
/* SUBTRACTION (normal / unsigned sat / signed sat) */
- Iop_Sub8x16, Iop_Sub16x8, Iop_Sub32x4, Iop_Sub64x2,
- Iop_QSub8Ux16, Iop_QSub16Ux8, Iop_QSub32Ux4,
- Iop_QSub8Sx16, Iop_QSub16Sx8, Iop_QSub32Sx4,
+ Iop_Sub8x16, Iop_Sub16x8, Iop_Sub32x4, Iop_Sub64x2,
+ Iop_QSub8Ux16, Iop_QSub16Ux8, Iop_QSub32Ux4, Iop_QSub64Ux2,
+ Iop_QSub8Sx16, Iop_QSub16Sx8, Iop_QSub32Sx4, Iop_QSub64Sx2,
/* MULTIPLICATION (normal / high half of signed/unsigned) */
- Iop_Mul16x8, Iop_Mul32x4,
- Iop_MulHi16Ux8, Iop_MulHi32Ux4,
- Iop_MulHi16Sx8, Iop_MulHi32Sx4,
+ Iop_Mul8x16, Iop_Mul16x8, Iop_Mul32x4,
+ Iop_MulHi16Ux8, Iop_MulHi32Ux4,
+ Iop_MulHi16Sx8, Iop_MulHi32Sx4,
/* (widening signed/unsigned of even lanes, with lowest lane=zero) */
Iop_MullEven8Ux16, Iop_MullEven16Ux8,
Iop_MullEven8Sx16, Iop_MullEven16Sx8,
+ /* FIXME: document these */
+ Iop_Mull8Ux8, Iop_Mull8Sx8,
+ Iop_Mull16Ux4, Iop_Mull16Sx4,
+ Iop_Mull32Ux2, Iop_Mull32Sx2,
+ /* Vector Saturating Doubling Multiply Returning High Half and
+ Vector Saturating Rounding Doubling Multiply Returning High Half */
+ /* These IROps multiply corresponding elements in two vectors, double
+ the results, and place the most significant half of the final results
+ in the destination vector. The results are truncated or rounded. If
+ any of the results overflow, they are saturated. */
+ Iop_QDMulHi16Sx8, Iop_QDMulHi32Sx4,
+ Iop_QRDMulHi16Sx8, Iop_QRDMulHi32Sx4,
+ /* Doubling saturating multiplication (long) (I64, I64) -> V128 */
+ Iop_QDMulLong16Sx4, Iop_QDMulLong32Sx2,
+ /* Polynomial multiplication treats its arguments as coefficients of
+ polynomials over {0, 1}. */
+ Iop_PolynomialMul8x16, /* (V128, V128) -> V128 */
+ Iop_PolynomialMull8x8, /* (I64, I64) -> V128 */
+ /* PAIRWISE operations */
+ /* Iop_PwFoo16x4( [a,b,c,d], [e,f,g,h] ) =
+ [Foo16(a,b), Foo16(c,d), Foo16(e,f), Foo16(g,h)] */
+ Iop_PwAdd8x16, Iop_PwAdd16x8, Iop_PwAdd32x4,
+ Iop_PwAdd32Fx2,
+ /* The longening variant is unary. The resulting vector contains half
+ as many elements as the operand, but each element is twice as wide.
+ Example:
+ Iop_PwAddL16Ux4( [a,b,c,d] ) = [a+b,c+d]
+ where a+b and c+d are unsigned 32-bit values. */
+ Iop_PwAddL8Ux16, Iop_PwAddL16Ux8, Iop_PwAddL32Ux4,
+ Iop_PwAddL8Sx16, Iop_PwAddL16Sx8, Iop_PwAddL32Sx4,
+
+ /* ABSOLUTE VALUE */
+ Iop_Abs8x16, Iop_Abs16x8, Iop_Abs32x4,
+
/* AVERAGING: note: (arg1 + arg2 + 1) >>u 1 */
Iop_Avg8Ux16, Iop_Avg16Ux8, Iop_Avg32Ux4,
Iop_Avg8Sx16, Iop_Avg16Sx8, Iop_Avg32Sx4,
@@ -858,40 +1064,110 @@
Iop_CmpGT8Sx16, Iop_CmpGT16Sx8, Iop_CmpGT32Sx4, Iop_CmpGT64Sx2,
Iop_CmpGT8Ux16, Iop_CmpGT16Ux8, Iop_CmpGT32Ux4,
+ /* COUNT ones / leading zeroes / leading sign bits (not including topmost
+ bit) */
+ Iop_Cnt8x16,
+ Iop_Clz8Sx16, Iop_Clz16Sx8, Iop_Clz32Sx4,
+ Iop_Cls8Sx16, Iop_Cls16Sx8, Iop_Cls32Sx4,
+
/* VECTOR x SCALAR SHIFT (shift amt :: Ity_I8) */
Iop_ShlN8x16, Iop_ShlN16x8, Iop_ShlN32x4, Iop_ShlN64x2,
Iop_ShrN8x16, Iop_ShrN16x8, Iop_ShrN32x4, Iop_ShrN64x2,
- Iop_SarN8x16, Iop_SarN16x8, Iop_SarN32x4,
+ Iop_SarN8x16, Iop_SarN16x8, Iop_SarN32x4, Iop_SarN64x2,
/* VECTOR x VECTOR SHIFT / ROTATE */
- Iop_Shl8x16, Iop_Shl16x8, Iop_Shl32x4,
- Iop_Shr8x16, Iop_Shr16x8, Iop_Shr32x4,
- Iop_Sar8x16, Iop_Sar16x8, Iop_Sar32x4,
+ Iop_Shl8x16, Iop_Shl16x8, Iop_Shl32x4, Iop_Shl64x2,
+ Iop_Shr8x16, Iop_Shr16x8, Iop_Shr32x4, Iop_Shr64x2,
+ Iop_Sar8x16, Iop_Sar16x8, Iop_Sar32x4, Iop_Sar64x2,
+ Iop_Sal8x16, Iop_Sal16x8, Iop_Sal32x4, Iop_Sal64x2,
Iop_Rol8x16, Iop_Rol16x8, Iop_Rol32x4,
+ /* VECTOR x VECTOR SATURATING SHIFT */
+ Iop_QShl8x16, Iop_QShl16x8, Iop_QShl32x4, Iop_QShl64x2,
+ Iop_QSal8x16, Iop_QSal16x8, Iop_QSal32x4, Iop_QSal64x2,
+ /* VECTOR x INTEGER SATURATING SHIFT */
+ Iop_QShlN8Sx16, Iop_QShlN16Sx8, Iop_QShlN32Sx4, Iop_QShlN64Sx2,
+ Iop_QShlN8x16, Iop_QShlN16x8, Iop_QShlN32x4, Iop_QShlN64x2,
+ Iop_QSalN8x16, Iop_QSalN16x8, Iop_QSalN32x4, Iop_QSalN64x2,
+
/* NARROWING -- narrow 2xV128 into 1xV128, hi half from left arg */
/* Note: the 16{U,S} and 32{U,S} are the pre-narrow lane widths. */
Iop_QNarrow16Ux8, Iop_QNarrow32Ux4,
Iop_QNarrow16Sx8, Iop_QNarrow32Sx4,
Iop_Narrow16x8, Iop_Narrow32x4,
+ /* Shortening V128->I64, lo half from each element */
+ Iop_Shorten16x8, Iop_Shorten32x4, Iop_Shorten64x2,
+ /* Saturating shortening from signed source to signed/unsigned destination */
+ Iop_QShortenS16Sx8, Iop_QShortenS32Sx4, Iop_QShortenS64Sx2,
+ Iop_QShortenU16Sx8, Iop_QShortenU32Sx4, Iop_QShortenU64Sx2,
+ /* Saturating shortening from unsigned source to unsigned destination */
+ Iop_QShortenU16Ux8, Iop_QShortenU32Ux4, Iop_QShortenU64Ux2,
- /* INTERLEAVING -- interleave lanes from low or high halves of
+ /* WIDENING */
+ /* Longening --- sign- or zero-extends each element of the argument
+ vector to twice its original size. The resulting vector has the
+ same number of elements, but each element (and hence the vector
+ itself) is twice as wide.
+ All operations are I64->V128.
+ Example
+ Iop_Longen32Sx2( [a, b] ) = [c, d]
+ where c = Iop_32Sto64(a) and d = Iop_32Sto64(b) */
+ Iop_Longen8Ux8, Iop_Longen16Ux4, Iop_Longen32Ux2,
+ Iop_Longen8Sx8, Iop_Longen16Sx4, Iop_Longen32Sx2,
+
+ /* INTERLEAVING */
+ /* Interleave lanes from low or high halves of
operands. Most-significant result lane is from the left
arg. */
Iop_InterleaveHI8x16, Iop_InterleaveHI16x8,
Iop_InterleaveHI32x4, Iop_InterleaveHI64x2,
- Iop_InterleaveLO8x16, Iop_InterleaveLO16x8,
+ Iop_InterleaveLO8x16, Iop_InterleaveLO16x8,
Iop_InterleaveLO32x4, Iop_InterleaveLO64x2,
+ /* Interleave odd/even lanes of operands. Most-significant result lane
+ is from the left arg. */
+ Iop_InterleaveOddLanes8x16, Iop_InterleaveEvenLanes8x16,
+ Iop_InterleaveOddLanes16x8, Iop_InterleaveEvenLanes16x8,
+ Iop_InterleaveOddLanes32x4, Iop_InterleaveEvenLanes32x4,
+ /* CONCATENATION -- build a new value by concatenating either
+ the even or odd lanes of both operands. */
+ Iop_CatOddLanes8x16, Iop_CatOddLanes16x8, Iop_CatOddLanes32x4,
+ Iop_CatEvenLanes8x16, Iop_CatEvenLanes16x8, Iop_CatEvenLanes32x4,
+
+ /* GET elements of VECTOR
+ GET is binop (V128, I8) -> I<elem_size> */
+ /* Note: the arm back-end handles only a constant second argument. */
+ Iop_GetElem8x16, Iop_Ge...
[truncated message content] |
|
From: <sv...@va...> - 2010-08-22 12:55:04
|
Author: sewardj
Date: 2010-08-22 13:54:56 +0100 (Sun, 22 Aug 2010)
New Revision: 2015
Log:
Merge from branches/THUMB: hwcaps for ARM. May get simplified since
in fact ARM v5 and v6 are not supported targets -- ARMv7 remains the
minimum supported target.
Modified:
trunk/priv/main_main.c
trunk/pub/libvex.h
Modified: trunk/priv/main_main.c
===================================================================
--- trunk/priv/main_main.c 2010-08-22 12:48:28 UTC (rev 2014)
+++ trunk/priv/main_main.c 2010-08-22 12:54:56 UTC (rev 2015)
@@ -183,7 +183,7 @@
HInstrArray* (*iselSB) ( IRSB*, VexArch, VexArchInfo*,
VexAbiInfo* );
Int (*emit) ( UChar*, Int, HInstr*, Bool, void* );
- IRExpr* (*specHelper) ( HChar*, IRExpr** );
+ IRExpr* (*specHelper) ( HChar*, IRExpr**, IRStmt**, Int );
Bool (*preciseMemExnsFn) ( Int, Int );
DisOneInstrFn disInstrFn;
@@ -501,7 +501,8 @@
/* Clean it up, hopefully a lot. */
irsb = do_iropt_BB ( irsb, specHelper, preciseMemExnsFn,
- vta->guest_bytes_addr );
+ vta->guest_bytes_addr,
+ vta->arch_guest );
sanityCheckIRSB( irsb, "after initial iropt",
True/*must be flat*/, guest_word_type );
@@ -845,7 +846,41 @@
static HChar* show_hwcaps_arm ( UInt hwcaps )
{
- if (hwcaps == 0) return "arm-baseline";
+ Bool N = ((hwcaps & VEX_HWCAPS_ARM_NEON) != 0);
+ Bool vfp = ((hwcaps & (VEX_HWCAPS_ARM_VFP |
+ VEX_HWCAPS_ARM_VFP2 | VEX_HWCAPS_ARM_VFP3)) != 0);
+ switch (VEX_ARM_ARCHLEVEL(hwcaps)) {
+ case 5:
+ if (N)
+ return NULL;
+ if (vfp)
+ return "ARMv5-vfp";
+ else
+ return "ARMv5";
+ return NULL;
+ case 6:
+ if (N)
+ return NULL;
+ if (vfp)
+ return "ARMv6-vfp";
+ else
+ return "ARMv6";
+ return NULL;
+ case 7:
+ if (vfp) {
+ if (N)
+ return "ARMv7-vfp-neon";
+ else
+ return "ARMv7-vfp";
+ } else {
+ if (N)
+ return "ARMv7-neon";
+ else
+ return "ARMv7";
+ }
+ default:
+ return NULL;
+ }
return NULL;
}
Modified: trunk/pub/libvex.h
===================================================================
--- trunk/pub/libvex.h 2010-08-22 12:48:28 UTC (rev 2014)
+++ trunk/pub/libvex.h 2010-08-22 12:54:56 UTC (rev 2015)
@@ -94,8 +94,15 @@
(fres,frsqrte,fsel,stfiwx) */
/* arm: baseline capability is ARMv4 */
-/* No extra capabilities */
+/* Bits 5:0 - architecture level (e.g. 5 for v5, 6 for v6 etc) */
+#define VEX_HWCAPS_ARM_VFP (1<<6) /* VFP extension */
+#define VEX_HWCAPS_ARM_VFP2 (1<<7) /* VFPv2 */
+#define VEX_HWCAPS_ARM_VFP3 (1<<8) /* VFPv3 */
+/* Bits 15:10 reserved for (possible) future VFP revisions */
+#define VEX_HWCAPS_ARM_NEON (1<<16) /* Advanced SIMD also known as NEON */
+/* Get an ARM architecture level from HWCAPS */
+#define VEX_ARM_ARCHLEVEL(x) ((x) & 0x3f)
/* These return statically allocated strings. */
|
|
From: <sv...@va...> - 2010-08-22 12:48:36
|
Author: sewardj Date: 2010-08-22 13:48:28 +0100 (Sun, 22 Aug 2010) New Revision: 2014 Log: Merge from branches/THUMB: back end changes to support NEON code generation. Modified: trunk/priv/host_arm_defs.c trunk/priv/host_arm_defs.h trunk/priv/host_arm_isel.c [... diff too large to include ...] |
|
From: <sv...@va...> - 2010-08-22 12:44:28
|
Author: sewardj Date: 2010-08-22 13:44:20 +0100 (Sun, 22 Aug 2010) New Revision: 2013 Log: Merge from branches/THUMB: front end changes to support: * Thumb integer instructions * NEON in both ARM and Thumb mode * VFP in both ARM and Thumb mode * infrastructure to support APSR.Q flag representation Modified: trunk/auxprogs/genoffsets.c trunk/priv/guest_arm_defs.h trunk/priv/guest_arm_helpers.c trunk/priv/guest_arm_toIR.c trunk/pub/libvex_guest_arm.h [... diff too large to include ...] |
|
From: <sv...@va...> - 2010-08-22 12:39:01
|
Author: sewardj
Date: 2010-08-22 13:38:53 +0100 (Sun, 22 Aug 2010)
New Revision: 2012
Log:
Merge from branches/THUMB: A spechelper interface change that allows
the helper to look back at the previous IR statements. May be backed
out if it turns out no longer to be needed for optimising Thumb
translations.
Modified:
trunk/priv/guest_amd64_defs.h
trunk/priv/guest_amd64_helpers.c
trunk/priv/guest_ppc_defs.h
trunk/priv/guest_ppc_helpers.c
trunk/priv/guest_x86_defs.h
trunk/priv/guest_x86_helpers.c
trunk/priv/ir_opt.c
trunk/priv/ir_opt.h
Modified: trunk/priv/guest_amd64_defs.h
===================================================================
--- trunk/priv/guest_amd64_defs.h 2010-08-19 13:25:10 UTC (rev 2011)
+++ trunk/priv/guest_amd64_defs.h 2010-08-22 12:38:53 UTC (rev 2012)
@@ -61,8 +61,10 @@
/* Used by the optimiser to specialise calls to helpers. */
extern
-IRExpr* guest_amd64_spechelper ( HChar* function_name,
- IRExpr** args );
+IRExpr* guest_amd64_spechelper ( HChar* function_name,
+ IRExpr** args,
+ IRStmt** precedingStmts,
+ Int n_precedingStmts );
/* Describes to the optimiser which parts of the guest state require
precise memory exceptions. This is logically part of the guest
Modified: trunk/priv/guest_amd64_helpers.c
===================================================================
--- trunk/priv/guest_amd64_helpers.c 2010-08-19 13:25:10 UTC (rev 2011)
+++ trunk/priv/guest_amd64_helpers.c 2010-08-22 12:38:53 UTC (rev 2012)
@@ -867,7 +867,9 @@
}
IRExpr* guest_amd64_spechelper ( HChar* function_name,
- IRExpr** args )
+ IRExpr** args,
+ IRStmt** precedingStmts,
+ Int n_precedingStmts )
{
# define unop(_op,_a1) IRExpr_Unop((_op),(_a1))
# define binop(_op,_a1,_a2) IRExpr_Binop((_op),(_a1),(_a2))
Modified: trunk/priv/guest_ppc_defs.h
===================================================================
--- trunk/priv/guest_ppc_defs.h 2010-08-19 13:25:10 UTC (rev 2011)
+++ trunk/priv/guest_ppc_defs.h 2010-08-22 12:38:53 UTC (rev 2012)
@@ -62,12 +62,16 @@
/* Used by the optimiser to specialise calls to helpers. */
extern
-IRExpr* guest_ppc32_spechelper ( HChar* function_name,
- IRExpr** args );
+IRExpr* guest_ppc32_spechelper ( HChar* function_name,
+ IRExpr** args,
+ IRStmt** precedingStmts,
+ Int n_precedingStmts );
extern
-IRExpr* guest_ppc64_spechelper ( HChar* function_name,
- IRExpr** args );
+IRExpr* guest_ppc64_spechelper ( HChar* function_name,
+ IRExpr** args,
+ IRStmt** precedingStmts,
+ Int n_precedingStmts );
/* Describes to the optimiser which parts of the guest state require
precise memory exceptions. This is logically part of the guest
Modified: trunk/priv/guest_ppc_helpers.c
===================================================================
--- trunk/priv/guest_ppc_helpers.c 2010-08-19 13:25:10 UTC (rev 2011)
+++ trunk/priv/guest_ppc_helpers.c 2010-08-22 12:38:53 UTC (rev 2012)
@@ -182,13 +182,17 @@
/* Helper-function specialiser. */
IRExpr* guest_ppc32_spechelper ( HChar* function_name,
- IRExpr** args )
+ IRExpr** args,
+ IRStmt** precedingStmts,
+ Int n_precedingStmts )
{
return NULL;
}
IRExpr* guest_ppc64_spechelper ( HChar* function_name,
- IRExpr** args )
+ IRExpr** args,
+ IRStmt** precedingStmts,
+ Int n_precedingStmts )
{
return NULL;
}
Modified: trunk/priv/guest_x86_defs.h
===================================================================
--- trunk/priv/guest_x86_defs.h 2010-08-19 13:25:10 UTC (rev 2011)
+++ trunk/priv/guest_x86_defs.h 2010-08-22 12:38:53 UTC (rev 2012)
@@ -61,8 +61,10 @@
/* Used by the optimiser to specialise calls to helpers. */
extern
-IRExpr* guest_x86_spechelper ( HChar* function_name,
- IRExpr** args );
+IRExpr* guest_x86_spechelper ( HChar* function_name,
+ IRExpr** args,
+ IRStmt** precedingStmts,
+ Int n_precedingStmts );
/* Describes to the optimiser which parts of the guest state require
precise memory exceptions. This is logically part of the guest
Modified: trunk/priv/guest_x86_helpers.c
===================================================================
--- trunk/priv/guest_x86_helpers.c 2010-08-19 13:25:10 UTC (rev 2011)
+++ trunk/priv/guest_x86_helpers.c 2010-08-22 12:38:53 UTC (rev 2012)
@@ -772,8 +772,10 @@
&& e->Iex.Const.con->Ico.U32 == n );
}
-IRExpr* guest_x86_spechelper ( HChar* function_name,
- IRExpr** args )
+IRExpr* guest_x86_spechelper ( HChar* function_name,
+ IRExpr** args,
+ IRStmt** precedingStmts,
+ Int n_precedingStmts )
{
# define unop(_op,_a1) IRExpr_Unop((_op),(_a1))
# define binop(_op,_a1,_a2) IRExpr_Binop((_op),(_a1),(_a2))
Modified: trunk/priv/ir_opt.c
===================================================================
--- trunk/priv/ir_opt.c 2010-08-19 13:25:10 UTC (rev 2011)
+++ trunk/priv/ir_opt.c 2010-08-22 12:38:53 UTC (rev 2012)
@@ -931,6 +931,7 @@
switch (op) {
case Iop_Xor8: return IRExpr_Const(IRConst_U8(0));
case Iop_Xor16: return IRExpr_Const(IRConst_U16(0));
+ case Iop_Sub32:
case Iop_Xor32: return IRExpr_Const(IRConst_U32(0));
case Iop_Xor64: return IRExpr_Const(IRConst_U64(0));
case Iop_XorV128: return IRExpr_Const(IRConst_V128(0));
@@ -1584,11 +1585,13 @@
}
/* Xor8/16/32/64/V128(t,t) ==> 0, for some IRTemp t */
+ /* Sub32(t,t) ==> 0, for some IRTemp t */
if ( (e->Iex.Binop.op == Iop_Xor64
|| e->Iex.Binop.op == Iop_Xor32
|| e->Iex.Binop.op == Iop_Xor16
|| e->Iex.Binop.op == Iop_Xor8
- || e->Iex.Binop.op == Iop_XorV128)
+ || e->Iex.Binop.op == Iop_XorV128
+ || e->Iex.Binop.op == Iop_Sub32)
&& sameIRTemps(e->Iex.Binop.arg1, e->Iex.Binop.arg2)) {
e2 = mkZeroForXor(e->Iex.Binop.op);
}
@@ -2199,8 +2202,10 @@
/*---------------------------------------------------------------*/
static
-IRSB* spec_helpers_BB ( IRSB* bb,
- IRExpr* (*specHelper) ( HChar*, IRExpr**) )
+IRSB* spec_helpers_BB(
+ IRSB* bb,
+ IRExpr* (*specHelper) (HChar*, IRExpr**, IRStmt**, Int)
+ )
{
Int i;
IRStmt* st;
@@ -2215,7 +2220,8 @@
continue;
ex = (*specHelper)( st->Ist.WrTmp.data->Iex.CCall.cee->name,
- st->Ist.WrTmp.data->Iex.CCall.args );
+ st->Ist.WrTmp.data->Iex.CCall.args,
+ &bb->stmts[0], i );
if (!ex)
/* the front end can't think of a suitable replacement */
continue;
@@ -4361,7 +4367,7 @@
static
IRSB* cheap_transformations (
IRSB* bb,
- IRExpr* (*specHelper) (HChar*, IRExpr**),
+ IRExpr* (*specHelper) (HChar*, IRExpr**, IRStmt**, Int),
Bool (*preciseMemExnsFn)(Int,Int)
)
{
@@ -4512,10 +4518,13 @@
*/
-IRSB* do_iropt_BB ( IRSB* bb0,
- IRExpr* (*specHelper) (HChar*, IRExpr**),
- Bool (*preciseMemExnsFn)(Int,Int),
- Addr64 guest_addr )
+IRSB* do_iropt_BB(
+ IRSB* bb0,
+ IRExpr* (*specHelper) (HChar*, IRExpr**, IRStmt**, Int),
+ Bool (*preciseMemExnsFn)(Int,Int),
+ Addr64 guest_addr,
+ VexArch guest_arch
+ )
{
static Int n_total = 0;
static Int n_expensive = 0;
@@ -4546,6 +4555,15 @@
bb = cheap_transformations( bb, specHelper, preciseMemExnsFn );
+ if (guest_arch == VexArchARM) {
+ /* Translating Thumb2 code produces a lot of chaff. We have to
+ work extra hard to get rid of it. */
+ bb = cprop_BB(bb);
+ bb = spec_helpers_BB ( bb, specHelper );
+ redundant_put_removal_BB ( bb, preciseMemExnsFn );
+ do_deadcode_BB( bb );
+ }
+
if (vex_control.iropt_level > 1) {
/* Peer at what we have, to decide how much more effort to throw
Modified: trunk/priv/ir_opt.h
===================================================================
--- trunk/priv/ir_opt.h 2010-08-19 13:25:10 UTC (rev 2011)
+++ trunk/priv/ir_opt.h 2010-08-22 12:38:53 UTC (rev 2012)
@@ -43,10 +43,13 @@
/* Top level optimiser entry point. Returns a new BB. Operates
under the control of the global "vex_control" struct. */
extern
-IRSB* do_iropt_BB ( IRSB* bb,
- IRExpr* (*specHelper) (HChar*, IRExpr**),
- Bool (*preciseMemExnsFn)(Int,Int),
- Addr64 guest_addr );
+IRSB* do_iropt_BB(
+ IRSB* bb,
+ IRExpr* (*specHelper) (HChar*, IRExpr**, IRStmt**, Int),
+ Bool (*preciseMemExnsFn)(Int,Int),
+ Addr64 guest_addr,
+ VexArch guest_arch
+ );
/* Do a constant folding/propagation pass. */
extern
|
|
From: <sv...@va...> - 2010-08-22 12:24:54
|
Author: sewardj Date: 2010-08-22 13:24:47 +0100 (Sun, 22 Aug 2010) New Revision: 11285 Log: Merge from branches/THUMB: test programs for NEON and Thumb. Added: trunk/none/tests/arm/neon128.c trunk/none/tests/arm/neon64.c trunk/none/tests/arm/v6intThumb.c [... diff too large to include ...] |
|
From: <sv...@va...> - 2010-08-22 12:23:09
|
Author: sewardj
Date: 2010-08-22 13:23:01 +0100 (Sun, 22 Aug 2010)
New Revision: 11284
Log:
Merge from branches/THUMB: link-time stub needed on Ubuntu 10.04 (ARM)
(maybe. actually I am not sure why this is necessary).
Modified:
trunk/coregrind/m_main.c
Modified: trunk/coregrind/m_main.c
===================================================================
--- trunk/coregrind/m_main.c 2010-08-22 12:21:14 UTC (rev 11283)
+++ trunk/coregrind/m_main.c 2010-08-22 12:23:01 UTC (rev 11284)
@@ -2611,6 +2611,12 @@
VG_(printf)("Something called __aeabi_unwind_cpp_pr0()\n");
vg_assert(0);
}
+
+void __aeabi_unwind_cpp_pr1(void);
+void __aeabi_unwind_cpp_pr1(void){
+ VG_(printf)("Something called __aeabi_unwind_cpp_pr1()\n");
+ vg_assert(0);
+}
#endif
/* ---------------- Requirement 2 ---------------- */
|
|
From: <sv...@va...> - 2010-08-22 12:21:23
|
Author: sewardj
Date: 2010-08-22 13:21:14 +0100 (Sun, 22 Aug 2010)
New Revision: 11283
Log:
Merge from branches/THUMB: m_machine changes needed for Thumb support:
* track guest_R15 -> guest_R15T renaming
* change min instruction size to 2
* tidy up VG_(get_IP) etc functions a bit
* add hwcaps detection code for ARM
Modified:
trunk/coregrind/m_machine.c
trunk/coregrind/pub_core_machine.h
trunk/include/pub_tool_machine.h
Modified: trunk/coregrind/m_machine.c
===================================================================
--- trunk/coregrind/m_machine.c 2010-08-22 12:16:25 UTC (rev 11282)
+++ trunk/coregrind/m_machine.c 2010-08-22 12:21:14 UTC (rev 11283)
@@ -43,45 +43,23 @@
#define STACK_PTR(regs) ((regs).vex.VG_STACK_PTR)
#define FRAME_PTR(regs) ((regs).vex.VG_FRAME_PTR)
-Addr VG_(get_SP) ( ThreadId tid )
-{
+Addr VG_(get_IP) ( ThreadId tid ) {
+ return INSTR_PTR( VG_(threads)[tid].arch );
+}
+Addr VG_(get_SP) ( ThreadId tid ) {
return STACK_PTR( VG_(threads)[tid].arch );
}
-
-Addr VG_(get_IP) ( ThreadId tid )
-{
- return INSTR_PTR( VG_(threads)[tid].arch );
-}
-
-Addr VG_(get_FP) ( ThreadId tid )
-{
+Addr VG_(get_FP) ( ThreadId tid ) {
return FRAME_PTR( VG_(threads)[tid].arch );
}
-Addr VG_(get_LR) ( ThreadId tid )
-{
-# if defined(VGA_ppc32) || defined(VGA_ppc64)
- return VG_(threads)[tid].arch.vex.guest_LR;
-# elif defined(VGA_x86) || defined(VGA_amd64)
- return 0;
-# elif defined(VGA_arm)
- return VG_(threads)[tid].arch.vex.guest_R14;
-# else
-# error "Unknown arch"
-# endif
+void VG_(set_IP) ( ThreadId tid, Addr ip ) {
+ INSTR_PTR( VG_(threads)[tid].arch ) = ip;
}
-
-void VG_(set_SP) ( ThreadId tid, Addr sp )
-{
+void VG_(set_SP) ( ThreadId tid, Addr sp ) {
STACK_PTR( VG_(threads)[tid].arch ) = sp;
}
-void VG_(set_IP) ( ThreadId tid, Addr ip )
-{
- INSTR_PTR( VG_(threads)[tid].arch ) = ip;
-}
-
-
void VG_(get_UnwindStartRegs) ( /*OUT*/UnwindStartRegs* regs,
ThreadId tid )
{
@@ -106,7 +84,7 @@
regs->misc.PPC64.r_lr
= VG_(threads)[tid].arch.vex.guest_LR;
# elif defined(VGA_arm)
- regs->r_pc = (ULong)VG_(threads)[tid].arch.vex.guest_R15;
+ regs->r_pc = (ULong)VG_(threads)[tid].arch.vex.guest_R15T;
regs->r_sp = (ULong)VG_(threads)[tid].arch.vex.guest_R13;
regs->misc.ARM.r14
= VG_(threads)[tid].arch.vex.guest_R14;
@@ -385,13 +363,16 @@
#if defined(VGA_ppc64)
ULong VG_(machine_ppc64_has_VMX) = 0;
#endif
+#if defined(VGA_arm)
+Int VG_(machine_arm_archlevel) = 4;
+#endif
/* Determine what insn set and insn set variant the host has, and
record it. To be called once at system startup. Returns False if
this a CPU incapable of running Valgrind. */
-#if defined(VGA_ppc32) || defined(VGA_ppc64)
+#if defined(VGA_ppc32) || defined(VGA_ppc64) || defined(VGA_arm)
#include <setjmp.h> // For jmp_buf
static jmp_buf env_unsup_insn;
static void handler_unsup_insn ( Int x ) { __builtin_longjmp(env_unsup_insn,1); }
@@ -764,8 +745,110 @@
#elif defined(VGA_arm)
{
+ /* Same instruction set detection algorithm as for ppc32. */
+ vki_sigset_t saved_set, tmp_set;
+ vki_sigaction_fromK_t saved_sigill_act, saved_sigfpe_act;
+ vki_sigaction_toK_t tmp_sigill_act, tmp_sigfpe_act;
+
+ volatile Bool have_VFP, have_VFP2, have_VFP3, have_NEON;
+ volatile Int archlevel;
+ Int r;
+
+ /* This is a kludge. Really we ought to back-convert saved_act
+ into a toK_t using VG_(convert_sigaction_fromK_to_toK), but
+ since that's a no-op on all ppc64 platforms so far supported,
+ it's not worth the typing effort. At least include most basic
+ sanity check: */
+ vg_assert(sizeof(vki_sigaction_fromK_t) == sizeof(vki_sigaction_toK_t));
+
+ VG_(sigemptyset)(&tmp_set);
+ VG_(sigaddset)(&tmp_set, VKI_SIGILL);
+ VG_(sigaddset)(&tmp_set, VKI_SIGFPE);
+
+ r = VG_(sigprocmask)(VKI_SIG_UNBLOCK, &tmp_set, &saved_set);
+ vg_assert(r == 0);
+
+ r = VG_(sigaction)(VKI_SIGILL, NULL, &saved_sigill_act);
+ vg_assert(r == 0);
+ tmp_sigill_act = saved_sigill_act;
+
+ VG_(sigaction)(VKI_SIGFPE, NULL, &saved_sigfpe_act);
+ tmp_sigfpe_act = saved_sigfpe_act;
+
+ /* NODEFER: signal handler does not return (from the kernel's point of
+ view), hence if it is to successfully catch a signal more than once,
+ we need the NODEFER flag. */
+ tmp_sigill_act.sa_flags &= ~VKI_SA_RESETHAND;
+ tmp_sigill_act.sa_flags &= ~VKI_SA_SIGINFO;
+ tmp_sigill_act.sa_flags |= VKI_SA_NODEFER;
+ tmp_sigill_act.ksa_handler = handler_unsup_insn;
+ VG_(sigaction)(VKI_SIGILL, &tmp_sigill_act, NULL);
+
+ tmp_sigfpe_act.sa_flags &= ~VKI_SA_RESETHAND;
+ tmp_sigfpe_act.sa_flags &= ~VKI_SA_SIGINFO;
+ tmp_sigfpe_act.sa_flags |= VKI_SA_NODEFER;
+ tmp_sigfpe_act.ksa_handler = handler_unsup_insn;
+ VG_(sigaction)(VKI_SIGFPE, &tmp_sigfpe_act, NULL);
+
+ /* VFP insns */
+ have_VFP = True;
+ if (__builtin_setjmp(env_unsup_insn)) {
+ have_VFP = False;
+ } else {
+ __asm__ __volatile__(".word 0xEEB02B42"); /* VMOV.F64 d2, d2 */
+ }
+ /* There are several generations of the VFP extension, but they differ
+ very little, so for now we will not distinguish between them. */
+ have_VFP2 = have_VFP;
+ have_VFP3 = have_VFP;
+
+ /* NEON insns */
+ have_NEON = True;
+ if (__builtin_setjmp(env_unsup_insn)) {
+ have_NEON = False;
+ } else {
+ __asm__ __volatile__(".word 0xF2244154"); /* VMOV q2, q2 */
+ }
+
+ /* ARM architecture level */
+ archlevel = 5; /* v5 will be base level */
+ if (archlevel < 7) {
+ archlevel = 7;
+ if (__builtin_setjmp(env_unsup_insn)) {
+ archlevel = 5;
+ } else {
+ __asm__ __volatile__(".word 0xF45FF000"); /* PLI [PC,#-0] */
+ }
+ }
+ if (archlevel < 6) {
+ archlevel = 6;
+ if (__builtin_setjmp(env_unsup_insn)) {
+ archlevel = 5;
+ } else {
+ __asm__ __volatile__(".word 0xE6822012"); /* PKHBT r2, r2, r2 */
+ }
+ }
+
+ VG_(convert_sigaction_fromK_to_toK)(&saved_sigill_act, &tmp_sigill_act);
+ VG_(convert_sigaction_fromK_to_toK)(&saved_sigfpe_act, &tmp_sigfpe_act);
+ VG_(sigaction)(VKI_SIGILL, &tmp_sigill_act, NULL);
+ VG_(sigaction)(VKI_SIGFPE, &tmp_sigfpe_act, NULL);
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &saved_set, NULL);
+
+ VG_(debugLog)(1, "machine", "ARMv%d VFP %d VFP2 %d VFP3 %d NEON %d\n",
+ archlevel, (Int)have_VFP, (Int)have_VFP2, (Int)have_VFP3,
+ (Int)have_NEON);
+
+ VG_(machine_arm_archlevel) = archlevel;
+
va = VexArchARM;
- vai.hwcaps = 0;
+
+ vai.hwcaps = VEX_ARM_ARCHLEVEL(archlevel);
+ if (have_VFP3) vai.hwcaps |= VEX_HWCAPS_ARM_VFP3;
+ if (have_VFP2) vai.hwcaps |= VEX_HWCAPS_ARM_VFP2;
+ if (have_VFP) vai.hwcaps |= VEX_HWCAPS_ARM_VFP;
+ if (have_NEON) vai.hwcaps |= VEX_HWCAPS_ARM_NEON;
+
return True;
}
Modified: trunk/coregrind/pub_core_machine.h
===================================================================
--- trunk/coregrind/pub_core_machine.h 2010-08-22 12:16:25 UTC (rev 11282)
+++ trunk/coregrind/pub_core_machine.h 2010-08-22 12:21:14 UTC (rev 11283)
@@ -96,7 +96,7 @@
# define VG_STACK_PTR guest_GPR1
# define VG_FRAME_PTR guest_GPR1 // No frame ptr for PPC
#elif defined(VGA_arm)
-# define VG_INSTR_PTR guest_R15
+# define VG_INSTR_PTR guest_R15T
# define VG_STACK_PTR guest_R13
# define VG_FRAME_PTR guest_R11
#else
@@ -110,6 +110,18 @@
//-------------------------------------------------------------
+// Guest state accessors that are not visible to tools. The only
+// ones that are visible are get_IP and get_SP.
+
+//Addr VG_(get_IP) ( ThreadId tid ); // in pub_tool_machine.h
+//Addr VG_(get_SP) ( ThreadId tid ); // in pub_tool_machine.h
+Addr VG_(get_FP) ( ThreadId tid );
+
+void VG_(set_IP) ( ThreadId tid, Addr encip );
+void VG_(set_SP) ( ThreadId tid, Addr sp );
+
+
+//-------------------------------------------------------------
// Get hold of the values needed for a stack unwind, for the specified
// (client) thread.
void VG_(get_UnwindStartRegs) ( /*OUT*/UnwindStartRegs* regs,
@@ -198,6 +210,10 @@
extern ULong VG_(machine_ppc64_has_VMX);
#endif
+#if defined(VGA_arm)
+extern Int VG_(machine_arm_archlevel);
+#endif
+
#endif // __PUB_CORE_MACHINE_H
/*--------------------------------------------------------------------*/
Modified: trunk/include/pub_tool_machine.h
===================================================================
--- trunk/include/pub_tool_machine.h 2010-08-22 12:16:25 UTC (rev 11282)
+++ trunk/include/pub_tool_machine.h 2010-08-22 12:21:14 UTC (rev 11283)
@@ -59,7 +59,7 @@
// Supplement 1.7
#elif defined(VGP_arm_linux)
-# define VG_MIN_INSTR_SZB 4
+# define VG_MIN_INSTR_SZB 2
# define VG_MAX_INSTR_SZB 4
# define VG_CLREQ_SZB 28
# define VG_STACK_REDZONE_SZB 0
@@ -99,13 +99,11 @@
#endif
// Guest state accessors
-extern Addr VG_(get_SP) ( ThreadId tid );
-extern Addr VG_(get_IP) ( ThreadId tid );
-extern Addr VG_(get_FP) ( ThreadId tid );
-extern Addr VG_(get_LR) ( ThreadId tid );
+// Are mostly in the core_ header.
+// Only these two are available to tools.
+Addr VG_(get_IP) ( ThreadId tid );
+Addr VG_(get_SP) ( ThreadId tid );
-extern void VG_(set_SP) ( ThreadId tid, Addr sp );
-extern void VG_(set_IP) ( ThreadId tid, Addr ip );
// For get/set, 'area' is where the asked-for guest state will be copied
// into/from. If shadowNo == 0, the real (non-shadow) guest state is
|
|
From: <sv...@va...> - 2010-08-22 12:16:33
|
Author: sewardj
Date: 2010-08-22 13:16:25 +0100 (Sun, 22 Aug 2010)
New Revision: 11282
Log:
Merge from branches/THUMB: add (partial) --track-origins support for
new guest state components needed for Thumb and NEON support.
Modified:
trunk/memcheck/mc_machine.c
Modified: trunk/memcheck/mc_machine.c
===================================================================
--- trunk/memcheck/mc_machine.c 2010-08-22 12:13:35 UTC (rev 11281)
+++ trunk/memcheck/mc_machine.c 2010-08-22 12:16:25 UTC (rev 11282)
@@ -711,7 +711,7 @@
if (o == GOF(R14) && sz == 4) return o;
/* EAZG: These may be completely wrong. */
- if (o == GOF(R15) && sz == 4) return -1; /* slot unused */
+ if (o == GOF(R15T) && sz == 4) return -1; /* slot unused */
if (o == GOF(CC_OP) && sz == 4) return -1; /* slot unused */
if (o == GOF(CC_DEP1) && sz == 4) return o;
@@ -719,6 +719,8 @@
if (o == GOF(CC_NDEP) && sz == 4) return -1; /* slot unused */
+ if (o == GOF(QFLAG32) && sz == 4) return o;
+
//if (o == GOF(SYSCALLNO) && sz == 4) return -1; /* slot unused */
//if (o == GOF(CC) && sz == 4) return -1; /* slot unused */
//if (o == GOF(EMWARN) && sz == 4) return -1; /* slot unused */
@@ -727,6 +729,7 @@
if (o == GOF(FPSCR) && sz == 4) return -1;
if (o == GOF(TPIDRURO) && sz == 4) return -1;
+ if (o == GOF(ITSTATE) && sz == 4) return -1;
if (o >= GOF(D0) && o+sz <= GOF(D0) +SZB(D0)) return GOF(D0);
if (o >= GOF(D1) && o+sz <= GOF(D1) +SZB(D1)) return GOF(D1);
|
|
From: <sv...@va...> - 2010-08-22 12:13:43
|
Author: sewardj
Date: 2010-08-22 13:13:35 +0100 (Sun, 22 Aug 2010)
New Revision: 11281
Log:
Merge from branches/THUMB: supps and mandatory redirs on Ubuntu 10.04 (ARM)
Modified:
trunk/coregrind/m_redir.c
trunk/glibc-2.X.supp.in
Modified: trunk/coregrind/m_redir.c
===================================================================
--- trunk/coregrind/m_redir.c 2010-08-22 12:08:59 UTC (rev 11280)
+++ trunk/coregrind/m_redir.c 2010-08-22 12:13:35 UTC (rev 11281)
@@ -1009,7 +1009,7 @@
add_hardwired_spec(
"ld-linux.so.3", "strlen",
(Addr)&VG_(arm_linux_REDIR_FOR_strlen),
- NULL
+ complain_about_stripped_glibc_ldso
);
//add_hardwired_spec(
// "ld-linux.so.3", "index",
@@ -1019,7 +1019,7 @@
add_hardwired_spec(
"ld-linux.so.3", "memcpy",
(Addr)&VG_(arm_linux_REDIR_FOR_memcpy),
- NULL
+ complain_about_stripped_glibc_ldso
);
}
/* nothing so far */
Modified: trunk/glibc-2.X.supp.in
===================================================================
--- trunk/glibc-2.X.supp.in 2010-08-22 12:08:59 UTC (rev 11280)
+++ trunk/glibc-2.X.supp.in 2010-08-22 12:13:35 UTC (rev 11281)
@@ -228,3 +228,11 @@
obj:/lib/libpthread-0.10.so
fun:pthread_create
}
+
+##----------------------------------------------------------------------##
+# Ubuntu 10.04 on ARM (Thumb). Not sure why this is necessary.
+{
+ U1004-ARM-_dl_relocate_object
+ Memcheck:Cond
+ fun:_dl_relocate_object
+}
|
|
From: <sv...@va...> - 2010-08-22 12:09:07
|
Author: sewardj
Date: 2010-08-22 13:08:59 +0100 (Sun, 22 Aug 2010)
New Revision: 11280
Log:
Merge from branches/THUMB: add support for sys_pselect6, sys_pipe2,
sys_inotify_init1 on arm-linux.
Modified:
trunk/coregrind/m_syswrap/syswrap-arm-linux.c
trunk/include/vki/vki-scnums-arm-linux.h
Modified: trunk/coregrind/m_syswrap/syswrap-arm-linux.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-arm-linux.c 2010-08-22 12:03:45 UTC (rev 11279)
+++ trunk/coregrind/m_syswrap/syswrap-arm-linux.c 2010-08-22 12:08:59 UTC (rev 11280)
@@ -1657,10 +1657,13 @@
// correspond to what's in include/vki/vki-scnums-arm-linux.h.
// From here onwards, please ensure the numbers are correct.
+ LINX_(__NR_pselect6, sys_pselect6), // 335
+
LINXY(__NR_signalfd4, sys_signalfd4), // 355
LINX_(__NR_eventfd2, sys_eventfd2), // 356
- LINXY(__NR_pipe2, sys_pipe2) // 359
+ LINXY(__NR_pipe2, sys_pipe2), // 359
+ LINXY(__NR_inotify_init1, sys_inotify_init1) // 360
};
Modified: trunk/include/vki/vki-scnums-arm-linux.h
===================================================================
--- trunk/include/vki/vki-scnums-arm-linux.h 2010-08-22 12:03:45 UTC (rev 11279)
+++ trunk/include/vki/vki-scnums-arm-linux.h 2010-08-22 12:08:59 UTC (rev 11280)
@@ -370,7 +370,7 @@
#define __NR_readlinkat 332
#define __NR_fchmodat 333
#define __NR_faccessat 334
- /* 335 for pselect6 */
+#define __NR_pselect6 335 /* JRS 20100812: is this correct? */
/* 336 for ppoll */
#define __NR_unshare 337
#define __NR_set_robust_list 338
|
|
From: <sv...@va...> - 2010-08-22 12:03:53
|
Author: sewardj
Date: 2010-08-22 13:03:45 +0100 (Sun, 22 Aug 2010)
New Revision: 11279
Log:
Merge from branches/THUMB: track renaming of guest_R15 to guest_R15T.
Also, add extra FPSCR masking for FPSCR invariant state sanity checks.
Modified:
trunk/coregrind/m_dispatch/dispatch-arm-linux.S
Modified: trunk/coregrind/m_dispatch/dispatch-arm-linux.S
===================================================================
--- trunk/coregrind/m_dispatch/dispatch-arm-linux.S 2010-08-22 12:00:40 UTC (rev 11278)
+++ trunk/coregrind/m_dispatch/dispatch-arm-linux.S 2010-08-22 12:03:45 UTC (rev 11279)
@@ -1,4 +1,3 @@
-
/*--------------------------------------------------------------------*/
/*--- The core dispatch loop, for jumping to a code address. ---*/
/*--- dispatch-arm-linux.S ---*/
@@ -30,6 +29,7 @@
*/
#if defined(VGP_arm_linux)
+ .fpu vfp
#include "pub_core_basics_asm.h"
#include "pub_core_dispatch_asm.h"
@@ -63,7 +63,7 @@
/* r0 (hence also [sp,#0]) holds guest_state */
/* r1 holds do_profiling */
mov r8, r0
- ldr r0, [r8, #OFFSET_arm_R15]
+ ldr r0, [r8, #OFFSET_arm_R15T]
/* fall into main loop (the right one) */
cmp r1, #0 /* do_profiling */
@@ -87,7 +87,7 @@
bne gsp_changed
/* save the jump address in the guest state */
- str r0, [r8, #OFFSET_arm_R15]
+ str r0, [r8, #OFFSET_arm_R15T]
/* Are we out of timeslice? If yes, defer to scheduler. */
ldr r1, =VG_(dispatch_ctr)
@@ -132,7 +132,7 @@
bne gsp_changed
/* save the jump address in the guest state */
- str r0, [r8, #OFFSET_arm_R15]
+ str r0, [r8, #OFFSET_arm_R15T]
/* Are we out of timeslice? If yes, defer to scheduler. */
ldr r1, =VG_(dispatch_ctr)
@@ -172,22 +172,22 @@
/*----------------------------------------------------*/
gsp_changed:
- // r0 = next guest addr (R15), r8 = modified gsp
+ // r0 = next guest addr (R15T), r8 = modified gsp
/* Someone messed with the gsp. Have to
defer to scheduler to resolve this. dispatch ctr
is not yet decremented, so no need to increment. */
- /* R15 is NOT up to date here. First, need to write
- r0 back to R15, but without trashing r8 since
+ /* R15T is NOT up to date here. First, need to write
+ r0 back to R15T, but without trashing r8 since
that holds the value we want to return to the scheduler.
Hence use r1 transiently for the guest state pointer. */
ldr r1, [sp, #0]
- str r0, [r1, #OFFSET_arm_R15]
+ str r0, [r1, #OFFSET_arm_R15T]
mov r0, r8 // "return modified gsp"
b run_innerloop_exit
/*NOTREACHED*/
counter_is_zero:
- /* R15 is up to date here */
+ /* R15T is up to date here */
/* Back out increment of the dispatch ctr */
ldr r1, =VG_(dispatch_ctr)
ldr r2, [r1]
@@ -198,7 +198,7 @@
/*NOTREACHED*/
fast_lookup_failed:
- /* R15 is up to date here */
+ /* R15T is up to date here */
/* Back out increment of the dispatch ctr */
ldr r1, =VG_(dispatch_ctr)
ldr r2, [r1]
@@ -215,8 +215,8 @@
/* We're leaving. Check that nobody messed with
FPSCR in ways we don't expect. */
fmrx r4, fpscr
- bic r4, #0xF0000000 /* mask out NZCV */
- bic r4, #0x0000001F /* mask out IXC,UFC,OFC,DZC,IOC */
+ bic r4, #0xF8000000 /* mask out NZCV and QC */
+ bic r4, #0x0000009F /* mask out IDC,IXC,UFC,OFC,DZC,IOC */
cmp r4, #0
bne invariant_violation
b run_innerloop_exit_REALLY
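The widened masks in the hunk above can be expressed as a plain C predicate. The bit positions are the documented FPSCR layout (NZCV in bits 31:28, QC in bit 27, and the cumulative exception flags IDC, IXC, UFC, OFC, DZC, IOC in bits 7 and 4:0); this is only an illustrative sketch of the invariant the assembly checks, not code from the commit:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the invariant check: after masking out the bits a guest
   is allowed to change (condition flags, saturation flag, cumulative
   exception flags), FPSCR should read as zero. */
#define FPSCR_NZCV_QC 0xF8000000u  /* N,Z,C,V (bits 31:28) and QC (bit 27) */
#define FPSCR_CUMUL   0x0000009Fu  /* IDC (bit 7), IXC,UFC,OFC,DZC,IOC (4:0) */

static int fpscr_invariant_ok(uint32_t fpscr)
{
   return (fpscr & ~(FPSCR_NZCV_QC | FPSCR_CUMUL)) == 0;
}
```

A guest that merely sets, say, N and IXC passes the check, while one that flips an exception-trap-enable bit (outside both masks) fails it.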
|
|
From: <sv...@va...> - 2010-08-22 12:00:49
|
Author: sewardj
Date: 2010-08-22 13:00:40 +0100 (Sun, 22 Aug 2010)
New Revision: 11278
Log:
Merge from branches/THUMB: track renaming of guest_R15 to guest_R15T.
Modified:
trunk/coregrind/m_coredump/coredump-elf.c
trunk/coregrind/m_debugger.c
trunk/coregrind/m_initimg/initimg-linux.c
trunk/coregrind/m_sigframe/sigframe-arm-linux.c
trunk/coregrind/m_syswrap/syswrap-main.c
Modified: trunk/coregrind/m_coredump/coredump-elf.c
===================================================================
--- trunk/coregrind/m_coredump/coredump-elf.c 2010-08-22 11:54:14 UTC (rev 11277)
+++ trunk/coregrind/m_coredump/coredump-elf.c 2010-08-22 12:00:40 UTC (rev 11278)
@@ -340,7 +340,7 @@
regs->ARM_ip = arch->vex.guest_R12;
regs->ARM_sp = arch->vex.guest_R13;
regs->ARM_lr = arch->vex.guest_R14;
- regs->ARM_pc = arch->vex.guest_R15;
+ regs->ARM_pc = arch->vex.guest_R15T;
regs->ARM_cpsr = LibVEX_GuestARM_get_cpsr( &((ThreadArchState*)arch)->vex );
#else
Modified: trunk/coregrind/m_debugger.c
===================================================================
--- trunk/coregrind/m_debugger.c 2010-08-22 11:54:14 UTC (rev 11277)
+++ trunk/coregrind/m_debugger.c 2010-08-22 12:00:40 UTC (rev 11278)
@@ -223,7 +223,7 @@
uregs.ARM_ip = vex->guest_R12;
uregs.ARM_sp = vex->guest_R13;
uregs.ARM_lr = vex->guest_R14;
- uregs.ARM_pc = vex->guest_R15;
+ uregs.ARM_pc = vex->guest_R15T;
uregs.ARM_cpsr = LibVEX_GuestARM_get_cpsr(vex);
return VG_(ptrace)(VKI_PTRACE_SETREGS, pid, NULL, &uregs);
Modified: trunk/coregrind/m_initimg/initimg-linux.c
===================================================================
--- trunk/coregrind/m_initimg/initimg-linux.c 2010-08-22 11:54:14 UTC (rev 11277)
+++ trunk/coregrind/m_initimg/initimg-linux.c 2010-08-22 12:00:40 UTC (rev 11278)
@@ -1025,8 +1025,8 @@
VG_(memset)(&arch->vex_shadow1, 0, sizeof(VexGuestARMState));
VG_(memset)(&arch->vex_shadow2, 0, sizeof(VexGuestARMState));
- arch->vex.guest_R13 = iifii.initial_client_SP;
- arch->vex.guest_R15 = iifii.initial_client_IP;
+ arch->vex.guest_R13 = iifii.initial_client_SP;
+ arch->vex.guest_R15T = iifii.initial_client_IP;
/* This is just EABI stuff. */
// FIXME jrs: what's this for?
Modified: trunk/coregrind/m_sigframe/sigframe-arm-linux.c
===================================================================
--- trunk/coregrind/m_sigframe/sigframe-arm-linux.c 2010-08-22 11:54:14 UTC (rev 11277)
+++ trunk/coregrind/m_sigframe/sigframe-arm-linux.c 2010-08-22 12:00:40 UTC (rev 11278)
@@ -139,7 +139,7 @@
SC2(ip,R12);
SC2(sp,R13);
SC2(lr,R14);
- SC2(pc,R15);
+ SC2(pc,R15T);
# undef SC2
sc->trap_no = trapno;
@@ -236,20 +236,20 @@
tst->arch.vex.guest_R1 = (Addr)&rsf->info;
tst->arch.vex.guest_R2 = (Addr)&rsf->sig.uc;
}
- else{
+ else {
build_sigframe(tst, (struct sigframe *)sp, siginfo, siguc,
handler, flags, mask, restorer);
- }
+ }
VG_(set_SP)(tid, sp);
VG_TRACK( post_reg_write, Vg_CoreSignal, tid, VG_O_STACK_PTR,
sizeof(Addr));
- tst->arch.vex.guest_R0 = sigNo;
+ tst->arch.vex.guest_R0 = sigNo;
- if(flags & VKI_SA_RESTORER)
- tst->arch.vex.guest_R14 = (Addr) restorer;
+ if (flags & VKI_SA_RESTORER)
+ tst->arch.vex.guest_R14 = (Addr) restorer;
- tst->arch.vex.guest_R15 = (Addr) handler; /* R15 == PC */
+ tst->arch.vex.guest_R15T = (Addr) handler; /* R15 == PC */
}
@@ -312,7 +312,7 @@
REST(ip,R12);
REST(sp,R13);
REST(lr,R14);
- REST(pc,R15);
+ REST(pc,R15T);
# undef REST
tst->arch.vex_shadow1 = priv->vex_shadow1;
@@ -323,8 +323,9 @@
if (VG_(clo_trace_signals))
VG_(message)(Vg_DebugMsg,
- "vg_pop_signal_frame (thread %d): isRT=%d valid magic; PC=%#x",
- tid, has_siginfo, tst->arch.vex.guest_R15);
+ "vg_pop_signal_frame (thread %d): "
+ "isRT=%d valid magic; PC=%#x",
+ tid, has_siginfo, tst->arch.vex.guest_R15T);
/* tell the tools */
VG_TRACK( post_deliver_signal, tid, sigNo );
Modified: trunk/coregrind/m_syswrap/syswrap-main.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-main.c 2010-08-22 11:54:14 UTC (rev 11277)
+++ trunk/coregrind/m_syswrap/syswrap-main.c 2010-08-22 12:00:40 UTC (rev 11278)
@@ -1864,14 +1864,16 @@
}
#elif defined(VGP_arm_linux)
- arch->vex.guest_R15 -= 4; // sizeof(arm instr)
+ // INTERWORKING FIXME. This is certainly wrong. Need to look at
+ // R15T to determine current mode, then back up accordingly.
+ arch->vex.guest_R15T -= 4; // sizeof(arm instr)
{
- UChar *p = (UChar*)arch->vex.guest_R15;
+ UChar *p = (UChar*)arch->vex.guest_R15T;
if ((p[3] & 0xF) != 0xF)
VG_(message)(Vg_DebugMsg,
"?! restarting over syscall that is not syscall at %#llx %02x %02x %02x %02x\n",
- arch->vex.guest_R15 + 0ULL, p[0], p[1], p[2], p[3]);
+ arch->vex.guest_R15T + 0ULL, p[0], p[1], p[2], p[3]);
vg_assert((p[3] & 0xF) == 0xF);
}
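The INTERWORKING FIXME in this hunk asks for mode-aware backup: an SVC is 4 bytes in ARM state but 2 bytes in Thumb state. A minimal sketch of such a fix, assuming (as the R15T naming suggests) that the Thumb state travels in bit 0 of the guest PC in the usual interworking-address convention; `back_up_over_svc` is a hypothetical helper, not Valgrind code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mode-aware backup over an interrupted syscall:
   subtract the SVC instruction size, which depends on whether the
   guest was in Thumb state (bit 0 of R15T set) or ARM state. */
static uint32_t back_up_over_svc(uint32_t r15t)
{
   int is_thumb = r15t & 1;               /* interworking Thumb bit */
   return r15t - (is_thumb ? 2 : 4);      /* Thumb SVC: 2 bytes; ARM: 4 */
}
```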
|
|
From: <sv...@va...> - 2010-08-22 11:54:23
|
Author: sewardj
Date: 2010-08-22 12:54:14 +0100 (Sun, 22 Aug 2010)
New Revision: 11277
Log:
Merge from branches/THUMB: tool-side handling of new primops required
for NEON support. Requires further checking.
Modified:
trunk/exp-ptrcheck/h_main.c
trunk/memcheck/mc_translate.c
Modified: trunk/exp-ptrcheck/h_main.c
===================================================================
--- trunk/exp-ptrcheck/h_main.c 2010-08-22 11:51:26 UTC (rev 11276)
+++ trunk/exp-ptrcheck/h_main.c 2010-08-22 11:54:14 UTC (rev 11277)
@@ -2330,7 +2330,9 @@
ADD(0, __NR_getuid32);
# endif
ADD(0, __NR_getxattr);
+# if defined(__NR_ioperm)
ADD(0, __NR_ioperm);
+# endif
ADD(0, __NR_inotify_add_watch);
ADD(0, __NR_inotify_init);
# if defined(__NR_inotify_init1)
@@ -4339,7 +4341,7 @@
case Iop_CmpEQ32x2: case Iop_CmpEQ16x4: case Iop_CmpGT8Sx8:
case Iop_CmpGT32Sx2: case Iop_CmpGT16Sx4: case Iop_MulHi16Sx4:
case Iop_Mul16x4: case Iop_ShlN32x2: case Iop_ShlN16x4:
- case Iop_SarN32x2: case Iop_SarN16x4: case Iop_ShrN32x2:
+ case Iop_SarN32x2: case Iop_SarN16x4: case Iop_ShrN32x2: case Iop_ShrN8x8:
case Iop_ShrN16x4: case Iop_Sub8x8: case Iop_Sub32x2:
case Iop_QSub8Sx8: case Iop_QSub16Sx4: case Iop_QSub8Ux8:
case Iop_QSub16Ux4: case Iop_Sub16x4: case Iop_InterleaveHI8x8:
Modified: trunk/memcheck/mc_translate.c
===================================================================
--- trunk/memcheck/mc_translate.c 2010-08-22 11:51:26 UTC (rev 11276)
+++ trunk/memcheck/mc_translate.c 2010-08-22 11:54:14 UTC (rev 11277)
@@ -398,6 +398,8 @@
}
/* build various kinds of expressions */
+#define triop(_op, _arg1, _arg2, _arg3) \
+ IRExpr_Triop((_op),(_arg1),(_arg2),(_arg3))
#define binop(_op, _arg1, _arg2) IRExpr_Binop((_op),(_arg1),(_arg2))
#define unop(_op, _arg) IRExpr_Unop((_op),(_arg))
#define mkU8(_n) IRExpr_Const(IRConst_U8(_n))
@@ -1849,6 +1851,28 @@
return at;
}
+/* --- --- ... and ... 32Fx2 versions of the same --- --- */
+
+static
+IRAtom* binary32Fx2 ( MCEnv* mce, IRAtom* vatomX, IRAtom* vatomY )
+{
+ IRAtom* at;
+ tl_assert(isShadowAtom(mce, vatomX));
+ tl_assert(isShadowAtom(mce, vatomY));
+ at = mkUifU64(mce, vatomX, vatomY);
+ at = assignNew('V', mce, Ity_I64, mkPCast32x2(mce, at));
+ return at;
+}
+
+static
+IRAtom* unary32Fx2 ( MCEnv* mce, IRAtom* vatomX )
+{
+ IRAtom* at;
+ tl_assert(isShadowAtom(mce, vatomX));
+ at = assignNew('V', mce, Ity_I64, mkPCast32x2(mce, vatomX));
+ return at;
+}
+
/* --- --- Vector saturated narrowing --- --- */
/* This is quite subtle. What to do is simple:
@@ -1918,7 +1942,55 @@
return at3;
}
+static
+IRAtom* vectorShortenV128 ( MCEnv* mce, IROp shorten_op,
+ IRAtom* vatom1)
+{
+ IRAtom *at1, *at2;
+ IRAtom* (*pcast)( MCEnv*, IRAtom* );
+ switch (shorten_op) {
+ case Iop_Shorten16x8: pcast = mkPCast16x8; break;
+ case Iop_Shorten32x4: pcast = mkPCast32x4; break;
+ case Iop_Shorten64x2: pcast = mkPCast64x2; break;
+ case Iop_QShortenS16Sx8: pcast = mkPCast16x8; break;
+ case Iop_QShortenU16Sx8: pcast = mkPCast16x8; break;
+ case Iop_QShortenU16Ux8: pcast = mkPCast16x8; break;
+ case Iop_QShortenS32Sx4: pcast = mkPCast32x4; break;
+ case Iop_QShortenU32Sx4: pcast = mkPCast32x4; break;
+ case Iop_QShortenU32Ux4: pcast = mkPCast32x4; break;
+ case Iop_QShortenS64Sx2: pcast = mkPCast64x2; break;
+ case Iop_QShortenU64Sx2: pcast = mkPCast64x2; break;
+ case Iop_QShortenU64Ux2: pcast = mkPCast64x2; break;
+ default: VG_(tool_panic)("vectorShortenV128");
+ }
+ tl_assert(isShadowAtom(mce,vatom1));
+ at1 = assignNew('V', mce, Ity_V128, pcast(mce, vatom1));
+ at2 = assignNew('V', mce, Ity_I64, unop(shorten_op, at1));
+ return at2;
+}
+static
+IRAtom* vectorLongenI64 ( MCEnv* mce, IROp longen_op,
+ IRAtom* vatom1)
+{
+ IRAtom *at1, *at2;
+ IRAtom* (*pcast)( MCEnv*, IRAtom* );
+ switch (longen_op) {
+ case Iop_Longen8Ux8: pcast = mkPCast16x8; break;
+ case Iop_Longen8Sx8: pcast = mkPCast16x8; break;
+ case Iop_Longen16Ux4: pcast = mkPCast32x4; break;
+ case Iop_Longen16Sx4: pcast = mkPCast32x4; break;
+ case Iop_Longen32Ux2: pcast = mkPCast64x2; break;
+ case Iop_Longen32Sx2: pcast = mkPCast64x2; break;
+ default: VG_(tool_panic)("vectorLongenI64");
+ }
+ tl_assert(isShadowAtom(mce,vatom1));
+ at1 = assignNew('V', mce, Ity_V128, unop(longen_op, vatom1));
+ at2 = assignNew('V', mce, Ity_V128, pcast(mce, at1));
+ return at2;
+}
+
+
/* --- --- Vector integer arithmetic --- --- */
/* Simple ... UifU the args and per-lane pessimise the results. */
@@ -1990,7 +2062,16 @@
return at;
}
+static
+IRAtom* binary64Ix1 ( MCEnv* mce, IRAtom* vatom1, IRAtom* vatom2 )
+{
+ IRAtom* at;
+ at = mkUifU64(mce, vatom1, vatom2);
+ at = mkPCastTo(mce, Ity_I64, at);
+ return at;
+}
+
/*------------------------------------------------------------*/
/*--- Generate shadow values from all kinds of IRExprs. ---*/
/*------------------------------------------------------------*/
@@ -2077,6 +2158,17 @@
case Iop_DivF32:
/* I32(rm) x F32 x F32 -> I32 */
return mkLazy3(mce, Ity_I32, vatom1, vatom2, vatom3);
+ case Iop_ExtractV128:
+ complainIfUndefined(mce, atom3);
+ return assignNew('V', mce, Ity_V128, triop(op, vatom1, vatom2, atom3));
+ case Iop_Extract64:
+ complainIfUndefined(mce, atom3);
+ return assignNew('V', mce, Ity_I64, triop(op, vatom1, vatom2, atom3));
+ case Iop_SetElem8x8:
+ case Iop_SetElem16x4:
+ case Iop_SetElem32x2:
+ complainIfUndefined(mce, atom2);
+ return assignNew('V', mce, Ity_I64, triop(op, vatom1, atom2, vatom3));
default:
ppIROp(op);
VG_(tool_panic)("memcheck:expr2vbits_Triop");
@@ -2107,6 +2199,7 @@
/* 64-bit SIMD */
+ case Iop_ShrN8x8:
case Iop_ShrN16x4:
case Iop_ShrN32x2:
case Iop_SarN8x8:
@@ -2125,20 +2218,29 @@
return vectorNarrow64(mce, op, vatom1, vatom2);
case Iop_Min8Ux8:
+ case Iop_Min8Sx8:
case Iop_Max8Ux8:
+ case Iop_Max8Sx8:
case Iop_Avg8Ux8:
case Iop_QSub8Sx8:
case Iop_QSub8Ux8:
case Iop_Sub8x8:
case Iop_CmpGT8Sx8:
+ case Iop_CmpGT8Ux8:
case Iop_CmpEQ8x8:
case Iop_QAdd8Sx8:
case Iop_QAdd8Ux8:
+ case Iop_QSal8x8:
+ case Iop_QShl8x8:
case Iop_Add8x8:
+ case Iop_Mul8x8:
+ case Iop_PolynomialMul8x8:
return binary8Ix8(mce, vatom1, vatom2);
case Iop_Min16Sx4:
+ case Iop_Min16Ux4:
case Iop_Max16Sx4:
+ case Iop_Max16Ux4:
case Iop_Avg16Ux4:
case Iop_QSub16Ux4:
case Iop_QSub16Sx4:
@@ -2147,19 +2249,136 @@
case Iop_MulHi16Sx4:
case Iop_MulHi16Ux4:
case Iop_CmpGT16Sx4:
+ case Iop_CmpGT16Ux4:
case Iop_CmpEQ16x4:
case Iop_QAdd16Sx4:
case Iop_QAdd16Ux4:
+ case Iop_QSal16x4:
+ case Iop_QShl16x4:
case Iop_Add16x4:
+ case Iop_QDMulHi16Sx4:
+ case Iop_QRDMulHi16Sx4:
return binary16Ix4(mce, vatom1, vatom2);
case Iop_Sub32x2:
case Iop_Mul32x2:
+ case Iop_Max32Sx2:
+ case Iop_Max32Ux2:
+ case Iop_Min32Sx2:
+ case Iop_Min32Ux2:
case Iop_CmpGT32Sx2:
+ case Iop_CmpGT32Ux2:
case Iop_CmpEQ32x2:
case Iop_Add32x2:
+ case Iop_QAdd32Ux2:
+ case Iop_QAdd32Sx2:
+ case Iop_QSub32Ux2:
+ case Iop_QSub32Sx2:
+ case Iop_QSal32x2:
+ case Iop_QShl32x2:
+ case Iop_QDMulHi32Sx2:
+ case Iop_QRDMulHi32Sx2:
return binary32Ix2(mce, vatom1, vatom2);
+ case Iop_QSub64Ux1:
+ case Iop_QSub64Sx1:
+ case Iop_QAdd64Ux1:
+ case Iop_QAdd64Sx1:
+ case Iop_QSal64x1:
+ case Iop_QShl64x1:
+ case Iop_Sal64x1:
+ return binary64Ix1(mce, vatom1, vatom2);
+
+ case Iop_QShlN8Sx8:
+ case Iop_QShlN8x8:
+ case Iop_QSalN8x8:
+ complainIfUndefined(mce, atom2);
+ return mkPCast8x8(mce, vatom1);
+
+ case Iop_QShlN16Sx4:
+ case Iop_QShlN16x4:
+ case Iop_QSalN16x4:
+ complainIfUndefined(mce, atom2);
+ return mkPCast16x4(mce, vatom1);
+
+ case Iop_QShlN32Sx2:
+ case Iop_QShlN32x2:
+ case Iop_QSalN32x2:
+ complainIfUndefined(mce, atom2);
+ return mkPCast32x2(mce, vatom1);
+
+ case Iop_QShlN64Sx1:
+ case Iop_QShlN64x1:
+ case Iop_QSalN64x1:
+ complainIfUndefined(mce, atom2);
+ return mkPCast32x2(mce, vatom1);
+
+ case Iop_PwMax32Sx2:
+ case Iop_PwMax32Ux2:
+ case Iop_PwMin32Sx2:
+ case Iop_PwMin32Ux2:
+ case Iop_PwMax32Fx2:
+ case Iop_PwMin32Fx2:
+ return assignNew('V', mce, Ity_I64, binop(Iop_PwMax32Ux2, mkPCast32x2(mce, vatom1),
+ mkPCast32x2(mce, vatom2)));
+
+ case Iop_PwMax16Sx4:
+ case Iop_PwMax16Ux4:
+ case Iop_PwMin16Sx4:
+ case Iop_PwMin16Ux4:
+ return assignNew('V', mce, Ity_I64, binop(Iop_PwMax16Ux4, mkPCast16x4(mce, vatom1),
+ mkPCast16x4(mce, vatom2)));
+
+ case Iop_PwMax8Sx8:
+ case Iop_PwMax8Ux8:
+ case Iop_PwMin8Sx8:
+ case Iop_PwMin8Ux8:
+ return assignNew('V', mce, Ity_I64, binop(Iop_PwMax8Ux8, mkPCast8x8(mce, vatom1),
+ mkPCast8x8(mce, vatom2)));
+
+ case Iop_PwAdd32x2:
+ case Iop_PwAdd32Fx2:
+ return mkPCast32x2(mce,
+ assignNew('V', mce, Ity_I64, binop(Iop_PwAdd32x2, mkPCast32x2(mce, vatom1),
+ mkPCast32x2(mce, vatom2))));
+
+ case Iop_PwAdd16x4:
+ return mkPCast16x4(mce,
+ assignNew('V', mce, Ity_I64, binop(op, mkPCast16x4(mce, vatom1),
+ mkPCast16x4(mce, vatom2))));
+
+ case Iop_PwAdd8x8:
+ return mkPCast8x8(mce,
+ assignNew('V', mce, Ity_I64, binop(op, mkPCast8x8(mce, vatom1),
+ mkPCast8x8(mce, vatom2))));
+
+ case Iop_Shl8x8:
+ case Iop_Shr8x8:
+ case Iop_Sar8x8:
+ case Iop_Sal8x8:
+ return mkUifU64(mce,
+ assignNew('V', mce, Ity_I64, binop(op, vatom1, atom2)),
+ mkPCast8x8(mce,vatom2)
+ );
+
+ case Iop_Shl16x4:
+ case Iop_Shr16x4:
+ case Iop_Sar16x4:
+ case Iop_Sal16x4:
+ return mkUifU64(mce,
+ assignNew('V', mce, Ity_I64, binop(op, vatom1, atom2)),
+ mkPCast16x4(mce,vatom2)
+ );
+
+ case Iop_Shl32x2:
+ case Iop_Shr32x2:
+ case Iop_Sar32x2:
+ case Iop_Sal32x2:
+ return mkUifU64(mce,
+ assignNew('V', mce, Ity_I64, binop(op, vatom1, atom2)),
+ mkPCast32x2(mce,vatom2)
+ );
+
/* 64-bit data-steering */
case Iop_InterleaveLO32x2:
case Iop_InterleaveLO16x4:
@@ -2167,10 +2386,26 @@
case Iop_InterleaveHI32x2:
case Iop_InterleaveHI16x4:
case Iop_InterleaveHI8x8:
+ case Iop_CatOddLanes8x8:
+ case Iop_CatEvenLanes8x8:
case Iop_CatOddLanes16x4:
case Iop_CatEvenLanes16x4:
+ case Iop_InterleaveOddLanes8x8:
+ case Iop_InterleaveEvenLanes8x8:
+ case Iop_InterleaveOddLanes16x4:
+ case Iop_InterleaveEvenLanes16x4:
return assignNew('V', mce, Ity_I64, binop(op, vatom1, vatom2));
+ case Iop_GetElem8x8:
+ complainIfUndefined(mce, atom2);
+ return assignNew('V', mce, Ity_I8, binop(op, vatom1, atom2));
+ case Iop_GetElem16x4:
+ complainIfUndefined(mce, atom2);
+ return assignNew('V', mce, Ity_I16, binop(op, vatom1, atom2));
+ case Iop_GetElem32x2:
+ complainIfUndefined(mce, atom2);
+ return assignNew('V', mce, Ity_I32, binop(op, vatom1, atom2));
+
/* Perm8x8: rearrange values in left arg using steering values
from right arg. So rearrange the vbits in the same way but
pessimise wrt steering values. */
@@ -2183,16 +2418,18 @@
/* V128-bit SIMD */
+ case Iop_ShrN8x16:
case Iop_ShrN16x8:
case Iop_ShrN32x4:
case Iop_ShrN64x2:
+ case Iop_SarN8x16:
case Iop_SarN16x8:
case Iop_SarN32x4:
+ case Iop_SarN64x2:
+ case Iop_ShlN8x16:
case Iop_ShlN16x8:
case Iop_ShlN32x4:
case Iop_ShlN64x2:
- case Iop_ShlN8x16:
- case Iop_SarN8x16:
/* Same scheme as with all other shifts. Note: 22 Oct 05:
this is wrong now, scalar shifts are done properly lazily.
Vector shifts should be fixed too. */
@@ -2203,6 +2440,7 @@
case Iop_Shl8x16:
case Iop_Shr8x16:
case Iop_Sar8x16:
+ case Iop_Sal8x16:
case Iop_Rol8x16:
return mkUifUV128(mce,
assignNew('V', mce, Ity_V128, binop(op, vatom1, atom2)),
@@ -2212,6 +2450,7 @@
case Iop_Shl16x8:
case Iop_Shr16x8:
case Iop_Sar16x8:
+ case Iop_Sal16x8:
case Iop_Rol16x8:
return mkUifUV128(mce,
assignNew('V', mce, Ity_V128, binop(op, vatom1, atom2)),
@@ -2221,12 +2460,36 @@
case Iop_Shl32x4:
case Iop_Shr32x4:
case Iop_Sar32x4:
+ case Iop_Sal32x4:
case Iop_Rol32x4:
return mkUifUV128(mce,
assignNew('V', mce, Ity_V128, binop(op, vatom1, atom2)),
mkPCast32x4(mce,vatom2)
);
+ case Iop_Shl64x2:
+ case Iop_Shr64x2:
+ case Iop_Sar64x2:
+ case Iop_Sal64x2:
+ return mkUifUV128(mce,
+ assignNew('V', mce, Ity_V128, binop(op, vatom1, atom2)),
+ mkPCast64x2(mce,vatom2)
+ );
+
+ case Iop_F32ToFixed32Ux4_RZ:
+ case Iop_F32ToFixed32Sx4_RZ:
+ case Iop_Fixed32UToF32x4_RN:
+ case Iop_Fixed32SToF32x4_RN:
+ complainIfUndefined(mce, atom2);
+ return mkPCast32x4(mce, vatom1);
+
+ case Iop_F32ToFixed32Ux2_RZ:
+ case Iop_F32ToFixed32Sx2_RZ:
+ case Iop_Fixed32UToF32x2_RN:
+ case Iop_Fixed32SToF32x2_RN:
+ complainIfUndefined(mce, atom2);
+ return mkPCast32x2(mce, vatom1);
+
case Iop_QSub8Ux16:
case Iop_QSub8Sx16:
case Iop_Sub8x16:
@@ -2241,7 +2504,11 @@
case Iop_Avg8Sx16:
case Iop_QAdd8Ux16:
case Iop_QAdd8Sx16:
+ case Iop_QSal8x16:
+ case Iop_QShl8x16:
case Iop_Add8x16:
+ case Iop_Mul8x16:
+ case Iop_PolynomialMul8x16:
return binary8Ix16(mce, vatom1, vatom2);
case Iop_QSub16Ux8:
@@ -2261,7 +2528,11 @@
case Iop_Avg16Sx8:
case Iop_QAdd16Ux8:
case Iop_QAdd16Sx8:
+ case Iop_QSal16x8:
+ case Iop_QShl16x8:
case Iop_Add16x8:
+ case Iop_QDMulHi16Sx8:
+ case Iop_QRDMulHi16Sx8:
return binary16Ix8(mce, vatom1, vatom2);
case Iop_Sub32x4:
@@ -2272,6 +2543,8 @@
case Iop_QAdd32Ux4:
case Iop_QSub32Sx4:
case Iop_QSub32Ux4:
+ case Iop_QSal32x4:
+ case Iop_QShl32x4:
case Iop_Avg32Ux4:
case Iop_Avg32Sx4:
case Iop_Add32x4:
@@ -2280,11 +2553,19 @@
case Iop_Min32Ux4:
case Iop_Min32Sx4:
case Iop_Mul32x4:
+ case Iop_QDMulHi32Sx4:
+ case Iop_QRDMulHi32Sx4:
return binary32Ix4(mce, vatom1, vatom2);
case Iop_Sub64x2:
case Iop_Add64x2:
case Iop_CmpGT64Sx2:
+ case Iop_QSal64x2:
+ case Iop_QShl64x2:
+ case Iop_QAdd64Ux2:
+ case Iop_QAdd64Sx2:
+ case Iop_QSub64Ux2:
+ case Iop_QSub64Sx2:
return binary64Ix2(mce, vatom1, vatom2);
case Iop_QNarrow32Sx4:
@@ -2329,8 +2610,22 @@
case Iop_CmpGT32Fx4:
case Iop_CmpGE32Fx4:
case Iop_Add32Fx4:
+ case Iop_Recps32Fx4:
+ case Iop_Rsqrts32Fx4:
return binary32Fx4(mce, vatom1, vatom2);
+ case Iop_Sub32Fx2:
+ case Iop_Mul32Fx2:
+ case Iop_Min32Fx2:
+ case Iop_Max32Fx2:
+ case Iop_CmpEQ32Fx2:
+ case Iop_CmpGT32Fx2:
+ case Iop_CmpGE32Fx2:
+ case Iop_Add32Fx2:
+ case Iop_Recps32Fx2:
+ case Iop_Rsqrts32Fx2:
+ return binary32Fx2(mce, vatom1, vatom2);
+
case Iop_Sub32F0x4:
case Iop_Mul32F0x4:
case Iop_Min32F0x4:
@@ -2343,6 +2638,63 @@
case Iop_Add32F0x4:
return binary32F0x4(mce, vatom1, vatom2);
+ case Iop_QShlN8Sx16:
+ case Iop_QShlN8x16:
+ case Iop_QSalN8x16:
+ complainIfUndefined(mce, atom2);
+ return mkPCast8x16(mce, vatom1);
+
+ case Iop_QShlN16Sx8:
+ case Iop_QShlN16x8:
+ case Iop_QSalN16x8:
+ complainIfUndefined(mce, atom2);
+ return mkPCast16x8(mce, vatom1);
+
+ case Iop_QShlN32Sx4:
+ case Iop_QShlN32x4:
+ case Iop_QSalN32x4:
+ complainIfUndefined(mce, atom2);
+ return mkPCast32x4(mce, vatom1);
+
+ case Iop_QShlN64Sx2:
+ case Iop_QShlN64x2:
+ case Iop_QSalN64x2:
+ complainIfUndefined(mce, atom2);
+ return mkPCast32x4(mce, vatom1);
+
+ case Iop_Mull32Sx2:
+ case Iop_Mull32Ux2:
+ case Iop_QDMulLong32Sx2:
+ return vectorLongenI64(mce, Iop_Longen32Sx2,
+ mkUifU64(mce, vatom1, vatom2));
+
+ case Iop_Mull16Sx4:
+ case Iop_Mull16Ux4:
+ case Iop_QDMulLong16Sx4:
+ return vectorLongenI64(mce, Iop_Longen16Sx4,
+ mkUifU64(mce, vatom1, vatom2));
+
+ case Iop_Mull8Sx8:
+ case Iop_Mull8Ux8:
+ case Iop_PolynomialMull8x8:
+ return vectorLongenI64(mce, Iop_Longen8Sx8,
+ mkUifU64(mce, vatom1, vatom2));
+
+ case Iop_PwAdd32x4:
+ return mkPCast32x4(mce,
+ assignNew('V', mce, Ity_V128, binop(op, mkPCast32x4(mce, vatom1),
+ mkPCast32x4(mce, vatom2))));
+
+ case Iop_PwAdd16x8:
+ return mkPCast16x8(mce,
+ assignNew('V', mce, Ity_V128, binop(op, mkPCast16x8(mce, vatom1),
+ mkPCast16x8(mce, vatom2))));
+
+ case Iop_PwAdd8x16:
+ return mkPCast8x16(mce,
+ assignNew('V', mce, Ity_V128, binop(op, mkPCast8x16(mce, vatom1),
+ mkPCast8x16(mce, vatom2))));
+
/* V128-bit data-steering */
case Iop_SetV128lo32:
case Iop_SetV128lo64:
@@ -2355,8 +2707,33 @@
case Iop_InterleaveHI32x4:
case Iop_InterleaveHI16x8:
case Iop_InterleaveHI8x16:
+ case Iop_CatOddLanes8x16:
+ case Iop_CatOddLanes16x8:
+ case Iop_CatOddLanes32x4:
+ case Iop_CatEvenLanes8x16:
+ case Iop_CatEvenLanes16x8:
+ case Iop_CatEvenLanes32x4:
+ case Iop_InterleaveOddLanes8x16:
+ case Iop_InterleaveOddLanes16x8:
+ case Iop_InterleaveOddLanes32x4:
+ case Iop_InterleaveEvenLanes8x16:
+ case Iop_InterleaveEvenLanes16x8:
+ case Iop_InterleaveEvenLanes32x4:
return assignNew('V', mce, Ity_V128, binop(op, vatom1, vatom2));
-
+
+ case Iop_GetElem8x16:
+ complainIfUndefined(mce, atom2);
+ return assignNew('V', mce, Ity_I8, binop(op, vatom1, atom2));
+ case Iop_GetElem16x8:
+ complainIfUndefined(mce, atom2);
+ return assignNew('V', mce, Ity_I16, binop(op, vatom1, atom2));
+ case Iop_GetElem32x4:
+ complainIfUndefined(mce, atom2);
+ return assignNew('V', mce, Ity_I32, binop(op, vatom1, atom2));
+ case Iop_GetElem64x2:
+ complainIfUndefined(mce, atom2);
+ return assignNew('V', mce, Ity_I64, binop(op, vatom1, atom2));
+
/* Perm8x16: rearrange values in left arg using steering values
from right arg. So rearrange the vbits in the same way but
pessimise wrt steering values. */
@@ -2677,8 +3054,21 @@
case Iop_RoundF32x4_RP:
case Iop_RoundF32x4_RN:
case Iop_RoundF32x4_RZ:
+ case Iop_Recip32x4:
+ case Iop_Abs32Fx4:
+ case Iop_Neg32Fx4:
+ case Iop_Rsqrte32Fx4:
return unary32Fx4(mce, vatom);
+ case Iop_I32UtoFx2:
+ case Iop_I32StoFx2:
+ case Iop_Recip32Fx2:
+ case Iop_Recip32x2:
+ case Iop_Abs32Fx2:
+ case Iop_Neg32Fx2:
+ case Iop_Rsqrte32Fx2:
+ return unary32Fx2(mce, vatom);
+
case Iop_Sqrt32F0x4:
case Iop_RSqrt32F0x4:
case Iop_Recip32F0x4:
@@ -2689,6 +3079,12 @@
case Iop_Dup8x16:
case Iop_Dup16x8:
case Iop_Dup32x4:
+ case Iop_Reverse16_8x16:
+ case Iop_Reverse32_8x16:
+ case Iop_Reverse32_16x8:
+ case Iop_Reverse64_8x16:
+ case Iop_Reverse64_16x8:
+ case Iop_Reverse64_32x4:
return assignNew('V', mce, Ity_V128, unop(op, vatom));
case Iop_F32toF64:
@@ -2723,6 +3119,15 @@
case Iop_V128HIto64:
case Iop_128HIto64:
case Iop_128to64:
+ case Iop_Dup8x8:
+ case Iop_Dup16x4:
+ case Iop_Dup32x2:
+ case Iop_Reverse16_8x8:
+ case Iop_Reverse32_8x8:
+ case Iop_Reverse32_16x4:
+ case Iop_Reverse64_8x8:
+ case Iop_Reverse64_16x4:
+ case Iop_Reverse64_32x2:
return assignNew('V', mce, Ity_I64, unop(op, vatom));
case Iop_64to32:
@@ -2768,6 +3173,106 @@
case Iop_Not1:
return vatom;
+ case Iop_CmpNEZ8x8:
+ case Iop_Cnt8x8:
+ case Iop_Clz8Sx8:
+ case Iop_Cls8Sx8:
+ case Iop_Abs8x8:
+ return mkPCast8x8(mce, vatom);
+
+ case Iop_CmpNEZ8x16:
+ case Iop_Cnt8x16:
+ case Iop_Clz8Sx16:
+ case Iop_Cls8Sx16:
+ case Iop_Abs8x16:
+ return mkPCast8x16(mce, vatom);
+
+ case Iop_CmpNEZ16x4:
+ case Iop_Clz16Sx4:
+ case Iop_Cls16Sx4:
+ case Iop_Abs16x4:
+ return mkPCast16x4(mce, vatom);
+
+ case Iop_CmpNEZ16x8:
+ case Iop_Clz16Sx8:
+ case Iop_Cls16Sx8:
+ case Iop_Abs16x8:
+ return mkPCast16x8(mce, vatom);
+
+ case Iop_CmpNEZ32x2:
+ case Iop_Clz32Sx2:
+ case Iop_Cls32Sx2:
+ case Iop_FtoI32Ux2_RZ:
+ case Iop_FtoI32Sx2_RZ:
+ case Iop_Abs32x2:
+ return mkPCast32x2(mce, vatom);
+
+ case Iop_CmpNEZ32x4:
+ case Iop_Clz32Sx4:
+ case Iop_Cls32Sx4:
+ case Iop_FtoI32Ux4_RZ:
+ case Iop_FtoI32Sx4_RZ:
+ case Iop_Abs32x4:
+ return mkPCast32x4(mce, vatom);
+
+ case Iop_CmpwNEZ64:
+ return mkPCastTo(mce, Ity_I64, vatom);
+
+ case Iop_CmpNEZ64x2:
+ return mkPCast64x2(mce, vatom);
+
+ case Iop_Shorten16x8:
+ case Iop_Shorten32x4:
+ case Iop_Shorten64x2:
+ case Iop_QShortenS16Sx8:
+ case Iop_QShortenU16Sx8:
+ case Iop_QShortenU16Ux8:
+ case Iop_QShortenS32Sx4:
+ case Iop_QShortenU32Sx4:
+ case Iop_QShortenU32Ux4:
+ case Iop_QShortenS64Sx2:
+ case Iop_QShortenU64Sx2:
+ case Iop_QShortenU64Ux2:
+ return vectorShortenV128(mce, op, vatom);
+
+ case Iop_Longen8Sx8:
+ case Iop_Longen8Ux8:
+ case Iop_Longen16Sx4:
+ case Iop_Longen16Ux4:
+ case Iop_Longen32Sx2:
+ case Iop_Longen32Ux2:
+ return vectorLongenI64(mce, op, vatom);
+
+ case Iop_PwAddL32Ux2:
+ case Iop_PwAddL32Sx2:
+ return mkPCastTo(mce, Ity_I64,
+ assignNew('V', mce, Ity_I64, unop(op, mkPCast32x2(mce, vatom))));
+
+ case Iop_PwAddL16Ux4:
+ case Iop_PwAddL16Sx4:
+ return mkPCast32x2(mce,
+ assignNew('V', mce, Ity_I64, unop(op, mkPCast16x4(mce, vatom))));
+
+ case Iop_PwAddL8Ux8:
+ case Iop_PwAddL8Sx8:
+ return mkPCast16x4(mce,
+ assignNew('V', mce, Ity_I64, unop(op, mkPCast8x8(mce, vatom))));
+
+ case Iop_PwAddL32Ux4:
+ case Iop_PwAddL32Sx4:
+ return mkPCast64x2(mce,
+ assignNew('V', mce, Ity_V128, unop(op, mkPCast32x4(mce, vatom))));
+
+ case Iop_PwAddL16Ux8:
+ case Iop_PwAddL16Sx8:
+ return mkPCast32x4(mce,
+ assignNew('V', mce, Ity_V128, unop(op, mkPCast16x8(mce, vatom))));
+
+ case Iop_PwAddL8Ux16:
+ case Iop_PwAddL8Sx16:
+ return mkPCast16x8(mce,
+ assignNew('V', mce, Ity_V128, unop(op, mkPCast8x16(mce, vatom))));
+
default:
ppIROp(op);
VG_(tool_panic)("memcheck:expr2vbits_Unop");
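Most of the new NEON cases above funnel into the per-lane "pessimising cast" helpers (mkPCast8x8, mkPCast32x4, and friends). The underlying idea, sketched here in plain C for a 16x4 lane vector of V-bits (1 = undefined), is: if any bit of a lane is undefined, mark the entire lane undefined. This is an illustrative model of the scheme, not the Memcheck implementation itself:

```c
#include <assert.h>
#include <stdint.h>

/* Per-lane pessimising cast over a 64-bit vector of definedness bits:
   each 16-bit lane becomes all-ones if it contains any set (undefined)
   bit, else all-zeroes. */
static uint64_t pcast16x4(uint64_t vbits)
{
   uint64_t out = 0;
   for (int lane = 0; lane < 4; lane++) {
      uint64_t m = (vbits >> (lane * 16)) & 0xFFFFu;
      if (m != 0)
         out |= 0xFFFFull << (lane * 16);  /* pessimise the whole lane */
   }
   return out;
}
```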
|
|
From: <sv...@va...> - 2010-08-22 11:51:35
|
Author: sewardj
Date: 2010-08-22 12:51:26 +0100 (Sun, 22 Aug 2010)
New Revision: 11276
Log:
Merge from branches/THUMB: build system fixes for Thumb
Modified:
trunk/Makefile.all.am
trunk/Makefile.vex.am
Modified: trunk/Makefile.all.am
===================================================================
--- trunk/Makefile.all.am 2010-08-22 10:29:32 UTC (rev 11275)
+++ trunk/Makefile.all.am 2010-08-22 11:51:26 UTC (rev 11276)
@@ -153,7 +153,7 @@
AM_FLAG_M3264_ARM_LINUX = @FLAG_M32@
AM_CFLAGS_ARM_LINUX = @FLAG_M32@ @PREFERRED_STACK_BOUNDARY@ \
- $(AM_CFLAGS_BASE)
+ $(AM_CFLAGS_BASE) -marm
AM_CCASFLAGS_ARM_LINUX = $(AM_CPPFLAGS_ARM_LINUX) @FLAG_M32@ -g
AM_FLAG_M3264_PPC32_AIX5 = @FLAG_MAIX32@
Modified: trunk/Makefile.vex.am
===================================================================
--- trunk/Makefile.vex.am 2010-08-22 10:29:32 UTC (rev 11275)
+++ trunk/Makefile.vex.am 2010-08-22 11:51:26 UTC (rev 11276)
@@ -54,8 +54,10 @@
# differently -- with a leading $ on x86/amd64 but none on ppc32/64.
pub/libvex_guest_offsets.h:
rm -f auxprogs/genoffsets.s
- $(CC) $(LIBVEX_CFLAGS) -O -S -o auxprogs/genoffsets.s \
- auxprogs/genoffsets.c
+ $(CC) $(LIBVEX_CFLAGS) \
+ $(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) \
+ -O -S -o auxprogs/genoffsets.s \
+ auxprogs/genoffsets.c
grep xyzzy auxprogs/genoffsets.s | grep define \
| sed "s/xyzzy\\$$//g" \
| sed "s/xyzzy#//g" \
|
|
From: <sv...@va...> - 2010-08-22 10:29:42
|
Author: sewardj
Date: 2010-08-22 11:29:32 +0100 (Sun, 22 Aug 2010)
New Revision: 11275
Log:
Back out a bunch of experimental ARM-Thumb interworking changes. It
appears the core can handle interworking with almost zero changes.
Only known place where it needs special casing is when backing up over
an interrupted syscall, since the encoding of the SVC instruction is
different for ARM vs Thumb.
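The encoding difference the log refers to can be sketched as two recognizers; these are the standard ARM/Thumb SVC encodings (ARM: cond 1111 imm24; Thumb: 0xDF00 | imm8), shown here for illustration rather than taken from this commit:

```c
#include <assert.h>
#include <stdint.h>

/* ARM SVC:   bits 27:24 == 1111, so the low nibble of the top byte is 0xF.
   Thumb SVC: the 16-bit encoding is 0xDF00 | imm8. */
static int is_arm_svc(uint32_t insn)   { return ((insn >> 24) & 0xFu) == 0xFu; }
static int is_thumb_svc(uint16_t insn) { return (insn >> 8) == 0xDFu; }
```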
Modified:
branches/THUMB/coregrind/m_coredump/coredump-elf.c
branches/THUMB/coregrind/m_debugger.c
branches/THUMB/coregrind/m_debuginfo/debuginfo.c
branches/THUMB/coregrind/m_execontext.c
branches/THUMB/coregrind/m_machine.c
branches/THUMB/coregrind/m_main.c
branches/THUMB/coregrind/m_scheduler/scheduler.c
branches/THUMB/coregrind/m_sigframe/sigframe-arm-linux.c
branches/THUMB/coregrind/m_signals.c
branches/THUMB/coregrind/m_translate.c
branches/THUMB/coregrind/pub_core_machine.h
branches/THUMB/include/pub_tool_machine.h
Modified: branches/THUMB/coregrind/m_coredump/coredump-elf.c
===================================================================
--- branches/THUMB/coregrind/m_coredump/coredump-elf.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_coredump/coredump-elf.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -340,7 +340,7 @@
regs->ARM_ip = arch->vex.guest_R12;
regs->ARM_sp = arch->vex.guest_R13;
regs->ARM_lr = arch->vex.guest_R14;
- regs->ARM_pc = VG_ENCIN_TO_IP(arch->vex.guest_R15T);
+ regs->ARM_pc = arch->vex.guest_R15T;
regs->ARM_cpsr = LibVEX_GuestARM_get_cpsr( &((ThreadArchState*)arch)->vex );
#else
Modified: branches/THUMB/coregrind/m_debugger.c
===================================================================
--- branches/THUMB/coregrind/m_debugger.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_debugger.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -224,7 +224,7 @@
uregs.ARM_ip = vex->guest_R12;
uregs.ARM_sp = vex->guest_R13;
uregs.ARM_lr = vex->guest_R14;
- uregs.ARM_pc = VG_ENCIN_TO_IP(vex->guest_R15T);
+ uregs.ARM_pc = vex->guest_R15T;
uregs.ARM_cpsr = LibVEX_GuestARM_get_cpsr(vex);
return VG_(ptrace)(VKI_PTRACE_SETREGS, pid, NULL, &uregs);
Modified: branches/THUMB/coregrind/m_debuginfo/debuginfo.c
===================================================================
--- branches/THUMB/coregrind/m_debuginfo/debuginfo.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_debuginfo/debuginfo.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -3057,7 +3057,7 @@
continue; /* ignore obviously stupid cases */
if (consider_vars_in_frame( dname1, dname2,
data_addr,
- VG_(get_ENCIP_IP)(tid),
+ VG_(get_IP)(tid),
VG_(get_SP)(tid),
VG_(get_FP)(tid), tid, 0 )) {
zterm_XA( dname1 );
Modified: branches/THUMB/coregrind/m_execontext.c
===================================================================
--- branches/THUMB/coregrind/m_execontext.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_execontext.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -311,7 +311,7 @@
if (first_ip_only) {
n_ips = 1;
- ips[0] = VG_(get_ENCIP_IP)(tid);
+ ips[0] = VG_(get_IP)(tid);
} else {
n_ips = VG_(get_StackTrace)( tid, ips, VG_(clo_backtrace_size),
NULL/*array to dump SP values in*/,
Modified: branches/THUMB/coregrind/m_machine.c
===================================================================
--- branches/THUMB/coregrind/m_machine.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_machine.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -39,41 +39,27 @@
#include "pub_core_debuglog.h"
-#define ENCIN_PTR(regs) ((regs).vex.VG_ENCIN_PTR)
+#define INSTR_PTR(regs) ((regs).vex.VG_INSTR_PTR)
#define STACK_PTR(regs) ((regs).vex.VG_STACK_PTR)
#define FRAME_PTR(regs) ((regs).vex.VG_FRAME_PTR)
-Addr VG_(get_ENCIP) ( ThreadId tid ) {
- return ENCIN_PTR( VG_(threads)[tid].arch );
+Addr VG_(get_IP) ( ThreadId tid ) {
+ return INSTR_PTR( VG_(threads)[tid].arch );
}
-Addr VG_(get_ENCIP_IP) ( ThreadId tid ) {
- return VG_ENCIN_TO_IP(ENCIN_PTR( VG_(threads)[tid].arch ));
-}
-UWord VG_(get_ENCIP_AUX) ( ThreadId tid ) {
- return VG_ENCIN_TO_AUX(ENCIN_PTR( VG_(threads)[tid].arch ));
-}
-
Addr VG_(get_SP) ( ThreadId tid ) {
return STACK_PTR( VG_(threads)[tid].arch );
}
-
Addr VG_(get_FP) ( ThreadId tid ) {
return FRAME_PTR( VG_(threads)[tid].arch );
}
-
-void VG_(set_ENCIP) ( ThreadId tid, Addr encip ) {
- ENCIN_PTR( VG_(threads)[tid].arch ) = encip;
+void VG_(set_IP) ( ThreadId tid, Addr ip ) {
+ INSTR_PTR( VG_(threads)[tid].arch ) = ip;
}
-void VG_(set_ENCIP_2) ( ThreadId tid, Addr ip, UWord aux ) {
- ENCIN_PTR( VG_(threads)[tid].arch ) = VG_IP_AUX_TO_ENCIN(ip, aux);
-}
-
void VG_(set_SP) ( ThreadId tid, Addr sp ) {
STACK_PTR( VG_(threads)[tid].arch ) = sp;
}
-
void VG_(get_UnwindStartRegs) ( /*OUT*/UnwindStartRegs* regs,
ThreadId tid )
{
@@ -109,8 +95,6 @@
# else
# error "Unknown arch"
# endif
- /* Ensure the starting PC is properly decoded. */
- regs->r_pc = VG_ENCIN_TO_IP(regs->r_pc);
}
Modified: branches/THUMB/coregrind/m_main.c
===================================================================
--- branches/THUMB/coregrind/m_main.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_main.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -2523,8 +2523,7 @@
function entry point, not a fn descriptor, so can use it
directly. However, we need to set R2 (the toc pointer)
appropriately. */
- /* INTERWORKING FIXME: assumes wrapper runs in ARM mode */
- VG_(set_ENCIP_2)(tid, __libc_freeres_wrapper, 0);
+ VG_(set_IP)(tid, __libc_freeres_wrapper);
# if defined(VGP_ppc64_linux)
VG_(threads)[tid].arch.vex.guest_GPR2 = r2;
# endif
Modified: branches/THUMB/coregrind/m_scheduler/scheduler.c
===================================================================
--- branches/THUMB/coregrind/m_scheduler/scheduler.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_scheduler/scheduler.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -654,7 +654,7 @@
vg_assert(VG_IS_16_ALIGNED(& tst->arch.vex.guest_VR1));
vg_assert(VG_IS_16_ALIGNED(& tst->arch.vex_shadow1.guest_VR1));
vg_assert(VG_IS_16_ALIGNED(& tst->arch.vex_shadow2.guest_VR1));
-# endif
+# endif
# if defined(VGA_arm)
/* arm guest_state VFP regs must be 8 byte aligned for
@@ -823,7 +823,7 @@
retval = VG_TRC_FAULT_SIGNAL;
} else {
/* store away the guest program counter */
- VG_(set_ENCIP)( tid, argblock[2] );
+ VG_(set_IP)( tid, argblock[2] );
if (argblock[3] == argblock[1])
/* the guest state pointer afterwards was unchanged */
retval = VG_TRC_BORING;
@@ -847,16 +847,16 @@
static void handle_tt_miss ( ThreadId tid )
{
Bool found;
- Addr encip = VG_(get_ENCIP)(tid);
+ Addr ip = VG_(get_IP)(tid);
/* Trivial event. Miss in the fast-cache. Do a full
lookup for it. */
- found = VG_(search_transtab)( NULL, encip, True/*upd_fast_cache*/ );
+ found = VG_(search_transtab)( NULL, ip, True/*upd_fast_cache*/ );
if (UNLIKELY(!found)) {
/* Not found; we need to request a translation. */
- if (VG_(translate)( tid, encip, /*debug*/False, 0/*not verbose*/,
+ if (VG_(translate)( tid, ip, /*debug*/False, 0/*not verbose*/,
bbs_done, True/*allow redirection*/ )) {
- found = VG_(search_transtab)( NULL, encip, True );
+ found = VG_(search_transtab)( NULL, ip, True );
vg_assert2(found, "VG_TRC_INNER_FASTMISS: missing tt_fast entry");
} else {
@@ -904,15 +904,17 @@
static UInt/*trc*/ handle_noredir_jump ( ThreadId tid )
{
AddrH hcode = 0;
- Addr encip = VG_(get_ENCIP)(tid);
+ Addr ip = VG_(get_IP)(tid);
- Bool found = VG_(search_unredir_transtab)( &hcode, encip );
+ Bool found = VG_(search_unredir_transtab)( &hcode, ip );
if (!found) {
/* Not found; we need to request a translation. */
- if (VG_(translate)( tid, encip, /*debug*/False, 0/*not verbose*/,
- bbs_done, False/*NO REDIRECTION*/ )) {
- found = VG_(search_unredir_transtab)( &hcode, encip );
+ if (VG_(translate)( tid, ip, /*debug*/False, 0/*not verbose*/, bbs_done,
+ False/*NO REDIRECTION*/ )) {
+
+ found = VG_(search_unredir_transtab)( &hcode, ip );
vg_assert2(found, "unredir translation missing after creation?!");
+
} else {
// If VG_(translate)() fails, it's because it had to throw a
// signal because the client jumped to a bad address. That
@@ -1171,7 +1173,7 @@
case VEX_TRC_JMP_NODECODE:
VG_(umsg)(
"valgrind: Unrecognised instruction at address %#lx.\n",
- VG_(get_ENCIP_IP)(tid));
+ VG_(get_IP)(tid));
#define M(a) VG_(umsg)(a "\n");
M("Your program just tried to execute an instruction that Valgrind" );
M("did not recognise. There are two possible reasons for this." );
@@ -1184,8 +1186,7 @@
M("Either way, Valgrind will now raise a SIGILL signal which will" );
M("probably kill your program." );
#undef M
- // INTERWORKING FIXME is this correct (the use of get_ENCIP) ?
- VG_(synth_sigill)(tid, VG_(get_ENCIP)(tid));
+ VG_(synth_sigill)(tid, VG_(get_IP)(tid));
break;
case VEX_TRC_JMP_TINVAL:
Modified: branches/THUMB/coregrind/m_sigframe/sigframe-arm-linux.c
===================================================================
--- branches/THUMB/coregrind/m_sigframe/sigframe-arm-linux.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_sigframe/sigframe-arm-linux.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -139,7 +139,7 @@
SC2(ip,R12);
SC2(sp,R13);
SC2(lr,R14);
- SC2(pc,R15T); // INTERWORKING FIXME
+ SC2(pc,R15T);
// afaics, this is used for two purposes:
// * so the guest can see the faulting address. Hence it needs
// to be unencoded (the real insn IP)
@@ -241,21 +241,19 @@
tst->arch.vex.guest_R1 = (Addr)&rsf->info;
tst->arch.vex.guest_R2 = (Addr)&rsf->sig.uc;
}
- else{
+ else {
build_sigframe(tst, (struct sigframe *)sp, siginfo, siguc,
handler, flags, mask, restorer);
- }
+ }
VG_(set_SP)(tid, sp);
VG_TRACK( post_reg_write, Vg_CoreSignal, tid, VG_O_STACK_PTR,
sizeof(Addr));
- tst->arch.vex.guest_R0 = sigNo;
+ tst->arch.vex.guest_R0 = sigNo;
- if(flags & VKI_SA_RESTORER)
- tst->arch.vex.guest_R14 = (Addr) restorer;
+ if (flags & VKI_SA_RESTORER)
+ tst->arch.vex.guest_R14 = (Addr) restorer;
- // INTERWORKING FIXME this is almost certainly wrong. But how
- // do we know which insn set is to be used for the signal handler?
tst->arch.vex.guest_R15T = (Addr) handler; /* R15 == PC */
}
@@ -319,7 +317,7 @@
REST(ip,R12);
REST(sp,R13);
REST(lr,R14);
- REST(pc,R15T); // INTERWORKING FIXME see comments above
+ REST(pc,R15T);
# undef REST
tst->arch.vex_shadow1 = priv->vex_shadow1;
@@ -331,7 +329,7 @@
if (VG_(clo_trace_signals))
VG_(message)(Vg_DebugMsg,
"vg_pop_signal_frame (thread %d): "
- "isRT=%d valid magic; PC(encoded)=%#x",
+ "isRT=%d valid magic; PC=%#x",
tid, has_siginfo, tst->arch.vex.guest_R15T);
/* tell the tools */
Modified: branches/THUMB/coregrind/m_signals.c
===================================================================
--- branches/THUMB/coregrind/m_signals.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_signals.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -2362,7 +2362,7 @@
if (VG_(clo_trace_signals)) {
VG_(dmsg)("sync signal handler: "
"signal=%d, si_code=%d, EIP=%#lx, eip=%#lx, from %s\n",
- sigNo, info->si_code, VG_(get_ENCIP_IP)(tid),
+ sigNo, info->si_code, VG_(get_IP)(tid),
VG_UCONTEXT_INSTR_PTR(uc),
( from_user ? "user" : "kernel" ));
}
Modified: branches/THUMB/coregrind/m_translate.c
===================================================================
--- branches/THUMB/coregrind/m_translate.c 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/m_translate.c 2010-08-22 10:29:32 UTC (rev 11275)
@@ -1253,7 +1253,6 @@
TID is the identity of the thread requesting this translation.
*/
-// INTERWORKING FIXME this requires careful consideration
Bool VG_(translate) ( ThreadId tid,
Addr64 nraddr,
Bool debugging_translation,
Modified: branches/THUMB/coregrind/pub_core_machine.h
===================================================================
--- branches/THUMB/coregrind/pub_core_machine.h 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/coregrind/pub_core_machine.h 2010-08-22 10:29:32 UTC (rev 11275)
@@ -96,12 +96,9 @@
# define VG_STACK_PTR guest_GPR1
# define VG_FRAME_PTR guest_GPR1 // No frame ptr for PPC
#elif defined(VGA_arm)
-# define VG_ENCIN_PTR guest_R15T
+# define VG_INSTR_PTR guest_R15T
# define VG_STACK_PTR guest_R13
# define VG_FRAME_PTR guest_R11
-# define VG_ENCIN_TO_IP(_encin) ((_encin) & ~1UL)
-# define VG_ENCIN_TO_AUX(_encin) ((_encin) & 1UL)
-# define VG_IP_AUX_TO_ENCIN(_ip,_aux) ((_ip) & ~1UL) | ((_aux) & 1UL)
#else
# error Unknown arch
#endif
@@ -113,18 +110,14 @@
//-------------------------------------------------------------
-// Guest state accessors not visible to tools (although they
-// could be, I guess)
-Addr VG_(get_ENCIP) ( ThreadId tid );
-Addr VG_(get_ENCIP_IP) ( ThreadId tid );
-UWord VG_(get_ENCIP_AUX) ( ThreadId tid );
+// Guest state accessors that are not visible to tools. The only
+// ones that are visible are get_IP and get_SP.
-Addr VG_(get_SP) ( ThreadId tid );
+//Addr VG_(get_IP) ( ThreadId tid ); // in pub_tool_machine.h
+//Addr VG_(get_SP) ( ThreadId tid ); // in pub_tool_machine.h
Addr VG_(get_FP) ( ThreadId tid );
-void VG_(set_ENCIP) ( ThreadId tid, Addr encip );
-void VG_(set_ENCIP_2) ( ThreadId tid, Addr ip, UWord aux );
-
+void VG_(set_IP) ( ThreadId tid, Addr encip );
void VG_(set_SP) ( ThreadId tid, Addr sp );
Modified: branches/THUMB/include/pub_tool_machine.h
===================================================================
--- branches/THUMB/include/pub_tool_machine.h 2010-08-21 11:47:01 UTC (rev 11274)
+++ branches/THUMB/include/pub_tool_machine.h 2010-08-22 10:29:32 UTC (rev 11275)
@@ -59,7 +59,7 @@
// Supplement 1.7
#elif defined(VGP_arm_linux)
-# define VG_MIN_INSTR_SZB 4
+# define VG_MIN_INSTR_SZB 2
# define VG_MAX_INSTR_SZB 4
# define VG_CLREQ_SZB 28
# define VG_STACK_REDZONE_SZB 0
@@ -99,8 +99,10 @@
#endif
// Guest state accessors
-// Currently all in the core_ header, until we know
-// they are needed here
+// Are mostly in the core_ header.
+// Only these two are available to tools.
+Addr VG_(get_IP) ( ThreadId tid );
+Addr VG_(get_SP) ( ThreadId tid );
// For get/set, 'area' is where the asked-for guest state will be copied
From: Bart V. A. <bva...@ac...> - 2010-08-22 08:20:15
Nightly build on cellbuzz-native ( cellbuzz, ppc64, Fedora 7, native )
Started at 2010-08-22 02:32:19 EDT
Ended at 2010-08-22 04:19:44 EDT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... done
Regression test results follow
== 462 tests, 45 stderr failures, 12 stdout failures, 0 post failures ==
memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cases-full (stderr)
memcheck/tests/leak-cases-summary (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/linux/timerfd-syscall (stdout)
memcheck/tests/linux-syscalls-2007 (stderr)
memcheck/tests/origin5-bz2 (stderr)
memcheck/tests/varinfo1 (stderr)
memcheck/tests/varinfo2 (stderr)
memcheck/tests/varinfo3 (stderr)
memcheck/tests/varinfo4 (stderr)
memcheck/tests/varinfo5 (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/wrap8 (stdout)
memcheck/tests/wrap8 (stderr)
callgrind/tests/simwork-both (stdout)
callgrind/tests/simwork-both (stderr)
callgrind/tests/simwork-branch (stdout)
callgrind/tests/simwork-branch (stderr)
none/tests/empty-exe (stderr)
none/tests/linux/mremap (stderr)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-vmx (stdout)
none/tests/ppc32/round (stdout)
none/tests/ppc32/test_gx (stdout)
none/tests/ppc64/jm-fp (stdout)
none/tests/ppc64/jm-vmx (stdout)
none/tests/ppc64/round (stdout)
none/tests/shell_valid2 (stderr)
none/tests/shell_valid3 (stderr)
none/tests/shell_zerolength (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc06_two_races_xml (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
exp-ptrcheck/tests/bad_percentify (stderr)
exp-ptrcheck/tests/base (stderr)
exp-ptrcheck/tests/ccc (stderr)
exp-ptrcheck/tests/fp (stderr)
exp-ptrcheck/tests/globalerr (stderr)
exp-ptrcheck/tests/hackedbz2 (stderr)
exp-ptrcheck/tests/hp_bounds (stderr)
exp-ptrcheck/tests/hp_dangle (stderr)
exp-ptrcheck/tests/hsg (stderr)
exp-ptrcheck/tests/justify (stderr)
exp-ptrcheck/tests/partial_bad (stderr)
exp-ptrcheck/tests/partial_good (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
exp-ptrcheck/tests/pth_create (stderr)
exp-ptrcheck/tests/pth_specific (stderr)
exp-ptrcheck/tests/realloc (stderr)
exp-ptrcheck/tests/stackerr (stderr)
exp-ptrcheck/tests/strcpy (stderr)
exp-ptrcheck/tests/supp (stderr)
exp-ptrcheck/tests/tricky (stderr)
exp-ptrcheck/tests/unaligned (stderr)
exp-ptrcheck/tests/zero (stderr)
From: Rich C. <rc...@wi...> - 2010-08-22 05:44:33
Nightly build on ppc32 ( Linux 2.6.27.45-0.1-default ppc )
Started at 2010-08-21 23:26:01 CDT
Ended at 2010-08-22 00:44:14 CDT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 459 tests, 39 stderr failures, 6 stdout failures, 2 post failures ==
memcheck/tests/badjump (stderr)
memcheck/tests/badjump2 (stderr)
memcheck/tests/linux/capget (stderr)
memcheck/tests/linux/stack_changes (stderr)
memcheck/tests/linux/timerfd-syscall (stdout)
memcheck/tests/linux-syscalls-2007 (stderr)
memcheck/tests/origin5-bz2 (stderr)
memcheck/tests/supp_unknown (stderr)
memcheck/tests/varinfo6 (stderr)
massif/tests/deep-D (post)
massif/tests/overloaded-new (post)
none/tests/linux/mremap (stderr)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-fp (stderr)
none/tests/ppc32/jm-vmx (stdout)
none/tests/ppc32/round (stdout)
none/tests/ppc32/round (stderr)
none/tests/ppc32/test_fx (stdout)
none/tests/ppc32/test_fx (stderr)
none/tests/ppc32/test_gx (stdout)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc06_two_races_xml (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
drd/tests/tc23_bogus_condwait (stderr)
exp-ptrcheck/tests/bad_percentify (stderr)
exp-ptrcheck/tests/base (stderr)
exp-ptrcheck/tests/ccc (stderr)
exp-ptrcheck/tests/fp (stderr)
exp-ptrcheck/tests/globalerr (stderr)
exp-ptrcheck/tests/hackedbz2 (stderr)
exp-ptrcheck/tests/hp_bounds (stderr)
exp-ptrcheck/tests/hp_dangle (stderr)
exp-ptrcheck/tests/hsg (stderr)
exp-ptrcheck/tests/justify (stderr)
exp-ptrcheck/tests/partial_bad (stderr)
exp-ptrcheck/tests/partial_good (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
exp-ptrcheck/tests/pth_create (stderr)
exp-ptrcheck/tests/pth_specific (stderr)
exp-ptrcheck/tests/realloc (stderr)
exp-ptrcheck/tests/stackerr (stderr)
exp-ptrcheck/tests/strcpy (stderr)
exp-ptrcheck/tests/supp (stderr)
exp-ptrcheck/tests/tricky (stderr)
exp-ptrcheck/tests/unaligned (stderr)
exp-ptrcheck/tests/zero (stderr)
=================================================
./valgrind-new/drd/tests/tc23_bogus_condwait.stderr.diff-darwin
=================================================
--- tc23_bogus_condwait.stderr.exp-darwin 2010-08-22 00:05:30.000000000 -0500
+++ tc23_bogus_condwait.stderr.out 2010-08-22 00:42:02.000000000 -0500
@@ -3,61 +3,11 @@
at 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
by 0x........: main (tc23_bogus_condwait.c:69)
-Mutex not locked: mutex 0x........, recursion count 0, owner 0.
- at 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:72)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:51)
-
-Thread 3:
-Probably a race condition: condition variable 0x........ has been signaled but the associated mutex 0x........ is not locked by the signalling thread.
- at 0x........: pthread_cond_signal (drd_pthread_intercepts.c:?)
- by 0x........: rescue_me (tc23_bogus_condwait.c:20)
- by 0x........: vgDrd_thread_wrapper (drd_pthread_intercepts.c:?)
-cond 0x........ was first observed at:
- at 0x........: pthread_cond_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:56)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:51)
-
-Thread 1:
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:75)
-rwlock 0x........ was first observed at:
- at 0x........: pthread_rwlock_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:57)
-
-Mutex not locked by calling thread: mutex 0x........, recursion count 1, owner 2.
- at 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:78)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:53)
-
-Thread 3:
-Probably a race condition: condition variable 0x........ has been signaled but the associated mutex 0x........ is not locked by the signalling thread.
- at 0x........: pthread_cond_signal (drd_pthread_intercepts.c:?)
- by 0x........: rescue_me (tc23_bogus_condwait.c:24)
- by 0x........: vgDrd_thread_wrapper (drd_pthread_intercepts.c:?)
-cond 0x........ was first observed at:
- at 0x........: pthread_cond_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:56)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:53)
-
-The impossible happened: mutex 0x........ is locked simultaneously by two threads (recursion count 1, owners 2 and 1) !
-Thread 2:
-Mutex not locked by calling thread: mutex 0x........, recursion count 2, owner 1.
- at 0x........: pthread_mutex_unlock (drd_pthread_intercepts.c:?)
- by 0x........: grab_the_lock (tc23_bogus_condwait.c:42)
- by 0x........: vgDrd_thread_wrapper (drd_pthread_intercepts.c:?)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:53)
+Process terminating with default action of signal 7 (SIGBUS)
+ Invalid address alignment at address 0x........
+ at 0x........: __pthread_mutex_unlock_usercnt (pthread_mutex_unlock.c:?)
+ by 0x........: pthread_cond_wait@@GLIBC_2.3.2 (pthread_cond_wait.c:?)
+ by 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
-ERROR SUMMARY: 9 errors from 7 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 2 errors from 1 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/drd/tests/tc23_bogus_condwait.stderr.diff-linux-ppc
=================================================
--- tc23_bogus_condwait.stderr.exp-linux-ppc 2010-08-22 00:05:30.000000000 -0500
+++ tc23_bogus_condwait.stderr.out 2010-08-22 00:42:02.000000000 -0500
@@ -6,8 +6,8 @@
Process terminating with default action of signal 7 (SIGBUS)
Invalid address alignment at address 0x........
- at 0x........: (within libpthread-?.?.so)
- by 0x........: pthread_cond_wait@@GLIBC_2.3.2(within libpthread-?.?.so)
+ at 0x........: __pthread_mutex_unlock_usercnt (pthread_mutex_unlock.c:?)
+ by 0x........: pthread_cond_wait@@GLIBC_2.3.2 (pthread_cond_wait.c:?)
by 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
ERROR SUMMARY: 2 errors from 1 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/drd/tests/tc23_bogus_condwait.stderr.diff-linux-x86
=================================================
--- tc23_bogus_condwait.stderr.exp-linux-x86 2010-08-22 00:05:30.000000000 -0500
+++ tc23_bogus_condwait.stderr.out 2010-08-22 00:42:02.000000000 -0500
@@ -3,84 +3,11 @@
at 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
by 0x........: main (tc23_bogus_condwait.c:69)
-Thread 3:
-Probably a race condition: condition variable 0x........ has been signaled but the associated mutex 0x........ is not locked by the signalling thread.
- at 0x........: pthread_cond_signal (drd_pthread_intercepts.c:?)
- by 0x........: rescue_me (tc23_bogus_condwait.c:20)
- by 0x........: vgDrd_thread_wrapper (drd_pthread_intercepts.c:?)
-cond 0x........ was first observed at:
- at 0x........: pthread_cond_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:56)
-
-Thread 1:
-Mutex not locked: mutex 0x........, recursion count 0, owner 0.
- at 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:72)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:51)
-
-Thread 3:
-Probably a race condition: condition variable 0x........ has been signaled but the associated mutex 0x........ is not locked by the signalling thread.
- at 0x........: pthread_cond_signal (drd_pthread_intercepts.c:?)
- by 0x........: rescue_me (tc23_bogus_condwait.c:24)
- by 0x........: vgDrd_thread_wrapper (drd_pthread_intercepts.c:?)
-cond 0x........ was first observed at:
- at 0x........: pthread_cond_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:56)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:51)
-
-Thread 1:
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:75)
-rwlock 0x........ was first observed at:
- at 0x........: pthread_rwlock_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:57)
-
-Thread 3:
-Probably a race condition: condition variable 0x........ has been signaled but the associated mutex 0x........ is not locked by the signalling thread.
- at 0x........: pthread_cond_signal (drd_pthread_intercepts.c:?)
- by 0x........: rescue_me (tc23_bogus_condwait.c:28)
- by 0x........: vgDrd_thread_wrapper (drd_pthread_intercepts.c:?)
-cond 0x........ was first observed at:
- at 0x........: pthread_cond_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:56)
-rwlock 0x........ was first observed at:
- at 0x........: pthread_rwlock_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:57)
-
-Thread 1:
-Mutex not locked by calling thread: mutex 0x........, recursion count 1, owner 2.
- at 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:78)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:53)
-
-Thread 3:
-Probably a race condition: condition variable 0x........ has been signaled but the associated mutex 0x........ is not locked by the signalling thread.
- at 0x........: pthread_cond_signal (drd_pthread_intercepts.c:?)
- by 0x........: rescue_me (tc23_bogus_condwait.c:32)
- by 0x........: vgDrd_thread_wrapper (drd_pthread_intercepts.c:?)
-cond 0x........ was first observed at:
- at 0x........: pthread_cond_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:56)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:53)
-
-The impossible happened: mutex 0x........ is locked simultaneously by two threads (recursion count 1, owners 2 and 1) !
-Thread 2:
-Mutex not locked by calling thread: mutex 0x........, recursion count 2, owner 1.
- at 0x........: pthread_mutex_unlock (drd_pthread_intercepts.c:?)
- by 0x........: grab_the_lock (tc23_bogus_condwait.c:42)
- by 0x........: vgDrd_thread_wrapper (drd_pthread_intercepts.c:?)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
- by 0x........: main (tc23_bogus_condwait.c:53)
+Process terminating with default action of signal 7 (SIGBUS)
+ Invalid address alignment at address 0x........
+ at 0x........: __pthread_mutex_unlock_usercnt (pthread_mutex_unlock.c:?)
+ by 0x........: pthread_cond_wait@@GLIBC_2.3.2 (pthread_cond_wait.c:?)
+ by 0x........: pthread_cond_wait (drd_pthread_intercepts.c:?)
-ERROR SUMMARY: 11 errors from 9 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 2 errors from 1 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/bad_percentify.stderr.diff-glibc28-amd64
=================================================
--- bad_percentify.stderr.exp-glibc28-amd64 2010-08-22 00:05:13.000000000 -0500
+++ bad_percentify.stderr.out 2010-08-22 00:42:23.000000000 -0500
@@ -1,33 +1,6 @@
-Invalid read of size 1
- at 0x........: strlen (h_intercepts.c:...)
- by 0x........: ...
- by 0x........: ...
- by 0x........: VG_print_translation_stats (bad_percentify.c:88)
- by 0x........: main (bad_percentify.c:107)
- Address 0x........ expected vs actual:
- Expected: stack array "buf" in frame 3 back from here
- Actual: unknown
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
-Invalid read of size 1
- at 0x........: strlen (h_intercepts.c:...)
- by 0x........: ...
- by 0x........: ...
- by 0x........: VG_print_translation_stats (bad_percentify.c:93)
- by 0x........: main (bad_percentify.c:107)
- Address 0x........ expected vs actual:
- Expected: stack array "buf" in frame 3 back from here
- Actual: unknown
-
-Invalid read of size 1
- at 0x........: strlen (h_intercepts.c:...)
- by 0x........: ...
- by 0x........: ...
- by 0x........: VG_print_translation_stats (bad_percentify.c:98)
- by 0x........: main (bad_percentify.c:107)
- Address 0x........ expected vs actual:
- Expected: stack array "buf" in frame 3 back from here
- Actual: unknown
-
-
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/base.stderr.diff-glibc25-amd64
=================================================
--- base.stderr.exp-glibc25-amd64 2010-08-22 00:05:13.000000000 -0500
+++ base.stderr.out 2010-08-22 00:42:27.000000000 -0500
@@ -1,10 +1,13 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
about to do 14 [0]
about to do 14 [-1]
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (base.c:14)
- Address 0x........ is 8 bytes before the accessing pointer's
- legitimate range, a block of size 80 alloc'd
+ Address 0x........ is 4 bytes before the accessing pointer's
+ legitimate range, a block of size 40 alloc'd
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: main (arith_include2.c:22)
@@ -13,22 +16,22 @@
about to do 18 [0]
about to do 18 [-1]
about to do 20 [0]
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (base.c:20)
Address 0x........ is not derived from any known block
about to do 20 [-1]
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (base.c:20)
Address 0x........ is not derived from any known block
about to do 22 [0]
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (base.c:22)
Address 0x........ is not derived from any known block
about to do 22 [-1]
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (base.c:22)
Address 0x........ is not derived from any known block
=================================================
./valgrind-new/exp-ptrcheck/tests/base.stderr.diff-glibc25-x86
=================================================
--- base.stderr.exp-glibc25-x86 2010-08-22 00:05:13.000000000 -0500
+++ base.stderr.out 2010-08-22 00:42:27.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
about to do 14 [0]
about to do 14 [-1]
Invalid read of size 4
=================================================
./valgrind-new/exp-ptrcheck/tests/ccc.stderr.diff-glibc25-amd64
=================================================
--- ccc.stderr.exp-glibc25-amd64 2010-08-22 00:05:13.000000000 -0500
+++ ccc.stderr.out 2010-08-22 00:42:34.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (ccc.cpp:20)
Address 0x........ is 4 bytes before the accessing pointer's
@@ -21,21 +24,21 @@
by 0x........: main (ccc.cpp:10)
Invalid read of size 4
- at 0x........: main (ccc.cpp:22)
+ at 0x........: main (ccc.cpp:23)
Address 0x........ is 4 bytes before the accessing pointer's
legitimate range, a block of size 4 alloc'd
at 0x........: calloc (vg_replace_malloc.c:...)
by 0x........: main (ccc.cpp:11)
Invalid read of size 4
- at 0x........: main (ccc.cpp:23)
+ at 0x........: main (ccc.cpp:24)
Address 0x........ is 4 bytes before the accessing pointer's
legitimate range, a block of size 4 alloc'd
at 0x........: memalign (vg_replace_malloc.c:...)
by 0x........: main (ccc.cpp:12)
Invalid read of size 4
- at 0x........: main (ccc.cpp:24)
+ at 0x........: main (ccc.cpp:22)
Address 0x........ is 4 bytes before the accessing pointer's
legitimate range, a block of size 4 alloc'd
at 0x........: memalign (vg_replace_malloc.c:...)
=================================================
./valgrind-new/exp-ptrcheck/tests/ccc.stderr.diff-glibc27-x86
=================================================
--- ccc.stderr.exp-glibc27-x86 2010-08-22 00:05:13.000000000 -0500
+++ ccc.stderr.out 2010-08-22 00:42:34.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (ccc.cpp:20)
Address 0x........ is 4 bytes before the accessing pointer's
@@ -35,7 +38,7 @@
by 0x........: main (ccc.cpp:12)
Invalid read of size 4
- at 0x........: main (ccc.cpp:25)
+ at 0x........: main (ccc.cpp:22)
Address 0x........ is 4 bytes before the accessing pointer's
legitimate range, a block of size 4 alloc'd
at 0x........: memalign (vg_replace_malloc.c:...)
=================================================
./valgrind-new/exp-ptrcheck/tests/ccc.stderr.diff-glibc28-amd64
=================================================
--- ccc.stderr.exp-glibc28-amd64 2010-08-22 00:05:13.000000000 -0500
+++ ccc.stderr.out 2010-08-22 00:42:34.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (ccc.cpp:20)
Address 0x........ is 4 bytes before the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/fp.stderr.diff
=================================================
--- fp.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ fp.stderr.out 2010-08-22 00:42:38.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 8
at 0x........: main (fp.c:13)
Address 0x........ is 0 bytes inside the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/globalerr.stderr.diff-glibc28-amd64
=================================================
--- globalerr.stderr.exp-glibc28-amd64 2010-08-22 00:05:13.000000000 -0500
+++ globalerr.stderr.out 2010-08-22 00:42:43.000000000 -0500
@@ -1,15 +1,6 @@
-Invalid read of size 2
- at 0x........: main (globalerr.c:12)
- Address 0x........ expected vs actual:
- Expected: global array "a" in object with soname "NONE"
- Actual: unknown
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
-Invalid read of size 2
- at 0x........: main (globalerr.c:12)
- Address 0x........ expected vs actual:
- Expected: global array "b" in object with soname "NONE"
- Actual: unknown
-
-
-ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/hackedbz2.stderr.diff-glibc28-amd64
=================================================
--- hackedbz2.stderr.exp-glibc28-amd64 2010-08-22 00:05:13.000000000 -0500
+++ hackedbz2.stderr.out 2010-08-22 00:43:04.000000000 -0500
@@ -1,16 +1,6 @@
-Invalid read of size 1
- at 0x........: vex_strlen (hackedbz2.c:1006)
- by 0x........: add_to_myprintf_buf (hackedbz2.c:1284)
- by 0x........: vex_printf (hackedbz2.c:1155)
- by 0x........: BZ2_compressBlock (hackedbz2.c:4039)
- by 0x........: handle_compress (hackedbz2.c:4761)
- by 0x........: BZ2_bzCompress (hackedbz2.c:4831)
- by 0x........: BZ2_bzBuffToBuffCompress (hackedbz2.c:5638)
- by 0x........: main (hackedbz2.c:6484)
- Address 0x........ expected vs actual:
- Expected: global array "myprintf_buf" in object with soname "NONE"
- Actual: unknown
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
-
-ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/hp_bounds.stderr.diff
=================================================
--- hp_bounds.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ hp_bounds.stderr.out 2010-08-22 00:43:08.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (hp_bounds.c:9)
Address 0x........ is 0 bytes after the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/hp_dangle.stderr.diff
=================================================
--- hp_dangle.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ hp_dangle.stderr.out 2010-08-22 00:43:12.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (hp_dangle.c:17)
Address 0x........ is 20 bytes inside the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/hsg.stderr.diff
=================================================
--- hsg.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ hsg.stderr.out 2010-08-22 00:43:16.000000000 -0500
@@ -32,70 +32,6 @@
<error>
<unique>0x........</unique>
<tid>...</tid>
- <kind>SorG</kind>
- <what>Invalid read of size 2</what>
- <stack>
- <frame>
- <ip>0x........</ip>
- <obj>...</obj>
- <fn>addup_wrongly</fn>
- <dir>...</dir>
- <file>hsg.c</file>
- <line>...</line>
- </frame>
- <frame>
- <ip>0x........</ip>
- <obj>...</obj>
- <fn>main</fn>
- <dir>...</dir>
- <file>hsg.c</file>
- <line>...</line>
- </frame>
- </stack>
- <auxwhat>Address 0x........ expected vs actual:</auxwhat>
- <auxwhat>Expected: global array "ga" in object with soname "NONE"</auxwhat>
- <auxwhat>Actual: unknown</auxwhat>
-</error>
-
-<error>
- <unique>0x........</unique>
- <tid>...</tid>
- <kind>SorG</kind>
- <what>Invalid read of size 2</what>
- <stack>
- <frame>
- <ip>0x........</ip>
- <obj>...</obj>
- <fn>addup_wrongly</fn>
- <dir>...</dir>
- <file>hsg.c</file>
- <line>...</line>
- </frame>
- <frame>
- <ip>0x........</ip>
- <obj>...</obj>
- <fn>do_other_stuff</fn>
- <dir>...</dir>
- <file>hsg.c</file>
- <line>...</line>
- </frame>
- <frame>
- <ip>0x........</ip>
- <obj>...</obj>
- <fn>main</fn>
- <dir>...</dir>
- <file>hsg.c</file>
- <line>...</line>
- </frame>
- </stack>
- <auxwhat>Address 0x........ expected vs actual:</auxwhat>
- <auxwhat>Expected: stack array "la" in frame 1 back from here</auxwhat>
- <auxwhat>Actual: unknown</auxwhat>
-</error>
-
-<error>
- <unique>0x........</unique>
- <tid>...</tid>
<kind>Heap</kind>
<what>Invalid read of size 1</what>
<stack>
@@ -209,14 +145,6 @@
<pair>
<count>...</count>
<unique>0x........</unique>
- </pair>
- <pair>
- <count>...</count>
- <unique>0x........</unique>
- </pair>
- <pair>
- <count>...</count>
- <unique>0x........</unique>
</pair>
<pair>
<count>...</count>
=================================================
./valgrind-new/exp-ptrcheck/tests/justify.stderr.diff
=================================================
--- justify.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ justify.stderr.out 2010-08-22 00:43:21.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 1
at 0x........: main (justify.c:20)
Address 0x........ is 5000 bytes after the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/partial_bad.stderr.diff-glibc25-amd64
=================================================
--- partial_bad.stderr.exp-glibc25-amd64 2010-08-22 00:05:13.000000000 -0500
+++ partial_bad.stderr.out 2010-08-22 00:43:25.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (partial.c:21)
Address 0x........ is 0 bytes inside the accessing pointer's
@@ -83,9 +86,9 @@
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: main (partial.c:7)
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (partial.c:43)
- Address 0x........ is 0 bytes inside the accessing pointer's
+ Address 0x........ is 4 bytes inside the accessing pointer's
legitimate range, a block of size 7 alloc'd
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: main (partial.c:8)
=================================================
./valgrind-new/exp-ptrcheck/tests/partial_bad.stderr.diff-glibc25-x86
=================================================
--- partial_bad.stderr.exp-glibc25-x86 2010-08-22 00:05:13.000000000 -0500
+++ partial_bad.stderr.out 2010-08-22 00:43:25.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (partial.c:21)
Address 0x........ is 0 bytes inside the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/partial_good.stderr.diff-glibc25-amd64
=================================================
--- partial_good.stderr.exp-glibc25-amd64 2010-08-22 00:05:13.000000000 -0500
+++ partial_good.stderr.out 2010-08-22 00:43:29.000000000 -0500
@@ -1,11 +1,7 @@
-Invalid read of size 4
- at 0x........: main (partial.c:21)
- Address 0x........ is 0 bytes inside the accessing pointer's
- legitimate range, a block of size 3 alloc'd
- at 0x........: malloc (vg_replace_malloc.c:...)
- by 0x........: main (partial.c:6)
-
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (partial.c:22)
Address 0x........ is 1 bytes inside the accessing pointer's
@@ -28,13 +24,6 @@
by 0x........: main (partial.c:9)
Invalid read of size 4
- at 0x........: main (partial.c:25)
- Address 0x........ is 4 bytes inside the accessing pointer's
- legitimate range, a block of size 7 alloc'd
- at 0x........: malloc (vg_replace_malloc.c:...)
- by 0x........: main (partial.c:10)
-
-Invalid read of size 4
at 0x........: main (partial.c:34)
Address 0x........ is 1 bytes before the accessing pointer's
legitimate range, a block of size 3 alloc'd
@@ -42,13 +31,6 @@
by 0x........: main (partial.c:6)
Invalid read of size 4
- at 0x........: main (partial.c:35)
- Address 0x........ is 0 bytes inside the accessing pointer's
- legitimate range, a block of size 3 alloc'd
- at 0x........: malloc (vg_replace_malloc.c:...)
- by 0x........: main (partial.c:6)
-
-Invalid read of size 4
at 0x........: main (partial.c:36)
Address 0x........ is 1 bytes inside the accessing pointer's
legitimate range, a block of size 3 alloc'd
@@ -69,12 +51,12 @@
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: main (partial.c:6)
-Invalid read of size 4
- at 0x........: main (partial.c:41)
+Invalid read of size 8
+ at 0x........: main (partial.c:42)
Address 0x........ is 0 bytes inside the accessing pointer's
- legitimate range, a block of size 3 alloc'd
+ legitimate range, a block of size 7 alloc'd
at 0x........: malloc (vg_replace_malloc.c:...)
- by 0x........: main (partial.c:6)
+ by 0x........: main (partial.c:7)
Invalid read of size 1
at 0x........: main (partial.c:44)
@@ -91,4 +73,4 @@
by 0x........: main (partial.c:10)
-ERROR SUMMARY: 13 errors from 13 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 10 errors from 10 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/partial_good.stderr.diff-glibc25-x86
=================================================
--- partial_good.stderr.exp-glibc25-x86 2010-08-22 00:05:13.000000000 -0500
+++ partial_good.stderr.out 2010-08-22 00:43:29.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (partial.c:22)
Address 0x........ is 1 bytes inside the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/preen_invars.stderr.diff-glibc28-amd64
=================================================
--- preen_invars.stderr.exp-glibc28-amd64 2010-08-22 00:05:13.000000000 -0500
+++ preen_invars.stderr.out 2010-08-22 00:43:33.000000000 -0500
@@ -1,9 +1,6 @@
-Invalid read of size 1
- at 0x........: main (preen_invars.c:22)
- Address 0x........ expected vs actual:
- Expected: unknown
- Actual: global array "im_a_global_arr" in object with soname "preen_invars_so"
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
-
-ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/pth_create.stderr.diff
=================================================
--- pth_create.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ pth_create.stderr.out 2010-08-22 00:43:38.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid write of size 4
at 0x........: pthread_key_create (in /...libpthread...)
by 0x........: main (pth_create.c:17)
=================================================
./valgrind-new/exp-ptrcheck/tests/pth_specific.stderr.diff
=================================================
--- pth_specific.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ pth_specific.stderr.out 2010-08-22 00:43:44.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 1
at 0x........: main (pth_specific.c:19)
Address 0x........ is 1 bytes before the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/realloc.stderr.diff-glibc25-amd64
=================================================
--- realloc.stderr.exp-glibc25-amd64 2010-08-22 00:05:13.000000000 -0500
+++ realloc.stderr.out 2010-08-22 00:43:48.000000000 -0500
@@ -1,43 +1,46 @@
-Invalid read of size 8
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
+Invalid read of size 4
at 0x........: main (realloc.c:20)
- Address 0x........ is 8 bytes before the accessing pointer's
- legitimate range, a block of size 400 alloc'd
+ Address 0x........ is 4 bytes before the accessing pointer's
+ legitimate range, a block of size 200 alloc'd
at 0x........: realloc (vg_replace_malloc.c:...)
by 0x........: main (realloc.c:17)
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (realloc.c:21)
Address 0x........ is 0 bytes after the accessing pointer's
- legitimate range, a block of size 400 alloc'd
+ legitimate range, a block of size 200 alloc'd
at 0x........: realloc (vg_replace_malloc.c:...)
by 0x........: main (realloc.c:17)
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (realloc.c:28)
- Address 0x........ is 8 bytes before the accessing pointer's
- legitimate range, a block of size 400 alloc'd
+ Address 0x........ is 4 bytes before the accessing pointer's
+ legitimate range, a block of size 200 alloc'd
at 0x........: realloc (vg_replace_malloc.c:...)
by 0x........: main (realloc.c:25)
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (realloc.c:29)
Address 0x........ is 0 bytes after the accessing pointer's
- legitimate range, a block of size 400 alloc'd
+ legitimate range, a block of size 200 alloc'd
at 0x........: realloc (vg_replace_malloc.c:...)
by 0x........: main (realloc.c:25)
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (realloc.c:38)
- Address 0x........ is 8 bytes before the accessing pointer's
- legitimate range, a block of size 800 alloc'd
+ Address 0x........ is 4 bytes before the accessing pointer's
+ legitimate range, a block of size 400 alloc'd
at 0x........: realloc (vg_replace_malloc.c:...)
by 0x........: main (realloc.c:33)
-Invalid read of size 8
+Invalid read of size 4
at 0x........: main (realloc.c:39)
Address 0x........ is 0 bytes after the accessing pointer's
- legitimate range, a block of size 800 alloc'd
+ legitimate range, a block of size 400 alloc'd
at 0x........: realloc (vg_replace_malloc.c:...)
by 0x........: main (realloc.c:33)
=================================================
./valgrind-new/exp-ptrcheck/tests/realloc.stderr.diff-glibc25-x86
=================================================
--- realloc.stderr.exp-glibc25-x86 2010-08-22 00:05:13.000000000 -0500
+++ realloc.stderr.out 2010-08-22 00:43:48.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 4
at 0x........: main (realloc.c:20)
Address 0x........ is 4 bytes before the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/stackerr.stderr.diff-glibc27-x86
=================================================
--- stackerr.stderr.exp-glibc27-x86 2010-08-22 00:05:13.000000000 -0500
+++ stackerr.stderr.out 2010-08-22 00:43:52.000000000 -0500
@@ -1,27 +1,6 @@
-Invalid write of size 4
- at 0x........: foo (stackerr.c:27)
- by 0x........: bar (stackerr.c:32)
- by 0x........: main (stackerr.c:41)
- Address 0x........ expected vs actual:
- Expected: stack array "a" in frame 2 back from here
- Actual: stack array "beforea" in frame 2 back from here
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
-Invalid write of size 4
- at 0x........: main (stackerr.c:44)
- Address 0x........ expected vs actual:
- Expected: stack array "a" in this frame
- Actual: stack array "beforea" in this frame
-
-Invalid write of size 1
- at 0x........: _IO_default_xsputn (in /...libc...)
- by 0x........: ...
- by 0x........: ...
- by 0x........: ...
- by 0x........: main (stackerr.c:49)
- Address 0x........ expected vs actual:
- Expected: stack array "buf" in frame 4 back from here
- Actual: stack array "beforebuf" in frame 4 back from here
-
-
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/stackerr.stderr.diff-glibc28-amd64
=================================================
--- stackerr.stderr.exp-glibc28-amd64 2010-08-22 00:05:13.000000000 -0500
+++ stackerr.stderr.out 2010-08-22 00:43:52.000000000 -0500
@@ -1,27 +1,6 @@
-Invalid write of size 8
- at 0x........: foo (stackerr.c:27)
- by 0x........: bar (stackerr.c:32)
- by 0x........: main (stackerr.c:41)
- Address 0x........ expected vs actual:
- Expected: stack array "a" in frame 2 back from here
- Actual: unknown
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
-Invalid write of size 8
- at 0x........: main (stackerr.c:44)
- Address 0x........ expected vs actual:
- Expected: stack array "a" in this frame
- Actual: unknown
-
-Invalid write of size 1
- at 0x........: _IO_default_xsputn (in /...libc...)
- by 0x........: ...
- by 0x........: ...
- by 0x........: ...
- by 0x........: main (stackerr.c:49)
- Address 0x........ expected vs actual:
- Expected: stack array "buf" in frame 4 back from here
- Actual: unknown
-
-
-ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/strcpy.stderr.diff
=================================================
--- strcpy.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ strcpy.stderr.out 2010-08-22 00:43:56.000000000 -0500
@@ -1,3 +1,6 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/supp.stderr.diff
=================================================
--- supp.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ supp.stderr.out 2010-08-22 00:44:00.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Syscall param write(buf) is non-contiguous
at 0x........: write (in /...libc...)
by 0x........: main (supp.c:16)
=================================================
./valgrind-new/exp-ptrcheck/tests/tricky.stderr.diff
=================================================
--- tricky.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ tricky.stderr.out 2010-08-22 00:44:04.000000000 -0500
@@ -1,3 +1,6 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/unaligned.stderr.diff-glibc25-amd64
=================================================
--- unaligned.stderr.exp-glibc25-amd64 2010-08-22 00:05:13.000000000 -0500
+++ unaligned.stderr.out 2010-08-22 00:44:08.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 1
at 0x........: main (unaligned.c:33)
Address 0x........ is 1 bytes before the accessing pointer's
@@ -8,6 +11,14 @@
by 0x........: main (unaligned.c:8)
Invalid read of size 1
+ at 0x........: main (unaligned.c:37)
+ Address 0x........ is 1 bytes before the accessing pointer's
+ legitimate range, a block of size 6 alloc'd
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: ...
+ by 0x........: main (unaligned.c:8)
+
+Invalid read of size 1
at 0x........: main (unaligned.c:39)
Address 0x........ is 0 bytes after the accessing pointer's
legitimate range, a block of size 6 alloc'd
@@ -15,5 +26,13 @@
by 0x........: ...
by 0x........: main (unaligned.c:8)
+Invalid read of size 1
+ at 0x........: main (unaligned.c:43)
+ Address 0x........ is 0 bytes after the accessing pointer's
+ legitimate range, a block of size 6 alloc'd
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: ...
+ by 0x........: main (unaligned.c:8)
+
-ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/exp-ptrcheck/tests/unaligned.stderr.diff-glibc25-x86
=================================================
--- unaligned.stderr.exp-glibc25-x86 2010-08-22 00:05:13.000000000 -0500
+++ unaligned.stderr.out 2010-08-22 00:44:08.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 1
at 0x........: main (unaligned.c:33)
Address 0x........ is 1 bytes before the accessing pointer's
=================================================
./valgrind-new/exp-ptrcheck/tests/zero.stderr.diff
=================================================
--- zero.stderr.exp 2010-08-22 00:05:13.000000000 -0500
+++ zero.stderr.out 2010-08-22 00:44:13.000000000 -0500
@@ -1,4 +1,7 @@
+WARNING: exp-ptrcheck on ppc32/ppc64/arm platforms: stack and global array
+WARNING: checking is not currently supported. Only heap checking is
+WARNING: supported. Disabling s/g checks (like --enable-sg-checks=no).
Invalid read of size 1
at 0x........: main (zero.c:10)
Address 0x........ is 0 bytes after the accessing pointer's
=================================================
./valgrind-new/helgrind/tests/hg05_race2.stderr.diff
=================================================
--- hg05_race2.stderr.exp 2010-08-22 00:05:12.000000000 -0500
+++ hg05_race2.stderr.out 2010-08-22 00:31:50.000000000 -0500
@@ -17,8 +17,6 @@
at 0x........: th (hg05_race2.c:17)
by 0x........: mythread_wrapper (hg_intercepts.c:...)
...
- Location 0x........ is 0 bytes inside foo.poot[5].plop[11],
- declared at hg05_race2.c:24, in frame #x of thread x
Possible data race during write of size 4 at 0x........ by thread #x
at 0x........: th (hg05_race2.c:17)
@@ -28,8 +26,6 @@
at 0x........: th (hg05_race2.c:17)
by 0x........: mythread_wrapper (hg_intercepts.c:...)
...
- Location 0x........ is 0 bytes inside foo.poot[5].plop[11],
- declared at hg05_race2.c:24, in frame #x of thread x
ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/helgrind/tests/tc06_two_races_xml.stderr.diff
=================================================
--- tc06_two_races_xml.stderr.exp 2010-08-22 00:05:12.000000000 -0500
+++ tc06_two_races_xml.stderr.out 2010-08-22 00:32:35.000000000 -0500
@@ -45,11 +45,17 @@
<ip>0x........</ip>
<obj>...</obj>
<fn>do_clone</fn>
+ <dir>...</dir>
+ <file>createthread.c</file>
+ <line>...</line>
</frame>
<frame>
<ip>0x........</ip>
<obj>...</obj>
- <fn>pthread_create@@GLIBC_2.2.5</fn>
+ <fn>pthread_create@@GLIBC_2.1</fn>
+ <dir>...</dir>
+ <file>createthread.c</file>
+ <line>...</line>
</frame>
<frame>
<ip>0x........</ip>
@@ -121,6 +127,9 @@
<ip>0x........</ip>
<obj>...</obj>
<fn>start_thread</fn>
+ <dir>...</dir>
+ <file>pthread_create.c</file>
+ <line>...</line>
</frame>
<frame>
<ip>0x........</ip>
@@ -175,6 +184,9 @@
<ip>0x........</ip>
<obj>...</obj>
<fn>start_thread</fn>
+ <dir>...</dir>
+ <file>pthread_create.c</file>
+ <line>...</line>
</frame>
<frame>
<ip>0x........</ip>
@@ -229,6 +241,9 @@
<ip>0x........</ip>
<obj>...</obj>
<fn>start_thread</fn>
+ <dir>...</dir>
+ <file>pthread_create.c</file>
+ <line>...</line>
</frame>
<frame>
<ip>0x........</ip>
@@ -283,6 +298,9 @@
<ip>0x........</ip>
<obj>...</obj>
<fn>start_thread</fn>
+ <dir>...</dir>
+ <file>pthread_create.c</file>
+ <line>...</line>
</frame>
<frame>
<ip>0x........</ip>
=================================================
./valgrind-new/helgrind/tests/tc09_bad_unlock.stderr.diff-glibc23-amd64
=================================================
--- tc09_bad_unlock.stderr.exp-glibc23-amd64 2010-08-22 00:05:12.000000000 -0500
+++ tc09_bad_unlock.stderr.out 2010-08-22 00:32:40.000000000 -0500
@@ -31,14 +31,13 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:49)
-Thread #x deallocated location 0x........ containing a locked lock
- at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: main (tc09_bad_unlock.c:49)
- Lock at 0x........ was first observed
- at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
+Thread #x's call to pthread_mutex_unlock failed
+ with error code 22 (EINVAL: Invalid argument)
+ at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
+ by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:49)
+---------------------
Thread #x unlocked a not-locked lock at 0x........
at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
by 0x........: nearly_main (tc09_bad_unlock.c:27)
@@ -46,6 +45,20 @@
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
by 0x........: nearly_main (tc09_bad_unlock.c:23)
+ by 0x........: main (tc09_bad_unlock.c:49)
+
+Thread #x: Attempt to re-lock a non-recursive lock I already hold
+ at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
+ by 0x........: nearly_main (tc09_bad_unlock.c:32)
+ by 0x........: main (tc09_bad_unlock.c:50)
+ Lock was previously acquired
+ at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
+ by 0x........: nearly_main (tc09_bad_unlock.c:32)
+ by 0x........: main (tc09_bad_unlock.c:49)
+
+Thread #x: Bug in libpthread: recursive write lock granted on mutex/wrlock which does not support recursion
+ at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
+ by 0x........: nearly_main (tc09_bad_unlock.c:32)
by 0x........: main (tc09_bad_unlock.c:50)
Thread #x was created
@@ -62,20 +75,21 @@
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:50)
+ by 0x........: main (tc09_bad_unlock.c:49)
Thread #x unlocked an invalid lock at 0x........
at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:50)
-Thread #x deallocated location 0x........ containing a locked lock
- at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: main (tc09_bad_unlock.c:50)
- Lock at 0x........ was first observed
- at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
+Thread #x's call to pthread_mutex_unlock failed
+ with error code 22 (EINVAL: Invalid argument)
+ at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
+ by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:50)
+Thread #x: Exiting thread still holds 1 lock
+ ...
+
-ERROR SUMMARY: 8 errors from 8 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 11 errors from 11 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/helgrind/tests/tc09_bad_unlock.stderr.diff-glibc25-amd64
=================================================
--- tc09_bad_unlock.stderr.exp-glibc25-amd64 2010-08-22 00:05:12.000000000 -0500
+++ tc09_bad_unlock.stderr.out 2010-08-22 00:32:40.000000000 -0500
@@ -51,6 +51,10 @@
at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
by 0x........: nearly_main (tc09_bad_unlock.c:32)
by 0x........: main (tc09_bad_unlock.c:50)
+ Lock was previously acquired
+ at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
+ by 0x........: nearly_main (tc09_bad_unlock.c:32)
+ by 0x........: main (tc09_bad_unlock.c:49)
Thread #x: Bug in libpthread: recursive write lock granted on mutex/wrlock which does not support recursion
at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
=================================================
./valgrind-new/helgrind/tests/tc09_bad_unlock.stderr.diff-glibc25-x86
=================================================
--- tc09_bad_unlock.stderr.exp-glibc25-x86 2010-08-22 00:05:12.000000000 -0500
+++ tc09_bad_unlock.stderr.out 2010-08-22 00:32:40.000000000 -0500
@@ -37,14 +37,7 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:49)
-Thread #x deallocated location 0x........ containing a locked lock
- at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: main (tc09_bad_unlock.c:49)
- Lock at 0x........ was first observed
- at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:49)
-
+---------------------
Thread #x unlocked a not-locked lock at 0x........
at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
by 0x........: nearly_main (tc09_bad_unlock.c:27)
@@ -52,6 +45,20 @@
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
by 0x........: nearly_main (tc09_bad_unlock.c:23)
+ by 0x........: main (tc09_bad_unlock.c:49)
+
+Thread #x: Attempt to re-lock a non-recursive lock I already hold
+ at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
+ by 0x........: nearly_main (tc09_bad_unlock.c:32)
+ by 0x........: main (tc09_bad_unlock.c:50)
+ Lock was previously acquired
+ at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
+ by 0x........: nearly_main (tc09_bad_unlock.c:32)
+ by 0x........: main (tc09_bad_unlock.c:49)
+
+Thread #x: Bug in libpthread: recursive write lock granted on mutex/wrlock which does not support recursion
+ at 0x........: pthread_mutex_lock (hg_intercepts.c:...)
+ by 0x........: nearly_main (tc09_bad_unlock.c:32)
by 0x........: main (tc09_bad_unlock.c:50)
Thread #x was created
@@ -68,7 +75,7 @@
Lock at 0x........ was first observed
at 0x........: pthread_mutex_init (hg_intercepts.c:...)
by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:50)
+ by 0x........: main (tc09_bad_unlock.c:49)
Thread #x unlocked an invalid lock at 0x........
at 0x........: pthread_mutex_unlock (hg_intercepts.c:...)
@@ -81,13 +88,8 @@
by 0x........: nearly_main (tc09_bad_unlock.c:41)
by 0x........: main (tc09_bad_unlock.c:50)
-Thread #x deallocated location 0x........ containing a locked lock
- at 0x........: nearly_main (tc09_bad_unlock.c:45)
- by 0x........: main (tc09_bad_unlock.c:50)
- Lock at 0x........ was first observed
- at 0x........: pthread_mutex_init (hg_intercepts.c:...)
- by 0x........: nearly_main (tc09_bad_unlock.c:31)
- by 0x........: main (tc09_bad_unlock.c:50)
+Thread #x: Exiting thread still holds 1 lock
+ ...
-ERROR SUMMARY: 10 errors from 10 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 11 errors from 11 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/helgrind/tests/tc23_bogus_condwait.stderr.diff
=================================================
--- tc23_bogus_condwait.stderr.exp 2010-08-22 00:05:12.000000000 -0500
+++ tc23_bogus_condwait.stderr.out 2010-08-22 00:33:55.000000000 -0500
@@ -5,29 +5,21 @@
at 0x........: pthread_cond_wait@* (hg_intercepts.c:...)
by 0x........: main (tc23_bogus_condwait.c:69)
-Thread #x: pthread_cond_{timed}wait called with un-held mutex
- at 0x........: pthread_cond_wait@* (hg_intercepts.c:...)
- by 0x........: main (tc23_bogus_condwait.c:72)
-
-Thread #x: pthread_cond_{timed}wait: cond is associated with a different mutex
- at 0x........: pthread_cond_wait@* (hg_intercepts.c:...)
- by 0x........: main (tc23_bogus_condwait.c:72)
-
-Thread #x: pthread_cond_{timed}wait called with mutex of type pthread_rwlock_t*
- at 0x........: pthread_cond_wait@* (hg_intercepts.c:...)
- by 0x........: main (tc23_bogus_condwait.c:75)
-Thread #x: pthread_cond_{timed}wait: cond is associated with a different mutex
- at 0x........: pthread_cond_wait@* (hg_intercepts.c:...)
- by 0x........: main (tc23_bogus_condwait.c:75)
-
-Thread #x: pthread_cond_{timed}wait called with mutex held by a different thread
- at 0x........: pthread_cond_wait@* (hg_intercepts.c:...)
- by 0x........: main (tc23_bogus_condwait.c:78)
+Process terminating with default action of signal 7 (SIGBUS)
+ Invalid address alignment at address 0x........
+ at 0x........: __pthread_mutex_unlock_usercnt (pthread_mutex_unlock.c:64)
+ by 0x........: pthread_cond_wait@@GLIBC_2.3.2 (pthread_cond_wait.c:108)
+ by 0x........: pthread_cond_wait@* (hg_intercepts.c:...)
+ by 0x........: main (tc23_bogus_condwait.c:69)
+Thread #x was created
+ ...
+ by 0x........: pthread_create@* (hg_intercepts.c:...)
+ by 0x........: main (tc23_bogus_condwait.c:61)
-Thread #x: pthread_cond_{timed}wait: cond is associated with a different mutex
- at 0x........: pthread_cond_wait@* (hg_intercepts.c:...)
- by 0x........: main (tc23_bogus_condwait.c:78)
+Thread #x: Exiting thread still holds 1 lock
+ ...
+ ...
-ERROR SUMMARY: 7 errors from 7 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
=================================================
./valgrind-new/massif/tests/deep-D.post.diff
=================================================
--- deep-D.post.exp 2010-08-22 00:05:21.000000000 -0500
+++ deep-D.post.out 2010-08-22 00:27:56.000000000 -0500
@@ -46,8 +46,9 @@
8 3,264 3,264 3,200 64 0
9 3,672 3,672 3,600 72 0
98.04% (3,600B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
-->98.04% (3,600B) 0x........: (below main)
-
+->98.04% (3,600B) 0x........: ??? (in /...libc...)
+ ->98.04% (3,600B) 0x........: (below main)
+
--------------------------------------------------------------------------------
n time(B) total(B) useful-heap(B) extra-heap(B) stacks(B)
--------------------------------------------------------------------------------
=================================================
./valgrind-new/massif/tests/overloaded-new.post.diff
=================================================
--- overloaded-new.post.exp 2010-08-22 00:05:21.000000000 -0500
+++ overloaded-new.post.out 2010-08-22 00:28:07.000000000 -0500
@@ -42,14 +42,18 @@
4 12,032 12,032 12,000 32 0
5 12,032 12,032 12,000 32 0
99.73% (12,000B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
-->33.24% (4,000B) 0x........: main (overloaded-new.cpp:49)
-|
-->33.24% (4,000B) 0x........: main (overloaded-new.cpp:50)
-|
-->16.62% (2,000B) 0x........: main (overloaded-new.cpp:51)
-|
-->16.62% (2,000B) 0x........: main (overloaded-new.cpp:52)
-
+->33.24% (4,000B) 0x........: operator new(unsigned int) (overloaded-new.cpp:19)
+| ->33.24% (4,000B) 0x........: main (overloaded-new.cpp:49)
+|
+->33.24% (4,000B) 0x........: operator new(unsigned int, std::nothrow_t const&) (overloaded-new.cpp:24)
+| ->33.24% (4,000B) 0x........: main (overloaded-new.cpp:50)
+|
+->16.62% (2,000B) 0x........: operator new[](unsigned int) (overloaded-new.cpp:29)
+| ->16.62% (2,000B) 0x........: main (overloaded-new.cpp:51)
+|
+->16.62% (2,000B) 0x........: operator new[](unsigned int, std::nothrow_t const&) (overloaded-new.cpp:34)
+ ->16.62% (2,000B) 0x........: main (overloaded-new.cpp:52)
+
--------------------------------------------------------------------------------
n time(B) total(B) useful-heap(B) extra-heap(B) stacks(B)
--------------------------------------------------------------------------------
=================================================
./valgrind-new/memcheck/tests/badjump.stderr.diff
=================================================
--- badjump.stderr.exp 2010-08-22 00:05:20.000000000 -0500
+++ badjump.stderr.out 2010-08-22 00:20:59.000000000 -0500
@@ -1,6 +1,7 @@
Jump to the invalid address stated on the next line
at 0x........: ???
+ by 0x........: ??? (in /...libc...)
by 0x........: (below main)
Address 0x........ is not stack'd, ...
[truncated message content]
From: Rich C. <rc...@wi...> - 2010-08-22 04:14:42
Nightly build on macbook ( Darwin 9.8.0 i386 )
Started at 2010-08-21 23:05:01 CDT
Ended at 2010-08-21 23:14:34 CDT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... failed

Last 20 lines of verbose log follow

echo gcc -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -o int int.o
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT jcxz.o -MD -MP -MF .deps/jcxz.Tpo -c -o jcxz.o jcxz.c
mv -f .deps/jcxz.Tpo .deps/jcxz.Po
gcc -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -o jcxz jcxz.o
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT lahf.o -MD -MP -MF .deps/lahf.Tpo -c -o lahf.o lahf.c
mv -f .deps/lahf.Tpo .deps/lahf.Po
gcc -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -o lahf lahf.o
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT looper.o -MD -MP -MF .deps/looper.Tpo -c -o looper.o looper.c
mv -f .deps/looper.Tpo .deps/looper.Po
gcc -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -o looper looper.o
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT lzcnt32.o -MD -MP -MF .deps/lzcnt32.Tpo -c -o lzcnt32.o lzcnt32.c
/var/tmp//ccG2BCzY.s:51:no such instruction: `lzcntl 0(%eax), %esi'
/var/tmp//ccG2BCzY.s:93:no such instruction: `lzcntw 0(%eax), %si'
make[5]: *** [lzcnt32.o] Error 1
rm insn_ssse3.c insn_sse3.c insn_fpu.c insn_sse.c insn_mmx.c insn_mmxext.c insn_sse2.c insn_basic.c insn_cmov.c
make[4]: *** [check-am] Error 2
make[3]: *** [check-recursive] Error 1
make[2]: *** [check-recursive] Error 1
make[1]: *** [check-recursive] Error 1
make: *** [check] Error 2

=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... failed

Last 20 lines of verbose log follow

echo gcc -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -o int int.o
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT jcxz.o -MD -MP -MF .deps/jcxz.Tpo -c -o jcxz.o jcxz.c
mv -f .deps/jcxz.Tpo .deps/jcxz.Po
gcc -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -o jcxz jcxz.o
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT lahf.o -MD -MP -MF .deps/lahf.Tpo -c -o lahf.o lahf.c
mv -f .deps/lahf.Tpo .deps/lahf.Po
gcc -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -o lahf lahf.o
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT looper.o -MD -MP -MF .deps/looper.Tpo -c -o looper.o looper.c
mv -f .deps/looper.Tpo .deps/looper.Po
gcc -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -o looper looper.o
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT lzcnt32.o -MD -MP -MF .deps/lzcnt32.Tpo -c -o lzcnt32.o lzcnt32.c
/var/tmp//cceyFd59.s:51:no such instruction: `lzcntl 0(%eax), %esi'
/var/tmp//cceyFd59.s:93:no such instruction: `lzcntw 0(%eax), %si'
make[5]: *** [lzcnt32.o] Error 1
rm insn_ssse3.c insn_sse3.c insn_fpu.c insn_sse.c insn_mmx.c insn_mmxext.c insn_sse2.c insn_basic.c insn_cmov.c
make[4]: *** [check-am] Error 2
make[3]: *** [check-recursive] Error 1
make[2]: *** [check-recursive] Error 1
make[1]: *** [check-recursive] Error 1
make: *** [check] Error 2

=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sat Aug 21 23:10:13 2010
--- new.short Sat Aug 21 23:14:34 2010
***************
*** 17,20 ****
  gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT lzcnt32.o -MD -MP -MF .deps/lzcnt32.Tpo -c -o lzcnt32.o lzcnt32.c
! /var/tmp//cceyFd59.s:51:no such instruction: `lzcntl 0(%eax), %esi'
! /var/tmp//cceyFd59.s:93:no such instruction: `lzcntw 0(%eax), %si'
  make[5]: *** [lzcnt32.o] Error 1
--- 17,20 ----
  gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../.. -I../../../include -I../../../coregrind -I../../../include -I../../../VEX/pub -DVGA_x86=1 -DVGO_darwin=1 -DVGP_x86_darwin=1 -Winline -Wall -Wshadow -g -m32 -mmmx -msse -mdynamic-no-pic -Wno-long-long -Wno-pointer-sign -fno-stack-protector -MT lzcnt32.o -MD -MP -MF .deps/lzcnt32.Tpo -c -o lzcnt32.o lzcnt32.c
! /var/tmp//ccG2BCzY.s:51:no such instruction: `lzcntl 0(%eax), %esi'
! /var/tmp//ccG2BCzY.s:93:no such instruction: `lzcntw 0(%eax), %si'
  make[5]: *** [lzcnt32.o] Error 1

Congratulations, all tests passed!
From: Tom H. <th...@cy...> - 2010-08-22 02:51:47
Nightly build on lloyd ( x86_64, Fedora 7 )
Started at 2010-08-22 03:05:05 BST
Ended at 2010-08-22 03:51:27 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 546 tests, 2 stderr failures, 3 stdout failures, 0 post failures ==
none/tests/amd64/bug132918 (stdout)
none/tests/amd64/fxtract (stdout)
none/tests/x86/fxtract (stdout)
helgrind/tests/tc06_two_races_xml (stderr)
helgrind/tests/tc09_bad_unlock (stderr)