From: Ivo R. <iv...@iv...> - 2017-04-24 20:36:00

2017-04-24 22:03 GMT+02:00 Matthias Schwarzott <zz...@ge...>:
> Am 24.04.2017 um 17:00 schrieb Ivo Raisr:
>> Any comments or objections to patch v2 for bug 379039?
>> https://bugs.kde.org/show_bug.cgi?id=379039
>>
>> I.
>
> Hi!
>
> The code seems to work, but the len variable does not mean length of the
> string, so it could be misleading.
>
> Additionally I am not sure if the function POST(sys_prctl) must also be
> modified.

Hi Matthias,

Thank you for your comments. You are right, I had to modify POST(sys_prctl)
so that it takes into account that ARG2 might not need to be nul-terminated.

> The test memcheck/tests/threadname.c maybe needs more cases:
>
> * Set threadname to a long string and check that only the first 15
>   characters are printed as threadname for the next error.

Why? We do not want to do functional testing of libpthread or the prctl
syscall.

> * If possible a test that proves, that POST(sys_prctl) does not access
>   memory after byte 16 (but I do not know how to test this).

That would be appropriate for the prctl(get-name) case. Different story.

I will attach a new patch shortly.

I.

From: Matthias S. <zz...@ge...> - 2017-04-24 20:20:56

Am 24.04.2017 um 22:03 schrieb Matthias Schwarzott:
> Am 24.04.2017 um 17:00 schrieb Ivo Raisr:
>> Any comments or objections to patch v2 for bug 379039?
>> https://bugs.kde.org/show_bug.cgi?id=379039
>>
>> I.
>
> Hi!
>
> The code seems to work, but the len variable does not mean length of the
> string, so it could be misleading.
>
> Additionally I am not sure if the function POST(sys_prctl) must also be
> modified.
>
> The test memcheck/tests/threadname.c maybe needs more cases:
>
> * Set threadname to a long string and check that only the first 15
>   characters are printed as threadname for the next error.

I forgot to mention that for this test prctl must be called directly
instead of pthread_setname_np as in the existing cases.
pthread_setname_np fails with ERANGE without calling prctl.

> * If possible a test that proves, that POST(sys_prctl) does not access
>   memory after byte 16 (but I do not know how to test this).
>
> Regards
> Matthias

From: Matthias S. <zz...@ge...> - 2017-04-24 20:04:00

Am 24.04.2017 um 17:00 schrieb Ivo Raisr:
> Any comments or objections to patch v2 for bug 379039?
> https://bugs.kde.org/show_bug.cgi?id=379039
>
> I.

Hi!

The code seems to work, but the len variable does not mean length of the
string, so it could be misleading.

Additionally I am not sure if the function POST(sys_prctl) must also be
modified.

The test memcheck/tests/threadname.c maybe needs more cases:

* Set threadname to a long string and check that only the first 15
  characters are printed as threadname for the next error.

* If possible a test that proves, that POST(sys_prctl) does not access
  memory after byte 16 (but I do not know how to test this).

Regards
Matthias

From: Petar J. <mip...@gm...> - 2017-04-24 17:50:24

On Sun, Apr 23, 2017 at 3:57 PM, 网络尖兵 <net...@fo...> wrote:
> The Valgrind didn't reported any errors in my program. That is why I asked

Have you run all the tools that come with Valgrind? Try different tools.
Different timing or network-related events could also be the cause of the
behaviour you are seeing when your program is run under Valgrind.

From: Ivo R. <iv...@iv...> - 2017-04-24 15:01:56

Any comments or objections to the patch attached to this bug?
https://bugs.kde.org/show_bug.cgi?id=379094

I.

From: Ivo R. <iv...@iv...> - 2017-04-24 15:00:51

Any comments or objections to patch v2 for bug 379039?
https://bugs.kde.org/show_bug.cgi?id=379039

I.

From: <sv...@va...> - 2017-04-24 13:33:24
Author: petarj
Date: Mon Apr 24 14:33:17 2017
New Revision: 16310
Log:
mips: fix build breakage introduced in r16309
Change archinfo->hwcaps to vex_archinfo.hwcaps.
Fixes build breakage.
Modified:
trunk/coregrind/m_translate.c
Modified: trunk/coregrind/m_translate.c
==============================================================================
--- trunk/coregrind/m_translate.c (original)
+++ trunk/coregrind/m_translate.c Mon Apr 24 14:33:17 2017
@@ -1696,7 +1696,7 @@
/* Compute guest__use_fallback_LLSC, overiding any settings of
VG_(clo_fallback_llsc) that we know would cause the guest to
fail (loop). */
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(vex_archinfo.hwcaps) == VEX_PRID_COMP_CAVIUM) {
/* We must use the fallback scheme. */
vex_abiinfo.guest__use_fallback_LLSC = True;
} else {
From: <sv...@va...> - 2017-04-24 10:57:14
Author: sewardj
Date: Mon Apr 24 11:57:05 2017
New Revision: 3353
Log:
widen_z_16_to_64, widen_z_8_to_64: generate less stupid code.
Modified:
trunk/priv/host_arm64_isel.c
Modified: trunk/priv/host_arm64_isel.c
==============================================================================
--- trunk/priv/host_arm64_isel.c (original)
+++ trunk/priv/host_arm64_isel.c Mon Apr 24 11:57:05 2017
@@ -298,10 +298,9 @@
a new register, and return the new register. */
static HReg widen_z_16_to_64 ( ISelEnv* env, HReg src )
{
- HReg dst = newVRegI(env);
- ARM64RI6* n48 = ARM64RI6_I6(48);
- addInstr(env, ARM64Instr_Shift(dst, src, n48, ARM64sh_SHL));
- addInstr(env, ARM64Instr_Shift(dst, dst, n48, ARM64sh_SHR));
+ HReg dst = newVRegI(env);
+ ARM64RIL* mask = ARM64RIL_I13(1, 0, 15); /* encodes 0xFFFF */
+ addInstr(env, ARM64Instr_Logic(dst, src, mask, ARM64lo_AND));
return dst;
}
@@ -329,10 +328,9 @@
static HReg widen_z_8_to_64 ( ISelEnv* env, HReg src )
{
- HReg dst = newVRegI(env);
- ARM64RI6* n56 = ARM64RI6_I6(56);
- addInstr(env, ARM64Instr_Shift(dst, src, n56, ARM64sh_SHL));
- addInstr(env, ARM64Instr_Shift(dst, dst, n56, ARM64sh_SHR));
+ HReg dst = newVRegI(env);
+ ARM64RIL* mask = ARM64RIL_I13(1, 0, 7); /* encodes 0xFF */
+ addInstr(env, ARM64Instr_Logic(dst, src, mask, ARM64lo_AND));
return dst;
}
Author: sewardj
Date: Mon Apr 24 10:24:57 2017
New Revision: 16309
Log:
Bug 369459 - valgrind on arm64 violates the ARMv8 spec (ldxr/stxr)
This implements a fallback LL/SC implementation as described in bug 344524.
Valgrind side changes:
* Command line plumbing for --sim-hints=fallback-llsc
* memcheck: handle new arm64 guest state in memcheck/mc_machine.c
Modified:
trunk/coregrind/m_main.c
trunk/coregrind/m_scheduler/scheduler.c
trunk/coregrind/m_translate.c
trunk/coregrind/pub_core_options.h
trunk/memcheck/mc_machine.c
trunk/none/tests/cmdline1.stdout.exp
trunk/none/tests/cmdline2.stdout.exp
Modified: trunk/coregrind/m_main.c
==============================================================================
--- trunk/coregrind/m_main.c (original)
+++ trunk/coregrind/m_main.c Mon Apr 24 10:24:57 2017
@@ -187,7 +187,7 @@
" --sim-hints=hint1,hint2,... activate unusual sim behaviours [none] \n"
" where hint is one of:\n"
" lax-ioctls lax-doors fuse-compatible enable-outer\n"
-" no-inner-prefix no-nptl-pthread-stackcache none\n"
+" no-inner-prefix no-nptl-pthread-stackcache fallback-llsc none\n"
" --fair-sched=no|yes|try schedule threads fairly on multicore systems [no]\n"
" --kernel-variant=variant1,variant2,...\n"
" handle non-standard kernel variants [none]\n"
@@ -417,7 +417,7 @@
else if VG_USETX_CLO (str, "--sim-hints",
"lax-ioctls,lax-doors,fuse-compatible,"
"enable-outer,no-inner-prefix,"
- "no-nptl-pthread-stackcache",
+ "no-nptl-pthread-stackcache,fallback-llsc",
VG_(clo_sim_hints)) {}
}
Modified: trunk/coregrind/m_scheduler/scheduler.c
==============================================================================
--- trunk/coregrind/m_scheduler/scheduler.c (original)
+++ trunk/coregrind/m_scheduler/scheduler.c Mon Apr 24 10:24:57 2017
@@ -925,6 +925,14 @@
tst->arch.vex.host_EvC_FAILADDR
= (HWord)VG_(fnptr_to_fnentry)( &VG_(disp_cp_evcheck_fail) );
+ /* Invalidate any in-flight LL/SC transactions, in the case that we're
+ using the fallback LL/SC implementation. See bugs 344524 and 369459. */
+# if defined(VGP_mips32_linux) || defined(VGP_mips64_linux)
+ tst->arch.vex.guest_LLaddr = (HWord)(-1);
+# elif defined(VGP_arm64_linux)
+ tst->arch.vex.guest_LLSC_SIZE = 0;
+# endif
+
if (0) {
vki_sigset_t m;
Int i, err = VG_(sigprocmask)(VKI_SIG_SETMASK, NULL, &m);
@@ -957,10 +965,6 @@
vg_assert(VG_(in_generated_code) == True);
VG_(in_generated_code) = False;
-#if defined(VGA_mips32) || defined(VGA_mips64)
- tst->arch.vex.guest_LLaddr = (HWord)(-1);
-#endif
-
if (jumped != (HWord)0) {
/* We get here if the client took a fault that caused our signal
handler to longjmp. */
Modified: trunk/coregrind/m_translate.c
==============================================================================
--- trunk/coregrind/m_translate.c (original)
+++ trunk/coregrind/m_translate.c Mon Apr 24 10:24:57 2017
@@ -1663,30 +1663,51 @@
vex_abiinfo.guest_amd64_assume_fs_is_const = True;
vex_abiinfo.guest_amd64_assume_gs_is_const = True;
# endif
+
# if defined(VGP_amd64_darwin)
vex_abiinfo.guest_amd64_assume_gs_is_const = True;
# endif
+
+# if defined(VGP_amd64_solaris)
+ vex_abiinfo.guest_amd64_assume_fs_is_const = True;
+# endif
+
# if defined(VGP_ppc32_linux)
vex_abiinfo.guest_ppc_zap_RZ_at_blr = False;
vex_abiinfo.guest_ppc_zap_RZ_at_bl = NULL;
# endif
+
# if defined(VGP_ppc64be_linux)
vex_abiinfo.guest_ppc_zap_RZ_at_blr = True;
vex_abiinfo.guest_ppc_zap_RZ_at_bl = const_True;
vex_abiinfo.host_ppc_calls_use_fndescrs = True;
# endif
+
# if defined(VGP_ppc64le_linux)
vex_abiinfo.guest_ppc_zap_RZ_at_blr = True;
vex_abiinfo.guest_ppc_zap_RZ_at_bl = const_True;
vex_abiinfo.host_ppc_calls_use_fndescrs = False;
# endif
-# if defined(VGP_amd64_solaris)
- vex_abiinfo.guest_amd64_assume_fs_is_const = True;
-# endif
+
# if defined(VGP_mips32_linux) || defined(VGP_mips64_linux)
ThreadArchState* arch = &VG_(threads)[tid].arch;
vex_abiinfo.guest_mips_fp_mode64 =
!!(arch->vex.guest_CP0_status & MIPS_CP0_STATUS_FR);
+ /* Compute guest__use_fallback_LLSC, overiding any settings of
+ VG_(clo_fallback_llsc) that we know would cause the guest to
+ fail (loop). */
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ /* We must use the fallback scheme. */
+ vex_abiinfo.guest__use_fallback_LLSC = True;
+ } else {
+ vex_abiinfo.guest__use_fallback_LLSC
+ = SimHintiS(SimHint_fallback_llsc, VG_(clo_sim_hints));
+ }
+# endif
+
+# if defined(VGP_arm64_linux)
+ vex_abiinfo.guest__use_fallback_LLSC
+ = SimHintiS(SimHint_fallback_llsc, VG_(clo_sim_hints));
# endif
/* Set up closure args. */
Modified: trunk/coregrind/pub_core_options.h
==============================================================================
--- trunk/coregrind/pub_core_options.h (original)
+++ trunk/coregrind/pub_core_options.h Mon Apr 24 10:24:57 2017
@@ -222,14 +222,15 @@
SimHint_fuse_compatible,
SimHint_enable_outer,
SimHint_no_inner_prefix,
- SimHint_no_nptl_pthread_stackcache
+ SimHint_no_nptl_pthread_stackcache,
+ SimHint_fallback_llsc
}
SimHint;
// Build mask to check or set SimHint a membership
#define SimHint2S(a) (1 << (a))
// SimHint h is member of the Set s ?
-#define SimHintiS(h,s) ((s) & SimHint2S(h))
+#define SimHintiS(h,s) (((s) & SimHint2S(h)) != 0)
extern UInt VG_(clo_sim_hints);
/* Show symbols in the form 'name+offset' ? Default: NO */
Modified: trunk/memcheck/mc_machine.c
==============================================================================
--- trunk/memcheck/mc_machine.c (original)
+++ trunk/memcheck/mc_machine.c Mon Apr 24 10:24:57 2017
@@ -1040,6 +1040,10 @@
if (o == GOF(CMSTART) && sz == 8) return -1; // untracked
if (o == GOF(CMLEN) && sz == 8) return -1; // untracked
+ if (o == GOF(LLSC_SIZE) && sz == 8) return -1; // untracked
+ if (o == GOF(LLSC_ADDR) && sz == 8) return o;
+ if (o == GOF(LLSC_DATA) && sz == 8) return o;
+
VG_(printf)("MC_(get_otrack_shadow_offset)(arm64)(off=%d,sz=%d)\n",
offset,szB);
tl_assert(0);
Modified: trunk/none/tests/cmdline1.stdout.exp
==============================================================================
--- trunk/none/tests/cmdline1.stdout.exp (original)
+++ trunk/none/tests/cmdline1.stdout.exp Mon Apr 24 10:24:57 2017
@@ -101,7 +101,7 @@
--sim-hints=hint1,hint2,... activate unusual sim behaviours [none]
where hint is one of:
lax-ioctls lax-doors fuse-compatible enable-outer
- no-inner-prefix no-nptl-pthread-stackcache none
+ no-inner-prefix no-nptl-pthread-stackcache fallback-llsc none
--fair-sched=no|yes|try schedule threads fairly on multicore systems [no]
--kernel-variant=variant1,variant2,...
handle non-standard kernel variants [none]
Modified: trunk/none/tests/cmdline2.stdout.exp
==============================================================================
--- trunk/none/tests/cmdline2.stdout.exp (original)
+++ trunk/none/tests/cmdline2.stdout.exp Mon Apr 24 10:24:57 2017
@@ -101,7 +101,7 @@
--sim-hints=hint1,hint2,... activate unusual sim behaviours [none]
where hint is one of:
lax-ioctls lax-doors fuse-compatible enable-outer
- no-inner-prefix no-nptl-pthread-stackcache none
+ no-inner-prefix no-nptl-pthread-stackcache fallback-llsc none
--fair-sched=no|yes|try schedule threads fairly on multicore systems [no]
--kernel-variant=variant1,variant2,...
handle non-standard kernel variants [none]
Author: sewardj
Date: Mon Apr 24 10:23:43 2017
New Revision: 3352
Log:
Bug 369459 - valgrind on arm64 violates the ARMv8 spec (ldxr/stxr)
This implements a fallback LL/SC implementation as described in bug 344524.
The fallback implementation is not enabled by default, and there is no
auto-detection for when it should be used. To use it, run with the
flag --sim-hints=fallback-llsc. This commit also allows the existing
MIPS fallback implementation to be enabled with that flag.
VEX side changes:
* priv/main_main.c, pub/libvex.h
Adds new field guest__use_fallback_LLSC to VexAbiInfo
* pub/libvex_guest_arm64.h priv/guest_arm64_toIR.c
add front end support, new guest state fields
guest_LLSC_{SIZE,ADDR,DATA}, also documentation of the scheme
* priv/guest_mips_toIR.c
allow manual selection of fallback implementation via
--sim-hints=fallback-llsc
* priv/host_arm64_defs.c priv/host_arm64_defs.h priv/host_arm64_isel.c
Add support for generating CAS on arm64, as needed by the front end changes
Modified:
trunk/priv/guest_arm64_toIR.c
trunk/priv/guest_mips_toIR.c
trunk/priv/host_arm64_defs.c
trunk/priv/host_arm64_defs.h
trunk/priv/host_arm64_isel.c
trunk/priv/main_main.c
trunk/pub/libvex.h
trunk/pub/libvex_guest_arm64.h
Modified: trunk/priv/guest_arm64_toIR.c
==============================================================================
--- trunk/priv/guest_arm64_toIR.c (original)
+++ trunk/priv/guest_arm64_toIR.c Mon Apr 24 10:23:43 2017
@@ -1147,6 +1147,10 @@
#define OFFB_CMSTART offsetof(VexGuestARM64State,guest_CMSTART)
#define OFFB_CMLEN offsetof(VexGuestARM64State,guest_CMLEN)
+#define OFFB_LLSC_SIZE offsetof(VexGuestARM64State,guest_LLSC_SIZE)
+#define OFFB_LLSC_ADDR offsetof(VexGuestARM64State,guest_LLSC_ADDR)
+#define OFFB_LLSC_DATA offsetof(VexGuestARM64State,guest_LLSC_DATA)
+
/* ---------------- Integer registers ---------------- */
@@ -4702,7 +4706,9 @@
static
-Bool dis_ARM64_load_store(/*MB_OUT*/DisResult* dres, UInt insn)
+Bool dis_ARM64_load_store(/*MB_OUT*/DisResult* dres, UInt insn,
+ const VexAbiInfo* abiinfo
+)
{
# define INSN(_bMax,_bMin) SLICE_UInt(insn, (_bMax), (_bMin))
@@ -6457,6 +6463,32 @@
sz 001000 000 s 0 11111 n t STX{R,RH,RB} Ws, Rt, [Xn|SP]
sz 001000 000 s 1 11111 n t STLX{R,RH,RB} Ws, Rt, [Xn|SP]
*/
+ /* For the "standard" implementation we pass through the LL and SC to
+ the host. For the "fallback" implementation, for details see
+ https://bugs.kde.org/show_bug.cgi?id=344524 and
+ https://bugs.kde.org/show_bug.cgi?id=369459,
+ but in short:
+
+ LoadLinked(addr)
+ gs.LLsize = load_size // 1, 2, 4 or 8
+ gs.LLaddr = addr
+ gs.LLdata = zeroExtend(*addr)
+
+ StoreCond(addr, data)
+ tmp_LLsize = gs.LLsize
+ gs.LLsize = 0 // "no transaction"
+ if tmp_LLsize != store_size -> fail
+ if addr != gs.LLaddr -> fail
+ if zeroExtend(*addr) != gs.LLdata -> fail
+ cas_ok = CAS(store_size, addr, gs.LLdata -> data)
+ if !cas_ok -> fail
+ succeed
+
+ When thread scheduled
+ gs.LLsize = 0 // "no transaction"
+ (coregrind/m_scheduler/scheduler.c, run_thread_for_a_while()
+ has to do this bit)
+ */
if (INSN(29,23) == BITS7(0,0,1,0,0,0,0)
&& (INSN(23,21) & BITS3(1,0,1)) == BITS3(0,0,0)
&& INSN(14,10) == BITS5(1,1,1,1,1)) {
@@ -6478,29 +6510,99 @@
if (isLD && ss == BITS5(1,1,1,1,1)) {
IRTemp res = newTemp(ty);
- stmt(IRStmt_LLSC(Iend_LE, res, mkexpr(ea), NULL/*LL*/));
- putIReg64orZR(tt, widenUto64(ty, mkexpr(res)));
+ if (abiinfo->guest__use_fallback_LLSC) {
+ // Do the load first so we don't update any guest state
+ // if it faults.
+ IRTemp loaded_data64 = newTemp(Ity_I64);
+ assign(loaded_data64, widenUto64(ty, loadLE(ty, mkexpr(ea))));
+ stmt( IRStmt_Put( OFFB_LLSC_DATA, mkexpr(loaded_data64) ));
+ stmt( IRStmt_Put( OFFB_LLSC_ADDR, mkexpr(ea) ));
+ stmt( IRStmt_Put( OFFB_LLSC_SIZE, mkU64(szB) ));
+ putIReg64orZR(tt, mkexpr(loaded_data64));
+ } else {
+ stmt(IRStmt_LLSC(Iend_LE, res, mkexpr(ea), NULL/*LL*/));
+ putIReg64orZR(tt, widenUto64(ty, mkexpr(res)));
+ }
if (isAcqOrRel) {
stmt(IRStmt_MBE(Imbe_Fence));
}
- DIP("ld%sx%s %s, [%s]\n", isAcqOrRel ? "a" : "", suffix[szBlg2],
- nameIRegOrZR(szB == 8, tt), nameIReg64orSP(nn));
+ DIP("ld%sx%s %s, [%s] %s\n", isAcqOrRel ? "a" : "", suffix[szBlg2],
+ nameIRegOrZR(szB == 8, tt), nameIReg64orSP(nn),
+ abiinfo->guest__use_fallback_LLSC
+ ? "(fallback implementation)" : "");
return True;
}
if (!isLD) {
if (isAcqOrRel) {
stmt(IRStmt_MBE(Imbe_Fence));
}
- IRTemp res = newTemp(Ity_I1);
IRExpr* data = narrowFrom64(ty, getIReg64orZR(tt));
- stmt(IRStmt_LLSC(Iend_LE, res, mkexpr(ea), data));
- /* IR semantics: res is 1 if store succeeds, 0 if it fails.
- Need to set rS to 1 on failure, 0 on success. */
- putIReg64orZR(ss, binop(Iop_Xor64, unop(Iop_1Uto64, mkexpr(res)),
- mkU64(1)));
- DIP("st%sx%s %s, %s, [%s]\n", isAcqOrRel ? "a" : "", suffix[szBlg2],
+ if (abiinfo->guest__use_fallback_LLSC) {
+ // This is really ugly, since we don't have any way to do
+ // proper if-then-else. First, set up as if the SC failed,
+ // and jump forwards if it really has failed.
+
+ // Continuation address
+ IRConst* nia = IRConst_U64(guest_PC_curr_instr + 4);
+
+ // "the SC failed". Any non-zero value means failure.
+ putIReg64orZR(ss, mkU64(1));
+
+ IRTemp tmp_LLsize = newTemp(Ity_I64);
+ assign(tmp_LLsize, IRExpr_Get(OFFB_LLSC_SIZE, Ity_I64));
+ stmt( IRStmt_Put( OFFB_LLSC_SIZE, mkU64(0) // "no transaction"
+ ));
+ // Fail if no or wrong-size transaction
+ vassert(szB == 8 || szB == 4 || szB == 2 || szB == 1);
+ stmt( IRStmt_Exit(
+ binop(Iop_CmpNE64, mkexpr(tmp_LLsize), mkU64(szB)),
+ Ijk_Boring, nia, OFFB_PC
+ ));
+ // Fail if the address doesn't match the LL address
+ stmt( IRStmt_Exit(
+ binop(Iop_CmpNE64, mkexpr(ea),
+ IRExpr_Get(OFFB_LLSC_ADDR, Ity_I64)),
+ Ijk_Boring, nia, OFFB_PC
+ ));
+ // Fail if the data doesn't match the LL data
+ IRTemp llsc_data64 = newTemp(Ity_I64);
+ assign(llsc_data64, IRExpr_Get(OFFB_LLSC_DATA, Ity_I64));
+ stmt( IRStmt_Exit(
+ binop(Iop_CmpNE64, widenUto64(ty, loadLE(ty, mkexpr(ea))),
+ mkexpr(llsc_data64)),
+ Ijk_Boring, nia, OFFB_PC
+ ));
+ // Try to CAS the new value in.
+ IRTemp old = newTemp(ty);
+ IRTemp expd = newTemp(ty);
+ assign(expd, narrowFrom64(ty, mkexpr(llsc_data64)));
+ stmt( IRStmt_CAS(mkIRCAS(/*oldHi*/IRTemp_INVALID, old,
+ Iend_LE, mkexpr(ea),
+ /*expdHi*/NULL, mkexpr(expd),
+ /*dataHi*/NULL, data
+ )));
+ // Fail if the CAS failed (viz, old != expd)
+ stmt( IRStmt_Exit(
+ binop(Iop_CmpNE64,
+ widenUto64(ty, mkexpr(old)),
+ widenUto64(ty, mkexpr(expd))),
+ Ijk_Boring, nia, OFFB_PC
+ ));
+ // Otherwise we succeeded (!)
+ putIReg64orZR(ss, mkU64(0));
+ } else {
+ IRTemp res = newTemp(Ity_I1);
+ stmt(IRStmt_LLSC(Iend_LE, res, mkexpr(ea), data));
+ /* IR semantics: res is 1 if store succeeds, 0 if it fails.
+ Need to set rS to 1 on failure, 0 on success. */
+ putIReg64orZR(ss, binop(Iop_Xor64, unop(Iop_1Uto64, mkexpr(res)),
+ mkU64(1)));
+ }
+ DIP("st%sx%s %s, %s, [%s] %s\n", isAcqOrRel ? "a" : "", suffix[szBlg2],
nameIRegOrZR(False, ss),
- nameIRegOrZR(szB == 8, tt), nameIReg64orSP(nn));
+ nameIRegOrZR(szB == 8, tt), nameIReg64orSP(nn),
+ abiinfo->guest__use_fallback_LLSC
+ ? "(fallback implementation)" : "");
return True;
}
/* else fall through */
@@ -6589,7 +6691,8 @@
static
Bool dis_ARM64_branch_etc(/*MB_OUT*/DisResult* dres, UInt insn,
- const VexArchInfo* archinfo)
+ const VexArchInfo* archinfo,
+ const VexAbiInfo* abiinfo)
{
# define INSN(_bMax,_bMin) SLICE_UInt(insn, (_bMax), (_bMin))
@@ -7048,7 +7151,11 @@
/* AFAICS, this simply cancels a (all?) reservations made by a
(any?) preceding LDREX(es). Arrange to hand it through to
the back end. */
- stmt( IRStmt_MBE(Imbe_CancelReservation) );
+ if (abiinfo->guest__use_fallback_LLSC) {
+ stmt( IRStmt_Put( OFFB_LLSC_SIZE, mkU64(0) )); // "no transaction"
+ } else {
+ stmt( IRStmt_MBE(Imbe_CancelReservation) );
+ }
DIP("clrex #%u\n", mm);
return True;
}
@@ -14411,12 +14518,12 @@
break;
case BITS4(1,0,1,0): case BITS4(1,0,1,1):
// Branch, exception generation and system instructions
- ok = dis_ARM64_branch_etc(dres, insn, archinfo);
+ ok = dis_ARM64_branch_etc(dres, insn, archinfo, abiinfo);
break;
case BITS4(0,1,0,0): case BITS4(0,1,1,0):
case BITS4(1,1,0,0): case BITS4(1,1,1,0):
// Loads and stores
- ok = dis_ARM64_load_store(dres, insn);
+ ok = dis_ARM64_load_store(dres, insn, abiinfo);
break;
case BITS4(0,1,0,1): case BITS4(1,1,0,1):
// Data processing - register
Modified: trunk/priv/guest_mips_toIR.c
==============================================================================
--- trunk/priv/guest_mips_toIR.c (original)
+++ trunk/priv/guest_mips_toIR.c Mon Apr 24 10:23:43 2017
@@ -16984,7 +16984,8 @@
case 0x30: /* LL */
DIP("ll r%u, %u(r%u)", rt, imm, rs);
LOAD_STORE_PATTERN;
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM
+ || abiinfo->guest__use_fallback_LLSC) {
t2 = newTemp(ty);
assign(t2, mkWidenFrom32(ty, load(Ity_I32, mkexpr(t1)), True));
putLLaddr(mkexpr(t1));
@@ -17002,7 +17003,8 @@
if (mode64) {
LOAD_STORE_PATTERN;
t2 = newTemp(Ity_I64);
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM
+ || abiinfo->guest__use_fallback_LLSC) {
assign(t2, load(Ity_I64, mkexpr(t1)));
putLLaddr(mkexpr(t1));
putLLdata(mkexpr(t2));
@@ -17019,7 +17021,8 @@
DIP("sc r%u, %u(r%u)", rt, imm, rs);
t2 = newTemp(Ity_I1);
LOAD_STORE_PATTERN;
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM
+ || abiinfo->guest__use_fallback_LLSC) {
t3 = newTemp(Ity_I32);
assign(t2, binop(mode64 ? Iop_CmpNE64 : Iop_CmpNE32,
mkexpr(t1), getLLaddr()));
@@ -17053,7 +17056,8 @@
if (mode64) {
t2 = newTemp(Ity_I1);
LOAD_STORE_PATTERN;
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM
+ || abiinfo->guest__use_fallback_LLSC) {
t3 = newTemp(Ity_I64);
assign(t2, binop(Iop_CmpNE64, mkexpr(t1), getLLaddr()));
assign(t3, getIReg(rt));
Modified: trunk/priv/host_arm64_defs.c
==============================================================================
--- trunk/priv/host_arm64_defs.c (original)
+++ trunk/priv/host_arm64_defs.c Mon Apr 24 10:23:43 2017
@@ -1005,6 +1005,13 @@
vassert(szB == 8 || szB == 4 || szB == 2 || szB == 1);
return i;
}
+ARM64Instr* ARM64Instr_CAS ( Int szB ) {
+ ARM64Instr* i = LibVEX_Alloc_inline(sizeof(ARM64Instr));
+ i->tag = ARM64in_CAS;
+ i->ARM64in.CAS.szB = szB;
+ vassert(szB == 8 || szB == 4 || szB == 2 || szB == 1);
+ return i;
+}
ARM64Instr* ARM64Instr_MFence ( void ) {
ARM64Instr* i = LibVEX_Alloc_inline(sizeof(ARM64Instr));
i->tag = ARM64in_MFence;
@@ -1569,6 +1576,10 @@
sz, i->ARM64in.StrEX.szB == 8 ? 'x' : 'w');
return;
}
+ case ARM64in_CAS: {
+ vex_printf("x1 = cas(%dbit)(x3, x5 -> x7)", 8 * i->ARM64in.CAS.szB);
+ return;
+ }
case ARM64in_MFence:
vex_printf("(mfence) dsb sy; dmb sy; isb");
return;
@@ -2064,6 +2075,14 @@
addHRegUse(u, HRmWrite, hregARM64_X0());
addHRegUse(u, HRmRead, hregARM64_X2());
return;
+ case ARM64in_CAS:
+ addHRegUse(u, HRmRead, hregARM64_X3());
+ addHRegUse(u, HRmRead, hregARM64_X5());
+ addHRegUse(u, HRmRead, hregARM64_X7());
+ addHRegUse(u, HRmWrite, hregARM64_X1());
+ /* Pointless to state this since X8 is not available to RA. */
+ addHRegUse(u, HRmWrite, hregARM64_X8());
+ break;
case ARM64in_MFence:
return;
case ARM64in_ClrEX:
@@ -2326,6 +2345,8 @@
return;
case ARM64in_StrEX:
return;
+ case ARM64in_CAS:
+ return;
case ARM64in_MFence:
return;
case ARM64in_ClrEX:
@@ -3803,6 +3824,61 @@
}
goto bad;
}
+ case ARM64in_CAS: {
+ /* This isn't simple. For an explanation see the comment in
+ host_arm64_defs.h on the the definition of ARM64Instr case
+ CAS. */
+ /* Generate:
+ -- one of:
+ mov x8, x5 // AA0503E8
+ and x8, x5, #0xFFFFFFFF // 92407CA8
+ and x8, x5, #0xFFFF // 92403CA8
+ and x8, x5, #0xFF // 92401CA8
+
+ -- one of:
+ ldxr x1, [x3] // C85F7C61
+ ldxr w1, [x3] // 885F7C61
+ ldxrh w1, [x3] // 485F7C61
+ ldxrb w1, [x3] // 085F7C61
+
+ -- always:
+ cmp x1, x8 // EB08003F
+ bne out // 54000061
+
+ -- one of:
+ stxr w1, x7, [x3] // C8017C67
+ stxr w1, w7, [x3] // 88017C67
+ stxrh w1, w7, [x3] // 48017C67
+ stxrb w1, w7, [x3] // 08017C67
+
+ -- always:
+ eor x1, x5, x1 // CA0100A1
+ out:
+ */
+ switch (i->ARM64in.CAS.szB) {
+ case 8: *p++ = 0xAA0503E8; break;
+ case 4: *p++ = 0x92407CA8; break;
+ case 2: *p++ = 0x92403CA8; break;
+ case 1: *p++ = 0x92401CA8; break;
+ default: vassert(0);
+ }
+ switch (i->ARM64in.CAS.szB) {
+ case 8: *p++ = 0xC85F7C61; break;
+ case 4: *p++ = 0x885F7C61; break;
+ case 2: *p++ = 0x485F7C61; break;
+ case 1: *p++ = 0x085F7C61; break;
+ }
+ *p++ = 0xEB08003F;
+ *p++ = 0x54000061;
+ switch (i->ARM64in.CAS.szB) {
+ case 8: *p++ = 0xC8017C67; break;
+ case 4: *p++ = 0x88017C67; break;
+ case 2: *p++ = 0x48017C67; break;
+ case 1: *p++ = 0x08017C67; break;
+ }
+ *p++ = 0xCA0100A1;
+ goto done;
+ }
case ARM64in_MFence: {
*p++ = 0xD5033F9F; /* DSB sy */
*p++ = 0xD5033FBF; /* DMB sy */
Modified: trunk/priv/host_arm64_defs.h
==============================================================================
--- trunk/priv/host_arm64_defs.h (original)
+++ trunk/priv/host_arm64_defs.h Mon Apr 24 10:23:43 2017
@@ -481,6 +481,7 @@
ARM64in_Mul,
ARM64in_LdrEX,
ARM64in_StrEX,
+ ARM64in_CAS,
ARM64in_MFence,
ARM64in_ClrEX,
/* ARM64in_V*: scalar ops involving vector registers */
@@ -668,6 +669,32 @@
struct {
Int szB; /* 1, 2, 4 or 8 */
} StrEX;
+ /* x1 = CAS(x3(addr), x5(expected) -> x7(new)),
+ where x1[8*szB-1 : 0] == x5[8*szB-1 : 0] indicates success,
+ x1[8*szB-1 : 0] != x5[8*szB-1 : 0] indicates failure.
+ Uses x8 as scratch (but that's not allocatable).
+ Hence: RD x3, x5, x7; WR x1
+
+ (szB=8) mov x8, x5
+ (szB=4) and x8, x5, #0xFFFFFFFF
+ (szB=2) and x8, x5, #0xFFFF
+ (szB=1) and x8, x5, #0xFF
+ -- x8 is correctly zero-extended expected value
+ ldxr x1, [x3]
+ -- x1 is correctly zero-extended actual value
+ cmp x1, x8
+ bne after
+ -- if branch taken, failure; x1[[8*szB-1 : 0] holds old value
+ -- attempt to store
+ stxr w1, x7, [x3]
+ -- if store successful, x1==0, so the eor is "x1 := x5"
+ -- if store failed, x1==1, so the eor makes x1 != x5
+ eor x1, x5, x1
+ after:
+ */
+ struct {
+ Int szB; /* 1, 2, 4 or 8 */
+ } CAS;
/* Mem fence. An insn which fences all loads and stores as
much as possible before continuing. On ARM64 we emit the
sequence "dsb sy ; dmb sy ; isb sy", which is probably
@@ -912,6 +939,7 @@
ARM64MulOp op );
extern ARM64Instr* ARM64Instr_LdrEX ( Int szB );
extern ARM64Instr* ARM64Instr_StrEX ( Int szB );
+extern ARM64Instr* ARM64Instr_CAS ( Int szB );
extern ARM64Instr* ARM64Instr_MFence ( void );
extern ARM64Instr* ARM64Instr_ClrEX ( void );
extern ARM64Instr* ARM64Instr_VLdStH ( Bool isLoad, HReg sD, HReg rN,
Modified: trunk/priv/host_arm64_isel.c
==============================================================================
--- trunk/priv/host_arm64_isel.c (original)
+++ trunk/priv/host_arm64_isel.c Mon Apr 24 10:23:43 2017
@@ -1383,12 +1383,13 @@
|| e->Iex.Binop.op == Iop_CmpLT64S
|| e->Iex.Binop.op == Iop_CmpLT64U
|| e->Iex.Binop.op == Iop_CmpLE64S
- || e->Iex.Binop.op == Iop_CmpLE64U)) {
+ || e->Iex.Binop.op == Iop_CmpLE64U
+ || e->Iex.Binop.op == Iop_CasCmpEQ64)) {
HReg argL = iselIntExpr_R(env, e->Iex.Binop.arg1);
ARM64RIA* argR = iselIntExpr_RIA(env, e->Iex.Binop.arg2);
addInstr(env, ARM64Instr_Cmp(argL, argR, True/*is64*/));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ64: return ARM64cc_EQ;
+ case Iop_CmpEQ64: case Iop_CasCmpEQ64: return ARM64cc_EQ;
case Iop_CmpNE64: return ARM64cc_NE;
case Iop_CmpLT64S: return ARM64cc_LT;
case Iop_CmpLT64U: return ARM64cc_CC;
@@ -1405,12 +1406,13 @@
|| e->Iex.Binop.op == Iop_CmpLT32S
|| e->Iex.Binop.op == Iop_CmpLT32U
|| e->Iex.Binop.op == Iop_CmpLE32S
- || e->Iex.Binop.op == Iop_CmpLE32U)) {
+ || e->Iex.Binop.op == Iop_CmpLE32U
+ || e->Iex.Binop.op == Iop_CasCmpEQ32)) {
HReg argL = iselIntExpr_R(env, e->Iex.Binop.arg1);
ARM64RIA* argR = iselIntExpr_RIA(env, e->Iex.Binop.arg2);
addInstr(env, ARM64Instr_Cmp(argL, argR, False/*!is64*/));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ32: return ARM64cc_EQ;
+ case Iop_CmpEQ32: case Iop_CasCmpEQ32: return ARM64cc_EQ;
case Iop_CmpNE32: return ARM64cc_NE;
case Iop_CmpLT32S: return ARM64cc_LT;
case Iop_CmpLT32U: return ARM64cc_CC;
@@ -1420,6 +1422,34 @@
}
}
+ /* --- Cmp*16*(x,y) --- */
+ if (e->tag == Iex_Binop
+ && (e->Iex.Binop.op == Iop_CasCmpEQ16)) {
+ HReg argL = iselIntExpr_R(env, e->Iex.Binop.arg1);
+ HReg argR = iselIntExpr_R(env, e->Iex.Binop.arg2);
+ HReg argL2 = widen_z_16_to_64(env, argL);
+ HReg argR2 = widen_z_16_to_64(env, argR);
+ addInstr(env, ARM64Instr_Cmp(argL2, ARM64RIA_R(argR2), True/*is64*/));
+ switch (e->Iex.Binop.op) {
+ case Iop_CasCmpEQ16: return ARM64cc_EQ;
+ default: vpanic("iselCondCode(arm64): CmpXX16");
+ }
+ }
+
+ /* --- Cmp*8*(x,y) --- */
+ if (e->tag == Iex_Binop
+ && (e->Iex.Binop.op == Iop_CasCmpEQ8)) {
+ HReg argL = iselIntExpr_R(env, e->Iex.Binop.arg1);
+ HReg argR = iselIntExpr_R(env, e->Iex.Binop.arg2);
+ HReg argL2 = widen_z_8_to_64(env, argL);
+ HReg argR2 = widen_z_8_to_64(env, argR);
+ addInstr(env, ARM64Instr_Cmp(argL2, ARM64RIA_R(argR2), True/*is64*/));
+ switch (e->Iex.Binop.op) {
+ case Iop_CasCmpEQ8: return ARM64cc_EQ;
+ default: vpanic("iselCondCode(arm64): CmpXX8");
+ }
+ }
+
ppIRExpr(e);
vpanic("iselCondCode");
}
@@ -3833,6 +3863,57 @@
break;
}
+ /* --------- ACAS --------- */
+ case Ist_CAS: {
+ if (stmt->Ist.CAS.details->oldHi == IRTemp_INVALID) {
+ /* "normal" singleton CAS */
+ UChar sz;
+ IRCAS* cas = stmt->Ist.CAS.details;
+ IRType ty = typeOfIRExpr(env->type_env, cas->dataLo);
+ switch (ty) {
+ case Ity_I64: sz = 8; break;
+ case Ity_I32: sz = 4; break;
+ case Ity_I16: sz = 2; break;
+ case Ity_I8: sz = 1; break;
+ default: goto unhandled_cas;
+ }
+ HReg rAddr = iselIntExpr_R(env, cas->addr);
+ HReg rExpd = iselIntExpr_R(env, cas->expdLo);
+ HReg rData = iselIntExpr_R(env, cas->dataLo);
+ vassert(cas->expdHi == NULL);
+ vassert(cas->dataHi == NULL);
+ addInstr(env, ARM64Instr_MovI(hregARM64_X3(), rAddr));
+ addInstr(env, ARM64Instr_MovI(hregARM64_X5(), rExpd));
+ addInstr(env, ARM64Instr_MovI(hregARM64_X7(), rData));
+ addInstr(env, ARM64Instr_CAS(sz));
+ /* Now the lowest szB bytes of x1 are either equal to
+ the lowest szB bytes of x5, indicating success, or they
+ aren't, indicating failure. The IR semantics actually
+ require us to return the old value at the location,
+ regardless of success or failure, but in the case of
+ failure it's not clear how to do this, since
+ ARM64Instr_CAS can't provide that. Instead we'll just
+ return the relevant bit of x1, since that's at least
+ guaranteed to be different from the lowest bits of x5 on
+ failure. */
+ HReg rResult = hregARM64_X1();
+ switch (sz) {
+ case 8: break;
+ case 4: rResult = widen_z_32_to_64(env, rResult); break;
+ case 2: rResult = widen_z_16_to_64(env, rResult); break;
+ case 1: rResult = widen_z_8_to_64(env, rResult); break;
+ default: vassert(0);
+ }
+ // "old" in this case is interpreted somewhat liberally, per
+ // the previous comment.
+ HReg rOld = lookupIRTemp(env, cas->oldLo);
+ addInstr(env, ARM64Instr_MovI(rOld, rResult));
+ return;
+ }
+ unhandled_cas:
+ break;
+ }
+
/* --------- MEM FENCE --------- */
case Ist_MBE:
switch (stmt->Ist.MBE.event) {
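(Aside, not part of the patch: given the contract chosen in the Ist_CAS case above, a consumer of the CAS result decides success by comparing the returned "old" value against the expected value, with both sides zero-extended, which is exactly what the new Iop_CasCmpEQ8/16 selections do via widen_z_8_to_64 / widen_z_16_to_64. A tiny C sketch for the 8-bit case; `cas8_succeeded` is a made-up name.)

```c
#include <stdint.h>

/* Success test matching Iop_CasCmpEQ8: zero-extend both operands
   (modelled here by masking) and do a full-width equality compare.
   Illustration only. */
static int cas8_succeeded(uint64_t old_result, uint64_t expected)
{
    return (old_result & 0xFF) == (expected & 0xFF);
}
```

This is why returning "any value different from x5 in the low szB bytes" on failure is sufficient: only the zero-extended low bytes ever reach the comparison.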
Modified: trunk/priv/main_main.c
==============================================================================
--- trunk/priv/main_main.c (original)
+++ trunk/priv/main_main.c Mon Apr 24 10:23:43 2017
@@ -1556,6 +1556,7 @@
vbi->guest_amd64_assume_gs_is_const = False;
vbi->guest_ppc_zap_RZ_at_blr = False;
vbi->guest_ppc_zap_RZ_at_bl = NULL;
+ vbi->guest__use_fallback_LLSC = False;
vbi->host_ppc_calls_use_fndescrs = False;
}
Modified: trunk/pub/libvex.h
==============================================================================
--- trunk/pub/libvex.h (original)
+++ trunk/pub/libvex.h Mon Apr 24 10:23:43 2017
@@ -369,6 +369,11 @@
guest is ppc32-linux ==> const False
guest is other ==> inapplicable
+ guest__use_fallback_LLSC
+ guest is mips32 ==> applicable, default True
+ guest is mips64 ==> applicable, default True
+ guest is arm64 ==> applicable, default False
+
host_ppc_calls_use_fndescrs:
host is ppc32-linux ==> False
host is ppc64-linux ==> True
@@ -401,11 +406,17 @@
is assumed equivalent to a fn which always returns False. */
Bool (*guest_ppc_zap_RZ_at_bl)(Addr);
+ /* Potentially for all guests that use LL/SC: use the fallback
+ (synthesised) implementation rather than passing LL/SC on to
+ the host? */
+ Bool guest__use_fallback_LLSC;
+
/* PPC32/PPC64 HOSTS only: does '&f' give us a pointer to a
function descriptor on the host, or to the function code
itself? True => descriptor, False => code. */
Bool host_ppc_calls_use_fndescrs;
+ /* ??? Description ??? */
Bool guest_mips_fp_mode64;
}
VexAbiInfo;
Modified: trunk/pub/libvex_guest_arm64.h
==============================================================================
--- trunk/pub/libvex_guest_arm64.h (original)
+++ trunk/pub/libvex_guest_arm64.h Mon Apr 24 10:23:43 2017
@@ -159,9 +159,14 @@
note of bits 23 and 22. */
UInt guest_FPCR;
+ /* Fallback LL/SC support. See bugs 344524 and 369459. */
+ ULong guest_LLSC_SIZE; // 0==no current transaction, else 1,2,4 or 8.
+ ULong guest_LLSC_ADDR; // Address of transaction.
+ ULong guest_LLSC_DATA; // Original value at _ADDR, zero-extended.
+
/* Padding to make it have a 16-aligned size */
/* UInt pad_end_0; */
- /* ULong pad_end_1; */
+ ULong pad_end_1;
}
VexGuestARM64State;
|