From: Mark W. <ma...@kl...> - 2017-04-25 18:36:27
On Tue, 2017-04-25 at 11:19 -0700, Patrick J. LoPresti wrote:
> This sort of code is supposed to be handled by
> "--partial-loads-ok=yes". (Which should be made the default, in my
> opinion.)
It already is, according to NEWS, since:

Release 3.11.0 (22 September 2015)

- The default value for --partial-loads-ok has been changed from "no" to
  "yes", so as to avoid false positive errors resulting from some kinds
  of vectorised loops.
Cheers,
Mark
From: Julian S. <js...@ac...> - 2017-04-25 18:32:54
> The inlined code has two load double word instructions (ldbrx inst) that
> are partially uninitialized. Following the two double word loads we do a
> subf. instruction to subtract the values and set the condition code.

Does it help to run with --expensive-definedness-checks=yes? That enables
more accurate but more expensive definedness tracking for subtracts, among
other things.

J
From: Patrick J. L. <lop...@gm...> - 2017-04-25 18:19:34
This sort of code is supposed to be handled by "--partial-loads-ok=yes".
(Which should be made the default, in my opinion.)

If that does not work, it is a bug in the partial-loads-ok support.

 - Pat

On Tue, Apr 25, 2017 at 10:36 AM, Carl E. Love <ce...@us...> wrote:
> Valgrind developers:
>
> The GCC 7 compiler for power has a new optimization that is tripping up
> Valgrind. Basically they are taking the strcmp() function and doing
> some inlining of the code. Here is a snippet of the generated code
>
> return strcmp(str1, str2);
> 100004bc: 28 4c 20 7d ldbrx r9,0,r9
> 100004c0: 28 54 40 7d ldbrx r10,0,r10
> 100004c4: 51 48 6a 7c subf. r3,r10,r9
> 100004c8: 1c 00 82 40 bne 100004e4 <main+0x84>
> 100004cc: f8 1b 2a 7d cmpb r10,r9,r3
> 100004d0: 00 00 aa 2f cmpdi cr7,r10,0
> 100004d4: 38 00 9e 41 beq cr7,1000050c <main+0xac>
>
> The inlined code has two load double word instructions (ldbrx inst) that
> are partially uninitialized. Following the two double word loads we do a
> subf. instruction to subtract the values and set the condition code.
> Then we get to the branch instruction (bne) and valgrind flags the
> error:
>
> ==23948== Conditional jump or move depends on uninitialised value(s)
> ==23948== at 0x100004C8: main (bug80497.c:9)
> ==23948==
> ==23948== Syscall param exit_group(status) contains uninitialised
> byte(s)
> ==23948== at 0x41BDEA4: _Exit (_exit.c:31)
>
> The code has some cmpb instructions to make sure they don't actually use
> the uninitialized bytes but that doesn't really help Valgrind. I was
> thinking of trying to create a rule to ignore the error but not sure I
> can do this as it is inlined code. It isn't like trying to ignore
> errors from a function. I have looked a little at the suppression rules
> and from what I know of them it isn't clear how to write one for this
> case where inlined code could show up anywhere.
>
> Wondering if anyone has thoughts on how to address fixing the issue or
> how to suppress the issue?
>
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> _______________________________________________
> Valgrind-developers mailing list
> Val...@li...
> https://lists.sourceforge.net/lists/listinfo/valgrind-developers
From: Carl E. L. <ce...@us...> - 2017-04-25 17:36:29
Valgrind developers:
The GCC 7 compiler for power has a new optimization that is tripping up
Valgrind. Basically they are taking the strcmp() function and doing
some inlining of the code. Here is a snippet of the generated code
return strcmp(str1, str2);
100004bc: 28 4c 20 7d ldbrx r9,0,r9
100004c0: 28 54 40 7d ldbrx r10,0,r10
100004c4: 51 48 6a 7c subf. r3,r10,r9
100004c8: 1c 00 82 40 bne 100004e4 <main+0x84>
100004cc: f8 1b 2a 7d cmpb r10,r9,r3
100004d0: 00 00 aa 2f cmpdi cr7,r10,0
100004d4: 38 00 9e 41 beq cr7,1000050c <main+0xac>
The inlined code has two load double word instructions (ldbrx inst) that
are partially uninitialized. Following the two double word loads we do a
subf. instruction to subtract the values and set the condition code.
Then we get to the branch instruction (bne) and valgrind flags the
error:
==23948== Conditional jump or move depends on uninitialised value(s)
==23948== at 0x100004C8: main (bug80497.c:9)
==23948==
==23948== Syscall param exit_group(status) contains uninitialised
byte(s)
==23948== at 0x41BDEA4: _Exit (_exit.c:31)
The code has some cmpb instructions to make sure they don't actually use
the uninitialized bytes but that doesn't really help Valgrind. I was
thinking of trying to create a rule to ignore the error but not sure I
can do this as it is inlined code. It isn't like trying to ignore
errors from a function. I have looked a little at the suppression rules
and from what I know of them it isn't clear how to write one for this
case where inlined code could show up anywhere.
Wondering if anyone has thoughts on how to address fixing the issue or
how to suppress the issue?
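For reference, a Memcheck suppression for this report would look roughly like the following (the entry name is arbitrary, made up for this sketch). It also shows why suppressions are a poor fit here: they match on the call stack, and for compiler-inlined code the innermost frame can be any function in the program, so the only general pattern is a wildcard frame that risks hiding genuine errors.

```
# Hypothetical suppression entry. Matching fun:main only silences the
# report inside main(); since the inlined strcmp can appear in any
# function, the realistic alternative is a fun:* or "..." frame, which
# is far too broad.
{
   power-inlined-strcmp
   Memcheck:Cond
   fun:main
}
```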
From: <sv...@va...> - 2017-04-25 17:28:08
Author: petarj
Date: Tue Apr 25 18:28:01 2017
New Revision: 3356
Log:
mips: add missing assembler directive to ASM_VOLATILE_UNARY64
Clang is picky and notices we have not explicitly set up fp64 mode.
This fixes issue with Clang build.
Modified:
trunk/priv/guest_mips_helpers.c
Modified: trunk/priv/guest_mips_helpers.c
==============================================================================
--- trunk/priv/guest_mips_helpers.c (original)
+++ trunk/priv/guest_mips_helpers.c Tue Apr 25 18:28:01 2017
@@ -497,6 +497,7 @@
#define ASM_VOLATILE_UNARY64(inst) \
__asm__ volatile(".set push" "\n\t" \
".set hardfloat" "\n\t" \
+ ".set fp=64" "\n\t" \
"cfc1 $t0, $31" "\n\t" \
"ctc1 %2, $31" "\n\t" \
"ldc1 $f24, 0(%1)" "\n\t" \
From: <sv...@va...> - 2017-04-25 16:37:24
Author: petarj
Date: Tue Apr 25 17:37:16 2017
New Revision: 3355
Log:
mips: remove unnecessary code from FCSR_fp32 dirty helper
These cases should never happen. Removing the code.
Modified:
trunk/priv/guest_mips_helpers.c
Modified: trunk/priv/guest_mips_helpers.c
==============================================================================
--- trunk/priv/guest_mips_helpers.c (original)
+++ trunk/priv/guest_mips_helpers.c Tue Apr 25 17:37:16 2017
@@ -625,45 +625,6 @@
case ROUNDWS:
ASM_VOLATILE_UNARY32(round.w.s)
break;
-#if ((__mips == 32) && defined(__mips_isa_rev) && (__mips_isa_rev >= 2)) \
- || (__mips == 64)
- case CEILLS:
- ASM_VOLATILE_UNARY32(ceil.l.s)
- break;
- case CEILLD:
- ASM_VOLATILE_UNARY32_DOUBLE(ceil.l.d)
- break;
- case CVTDL:
- ASM_VOLATILE_UNARY32_DOUBLE(cvt.d.l)
- break;
- case CVTLS:
- ASM_VOLATILE_UNARY32(cvt.l.s)
- break;
- case CVTLD:
- ASM_VOLATILE_UNARY32_DOUBLE(cvt.l.d)
- break;
- case CVTSL:
- ASM_VOLATILE_UNARY32_DOUBLE(cvt.s.l)
- break;
- case FLOORLS:
- ASM_VOLATILE_UNARY32(floor.l.s)
- break;
- case FLOORLD:
- ASM_VOLATILE_UNARY32_DOUBLE(floor.l.d)
- break;
- case ROUNDLS:
- ASM_VOLATILE_UNARY32(round.l.s)
- break;
- case ROUNDLD:
- ASM_VOLATILE_UNARY32_DOUBLE(round.l.d)
- break;
- case TRUNCLS:
- ASM_VOLATILE_UNARY32(trunc.l.s)
- break;
- case TRUNCLD:
- ASM_VOLATILE_UNARY32_DOUBLE(trunc.l.d)
- break;
-#endif
case ADDS:
ASM_VOLATILE_BINARY32(add.s)
break;
From: <sv...@va...> - 2017-04-25 14:41:06
Author: petarj
Date: Tue Apr 25 15:40:54 2017
New Revision: 3354
Log:
mips: limit cvt.s.l instruction translation to fp_mode64
The documentation says:
"For CVT.S.L, the result of this instruction is UNPREDICTABLE if the
processor is executing in the FR=0 32-bit FPU register model; it is
predictable if executing on a 64-bit FPU in the FR=1 mode, but not with
FR=0, and not on a 32-bit FPU."
Hence the fix.
Modified:
trunk/priv/guest_mips_toIR.c
Modified: trunk/priv/guest_mips_toIR.c
==============================================================================
--- trunk/priv/guest_mips_toIR.c (original)
+++ trunk/priv/guest_mips_toIR.c Tue Apr 25 15:40:54 2017
@@ -13090,12 +13090,16 @@
case 0x15: /* L */
DIP("cvt.s.l %u, %u", fd, fs);
- calculateFCSR(fs, 0, CVTSL, False, 1);
- t0 = newTemp(Ity_I64);
- assign(t0, unop(Iop_ReinterpF64asI64, getFReg(fs)));
+ if (fp_mode64) {
+ calculateFCSR(fs, 0, CVTSL, False, 1);
+ t0 = newTemp(Ity_I64);
+ assign(t0, unop(Iop_ReinterpF64asI64, getFReg(fs)));
- putFReg(fd, mkWidenFromF32(tyF, binop(Iop_I64StoF32,
- get_IR_roundingmode(), mkexpr(t0))));
+ putFReg(fd, mkWidenFromF32(tyF, binop(Iop_I64StoF32,
+ get_IR_roundingmode(), mkexpr(t0))));
+ } else {
+ ILLEGAL_INSTRUCTON;
+ }
break;
default:
From: <sv...@va...> - 2017-04-25 13:52:28
Author: petarj
Date: Tue Apr 25 14:52:21 2017
New Revision: 16312
Log:
make helgrind/tc23_bogus_condwait test deterministic
Using properly initialized mutex (instead of a dirty one) in case
"mx is not locked" makes behavior of this test deterministic.
Patch by Tamara Vlahovic.
Modified:
trunk/helgrind/tests/tc23_bogus_condwait.c
Modified: trunk/helgrind/tests/tc23_bogus_condwait.c
==============================================================================
--- trunk/helgrind/tests/tc23_bogus_condwait.c (original)
+++ trunk/helgrind/tests/tc23_bogus_condwait.c Tue Apr 25 14:52:21 2017
@@ -69,7 +69,7 @@
r= pthread_cond_wait(&cv, (pthread_mutex_t*)(4 + (char*)&mx[0]) );
/* mx is not locked */
- r= pthread_cond_wait(&cv, &mx[0]);
+ r= pthread_cond_wait(&cv, &mx[3]);
/* wrong flavour of lock */
r= pthread_cond_wait(&cv, (pthread_mutex_t*)&rwl );
Author: iraisr
Date: Tue Apr 25 07:44:28 2017
New Revision: 16311
Log:
Valgrind reports INTERNAL ERROR in rt_sigsuspend syscall wrapper.
Fixes BZ#379094.
Modified:
trunk/NEWS
trunk/coregrind/m_syswrap/syswrap-linux.c
trunk/memcheck/tests/x86-linux/scalar.c
trunk/memcheck/tests/x86-linux/scalar.stderr.exp
Modified: trunk/NEWS
==============================================================================
--- trunk/NEWS (original)
+++ trunk/NEWS Tue Apr 25 07:44:28 2017
@@ -156,6 +156,7 @@
377930 fcntl syscall wrapper is missing flock structure check
378535 Valgrind reports INTERNAL ERROR in execve syscall wrapper
378673 Update libiberty demangler
+379094 Valgrind reports INTERNAL ERROR in rt_sigsuspend syscall wrapper
Release 3.12.0 (20 October 2016)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Modified: trunk/coregrind/m_syswrap/syswrap-linux.c
==============================================================================
--- trunk/coregrind/m_syswrap/syswrap-linux.c (original)
+++ trunk/coregrind/m_syswrap/syswrap-linux.c Tue Apr 25 07:44:28 2017
@@ -3995,12 +3995,16 @@
PRE_REG_READ2(int, "rt_sigsuspend", vki_sigset_t *, mask, vki_size_t, size)
if (ARG1 != (Addr)NULL) {
PRE_MEM_READ( "rt_sigsuspend(mask)", ARG1, sizeof(vki_sigset_t) );
- VG_(sigdelset)((vki_sigset_t*)ARG1, VG_SIGVGKILL);
- /* We cannot mask VG_SIGVGKILL, as otherwise this thread would not
- be killable by VG_(nuke_all_threads_except).
- We thus silently ignore the user request to mask this signal.
- Note that this is similar to what is done for e.g.
- sigprocmask (see m_signals.c calculate_SKSS_from_SCSS). */
+ if (ML_(safe_to_deref)((vki_sigset_t *) ARG1, sizeof(vki_sigset_t))) {
+ VG_(sigdelset)((vki_sigset_t *) ARG1, VG_SIGVGKILL);
+ /* We cannot mask VG_SIGVGKILL, as otherwise this thread would not
+ be killable by VG_(nuke_all_threads_except).
+ We thus silently ignore the user request to mask this signal.
+ Note that this is similar to what is done for e.g.
+ sigprocmask (see m_signals.c calculate_SKSS_from_SCSS). */
+ } else {
+ SET_STATUS_Failure(VKI_EFAULT);
+ }
}
}
Modified: trunk/memcheck/tests/x86-linux/scalar.c
==============================================================================
--- trunk/memcheck/tests/x86-linux/scalar.c (original)
+++ trunk/memcheck/tests/x86-linux/scalar.c Tue Apr 25 07:44:28 2017
@@ -800,8 +800,8 @@
SY(__NR_rt_sigqueueinfo, x0, x0+1, x0); FAIL;
// __NR_rt_sigsuspend 179
- GO(__NR_rt_sigsuspend, "ignore");
- // (I don't know how to test this...)
+ GO(__NR_rt_sigsuspend, "2s 1m");
+ SY(__NR_rt_sigsuspend, x0 + 1, x0 + sizeof(sigset_t)); FAILx(EFAULT);
// __NR_pread64 180
GO(__NR_pread64, "5s 1m");
Modified: trunk/memcheck/tests/x86-linux/scalar.stderr.exp
==============================================================================
--- trunk/memcheck/tests/x86-linux/scalar.stderr.exp (original)
+++ trunk/memcheck/tests/x86-linux/scalar.stderr.exp Tue Apr 25 07:44:28 2017
@@ -2343,8 +2343,21 @@
Address 0x........ is not stack'd, malloc'd or (recently) free'd
-----------------------------------------------------
-179: __NR_rt_sigsuspend ignore
+179: __NR_rt_sigsuspend 2s 1m
-----------------------------------------------------
+Syscall param rt_sigsuspend(mask) contains uninitialised byte(s)
+ ...
+ by 0x........: main (scalar.c:804)
+
+Syscall param rt_sigsuspend(size) contains uninitialised byte(s)
+ ...
+ by 0x........: main (scalar.c:804)
+
+Syscall param rt_sigsuspend(mask) points to unaddressable byte(s)
+ ...
+ by 0x........: main (scalar.c:804)
+ Address 0x........ is not stack'd, malloc'd or (recently) free'd
+
-----------------------------------------------------
180: __NR_pread64 5s 1m
-----------------------------------------------------
From: Ivo R. <iv...@iv...> - 2017-04-24 20:36:00
2017-04-24 22:03 GMT+02:00 Matthias Schwarzott <zz...@ge...>:
> Am 24.04.2017 um 17:00 schrieb Ivo Raisr:
>> Any comments or objections to patch v2 for bug 379039?
>> https://bugs.kde.org/show_bug.cgi?id=379039
>>
>> I.
>
> Hi!
>
> The code seems to work, but the len variable does not mean length of the
> string, so it could be misleading.
>
> Additionally I am not sure if the function POST(sys_prctl) must also be
> modified.

Hi Matthias,

Thank you for your comments. You are right, I had to modify POST(sys_prctl)
so it takes into account that ARG2 might not need to be nul-terminated.

> The test memcheck/tests/threadname.c maybe needs more cases:
>
> * Set threadname to a long string and check that only the first 15
> characters are printed as threadname for the next error.

Why? We do not want to do functional testing of libpthread or prctl syscall.

> * If possible a test that proves, that POST(sys_prctl) does not access
> memory after byte 16 (but I do not know how to test this).

That would be appropriate for prctl(get-name) case. Different story.

I will attach new patch shortly.
I.
From: Matthias S. <zz...@ge...> - 2017-04-24 20:20:56
Am 24.04.2017 um 22:03 schrieb Matthias Schwarzott:
> Am 24.04.2017 um 17:00 schrieb Ivo Raisr:
>> Any comments or objections to patch v2 for bug 379039?
>> https://bugs.kde.org/show_bug.cgi?id=379039
>>
>> I.
>
> Hi!
>
> The code seems to work, but the len variable does not mean length of the
> string, so it could be misleading.
>
> Additionally I am not sure if the function POST(sys_prctl) must also be
> modified.
>
> The test memcheck/tests/threadname.c maybe needs more cases:
>
> * Set threadname to a long string and check that only the first 15
> characters are printed as threadname for the next error.

I forgot to mention that for this test prctl must be called directly
instead of pthread_setname_np as in the existing cases.
pthread_setname_np fails with ERANGE without calling prctl.

> * If possible a test that proves, that POST(sys_prctl) does not access
> memory after byte 16 (but I do not know how to test this).
>
> Regards
> Matthias
From: Matthias S. <zz...@ge...> - 2017-04-24 20:04:00
Am 24.04.2017 um 17:00 schrieb Ivo Raisr:
> Any comments or objections to patch v2 for bug 379039?
> https://bugs.kde.org/show_bug.cgi?id=379039
>
> I.

Hi!

The code seems to work, but the len variable does not mean length of the
string, so it could be misleading.

Additionally I am not sure if the function POST(sys_prctl) must also be
modified.

The test memcheck/tests/threadname.c maybe needs more cases:

* Set threadname to a long string and check that only the first 15
  characters are printed as threadname for the next error.

* If possible a test that proves, that POST(sys_prctl) does not access
  memory after byte 16 (but I do not know how to test this).

Regards
Matthias
From: Petar J. <mip...@gm...> - 2017-04-24 17:50:24
On Sun, Apr 23, 2017 at 3:57 PM, 网络尖兵 <net...@fo...> wrote:
> The Valgrind didn't reported any errors in my program. That is why I asked

Have you run all tools that come with Valgrind? Try different tools.
Different timing or network-related events could also be the cause of the
behaviour you are seeing when your program is run under Valgrind.
From: Ivo R. <iv...@iv...> - 2017-04-24 15:01:56
Any comments or objections to patch attached to this bug?
https://bugs.kde.org/show_bug.cgi?id=379094

I.
From: Ivo R. <iv...@iv...> - 2017-04-24 15:00:51
Any comments or objections to patch v2 for bug 379039?
https://bugs.kde.org/show_bug.cgi?id=379039

I.
From: <sv...@va...> - 2017-04-24 13:33:24
Author: petarj
Date: Mon Apr 24 14:33:17 2017
New Revision: 16310
Log:
mips: fix build breakage introduced in r16309
Change archinfo->hwcaps to vex_archinfo.hwcaps.
Fixes build breakage.
Modified:
trunk/coregrind/m_translate.c
Modified: trunk/coregrind/m_translate.c
==============================================================================
--- trunk/coregrind/m_translate.c (original)
+++ trunk/coregrind/m_translate.c Mon Apr 24 14:33:17 2017
@@ -1696,7 +1696,7 @@
/* Compute guest__use_fallback_LLSC, overiding any settings of
VG_(clo_fallback_llsc) that we know would cause the guest to
fail (loop). */
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(vex_archinfo.hwcaps) == VEX_PRID_COMP_CAVIUM) {
/* We must use the fallback scheme. */
vex_abiinfo.guest__use_fallback_LLSC = True;
} else {
From: <sv...@va...> - 2017-04-24 10:57:14
Author: sewardj
Date: Mon Apr 24 11:57:05 2017
New Revision: 3353
Log:
widen_z_16_to_64, widen_z_8_to_64: generate less stupid code.
Modified:
trunk/priv/host_arm64_isel.c
Modified: trunk/priv/host_arm64_isel.c
==============================================================================
--- trunk/priv/host_arm64_isel.c (original)
+++ trunk/priv/host_arm64_isel.c Mon Apr 24 11:57:05 2017
@@ -298,10 +298,9 @@
a new register, and return the new register. */
static HReg widen_z_16_to_64 ( ISelEnv* env, HReg src )
{
- HReg dst = newVRegI(env);
- ARM64RI6* n48 = ARM64RI6_I6(48);
- addInstr(env, ARM64Instr_Shift(dst, src, n48, ARM64sh_SHL));
- addInstr(env, ARM64Instr_Shift(dst, dst, n48, ARM64sh_SHR));
+ HReg dst = newVRegI(env);
+ ARM64RIL* mask = ARM64RIL_I13(1, 0, 15); /* encodes 0xFFFF */
+ addInstr(env, ARM64Instr_Logic(dst, src, mask, ARM64lo_AND));
return dst;
}
@@ -329,10 +328,9 @@
static HReg widen_z_8_to_64 ( ISelEnv* env, HReg src )
{
- HReg dst = newVRegI(env);
- ARM64RI6* n56 = ARM64RI6_I6(56);
- addInstr(env, ARM64Instr_Shift(dst, src, n56, ARM64sh_SHL));
- addInstr(env, ARM64Instr_Shift(dst, dst, n56, ARM64sh_SHR));
+ HReg dst = newVRegI(env);
+ ARM64RIL* mask = ARM64RIL_I13(1, 0, 7); /* encodes 0xFF */
+ addInstr(env, ARM64Instr_Logic(dst, src, mask, ARM64lo_AND));
return dst;
}
Author: sewardj
Date: Mon Apr 24 10:24:57 2017
New Revision: 16309
Log:
Bug 369459 - valgrind on arm64 violates the ARMv8 spec (ldxr/stxr)
This implements a fallback LL/SC implementation as described in bug 344524.
Valgrind side changes:
* Command line plumbing for --sim-hints=fallback-llsc
* memcheck: handle new arm64 guest state in memcheck/mc_machine.c
Modified:
trunk/coregrind/m_main.c
trunk/coregrind/m_scheduler/scheduler.c
trunk/coregrind/m_translate.c
trunk/coregrind/pub_core_options.h
trunk/memcheck/mc_machine.c
trunk/none/tests/cmdline1.stdout.exp
trunk/none/tests/cmdline2.stdout.exp
Modified: trunk/coregrind/m_main.c
==============================================================================
--- trunk/coregrind/m_main.c (original)
+++ trunk/coregrind/m_main.c Mon Apr 24 10:24:57 2017
@@ -187,7 +187,7 @@
" --sim-hints=hint1,hint2,... activate unusual sim behaviours [none] \n"
" where hint is one of:\n"
" lax-ioctls lax-doors fuse-compatible enable-outer\n"
-" no-inner-prefix no-nptl-pthread-stackcache none\n"
+" no-inner-prefix no-nptl-pthread-stackcache fallback-llsc none\n"
" --fair-sched=no|yes|try schedule threads fairly on multicore systems [no]\n"
" --kernel-variant=variant1,variant2,...\n"
" handle non-standard kernel variants [none]\n"
@@ -417,7 +417,7 @@
else if VG_USETX_CLO (str, "--sim-hints",
"lax-ioctls,lax-doors,fuse-compatible,"
"enable-outer,no-inner-prefix,"
- "no-nptl-pthread-stackcache",
+ "no-nptl-pthread-stackcache,fallback-llsc",
VG_(clo_sim_hints)) {}
}
Modified: trunk/coregrind/m_scheduler/scheduler.c
==============================================================================
--- trunk/coregrind/m_scheduler/scheduler.c (original)
+++ trunk/coregrind/m_scheduler/scheduler.c Mon Apr 24 10:24:57 2017
@@ -925,6 +925,14 @@
tst->arch.vex.host_EvC_FAILADDR
= (HWord)VG_(fnptr_to_fnentry)( &VG_(disp_cp_evcheck_fail) );
+ /* Invalidate any in-flight LL/SC transactions, in the case that we're
+ using the fallback LL/SC implementation. See bugs 344524 and 369459. */
+# if defined(VGP_mips32_linux) || defined(VGP_mips64_linux)
+ tst->arch.vex.guest_LLaddr = (HWord)(-1);
+# elif defined(VGP_arm64_linux)
+ tst->arch.vex.guest_LLSC_SIZE = 0;
+# endif
+
if (0) {
vki_sigset_t m;
Int i, err = VG_(sigprocmask)(VKI_SIG_SETMASK, NULL, &m);
@@ -957,10 +965,6 @@
vg_assert(VG_(in_generated_code) == True);
VG_(in_generated_code) = False;
-#if defined(VGA_mips32) || defined(VGA_mips64)
- tst->arch.vex.guest_LLaddr = (HWord)(-1);
-#endif
-
if (jumped != (HWord)0) {
/* We get here if the client took a fault that caused our signal
handler to longjmp. */
Modified: trunk/coregrind/m_translate.c
==============================================================================
--- trunk/coregrind/m_translate.c (original)
+++ trunk/coregrind/m_translate.c Mon Apr 24 10:24:57 2017
@@ -1663,30 +1663,51 @@
vex_abiinfo.guest_amd64_assume_fs_is_const = True;
vex_abiinfo.guest_amd64_assume_gs_is_const = True;
# endif
+
# if defined(VGP_amd64_darwin)
vex_abiinfo.guest_amd64_assume_gs_is_const = True;
# endif
+
+# if defined(VGP_amd64_solaris)
+ vex_abiinfo.guest_amd64_assume_fs_is_const = True;
+# endif
+
# if defined(VGP_ppc32_linux)
vex_abiinfo.guest_ppc_zap_RZ_at_blr = False;
vex_abiinfo.guest_ppc_zap_RZ_at_bl = NULL;
# endif
+
# if defined(VGP_ppc64be_linux)
vex_abiinfo.guest_ppc_zap_RZ_at_blr = True;
vex_abiinfo.guest_ppc_zap_RZ_at_bl = const_True;
vex_abiinfo.host_ppc_calls_use_fndescrs = True;
# endif
+
# if defined(VGP_ppc64le_linux)
vex_abiinfo.guest_ppc_zap_RZ_at_blr = True;
vex_abiinfo.guest_ppc_zap_RZ_at_bl = const_True;
vex_abiinfo.host_ppc_calls_use_fndescrs = False;
# endif
-# if defined(VGP_amd64_solaris)
- vex_abiinfo.guest_amd64_assume_fs_is_const = True;
-# endif
+
# if defined(VGP_mips32_linux) || defined(VGP_mips64_linux)
ThreadArchState* arch = &VG_(threads)[tid].arch;
vex_abiinfo.guest_mips_fp_mode64 =
!!(arch->vex.guest_CP0_status & MIPS_CP0_STATUS_FR);
+ /* Compute guest__use_fallback_LLSC, overiding any settings of
+ VG_(clo_fallback_llsc) that we know would cause the guest to
+ fail (loop). */
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ /* We must use the fallback scheme. */
+ vex_abiinfo.guest__use_fallback_LLSC = True;
+ } else {
+ vex_abiinfo.guest__use_fallback_LLSC
+ = SimHintiS(SimHint_fallback_llsc, VG_(clo_sim_hints));
+ }
+# endif
+
+# if defined(VGP_arm64_linux)
+ vex_abiinfo.guest__use_fallback_LLSC
+ = SimHintiS(SimHint_fallback_llsc, VG_(clo_sim_hints));
# endif
/* Set up closure args. */
Modified: trunk/coregrind/pub_core_options.h
==============================================================================
--- trunk/coregrind/pub_core_options.h (original)
+++ trunk/coregrind/pub_core_options.h Mon Apr 24 10:24:57 2017
@@ -222,14 +222,15 @@
SimHint_fuse_compatible,
SimHint_enable_outer,
SimHint_no_inner_prefix,
- SimHint_no_nptl_pthread_stackcache
+ SimHint_no_nptl_pthread_stackcache,
+ SimHint_fallback_llsc
}
SimHint;
// Build mask to check or set SimHint a membership
#define SimHint2S(a) (1 << (a))
// SimHint h is member of the Set s ?
-#define SimHintiS(h,s) ((s) & SimHint2S(h))
+#define SimHintiS(h,s) (((s) & SimHint2S(h)) != 0)
extern UInt VG_(clo_sim_hints);
/* Show symbols in the form 'name+offset' ? Default: NO */
Modified: trunk/memcheck/mc_machine.c
==============================================================================
--- trunk/memcheck/mc_machine.c (original)
+++ trunk/memcheck/mc_machine.c Mon Apr 24 10:24:57 2017
@@ -1040,6 +1040,10 @@
if (o == GOF(CMSTART) && sz == 8) return -1; // untracked
if (o == GOF(CMLEN) && sz == 8) return -1; // untracked
+ if (o == GOF(LLSC_SIZE) && sz == 8) return -1; // untracked
+ if (o == GOF(LLSC_ADDR) && sz == 8) return o;
+ if (o == GOF(LLSC_DATA) && sz == 8) return o;
+
VG_(printf)("MC_(get_otrack_shadow_offset)(arm64)(off=%d,sz=%d)\n",
offset,szB);
tl_assert(0);
Modified: trunk/none/tests/cmdline1.stdout.exp
==============================================================================
--- trunk/none/tests/cmdline1.stdout.exp (original)
+++ trunk/none/tests/cmdline1.stdout.exp Mon Apr 24 10:24:57 2017
@@ -101,7 +101,7 @@
--sim-hints=hint1,hint2,... activate unusual sim behaviours [none]
where hint is one of:
lax-ioctls lax-doors fuse-compatible enable-outer
- no-inner-prefix no-nptl-pthread-stackcache none
+ no-inner-prefix no-nptl-pthread-stackcache fallback-llsc none
--fair-sched=no|yes|try schedule threads fairly on multicore systems [no]
--kernel-variant=variant1,variant2,...
handle non-standard kernel variants [none]
Modified: trunk/none/tests/cmdline2.stdout.exp
==============================================================================
--- trunk/none/tests/cmdline2.stdout.exp (original)
+++ trunk/none/tests/cmdline2.stdout.exp Mon Apr 24 10:24:57 2017
@@ -101,7 +101,7 @@
--sim-hints=hint1,hint2,... activate unusual sim behaviours [none]
where hint is one of:
lax-ioctls lax-doors fuse-compatible enable-outer
- no-inner-prefix no-nptl-pthread-stackcache none
+ no-inner-prefix no-nptl-pthread-stackcache fallback-llsc none
--fair-sched=no|yes|try schedule threads fairly on multicore systems [no]
--kernel-variant=variant1,variant2,...
handle non-standard kernel variants [none]
Author: sewardj
Date: Mon Apr 24 10:23:43 2017
New Revision: 3352
Log:
Bug 369459 - valgrind on arm64 violates the ARMv8 spec (ldxr/stxr)
This implements a fallback LL/SC implementation as described in bug 344524.
The fallback implementation is not enabled by default, and there is no
auto-detection for when it should be used. To use it, run with the
flag --sim-hints=fallback-llsc. This commit also allows the existing
MIPS fallback implementation to be enabled with that flag.
VEX side changes:
* priv/main_main.c, pub/libvex.h
Adds new field guest__use_fallback_LLSC to VexAbiInfo
* pub/libvex_guest_arm64.h priv/guest_arm64_toIR.c
add front end support, new guest state fields
guest_LLSC_{SIZE,ADDR,DATA}, also documentation of the scheme
* priv/guest_mips_toIR.c
allow manual selection of fallback implementation via
--sim-hints=fallback-llsc
* priv/host_arm64_defs.c priv/host_arm64_defs.h priv/host_arm64_isel.c
Add support for generating CAS on arm64, as needed by the front end changes
Modified:
trunk/priv/guest_arm64_toIR.c
trunk/priv/guest_mips_toIR.c
trunk/priv/host_arm64_defs.c
trunk/priv/host_arm64_defs.h
trunk/priv/host_arm64_isel.c
trunk/priv/main_main.c
trunk/pub/libvex.h
trunk/pub/libvex_guest_arm64.h
Modified: trunk/priv/guest_arm64_toIR.c
==============================================================================
--- trunk/priv/guest_arm64_toIR.c (original)
+++ trunk/priv/guest_arm64_toIR.c Mon Apr 24 10:23:43 2017
@@ -1147,6 +1147,10 @@
#define OFFB_CMSTART offsetof(VexGuestARM64State,guest_CMSTART)
#define OFFB_CMLEN offsetof(VexGuestARM64State,guest_CMLEN)
+#define OFFB_LLSC_SIZE offsetof(VexGuestARM64State,guest_LLSC_SIZE)
+#define OFFB_LLSC_ADDR offsetof(VexGuestARM64State,guest_LLSC_ADDR)
+#define OFFB_LLSC_DATA offsetof(VexGuestARM64State,guest_LLSC_DATA)
+
/* ---------------- Integer registers ---------------- */
@@ -4702,7 +4706,9 @@
static
-Bool dis_ARM64_load_store(/*MB_OUT*/DisResult* dres, UInt insn)
+Bool dis_ARM64_load_store(/*MB_OUT*/DisResult* dres, UInt insn,
+ const VexAbiInfo* abiinfo
+)
{
# define INSN(_bMax,_bMin) SLICE_UInt(insn, (_bMax), (_bMin))
@@ -6457,6 +6463,32 @@
sz 001000 000 s 0 11111 n t STX{R,RH,RB} Ws, Rt, [Xn|SP]
sz 001000 000 s 1 11111 n t STLX{R,RH,RB} Ws, Rt, [Xn|SP]
*/
+ /* For the "standard" implementation we pass through the LL and SC to
+ the host. For the "fallback" implementation, for details see
+ https://bugs.kde.org/show_bug.cgi?id=344524 and
+ https://bugs.kde.org/show_bug.cgi?id=369459,
+ but in short:
+
+ LoadLinked(addr)
+ gs.LLsize = load_size // 1, 2, 4 or 8
+ gs.LLaddr = addr
+ gs.LLdata = zeroExtend(*addr)
+
+ StoreCond(addr, data)
+ tmp_LLsize = gs.LLsize
+ gs.LLsize = 0 // "no transaction"
+ if tmp_LLsize != store_size -> fail
+ if addr != gs.LLaddr -> fail
+ if zeroExtend(*addr) != gs.LLdata -> fail
+ cas_ok = CAS(store_size, addr, gs.LLdata -> data)
+ if !cas_ok -> fail
+ succeed
+
+ When thread scheduled
+ gs.LLsize = 0 // "no transaction"
+ (coregrind/m_scheduler/scheduler.c, run_thread_for_a_while()
+ has to do this bit)
+ */
if (INSN(29,23) == BITS7(0,0,1,0,0,0,0)
&& (INSN(23,21) & BITS3(1,0,1)) == BITS3(0,0,0)
&& INSN(14,10) == BITS5(1,1,1,1,1)) {
@@ -6478,29 +6510,99 @@
if (isLD && ss == BITS5(1,1,1,1,1)) {
IRTemp res = newTemp(ty);
- stmt(IRStmt_LLSC(Iend_LE, res, mkexpr(ea), NULL/*LL*/));
- putIReg64orZR(tt, widenUto64(ty, mkexpr(res)));
+ if (abiinfo->guest__use_fallback_LLSC) {
+ // Do the load first so we don't update any guest state
+ // if it faults.
+ IRTemp loaded_data64 = newTemp(Ity_I64);
+ assign(loaded_data64, widenUto64(ty, loadLE(ty, mkexpr(ea))));
+ stmt( IRStmt_Put( OFFB_LLSC_DATA, mkexpr(loaded_data64) ));
+ stmt( IRStmt_Put( OFFB_LLSC_ADDR, mkexpr(ea) ));
+ stmt( IRStmt_Put( OFFB_LLSC_SIZE, mkU64(szB) ));
+ putIReg64orZR(tt, mkexpr(loaded_data64));
+ } else {
+ stmt(IRStmt_LLSC(Iend_LE, res, mkexpr(ea), NULL/*LL*/));
+ putIReg64orZR(tt, widenUto64(ty, mkexpr(res)));
+ }
if (isAcqOrRel) {
stmt(IRStmt_MBE(Imbe_Fence));
}
- DIP("ld%sx%s %s, [%s]\n", isAcqOrRel ? "a" : "", suffix[szBlg2],
- nameIRegOrZR(szB == 8, tt), nameIReg64orSP(nn));
+ DIP("ld%sx%s %s, [%s] %s\n", isAcqOrRel ? "a" : "", suffix[szBlg2],
+ nameIRegOrZR(szB == 8, tt), nameIReg64orSP(nn),
+ abiinfo->guest__use_fallback_LLSC
+ ? "(fallback implementation)" : "");
return True;
}
if (!isLD) {
if (isAcqOrRel) {
stmt(IRStmt_MBE(Imbe_Fence));
}
- IRTemp res = newTemp(Ity_I1);
IRExpr* data = narrowFrom64(ty, getIReg64orZR(tt));
- stmt(IRStmt_LLSC(Iend_LE, res, mkexpr(ea), data));
- /* IR semantics: res is 1 if store succeeds, 0 if it fails.
- Need to set rS to 1 on failure, 0 on success. */
- putIReg64orZR(ss, binop(Iop_Xor64, unop(Iop_1Uto64, mkexpr(res)),
- mkU64(1)));
- DIP("st%sx%s %s, %s, [%s]\n", isAcqOrRel ? "a" : "", suffix[szBlg2],
+ if (abiinfo->guest__use_fallback_LLSC) {
+ // This is really ugly, since we don't have any way to do
+ // proper if-then-else. First, set up as if the SC failed,
+ // and jump forwards if it really has failed.
+
+ // Continuation address
+ IRConst* nia = IRConst_U64(guest_PC_curr_instr + 4);
+
+ // "the SC failed". Any non-zero value means failure.
+ putIReg64orZR(ss, mkU64(1));
+
+ IRTemp tmp_LLsize = newTemp(Ity_I64);
+ assign(tmp_LLsize, IRExpr_Get(OFFB_LLSC_SIZE, Ity_I64));
+ stmt( IRStmt_Put( OFFB_LLSC_SIZE, mkU64(0) // "no transaction"
+ ));
+ // Fail if no or wrong-size transaction
+ vassert(szB == 8 || szB == 4 || szB == 2 || szB == 1);
+ stmt( IRStmt_Exit(
+ binop(Iop_CmpNE64, mkexpr(tmp_LLsize), mkU64(szB)),
+ Ijk_Boring, nia, OFFB_PC
+ ));
+ // Fail if the address doesn't match the LL address
+ stmt( IRStmt_Exit(
+ binop(Iop_CmpNE64, mkexpr(ea),
+ IRExpr_Get(OFFB_LLSC_ADDR, Ity_I64)),
+ Ijk_Boring, nia, OFFB_PC
+ ));
+ // Fail if the data doesn't match the LL data
+ IRTemp llsc_data64 = newTemp(Ity_I64);
+ assign(llsc_data64, IRExpr_Get(OFFB_LLSC_DATA, Ity_I64));
+ stmt( IRStmt_Exit(
+ binop(Iop_CmpNE64, widenUto64(ty, loadLE(ty, mkexpr(ea))),
+ mkexpr(llsc_data64)),
+ Ijk_Boring, nia, OFFB_PC
+ ));
+ // Try to CAS the new value in.
+ IRTemp old = newTemp(ty);
+ IRTemp expd = newTemp(ty);
+ assign(expd, narrowFrom64(ty, mkexpr(llsc_data64)));
+ stmt( IRStmt_CAS(mkIRCAS(/*oldHi*/IRTemp_INVALID, old,
+ Iend_LE, mkexpr(ea),
+ /*expdHi*/NULL, mkexpr(expd),
+ /*dataHi*/NULL, data
+ )));
+ // Fail if the CAS failed (viz, old != expd)
+ stmt( IRStmt_Exit(
+ binop(Iop_CmpNE64,
+ widenUto64(ty, mkexpr(old)),
+ widenUto64(ty, mkexpr(expd))),
+ Ijk_Boring, nia, OFFB_PC
+ ));
+ // Otherwise we succeeded (!)
+ putIReg64orZR(ss, mkU64(0));
+ } else {
+ IRTemp res = newTemp(Ity_I1);
+ stmt(IRStmt_LLSC(Iend_LE, res, mkexpr(ea), data));
+ /* IR semantics: res is 1 if store succeeds, 0 if it fails.
+ Need to set rS to 1 on failure, 0 on success. */
+ putIReg64orZR(ss, binop(Iop_Xor64, unop(Iop_1Uto64, mkexpr(res)),
+ mkU64(1)));
+ }
+ DIP("st%sx%s %s, %s, [%s] %s\n", isAcqOrRel ? "a" : "", suffix[szBlg2],
nameIRegOrZR(False, ss),
- nameIRegOrZR(szB == 8, tt), nameIReg64orSP(nn));
+ nameIRegOrZR(szB == 8, tt), nameIReg64orSP(nn),
+ abiinfo->guest__use_fallback_LLSC
+ ? "(fallback implementation)" : "");
return True;
}
/* else fall through */
@@ -6589,7 +6691,8 @@
static
Bool dis_ARM64_branch_etc(/*MB_OUT*/DisResult* dres, UInt insn,
- const VexArchInfo* archinfo)
+ const VexArchInfo* archinfo,
+ const VexAbiInfo* abiinfo)
{
# define INSN(_bMax,_bMin) SLICE_UInt(insn, (_bMax), (_bMin))
@@ -7048,7 +7151,11 @@
/* AFAICS, this simply cancels a (all?) reservations made by a
(any?) preceding LDREX(es). Arrange to hand it through to
the back end. */
- stmt( IRStmt_MBE(Imbe_CancelReservation) );
+ if (abiinfo->guest__use_fallback_LLSC) {
+ stmt( IRStmt_Put( OFFB_LLSC_SIZE, mkU64(0) )); // "no transaction"
+ } else {
+ stmt( IRStmt_MBE(Imbe_CancelReservation) );
+ }
DIP("clrex #%u\n", mm);
return True;
}
@@ -14411,12 +14518,12 @@
break;
case BITS4(1,0,1,0): case BITS4(1,0,1,1):
// Branch, exception generation and system instructions
- ok = dis_ARM64_branch_etc(dres, insn, archinfo);
+ ok = dis_ARM64_branch_etc(dres, insn, archinfo, abiinfo);
break;
case BITS4(0,1,0,0): case BITS4(0,1,1,0):
case BITS4(1,1,0,0): case BITS4(1,1,1,0):
// Loads and stores
- ok = dis_ARM64_load_store(dres, insn);
+ ok = dis_ARM64_load_store(dres, insn, abiinfo);
break;
case BITS4(0,1,0,1): case BITS4(1,1,0,1):
// Data processing - register
Modified: trunk/priv/guest_mips_toIR.c
==============================================================================
--- trunk/priv/guest_mips_toIR.c (original)
+++ trunk/priv/guest_mips_toIR.c Mon Apr 24 10:23:43 2017
@@ -16984,7 +16984,8 @@
case 0x30: /* LL */
DIP("ll r%u, %u(r%u)", rt, imm, rs);
LOAD_STORE_PATTERN;
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM
+ || abiinfo->guest__use_fallback_LLSC) {
t2 = newTemp(ty);
assign(t2, mkWidenFrom32(ty, load(Ity_I32, mkexpr(t1)), True));
putLLaddr(mkexpr(t1));
@@ -17002,7 +17003,8 @@
if (mode64) {
LOAD_STORE_PATTERN;
t2 = newTemp(Ity_I64);
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM
+ || abiinfo->guest__use_fallback_LLSC) {
assign(t2, load(Ity_I64, mkexpr(t1)));
putLLaddr(mkexpr(t1));
putLLdata(mkexpr(t2));
@@ -17019,7 +17021,8 @@
DIP("sc r%u, %u(r%u)", rt, imm, rs);
t2 = newTemp(Ity_I1);
LOAD_STORE_PATTERN;
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM
+ || abiinfo->guest__use_fallback_LLSC) {
t3 = newTemp(Ity_I32);
assign(t2, binop(mode64 ? Iop_CmpNE64 : Iop_CmpNE32,
mkexpr(t1), getLLaddr()));
@@ -17053,7 +17056,8 @@
if (mode64) {
t2 = newTemp(Ity_I1);
LOAD_STORE_PATTERN;
- if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM) {
+ if (VEX_MIPS_COMP_ID(archinfo->hwcaps) == VEX_PRID_COMP_CAVIUM
+ || abiinfo->guest__use_fallback_LLSC) {
t3 = newTemp(Ity_I64);
assign(t2, binop(Iop_CmpNE64, mkexpr(t1), getLLaddr()));
assign(t3, getIReg(rt));
Modified: trunk/priv/host_arm64_defs.c
==============================================================================
--- trunk/priv/host_arm64_defs.c (original)
+++ trunk/priv/host_arm64_defs.c Mon Apr 24 10:23:43 2017
@@ -1005,6 +1005,13 @@
vassert(szB == 8 || szB == 4 || szB == 2 || szB == 1);
return i;
}
+ARM64Instr* ARM64Instr_CAS ( Int szB ) {
+ ARM64Instr* i = LibVEX_Alloc_inline(sizeof(ARM64Instr));
+ i->tag = ARM64in_CAS;
+ i->ARM64in.CAS.szB = szB;
+ vassert(szB == 8 || szB == 4 || szB == 2 || szB == 1);
+ return i;
+}
ARM64Instr* ARM64Instr_MFence ( void ) {
ARM64Instr* i = LibVEX_Alloc_inline(sizeof(ARM64Instr));
i->tag = ARM64in_MFence;
@@ -1569,6 +1576,10 @@
sz, i->ARM64in.StrEX.szB == 8 ? 'x' : 'w');
return;
}
+ case ARM64in_CAS: {
+ vex_printf("x1 = cas(%dbit)(x3, x5 -> x7)", 8 * i->ARM64in.CAS.szB);
+ return;
+ }
case ARM64in_MFence:
vex_printf("(mfence) dsb sy; dmb sy; isb");
return;
@@ -2064,6 +2075,14 @@
addHRegUse(u, HRmWrite, hregARM64_X0());
addHRegUse(u, HRmRead, hregARM64_X2());
return;
+ case ARM64in_CAS:
+ addHRegUse(u, HRmRead, hregARM64_X3());
+ addHRegUse(u, HRmRead, hregARM64_X5());
+ addHRegUse(u, HRmRead, hregARM64_X7());
+ addHRegUse(u, HRmWrite, hregARM64_X1());
+ /* Pointless to state this since X8 is not available to RA. */
+ addHRegUse(u, HRmWrite, hregARM64_X8());
+ break;
case ARM64in_MFence:
return;
case ARM64in_ClrEX:
@@ -2326,6 +2345,8 @@
return;
case ARM64in_StrEX:
return;
+ case ARM64in_CAS:
+ return;
case ARM64in_MFence:
return;
case ARM64in_ClrEX:
@@ -3803,6 +3824,61 @@
}
goto bad;
}
+ case ARM64in_CAS: {
+ /* This isn't simple. For an explanation see the comment in
+ host_arm64_defs.h on the definition of ARM64Instr case
+ CAS. */
+ /* Generate:
+ -- one of:
+ mov x8, x5 // AA0503E8
+ and x8, x5, #0xFFFFFFFF // 92407CA8
+ and x8, x5, #0xFFFF // 92403CA8
+ and x8, x5, #0xFF // 92401CA8
+
+ -- one of:
+ ldxr x1, [x3] // C85F7C61
+ ldxr w1, [x3] // 885F7C61
+ ldxrh w1, [x3] // 485F7C61
+ ldxrb w1, [x3] // 085F7C61
+
+ -- always:
+ cmp x1, x8 // EB08003F
+ bne out // 54000061
+
+ -- one of:
+ stxr w1, x7, [x3] // C8017C67
+ stxr w1, w7, [x3] // 88017C67
+ stxrh w1, w7, [x3] // 48017C67
+ stxrb w1, w7, [x3] // 08017C67
+
+ -- always:
+ eor x1, x5, x1 // CA0100A1
+ out:
+ */
+ switch (i->ARM64in.CAS.szB) {
+ case 8: *p++ = 0xAA0503E8; break;
+ case 4: *p++ = 0x92407CA8; break;
+ case 2: *p++ = 0x92403CA8; break;
+ case 1: *p++ = 0x92401CA8; break;
+ default: vassert(0);
+ }
+ switch (i->ARM64in.CAS.szB) {
+ case 8: *p++ = 0xC85F7C61; break;
+ case 4: *p++ = 0x885F7C61; break;
+ case 2: *p++ = 0x485F7C61; break;
+ case 1: *p++ = 0x085F7C61; break;
+ }
+ *p++ = 0xEB08003F;
+ *p++ = 0x54000061;
+ switch (i->ARM64in.CAS.szB) {
+ case 8: *p++ = 0xC8017C67; break;
+ case 4: *p++ = 0x88017C67; break;
+ case 2: *p++ = 0x48017C67; break;
+ case 1: *p++ = 0x08017C67; break;
+ }
+ *p++ = 0xCA0100A1;
+ goto done;
+ }
case ARM64in_MFence: {
*p++ = 0xD5033F9F; /* DSB sy */
*p++ = 0xD5033FBF; /* DMB sy */
Modified: trunk/priv/host_arm64_defs.h
==============================================================================
--- trunk/priv/host_arm64_defs.h (original)
+++ trunk/priv/host_arm64_defs.h Mon Apr 24 10:23:43 2017
@@ -481,6 +481,7 @@
ARM64in_Mul,
ARM64in_LdrEX,
ARM64in_StrEX,
+ ARM64in_CAS,
ARM64in_MFence,
ARM64in_ClrEX,
/* ARM64in_V*: scalar ops involving vector registers */
@@ -668,6 +669,32 @@
struct {
Int szB; /* 1, 2, 4 or 8 */
} StrEX;
+ /* x1 = CAS(x3(addr), x5(expected) -> x7(new)),
+ where x1[8*szB-1 : 0] == x5[8*szB-1 : 0] indicates success,
+ x1[8*szB-1 : 0] != x5[8*szB-1 : 0] indicates failure.
+ Uses x8 as scratch (but that's not allocatable).
+ Hence: RD x3, x5, x7; WR x1
+
+ (szB=8) mov x8, x5
+ (szB=4) and x8, x5, #0xFFFFFFFF
+ (szB=2) and x8, x5, #0xFFFF
+ (szB=1) and x8, x5, #0xFF
+ -- x8 is correctly zero-extended expected value
+ ldxr x1, [x3]
+ -- x1 is correctly zero-extended actual value
+ cmp x1, x8
+ bne after
+ -- if branch taken, failure; x1[8*szB-1 : 0] holds old value
+ -- attempt to store
+ stxr w1, x7, [x3]
+ -- if store successful, x1==0, so the eor is "x1 := x5"
+ -- if store failed, x1==1, so the eor makes x1 != x5
+ eor x1, x5, x1
+ after:
+ */
+ struct {
+ Int szB; /* 1, 2, 4 or 8 */
+ } CAS;
/* Mem fence. An insn which fences all loads and stores as
much as possible before continuing. On ARM64 we emit the
sequence "dsb sy ; dmb sy ; isb sy", which is probably
@@ -912,6 +939,7 @@
ARM64MulOp op );
extern ARM64Instr* ARM64Instr_LdrEX ( Int szB );
extern ARM64Instr* ARM64Instr_StrEX ( Int szB );
+extern ARM64Instr* ARM64Instr_CAS ( Int szB );
extern ARM64Instr* ARM64Instr_MFence ( void );
extern ARM64Instr* ARM64Instr_ClrEX ( void );
extern ARM64Instr* ARM64Instr_VLdStH ( Bool isLoad, HReg sD, HReg rN,
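The success encoding documented in the CAS comment above (x1 equals x5 on success and differs from it on failure, thanks to the final `eor x1, x5, x1`) can be modelled as a pure function. This is a sketch of the register-level contract only; `cas_result_reg` and its parameters are hypothetical names, not VEX code.

```c
#include <assert.h>
#include <stdint.h>

/* Value left in x1 after the generated sequence:
   - compare failed (bne taken): x1 still holds the loaded old value,
     which by construction differs from the zero-extended expected value;
   - compare matched: stxr writes its status into x1 (0 = stored,
     1 = store failed), and "eor x1, x5, x1" then yields x5 on success
     and x5^1 != x5 on failure. */
static uint64_t cas_result_reg(uint64_t x5_expected, uint64_t x1_loaded,
                               int compare_matched, int store_succeeded)
{
    if (!compare_matched)
        return x1_loaded;               /* bne taken: old value != x5 */
    uint64_t status = store_succeeded ? 0 : 1;  /* stxr status in x1 */
    return x5_expected ^ status;        /* eor x1, x5, x1 */
}
```

In all three outcomes the caller needs only the test `x1 == x5` to decide success, which is exactly what the instruction selector relies on.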
Modified: trunk/priv/host_arm64_isel.c
==============================================================================
--- trunk/priv/host_arm64_isel.c (original)
+++ trunk/priv/host_arm64_isel.c Mon Apr 24 10:23:43 2017
@@ -1383,12 +1383,13 @@
|| e->Iex.Binop.op == Iop_CmpLT64S
|| e->Iex.Binop.op == Iop_CmpLT64U
|| e->Iex.Binop.op == Iop_CmpLE64S
- || e->Iex.Binop.op == Iop_CmpLE64U)) {
+ || e->Iex.Binop.op == Iop_CmpLE64U
+ || e->Iex.Binop.op == Iop_CasCmpEQ64)) {
HReg argL = iselIntExpr_R(env, e->Iex.Binop.arg1);
ARM64RIA* argR = iselIntExpr_RIA(env, e->Iex.Binop.arg2);
addInstr(env, ARM64Instr_Cmp(argL, argR, True/*is64*/));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ64: return ARM64cc_EQ;
+ case Iop_CmpEQ64: case Iop_CasCmpEQ64: return ARM64cc_EQ;
case Iop_CmpNE64: return ARM64cc_NE;
case Iop_CmpLT64S: return ARM64cc_LT;
case Iop_CmpLT64U: return ARM64cc_CC;
@@ -1405,12 +1406,13 @@
|| e->Iex.Binop.op == Iop_CmpLT32S
|| e->Iex.Binop.op == Iop_CmpLT32U
|| e->Iex.Binop.op == Iop_CmpLE32S
- || e->Iex.Binop.op == Iop_CmpLE32U)) {
+ || e->Iex.Binop.op == Iop_CmpLE32U
+ || e->Iex.Binop.op == Iop_CasCmpEQ32)) {
HReg argL = iselIntExpr_R(env, e->Iex.Binop.arg1);
ARM64RIA* argR = iselIntExpr_RIA(env, e->Iex.Binop.arg2);
addInstr(env, ARM64Instr_Cmp(argL, argR, False/*!is64*/));
switch (e->Iex.Binop.op) {
- case Iop_CmpEQ32: return ARM64cc_EQ;
+ case Iop_CmpEQ32: case Iop_CasCmpEQ32: return ARM64cc_EQ;
case Iop_CmpNE32: return ARM64cc_NE;
case Iop_CmpLT32S: return ARM64cc_LT;
case Iop_CmpLT32U: return ARM64cc_CC;
@@ -1420,6 +1422,34 @@
}
}
+ /* --- Cmp*16*(x,y) --- */
+ if (e->tag == Iex_Binop
+ && (e->Iex.Binop.op == Iop_CasCmpEQ16)) {
+ HReg argL = iselIntExpr_R(env, e->Iex.Binop.arg1);
+ HReg argR = iselIntExpr_R(env, e->Iex.Binop.arg2);
+ HReg argL2 = widen_z_16_to_64(env, argL);
+ HReg argR2 = widen_z_16_to_64(env, argR);
+ addInstr(env, ARM64Instr_Cmp(argL2, ARM64RIA_R(argR2), True/*is64*/));
+ switch (e->Iex.Binop.op) {
+ case Iop_CasCmpEQ16: return ARM64cc_EQ;
+ default: vpanic("iselCondCode(arm64): CmpXX16");
+ }
+ }
+
+ /* --- Cmp*8*(x,y) --- */
+ if (e->tag == Iex_Binop
+ && (e->Iex.Binop.op == Iop_CasCmpEQ8)) {
+ HReg argL = iselIntExpr_R(env, e->Iex.Binop.arg1);
+ HReg argR = iselIntExpr_R(env, e->Iex.Binop.arg2);
+ HReg argL2 = widen_z_8_to_64(env, argL);
+ HReg argR2 = widen_z_8_to_64(env, argR);
+ addInstr(env, ARM64Instr_Cmp(argL2, ARM64RIA_R(argR2), True/*is64*/));
+ switch (e->Iex.Binop.op) {
+ case Iop_CasCmpEQ8: return ARM64cc_EQ;
+ default: vpanic("iselCondCode(arm64): CmpXX8");
+ }
+ }
+
ppIRExpr(e);
vpanic("iselCondCode");
}
@@ -3833,6 +3863,57 @@
break;
}
+ /* --------- ACAS --------- */
+ case Ist_CAS: {
+ if (stmt->Ist.CAS.details->oldHi == IRTemp_INVALID) {
+ /* "normal" singleton CAS */
+ UChar sz;
+ IRCAS* cas = stmt->Ist.CAS.details;
+ IRType ty = typeOfIRExpr(env->type_env, cas->dataLo);
+ switch (ty) {
+ case Ity_I64: sz = 8; break;
+ case Ity_I32: sz = 4; break;
+ case Ity_I16: sz = 2; break;
+ case Ity_I8: sz = 1; break;
+ default: goto unhandled_cas;
+ }
+ HReg rAddr = iselIntExpr_R(env, cas->addr);
+ HReg rExpd = iselIntExpr_R(env, cas->expdLo);
+ HReg rData = iselIntExpr_R(env, cas->dataLo);
+ vassert(cas->expdHi == NULL);
+ vassert(cas->dataHi == NULL);
+ addInstr(env, ARM64Instr_MovI(hregARM64_X3(), rAddr));
+ addInstr(env, ARM64Instr_MovI(hregARM64_X5(), rExpd));
+ addInstr(env, ARM64Instr_MovI(hregARM64_X7(), rData));
+ addInstr(env, ARM64Instr_CAS(sz));
+ /* Now the lowest szB bytes of x1 are either equal to
+ the lowest szB bytes of x5, indicating success, or they
+ aren't, indicating failure. The IR semantics actually
+ require us to return the old value at the location,
+ regardless of success or failure, but in the case of
+ failure it's not clear how to do this, since
+ ARM64Instr_CAS can't provide that. Instead we'll just
+ return the relevant bit of x1, since that's at least
+ guaranteed to be different from the lowest bits of x5 on
+ failure. */
+ HReg rResult = hregARM64_X1();
+ switch (sz) {
+ case 8: break;
+ case 4: rResult = widen_z_32_to_64(env, rResult); break;
+ case 2: rResult = widen_z_16_to_64(env, rResult); break;
+ case 1: rResult = widen_z_8_to_64(env, rResult); break;
+ default: vassert(0);
+ }
+ // "old" in this case is interpreted somewhat liberally, per
+ // the previous comment.
+ HReg rOld = lookupIRTemp(env, cas->oldLo);
+ addInstr(env, ARM64Instr_MovI(rOld, rResult));
+ return;
+ }
+ unhandled_cas:
+ break;
+ }
+
/* --------- MEM FENCE --------- */
case Ist_MBE:
switch (stmt->Ist.MBE.event) {
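The isel comment above notes that on failure the true old value cannot be recovered; all the IR consumer actually needs is that `CasCmpEQ(old, expected)` holds exactly on success. That contract can be sketched with C11 atomics; `cas_succeeded` is an illustrative name and this is not the generated code.

```c
#include <assert.h>
#include <stdint.h>
#include <stdatomic.h>

/* Perform a CAS and decide success the way the IR does: via an
   equality test between the returned "old" value and the expected
   value (the Iop_CasCmpEQ* pattern added in this commit). */
static int cas_succeeded(_Atomic uint32_t *addr, uint32_t expd,
                         uint32_t data)
{
    uint32_t old = expd;
    atomic_compare_exchange_strong(addr, &old, data);
    /* Iop_CasCmpEQ32: success iff old == expd.  On failure 'old' only
       needs to differ from 'expd', which the backend guarantees. */
    return old == expd;
}
```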
Modified: trunk/priv/main_main.c
==============================================================================
--- trunk/priv/main_main.c (original)
+++ trunk/priv/main_main.c Mon Apr 24 10:23:43 2017
@@ -1556,6 +1556,7 @@
vbi->guest_amd64_assume_gs_is_const = False;
vbi->guest_ppc_zap_RZ_at_blr = False;
vbi->guest_ppc_zap_RZ_at_bl = NULL;
+ vbi->guest__use_fallback_LLSC = False;
vbi->host_ppc_calls_use_fndescrs = False;
}
Modified: trunk/pub/libvex.h
==============================================================================
--- trunk/pub/libvex.h (original)
+++ trunk/pub/libvex.h Mon Apr 24 10:23:43 2017
@@ -369,6 +369,11 @@
guest is ppc32-linux ==> const False
guest is other ==> inapplicable
+ guest__use_fallback_LLSC
+ guest is mips32 ==> applicable, default True
+ guest is mips64 ==> applicable, default True
+ guest is arm64 ==> applicable, default False
+
host_ppc_calls_use_fndescrs:
host is ppc32-linux ==> False
host is ppc64-linux ==> True
@@ -401,11 +406,17 @@
is assumed equivalent to a fn which always returns False. */
Bool (*guest_ppc_zap_RZ_at_bl)(Addr);
+ /* Potentially for all guests that use LL/SC: use the fallback
+ (synthesised) implementation rather than passing LL/SC on to
+ the host? */
+ Bool guest__use_fallback_LLSC;
+
/* PPC32/PPC64 HOSTS only: does '&f' give us a pointer to a
function descriptor on the host, or to the function code
itself? True => descriptor, False => code. */
Bool host_ppc_calls_use_fndescrs;
+ /* ??? Description ??? */
Bool guest_mips_fp_mode64;
}
VexAbiInfo;
Modified: trunk/pub/libvex_guest_arm64.h
==============================================================================
--- trunk/pub/libvex_guest_arm64.h (original)
+++ trunk/pub/libvex_guest_arm64.h Mon Apr 24 10:23:43 2017
@@ -159,9 +159,14 @@
note of bits 23 and 22. */
UInt guest_FPCR;
+ /* Fallback LL/SC support. See bugs 344524 and 369459. */
+ ULong guest_LLSC_SIZE; // 0==no current transaction, else 1,2,4 or 8.
+ ULong guest_LLSC_ADDR; // Address of transaction.
+ ULong guest_LLSC_DATA; // Original value at _ADDR, zero-extended.
+
/* Padding to make it have an 16-aligned size */
/* UInt pad_end_0; */
- /* ULong pad_end_1; */
+ ULong pad_end_1;
}
VexGuestARM64State;
|
|
From: 网. <net...@fo...> - 2017-04-23 14:28:49
|
Valgrind didn't report any errors in my program. That is why I asked the question before. The strangest thing is that it did not core dump when it was running under valgrind. I agree with you, there might be a wild access to memory. But why did the problem disappear when it was running under valgrind?

------------------ Original Message ------------------
From: "Petar Jovanovic" <mip...@gm...>
Sent: Thursday, 20 April 2017, 6:56 PM
To: "网络尖兵" <net...@fo...>
Cc: "Valgrind Developers" <val...@li...>
Subject: Re: [Valgrind-developers] Are there some protective mechanisms of valgrind on MIPS64, especially on Cavium OCTEON3?

On Wed, Apr 19, 2017 at 3:51 AM, zboom <net...@fo...> wrote:
> Thanks for your reply!
> This problem occurred when I was running OpenSwitch
> (http://www.openswitch.net/) on my embedded system.

Has Valgrind reported any errors in your program? If the program has a bug, you may want to fix that first. The issue you are reporting could be, for instance, due to reads in a freed block or branches based on uninitialized values and such.

Petar |
|
From: <sv...@va...> - 2017-04-20 14:07:48
|
Author: petarj
Date: Thu Apr 20 15:07:37 2017
New Revision: 16308
Log:
update svn:ignore list
Add mcmain_pic.heur to the svn:ignore list.
Modified:
trunk/gdbserver_tests/ (props changed)
|
|
From: <sv...@va...> - 2017-04-20 14:04:50
|
Author: petarj
Date: Thu Apr 20 15:04:37 2017
New Revision: 16307
Log:
add MIPS to info about supported architectures
Indicate that Valgrind supports MIPS architecture.
Modified:
trunk/coregrind/m_main.c
Modified: trunk/coregrind/m_main.c
==============================================================================
--- trunk/coregrind/m_main.c (original)
+++ trunk/coregrind/m_main.c Thu Apr 20 15:04:37 2017
@@ -1338,6 +1338,7 @@
"AMD Athlon or above)\n");
VG_(printf)(" * AMD Athlon64/Opteron\n");
VG_(printf)(" * ARM (armv7)\n");
+ VG_(printf)(" * MIPS (mips32 and above; mips64 and above)\n");
VG_(printf)(" * PowerPC (most; ppc405 and above)\n");
VG_(printf)(" * System z (64bit only - s390x; z990 and above)\n");
VG_(printf)("\n");
|
|
From: Petar J. <mip...@gm...> - 2017-04-20 10:56:42
|
On Wed, Apr 19, 2017 at 3:51 AM, zboom <net...@fo...> wrote:
> Thanks for your reply!
> This problem occurred when I was running OpenSwitch
> (http://www.openswitch.net/) on my embedded system.

Has Valgrind reported any errors in your program? If the program has a bug, you may want to fix that first. The issue you are reporting could be, for instance, due to reads in a freed block or branches based on uninitialized values and such.

Petar |
Author: iraisr
Date: Wed Apr 19 22:21:22 2017
New Revision: 3351
Log:
Add support for HInstrIfThenElse into x86 instruction selector backend.
Now on to register allocator...
Modified:
branches/VEX_JIT_HACKS/priv/host_generic_reg_alloc2.c
branches/VEX_JIT_HACKS/priv/host_generic_regs.c
branches/VEX_JIT_HACKS/priv/host_generic_regs.h
branches/VEX_JIT_HACKS/priv/host_x86_defs.c
branches/VEX_JIT_HACKS/priv/host_x86_defs.h
branches/VEX_JIT_HACKS/priv/host_x86_isel.c
branches/VEX_JIT_HACKS/priv/ir_defs.c
branches/VEX_JIT_HACKS/priv/main_main.c
branches/VEX_JIT_HACKS/pub/libvex_ir.h
Modified: branches/VEX_JIT_HACKS/priv/host_generic_reg_alloc2.c
==============================================================================
--- branches/VEX_JIT_HACKS/priv/host_generic_reg_alloc2.c (original)
+++ branches/VEX_JIT_HACKS/priv/host_generic_reg_alloc2.c Wed Apr 19 22:21:22 2017
@@ -410,6 +410,7 @@
/* For debug printing only. */
void (*ppInstr)(const HInstr*, Bool);
+ void (*ppCondCode)(HCondCode);
void (*ppReg)(HReg);
/* 32/64bit mode */
@@ -428,8 +429,8 @@
void (*genReload)( HInstr**, HInstr**, HReg, Int, Bool),
HInstr* (*directReload)( HInstr*, HReg, Short),
UInt guest_sizeB,
- void (*ppInstr)(const HInstr*, Bool), void (*ppReg)(HReg),
- Bool mode64, const HInstrVec* instrs_in, UInt n_vregs)
+ void (*ppInstr)(const HInstr*, Bool), void (*ppCondCode)(HCondCode),
+ void (*ppReg)(HReg), Bool mode64, const HInstrVec* instrs_in, UInt n_vregs)
{
/* Initialize Register Allocator state. */
state->univ = univ;
@@ -442,6 +443,7 @@
state->directReload = directReload;
state->guest_sizeB = guest_sizeB;
state->ppInstr = ppInstr;
+ state->ppCondCode = ppCondCode;
state->ppReg = ppReg;
state->mode64 = mode64;
@@ -595,7 +597,7 @@
const HInstr* instr = instrs_in->insns[ii];
if (state->isIfThenElse(instr) != NULL) {
- vpanic("IfThenElse unimplemented");
+ vpanic("regAlloc_HInstrVec: IfThenElse unimplemented");
}
state->getRegUsage(&state->reg_usage_arr[ii], instr, state->mode64);
@@ -1689,6 +1691,7 @@
/* For debug printing only. */
void (*ppInstr) ( const HInstr*, Bool ),
+ void (*ppCondCode)(HCondCode),
void (*ppReg) ( HReg ),
/* 32/64bit mode */
@@ -1704,7 +1707,7 @@
RegAllocState state;
initRegAllocState(&state, univ, isMove, getRegUsage, mapRegs, isIfThenElse,
genSpill, genReload, directReload, guest_sizeB, ppInstr,
- ppReg, mode64, sb_in->insns, sb_in->n_vregs);
+ ppCondCode, ppReg, mode64, sb_in->insns, sb_in->n_vregs);
sb_out->insns = regAlloc_HInstrVec(&state, sb_in->insns);
return sb_out;
}
Modified: branches/VEX_JIT_HACKS/priv/host_generic_regs.c
==============================================================================
--- branches/VEX_JIT_HACKS/priv/host_generic_regs.c (original)
+++ branches/VEX_JIT_HACKS/priv/host_generic_regs.c Wed Apr 19 22:21:22 2017
@@ -304,25 +304,63 @@
addHInstr(hv, instr);
}
+HInstrIfThenElse* newHInstrIfThenElse(HCondCode condCode, HPhiNode* phi_nodes,
+ UInt n_phis)
+{
+ HInstrIfThenElse* hite = LibVEX_Alloc_inline(sizeof(HInstrIfThenElse));
+ hite->ccOOL = condCode;
+ hite->fallThrough = newHInstrVec();
+ hite->outOfLine = newHInstrVec();
+ hite->phi_nodes = phi_nodes;
+ hite->n_phis = n_phis;
+ return hite;
+}
+
static void print_depth(UInt depth) {
for (UInt i = 0; i < depth; i++) {
vex_printf(" ");
}
}
+void ppHPhiNode(const HPhiNode* phi_node)
+{
+ ppHReg(phi_node->dst);
+ vex_printf(" = phi(");
+ ppHReg(phi_node->srcFallThrough);
+ vex_printf(",");
+ ppHReg(phi_node->srcOutOfLine);
+ vex_printf(")");
+}
+
static void ppHInstrVec(const HInstrVec* code,
HInstrIfThenElse* (*isIfThenElse)(const HInstr*),
void (*ppInstr)(const HInstr*, Bool),
+ void (*ppCondCode)(HCondCode),
Bool mode64, UInt depth, UInt *insn_num)
{
for (UInt i = 0; i < code->insns_used; i++) {
const HInstr* instr = code->insns[i];
const HInstrIfThenElse* hite = isIfThenElse(instr);
if (UNLIKELY(hite != NULL)) {
- ppHInstrVec(hite->fallThrough, isIfThenElse, ppInstr, mode64,
- depth + 1, insn_num);
- ppHInstrVec(hite->outOfLine, isIfThenElse, ppInstr, mode64,
- depth + 1, insn_num);
+ print_depth(depth);
+ vex_printf(" if (!");
+ ppCondCode(hite->ccOOL);
+ vex_printf(") then fall-through {\n");
+ ppHInstrVec(hite->fallThrough, isIfThenElse, ppInstr, ppCondCode,
+ mode64, depth + 1, insn_num);
+ print_depth(depth);
+ vex_printf(" } else out-of-line {\n");
+ ppHInstrVec(hite->outOfLine, isIfThenElse, ppInstr, ppCondCode,
+ mode64, depth + 1, insn_num);
+ print_depth(depth);
+ vex_printf(" }\n");
+
+ for (UInt j = 0; j < hite->n_phis; j++) {
+ print_depth(depth);
+ vex_printf(" ");
+ ppHPhiNode(&hite->phi_nodes[j]);
+ vex_printf("\n");
+ }
} else {
vex_printf("%3u ", (*insn_num)++);
print_depth(depth);
@@ -342,10 +380,12 @@
void ppHInstrSB(const HInstrSB* code,
HInstrIfThenElse* (*isIfThenElse)(const HInstr*),
- void (*ppInstr)(const HInstr*, Bool), Bool mode64)
+ void (*ppInstr)(const HInstr*, Bool),
+ void (*ppCondCode)(HCondCode), Bool mode64)
{
UInt insn_num = 0;
- ppHInstrVec(code->insns, isIfThenElse, ppInstr, mode64, 0, &insn_num);
+ ppHInstrVec(code->insns, isIfThenElse, ppInstr, ppCondCode, mode64, 0,
+ &insn_num);
}
Modified: branches/VEX_JIT_HACKS/priv/host_generic_regs.h
==============================================================================
--- branches/VEX_JIT_HACKS/priv/host_generic_regs.h (original)
+++ branches/VEX_JIT_HACKS/priv/host_generic_regs.h Wed Apr 19 22:21:22 2017
@@ -368,18 +368,37 @@
}
}
+/* Host-independent condition code. Stands for X86CondCode, ARM64CondCode... */
+typedef UInt HCondCode;
+
+
+/* Phi node expressed in terms of HReg's. Analogy to IRPhi. */
+typedef
+ struct {
+ HReg dst;
+ HReg srcFallThrough;
+ HReg srcOutOfLine;
+ }
+ HPhiNode;
+
+extern void ppHPhiNode(const HPhiNode* phi_node);
+
/* Represents two alternative code paths:
- - one more likely taken (hot path)
- - one not so likely taken (cold path) */
+ - One more likely taken (hot path)
+ - One not so likely taken (cold path) */
typedef
struct {
- // HCondCode ccOOL; // TODO-JIT: condition code for the OOL branch
+ HCondCode ccOOL; // condition code for the OOL branch
HInstrVec* fallThrough; // generated from the likely-taken IR
HInstrVec* outOfLine; // generated from likely-not-taken IR
+ HPhiNode* phi_nodes;
+ UInt n_phis;
}
HInstrIfThenElse;
+extern HInstrIfThenElse* newHInstrIfThenElse(HCondCode, HPhiNode* phi_nodes,
+ UInt n_phis);
/* Code block of HInstr's.
n_vregs indicates the number of virtual registers mentioned in the code,
@@ -395,7 +414,8 @@
extern HInstrSB* newHInstrSB(void);
extern void ppHInstrSB(const HInstrSB* code,
HInstrIfThenElse* (*isIfThenElse)(const HInstr*),
- void (*ppInstr)(const HInstr*, Bool), Bool mode64);
+ void (*ppInstr)(const HInstr*, Bool),
+ void (*ppCondCode)(HCondCode), Bool mode64);
/*---------------------------------------------------------*/
@@ -500,6 +520,7 @@
/* For debug printing only. */
void (*ppInstr) ( const HInstr*, Bool ),
+ void (*ppCondCode)(HCondCode),
void (*ppReg) ( HReg ),
/* 32/64bit mode */
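How an `HPhiNode`'s two sources map onto the fall-through and out-of-line slots depends on which branch is hinted likely: the fall-through path is generated from the likely-taken IR, so the then/else sources swap roles under the opposite hint. A small illustrative model (the names `convert_phi`, `Hint`, `IRPhi`, `HPhi` are stand-ins, not the actual VEX types):

```c
#include <assert.h>

typedef enum { ThenLikely, ElseLikely } Hint;
typedef struct { int srcThen, srcElse; } IRPhi;   /* IR-level phi */
typedef struct { int srcFallThrough, srcOutOfLine; } HPhi; /* host-level */

/* Route the IR phi sources into the host phi slots according to the
   branch-likelihood hint: the likely branch becomes the fall-through
   path, the unlikely one goes out of line. */
static HPhi convert_phi(IRPhi phi, Hint hint)
{
    HPhi h;
    if (hint == ThenLikely) {
        h.srcFallThrough = phi.srcThen;
        h.srcOutOfLine   = phi.srcElse;
    } else {
        h.srcFallThrough = phi.srcElse;
        h.srcOutOfLine   = phi.srcThen;
    }
    return h;
}
```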
Modified: branches/VEX_JIT_HACKS/priv/host_x86_defs.c
==============================================================================
--- branches/VEX_JIT_HACKS/priv/host_x86_defs.c (original)
+++ branches/VEX_JIT_HACKS/priv/host_x86_defs.c Wed Apr 19 22:21:22 2017
@@ -922,6 +922,13 @@
i->tag = Xin_ProfInc;
return i;
}
+X86Instr* X86Instr_IfThenElse(HInstrIfThenElse* hite)
+{
+ X86Instr* i = LibVEX_Alloc_inline(sizeof(X86Instr));
+ i->tag = Xin_IfThenElse;
+ i->Xin.IfThenElse.hite = hite;
+ return i;
+}
void ppX86Instr ( const X86Instr* i, Bool mode64 ) {
vassert(mode64 == False);
@@ -1212,11 +1219,20 @@
vex_printf("(profInc) addl $1,NotKnownYet; "
"adcl $0,NotKnownYet+4");
return;
+ case Xin_IfThenElse:
+ vex_printf("if (!%s) then {...",
+ showX86CondCode(i->Xin.IfThenElse.hite->ccOOL));
+ return;
default:
vpanic("ppX86Instr");
}
}
+void ppX86CondCode(X86CondCode condCode)
+{
+ vex_printf("%s", showX86CondCode(condCode));
+}
+
/* --------- Helpers for register allocation. --------- */
void getRegUsage_X86Instr (HRegUsage* u, const X86Instr* i, Bool mode64)
Modified: branches/VEX_JIT_HACKS/priv/host_x86_defs.h
==============================================================================
--- branches/VEX_JIT_HACKS/priv/host_x86_defs.h (original)
+++ branches/VEX_JIT_HACKS/priv/host_x86_defs.h Wed Apr 19 22:21:22 2017
@@ -712,9 +712,11 @@
extern X86Instr* X86Instr_EvCheck ( X86AMode* amCounter,
X86AMode* amFailAddr );
extern X86Instr* X86Instr_ProfInc ( void );
+extern X86Instr* X86Instr_IfThenElse(HInstrIfThenElse*);
extern void ppX86Instr ( const X86Instr*, Bool );
+extern void ppX86CondCode(X86CondCode);
/* Some functions that insulate the register allocator from details
of the underlying instruction set. */
Modified: branches/VEX_JIT_HACKS/priv/host_x86_isel.c
==============================================================================
--- branches/VEX_JIT_HACKS/priv/host_x86_isel.c (original)
+++ branches/VEX_JIT_HACKS/priv/host_x86_isel.c Wed Apr 19 22:21:22 2017
@@ -124,6 +124,19 @@
// && e->Iex.Const.con->Ico.U64 == 0ULL;
//}
+static void print_depth(UInt depth)
+{
+ for (UInt i = 0; i < depth; i++) {
+ vex_printf(" ");
+ }
+}
+
+static void print_IRStmt_prefix(UInt depth)
+{
+ vex_printf("\n");
+ print_depth(depth);
+ vex_printf("-- ");
+}
/*---------------------------------------------------------*/
/*--- ISelEnv ---*/
@@ -147,7 +160,8 @@
32-bit virtual HReg, which holds the high half
of the value.
- - The code array, that is, the insns selected so far.
+ - The code vector, that is, the insns selected so far. HInstrVec 'code'
+ changes according to the current IRStmtVec being processed.
- A counter, for generating new virtual registers.
@@ -171,13 +185,12 @@
typedef
struct {
- /* Constant -- are set at the start and do not change. */
- IRTypeEnv* type_env;
- IRStmtVec* stmts;
+ /* Constant -- set at the start and do not change. */
+ const IRTypeEnv* type_env;
HReg* vregmap;
HReg* vregmapHI;
- Int n_vregmap;
+ UInt n_vregmap;
UInt hwcaps;
@@ -185,8 +198,10 @@
Addr32 max_ga;
/* These are modified as we go along. */
- HInstrSB* code;
- Int vreg_ctr;
+ HInstrSB* code_sb;
+ HInstrVec* code;
+ UInt vreg_ctr;
+ UInt depth;
}
ISelEnv;
@@ -209,8 +224,9 @@
static void addInstr ( ISelEnv* env, X86Instr* instr )
{
- addHInstr(env->code->insns, instr);
+ addHInstr(env->code, instr);
if (vex_traceflags & VEX_TRACE_VCODE) {
+ print_depth(env->depth);
ppX86Instr(instr, False);
vex_printf("\n");
}
@@ -3860,10 +3876,111 @@
/*--- ISEL: Statements ---*/
/*---------------------------------------------------------*/
+static void iselStmt(ISelEnv* env, IRStmt* stmt);
+
+static HPhiNode* convertPhiNodes(ISelEnv* env, const IRPhiVec* phi_nodes,
+ IRIfThenElse_Hint hint, UInt *n_phis)
+{
+ *n_phis = phi_nodes->phis_used;
+ HPhiNode* hphis = LibVEX_Alloc_inline(*n_phis * sizeof(HPhiNode));
+
+ for (UInt i = 0; i < *n_phis; i++) {
+ const IRPhi* phi = phi_nodes->phis[i];
+ hphis[i].dst = lookupIRTemp(env, phi->dst);
+
+ switch (hint) {
+ case IfThenElse_ThenLikely:
+ hphis[i].srcFallThrough = lookupIRTemp(env, phi->srcThen);
+ hphis[i].srcOutOfLine = lookupIRTemp(env, phi->srcElse);
+ break;
+ case IfThenElse_ElseLikely:
+ hphis[i].srcFallThrough = lookupIRTemp(env, phi->srcElse);
+ hphis[i].srcOutOfLine = lookupIRTemp(env, phi->srcThen);
+ break;
+ default:
+ vassert(0);
+ }
+ }
+ return hphis;
+}
+
+static void iselStmtVec(ISelEnv* env, IRStmtVec* stmts)
+{
+ for (UInt i = 0; i < stmts->stmts_used; i++) {
+ IRStmt* st = stmts->stmts[i];
+ if (st->tag != Ist_IfThenElse) {
+ iselStmt(env, stmts->stmts[i]);
+ continue;
+ }
+
+ /* Deal with IfThenElse. */
+ HInstrVec* current_code = env->code;
+ IRIfThenElse* ite = st->Ist.IfThenElse.details;
+ if (vex_traceflags & VEX_TRACE_VCODE) {
+ print_IRStmt_prefix(env->depth);
+ ppIRIfThenElseCondHint(ite);
+ vex_printf(" then {\n");
+ }
+
+ UInt n_phis;
+ HPhiNode* phi_nodes = convertPhiNodes(env, ite->phi_nodes,
+ ite->hint, &n_phis);
+
+ X86CondCode cc = iselCondCode(env, ite->cond);
+ HInstrIfThenElse* hite = newHInstrIfThenElse(cc, phi_nodes, n_phis);
+ X86Instr* instr = X86Instr_IfThenElse(hite);
+ addInstr(env, instr);
+
+ env->depth += 1;
+
+ IRStmtVec* likely_leg;
+ IRStmtVec* unlikely_leg;
+ switch (ite->hint) {
+ case IfThenElse_ThenLikely:
+ likely_leg = ite->then_leg;
+ unlikely_leg = ite->else_leg;
+ break;
+ case IfThenElse_ElseLikely:
+ likely_leg = ite->else_leg;
+ unlikely_leg = ite->then_leg;
+ break;
+ default:
+ vassert(0);
+ }
+
+ env->code = hite->fallThrough;
+ iselStmtVec(env, likely_leg);
+ if (vex_traceflags & VEX_TRACE_VCODE) {
+ print_IRStmt_prefix(env->depth - 1);
+ vex_printf("} else {\n");
+ }
+ env->code = hite->outOfLine;
+ iselStmtVec(env, unlikely_leg);
+
+ env->depth -= 1;
+ env->code = current_code;
+
+ if (vex_traceflags & VEX_TRACE_VCODE) {
+ print_IRStmt_prefix(env->depth);
+ vex_printf("}\n");
+
+ for (UInt j = 0; j < hite->n_phis; j++) {
+ print_IRStmt_prefix(env->depth);
+ ppIRPhi(ite->phi_nodes->phis[j]);
+ vex_printf("\n");
+
+ print_depth(env->depth);
+ ppHPhiNode(&hite->phi_nodes[j]);
+ vex_printf("\n");
+ }
+ }
+ }
+}
+
static void iselStmt ( ISelEnv* env, IRStmt* stmt )
{
if (vex_traceflags & VEX_TRACE_VCODE) {
- vex_printf("\n-- ");
+ print_IRStmt_prefix(env->depth);
ppIRStmt(stmt, env->type_env, 0);
vex_printf("\n");
}
@@ -4430,9 +4547,6 @@
Bool addProfInc,
Addr max_ga )
{
- Int i, j;
- HReg hreg, hregHI;
- ISelEnv* env;
UInt hwcaps_host = archinfo_host->hwcaps;
X86AMode *amCounter, *amFailAddr;
@@ -4449,17 +4563,16 @@
vassert(archinfo_host->endness == VexEndnessLE);
/* Make up an initial environment to use. */
- env = LibVEX_Alloc_inline(sizeof(ISelEnv));
+ ISelEnv* env = LibVEX_Alloc_inline(sizeof(ISelEnv));
env->vreg_ctr = 0;
- /* Set up output code array. */
- env->code = newHInstrSB();
+ /* Set up output HInstrSB and the first processed HInstrVec. */
+ env->code_sb = newHInstrSB();
+ env->code = env->code_sb->insns;
+ env->depth = 0;
/* Copy BB's type env. */
- /* TODO-JIT: Currently works only with no if-then-else statements. */
- vassert(bb->id_seq == 1);
env->type_env = bb->tyenv;
- env->stmts = bb->stmts;
/* Make up an IRTemp -> virtual HReg mapping. This doesn't
change as we go along. */
@@ -4474,9 +4587,10 @@
/* For each IR temporary, allocate a suitably-kinded virtual
register. */
- j = 0;
- for (i = 0; i < env->n_vregmap; i++) {
- hregHI = hreg = INVALID_HREG;
+ UInt j = 0;
+ for (UInt i = 0; i < env->n_vregmap; i++) {
+ HReg hreg = INVALID_HREG;
+ HReg hregHI = INVALID_HREG;
switch (bb->tyenv->types[i]) {
case Ity_I1:
case Ity_I8:
@@ -4509,14 +4623,13 @@
}
/* Ok, finally we can iterate over the statements. */
- for (i = 0; i < bb->stmts->stmts_used; i++)
- iselStmt(env, bb->stmts->stmts[i]);
+ iselStmtVec(env, bb->stmts);
iselNext(env, bb->next, bb->jumpkind, bb->offsIP);
/* record the number of vregs we used. */
- env->code->n_vregs = env->vreg_ctr;
- return env->code;
+ env->code_sb->n_vregs = env->vreg_ctr;
+ return env->code_sb;
}
Modified: branches/VEX_JIT_HACKS/priv/ir_defs.c
==============================================================================
--- branches/VEX_JIT_HACKS/priv/ir_defs.c (original)
+++ branches/VEX_JIT_HACKS/priv/ir_defs.c Wed Apr 19 22:21:22 2017
@@ -1610,13 +1610,19 @@
}
}
-void ppIRIfThenElse(const IRIfThenElse* ite, const IRTypeEnv* tyenv, UInt depth)
+void ppIRIfThenElseCondHint(const IRIfThenElse* ite)
{
vex_printf("if (");
ppIRExpr(ite->cond);
vex_printf(") [");
ppIRIfThenElse_Hint(ite->hint);
- vex_printf("] then {\n");
+ vex_printf("]");
+}
+
+void ppIRIfThenElse(const IRIfThenElse* ite, const IRTypeEnv* tyenv, UInt depth)
+{
+ ppIRIfThenElseCondHint(ite);
+ vex_printf(" then {\n");
ppIRStmtVec(ite->then_leg, tyenv, depth + 1);
print_depth(depth);
vex_printf("} else {\n");
Modified: branches/VEX_JIT_HACKS/priv/main_main.c
==============================================================================
--- branches/VEX_JIT_HACKS/priv/main_main.c (original)
+++ branches/VEX_JIT_HACKS/priv/main_main.c Wed Apr 19 22:21:22 2017
@@ -331,6 +331,7 @@
void (*genReload) ( HInstr**, HInstr**, HReg, Int, Bool );
HInstr* (*directReload) ( HInstr*, HReg, Short );
void (*ppInstr) ( const HInstr*, Bool );
+ void (*ppCondCode) ( HCondCode );
void (*ppReg) ( HReg );
HInstrSB* (*iselSB) ( const IRSB*, VexArch, const VexArchInfo*,
const VexAbiInfo*, Int, Int, Bool, Bool,
@@ -367,6 +368,7 @@
genReload = NULL;
directReload = NULL;
ppInstr = NULL;
+ ppCondCode = NULL;
ppReg = NULL;
iselSB = NULL;
emit = NULL;
@@ -419,6 +421,7 @@
genReload = (__typeof__(genReload)) X86FN(genReload_X86);
directReload = (__typeof__(directReload)) X86FN(directReload_X86);
ppInstr = (__typeof__(ppInstr)) X86FN(ppX86Instr);
+ ppCondCode = (__typeof__(ppCondCode)) X86FN(ppX86CondCode);
ppReg = (__typeof__(ppReg)) X86FN(ppHRegX86);
iselSB = X86FN(iselSB_X86);
emit = (__typeof__(emit)) X86FN(emit_X86Instr);
@@ -1044,7 +1047,7 @@
vex_printf("\n");
if (vex_traceflags & VEX_TRACE_VCODE) {
- ppHInstrSB(vcode, isIfThenElse, ppInstr, mode64);
+ ppHInstrSB(vcode, isIfThenElse, ppInstr, ppCondCode, mode64);
vex_printf("\n");
}
@@ -1053,7 +1056,7 @@
isMove, getRegUsage, mapRegs, isIfThenElse,
genSpill, genReload, directReload,
guest_sizeB,
- ppInstr, ppReg, mode64 );
+ ppInstr, ppCondCode, ppReg, mode64 );
vexAllocSanityCheck();
@@ -1061,7 +1064,7 @@
vex_printf("\n------------------------"
" Register-allocated code "
"------------------------\n\n");
- ppHInstrSB(rcode, isIfThenElse, ppInstr, mode64);
+ ppHInstrSB(rcode, isIfThenElse, ppInstr, ppCondCode, mode64);
vex_printf("\n");
}
Modified: branches/VEX_JIT_HACKS/pub/libvex_ir.h
==============================================================================
--- branches/VEX_JIT_HACKS/pub/libvex_ir.h (original)
+++ branches/VEX_JIT_HACKS/pub/libvex_ir.h Wed Apr 19 22:21:22 2017
@@ -2817,6 +2817,8 @@
IRIfThenElse;
extern void ppIRIfThenElse_Hint(IRIfThenElse_Hint hint);
+/* Pretty print only If-Then-Else preamble: condition and hint. Not the legs. */
+extern void ppIRIfThenElseCondHint(const IRIfThenElse* ite);
extern void ppIRIfThenElse(const IRIfThenElse* ite, const IRTypeEnv* tyenv,
UInt depth);
extern IRIfThenElse* mkIRIfThenElse(IRExpr* cond, IRIfThenElse_Hint hint,
From: <sv...@va...> - 2017-04-19 20:16:02
Author: philippe
Date: Wed Apr 19 21:15:50 2017
New Revision: 16306
Log:
Have a cleaner way to remove the massif preload from LD_PRELOAD.
The previous code was removing the massif preload (when --pages-as-heap=yes)
by replacing the entry with spaces.
This is not very clear, and I suspect this gives problems with the
android linker, which seems to use such a space entry as a real entry
to load (and then fails to start the application).
This patch really removes the entry, by shifting the characters.
Tested on amd64/debian.
Modified:
trunk/massif/ms_main.c
Modified: trunk/massif/ms_main.c
==============================================================================
--- trunk/massif/ms_main.c (original)
+++ trunk/massif/ms_main.c Wed Apr 19 21:15:50 2017
@@ -1947,8 +1947,6 @@
{
Int i;
HChar* LD_PRELOAD_val;
- HChar* s;
- HChar* s2;
/* We will record execontext up to clo_depth + overestimate and
we will store this as ec => we need to increase the backtrace size
@@ -1969,35 +1967,42 @@
// If --pages-as-heap=yes we don't want malloc replacement to occur. So we
// disable vgpreload_massif-$PLATFORM.so by removing it from LD_PRELOAD (or
- // platform-equivalent). We replace it entirely with spaces because then
- // the linker doesn't complain (it does complain if we just change the name
- // to a bogus file). This is a bit of a hack, but LD_PRELOAD is setup well
- // before tool initialisation, so this seems the best way to do it.
+ // platform-equivalent). This is a bit of a hack, but LD_PRELOAD is setup
+ // well before tool initialisation, so this seems the best way to do it.
if (clo_pages_as_heap) {
+ HChar* s1;
+ HChar* s2;
+
clo_heap_admin = 0; // No heap admin on pages.
LD_PRELOAD_val = VG_(getenv)( VG_(LD_PRELOAD_var_name) );
tl_assert(LD_PRELOAD_val);
+ VERB(2, "clo_pages_as_heap orig LD_PRELOAD '%s'\n", LD_PRELOAD_val);
+
// Make sure the vgpreload_core-$PLATFORM entry is there, for sanity.
- s2 = VG_(strstr)(LD_PRELOAD_val, "vgpreload_core");
- tl_assert(s2);
+ s1 = VG_(strstr)(LD_PRELOAD_val, "vgpreload_core");
+ tl_assert(s1);
// Now find the vgpreload_massif-$PLATFORM entry.
- s2 = VG_(strstr)(LD_PRELOAD_val, "vgpreload_massif");
- tl_assert(s2);
+ s1 = VG_(strstr)(LD_PRELOAD_val, "vgpreload_massif");
+ tl_assert(s1);
+ s2 = s1;
- // Blank out everything to the previous ':', which must be there because
+ // Position s1 on the previous ':', which must be there because
// of the preceding vgpreload_core-$PLATFORM entry.
- for (s = s2; *s != ':'; s--) {
- *s = ' ';
- }
+ for (; *s1 != ':'; s1--)
+ ;
- // Blank out everything to the end of the entry, which will be '\0' if
- // LD_PRELOAD was empty before Valgrind started, or ':' otherwise.
- for (s = s2; *s != ':' && *s != '\0'; s++) {
- *s = ' ';
- }
+ // Position s2 on the next ':' or \0
+ for (; *s2 != ':' && *s2 != '\0'; s2++)
+ ;
+
+ // Move all characters from s2 to s1
+ while ((*s1++ = *s2++))
+ ;
+
+ VERB(2, "clo_pages_as_heap cleaned LD_PRELOAD '%s'\n", LD_PRELOAD_val);
}
// Print alloc-fns and ignore-fns, if necessary.