From: Roger L. <ro...@at...> - 2018-12-20 22:27:12
Dear all,

I have packaged valgrind as a snap. In case you aren't familiar with snaps, they are containerised packages that bundle their dependencies and run on a variety of Linux distributions, but are best supported on Ubuntu. The advantage for me is that they can provide up-to-date versions of software even on older distributions - so with this package, 3.14.0 is available on all supported versions of Ubuntu back to 14.04, for example.

To install on a snap-supported distro, you would run:

  snap install --beta --classic valgrind

At the moment the package is still very new, so I have only released it to the beta channel. As far as I have used the different tools everything seems fine, but I had been hoping for some other feedback. Once it is fully released you would run:

  snap install --classic valgrind

In both cases you would then use the valgrind command line programs as normal, but they would be running from /snap/... instead.

The store page for the package is https://snapcraft.io/valgrind

I'm writing to tell you out of interest, but also in case you would like to take over providing this package. I think that binaries managed by upstream are best, where possible. I am very well aware that packaging is a thankless task, but in my experience of building debs, a bit of rpm, and homebrew, the update procedure for this one is very straightforward: just update the version number in the snap packaging file (https://github.com/ralight/valgrind-snap/blob/master/snap/snapcraft.yaml), commit and push, then wait for the package to be autobuilt for all architectures and release it to the stable channel on the store.

Having said all that, I'd be quite happy to keep on maintaining the package myself.

Regards,

Roger
From: Mark W. <ma...@so...> - 2018-12-20 21:51:29
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=3af8e12b0d49dc87cd26258131ebd60c9b587c74

commit 3af8e12b0d49dc87cd26258131ebd60c9b587c74
Author: Julian Seward <js...@ac...>
Date:   Wed Dec 12 13:55:01 2018 +0100

    Fix memcheck/tests/undef_malloc_args failure.

    Try harder to trigger a memcheck error if a value is (partially)
    undefined.

Diff:
---
 coregrind/m_replacemalloc/vg_replace_malloc.c | 16 +++++++++++++---
 memcheck/tests/undef_malloc_args.c            | 16 ++++++++--------
 2 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/coregrind/m_replacemalloc/vg_replace_malloc.c b/coregrind/m_replacemalloc/vg_replace_malloc.c
index 28bdb4a..564829a 100644
--- a/coregrind/m_replacemalloc/vg_replace_malloc.c
+++ b/coregrind/m_replacemalloc/vg_replace_malloc.c
@@ -216,9 +216,19 @@ static void init(void);
    Apart of allowing memcheck to detect an error, the macro
    TRIGGER_MEMCHECK_ERROR_IF_UNDEFINED has no effect and has a minimal
    cost for other tools replacing malloc functions.
+
+   Creating an "artificial" use of _x that works reliably is not entirely
+   straightforward.  Simply comparing it against zero often produces no
+   warning if _x contains at least one nonzero bit is defined, because
+   Memcheck knows that the result of the comparison will be defined (cf
+   expensiveCmpEQorNE).
+
+   Really we want to PCast _x, so as to create a value which is entirely
+   undefined if any bit of _x is undefined.  But there's no portable way
+   to do that.
 */
-#define TRIGGER_MEMCHECK_ERROR_IF_UNDEFINED(x) \
-   if ((ULong)x == 0) __asm__ __volatile__( "" ::: "memory" )
+#define TRIGGER_MEMCHECK_ERROR_IF_UNDEFINED(_x) \
+   if ((UWord)(_x) == 0) __asm__ __volatile__( "" ::: "memory" )

 /*---------------------- malloc ----------------------*/

@@ -504,7 +514,7 @@ static void init(void);
    void VG_REPLACE_FUNCTION_EZU(10040,soname,fnname) (void *zone, void *p) \
    { \
       DO_INIT; \
-      TRIGGER_MEMCHECK_ERROR_IF_UNDEFINED((UWord) zone); \
+      TRIGGER_MEMCHECK_ERROR_IF_UNDEFINED((UWord)zone ^ (UWord)p); \
       MALLOC_TRACE(#fnname "(%p, %p)\n", zone, p ); \
       if (p == NULL) \
          return; \
diff --git a/memcheck/tests/undef_malloc_args.c b/memcheck/tests/undef_malloc_args.c
index 99e2799..654d70d 100644
--- a/memcheck/tests/undef_malloc_args.c
+++ b/memcheck/tests/undef_malloc_args.c
@@ -11,29 +11,29 @@ int main (int argc, char*argv[])
    {
       size_t size = def_size;
-      (void) VALGRIND_MAKE_MEM_UNDEFINED(&size, 1);
+      (void) VALGRIND_MAKE_MEM_UNDEFINED(&size, sizeof(size));
       p = malloc(size);
    }

-   (void) VALGRIND_MAKE_MEM_UNDEFINED(&p, 1);
+   (void) VALGRIND_MAKE_MEM_UNDEFINED(&p, sizeof(p));
    new_p = realloc(p, def_size);

-   (void) VALGRIND_MAKE_MEM_UNDEFINED(&new_p, 1);
+   (void) VALGRIND_MAKE_MEM_UNDEFINED(&new_p, sizeof(new_p));
    new_p = realloc(new_p, def_size);

-   (void) VALGRIND_MAKE_MEM_UNDEFINED(&new_p, 1);
+   (void) VALGRIND_MAKE_MEM_UNDEFINED(&new_p, sizeof(new_p));
    free (new_p);

    {
       size_t nmemb = 1;
-      (void) VALGRIND_MAKE_MEM_UNDEFINED(&nmemb, 1);
+      (void) VALGRIND_MAKE_MEM_UNDEFINED(&nmemb, sizeof(nmemb));
       new_p = calloc(nmemb, def_size);
       free (new_p);
    }
 #if 0
    {
       size_t alignment = 1;
-      (void) VALGRIND_MAKE_MEM_UNDEFINED(&alignment, 1);
+      (void) VALGRIND_MAKE_MEM_UNDEFINED(&alignment, sizeof(alignment));
       new_p = memalign(alignment, def_size);
       free(new_p);
    }
@@ -41,14 +41,14 @@ int main (int argc, char*argv[])
    {
       size_t nmemb = 16;
       size_t size = def_size;
-      (void) VALGRIND_MAKE_MEM_UNDEFINED(&size, 1);
+      (void) VALGRIND_MAKE_MEM_UNDEFINED(&size, sizeof(size));
       new_p = memalign(nmemb, size);
       free(new_p);
    }

    {
       size_t size = def_size;
-      (void) VALGRIND_MAKE_MEM_UNDEFINED(&size, 1);
+      (void) VALGRIND_MAKE_MEM_UNDEFINED(&size, sizeof(size));
      new_p = valloc(size);
      free (new_p);
    }
From: Mark W. <ma...@so...> - 2018-12-20 21:51:23
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=01f1936b1238a3bbeaf2821b61ecb36ba0ae15b1

commit 01f1936b1238a3bbeaf2821b61ecb36ba0ae15b1
Author: Julian Seward <js...@ac...>
Date:   Mon Dec 10 17:18:20 2018 +0100

    Adjust ppc set_AV_CR6 computation to help Memcheck instrumentation.

    * changes set_AV_CR6 so that it does scalar comparisons against zero,
      rather than sometimes against an all-ones word.  This is something
      that Memcheck can instrument exactly.

    * in Memcheck, requests expensive instrumentation of Iop_Cmp{EQ,NE}64
      by default on ppc64le.

    https://bugs.kde.org/show_bug.cgi?id=386945#c62

Diff:
---
 VEX/priv/guest_ppc_toIR.c | 103 ++++++++++++++++++++++++++++++++--------------
 memcheck/mc_translate.c   |   3 ++
 2 files changed, 76 insertions(+), 30 deletions(-)

diff --git a/VEX/priv/guest_ppc_toIR.c b/VEX/priv/guest_ppc_toIR.c
index 6b02fb4..18df822 100644
--- a/VEX/priv/guest_ppc_toIR.c
+++ b/VEX/priv/guest_ppc_toIR.c
@@ -2062,45 +2062,88 @@ static void set_CR0 ( IRExpr* result )
 static void set_AV_CR6 ( IRExpr* result, Bool test_all_ones )
 {
    /* CR6[0:3] = {all_ones, 0, all_zeros, 0}
-      all_ones  = (v[0] && v[1] && v[2] && v[3])
-      all_zeros = ~(v[0] || v[1] || v[2] || v[3])
+      32 bit: all_zeros =  (v[0] || v[1] || v[2] || v[3]) == 0x0000'0000
+              all_ones  = ~(v[0] && v[1] && v[2] && v[3]) == 0x0000'0000
+              where v[] denotes 32-bit lanes
+      or
+      64 bit: all_zeros =  (v[0] || v[1]) == 0x0000'0000'0000'0000
+              all_ones  = ~(v[0] && v[1]) == 0x0000'0000'0000'0000
+              where v[] denotes 64-bit lanes
+
+      The 32- and 64-bit versions compute the same thing, but the 64-bit
+      one tries to be a bit more efficient.
    */
-   IRTemp v0 = newTemp(Ity_V128);
-   IRTemp v1 = newTemp(Ity_V128);
-   IRTemp v2 = newTemp(Ity_V128);
-   IRTemp v3 = newTemp(Ity_V128);
-   IRTemp rOnes  = newTemp(Ity_I8);
-   IRTemp rZeros = newTemp(Ity_I8);
-
    vassert(typeOfIRExpr(irsb->tyenv,result) == Ity_V128);
-   assign( v0, result );
-   assign( v1, binop(Iop_ShrV128, result, mkU8(32)) );
-   assign( v2, binop(Iop_ShrV128, result, mkU8(64)) );
-   assign( v3, binop(Iop_ShrV128, result, mkU8(96)) );
+   IRTemp overlappedOred  = newTemp(Ity_V128);
+   IRTemp overlappedAnded = newTemp(Ity_V128);
+
+   if (mode64) {
+      IRTemp v0 = newTemp(Ity_V128);
+      IRTemp v1 = newTemp(Ity_V128);
+      assign( v0, result );
+      assign( v1, binop(Iop_ShrV128, result, mkU8(64)) );
+      assign(overlappedOred,
+             binop(Iop_OrV128, mkexpr(v0), mkexpr(v1)));
+      assign(overlappedAnded,
+             binop(Iop_AndV128, mkexpr(v0), mkexpr(v1)));
+   } else {
+      IRTemp v0 = newTemp(Ity_V128);
+      IRTemp v1 = newTemp(Ity_V128);
+      IRTemp v2 = newTemp(Ity_V128);
+      IRTemp v3 = newTemp(Ity_V128);
+      assign( v0, result );
+      assign( v1, binop(Iop_ShrV128, result, mkU8(32)) );
+      assign( v2, binop(Iop_ShrV128, result, mkU8(64)) );
+      assign( v3, binop(Iop_ShrV128, result, mkU8(96)) );
+      assign(overlappedOred,
+             binop(Iop_OrV128,
+                   binop(Iop_OrV128, mkexpr(v0), mkexpr(v1)),
+                   binop(Iop_OrV128, mkexpr(v2), mkexpr(v3))));
+      assign(overlappedAnded,
+             binop(Iop_AndV128,
+                   binop(Iop_AndV128, mkexpr(v0), mkexpr(v1)),
+                   binop(Iop_AndV128, mkexpr(v2), mkexpr(v3))));
+   }
+
+   IRTemp rOnes   = newTemp(Ity_I8);
+   IRTemp rZeroes = newTemp(Ity_I8);

-   assign( rZeros, unop(Iop_1Uto8,
-                        binop(Iop_CmpEQ32, mkU32(0xFFFFFFFF),
-                              unop(Iop_Not32,
-                                   unop(Iop_V128to32,
-                                        binop(Iop_OrV128,
-                                              binop(Iop_OrV128, mkexpr(v0), mkexpr(v1)),
-                                              binop(Iop_OrV128, mkexpr(v2), mkexpr(v3))))
-                                   ))) );
+   if (mode64) {
+      assign(rZeroes,
+             unop(Iop_1Uto8,
+                  binop(Iop_CmpEQ64,
+                        mkU64(0),
+                        unop(Iop_V128to64, mkexpr(overlappedOred)))));
+      assign(rOnes,
+             unop(Iop_1Uto8,
+                  binop(Iop_CmpEQ64,
+                        mkU64(0),
+                        unop(Iop_Not64,
+                             unop(Iop_V128to64, mkexpr(overlappedAnded))))));
+   } else {
+      assign(rZeroes,
+             unop(Iop_1Uto8,
+                  binop(Iop_CmpEQ32,
+                        mkU32(0),
+                        unop(Iop_V128to32, mkexpr(overlappedOred)))));
+      assign(rOnes,
+             unop(Iop_1Uto8,
+                  binop(Iop_CmpEQ32,
+                        mkU32(0),
+                        unop(Iop_Not32,
+                             unop(Iop_V128to32, mkexpr(overlappedAnded))))));
+   }
+
+   // rOnes might not be used below.  But iropt will remove it, so there's
+   // no inefficiency as a result.
    if (test_all_ones) {
-      assign( rOnes, unop(Iop_1Uto8,
-                          binop(Iop_CmpEQ32, mkU32(0xFFFFFFFF),
-                                unop(Iop_V128to32,
-                                     binop(Iop_AndV128,
-                                           binop(Iop_AndV128, mkexpr(v0), mkexpr(v1)),
-                                           binop(Iop_AndV128, mkexpr(v2), mkexpr(v3)))
-                                     ))) );
       putCR321( 6, binop(Iop_Or8,
                          binop(Iop_Shl8, mkexpr(rOnes),   mkU8(3)),
-                         binop(Iop_Shl8, mkexpr(rZeros), mkU8(1))) );
+                         binop(Iop_Shl8, mkexpr(rZeroes), mkU8(1))) );
    } else {
-      putCR321( 6, binop(Iop_Shl8, mkexpr(rZeros), mkU8(1)) );
+      putCR321( 6, binop(Iop_Shl8, mkexpr(rZeroes), mkU8(1)) );
    }
    putCR0( 6, mkU8(0) );
 }
diff --git a/memcheck/mc_translate.c b/memcheck/mc_translate.c
index 1e770b3..04ed864 100644
--- a/memcheck/mc_translate.c
+++ b/memcheck/mc_translate.c
@@ -8323,6 +8323,9 @@ IRSB* MC_(instrument) ( VgCallbackClosure* closure,
 #  elif defined(VGA_amd64)
    mce.dlbo.dl_Add64            = DLauto;
    mce.dlbo.dl_CmpEQ32_CmpNE32  = DLexpensive;
+#  elif defined(VGA_ppc64le)
+   // Needed by (at least) set_AV_CR6() in the front end.
+   mce.dlbo.dl_CmpEQ64_CmpNE64  = DLexpensive;
 #  endif

    /* preInstrumentationAnalysis() will allocate &mce.tmpHowUsed and then
From: Mark W. <ma...@so...> - 2018-12-20 21:51:18
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=3ef4b2c780ea76a80ae5beaa63c1cb1d6530988b

commit 3ef4b2c780ea76a80ae5beaa63c1cb1d6530988b
Author: Mark Wielaard <ma...@kl...>
Date:   Sun Dec 9 23:25:05 2018 +0100

    Implement ppc64 lxvb16x as 128-bit vector load with reversed double words.

    This makes it possible for memcheck to know which part of the 128bit
    vector is defined, even if the load is partly beyond an addressable
    block.

    Partially resolves bug 386945.

Diff:
---
 VEX/priv/guest_ppc_toIR.c | 61 ++++++++++++++---------------------------------
 1 file changed, 18 insertions(+), 43 deletions(-)

diff --git a/VEX/priv/guest_ppc_toIR.c b/VEX/priv/guest_ppc_toIR.c
index 72c9c13..6b02fb4 100644
--- a/VEX/priv/guest_ppc_toIR.c
+++ b/VEX/priv/guest_ppc_toIR.c
@@ -20702,54 +20702,29 @@ dis_vx_load ( UInt theInstr )
    {
       DIP("lxvb16x %d,r%u,r%u\n", (UInt)XT, rA_addr, rB_addr);

-      IRTemp byte[16];
-      int i;
-      UInt ea_off = 0;
-      IRExpr* irx_addr;
-      IRTemp tmp_low[9];
-      IRTemp tmp_hi[9];
+      /* The result of lxvb16x should be the same on big and little
+         endian systems. We do a host load, then reverse the bytes in
+         the double words. If the host load was little endian we swap
+         them around again. */

-      tmp_low[0] = newTemp( Ity_I64 );
-      tmp_hi[0] = newTemp( Ity_I64 );
-      assign( tmp_low[0], mkU64( 0 ) );
-      assign( tmp_hi[0], mkU64( 0 ) );
-
-      for ( i = 0; i < 8; i++ ) {
-         byte[i] = newTemp( Ity_I64 );
-         tmp_low[i+1] = newTemp( Ity_I64 );
-
-         irx_addr = binop( mkSzOp( ty, Iop_Add8 ), mkexpr( EA ),
-                           ty == Ity_I64 ? mkU64( ea_off ) : mkU32( ea_off ) );
-         ea_off += 1;
-
-         assign( byte[i], binop( Iop_Shl64,
-                                 unop( Iop_8Uto64,
-                                       load( Ity_I8, irx_addr ) ),
-                                 mkU8( 8 * ( 7 - i ) ) ) );
+      IRTemp high     = newTemp(Ity_I64);
+      IRTemp high_rev = newTemp(Ity_I64);
+      IRTemp low      = newTemp(Ity_I64);
+      IRTemp low_rev  = newTemp(Ity_I64);

-         assign( tmp_low[i+1],
-                 binop( Iop_Or64,
-                        mkexpr( byte[i] ), mkexpr( tmp_low[i] ) ) );
-      }
+      IRExpr *t128 = load( Ity_V128, mkexpr( EA ) );

-      for ( i = 0; i < 8; i++ ) {
-         byte[i + 8] = newTemp( Ity_I64 );
-         tmp_hi[i+1] = newTemp( Ity_I64 );
+      assign( high,     unop(Iop_V128HIto64, t128) );
+      assign( high_rev, unop(Iop_Reverse8sIn64_x1, mkexpr(high)) );
+      assign( low,      unop(Iop_V128to64, t128) );
+      assign( low_rev,  unop(Iop_Reverse8sIn64_x1, mkexpr(low)) );

-         irx_addr = binop( mkSzOp( ty, Iop_Add8 ), mkexpr( EA ),
-                           ty == Ity_I64 ? mkU64( ea_off ) : mkU32( ea_off ) );
-         ea_off += 1;
+      if (host_endness == VexEndnessLE)
+         t128 = binop( Iop_64HLtoV128, mkexpr (low_rev), mkexpr (high_rev) );
+      else
+         t128 = binop( Iop_64HLtoV128, mkexpr (high_rev), mkexpr (low_rev) );

-         assign( byte[i+8], binop( Iop_Shl64,
-                                   unop( Iop_8Uto64,
-                                         load( Ity_I8, irx_addr ) ),
-                                   mkU8( 8 * ( 7 - i ) ) ) );
-         assign( tmp_hi[i+1], binop( Iop_Or64,
-                                     mkexpr( byte[i+8] ),
-                                     mkexpr( tmp_hi[i] ) ) );
-      }
-      putVSReg( XT, binop( Iop_64HLtoV128,
-                           mkexpr( tmp_low[8] ), mkexpr( tmp_hi[8] ) ) );
+      putVSReg( XT, t128 );
       break;
    }
From: Mark W. <ma...@so...> - 2018-12-20 21:51:13
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=8d12697b157bb50ad98467b565b3a7a39097bf31

commit 8d12697b157bb50ad98467b565b3a7a39097bf31
Author: Mark Wielaard <ma...@kl...>
Date:   Sun Dec 9 14:26:39 2018 +0100

    memcheck: Allow unaligned loads of 128bit vectors on ppc64[le].

    On powerpc partial unaligned loads of vectors from partially invalid
    addresses are OK and could be generated by our translation of lxvd2x.

    Adjust partial_load memcheck tests to allow partial loads of 16 byte
    vectors on powerpc64.

    Part of resolving bug #386945.

Diff:
---
 memcheck/mc_main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/memcheck/mc_main.c b/memcheck/mc_main.c
index 737f79d..101916b 100644
--- a/memcheck/mc_main.c
+++ b/memcheck/mc_main.c
@@ -1354,6 +1354,9 @@ void mc_LOADV_128_or_256_slow ( /*OUT*/ULong* res,
       tl_assert(szB == 16); // s390 doesn't have > 128 bit SIMD
       /* OK if all loaded bytes are from the same page. */
       Bool alignedOK = ((a & 0xfff) <= 0x1000 - szB);
+#  elif defined(VGA_ppc64be) || defined(VGA_ppc64le)
+      /* lxvd2x might generate an unaligned 128 bit vector load. */
+      Bool alignedOK = (szB == 16);
 #  else
       /* OK if the address is aligned by the load size. */
       Bool alignedOK = (0 == (a & (szB - 1)));
From: Mark W. <ma...@so...> - 2018-12-20 21:51:08
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=98a73de1c0c83918a63e736b62d428bc2f98c943

commit 98a73de1c0c83918a63e736b62d428bc2f98c943
Author: Mark Wielaard <ma...@kl...>
Date:   Sun Dec 9 00:55:42 2018 +0100

    Implement ppc64 lxvd2x as 128-bit load with double word swap for ppc64le.

    This makes it possible for memcheck to know which part of the 128bit
    vector is defined, even if the load is partly beyond an addressable
    block.

    Partially resolves bug 386945.

Diff:
---
 VEX/priv/guest_ppc_toIR.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/VEX/priv/guest_ppc_toIR.c b/VEX/priv/guest_ppc_toIR.c
index 10f6daa..72c9c13 100644
--- a/VEX/priv/guest_ppc_toIR.c
+++ b/VEX/priv/guest_ppc_toIR.c
@@ -20590,16 +20590,22 @@ dis_vx_load ( UInt theInstr )
    }
    case 0x34C: // lxvd2x
    {
-      IROp addOp = ty == Ity_I64 ? Iop_Add64 : Iop_Add32;
-      IRExpr * high, *low;
-      ULong ea_off = 8;
-      IRExpr* high_addr;
+      IRExpr *t128;
       DIP("lxvd2x %d,r%u,r%u\n", XT, rA_addr, rB_addr);
-      high = load( Ity_I64, mkexpr( EA ) );
-      high_addr = binop( addOp, mkexpr( EA ), ty == Ity_I64 ? mkU64( ea_off )
-                                                            : mkU32( ea_off ) );
-      low = load( Ity_I64, high_addr );
-      putVSReg( XT, binop( Iop_64HLtoV128, high, low ) );
+      t128 = load( Ity_V128, mkexpr( EA ) );
+
+      /* The data in the vec register should be in big endian order.
+         So if we just did a little endian load then swap around the
+         high and low double words. */
+      if (host_endness == VexEndnessLE) {
+         IRTemp high = newTemp(Ity_I64);
+         IRTemp low  = newTemp(Ity_I64);
+         assign( high, unop(Iop_V128HIto64, t128) );
+         assign( low,  unop(Iop_V128to64, t128) );
+         t128 = binop( Iop_64HLtoV128, mkexpr (low), mkexpr (high) );
+      }
+
+      putVSReg( XT, t128 );
      break;
    }
    case 0x14C: // lxvdsx
From: Mark W. <ma...@so...> - 2018-12-20 21:51:03
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=5ecdecdcd3efecadf149dcd6276140e833b7f8e6

commit 5ecdecdcd3efecadf149dcd6276140e833b7f8e6
Author: Mark Wielaard <ma...@kl...>
Date:   Sat Dec 8 13:47:43 2018 -0500

    memcheck: Allow unaligned loads of words on ppc64[le].

    On powerpc partial unaligned loads of words from partially invalid
    addresses are OK and could be generated by our translation of ldbrx.

    Adjust partial_load memcheck tests to allow partial loads of words on
    powerpc64.

    Part of resolving bug #386945.

Diff:
---
 memcheck/mc_main.c                                |  3 +++
 memcheck/tests/Makefile.am                        |  2 ++
 memcheck/tests/partial_load.c                     | 12 ++++++------
 memcheck/tests/partial_load_dflt.stderr.exp-ppc64 | 23 +++++++++++++++++++++++
 memcheck/tests/partial_load_ok.stderr.exp-ppc64   | 23 +++++++++++++++++++++++
 5 files changed, 57 insertions(+), 6 deletions(-)

diff --git a/memcheck/mc_main.c b/memcheck/mc_main.c
index 3ef7cb9..737f79d 100644
--- a/memcheck/mc_main.c
+++ b/memcheck/mc_main.c
@@ -1508,6 +1508,9 @@ ULong mc_LOADVn_slow ( Addr a, SizeT nBits, Bool bigendian )
 #  if defined(VGA_mips64) && defined(VGABI_N32)
    if (szB == VG_WORDSIZE * 2 && VG_IS_WORD_ALIGNED(a)
        && n_addrs_bad < VG_WORDSIZE * 2)
+#  elif defined(VGA_ppc64be) || defined(VGA_ppc64le)
+   /* On power unaligned loads of words are OK. */
+   if (szB == VG_WORDSIZE && n_addrs_bad < VG_WORDSIZE)
 #  else
    if (szB == VG_WORDSIZE && VG_IS_WORD_ALIGNED(a)
        && n_addrs_bad < VG_WORDSIZE)
diff --git a/memcheck/tests/Makefile.am b/memcheck/tests/Makefile.am
index 2af4dd1..70b8ada 100644
--- a/memcheck/tests/Makefile.am
+++ b/memcheck/tests/Makefile.am
@@ -235,8 +235,10 @@ EXTRA_DIST = \
 	partiallydefinedeq.stdout.exp \
 	partial_load_ok.vgtest partial_load_ok.stderr.exp \
 	partial_load_ok.stderr.exp64 \
+	partial_load_ok.stderr.exp-ppc64 \
 	partial_load_dflt.vgtest partial_load_dflt.stderr.exp \
 	partial_load_dflt.stderr.exp64 \
+	partial_load_dflt.stderr.exp-ppc64 \
 	partial_load_dflt.stderr.expr-s390x-mvc \
 	pdb-realloc.stderr.exp pdb-realloc.vgtest \
 	pdb-realloc2.stderr.exp pdb-realloc2.stdout.exp pdb-realloc2.vgtest \
diff --git a/memcheck/tests/partial_load.c b/memcheck/tests/partial_load.c
index 0b2f10b..685ca8d 100644
--- a/memcheck/tests/partial_load.c
+++ b/memcheck/tests/partial_load.c
@@ -1,14 +1,14 @@
-
+#include <stdio.h>
 #include <stdlib.h>
 #include <assert.h>

 int main ( void )
 {
-  long  w;
-  int   i;
-  char* p;
-
+  long w; int i; char* p;
   assert(sizeof(long) == sizeof(void*));
+#if defined(__powerpc64__)
+  fprintf (stderr, "powerpc64\n"); /* Used to select correct .exp file. */
+#endif

   /* partial load, which --partial-loads-ok=yes should suppress */
   p = calloc( sizeof(long)-1, 1 );
@@ -16,7 +16,7 @@ int main ( void )
   w = *(long*)p;
   free(p);

-  /* partial but misaligned, cannot be suppressed */
+  /* partial but misaligned, ppc64[le] ok, but otherwise cannot be suppressed */
   p = calloc( sizeof(long), 1 );
   assert(p);
   p++;
diff --git a/memcheck/tests/partial_load_dflt.stderr.exp-ppc64 b/memcheck/tests/partial_load_dflt.stderr.exp-ppc64
new file mode 100644
index 0000000..cf32bcf
--- /dev/null
+++ b/memcheck/tests/partial_load_dflt.stderr.exp-ppc64
@@ -0,0 +1,23 @@
+
+powerpc64
+Invalid read of size 2
+   at 0x........: main (partial_load.c:30)
+ Address 0x........ is 0 bytes inside a block of size 1 alloc'd
+   at 0x........: calloc (vg_replace_malloc.c:...)
+   by 0x........: main (partial_load.c:28)
+
+Invalid read of size 8
+   at 0x........: main (partial_load.c:37)
+ Address 0x........ is 0 bytes inside a block of size 8 free'd
+   at 0x........: free (vg_replace_malloc.c:...)
+   by 0x........: main (partial_load.c:36)
+
+
+HEAP SUMMARY:
+    in use at exit: ... bytes in ... blocks
+  total heap usage: ... allocs, ... frees, ... bytes allocated
+
+For a detailed leak analysis, rerun with: --leak-check=full
+
+For counts of detected and suppressed errors, rerun with: -v
+ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
diff --git a/memcheck/tests/partial_load_ok.stderr.exp-ppc64 b/memcheck/tests/partial_load_ok.stderr.exp-ppc64
new file mode 100644
index 0000000..cf32bcf
--- /dev/null
+++ b/memcheck/tests/partial_load_ok.stderr.exp-ppc64
@@ -0,0 +1,23 @@
+
+powerpc64
+Invalid read of size 2
+   at 0x........: main (partial_load.c:30)
+ Address 0x........ is 0 bytes inside a block of size 1 alloc'd
+   at 0x........: calloc (vg_replace_malloc.c:...)
+   by 0x........: main (partial_load.c:28)
+
+Invalid read of size 8
+   at 0x........: main (partial_load.c:37)
+ Address 0x........ is 0 bytes inside a block of size 8 free'd
+   at 0x........: free (vg_replace_malloc.c:...)
+   by 0x........: main (partial_load.c:36)
+
+
+HEAP SUMMARY:
+    in use at exit: ... bytes in ... blocks
+  total heap usage: ... allocs, ... frees, ... bytes allocated
+
+For a detailed leak analysis, rerun with: --leak-check=full
+
+For counts of detected and suppressed errors, rerun with: -v
+ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
From: Mark W. <ma...@so...> - 2018-12-20 21:50:58
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=0ed17bc9f675c59cf1e93caa4290e0b21b84c0a0

commit 0ed17bc9f675c59cf1e93caa4290e0b21b84c0a0
Author: Mark Wielaard <ma...@kl...>
Date:   Fri Dec 7 10:42:22 2018 -0500

    Implement ppc64 ldbrx as 64-bit load and Iop_Reverse8sIn64_x1.

    This makes it possible for memcheck to analyse the new gcc strcmp
    inlined code correctly even if the ldbrx load is partly beyond an
    addressable block.

    Partially resolves bug 386945.

Diff:
---
 VEX/priv/guest_ppc_toIR.c | 38 +++++++++++++++++--------------
 VEX/priv/host_ppc_isel.c  | 57 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+), 17 deletions(-)

diff --git a/VEX/priv/guest_ppc_toIR.c b/VEX/priv/guest_ppc_toIR.c
index 8977d4f..10f6daa 100644
--- a/VEX/priv/guest_ppc_toIR.c
+++ b/VEX/priv/guest_ppc_toIR.c
@@ -9178,24 +9178,28 @@ static Bool dis_int_ldst_rev ( UInt theInstr )

    case 0x214: // ldbrx (Load Doubleword Byte-Reverse Indexed)
    {
-      // JRS FIXME:
-      // * is the host_endness conditional below actually necessary?
-      // * can we just do a 64-bit load followed by by Iop_Reverse8sIn64_x1?
-      //   That would be a lot more efficient.
-      IRExpr * nextAddr;
-      IRTemp w3 = newTemp( Ity_I32 );
-      IRTemp w4 = newTemp( Ity_I32 );
+      /* Caller makes sure we are only called in mode64. */
+
+      /* If we supported swapping LE/BE loads in the backend then we could
+         just load the value with the bytes reversed by doing a BE load
+         on an LE machine and a LE load on a BE machine.
+
+         IRTemp dw1 = newTemp(Ity_I64);
+         if (host_endness == VexEndnessBE)
+            assign( dw1, IRExpr_Load(Iend_LE, Ity_I64, mkexpr(EA)));
+         else
+            assign( dw1, IRExpr_Load(Iend_BE, Ity_I64, mkexpr(EA)));
+         putIReg( rD_addr, mkexpr(dw1) );
+
+         But since we currently don't we load the value as is and then
+         switch it around with Iop_Reverse8sIn64_x1. */
+
+      IRTemp dw1 = newTemp(Ity_I64);
+      IRTemp dw2 = newTemp(Ity_I64);
       DIP("ldbrx r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
-      assign( w1, load( Ity_I32, mkexpr( EA ) ) );
-      assign( w2, gen_byterev32( w1 ) );
-      nextAddr = binop( mkSzOp( ty, Iop_Add8 ), mkexpr( EA ),
-                        ty == Ity_I64 ? mkU64( 4 ) : mkU32( 4 ) );
-      assign( w3, load( Ity_I32, nextAddr ) );
-      assign( w4, gen_byterev32( w3 ) );
-      if (host_endness == VexEndnessLE)
-         putIReg( rD_addr, binop( Iop_32HLto64, mkexpr( w2 ), mkexpr( w4 ) ) );
-      else
-         putIReg( rD_addr, binop( Iop_32HLto64, mkexpr( w4 ), mkexpr( w2 ) ) );
+      assign( dw1, load(Ity_I64, mkexpr(EA)) );
+      assign( dw2, unop(Iop_Reverse8sIn64_x1, mkexpr(dw1)) );
+      putIReg( rD_addr, mkexpr(dw2) );
       break;
    }
diff --git a/VEX/priv/host_ppc_isel.c b/VEX/priv/host_ppc_isel.c
index 750cf8d..4fc3eb5 100644
--- a/VEX/priv/host_ppc_isel.c
+++ b/VEX/priv/host_ppc_isel.c
@@ -2210,6 +2210,63 @@ static HReg iselWordExpr_R_wrk ( ISelEnv* env, const IRExpr* e,
       return rr;
    }

+   case Iop_Reverse8sIn64_x1: {
+      /* See Iop_Reverse8sIn32_x1, but extended to 64bit.
+         Can only be used in 64bit mode. */
+      vassert (mode64);
+
+      HReg r_src  = iselWordExpr_R(env, e->Iex.Unop.arg, IEndianess);
+      HReg rr     = newVRegI(env);
+      HReg rMask  = newVRegI(env);
+      HReg rnMask = newVRegI(env);
+      HReg rtHi   = newVRegI(env);
+      HReg rtLo   = newVRegI(env);
+
+      // Copy r_src since we need to modify it
+      addInstr(env, mk_iMOVds_RR(rr, r_src));
+
+      // r = (r & 0x00FF00FF00FF00FF) << 8 | (r & 0xFF00FF00FF00FF00) >> 8
+      addInstr(env, PPCInstr_LI(rMask, 0x00FF00FF00FF00FFULL,
+                                True/* 64bit imm*/));
+      addInstr(env, PPCInstr_Unary(Pun_NOT, rnMask, rMask));
+      addInstr(env, PPCInstr_Alu(Palu_AND, rtHi, rr, PPCRH_Reg(rMask)));
+      addInstr(env, PPCInstr_Shft(Pshft_SHL, False/*64 bit shift*/,
+                                  rtHi, rtHi,
+                                  PPCRH_Imm(False/*!signed imm*/, 8)));
+      addInstr(env, PPCInstr_Alu(Palu_AND, rtLo, rr, PPCRH_Reg(rnMask)));
+      addInstr(env, PPCInstr_Shft(Pshft_SHR, False/*64 bit shift*/,
+                                  rtLo, rtLo,
+                                  PPCRH_Imm(False/*!signed imm*/, 8)));
+      addInstr(env, PPCInstr_Alu(Palu_OR, rr, rtHi, PPCRH_Reg(rtLo)));
+
+      // r = (r & 0x0000FFFF0000FFFF) << 16 | (r & 0xFFFF0000FFFF0000) >> 16
+      addInstr(env, PPCInstr_LI(rMask, 0x0000FFFF0000FFFFULL,
+                                True/* !64bit imm*/));
+      addInstr(env, PPCInstr_Unary(Pun_NOT, rnMask, rMask));
+      addInstr(env, PPCInstr_Alu(Palu_AND, rtHi, rr, PPCRH_Reg(rMask)));
+      addInstr(env, PPCInstr_Shft(Pshft_SHL, False/*64 bit shift*/,
+                                  rtHi, rtHi,
+                                  PPCRH_Imm(False/*!signed imm*/, 16)));
+      addInstr(env, PPCInstr_Alu(Palu_AND, rtLo, rr, PPCRH_Reg(rnMask)));
+      addInstr(env, PPCInstr_Shft(Pshft_SHR, False/*64 bit shift*/,
+                                  rtLo, rtLo,
+                                  PPCRH_Imm(False/*!signed imm*/, 16)));
+      addInstr(env, PPCInstr_Alu(Palu_OR, rr, rtHi, PPCRH_Reg(rtLo)));
+
+      // r = (r & 0x00000000FFFFFFFF) << 32 | (r & 0xFFFFFFFF00000000) >> 32
+      /* We don't need to mask anymore, just two more shifts and an or. */
+      addInstr(env, mk_iMOVds_RR(rtLo, rr));
+      addInstr(env, PPCInstr_Shft(Pshft_SHL, False/*64 bit shift*/,
+                                  rtLo, rtLo,
+                                  PPCRH_Imm(False/*!signed imm*/, 32)));
+      addInstr(env, PPCInstr_Shft(Pshft_SHR, False/*64 bit shift*/,
+                                  rr, rr,
+                                  PPCRH_Imm(False/*!signed imm*/, 32)));
+      addInstr(env, PPCInstr_Alu(Palu_OR, rr, rr, PPCRH_Reg(rtLo)));
+
+      return rr;
+   }
+
    case Iop_Left8:
    case Iop_Left16:
    case Iop_Left32:
From: Nicholas N. <n.n...@gm...> - 2018-12-20 09:20:03
Hi,

I have rewritten DHAT. The details, and a patch, are in https://bugs.kde.org/show_bug.cgi?id=402369. I'd be interested to hear feedback from anyone who wants to try it out.

Nick
From: Bart V. A. <bva...@so...> - 2018-12-20 02:37:42
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=5b4029b8cc8fc74a6e24e8bfaee1914e46081496

commit 5b4029b8cc8fc74a6e24e8bfaee1914e46081496
Author: Bart Van Assche <bva...@ac...>
Date:   Wed Dec 19 18:13:14 2018 -0800

    drd/tests/tsan_thread_wrappers_pthread.h: Fix MyThread::ThreadBody()

    See also https://bugs.kde.org/show_bug.cgi?id=402341.

Diff:
---
 drd/tests/tsan_thread_wrappers_pthread.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drd/tests/tsan_thread_wrappers_pthread.h b/drd/tests/tsan_thread_wrappers_pthread.h
index bdca574..878d440 100644
--- a/drd/tests/tsan_thread_wrappers_pthread.h
+++ b/drd/tests/tsan_thread_wrappers_pthread.h
@@ -368,7 +368,7 @@ class MyThread {
     }
     if (my_thread->wpvpv_)
       return my_thread->wpvpv_(my_thread->arg_);
-    if (my_thread->wpvpv_)
+    if (my_thread->wvpv_)
       my_thread->wvpv_(my_thread->arg_);
     if (my_thread->wvv_)
       my_thread->wvv_();