From: Maran P. <ma...@li...> - 2013-08-19 02:23:04
|
valgrind revision: 13503 VEX revision: 2744 C compiler: gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973] GDB: GNU gdb (GDB) SUSE (7.3-0.6.1) Assembler: GNU assembler (GNU Binutils; SUSE Linux Enterprise 11) 2.21.1 C library: GNU C Library stable release version 2.11.3 (20110527) uname -mrs: Linux 3.0.80-0.7-default s390x Vendor version: Welcome to SUSE Linux Enterprise Server 11 SP2 (s390x) - Kernel %r (%t). Nightly build on sless390 ( SUSE Linux Enterprise Server 11 SP1 gcc 4.3.4 on z196 (s390x) ) Started at 2013-08-19 03:45:01 CEST Ended at 2013-08-19 04:22:52 CEST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... done Regression test results follow == 636 tests, 0 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == |
|
From: Tom H. <to...@co...> - 2013-08-19 02:09:16
|
valgrind revision: 13503 VEX revision: 2744 C compiler: gcc (GCC) 4.8.1 20130603 (Red Hat 4.8.1-1) GDB: GNU gdb (GDB) Fedora (7.6-34.fc19) Assembler: GNU assembler version 2.23.52.0.1-9.fc19 20130226 C library: GNU C Library (GNU libc) stable release version 2.17 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 19 (Schrödinger's Cat) Nightly build on bristol ( x86_64, Fedora 19 (Schrödinger's Cat) ) Started at 2013-08-19 02:32:03 BST Ended at 2013-08-19 03:08:56 BST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 660 tests, 3 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/dw4 (stderr) memcheck/tests/origin5-bz2 (stderr) exp-sgcheck/tests/hackedbz2 (stderr) |
|
From: <sv...@va...> - 2013-08-18 10:20:31
|
sewardj 2013-08-18 10:20:22 +0000 (Sun, 18 Aug 2013)
New Revision: 13503
Log:
Minor tidying of the generalised V128/V256 shadow helper returns
that landed in r13500. No functional change.
Modified files:
trunk/memcheck/mc_main.c
Modified: trunk/memcheck/mc_main.c (+28 -26)
===================================================================
--- trunk/memcheck/mc_main.c 2013-08-18 10:00:59 +00:00 (rev 13502)
+++ trunk/memcheck/mc_main.c 2013-08-18 10:20:22 +00:00 (rev 13503)
@@ -1130,11 +1130,12 @@
static
__attribute__((noinline))
-void mc_LOADVx_slow ( /*OUT*/ULong* res, Addr a, SizeT nBits, Bool bigendian )
+void mc_LOADV_128_or_256_slow ( /*OUT*/ULong* res,
+ Addr a, SizeT nBits, Bool bigendian )
{
- ULong pessim[4]; /* only used when p-l-ok=yes */
+ ULong pessim[4]; /* only used when p-l-ok=yes */
SSizeT szB = nBits / 8;
- SSizeT szL = szB / 8; /* Size in longs */
+ SSizeT szL = szB / 8; /* Size in Longs (64-bit units) */
SSizeT i, j; /* Must be signed. */
SizeT n_addrs_bad = 0;
Addr ai;
@@ -1149,7 +1150,7 @@
the pessim array. */
tl_assert(szL <= sizeof(pessim) / sizeof(pessim[0]));
- for (j=0 ; j < szL ; j++) {
+ for (j = 0; j < szL; j++) {
pessim[j] = V_BITS64_DEFINED;
res[j] = V_BITS64_UNDEFINED;
}
@@ -1162,12 +1163,12 @@
--partial-loads-ok=yes. n_addrs_bad is redundant (the relevant
info can be gleaned from the pessim array) but is used as a
cross-check. */
- for (j = szL-1 ; j >= 0 ; j--) {
- ULong vbits64 = V_BITS64_UNDEFINED;
- ULong pessim64 = V_BITS64_DEFINED;
- UWord long_index = byte_offset_w(szL, bigendian, j);
+ for (j = szL-1; j >= 0; j--) {
+ ULong vbits64 = V_BITS64_UNDEFINED;
+ ULong pessim64 = V_BITS64_DEFINED;
+ UWord long_index = byte_offset_w(szL, bigendian, j);
for (i = 8-1; i >= 0; i--) {
- PROF_EVENT(31, "mc_LOADV128_slow(loop)");
+ PROF_EVENT(31, "mc_LOADV_128_or_256_slow(loop)");
ai = a + 8*long_index + byte_offset_w(8, bigendian, i);
ok = get_vbits8(ai, &vbits8);
vbits64 <<= 8;
@@ -1217,7 +1218,7 @@
/* "at least one of the addresses is invalid" */
ok = False;
- for (j=0 ; j < szL ; j++)
+ for (j = 0; j < szL; j++)
ok |= pessim[j] != V_BITS8_DEFINED;
tl_assert(ok);
@@ -1230,7 +1231,7 @@
tl_assert(V_BIT_UNDEFINED == 1 && V_BIT_DEFINED == 0);
/* (really need "UifU" here...)
vbits[j] UifU= pessim[j] (is pessimised by it, iow) */
- for (j = szL-1 ; j >= 0 ; j--)
+ for (j = szL-1; j >= 0; j--)
res[j] |= pessim[j];
return;
}
@@ -4205,29 +4206,30 @@
/* ------------------------ Size = 16 ------------------------ */
static INLINE
-void mc_LOADVx ( /*OUT*/ULong* res, Addr a, SizeT nBits, Bool isBigEndian )
+void mc_LOADV_128_or_256 ( /*OUT*/ULong* res,
+ Addr a, SizeT nBits, Bool isBigEndian )
{
- PROF_EVENT(200, "mc_LOADVx");
+ PROF_EVENT(200, "mc_LOADV_128_or_256");
#ifndef PERF_FAST_LOADV
- mc_LOADVx_slow( res, a, nBits, isBigEndian );
+ mc_LOADV_128_or_256_slow( res, a, nBits, isBigEndian );
return;
#else
{
- UWord sm_off16, vabits16;
+ UWord sm_off16, vabits16, j;
+ UWord nBytes = nBits / 8;
+ UWord nULongs = nBytes / 8;
SecMap* sm;
- int j;
- int nBytes = nBits / 8;
if (UNLIKELY( UNALIGNED_OR_HIGH(a,nBits) )) {
- PROF_EVENT(201, "mc_LOADVx-slow1");
- mc_LOADVx_slow( res, a, nBits, isBigEndian );
+ PROF_EVENT(201, "mc_LOADV_128_or_256-slow1");
+ mc_LOADV_128_or_256_slow( res, a, nBits, isBigEndian );
return;
}
/* Handle common cases quickly: a (and a+8 and a+16 etc.) is
suitably aligned, is mapped, and addressible. */
- for (j=0 ; j<nBytes/8 ; ++j) {
+ for (j = 0; j < nULongs; j++) {
sm = get_secmap_for_reading_low(a + 8*j);
sm_off16 = SM_OFF_16(a + 8*j);
vabits16 = ((UShort*)(sm->vabits8))[sm_off16];
@@ -4241,8 +4243,8 @@
} else {
/* Slow case: some block of 8 bytes are not all-defined or
all-undefined. */
- PROF_EVENT(202, "mc_LOADVx-slow2");
- mc_LOADVx_slow( res, a, nBits, isBigEndian );
+ PROF_EVENT(202, "mc_LOADV_128_or_256-slow2");
+ mc_LOADV_128_or_256_slow( res, a, nBits, isBigEndian );
return;
}
}
@@ -4253,20 +4255,20 @@
VG_REGPARM(2) void MC_(helperc_LOADV256be) ( /*OUT*/V256* res, Addr a )
{
- mc_LOADVx(&res->w64[0], a, 256, True);
+ mc_LOADV_128_or_256(&res->w64[0], a, 256, True);
}
VG_REGPARM(2) void MC_(helperc_LOADV256le) ( /*OUT*/V256* res, Addr a )
{
- mc_LOADVx(&res->w64[0], a, 256, False);
+ mc_LOADV_128_or_256(&res->w64[0], a, 256, False);
}
VG_REGPARM(2) void MC_(helperc_LOADV128be) ( /*OUT*/V128* res, Addr a )
{
- mc_LOADVx(&res->w64[0], a, 128, True);
+ mc_LOADV_128_or_256(&res->w64[0], a, 128, True);
}
VG_REGPARM(2) void MC_(helperc_LOADV128le) ( /*OUT*/V128* res, Addr a )
{
- mc_LOADVx(&res->w64[0], a, 128, False);
+ mc_LOADV_128_or_256(&res->w64[0], a, 128, False);
}
/* ------------------------ Size = 8 ------------------------ */
|
|
From: <sv...@va...> - 2013-08-18 10:01:13
|
sewardj 2013-08-18 10:00:59 +0000 (Sun, 18 Aug 2013)
New Revision: 13502
Log:
Make sure that sh-mem-vec256 is only built on platforms that can
assemble the relevant instructions.
Modified files:
trunk/memcheck/tests/amd64/Makefile.am
Modified: trunk/memcheck/tests/amd64/Makefile.am (+3 -1)
===================================================================
--- trunk/memcheck/tests/amd64/Makefile.am 2013-08-16 08:34:10 +00:00 (rev 13501)
+++ trunk/memcheck/tests/amd64/Makefile.am 2013-08-18 10:00:59 +00:00 (rev 13502)
@@ -44,9 +44,11 @@
insn-pmovmskb \
more_x87_fp \
sh-mem-vec128 \
- sh-mem-vec256 \
sse_memory \
xor-undef-amd64
+if BUILD_AVX_TESTS
+ check_PROGRAMS += sh-mem-vec256
+endif
AM_CFLAGS += @FLAG_M64@
AM_CXXFLAGS += @FLAG_M64@
|
|
From: Philippe W. <phi...@sk...> - 2013-08-17 03:57:05
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.7.2 20121109 (Red Hat 4.7.2-8) GDB: GNU gdb (GDB) Fedora (7.5.1-37.fc18) Assembler: GNU assembler version 2.23.51.0.1-7.fc18 20120806 C library: GNU C Library stable release version 2.16 uname -mrs: Linux 3.7.2-204.fc18.ppc64 ppc64 Vendor version: Fedora release 18 (Spherical Cow) Nightly build on gcc110 ( Fedora release 18 (Spherical Cow), ppc64 ) Started at 2013-08-16 20:00:10 PDT Ended at 2013-08-16 20:56:31 PDT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 558 tests, 31 stderr failures, 3 stdout failures, 0 stderrB failures, 0 stdoutB failures, 2 post failures == memcheck/tests/linux/getregset (stdout) memcheck/tests/linux/getregset (stderr) memcheck/tests/ppc64/power_ISA2_05 (stdout) memcheck/tests/supp_unknown (stderr) memcheck/tests/varinfo6 (stderr) memcheck/tests/wrap8 (stdout) memcheck/tests/wrap8 (stderr) massif/tests/big-alloc (post) massif/tests/deep-D (post) helgrind/tests/annotate_rwlock (stderr) helgrind/tests/free_is_write (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/locked_vs_unlocked1_fwd (stderr) helgrind/tests/locked_vs_unlocked1_rev (stderr) helgrind/tests/locked_vs_unlocked2 (stderr) helgrind/tests/locked_vs_unlocked3 (stderr) helgrind/tests/pth_barrier1 (stderr) helgrind/tests/pth_barrier2 (stderr) helgrind/tests/pth_barrier3 (stderr) helgrind/tests/pth_destroy_cond (stderr) helgrind/tests/rwlock_race (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc06_two_races_xml (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) 
helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) |
|
From: Tom H. <to...@co...> - 2013-08-17 03:29:28
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.3.0 20080428 (Red Hat 4.3.0-8) GDB: Assembler: GNU assembler version 2.18.50.0.6-2 20080403 C library: GNU C Library stable release version 2.8 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 9 (Sulphur) Nightly build on bristol ( x86_64, Fedora 9 ) Started at 2013-08-17 03:51:55 BST Ended at 2013-08-17 04:29:13 BST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 637 tests, 1 stderr failure, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/amd64/insn-pcmpistri (stderr) none/tests/amd64/sse4-64 (stdout) |
|
From: Tom H. <to...@co...> - 2013-08-17 03:27:57
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2) GDB: Assembler: GNU assembler version 2.19.51.0.14-3.fc11 20090722 C library: GNU C Library stable release version 2.10.2 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 11 (Leonidas) Nightly build on bristol ( x86_64, Fedora 11 ) Started at 2013-08-17 03:41:35 BST Ended at 2013-08-17 04:27:41 BST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 639 tests, 1 stderr failure, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/long_namespace_xml (stderr) none/tests/amd64/sse4-64 (stdout) |
|
From: Tom H. <to...@co...> - 2013-08-17 03:13:08
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.4.5 20101112 (Red Hat 4.4.5-2) GDB: Assembler: GNU assembler version 2.20.51.0.2-20.fc13 20091009 C library: GNU C Library stable release version 2.12.2 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 13 (Goddard) Nightly build on bristol ( x86_64, Fedora 13 ) Started at 2013-08-17 03:32:41 BST Ended at 2013-08-17 04:12:56 BST Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 639 tests, 2 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures == none/tests/fdleak_ipv4 (stdout) none/tests/fdleak_ipv4 (stderr) helgrind/tests/pth_barrier3 (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 639 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == helgrind/tests/pth_barrier3 (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short 2013-08-17 03:51:41.755443543 +0100 --- new.short 2013-08-17 04:12:56.688739161 +0100 *************** *** 8,10 **** ! == 639 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == helgrind/tests/pth_barrier3 (stderr) --- 8,12 ---- ! == 639 tests, 2 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures == ! none/tests/fdleak_ipv4 (stdout) ! none/tests/fdleak_ipv4 (stderr) helgrind/tests/pth_barrier3 (stderr) |
|
From: Tom H. <to...@co...> - 2013-08-17 03:13:00
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4) GDB: GNU gdb (GDB) Fedora (7.2-52.fc14) Assembler: GNU assembler version 2.20.51.0.7-8.fc14 20100318 C library: GNU C Library stable release version 2.13 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 14 (Laughlin) Nightly build on bristol ( x86_64, Fedora 14 ) Started at 2013-08-17 03:23:17 BST Ended at 2013-08-17 04:12:41 BST Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 658 tests, 2 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) none/tests/fdleak_ipv4 (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 658 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short 2013-08-17 03:43:41.735083219 +0100 --- new.short 2013-08-17 04:12:41.166051446 +0100 *************** *** 8,11 **** ! == 658 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) --- 8,12 ---- ! == 658 tests, 2 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) + none/tests/fdleak_ipv4 (stderr) |
|
From: Tom H. <to...@co...> - 2013-08-17 02:52:59
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2) GDB: GNU gdb (GDB) Fedora (7.3.1-48.fc15) Assembler: GNU assembler version 2.21.51.0.6-6.fc15 20110118 C library: GNU C Library stable release version 2.14.1 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 15 (Lovelock) Nightly build on bristol ( x86_64, Fedora 15 ) Started at 2013-08-17 03:13:19 BST Ended at 2013-08-17 03:52:39 BST Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 660 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 660 tests, 2 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) none/tests/fdleak_ipv4 (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short 2013-08-17 03:32:54.985160357 +0100 --- new.short 2013-08-17 03:52:39.849273186 +0100 *************** *** 8,12 **** ! == 660 tests, 2 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) - none/tests/fdleak_ipv4 (stderr) --- 8,11 ---- ! == 660 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) |
|
From: Tom H. <to...@co...> - 2013-08-17 02:45:06
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2) GDB: GNU gdb (GDB) Fedora (7.3.50.20110722-16.fc16) Assembler: GNU assembler version 2.21.53.0.1-6.fc16 20110716 C library: GNU C Library development release version 2.14.90 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 16 (Verne) Nightly build on bristol ( x86_64, Fedora 16 ) Started at 2013-08-17 03:02:32 BST Ended at 2013-08-17 03:44:40 BST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 660 tests, 1 stderr failure, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) |
|
From: Tom H. <to...@co...> - 2013-08-17 02:33:46
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2) GDB: GNU gdb (GDB) Fedora (7.4.50.20120120-54.fc17) Assembler: GNU assembler version 2.22.52.0.1-10.fc17 20120131 C library: GNU C Library stable release version 2.15 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 17 (Beefy Miracle) Nightly build on bristol ( x86_64, Fedora 17 (Beefy Miracle) ) Started at 2013-08-17 02:52:03 BST Ended at 2013-08-17 03:33:27 BST Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 660 tests, 6 stderr failures, 2 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == gdbserver_tests/mcinfcallRU (stderr) gdbserver_tests/mcinfcallWSRU (stderr) gdbserver_tests/mcmain_pic (stderr) memcheck/tests/origin5-bz2 (stderr) none/tests/fdleak_ipv4 (stdout) none/tests/fdleak_ipv4 (stderr) exp-sgcheck/tests/preen_invars (stdout) exp-sgcheck/tests/preen_invars (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... 
failed Regression test results follow == 660 tests, 5 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures == gdbserver_tests/mcinfcallRU (stderr) gdbserver_tests/mcinfcallWSRU (stderr) gdbserver_tests/mcmain_pic (stderr) memcheck/tests/origin5-bz2 (stderr) exp-sgcheck/tests/preen_invars (stdout) exp-sgcheck/tests/preen_invars (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short 2013-08-17 03:11:19.899363560 +0100 --- new.short 2013-08-17 03:33:27.262508237 +0100 *************** *** 8,10 **** ! == 660 tests, 5 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures == gdbserver_tests/mcinfcallRU (stderr) --- 8,10 ---- ! == 660 tests, 6 stderr failures, 2 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == gdbserver_tests/mcinfcallRU (stderr) *************** *** 13,14 **** --- 13,16 ---- memcheck/tests/origin5-bz2 (stderr) + none/tests/fdleak_ipv4 (stdout) + none/tests/fdleak_ipv4 (stderr) exp-sgcheck/tests/preen_invars (stdout) |
|
From: Tom H. <to...@co...> - 2013-08-17 02:25:08
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.7.2 20121109 (Red Hat 4.7.2-8) GDB: GNU gdb (GDB) Fedora (7.5.1-38.fc18) Assembler: GNU assembler version 2.23.51.0.1-10.fc18 20120806 C library: GNU C Library stable release version 2.16 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 18 (Spherical Cow) Nightly build on bristol ( x86_64, Fedora 18 (Spherical Cow) ) Started at 2013-08-17 02:41:37 BST Ended at 2013-08-17 03:24:48 BST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 660 tests, 2 stderr failures, 1 stdout failure, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/origin5-bz2 (stderr) exp-sgcheck/tests/preen_invars (stdout) exp-sgcheck/tests/preen_invars (stderr) |
|
From: Maran P. <ma...@li...> - 2013-08-17 02:24:55
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973] GDB: GNU gdb (GDB) SUSE (7.3-0.6.1) Assembler: GNU assembler (GNU Binutils; SUSE Linux Enterprise 11) 2.21.1 C library: GNU C Library stable release version 2.11.3 (20110527) uname -mrs: Linux 3.0.80-0.7-default s390x Vendor version: Welcome to SUSE Linux Enterprise Server 11 SP2 (s390x) - Kernel %r (%t). Nightly build on sless390 ( SUSE Linux Enterprise Server 11 SP1 gcc 4.3.4 on z196 (s390x) ) Started at 2013-08-17 03:45:01 CEST Ended at 2013-08-17 04:24:44 CEST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... done Regression test results follow == 636 tests, 0 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == |
|
From: Tom H. <to...@co...> - 2013-08-17 02:09:50
|
valgrind revision: 13501 VEX revision: 2744 C compiler: gcc (GCC) 4.8.1 20130603 (Red Hat 4.8.1-1) GDB: GNU gdb (GDB) Fedora (7.6-34.fc19) Assembler: GNU assembler version 2.23.52.0.1-9.fc19 20130226 C library: GNU C Library (GNU libc) stable release version 2.17 uname -mrs: Linux 3.9.5-301.fc19.x86_64 x86_64 Vendor version: Fedora release 19 (Schrödinger's Cat) Nightly build on bristol ( x86_64, Fedora 19 (Schrödinger's Cat) ) Started at 2013-08-17 02:31:58 BST Ended at 2013-08-17 03:09:23 BST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 660 tests, 3 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures == memcheck/tests/dw4 (stderr) memcheck/tests/origin5-bz2 (stderr) exp-sgcheck/tests/hackedbz2 (stderr) |
|
From: Peter B. <be...@vn...> - 2013-08-17 00:09:48
|
On Fri, 2013-08-16 at 15:06 -0700, Carl E. Love wrote:
> The code sequence is essentially as follows:
>
> tbegin.
> beq <failure path>
> // success path
>
> Note, there are no restrictions preventing the compiler from putting
> additional instructions between the tbegin and the beq in the above code
> sequence.
True, although if the compiler does move something there, it will be
a redundant computation that would have been computed on both paths
anyway. A more common occurrence is that the compiler reverses the
sense of the branch, so instead of the code above, you might see:
tbegin.
bne <success path>
// failure path
Logically it makes no difference and shouldn't affect the implementation
in valgrind, since the target address branched to on failure is always
the instruction after tbegin. (modulo being changed by writing to the
TFHAR register) and not the address of the failure handler. It's the
job of the cr0 value and the branch to get you to the failure handler.
> We do not tell the CPU where to go on failure as is done in
> the Intel T_BEGIN.
Well, not explicitly, but we do implicitly, and it is
just CIA+4. From a GCC code generation point of view, when an
xbegin is emitted on Intel, the failure "label" passed to the
xbegin instruction is always at CIA+4, so from a practical
standpoint, Intel and POWER are identical here.
> In my implementation, the host executes the tbegin. I didn't do
> anything to set or change the host TFIAR register. I capture the value
> from the condition code register and write that into the guest machine's
> condition code register.
How do you capture the cr0 value? I ask, because as part of the code
generation for the __builtin_tbegin builtin, I destroy its value.
The only way to get that value is through use of the __builtin_ttest()
builtin, which returns the 4-bit value that was written to cr0 on
completion of the tbegin...as long as you haven't executed any more HTM
instructions in the meantime.
> > * (if the transaction does not fail)
> > the guest CPU arrives at T_END. It calls another dirty helper
> > function which first does T_END on the host, then pops
> > guest-fail-handler off the stack of handler addresses for
> > the thread. The transaction is over.
>
> In my implementation the T_END instruction is actually a noop right now.
> Looking at it again as I write this response I see this is an error in
> my current implementation. I will fix it.
If the host doesn't execute a tend., your host transaction will abort
100% of the time. No wonder things didn't work for you.
>> * (if the transaction fails)
>> the host CPU will jump to host-fail-handler.
>> This behaves similarly to how synchronous signals are currently
>> handled:
>> basically host-fail-handler must longjmp out of the JITted code, over
>> m_dispatch/dispatch-*.S, back into the scheduler, indicating somehow
>> that a transaction has failed. The scheduler can then fix up the guest
>> state, by popping guest-fail-handler off this thread's handler stack,
>> setting the guest state program counter to that value, and letting the
>> guest CPU resume.
One question I have is: what should we tell the guest program if the
host transaction fails? I'm guessing we want to mirror the transaction
success/failure up into the guest by making it look like the guest
transaction succeeded/failed too. Otherwise, all guest transactions
will seem to succeed regardless of the underlying host transactions.
That's not too realistic, just like the first implementation where
it always fails.
Peter
|
|
From: Carl E. L. <ce...@li...> - 2013-08-16 22:06:14
|
On Fri, 2013-08-16 at 16:24 +0200, Julian Seward wrote:
> > I have implemented the second proposal for Power. [...]
>
> I'm not sure I understand the details of how TM is presented in the Power
> instruction set and architectural state. It seems broadly similar to the
> Intel scheme, though, in which there are two basic primitives:
>
> T_BEGIN, which takes the failure handler address as a parameter
> T_END
>
> T_BEGIN starts a new transaction. T_END ends it and releases any resources
> associated with it. If the transaction fails for any reason, the processor
> jumps to the handler address specified by T_BEGIN. Typically, if the
> transaction fails, some registers will be set, indicating the reason, before
> jumping to the failure handler; although that is secondary to this
> discussion.
On Power, the compiler generates the T_BEGIN instruction followed by a
conditional branch instruction. The result of executing the T_BEGIN
instruction is to set the condition code register. If the T_BEGIN
succeeds, then the subsequent branch instruction will cause the control
flow to follow the successful code path. Otherwise, the branch causes
execution to follow the failure path. The code sequence is essentially
as follows:
tbegin.
beq <failure path>
// success path
Note, the are no restrictions preventing the compiler from putting
additional instructions between the tbegin and the beq in the above code
sequence. We do not tell the CPU where to go on failure as is done in
the Intel T_BEGIN.
Power has three registers for use by the TM instructions. Here is
Peter's description of the registers.
Transaction Failure Handler Address Register (TFHAR):
This register holds the address the hardware will start
executing from upon a transaction failure/abort. It is
initialized by the tbegin. instruction to CIA+4 (in IBM
parlance), which means it contains the address of the
instruction immediately following the tbegin. instruction.
It can be modified by a "mtspr TFHAR,<reg>", but that
should be a fairly rare occurrence. Similar to x86's
common usage, where the xbegin's %reg is set to the
address following the xbegin.
Transaction EXception And Summary Register (TEXASR):
This register is normally used by failure handlers for
determining why a transaction failed, but it also holds
information about the depth of nested transactions we
currently have.
Transaction Failure Instruction Address Register (TFIAR):
This register holds the address of the instruction
that caused the transaction failure (when possible).
>
> My vague implementation sketch for this second proposal was:
>
> * when the guest CPU guest arrives at the T_BEGIN(guest-fail-handler)
> instruction, call a dirty helper function which:
>
> - adds guest-fail-handler to a stack of handler addresses
> for the current thread
Power places the address of the error branch code into the TFHAR. The
nesting level for the TM is updated in the TEXASR register. For now,
let's not consider nested TMs.
>
> - (on the host) does T_BEGIN(host-fail-handler). Note that
> host-fail-handler is part of the valgrind C code and is
> definitely != guest-fail-handler
In my implementation, the host executes the tbegin. I didn't do
anything to set or change the host TFIAR register. I capture the value
from the condition code register and write that into the guest machine's
condition code register. I also capture the value of the register
(TEXASR) containing the reason for the failure and put it in the guest
TEXASR. Thus the guest code control flow will take either the success or
failure path based on the updated guest condition code register which
contains the result of executing the T_BEGIN on the host.
>
> * the dirty helper returns (to JITted code), and continues. This
> is (of course) part of the transaction on the host. The guest CPU
> therefore continues with the instructions that are part of the
> (guest) transaction.
The code that I have as part of the tbegin implementation, which copies
the condition code register and the TEXASR value from the host to the
guest machine registers, is, I guess, what you would refer to as the
dirty helper. In my case, the code is not in an explicit function, but
it could be put into an explicit dirty helper function. The code I am
talking about is:
+ /* The TEXASR is returned from the TBEGIN instruction in the upper
+  * 64-bits, the CC register is returned in the lowest 32-bits.
+  */
+ assign( rDst, unop( Iop_TBEGIN, mkU32( R_field ) ) );
+
+ assign( texasr, unop( Iop_128HIto64, mkexpr( rDst ) ) );
+ assign( lDst, unop( Iop_128to64, mkexpr( rDst ) ) );
+ assign( CondCode, unop( Iop_64to32, mkexpr( lDst ) ) );
+
+ /* Set the CR0 field to indicate the tbegin failed. Then let
+ * the code do the branch to the success/failure path.
+ *
+ * 000 || 0 Transaction initiation successful,
+ * unnested (Transaction state of
+ * Non-transactional prior to tbegin.)
+ * 010 || 0 Transaction initiation successful, nested
+ * (Transaction state of Transactional
+ * prior to tbegin.)
+ * 001 || 0 Transaction initiation unsuccessful,
+ * (Transaction state of Suspended prior
+ * to tbegin.)
+ */
+
+ /* 0x2 takes transactional path */
+ /* 0x0 takes the failure path */
+
+ putGST( PPC_GST_TFIAR, mkU64( guest_CIA_curr_instr) );
+ putGST( PPC_GST_TEXASR, mkexpr( texasr ) );
+ putGST( PPC_GST_TFHAR, mkU64( guest_CIA_curr_instr+4 ) );
+
+ return True;
+
+ break;
>
> * (if the transaction does not fail)
> the guest CPU arrives at T_END. It calls another dirty helper
> function which first does T_END on the host, then pops
> guest-fail-handler off the stack of handler addresses for
> the thread. The transaction is over.
In my implementation the T_END instruction is actually a noop right now.
Looking at it again as I write this response, I see this is an error in
my current implementation; I will fix it.
What happens is that the host executes the guest instructions issued to
it as well as the instructions from Valgrind itself. The host hardware
then detects that it cannot track all of the instructions being
executed. Since I didn't issue the T_END, the TM is bound to fail
eventually. The host hardware rolls the register state back to the
state at the tbegin, updates the condition code register to indicate
failure, and sets the TEXASR register. Then the return from the host's
execution of the T_BEGIN effectively happens again, but this time the
condition code register and TEXASR are set for failure, so we go down
the failure path in the guest code. So, in my implementation, I see
that the HW resources were exceeded, and I only observe the results of
the failure path.
>
> * (if the transaction fails)
> the host CPU will jump to host-fail-handler.
> This behaves similarly to how synchronous signals are currently
> handled:
> basically host-fail-handler must longjmp out of the JITted code, over
> m_dispatch/dispatch-*.S, back into the scheduler, indicating somehow
> that a transaction has failed. The scheduler can then fix up the guest
> state, by popping guest-fail-handler off this thread's handler stack,
> setting the guest state program counter to that value, and letting the
> guest CPU resume.
I believe that what the hardware does once the failure occurs is this:
all of the register and memory changes that occurred between the
T_BEGIN and the failure are erased, and we go back to the T_BEGIN
instruction (the program counter again points to the T_BEGIN); the
hardware updates the condition code register, the TEXASR and the TFIAR;
and execution continues with the condition code set to failure, thus
following the TM failure path as if we had never been down the success
path. This is my understanding of how this works at the hardware level.
>
> What is crucial (and I was unable to determine from your description) is
> that we cannot pass the guest's failure-handling address through to the
> host, since otherwise we will permanently lose control of execution when
> the transaction fails.
From what I understand of how Power implements this, we have reset the
state of the HW back to the state when the T_BEGIN started executing,
i.e. the program counter is set back to the T_BEGIN. So we are not
going to lose control of the execution, as we never tell the CPU
explicitly where to go on failure.
>
> Whether or not the transactions on the host get nuked due to resource
> constraints is orthogonal to the above proposal. In principle, if the
> host has enough tracking resources, it could succeed.
Yes, it could, assuming I actually do the T_END. I will fix that and
try again.
An alternative implementation suggestion: I believe Valgrind is single
threaded, correct? When we see a T_BEGIN, couldn't we have the Valgrind
scheduler simply continue to execute instructions in the same
thread/CPU until the T_END is seen? We would effectively make the
transactional memory thread sequential, so there wouldn't be any
conflicts with other threads. The host would not execute any of the TM
instructions. We would then make the Power suspend and abort
instructions noops. The transaction abort and end instructions would
then allow the Valgrind scheduler to go back to scheduling threads/CPUs
normally. I am not sure whether this is a viable solution for Valgrind
or not; I just don't know enough about the internals.
>
> Note that none of this involves changing the IR, so none of the tools
> have to be aware that transactions are supported.
From the POWER perspective, all the register/memory updates are erased,
so the tools would have no way of knowing.
>
> Does any of the above sync with how you did your Power implementation?
>
> J
>
|
|
From: Milind C. <Mil...@ri...> - 2013-08-16 18:37:49
|
Philippe,

>> The callgrind tool maintains the callstack on-the-fly via call-ret,
>> but this callstack maintenance is specialised for callgrind.

I noticed that Helgrind shows the full call stacks of the two
conflicting accesses during a data race. Does it maintain a call path
identifier associated with each memory access in shadow memory? If it
does, doesn't Helgrind need to obtain the call stack on each memory
access instruction? Does Helgrind also build the stack on-the-fly via
call-ret and maintain a dictionary of call paths?

On Fri, Aug 16, 2013 at 3:28 AM, Philippe Waroquiers <phi...@sk...> wrote:
> On Wed, 2013-08-14 at 08:44 -0700, Milind Chabbi wrote:
> > Resending my question. Is this the right mailing list to ask valgrind
> > internals questions?
> Yes.
> >
> > On Tue, Aug 13, 2013 at 12:19 PM, Milind Chabbi <Mil...@ri...> wrote:
> > Hello,
> >
> > I am considering using Valgrind for a heavy-weight instrumentation
> > that would instrument each memory access. For detailed information
> > about the access, I want to collect the full call path of each access.
> > Can someone tell me (at a high level) what call path collection
> > technique exists in Valgrind, and give me some details of the
> > feature, such as the algorithmic complexity of getting the call
> > stack, especially for each instruction? Is it done using a stack
> > unwinding technique, or is it built on-the-fly via call-ret
> > instructions? How is a call path represented (is it an identifier to
> > a call path object that can be queried at a later time)?
> A stack trace is obtained by calling VG_(get_StackTrace)
> (see pub_tool_stacktrace.h).
> For frequent events (e.g. stacktraces of malloc, free, ... in
> memcheck), there will be a lot of identical stacktraces. These can be
> maintained in a "dictionary of stacktraces". See pub_tool_execontext.h.
> So, an execontext corresponds more or less to an
> "identifier ... that can be queried at a later time".
> Getting stacktraces is done via an unwind technique, so it is quite
> costly. Doing it for each memory access instruction will be very slow.
>
> The callgrind tool maintains the callstack on-the-fly via call-ret,
> but this callstack maintenance is specialised for callgrind.
> It might be an interesting project to generalise this callgrind
> callstack maintenance and make it "general" (i.e. part of the
> valgrind core framework) rather than tool specific.
>
> What kind of tool are you thinking of?
> It might be easier to extend callgrind, which already captures
> memory access related information and callstacks.
>
> Philippe
|
|
From: Philippe W. <phi...@sk...> - 2013-08-16 18:24:48
|
On Fri, 2013-08-16 at 11:16 -0700, Milind Chabbi wrote:
> Philippe,
>
> >> The callgrind tool maintains the callstack on-the-fly via call-ret,
> >> but this callstack maintenance is specialised for callgrind.
>
> I noticed that Helgrind shows the full call stacks of the two
> conflicting accesses during a data race. Does it maintain a call path
> identifier associated with each memory access in shadow memory? If it
> does, doesn't Helgrind need to obtain the call stack on each memory
> access instruction? Does Helgrind also build the stack on-the-fly via
> call-ret and maintain a dictionary of call paths?
Helgrind does not use call-ret, but uses VG_(get_StackTrace) (limited
to 8 frames from what I can see). I do not know whether helgrind does
that for all memory accesses or only a subset of them.
Philippe
|
|
From: <sv...@va...> - 2013-08-16 16:15:48
|
jf 2013-08-16 16:15:37 +0000 (Fri, 16 Aug 2013)
New Revision: 479
Log:
remove redundant file
Removed files:
trunk/base
Deleted: trunk/base (+0 -0)
===================================================================
|
|
From: Maynard J. <may...@us...> - 2013-08-16 15:46:23
|
On 08/16/2013 09:55 AM, Julian Seward wrote:
>
> Maynard,
>
> Florian landed a change last night -- r13498/r2742 -- which should fix
> this. Pls yell if it's still broken.
OK, thanks. I'll be out for a couple weeks as of this afternoon, and so
will verify the fix when I return.
-Maynard
>
> J
>
> On 08/14/2013 02:51 PM, Maynard Johnson wrote:
>> On 08/13/2013 03:03 PM, Florian Krohm wrote:
>>> On 08/13/2013 09:51 PM, Maynard Johnson wrote:
>>>> Using an SVN checkout done today, I get the following error when
>>>> running a test java app under Valgrind:
>> Sorry, but I forgot to say yesterday that I was only seeing this error
>> on my Intel Core 2 Duo laptop. The current valgrind from SVN trunk
>> worked OK on my ppc64 box.
>>
>> I'm at a different work location today and using a Sandybridge laptop
>> (Intel Core i7) -- and oddly, I cannot reproduce the error, using the
>> same JVM.
>>
>> -Maynard
>>>>
>>>> [maynard@oc3431575272 myJavaStuff]$ valgrind --tool=none java ThreadLoop 1
>>>> ==18410== Nulgrind, the minimal Valgrind tool
>>>> ==18410== Copyright (C) 2002-2012, and GNU GPL'd, by Nicholas Nethercote.
>>>> ==18410== Using Valgrind-3.9.0.SVN and LibVEX; rerun with -h for copyright info
>>>> ==18410== Command: java ThreadLoop 1
>>>> ==18410==
>>>> --18410-- VALGRIND INTERNAL ERROR: Valgrind received a signal 11 (SIGSEGV) - exiting
>>>> --18410-- si_code=1; Faulting address: 0x11; sp: 0x80277d490
>>>>
>>>> valgrind: the 'impossible' happened:
>>>> Killed by fatal signal
>>>> ==18410== at 0x380BF6BB: deepCopyIRExpr (ir_defs.c:2130)
>>>> ==18410== by 0x380BFB14: deepCopyIRExprVec (ir_defs.c:2093)
>>>
>>> I bet that e is IRExprP__BBPTR which is ((IRExpr*)17) which is 0x11,
>>> the faulting address. Looks as if deepCopyIRExprVec does not handle
>>> the special expressions introduced recently. There may be other
>>> places. I haven't checked. Unless you beat me to it I'll look at
>>> fixing it tomorrow.
>>>
>>> Florian
|
|
From: Niall D. <ndo...@bl...> - 2013-08-16 15:37:39
|
> > Note that for non-x86/x86 callgrind's optimized callstack generator
> > isn't reliable.
> Is there a way to differentiate (at instrumentation time or at
> runtime) the instructions which lead to this non-reliability?
> Then for such instructions, a more reliable but slower technique could
> be used (e.g. unwind information, see below).
Josef and Julian will know the answer to this, but essentially the
problem is that it is very hard to (quickly) reconstruct what's going
on precisely on ARM from just the stack alone.
> > There is scope to greatly improve valgrind's callstack generator in
> > general if we had an unwind table parser (see -funwind-tables in
> > GCC). libunwind has one, but it needs libc, which makes it
> > unsuitable for valgrind.
> Not too sure how to understand the above: Valgrind today generates
> stacktraces using unwind information, see e.g. in coregrind
> pub_core_debuginfo.h, m_stacktrace.c and files in coregrind/m_debuginfo.
Correct, though not in callgrind nor cachegrind. It is rather slow and
memory hungry though.
> > Also, Julian has some ARM EXIDX unwind table parser code he can
> > point you at if you're interested.
> What is the difference between these unwind tables and the dwarf
> unwind tables?
ARM EXIDX unwind tables are specified by the ARM ABI for C++ exception
unwinding. They are not a DWARF format, though they can be (slowly)
converted into a subset of DWARF format, which is what Julian's code
does. A native ARM EXIDX unwind table parser would very significantly
improve stack backtrace performance on ARM valgrind, as well as making
callgrind and cachegrind actually useful on ARM.
Niall
---
Opinions expressed here are my own and do not necessarily represent
those of BlackBerry Inc.
|
|
From: Philippe W. <phi...@sk...> - 2013-08-16 15:26:18
|
On Fri, 2013-08-16 at 14:25 +0000, Niall Douglas wrote:
> > Getting stacktraces is done via an unwind technique, so it is quite
> > costly. Doing it for each memory access instruction will be very slow.
> >
> > The callgrind tool maintains the callstack on-the-fly via call-ret,
> > but this callstack maintenance is specialised for callgrind.
> > It might be an interesting project to generalise this callgrind
> > callstack maintenance and make it "general" (i.e. part of the
> > valgrind core framework) rather than tool specific.
>
> Note that for non-x86/x86 callgrind's optimized callstack generator
> isn't reliable.
Is there a way to differentiate (at instrumentation time or at runtime)
the instructions which lead to this non-reliability?
Then for such instructions, a more reliable but slower technique could
be used (e.g. unwind information, see below).
> There is scope to greatly improve valgrind's callstack generator in
> general if we had an unwind table parser (see -funwind-tables in GCC).
> libunwind has one, but it needs libc, which makes it unsuitable for
> valgrind.
Not too sure how to understand the above: Valgrind today generates
stacktraces using unwind information, see e.g. in coregrind
pub_core_debuginfo.h, m_stacktrace.c and files in coregrind/m_debuginfo.
> Also, Julian has some ARM EXIDX unwind table parser code he can point
> you at if you're interested.
What is the difference between these unwind tables and the dwarf unwind
tables?
Philippe
|
|
From: <sv...@va...> - 2013-08-16 15:09:52
|
NoMethodError: undefined method `to_a' for "remove phpinfo.php as everything seems to be working\n":String
/usr/local/lib/ruby/site_ruby/1.9/svn/commit-mailer.rb:647:in `make_subject'
/usr/local/lib/ruby/site_ruby/1.9/svn/commit-mailer.rb:594:in `make_header'
/usr/local/lib/ruby/site_ruby/1.9/svn/commit-mailer.rb:397:in `make_mail'
/usr/local/lib/ruby/site_ruby/1.9/svn/commit-mailer.rb:309:in `run'
/usr/local/lib/ruby/site_ruby/1.9/svn/commit-mailer.rb:48:in `run'
/usr/local/share/subversion/hook-scripts/commit-email.rb:69:in `<main>'
|
|
From: Philippe W. <phi...@sk...> - 2013-08-16 15:06:51
|
On Tue, 2013-08-13 at 13:34 +0400, Alexander Potapenko wrote:
> (+timurrrr)
> For the record, another idea is to perform the leak checking when the
> program is being terminated by an exit() call or right after the
> return from main().
> If I insert VALGRIND_DO_LEAK_CHECK right before "return 0" in the
> above example, no leaks are reported, because the detached threads are
> still live.
> Perhaps we shouldn't shut them down before checking for leaks?
It is better to have only one thread remaining when doing
the leak search, e.g. to avoid problems when running
the libc free-resources function (see the valgrind option
--run-libc-freeres=no|yes).
The attached patch solves the problem by having the exiting thread
marking (using VgSrc_ExitProcess) the other threads which must
terminate due to the sys_exit_group syscall.
The exitreason VgSrc_ExitProcess is then used to detect that
the registers of an empty thread still have to be used for leak
search.
Patch has been regression tested on linux x86, amd64 and ppc64.
Assuming the approach in the patch is deemed ok, there are still a few
additional points to clean up, e.g.:
the darwin equivalent code should be done
(it would be nice to have access to a darwin system for that)
some obsolete comments about VgSrc_ExitProcess
confirming (or not) that a process can have threads of multiple
thread groups, and updating the code accordingly
Philippe
|