From: Julian S. <js...@ac...> - 2007-01-02 08:00:55
|
I've noticed this several times - pth_detached is still giving different results on different runs despite having committed your extra-locking fix. Any ideas? J On Tuesday 02 January 2007 05:16, js...@ac... wrote: > Nightly build on phoenix ( SuSE 10.0 ) started at 2007-01-02 04:30:01 GMT > > Checking out vex source tree ... done > Building vex ... done > Checking out valgrind source tree ... done > Configuring valgrind ... done > Building valgrind ... done > Running regression tests ... failed > > Regression test results follow > > == 250 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures == > memcheck/tests/leak-tree (stderr) > memcheck/tests/pointer-trace (stderr) > memcheck/tests/stack_switch (stderr) > memcheck/tests/x86/scalar (stderr) > memcheck/tests/x86/scalar_supp (stderr) > none/tests/mremap (stderr) > none/tests/mremap2 (stdout) > none/tests/pth_detached (stdout) > > ================================================= > == Results from 24 hours ago == > ================================================= > > Checking out vex source tree ... done > Building vex ... done > Checking out valgrind source tree ... done > Configuring valgrind ... done > Building valgrind ... done > Running regression tests ... failed > > Regression test results follow > > == 250 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures == > memcheck/tests/leak-tree (stderr) > memcheck/tests/pointer-trace (stderr) > memcheck/tests/stack_switch (stderr) > memcheck/tests/x86/scalar (stderr) > memcheck/tests/x86/scalar_supp (stderr) > none/tests/mremap (stderr) > none/tests/mremap2 (stdout) > > > ================================================= > == Difference between 24 hours ago and now == > ================================================= > > *** old.short Tue Jan 2 04:55:11 2007 > --- new.short Tue Jan 2 05:16:51 2007 > *************** > *** 10,12 **** > > ! == 250 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures == > memcheck/tests/leak-tree (stderr) > --- 10,12 ---- > > ! == 250 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures > == memcheck/tests/leak-tree (stderr) > *************** > *** 18,19 **** > --- 18,20 ---- > none/tests/mremap2 (stdout) > + none/tests/pth_detached (stdout) > > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your opinions on IT & business topics through brief surveys - and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Valgrind-developers mailing list > Val...@li... > https://lists.sourceforge.net/lists/listinfo/valgrind-developers |
|
From: <js...@ac...> - 2007-01-02 06:15:54
|
Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2007-01-02 09:00:01 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 217 tests, 10 stderr failures, 6 stdout failures, 0 posttest failures == memcheck/tests/leak-tree (stderr) memcheck/tests/leakotron (stdout) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/xml1 (stderr) none/tests/faultstatus (stderr) none/tests/fdleak_cmsg (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/ppc32/jm-fp (stdout) none/tests/ppc32/jm-fp (stderr) none/tests/ppc32/round (stdout) none/tests/ppc32/round (stderr) none/tests/ppc32/test_fx (stdout) none/tests/ppc32/test_fx (stderr) none/tests/ppc32/test_gx (stdout) |
|
From: <js...@ac...> - 2007-01-02 05:16:37
|
Nightly build on phoenix ( SuSE 10.0 ) started at 2007-01-02 04:30:01 GMT Checking out vex source tree ... done Building vex ... done Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 250 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures == memcheck/tests/leak-tree (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_detached (stdout) ================================================= == Results from 24 hours ago == ================================================= Checking out vex source tree ... done Building vex ... done Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 250 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures == memcheck/tests/leak-tree (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Tue Jan 2 04:55:11 2007 --- new.short Tue Jan 2 05:16:51 2007 *************** *** 10,12 **** ! == 250 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures == memcheck/tests/leak-tree (stderr) --- 10,12 ---- ! == 250 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures == memcheck/tests/leak-tree (stderr) *************** *** 18,19 **** --- 18,20 ---- none/tests/mremap2 (stdout) + none/tests/pth_detached (stdout) |
|
From: Nicholas N. <nj...@cs...> - 2007-01-02 04:43:46
|
On Sun, 31 Dec 2006, Bart Van Assche wrote: > My own goal is still to make drd meet or exceed Intel's VTune. The Thread > Checker included in VTune doesn't report any false positives as far as I > know. Good to hear it! :) As you now know, there's a lot of work between "getting a tool working" and "getting a tool working well". And that work is the difference between something that is not bad vs. something that is really useful. Nick |
|
From: Tom H. <to...@co...> - 2007-01-02 03:55:17
|
Nightly build on dunsmere ( athlon, Fedora Core 6 ) started at 2007-01-02 03:30:06 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 252 tests, 5 stderr failures, 2 stdout failures, 0 posttest failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_detached (stdout) |
|
From: Tom H. <th...@cy...> - 2007-01-02 03:23:58
|
Nightly build on dellow ( x86_64, Fedora Core 6 ) started at 2007-01-02 03:10:05 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 281 tests, 4 stderr failures, 1 stdout failure, 0 posttest failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) |
|
From: <js...@ac...> - 2007-01-02 01:39:32
|
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-01-02 02:00:01 CET Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 223 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures == memcheck/tests/deep_templates (stdout) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/pointer-trace (stderr) none/tests/faultstatus (stderr) none/tests/fdleak_cmsg (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 223 tests, 7 stderr failures, 3 stdout failures, 0 posttest failures == memcheck/tests/deep_templates (stdout) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/wrap8 (stdout) memcheck/tests/wrap8 (stderr) none/tests/faultstatus (stderr) none/tests/fdleak_cmsg (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Tue Jan 2 02:08:09 2007 --- new.short Tue Jan 2 02:16:06 2007 *************** *** 8,10 **** ! == 223 tests, 7 stderr failures, 3 stdout failures, 0 posttest failures == memcheck/tests/deep_templates (stdout) --- 8,10 ---- ! == 223 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures == memcheck/tests/deep_templates (stdout) *************** *** 13,16 **** memcheck/tests/pointer-trace (stderr) - memcheck/tests/wrap8 (stdout) - memcheck/tests/wrap8 (stderr) none/tests/faultstatus (stderr) --- 13,14 ---- |
|
From: Julian S. <js...@ac...> - 2007-01-01 23:28:58
|
Over the past couple of weeks, many bug fixes and patches have been pushed into 3_2_BRANCH. This is in preparation for a 3.2.2 release, which I am hoping to do in the 2nd week of Jan. By now I believe most of the changes for 3.2.2 have been committed.

It would be helpful if people could check out and test the stable branch, particularly if you package Valgrind for a distro, or make extensive use of it. I am hoping to ship a release which works well on OpenSUSE 10.2 and Fedora Core 6, whilst retaining support for everything back to and including Red Hat 7.3. I can test on OpenSUSE but have less access to FC6 systems.

I have tried hard to ensure the ppc32/64-linux ports continue to work well. However, they receive less use than the x86/amd64 ports and so are inevitably less well tested. In particular I have no way to verify that the newly added 64k page support for ppc64/32-linux works.

To acquire/build the 3.2.2 sources:

svn co \
  svn://svn.valgrind.org/valgrind/branches/VALGRIND_3_2_BRANCH branch32
cd branch32
./autogen.sh
# configure and build as usual

The main changes in 3.2.2 are:

- fix a bunch of bugs, including missing instructions, as shown in the list below
- support for glibc-2.5
- modest speedups in various areas
  * faster program startups due to lower JIT overheads
  * amd64 FP improvements
  * faster dispatching on ppc32/64 (fewer branch mispredicts)
  * code generation improvements for ppc64

J

----------------------------------------

This is the current master list of bugs reported/fixed in 3.2.2. It is somewhat cryptic. It lives in docs/internals/3_2_BUGSTATUS.txt in the trunk tree (not in the 3_2_BRANCH).

Legend:
n-i-bz  = not in bugzilla
pending = is scheduled to be fixed (or at least considered) on this branch
wontfix = will not fix on this branch
PRI: 32 = fix this for 3.2.2
Vfd     = fix has been verified on 3.2.X branch
s93     = possible SuSE 9.3 amd64 assembler bug
,w      = waiting for feedback from bug reporter

------- Bugs reported after (in) 3.2.1, or ------
------- reported in 3.2.0 but not fixed in 3.2.1 ------

TRUNK        32BRANCH     PRI   BUG#    WHAT
pending      pending            124478  memcheck reports uninitialized bytes on timer_create()
pending      pending            128359  Please suppress the uninitialized bytes report on getifaddrs() (glibc 2.3.3)
vx1709       vx1710       32    129390  ppc?->IR: some kind of VMX prefetch (dstt)
pending      pending            129968  amd64->IR: 0xF 0xAE 0x0 (fxsave)  ==134319
r6242?       r6438        32    133054  'make install' fails with syntax errors  ==118903
pending      wontfix            133154  crash when using client requests to register/deregister stack
pending      pending      32,w  132998  startup fails in when running on UML (/proc/self/map start==end problem)
pending      pending      32    133327  support for voicetronix ioctl (w/patch)
pending      pending      32    133679  Callgrind does not write path names to sources with dwarf debug info (dirnames)
pending      pending      s93   133962  amd64->IR: 0xF2 0x4C 0xF 0x10 (rex64X ...)
pending      pending      s93   135023  amd64->IR: 0x49 0xDD 0x86 0xE0 (rex64Z fldl 0xe0(%r14))
pending      pending      s93   136529  Unhandled instruction error for legal instruction
r6439        r6440        32    134207  pkg-config output contains @VG_PLATFORM@
vx1660       vx1690       32    n-i-bz  %eflags rule for SUBL-CondNLE
Signal race condition (users list, 13 June, Johannes Berg)
Unrecognised instruction at address 0x70198EC2 (users, 19 July, Bennee)
pending      pending            133984  unhandled instruction bytes: 0xCC 0x89 0xEC 0x31 (int3)
pending      pending            134138  Stale default library used after reconfiguring
pending      pending            134219  Launcher defaults to ppc32-linux even with --enable-only64bit
pending      pending            134316  Callgrind does not distinguish between parent and child
v6084        v6421        32    134727  valgrind exits with "Value too large for defined data type"
vx1667       vx1691       32    n-i-bz  ppc32/64: support mcrfs
v6211        v6422        32    n-i-bz  Cachegrind: Update cache parameter detection
                                        XXX: check status of Core2 cpuid code
vx1672       vx1692       32    135012  x86->IR: 0xD7 0x8A 0xE0 0xD0 (xlat)  ==125959
vx1673/4     vx1693       32    126147  x86->IR: 0xF2 0xA5 0xF 0x77 (repne movsw) w/test
vx1676       vx1694/6     32    136650  amd64->IR: 0xC2 0x8 0x0
vx1679       vx1695       32    135421  x86->IR: unhandled Grp5(R) case 6 [ok]
vx1675       vx1697       32    n-i-bz  x86 COPY-CondP (Espindola #2, dev, Nov 1)
vx1677       vx1704       32    n-i-bz  IR comments
vx1678       vx1698       32    n-i-bz  jcxz (x86) (users, 8 Nov)
r6341        r6424        32    n-i-bz  ExeContext hashing fix
r6356        r6425        32    n-i-bz  Dwarf CFI 0:24 0:32 0:48 0:7 (Nov 8)
r6365        r6423        32    n-i-bz  Drepper: obscure Cachegrind simulation bug
r6367        r6423        32    n-i-bz  Same fix as r6365, but for Callgrind simulation.
r6371        r6426        32    n-i-bz  libmpiwrap.c: fix handling of MPI_LONG_DOUBLE
r6374        r6427        32    n-i-bz  make User errors suppressible (XXX: DOCS!)
r6377/8      r6428        32    136844  corrupted malloc line when using --gen-suppressions=yes  ==138507
vx1686       vx1701       32    n-i-bz  Reg-alloc speedups
r6382/3      r6429        32    n-i-bz  Fix confusing leak-checker flag hints
r6384        r6385        32    n-i-bz  Support recent autoswamp versions
pending      pending      ?     135026  incorrect complaint that shm_nattch is uninitialized
pending      pending      ?     135264  ppc->IR: dcbzl instruction missing
pending      pending      ?     136401  off-by-one in ESP checking
pending      pending      32    n-i-bz  amd64 INCW-CondZ (André Wöbbeking, users, Oct 19) (== Espindola #1)
r6291        r6430        32    n-i-bz  ppc32/64 dispatcher speedups
vx1670/1     vx1699       32    n-i-bz  ppc64 fe rld/rlw improvements
vx1669       vx1700       32    n-i-bz  ppc64 be imm64 improvement (hdefs.c only)
r6459/61     r6457/8/60   32    136300  support 64K pages on ppc64-linux  == 139124
r6404/5      r6431        32    n-i-bz  fix ppc insn set tests for gcc >= 4.1
vx1711       vx1712       32    137493  x86->IR: recent binutils no-ops
vx1702/r6441 vx1703/r6442 32    137714  x86->IR: 0x66 0xF 0xF7 0xC6 (maskmovdqu)
pending      pending      32    137830  crash upon delivery of SIGALRM (NPTL) (can't reproduce)
pending      pending            138019  valgrind memcheck crashes with SIGSEGV
r6444        r6445        32    138424  "failed in UME with error 22" (at least produce a better error msg)  == 138856
r6410        r6432        32    138627  Enhancement of prctl ioctl
pending      pending      32    138702  amd64->IR: 0xF0 0xF 0xC0 0x90 (lock xadd %dl,0xb5(%rax))
r6411        r6433        32    138896  usb ioctl handling  == 136059
vx1705       vx1706       32    139050  ppc32->IR: mfspr 268/269 instructions not handled
pending      pending      32    139076  valgrind VT_GETSTATE error
vx1707/r6447 vx1708/r6448 32    n-i-bz  ppc32->IR: lvxl/stvxl

Test/make run cleanly on SuSE 10.2 x86/amd64/ppc32 and FC6 ditto

Last update was 25 Dec 06
r6462/3      r6464/5      32    n-i-bz  glibc-2.5 support
r6469 |
|
From: <sv...@va...> - 2007-01-01 22:17:38
|
Author: sewardj
Date: 2007-01-01 22:17:37 +0000 (Mon, 01 Jan 2007)
New Revision: 6472
Log:
Merge r6471 (Avoid printf in the recursive routines ...)
Modified:
branches/VALGRIND_3_2_BRANCH/memcheck/tests/wrap8.c
Modified: branches/VALGRIND_3_2_BRANCH/memcheck/tests/wrap8.c
===================================================================
--- branches/VALGRIND_3_2_BRANCH/memcheck/tests/wrap8.c 2007-01-01 22:07:58 UTC (rev 6471)
+++ branches/VALGRIND_3_2_BRANCH/memcheck/tests/wrap8.c 2007-01-01 22:17:37 UTC (rev 6472)
@@ -1,4 +1,4 @@
-
+#include <unistd.h>
#include <stdio.h>
#include <malloc.h>
#include "valgrind.h"
@@ -12,15 +12,15 @@
Hence this test has two expected outcomes:
- on ppc64-linux, a stack overflow is caught, and V aborts.
- on everything else, it runs successfully to completion.
+ Note, pre() and post() used so as to avoid printf, which messes
+ up the call stacks on ppc64-linux due to intercept of mempcpy.
*/
-
typedef
struct _Lard {
struct _Lard* next;
char stuff[999];
}
Lard;
-
Lard* lard = NULL;
static int ctr = 0;
 
@@ -34,8 +34,8 @@
lard =3D p;
}
}
-
-
+static void post ( char* s, int n, int r );
+static void pre ( char* s, int n );
static int fact1 ( int n );
static int fact2 ( int n );
 
@@ -60,11 +60,11 @@
int r;
OrigFn fn;
VALGRIND_GET_ORIG_FN(fn);
- printf("in wrapper1-pre: fact(%d)\n", n); fflush(stdout);
+ pre("wrapper1", n);
addMoreLard();
CALL_FN_W_W(r, fn, n);
addMoreLard();
- printf("in wrapper1-post: fact(%d) =3D %d\n", n, r); fflush(stdout);
+ post("wrapper1", n, r);
if (n >=3D 3) r +=3D fact2(2);
return r;
}
@@ -74,11 +74,11 @@
int r;
OrigFn fn;
VALGRIND_GET_ORIG_FN(fn);
- printf("in wrapper2-pre: fact(%d)\n", n); fflush(stdout);
+ pre("wrapper2", n);
addMoreLard();
CALL_FN_W_W(r, fn, n);
addMoreLard();
- printf("in wrapper2-post: fact(%d) =3D %d\n", n, r); fflush(stdout);
+ post("wrapper2", n, r);
return r;
}
=20
@@ -100,3 +100,40 @@
=20
return 0;
}
+
+static void send ( char* s )
+{
+ while (*s) {
+ write(1, s, 1);
+ s++;
+ }
+}
+
+static void pre ( char* s, int n )
+{
+ char buf[50];
+ fflush(stdout);
+ sprintf(buf,"%d", n);
+ send("in ");
+ send(s);
+ send("-pre: fact(");
+ send(buf);
+ send(")\n");
+ fflush(stdout);
+}
+
+static void post ( char* s, int n, int r )
+{
+ char buf[50];
+ fflush(stdout);
+ sprintf(buf,"%d", n);
+ send("in ");
+ send(s);
+ send("-post: fact(");
+ send(buf);
+ send(") =3D ");
+ sprintf(buf,"%d", r);
+ send(buf);
+ send("\n");
+ fflush(stdout);
+}
|
|
From: <sv...@va...> - 2007-01-01 22:08:03
|
Author: sewardj
Date: 2007-01-01 22:07:58 +0000 (Mon, 01 Jan 2007)
New Revision: 6471
Log:
Avoid printf in the recursive routines, so that the intercept of
mempcpy which is called from printf does not mess up the
carefully-balanced call-stack overflow checks that this test does on
ppc64-linux.
Modified:
trunk/memcheck/tests/wrap8.c
Modified: trunk/memcheck/tests/wrap8.c
===================================================================
--- trunk/memcheck/tests/wrap8.c 2006-12-31 19:40:56 UTC (rev 6470)
+++ trunk/memcheck/tests/wrap8.c 2007-01-01 22:07:58 UTC (rev 6471)
@@ -1,4 +1,4 @@
-
+#include <unistd.h>
#include <stdio.h>
#include <malloc.h>
#include <stdlib.h>
@@ -13,15 +13,15 @@
Hence this test has two expected outcomes:
- on ppc64-linux, a stack overflow is caught, and V aborts.
- on everything else, it runs successfully to completion.
+ Note, pre() and post() used so as to avoid printf, which messes
+ up the call stacks on ppc64-linux due to intercept of mempcpy.
*/
-
typedef
struct _Lard {
struct _Lard* next;
char stuff[999];
}
Lard;
-
Lard* lard = NULL;
static int ctr = 0;
 
@@ -35,8 +35,8 @@
lard =3D p;
}
}
-
-
+static void post ( char* s, int n, int r );
+static void pre ( char* s, int n );
static int fact1 ( int n );
static int fact2 ( int n );
 
@@ -61,11 +61,11 @@
int r;
OrigFn fn;
VALGRIND_GET_ORIG_FN(fn);
- printf("in wrapper1-pre: fact(%d)\n", n); fflush(stdout);
+ pre("wrapper1", n);
addMoreLard();
CALL_FN_W_W(r, fn, n);
addMoreLard();
- printf("in wrapper1-post: fact(%d) =3D %d\n", n, r); fflush(stdout);
+ post("wrapper1", n, r);
if (n >=3D 3) r +=3D fact2(2);
return r;
}
@@ -75,11 +75,11 @@
int r;
OrigFn fn;
VALGRIND_GET_ORIG_FN(fn);
- printf("in wrapper2-pre: fact(%d)\n", n); fflush(stdout);
+ pre("wrapper2", n);
addMoreLard();
CALL_FN_W_W(r, fn, n);
addMoreLard();
- printf("in wrapper2-post: fact(%d) =3D %d\n", n, r); fflush(stdout);
+ post("wrapper2", n, r);
return r;
}
=20
@@ -101,3 +101,40 @@
=20
return 0;
}
+
+static void send ( char* s )
+{
+ while (*s) {
+ write(1, s, 1);
+ s++;
+ }
+}
+
+static void pre ( char* s, int n )
+{
+ char buf[50];
+ fflush(stdout);
+ sprintf(buf,"%d", n);
+ send("in ");
+ send(s);
+ send("-pre: fact(");
+ send(buf);
+ send(")\n");
+ fflush(stdout);
+}
+
+static void post ( char* s, int n, int r )
+{
+ char buf[50];
+ fflush(stdout);
+ sprintf(buf,"%d", n);
+ send("in ");
+ send(s);
+ send("-post: fact(");
+ send(buf);
+ send(") =3D ");
+ sprintf(buf,"%d", r);
+ send(buf);
+ send("\n");
+ fflush(stdout);
+}
|
|
From: Julian S. <js...@ac...> - 2007-01-01 21:49:45
|
On Sunday 31 December 2006 22:50, John Reiser wrote:
> Julian Seward wrote:
> > I've been testing the 3.2 branch on OpenSUSE 10.2 (kernel 2.6.18.2,
> > glibc-2.5) and mostly it works pretty well. However, it can't run bash:
>
> ...
>
> > sys_mmap ( 0xFFFDE000, 69636, 5, 2050, 3, 0 ) --> [pre-fail] Failure(0xC)
> > sys_mmap ( 0xFFFD9000, 90236, 5, 2050, 3, 0 ) --> [pre-fail] Failure(0xC)
> > sys_mmap ( 0xFFFA1000, 319664, 5, 2050, 3, 0 ) --> [pre-fail]
> > Failure(0xC)
> >
> > This strikes me as strange because the addresses (0xFFFDE000 etc) are
> > almost at the end of 4G. It's also strange because other programs,
> > including large ones, run just fine, and these .so's are mapped quite
> > low, as is normal.
On further investigation I am more mystified. I have established that this
is not a regression, as 3.2.1 fails similarly on ppc32, and also that it is
not caused by the 64k page stuff added to 3.2.2.
I cannot afford to spend any more time on this, so maybe some Linux-on-PPC
person can chase it more if required.
Here's what else I discovered:
The bogus mmap addresses are produced by __elf_preferred_address in
glibc-2.5/sysdeps/powerpc/powerpc32/dl-machine.c (I assume). They are
handed off to the failing mmap in _dl_map_object_from_fd in dl-load.c:
/* This is a position-independent shared object. We can let the
kernel map it anywhere it likes, but we must have space for all
the segments in their specified positions relative to the first.
So we map the first segment without MAP_FIXED, but with its
extent increased to cover all the segments. Then we remove
access from excess portion, and there is known sufficient space
there to remap from the later segments.
As a refinement, sometimes we have an address that we would
prefer to map such objects at; but this is only a preference,
the OS can do whatever it likes. */
ElfW(Addr) mappref;
mappref = (ELF_PREFERRED_ADDRESS (loader, maplength,
c->mapstart
& GLRO(dl_use_load_bias))
- MAP_BASE_ADDR (l));
/* Remember which part of the address space this object uses. */
l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
c->prot,
MAP_COPY|MAP_FILE,
fd, c->mapoff);
if (__builtin_expect ((void *) l->l_map_start == MAP_FAILED, 0))
{
map_error:
errstring = N_("failed to map segment from shared object");
goto call_lose_errno;
}
From comparing against strace results, I see the failing mmap addresses are
exactly 0x80000000 greater than the corresponding values from strace.
This made me wonder if there is some sign-extend bug in the 32-bit virtual
ppc CPU code, but I could not find any such, and besides that code has been
extensively tested and hammered on this past year.
I also discovered the same mmap-fail problem occurs in various other
situations:
kernel in 32-bit mode, openSUSE 10.2, running bash
kernel in 32-bit mode, openSUSE 10.2, running ssh
kernel in 32-bit mode, openSUSE 10.1, running ssh (but bash is OK)
For a kernel in 64-bit mode (on ppc970), openSUSE 10.1, both bash and
ssh run fine, even though they are still 32-bit executables.
I notice that 32-bit mode kernels appear to have 2G+2G userspace split,
whereas a 64-bit kernel running a 32-bit executable can offer that exe
a full 4G of its own.
Anyway, this is all just for the record. Am not chasing it further.
J
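Spelling the strace comparison out with one of the failing requests: 0xFFFDE000 - 0x80000000 = 0x7FFDE000, i.e. natively the same mapping lands just below the 2 GiB mark. On a kernel with the 2G+2G split, user space ends at 0x80000000, so a preferred address such as 0xFFFDE000 lies wholly outside the client's usable range, which would fit the observation that the failures only show up with kernels running in 32-bit mode.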
|
|
From: Julian S. <js...@ac...> - 2007-01-01 18:02:07
|
On Monday 01 January 2007 12:02, Tom Hughes wrote: > Can you not check for the PLT and suppress it when the access occurs > rather than trying to build suppressions up front? I agree, that sounds generally easier. You cannot really tell when a .so has been dlopen-d. You can ask (I believe) to be notified for mmap/munmap events, and maybe you can tell that they are of .so files. But it's still ugly: suppose a program mmaps a .so file for some other purpose -- eg to compute an md5sum of it. In short it's better not to put ELF-specific hacks in drd if you can avoid it - instead, think in terms of using/extending the interface provided by m_debuginfo. For one thing, not all current/potential future targets use ELF. J |
|
From: Julian S. <js...@ac...> - 2007-01-01 17:34:18
|
> A nice solution would be not to instrument any code in the .plt section > from within the drd tool. Do you know whether the original (untranslated) > instruction address is accessible from within a tool's instrumentation > function, such that it can be passed to VG_(seginfo_sect_kind)() ? Yes, the IR for each instruction begins with an Ist_IMark which says exactly the address and size of the instruction. Search for "IMark" in libvex_ir.h. J |
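A minimal sketch of how a tool's instrumentation pass could use this, assuming the 3.2-era interfaces (the superblock type was still called IRBB then, and the exact signature of VG_(seginfo_sect_kind) varies between Valgrind versions, so treat the names below as illustrative rather than definitive):

#include "pub_tool_basics.h"
#include "pub_tool_debuginfo.h"   /* VG_(seginfo_sect_kind), Vg_SectPLT */

/* True if the guest address lies in a .plt section known to m_debuginfo. */
static Bool addr_is_in_plt ( Addr a )
{
   return VG_(seginfo_sect_kind)( a ) == Vg_SectPLT;
}

/* Each Ist_IMark carries the address and length of the original guest
   instruction from which the following IR statements were derived. */
static void scan_superblock ( IRBB* bb )   /* IRSB in later releases */
{
   Int  i;
   Bool in_plt = False;
   for (i = 0; i < bb->stmts_used; i++) {
      IRStmt* st = bb->stmts[i];
      if (st->tag == Ist_IMark)
         in_plt = addr_is_in_plt( (Addr)st->Ist.IMark.addr );
      /* while in_plt is True, a tool such as drd could copy the
         statements through unchanged instead of instrumenting the
         loads and stores they contain */
   }
}

Whether to filter at instrumentation time like this, or to classify the address only when a conflicting access is actually reported, is the trade-off discussed elsewhere in this thread.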
|
From: Bart V. A. <bar...@gm...> - 2007-01-01 16:08:33
|
On 1/1/07, Tom Hughes <to...@co...> wrote: > > Is it not possible to check it the other way? Based on the address of > the faulting instruction? > > If it isn't then look at readelf.c and debuginfo.c where they record > the address of the .plt and .got sections in the SegInfo structure and > then use it to return a segment kind from that function and extend it > to track the .got.plt section as well and return a new segment type. > > Tom > A nice solution would be not to instrument any code in the .plt section from within the drd tool. Do you know whether the original (untranslated) instruction address is accessible from within a tool's instrumentation function, such that it can be passed to VG_(seginfo_sect_kind)() ? Bart. |
|
From: Tom H. <to...@co...> - 2007-01-01 12:02:41
|
In message <e2e...@ma...>
"Bart Van Assche" <bar...@gm...> wrote:
> Another question: can a Valgrind tool be notified just after the debug
> information of the client has been loaded ? I need a point in the drd tool
> where I can iterate over the segments of the client to suppress conflict
> reports on the .got.plt section. This must happen before the client starts
> to dlopen() the DSO's it was linked with.
You mean after the main executable has been mapped and before any
shared libraries are mapped (even the ones it was linked with)?
That would basically mean after m_ume.c has mapped the program but
before we start it running (at which point ld-linux will immediately
start mapping the shared libraries it was linked with).
There's also the question of ld-linux.so (or whatever the interpreter
is called on the platform) as that will already have been mapped along
with the main program.
Also each shared library may have its own .got.plt section if it
references functions in other shared libraries.
Can you not check for the PLT and suppress it when the access occurs
rather than trying to build suppressions up front?
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Tom H. <to...@co...> - 2007-01-01 11:58:10
|
In message <e2e...@ma...>
"Bart Van Assche" <bar...@gm...> wrote:
> On 1/1/07, Tom Hughes <to...@co...> wrote:
> >
> > In message <755...@lo...>
> > Tom Hughes <to...@co...> wrote:
>
> ...
>
> > Ah hang on... The PLT itself is .plt - the .got.plt section is the
> > table used to cache the result of resolving the function name to an
> > address.
>
> ...
>
> > So I think what you need to do is to see whether the instruction
> > doing the access is in the PLT rather than whether the address
> > being accessed is in the PLT.
> >
> > Either that or valgrind needs to be extended to track the .got.plt
> > section in the seginfo as well as the .plt and .got sections.
>
>
> What I need is the address where the .got.plt section is loaded, and the
> size of this section. Does this mean Valgrind has to be extended ? If I
> should do this, where should I start ?
Is it not possible to check it the other way? Based on the address of
the faulting instruction?
If it isn't then look at readelf.c and debuginfo.c where they record
the address of the .plt and .got sections in the SegInfo structure and
then use it to return a segment kind from that function and extend it
to track the .got.plt section as well and return a new segment type.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
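To make the shape of that change concrete, here is a rough sketch; the enum contents, struct fields and names are paraphrased for illustration only and are not the actual m_debuginfo internals, so check pub_tool_debuginfo.h, readelf.c and debuginfo.c for the real ones:

/* Illustrative only: classify an address once a .got.plt range is
   recorded alongside .plt and .got.  Addr/SizeT as in pub_tool_basics.h. */
typedef enum {
   Vg_SectUnknown, Vg_SectText, Vg_SectData, Vg_SectBSS,
   Vg_SectGOT, Vg_SectPLT,
   Vg_SectGOTPLT                      /* proposed new kind for .got.plt */
} VgSectKind;

typedef struct {
   Addr  plt_start;    SizeT plt_size;      /* tracked today */
   Addr  got_start;    SizeT got_size;      /* tracked today */
   Addr  gotplt_start; SizeT gotplt_size;   /* the proposed addition */
} SegInfoRanges;

static VgSectKind classify_addr ( const SegInfoRanges* si, Addr a )
{
   if (a >= si->plt_start    && a < si->plt_start    + si->plt_size)
      return Vg_SectPLT;
   if (a >= si->got_start    && a < si->got_start    + si->got_size)
      return Vg_SectGOT;
   if (a >= si->gotplt_start && a < si->gotplt_start + si->gotplt_size)
      return Vg_SectGOTPLT;
   return Vg_SectUnknown;
}

The real change would also need readelf.c to fill in the .got.plt range when it scans the section headers, in the same place where the .plt and .got ranges are picked up today.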
|
|
From: Bart V. A. <bar...@gm...> - 2007-01-01 11:13:20
|
Another question: can a Valgrind tool be notified just after the debug information of the client has been loaded ? I need a point in the drd tool where I can iterate over the segments of the client to suppress conflict reports on the .got.plt section. This must happen before the client starts to dlopen() the DSO's it was linked with. Bart. |
|
From: Bart V. A. <bar...@gm...> - 2007-01-01 10:50:41
|
On 1/1/07, Tom Hughes <to...@co...> wrote: > > In message <755...@lo...> > Tom Hughes <to...@co...> wrote: ... > Ah hang on... The PLT itself is .plt - the .got.plt section is the > table used to cache the result of resolving the function name to an > address. ... > So I think what you need to do is to see whether the instruction > doing the access is in the PLT rather than whether the address > being accessed is in the PLT. > > Either that or valgrind needs to be extended to track the .got.plt > section in the seginfo as well as the .plt and .got sections. > > Tom What I need is the address where the .got.plt section is loaded, and the size of this section. Does this mean Valgrind has to be extended ? If I should do this, where should I start ? Bart. |
|
From: Tom H. <to...@co...> - 2007-01-01 10:18:26
|
In message <755...@lo...>
Tom Hughes <to...@co...> wrote:
> In message <e2e...@ma...>
> "Bart Van Assche" <bar...@gm...> wrote:
>
> > > You mean PLT I think, not GOT - the GOT is for global data access and
> > > the PLT is for function calls.
> > >
> > > The ELF reader in m_debuginfo already detects the PLT and GOT so you
> > > would need to make it record that information somewhere I guess.
> >
> > From the output of objdump -d:
> >
> > 080486e4 <_Znwj@plt>:
> > 80486e4: ff 25 00 9d 04 08 jmp *0x8049d00
> > 80486ea: 68 38 00 00 00 push $0x38
> > 80486ef: e9 70 ff ff ff jmp 8048664 <_init+0x18>
> >
> > From the output of readelf -a:
> >
> > Section Headers:
> > [Nr] Name Type Addr Off Size ES Flg Lk
> > Inf Al
> > [22] .got PROGBITS 08049cd4 000cd4 000004 04 WA 0
> > 0 4
> > [23] .got.plt PROGBITS 08049cd8 000cd8 000044 04 WA 0
> > 0 4
> >
> > Conflicting accesses were reported on location 0x8049d00 (size 4). Is my
> > conclusion correct that this data resides in the .got.plt section ?
>
> Correct - that section is the PLT section. I'm not sure why ELF gives
> the section that name but it is normally known as the PLT and the .got
> section is known as the GOT.
Ah hang on... The PLT itself is .plt - the .got.plt section is the
table used to cache the result of resolving the function name to an
address.
> > I don't think it is possible to add a tracking function in the ELF reader,
> > since the executable file is loaded before the Valgrind tool is loaded ?
> >
> > There is already an interface for iterating over segments:
> > VG_(seginfo_syms_howmany)() and VG_(seginfo_syms_getidx)(). Would it be
> > possible to make the .got.plt section information available via these
> > functions ? All I need is the start address and the size of the .got.plt
> > section.
>
> Actually, the information is already there by the looks of it - you
> just need to call VG_(seginfo_sect_kind) and see if the resulting
> value is Vg_SectPLT or not.
So I think what you need to do is to see whether the instruction
doing the access is in the PLT rather than whether the address
being accessed is in the PLT.
Either that or valgrind needs to be extended to track the .got.plt
section in the seginfo as well as the .plt and .got sections.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Tom H. <to...@co...> - 2007-01-01 10:12:19
|
In message <e2e...@ma...>
"Bart Van Assche" <bar...@gm...> wrote:
> > You mean PLT I think, not GOT - the GOT is for global data access and
> > the PLT is for function calls.
> >
> > The ELF reader in m_debuginfo already detects the PLT and GOT so you
> > would need to make it record that information somewhere I guess.
>
> From the output of objdump -d:
>
> 080486e4 <_Znwj@plt>:
> 80486e4: ff 25 00 9d 04 08 jmp *0x8049d00
> 80486ea: 68 38 00 00 00 push $0x38
> 80486ef: e9 70 ff ff ff jmp 8048664 <_init+0x18>
>
> From the output of readelf -a:
>
> Section Headers:
> [Nr] Name Type Addr Off Size ES Flg Lk
> Inf Al
> [22] .got PROGBITS 08049cd4 000cd4 000004 04 WA 0
> 0 4
> [23] .got.plt PROGBITS 08049cd8 000cd8 000044 04 WA 0
> 0 4
>
> Conflicting accesses were reported on location 0x8049d00 (size 4). Is my
> conclusion correct that this data resides in the .got.plt section ?
Correct - that section is the PLT section. I'm not sure why ELF gives
the section that name but it is normally known as the PLT and the .got
section is known as the GOT.
> I don't think it is possible to add a tracking function in the ELF reader,
> since the executable file is loaded before the Valgrind tool is loaded ?
>
> There is already an interface for iterating over segments:
> VG_(seginfo_syms_howmany)() and VG_(seginfo_syms_getidx)(). Would it be
> possible to make the .got.plt section information available via these
> functions ? All I need is the start address and the size of the .got.plt
> section.
Actually, the information is already there by the looks of it - you
just need to call VG_(seginfo_sect_kind) and see if the resulting
value is Vg_SectPLT or not.
> And what about DSO's loaded after program start by dlopen() calls ? Should a
> tracking function be added for this, or should I add wrappers for dlopen()
> and dlclose() in the drd tool ?
The segment infomation will be updated automatically when a DSO is
loaded, so using the seginfo calls should handle all that transparently.
The only thing you might need to do is that if you are caching anything
relating to whether an address is in a PLT or not you will need to watch
for unmaps of that address range and discard that cached data.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Bart V. A. <bar...@gm...> - 2007-01-01 10:05:31
|
On 12/31/06, Tom Hughes <to...@co...> wrote:
> In message <e2e...@ma...>
> "Bart Van Assche" <bar...@gm...> wrote:
>
> > Thanks for the URL. Ulrich Drepper has a paper about DSO's (dynamic shared
> > objects) that explains the GOT (global offset table) and the PLT (procedure
> > linkage table). Apparently the data races that were detected and that I
> > would like to suppress are caused by lazy relocation of symbols in shared
> > libraries. Each time a function in a shared library is called, this happens
> > via an indirect jump in the GOT (at least on IA-32). The first time a shared
> > library function is called, the corresponding entry in the GOT is filled in.
> > There is one GOT entry per shared library function, and these entries are
> > shared over threads. Does anyone have a suggestion on how I can find out the
> > address range of all GOT entries ? Then I can easily suppress data races
> > triggered on the GOT.
>
> You mean PLT I think, not GOT - the GOT is for global data access and
> the PLT is for function calls.
>
> The ELF reader in m_debuginfo already detects the PLT and GOT so you
> would need to make it record that information somewhere I guess.
>
> Tom

From the output of objdump -d:

080486e4 <_Znwj@plt>:
 80486e4: ff 25 00 9d 04 08   jmp    *0x8049d00
 80486ea: 68 38 00 00 00      push   $0x38
 80486ef: e9 70 ff ff ff      jmp    8048664 <_init+0x18>

From the output of readelf -a:

Section Headers:
  [Nr] Name      Type      Addr      Off     Size    ES  Flg  Lk  Inf  Al
  [22] .got      PROGBITS  08049cd4  000cd4  000004  04  WA    0    0   4
  [23] .got.plt  PROGBITS  08049cd8  000cd8  000044  04  WA    0    0   4

Conflicting accesses were reported on location 0x8049d00 (size 4). Is my conclusion correct that this data resides in the .got.plt section ?

I don't think it is possible to add a tracking function in the ELF reader, since the executable file is loaded before the Valgrind tool is loaded ?

There is already an interface for iterating over segments: VG_(seginfo_syms_howmany)() and VG_(seginfo_syms_getidx)(). Would it be possible to make the .got.plt section information available via these functions ? All I need is the start address and the size of the .got.plt section.

And what about DSO's loaded after program start by dlopen() calls ? Should a tracking function be added for this, or should I add wrappers for dlopen() and dlclose() in the drd tool ?

Bart. |
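A quick check of the numbers quoted above: .got.plt starts at 0x08049cd8 and is 0x44 bytes long, so it covers 0x08049cd8 up to (but not including) 0x08049d1c. The reported location 0x8049d00, which is also the slot read by the "jmp *0x8049d00" in the PLT stub, falls inside that range, consistent with the conflicting accesses being on a .got.plt entry, as Tom confirms elsewhere in the thread.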
|
From: Bart V. A. <bar...@gm...> - 2007-01-01 09:53:50
|
On 12/31/06, Julian Seward <js...@ac...> wrote: > > > > can be easily intercepted (via VG_(needs_malloc_replacement)(...)). But > is > > it normal that via this mechanism malloc() calls performed by /lib/ld- > > linux.so are not intercepted ? > > Unfortunately there will be an initial section of the program for which > these intercepts do not apply. > > The replacement functions live in drd/vgpreload_drd-<arch>-linux.so > (by analogy with memcheck's ones). > > At startup, V loads the executable into memory - that is, maps its text > and data sections, and those of its ELF interpreter, but that's all. It > then starts running the ELF interpreter on the simulated CPU, and that > takes > care of loading/linking all the other .so's. V doesn't have any further > involvement, which is nice, since ELF linking/loading is a nightmare of > complexity. > > Except .. the preload .so's, drd/vgpreload_drd-<arch>-linux.so et al. > These need to be incorporated into the memory image of the client > program. V can't do that directly, so what it does is to add the name > of this .so to the LD_PRELOAD env var handed to the client. This causes > its dynamic linker to load in the preload. > > As soon as the preload is loaded (via an mmap syscall) V reads its symbol > tables, notes the malloc replacements, and routes all subsequent > malloc/free > calls to them. But by then it may be that ld.so has already called > malloc/free. > > I think that explains what you are seeing. I hope that makes sense. > Loading the executable and getting started is one of the most hairy > parts of the system. > > Does the lack of intercepts right from the start cause a problem for drd? > > J > This does not cause a real problem for the drd tool. The drd tool tries to print the allocation context every time a conflicting access is detected, and I noticed that frequently the drd tool was unable to find the allocation context. That is why I compared the output triggered by mtrace() with tracing output added in the malloc wrappers. After that I found out that the conflicting accesses were related to the PLT /GOT mechanism, and not to malloc() calls performed from within ld-linux.so. Bart. |
|
From: <js...@ac...> - 2007-01-01 06:09:11
|
Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2007-01-01 09:00:02 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 217 tests, 10 stderr failures, 6 stdout failures, 0 posttest failures == memcheck/tests/leak-tree (stderr) memcheck/tests/leakotron (stdout) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/xml1 (stderr) none/tests/faultstatus (stderr) none/tests/fdleak_cmsg (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/ppc32/jm-fp (stdout) none/tests/ppc32/jm-fp (stderr) none/tests/ppc32/round (stdout) none/tests/ppc32/round (stderr) none/tests/ppc32/test_fx (stdout) none/tests/ppc32/test_fx (stderr) none/tests/ppc32/test_gx (stdout) |
|
From: <js...@ac...> - 2007-01-01 05:04:07
|
Nightly build on phoenix ( SuSE 10.0 ) started at 2007-01-01 04:30:01 GMT Checking out vex source tree ... done Building vex ... done Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 250 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures == memcheck/tests/leak-tree (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) |
|
From: Tom H. <to...@co...> - 2007-01-01 03:58:59
|
Nightly build on dunsmere ( athlon, Fedora Core 6 ) started at 2007-01-01 03:30:12 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 252 tests, 5 stderr failures, 2 stdout failures, 0 posttest failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_detached (stdout) |