From: Nicholas N. <nj...@cs...> - 2005-02-15 23:27:48
|
On Mon, 14 Feb 2005, Jeremy Fitzhardinge wrote:
>> Really? So what about an application that mmaps, say, a big chunk
>> (>64M) of a database into memory?
>
> No, I mean internal to Valgrind. We could allocate multiple superblock
> chunks and hope they're contiguous, but it would seem a bit rude (since
> that denies them to the client).
We're already denying areas of memory to the client. Doesn't seem like a
big deal to me.
N
|
|
From: Jeremy F. <je...@go...> - 2005-02-15 19:49:50
|
At the moment, if vg_symtab2 finds two symbols with the same address, it
generally prefers the longer one over the shorter, modulo some logic for
stripping all the libc-internal prefixes like __, __GI_, __libc_, etc.
It seems to me, though, that choosing the longer symbol is never the
useful thing to do. I can't think of a single instance where a symbol
has aliases and the longer one is useful. The only case I can think of
where there are non-internal aliases is free/cfree, and nobody cares
about cfree (the symmetric free for calloc, which is redundant). The
shortest name is always the most expected name.
There is an exception: when a symbol is versioned, we want to remember
that (ie, pthread_create@GLIBC_2.0 vs pthread_create). It's important to
be able to distinguish the different versions for wrapping/redirection.
And it seems that making this change has no effect on the regtests; I'm
not sure if that's because all the differences have already been
filtered out or not.
What do people think?
J
|
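[Editorial sketch] The shortest-name rule described above can be modelled roughly as follows. This is an illustration, not vg_symtab2's actual code; the prefix set and the tie-breaking order are my own assumptions:

```python
def strip_internal(name):
    # Strip libc-internal prefixes, roughly as the mail describes.
    # The exact prefix list here is an assumption for illustration.
    for prefix in ("__GI_", "__libc_", "__"):
        if name.startswith(prefix):
            return name[len(prefix):]
    return name

def preferred_symbol(aliases):
    # Among symbols sharing one address, prefer the shortest stripped
    # name; among equal stripped lengths, prefer the bare (unprefixed)
    # spelling.  Versioned names (e.g. pthread_create@GLIBC_2.0) would
    # be kept distinct from the unversioned one, per the exception in
    # the mail, rather than collapsed by this rule.
    return min(aliases, key=lambda n: (len(strip_internal(n)), len(n), n))
```

For instance, `preferred_symbol(["cfree", "free"])` picks `free`, matching the free/cfree example above.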
|
From: Jeremy F. <je...@go...> - 2005-02-15 10:31:48
|
On Mon, 2005-02-07 at 12:36 +0600, Evgeny Baskakov wrote:
> Hello.
>
> I think there is a bug in valgrind. There is a small pre-compiled
> program attached.
> It has been produced by our own compiler and linker tools. This
> program (which prints
> "Hello world!" on the console) works fine on all Linux distros (Red
> Hat 8, 9, FC1, 2, 3,
> SuSe 8, 9, etc.) But when I try to run it with valgrind, the latter
> crashes.
>
> Valgrind seems to be a great tool, so it would be extremely fine if it
> could work with
> our executables. The manual says that it works with all Linux/x86
> executables, but
> it appeared that not all executables are supported...
>
> If there is no way to fix valgrind core, could you tell us what we
> should change in our
> linker to produce proper executables?
Your executable just flat out doesn't work for me. It dumps core as
soon as I run it natively, as well as causing Valgrind to crash
instantly. ldd doesn't work on it.
Oh, I see your problem:
readelf -hl shows:
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x08048000 0x08048000 0x002a7 0x01000 R 0x1000
^^^^^^^ ^^^^^^^ ^
LOAD 0x001000 0x08049000 0x08049000 0x1333f 0x14000 R E 0x1000
LOAD 0x015000 0x0805d000 0x0805d000 0x0151c 0x02000 RW 0x1000
LOAD 0x017000 0x0805f000 0x0805f000 0x00000 0x1b000 RW 0x1000
Your filesize is < your memsize, so the loader is required to pad the
difference with zeros, but your mappings are read-only, so it dies when
it tries. You should set the filesize to be the same as your memsize
unless you want this behaviour, in which case you should make the
mapping writable.
J
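[Editorial sketch] The condition diagnosed above — a PT_LOAD segment whose p_memsz exceeds its p_filesz, forcing the loader to pad with zeros, but whose mapping is not writable — can be checked mechanically. A minimal sketch for 32-bit little-endian ELF files, using the standard ELF32 field offsets (an illustration, not Valgrind's loader):

```python
import struct

PT_LOAD, PF_W = 1, 2

def bss_in_readonly_load(elf_bytes):
    """True if any PT_LOAD segment needs zero padding (p_memsz >
    p_filesz) but is not writable -- the broken case above.
    32-bit little-endian ELF only, for brevity."""
    # ELF32 header: e_phoff at byte 28; e_phentsize/e_phnum at 42/44.
    e_phoff, = struct.unpack_from("<I", elf_bytes, 28)
    e_phentsize, e_phnum = struct.unpack_from("<HH", elf_bytes, 42)
    for i in range(e_phnum):
        # Phdr32 layout: p_type, p_offset, p_vaddr, p_paddr,
        # p_filesz, p_memsz, p_flags (p_align follows, unread here).
        p_type, _off, _va, _pa, p_filesz, p_memsz, p_flags = \
            struct.unpack_from("<7I", elf_bytes, e_phoff + i * e_phentsize)
        if p_type == PT_LOAD and p_memsz > p_filesz and not (p_flags & PF_W):
            return True
    return False
```

Run against the readelf output quoted above, the first LOAD segment (FileSiz 0x2a7 < MemSiz 0x1000, flags R only) would trip the check.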
|
|
From: Tom H. <to...@co...> - 2005-02-15 09:52:11
|
In message <6664B2C1086BF14EBDB96CEEDE4A1EA429F6DE@mail.excelsior>
Evgeny Baskakov <eba...@ex...> wrote:
> I think there is a bug in valgrind. There is a small pre-compiled
> program attached. It has been produced by our own compiler and
> linker tools. This program (which prints "Hello world!" on the
> console) works fine on all Linux distros (Red Hat 8, 9, FC1, 2, 3,
> SuSe 8, 9, etc.) But when I try to run it with valgrind, the latter
> crashes.
How does it crash? What exactly does it print? It is very rare for
valgrind to crash without displaying some useful information that
would help diagnose the crash, so if it really is displaying nothing
at all then saying so is helpful.
> I didn't create the bug report via Bugzilla because I couldn't
> attach the test program. So I send this letter. Sorry for
> inconveniences.
It is certainly possible to attach a program to a bug, and that would
be the preferred way for you to report this.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: <js...@ac...> - 2005-02-15 03:58:22
|
Nightly build on phoenix ( SuSE 9.1 ) started at 2005-02-15 03:50:00 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 196 tests, 12 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_fcntl (stderr)
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/writev (stderr)
none/tests/map_unmap (stdout)
none/tests/map_unmap (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <to...@co...> - 2005-02-15 03:29:55
|
Nightly build on dunsmere ( Fedora Core 3 ) started at 2005-02-15 03:20:09 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 203 tests, 8 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_supp (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2005-02-15 03:23:49
|
Nightly build on audi ( Red Hat 9 ) started at 2005-02-15 03:15:11 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind --num-callers=4 ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 202 tests, 6 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2005-02-15 03:17:42
|
Nightly build on ginetta ( Red Hat 8.0 ) started at 2005-02-15 03:10:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 201 tests, 8 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/threadederrno (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2005-02-15 03:15:39
|
Nightly build on standard ( Red Hat 7.2 ) started at 2005-02-15 03:00:03 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 201 tests, 9 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2005-02-15 03:12:55
|
Nightly build on alvis ( Red Hat 7.3 ) started at 2005-02-15 03:05:03 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 201 tests, 13 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/post-syscall (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/threadederrno (stderr)
make: *** [regtest] Error 1
|
|
From: Jeremy F. <je...@go...> - 2005-02-15 02:51:36
|
On Mon, 2005-02-14 at 21:07 -0500, Greg Parker wrote:
> Jeremy Fitzhardinge writes:
> > Come to think of it, if there are only a small number of superpages,
> > then just mprotecting them on entry/exit to generated code would be
> > pretty cheap (since you'd only have to do the Valgrind one(s), which is
> > at most one mprotect per superblock, and fewer if they're contiguous).
> > It might get expensive if there are lots of syscalls, but it would save
> > on extra code per memory reference...
>
> You mean mprotect the superblock itself when moving in and out of
> Valgrind? That won't work because part of the generated code needs
> to read and write things like the shadow memory and the ThreadState.
> The generated simulation can't touch them, but the generated
> instrumentation and scaffolding must.
Yep, you're right. Scratch that.
J
|
|
From: Greg P. <gp...@us...> - 2005-02-15 02:07:32
|
Jeremy Fitzhardinge writes:
> Come to think of it, if there are only a small number of superpages,
> then just mprotecting them on entry/exit to generated code would be
> pretty cheap (since you'd only have to do the Valgrind one(s), which is
> at most one mprotect per superblock, and fewer if they're contiguous).
> It might get expensive if there are lots of syscalls, but it would save
> on extra code per memory reference...
You mean mprotect the superblock itself when moving in and out of
Valgrind? That won't work because part of the generated code needs
to read and write things like the shadow memory and the ThreadState.
The generated simulation can't touch them, but the generated
instrumentation and scaffolding must.
--
Greg Parker gp...@us...
|
|
From: Jeremy F. <je...@go...> - 2005-02-15 01:53:05
|
On Mon, 2005-02-14 at 17:31 -0800, Robert Walsh wrote:
> On Mon, 2005-02-14 at 17:21 -0800, Jeremy Fitzhardinge wrote:
> > That would become mandatory with the superblock scheme, since we could
> > never rely on being able to mmap more than 64M.
>
> Really? So what about an application that mmaps, say, a big chunk
> (>64M) of a database into memory?
No, I mean internal to Valgrind. We could allocate multiple superblock
chunks and hope they're contiguous, but it would seem a bit rude (since
that denies them to the client).
J
|
|
From: Julian S. <js...@ac...> - 2005-02-15 01:48:11
|
> I'm still concerned about fragmentation and intermingling Valgrind
> allocations with client ones, though with a 64M chunk size it should be
> lessened.
Yeh ... I think there's no 100% solution. I guess the superblock
manager could try hard to keep a large contiguous bunch of free
superblocks to satisfy very large mmap requests. On 64-bit targets
it's pretty much a non-problem, but we might have to be quite careful
on 32-bit targets. I guess there's no real reason not to jack up the
superblock size to 128M or even 256M; would that help?
J
|
|
From: Robert W. <rj...@du...> - 2005-02-15 01:31:22
|
On Mon, 2005-02-14 at 17:21 -0800, Jeremy Fitzhardinge wrote:
> That would become mandatory with the superblock scheme, since we could
> never rely on being able to mmap more than 64M.
Really? So what about an application that mmaps, say, a big chunk
(>64M) of a database into memory?
|
|
From: Jeremy F. <je...@go...> - 2005-02-15 01:24:49
|
On Mon, 2005-02-14 at 18:43 -0600, Nicholas Nethercote wrote:
> It sounds pretty good to me too -- keeps the nice property that the client
> cannot trash V, but also allows more flexibility in layout. I would
> suggest that on 32-bit platforms we don't keep things exactly as they are
> now; I think the shadow memory should be allocated on demand, rather than
> in a huge chunk at the start, which is the main cause of address layout
> brittleness.
I'm still concerned about fragmentation and intermingling Valgrind
allocations with client ones, though with a 64M chunk size it should be
lessened.
> It will be interesting to see how slow the check becomes, and how much it
> bloats code sizes, though.
Well, if we keep the address space as-is (though described in terms of
superblocks) we can still use segmentation.
> There are two other ways we can reduce address layout brittleness:
>
> - read debug info incrementally, so as to handle huge (300MB+) executables
>   and shared objects
That would become mandatory with the superblock scheme, since we could
never rely on being able to mmap more than 64M.
> - move to a compressed V bit scheme for Memcheck, which can reduce shadow
>   memory requirements from 9 bits-per-byte to 4 bits-per-byte.
>   (This is feasible because the vast majority of V-bytes are fully defined
>   or fully undefined.)
Have you prototyped this yet? It would be interesting to see how this
works in practice, particularly handling pathological cases (since it
would be pretty easy to write a program which has very atypical V-bit
patterns, and it still needs to do something sensible).
J
|
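[Editorial sketch] As a toy illustration of why the compression quoted above is feasible (my own model, not a proposed Memcheck encoding): if the vast majority of bytes are fully defined or fully undefined, a small per-byte tag plus a side table for the rare partially-defined bytes carries the same information as full 8-bit V masks:

```python
# 2-bit tags; PARTIAL bytes spill their exact V mask to a side table.
DEFINED, UNDEFINED, NOACCESS, PARTIAL = range(4)

class CompressedShadow:
    def __init__(self):
        self.tags = {}      # addr -> 2-bit tag
        self.partial = {}   # addr -> exact 8-bit V mask (1 = undefined)

    def set_vbits(self, addr, vmask):
        if vmask == 0x00:
            self.tags[addr] = DEFINED
            self.partial.pop(addr, None)
        elif vmask == 0xFF:
            self.tags[addr] = UNDEFINED
            self.partial.pop(addr, None)
        else:
            self.tags[addr] = PARTIAL
            self.partial[addr] = vmask

    def get_vbits(self, addr):
        tag = self.tags.get(addr, NOACCESS)
        if tag == DEFINED:
            return 0x00
        if tag == UNDEFINED:
            return 0xFF
        if tag == PARTIAL:
            return self.partial[addr]
        raise KeyError(addr)  # not addressable
```

The pathological case Jeremy worries about shows up directly here: a program with many partially-defined bytes fills the side table, and the scheme degrades back toward full V-byte storage.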
|
From: Jeremy F. <je...@go...> - 2005-02-15 01:18:59
|
On Mon, 2005-02-14 at 18:44 -0600, Nicholas Nethercote wrote:
> And you have to call mprotect whenever you switch between generated code
> and Valgrind code, right? That could be expensive.
No! The table would be an array of single pages, each of which is a
proxy for a whole superblock. You'd only need to change their
protections when you allocate a superblock for client use.
Come to think of it, if there are only a small number of superpages,
then just mprotecting them on entry/exit to generated code would be
pretty cheap (since you'd only have to do the Valgrind one(s), which is
at most one mprotect per superblock, and fewer if they're contiguous).
It might get expensive if there are lots of syscalls, but it would save
on extra code per memory reference...
J
|
|
From: Nicholas N. <nj...@cs...> - 2005-02-15 00:49:36
|
On Mon, 14 Feb 2005, Julian Seward wrote:
> There's been a lot of development work on V recently, not all of
> it visible. Jeremy has improved the threading situation by finally
> allowing V to run thread libraries natively, which is a great
> simplification. I'm working on "Vex", an architecture-neutral
> replacement for the UCode framework which facilitates porting
> V away from x86.
>
> It has been a good six months since 2.2 was released and it's
> now way overdue for another stable release. Many bugs have been
> fixed, and the new threading stuff really ought to go live sooner
> rather than later. So we will do a 2.4 release, hopefully
> within the next month. Jeremy is in charge of stabilising the
> tree to a point where it is at releasable quality.
>
> 2.4 will be the last x86-only valgrind, and the last using the
> UCode framework. Future valgrinds will be built on Vex, a new
> library for dynamic translation and instrumentation. Vex is now
> working well for x86, and amd64 support is starting to work.
>
> I will be merging the current cvs tree and the vex development
> tree. The aim will be to release an alpha grade "technology
> demonstrator" as soon as amd64-linux support is usable. We will
> then push towards stabilising the tree for a mid-year valgrind-3.0
> release, which should provide Linux support for x86, amd64, and
> possibly ppc32, all from a unified code base.
I think this is all great. Big kudos to Julian and Jeremy for these
two strands of work, which should improve Valgrind's future viability
(via Vex) and correctness/maintainability/stability (via native
pthreads) no end. Big ongoing thanks to Tom also for his tireless
bug-fixing.
I think it's going to be a good year for Valgrind :)
N
|
|
From: Nicholas N. <nj...@cs...> - 2005-02-15 00:45:05
|
On Mon, 14 Feb 2005, Jeremy Fitzhardinge wrote:
> A thought I just had in the shower: if the table entries are actually
> pages, you can use mprotect to mark them as client-accessible or not.
> Then the sequence looks like:
>
>     mov %addr, %tmp
>     shrl $n, %tmp          /* %tmp /= (superblocksize / pagesize) */
>     testb $0, table(%tmp)  /* any memory-touching instruction;
>                               faults if %addr is bad */
>
> This would use no cache and has no jumps, but it does use TLB entries
> for the table, so it isn't completely free. But it's worth considering.
And you have to call mprotect whenever you switch between generated code
and Valgrind code, right? That could be expensive.
N
|
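[Editorial sketch] The address-to-table mapping in the quoted sequence can be sanity-checked numerically. Assuming 64M superblocks and 4K pages (sizes discussed elsewhere in the thread), the shift n is 14, and every address in superblock k probes a byte in page k of the table — so mprotecting that one proxy page gates the whole superblock:

```python
PAGE = 4096
SUPERBLOCK = 64 * 2**20

# n such that 2**n == SUPERBLOCK // PAGE  (16384, so n == 14)
SHIFT = (SUPERBLOCK // PAGE).bit_length() - 1

def table_page(addr):
    # The probe touches table byte (addr >> SHIFT); dividing by the
    # page size gives which table *page* that byte lies in -- which
    # works out to the superblock number, so one mprotect on that
    # page makes the probe fault for every address in the superblock.
    return (addr >> SHIFT) // PAGE
```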
|
From: Nicholas N. <nj...@cs...> - 2005-02-15 00:43:49
|
On Mon, 14 Feb 2005, Jeremy Fitzhardinge wrote:
>> So, what am I missing?
>
> It looks reasonable. For ia32 I would suggest sticking to the current
> layout, so there would just be a series of fixed, preallocated
> superblocks. In fact, for any 32-bit implementation I think that's the
> best way to go. Since there are only 64 superblocks, we can pretty
> easily determine in advance whether they'll end up being client or
> Valgrind usable (maybe a few of them would be unallocated).
It sounds pretty good to me too -- keeps the nice property that the
client cannot trash V, but also allows more flexibility in layout. I
would suggest that on 32-bit platforms we don't keep things exactly as
they are now; I think the shadow memory should be allocated on demand,
rather than in a huge chunk at the start, which is the main cause of
address layout brittleness.
It will be interesting to see how slow the check becomes, and how much
it bloats code sizes, though.
There are two other ways we can reduce address layout brittleness:
- read debug info incrementally, so as to handle huge (300MB+)
  executables and shared objects
- move to a compressed V bit scheme for Memcheck, which can reduce
  shadow memory requirements from 9 bits-per-byte to 4 bits-per-byte.
  (This is feasible because the vast majority of V-bytes are fully
  defined or fully undefined.)
> For 64-bit, there's no reason we couldn't use, say, a 16Gbyte super-page
> size and have a 16k-entry table for a 48-bit address space. At the
> moment Linux constrains user-space to 256Gbytes, but after 2.6.1[12]
> that will be unconstrained (ie, up to 2^47 or so, I think) because it
> will use a 4-level pagetable structure. At that point, we could put
> Valgrind way up high and leave the client with a very clear address
> space.
Sounds ok too.
N
|
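[Editorial note] The sizes quoted above check out: carving a 48-bit address space into 16 Gbyte super-pages needs exactly a 16k-entry table, and a 32-bit space holds exactly the 64 superblocks of 64M mentioned earlier in the thread. The arithmetic, for the record:

```python
# 48-bit space / 16 GiB super-pages -> entries in the 64-bit table.
entries_64bit = 2**48 // (16 * 2**30)   # the "16k-entry table"

# 32-bit space / 64 MiB superblocks -> "only 64 superblocks" on ia32.
blocks_32bit = 2**32 // (64 * 2**20)
```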