From: Nicholas N. <nj...@ca...> - 2004-06-24 14:43:10
Hi,

VG_STACK_SIZE_W is only used in vg_mylibc.c:get_real_execontext(), and it
looks like it should actually be VG_SIGSTACK_SIZE_W there. VG_STACK_SIZE_W
could then be removed altogether. Does that sound right?

N

From: Nicholas N. <nj...@ca...> - 2004-06-24 14:20:24
On Thu, 24 Jun 2004, Nicholas Nethercote wrote:
>>> - new thread stacks
>>> - brk() allocations
>>
>> No, these grow upwards from where brk() would normally allocate memory.
>
> Are you sure?
>
> The call stack for new thread stacks is:
>
>   do__apply_in_new_thread() calls
>     VG_(client_alloc)() which calls
>       VG_(mmap)[CLIENT,FIXED] to allocate the memory

I stand by this one, but maybe your text above wasn't questioning this.

> The call stack for brk() allocations is:
>
>   PRE(brk) calls
>     do_brk() which calls
>       VG_(mmap)[CLIENT,FIXED] to allocate the memory
>
> VG_(mmap)[CLIENT,FIXED] allocates in the client-seg-map area.

Whoops; I was wrong, you're right: brk() allocations go where they normally
would, just above the main executable.

N

From: Nicholas N. <nj...@ca...> - 2004-06-24 08:32:24
On Wed, 23 Jun 2004, Jeremy Fitzhardinge wrote:
>> First, a question: there's a file vg_glibc.c which defines overrides for
>> brk() and sbrk(). But, AFAICT, on my system they don't seem to be getting
>> executed, rather the brk() handling stuff in vg_syscalls.c is what gets
>> executed. Is this meant to be like this? Is vg_glibc.c necessary?
>
> That's some half-finished stuff I was working on. The intent was to
> intercept the low-level memory allocation functions so that we could use
> glibc mmap/malloc/new/etc from within Valgrind without causing problems,
> so we can use third-party libraries. The trouble is that 1) there are
> some nasty cyclic dependencies which make startup hard, and 2) glibc
> makes it *very* difficult to intercept the low-level allocators
> consistently. The big risk is some code deep in a library running as
> part of the Valgrind context ends up using brk() or mmap() and
> allocating in the client address space, or worse, allocating memory
> without telling Valgrind's address space management code, and the
> mapping gets overwritten.
>
> So, I got stalled without really resolving these issues.

Ok. So what should we do about it? We've survived ok so far without
third-party libraries -- should we forget about using them and remove this
code? To me, there's not much point in having broken not-used code in the
repo, it's just confusing.

>> - new thread stacks
>> - brk() allocations
>
> No, these grow upwards from where brk() would normally allocate memory.

Are you sure? The call stack for new thread stacks is:

  do__apply_in_new_thread() calls
    VG_(client_alloc)() which calls
      VG_(mmap)[CLIENT,FIXED] to allocate the memory

The call stack for brk() allocations is:

  PRE(brk) calls
    do_brk() which calls
      VG_(mmap)[CLIENT,FIXED] to allocate the memory

VG_(mmap)[CLIENT,FIXED] allocates in the client-seg-map area.

>> - V heap: (via VG_(brk) and VG_(get_memory_from_mmap))
>>   - [stage2 executable]
>>   - proxyLWP pages
>>   - parts of the TC and TT
>>   - VG_(get_memory_from_mmap)() tool/core allocations (eg. in Massif)
>>   - VG_(arena_malloc) non-CLIENT tool/core allocations
>>   - VG_(malloc) tool allocations
>>   - [V-stack]
>
> V's stack is the one originally given to us by the kernel, I think. Or
> do we still switch?

It's the one given by the kernel, growing down from 0xbfffffff.

>> I think the V-map-seg and the V-heap can be merged -- Valgrind doesn't
>> need its own heap.
>
> We can certainly grow V's heap in the same area as mappings, but it
> might lead to more non-deterministic out-of-memory failures if they end
> up colliding.

I don't see why V needs a heap at all -- is there any reason it can't do
everything with non-contiguous mappings? That's how it was before the FV
reorganisation.

> There's the other trick which Paul Mackerras (I think) suggested:
> putting the symtab reader into another process, and talk to it with
> pipes.
>
> Of course, all of this is moot in a 64-bit address space...

The question of how best to lay out the address space is moot. But the
potential reduction in complexity and code size will still be a boon.

N

From: Tom H. <to...@co...> - 2004-06-24 02:25:07
Nightly build on dunsmere (Fedora Core 2) started at 2004-06-24 03:20:02 BST

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 168 tests, 7 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/writev (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-06-24 02:19:39
Nightly build on audi (Red Hat 9) started at 2004-06-24 03:15:02 BST

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 168 tests, 7 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-06-24 02:13:28
Nightly build on ginetta (Red Hat 8.0) started at 2004-06-24 03:10:02 BST

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

sem: valgrind ./sem
semlimit: valgrind ./semlimit
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 168 tests, 4 stderr failures, 0 stdout failures =================
helgrind/tests/deadlock (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-06-24 02:08:16
Nightly build on alvis (Red Hat 7.3) started at 2004-06-24 03:05:02 BST

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 168 tests, 8 stderr failures, 1 stdout failure =================
helgrind/tests/deadlock (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
memcheck/tests/badfree-2trace (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/brk (stderr)
memcheck/tests/error_counts (stdout)
memcheck/tests/new_nothrow (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-06-24 02:06:40
Nightly build on standard (Red Hat 7.2) started at 2004-06-24 03:00:01 BST

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

readline1: valgrind ./readline1
resolv: valgrind ./resolv
seg_override: valgrind ./seg_override
sem: valgrind ./sem
semlimit: valgrind ./semlimit
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 168 tests, 1 stderr failure, 0 stdout failures =================
memcheck/tests/badfree-2trace (stderr)
make: *** [regtest] Error 1

From: Jeremy F. <je...@go...> - 2004-06-24 01:31:05
On Thu, 2004-06-24 at 01:09 +0100, Nicholas Nethercote wrote:
> Hi,
>
> I've done a big trawl through all the allocation functions. The good news
> is that there's quite a bit of scope for simplifying them; various
> functions have sprouted over time that do similar things to each other.
>
> First, a question: there's a file vg_glibc.c which defines overrides for
> brk() and sbrk(). But, AFAICT, on my system they don't seem to be getting
> executed, rather the brk() handling stuff in vg_syscalls.c is what gets
> executed. Is this meant to be like this? Is vg_glibc.c necessary?

That's some half-finished stuff I was working on. The intent was to
intercept the low-level memory allocation functions so that we could use
glibc mmap/malloc/new/etc from within Valgrind without causing problems, so
we can use third-party libraries. The trouble is that 1) there are some
nasty cyclic dependencies which make startup hard, and 2) glibc makes it
*very* difficult to intercept the low-level allocators consistently. The
big risk is some code deep in a library running as part of the Valgrind
context ends up using brk() or mmap() and allocating in the client address
space, or worse, allocating memory without telling Valgrind's address space
management code, and the mapping gets overwritten.

So, I got stalled without really resolving these issues.

> Second, AFAICT, this is how the main parts of the address space get used,
> which is interesting reading:
>
> - client-heap:
>   - [client executable]
>   - nothing else (?) (no client heap as such due to brk() intercepts?)

Yep.

> - client-map-seg: (via VG_(mmap), native mmap())
>   - stack extension (on SIGSEGV)

Well, I guess, but that's really the client stack area, which grows into
the map area.

>   - new thread stacks
>   - brk() allocations

No, these grow upwards from where brk() would normally allocate memory.

>   - malloc() allocations, if malloc is replacement
>   - mmap() allocations [should include client libraries (?)]
>   - ipcop_shmat() allocations (?)

Yep.

>   - [client stack]
>
> - shadow memory (via shadow_alloc())
>
> - V-map-seg: (via VG_(mmap))
>   - tmp .so debug symbols
>   - any libraries used by core/tools (?)

Yes.

> - V heap: (via VG_(brk) and VG_(get_memory_from_mmap))
>   - [stage2 executable]
>   - proxyLWP pages
>   - parts of the TC and TT
>   - VG_(get_memory_from_mmap)() tool/core allocations (eg. in Massif)
>   - VG_(arena_malloc) non-CLIENT tool/core allocations
>   - VG_(malloc) tool allocations
>   - [V-stack]

V's stack is the one originally given to us by the kernel, I think. Or do
we still switch?

> It's a tortuous call graph, but I think I've got it right.
>
> I think the V-map-seg and the V-heap can be merged -- Valgrind doesn't
> need its own heap.

We can certainly grow V's heap in the same area as mappings, but it might
lead to more non-deterministic out-of-memory failures if they end up
colliding.

There's the other trick which Paul Mackerras (I think) suggested: putting
the symtab reader into another process, and talking to it with pipes.

Of course, all of this is moot in a 64-bit address space...

J

From: Nicholas N. <nj...@ca...> - 2004-06-24 00:09:37
Hi,
I've done a big trawl through all the allocation functions. The good news
is that there's quite a bit of scope for simplifying them; various
functions have sprouted over time that do similar things to each other.
First, a question: there's a file vg_glibc.c which defines overrides for
brk() and sbrk(). But, AFAICT, on my system they don't seem to be getting
executed, rather the brk() handling stuff in vg_syscalls.c is what gets
executed. Is this meant to be like this? Is vg_glibc.c necessary?
Second, AFAICT, this is how the main parts of the address space get used,
which is interesting reading:
- client-heap:
  - [client executable]
  - nothing else (?) (no client heap as such due to brk() intercepts?)

- client-map-seg: (via VG_(mmap), native mmap())
  - stack extension (on SIGSEGV)
  - new thread stacks
  - brk() allocations
  - malloc() allocations, if malloc is replacement
  - mmap() allocations [should include client libraries (?)]
  - ipcop_shmat() allocations (?)
  - [client stack]

- shadow memory (via shadow_alloc())

- V-map-seg: (via VG_(mmap))
  - tmp .so debug symbols
  - any libraries used by core/tools (?)

- V heap: (via VG_(brk) and VG_(get_memory_from_mmap))
  - [stage2 executable]
  - proxyLWP pages
  - parts of the TC and TT
  - VG_(get_memory_from_mmap)() tool/core allocations (eg. in Massif)
  - VG_(arena_malloc) non-CLIENT tool/core allocations
  - VG_(malloc) tool allocations
  - [V-stack]
It's a tortuous call graph, but I think I've got it right.
I think the V-map-seg and the V-heap can be merged -- Valgrind doesn't
need its own heap. A similar thing might be possible with the client
space. That's what I'm going to try next.
N