From: Tom H. <th...@cy...> - 2004-07-16 23:08:08
|
CVS commit by thughes:
Fix typo in comment.
M +1 -1 vg_memory.c 1.63
--- valgrind/coregrind/vg_memory.c #1.62:1.63
@@ -591,5 +591,5 @@ void VG_(pad_address_space)(void)
}
-/* Removed the address space padding added by VG_(pad_address_space)
+/* Remove the address space padding added by VG_(pad_address_space)
by removing any mappings that it created. */
void VG_(unpad_address_space)(void)
|
|
From: Julian S. <js...@ac...> - 2004-07-16 21:35:15
|
CVS commit by jseward:
Bring this up to date.
M +15 -4 AUTHORS 1.7
--- valgrind/AUTHORS #1.6:1.7
@@ -1,10 +1,21 @@
-Julian Seward, js...@ac..., is the main author.
+Julian Seward, js...@ac..., was the original author, creating the
+dynamic translation framework, memcheck stuff, and the
+signal/syscall/threads support gunk.
 Nicholas Nethercote, nj...@ca..., did the core/tool
-generalisation, and wrote Cachegrind and some of the other tools.
+generalisation, and wrote Cachegrind and some of the other tools, and
+tons of other stuff, including code generation improvments.
-Jeremy Fitzhardinge, je...@go..., wrote much of Helgrind, and lots
-of low-level syscall/signal stuff.
+Jeremy Fitzhardinge, je...@go..., wrote Helgrind, and lots of
+syscall/signal simulation stuff, including a complete redesign of how
+syscalls and signals are handled. Also code generation improvements.
+
+Tom Hughes, th...@cy..., did a vast number of bug fixes, and
+helped out with support for more recent Linux/glibc versions".
+
+Robert Walsh, rj...@du..., added file descriptor leakage
+checking, new library interception machinery, support for client
+allocation pools, and minor other tweakage.
 readelf's dwarf2 source line reader, written by Nick Clifton, was
|
|
From: Nicholas N. <nj...@ca...> - 2004-07-16 18:30:32
|
On Fri, 16 Jul 2004, Jeremy Fitzhardinge wrote:

> On Fri, 2004-07-16 at 14:21 +0100, Julian Seward wrote:
>
> The other motivation for the SEGV direct map scheme is that it does work
> for 64 bit targets. The current scheme doesn't scale, and would
> probably need another level of page table to make it work on 64-bit
> targets, which affects all the Tools which use it. The SEGV scheme can
> hide those details from the Tools.
>
> My feeling is that once you add all this extra complexity, the direct
> mapped SEGV scheme looks a lot more attractive. The performance
> difference vs the current scheme was around +/- 1%. In the 64-bit
> environment, the SEGV scheme would cost the same as it does now, whereas
> adding extra levels/cache lookups/etc are going to cost more.

We should be careful with terminology; two different things are being
mixed up here. There are two dimensions, giving three possible
approaches:

1. Current: ENSURE_MAPPABLE with shadow chunk table
2. allocate-on-SEGV with shadow chunk table
3. allocate-on-SEGV with direct mapping

(1) and (2) are better for 32-bit machines, where address space is
cramped, because direct mapping is an address space hog, and we only
need a 2-level table for 32-bit addresses. (3) is better for 64-bit
machines because address space is plentiful and a shadow chunk table is
difficult to do well with 64-bit addresses.

Hmm.

N
|
|
From: Jeremy F. <je...@go...> - 2004-07-16 18:04:56
|
On Fri, 2004-07-16 at 15:28 +0100, Julian Seward wrote:

> On Friday 16 July 2004 14:51, Nicholas Nethercote wrote:
> > On Fri, 16 Jul 2004, Julian Seward wrote:
> > > I think a major coup would be to arrive at a design which
> > > (a) reduces, perhaps, or localises, any Linux-specific
> > > assumptions about layout, to make porting to other OSs easier,
> > > and more (b) works uniformly well on both 32- and 64-bit
> > > platforms. Achieving both would be a majorly Good Thing.
> >
> > Yeah, I'm concerned about how shadow pages are going to be stored on
> > 64-bit architectures, because moving from a 2-level page table to a
> > 4-level one sounds bad.
>
> It is. But do we need 4 levels? What about a 3-level scheme
> where the top level table has a 22-bit index, then level 2
> also has a 22-bit index, and the final level has a 20-bit
> index -- (22,22,20) so to speak. On 32-bit machine we could
> stick with (16,16) or move to (12,20) so the final level code
> is shared with the 64-bit case.
>
> Idea #2. Is there a portable/sane way to restrict the addresses
> generated to (say) 40 bits, so that the top 24 are zero? Then
> we could use a (20,20) scheme. 40 bits is 256 GB, which is a big
> step up from 4GB. Probably not doable, I guess.

In both cases, a 1MByte page size sounds pretty expensive. Maybe not.
Not sure.

My feeling is that once you add all this extra complexity, the direct
mapped SEGV scheme looks a lot more attractive. The performance
difference vs the current scheme was around +/- 1%. In the 64-bit
environment, the SEGV scheme would cost the same as it does now, whereas
adding extra levels/cache lookups/etc are going to cost more.

J
|
|
From: Jeremy F. <je...@go...> - 2004-07-16 18:01:11
|
On Fri, 2004-07-16 at 14:21 +0100, Julian Seward wrote:

> I agree. A single mechanism is much better, and I would prefer the
> ENSURE_MAPPABLE scheme, as it doesn't make any assumptions about the
> underlying OS. But: isn't the allocate-on-SEGV approach required for
> tools which don't do shadow memory? Apologies if this is a dumb
> question, I didn't follow all this thread in detail.

If they don't use shadow, they don't need either mechanism.

> I think a major coup would be to arrive at a design which
> (a) reduces, perhaps, or localises, any Linux-specific
> assumptions about layout, to make porting to other OSs easier,
> and more (b) works uniformly well on both 32- and 64-bit
> platforms. Achieving both would be a majorly Good Thing.

The other motivation for the SEGV direct map scheme is that it does work
for 64 bit targets. The current scheme doesn't scale, and would
probably need another level of page table to make it work on 64-bit
targets, which affects all the Tools which use it. The SEGV scheme can
hide those details from the Tools.

J
|
|
From: Nicholas N. <nj...@ca...> - 2004-07-16 17:44:07
|
CVS commit by nethercote:
Slightly change, with J's approval, startup copyright messages to better
reflect reality.
M +1 -1 addrcheck/ac_main.c 1.63
M +1 -1 cachegrind/cg_main.c 1.72
M +2 -2 coregrind/vg_main.c 1.171
M +1 -1 helgrind/hg_main.c 1.80
M +1 -1 memcheck/mc_main.c 1.49
M +1 -1 none/tests/cmdline1.stdout.exp 1.6
M +1 -1 none/tests/cmdline2.stdout.exp 1.6
--- valgrind/addrcheck/ac_main.c #1.62:1.63
@@ -1264,5 +1264,5 @@ void SK_(pre_clo_init)(void)
VG_(details_description) ("a fine-grained address checker");
VG_(details_copyright_author)(
- "Copyright (C) 2002-2004, and GNU GPL'd, by Julian Seward.");
+ "Copyright (C) 2002-2004, and GNU GPL'd, by Julian Seward et al.");
VG_(details_bug_reports_to) (VG_BUGS_TO);
VG_(details_avg_translation_sizeB) ( 135 );
--- valgrind/cachegrind/cg_main.c #1.71:1.72
@@ -1428,5 +1428,5 @@ void SK_(pre_clo_init)(void)
VG_(details_description) ("an I1/D1/L2 cache profiler");
VG_(details_copyright_author)(
- "Copyright (C) 2002-2004, and GNU GPL'd, by Nicholas Nethercote.");
+ "Copyright (C) 2002-2004, and GNU GPL'd, by Nicholas Nethercote et al.");
VG_(details_bug_reports_to) (VG_BUGS_TO);
VG_(details_avg_translation_sizeB) ( 155 );
--- valgrind/coregrind/vg_main.c #1.170:1.171
@@ -1583,5 +1583,5 @@ void usage ( Bool debug_help )
" Extra options read from ~/.valgrindrc, $VALGRIND_OPTS, ./.valgrindrc\n"
"\n"
-" Valgrind is Copyright (C) 2000-2004 Julian Seward\n"
+" Valgrind is Copyright (C) 2000-2004 Julian Seward et al.\n"
" and licensed under the GNU General Public License, version 2.\n"
" Bug reports, feedback, admiration, abuse, etc, to: %s.\n"
@@ -1961,5 +1961,5 @@ static void process_cmd_line_options( UI
VERSION);
VG_(message)(Vg_UserMsg,
- "Copyright (C) 2000-2004, and GNU GPL'd, by Julian Seward.");
+ "Copyright (C) 2000-2004, and GNU GPL'd, by Julian Seward et al.");
}
--- valgrind/helgrind/hg_main.c #1.79:1.80
@@ -3276,5 +3276,5 @@ void SK_(pre_clo_init)(void)
VG_(details_description) ("a data race detector");
VG_(details_copyright_author)(
- "Copyright (C) 2002-2004, and GNU GPL'd, by Nicholas Nethercote.");
+ "Copyright (C) 2002-2004, and GNU GPL'd, by Nicholas Nethercote et al.");
VG_(details_bug_reports_to) (VG_BUGS_TO);
VG_(details_avg_translation_sizeB) ( 115 );
--- valgrind/memcheck/mc_main.c #1.48:1.49
@@ -1637,5 +1637,5 @@ void SK_(pre_clo_init)(void)
VG_(details_description) ("a memory error detector");
VG_(details_copyright_author)(
- "Copyright (C) 2002-2004, and GNU GPL'd, by Julian Seward.");
+ "Copyright (C) 2002-2004, and GNU GPL'd, by Julian Seward et al.");
VG_(details_bug_reports_to) (VG_BUGS_TO);
VG_(details_avg_translation_sizeB) ( 228 );
--- valgrind/none/tests/cmdline1.stdout.exp #1.5:1.6
@@ -38,5 +38,5 @@
Extra options read from ~/.valgrindrc, $VALGRIND_OPTS, ./.valgrindrc
- Valgrind is Copyright (C) 2000-2004 Julian Seward
+ Valgrind is Copyright (C) 2000-2004 Julian Seward et al.
and licensed under the GNU General Public License, version 2.
Bug reports, feedback, admiration, abuse, etc, to: valgrind.kde.org.
--- valgrind/none/tests/cmdline2.stdout.exp #1.5:1.6
@@ -60,5 +60,5 @@
Extra options read from ~/.valgrindrc, $VALGRIND_OPTS, ./.valgrindrc
- Valgrind is Copyright (C) 2000-2004 Julian Seward
+ Valgrind is Copyright (C) 2000-2004 Julian Seward et al.
and licensed under the GNU General Public License, version 2.
Bug reports, feedback, admiration, abuse, etc, to: valgrind.kde.org.
|
|
From: Nicholas N. <nj...@ca...> - 2004-07-16 17:32:31
|
CVS commit by nethercote:
apostrophe pedantry; comment change only
M +1 -1 vg_memory.c 1.62
M +1 -1 vg_syscalls.c 1.111
--- valgrind/coregrind/vg_memory.c #1.61:1.62
@@ -553,5 +553,5 @@ Addr VG_(find_map_space)(Addr addr, UInt
This is designed for use around system calls which allocate
memory in the process address space without providing a way to
- control it's location such as io_setup. By choosing a suitable
+ control its location such as io_setup. By choosing a suitable
address with VG_(find_map_space) and then adding a segment for
it and padding the address space valgrind can ensure that the
--- valgrind/coregrind/vg_syscalls.c #1.110:1.111
@@ -451,5 +451,5 @@ static OpenFd *allocated_fds;
static int fd_count = 0;
-/* Given a file descriptor, attempt to deduce it's filename. To do this,
+/* Given a file descriptor, attempt to deduce its filename. To do this,
we use /proc/self/fd/<FD>. If this doesn't point to a file, or if it
doesn't exist, we just return NULL. Otherwise, we return a pointer
|
|
From: Tom H. <th...@cy...> - 2004-07-16 15:36:54
|
CVS commit by thughes:
Add comments to explain the address space padding technology.
M +13 -0 vg_memory.c 1.61
--- valgrind/coregrind/vg_memory.c #1.60:1.61
@@ -546,4 +546,15 @@ Addr VG_(find_map_space)(Addr addr, UInt
}
+/* Pad the entire process address space, from VG_(client_base)
+ to VG_(valgrind_end) by creating an anonymous and inaccessible
+ mapping over any part of the address space which is not covered
+ by an entry in the segment list.
+
+ This is designed for use around system calls which allocate
+ memory in the process address space without providing a way to
+ control it's location such as io_setup. By choosing a suitable
+ address with VG_(find_map_space) and then adding a segment for
+ it and padding the address space valgrind can ensure that the
+ kernel has no choice but to put the memory where we want it. */
void VG_(pad_address_space)(void)
{
@@ -580,4 +591,6 @@ void VG_(pad_address_space)(void)
}
+/* Removed the address space padding added by VG_(pad_address_space)
+ by removing any mappings that it created. */
void VG_(unpad_address_space)(void)
{
|
|
From: Tom H. <th...@cy...> - 2004-07-16 14:39:38
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> Idea #2. Is there a portable/sane way to restrict the addresses
> generated to (say) 40 bits, so that the top 24 are zero? Then
> we could use a (20,20) scheme. 40 bits is 256 GB, which is a big
> step up from 4GB. Probably not doable, I guess.
Well currently Athlon 64's actually use a 48 bit virtual address
space and a 40 bit physical address space although I believe that
can change with different chip models.
It is reported in /proc/cpuinfo so is presumably read with cpuid
or something - here's what /proc/cpuinfo says on our Athlon 64:
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 4
model name : AMD Athlon(tm) 64 Processor 3200+
stepping : 8
cpu MHz : 1994.895
cache size : 1024 KB
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext lm 3dnowext 3dnow
bogomips : 4089.44
TLB size : 1088 4K pages
clflush size : 64
address sizes : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
|
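A quick check of the arithmetic behind those reported address sizes (the helper is purely illustrative): 48 bits of virtual space is 256 TiB, and 40 bits is 1 TiB — rather more than the 256 GB figure mentioned upthread, which corresponds to 38 bits.

```c
#include <assert.h>
#include <stdint.h>

/* Number of bytes addressable with a given number of address bits. */
uint64_t space_bytes(unsigned bits)
{
    return 1ULL << bits;
}
```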
From: Julian S. <js...@ac...> - 2004-07-16 14:38:05
|
> If the top-level table is dynamically rearranged to bring frequently
> used entries to the front, and the bottom level pages are big enough,
> then you may wind up with the first 3 entries being for the currently
> active stack, heap and data-area sub-tables. In which case the
> accesses would be cheap for the most case.

witter witter witter ...

Alternatively, the top-level table could have an associated small
direct-mapped cache, which would naturally come to hold the popular
entries, falling back to the main table on a miss. One problem is then
a new scheme would be needed to do the free alignment checking and I'm
not sure what that would be.

J
|
|
From: Julian S. <js...@ac...> - 2004-07-16 14:26:44
|
On Friday 16 July 2004 14:51, Nicholas Nethercote wrote:

> On Fri, 16 Jul 2004, Julian Seward wrote:
> > I think a major coup would be to arrive at a design which
> > (a) reduces, perhaps, or localises, any Linux-specific
> > assumptions about layout, to make porting to other OSs easier,
> > and more (b) works uniformly well on both 32- and 64-bit
> > platforms. Achieving both would be a majorly Good Thing.
>
> Yeah, I'm concerned about how shadow pages are going to be stored on
> 64-bit architectures, because moving from a 2-level page table to a
> 4-level one sounds bad.

It is. But do we need 4 levels? What about a 3-level scheme where the
top level table has a 22-bit index, then level 2 also has a 22-bit
index, and the final level has a 20-bit index -- (22,22,20) so to
speak. On 32-bit machine we could stick with (16,16) or move to
(12,20) so the final level code is shared with the 64-bit case.

Idea #2. Is there a portable/sane way to restrict the addresses
generated to (say) 40 bits, so that the top 24 are zero? Then we could
use a (20,20) scheme. 40 bits is 256 GB, which is a big step up from
4GB. Probably not doable, I guess.

Idea #3. Have a two-level scheme. The bottom level tables are as now,
covering power-of-two sized bits of address space. The top level table
is (conceptually) a list of (start, size, pointer-to-low-level-table)
triples. An access first searches the top table to find the bottom
table. This could work equally well/badly on both 32 and 64 bit
platforms.

If the top-level table is dynamically rearranged to bring frequently
used entries to the front, and the bottom level pages are big enough,
then you may wind up with the first 3 entries being for the currently
active stack, heap and data-area sub-tables. In which case the
accesses would be cheap for the most case.

Alignment checking in the common case could be made free by making each
stated chunk in the top-level section marginally smaller than the
low-level table it points at. So an access which straddles a low-level
table boundary would simply fail to match any top level entries. A
slow-case handler which understood this trickery would then deal with
it.

Another comment is that it's probably worth considering a scheme which
minimises the cache miss rate rather than the dynamic instruction
count.

Euro 0.02, etc.

J
|
|
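The (16,16) two-level scheme referred to in this thread can be sketched as toy C — the names and sizes here are illustrative, not Valgrind's actual vg_memory structures. The top 16 address bits pick a lazily allocated secondary table, and the low 16 bits index into it; going to 64-bit addresses is what forces the extra levels being debated.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PRIMARY_SIZE   (1 << 16)   /* top 16 bits index the primary map */
#define SECONDARY_SIZE (1 << 16)   /* low 16 bits index a 64KB secondary */

static uint8_t *primary_map[PRIMARY_SIZE];

/* Find (or lazily allocate, zero-filled) the secondary table covering
   addr.  Allocation failure is ignored in this sketch. */
static uint8_t *get_secondary(uint32_t addr)
{
    uint32_t hi = addr >> 16;
    if (primary_map[hi] == NULL)
        primary_map[hi] = calloc(SECONDARY_SIZE, 1);
    return primary_map[hi];
}

uint8_t shadow_read(uint32_t addr)
{
    return get_secondary(addr)[addr & 0xFFFF];
}

void shadow_write(uint32_t addr, uint8_t v)
{
    get_secondary(addr)[addr & 0xFFFF] = v;
}
```

Every shadow access costs one extra indirection through the primary map, which is the overhead the direct-mapped alternative below tries to remove.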
From: Nicholas N. <nj...@ca...> - 2004-07-16 13:51:50
|
On Fri, 16 Jul 2004, Julian Seward wrote:

> I think a major coup would be to arrive at a design which
> (a) reduces, perhaps, or localises, any Linux-specific
> assumptions about layout, to make porting to other OSs easier,
> and more (b) works uniformly well on both 32- and 64-bit
> platforms. Achieving both would be a majorly Good Thing.

Yeah, I'm concerned about how shadow pages are going to be stored on
64-bit architectures, because moving from a 2-level page table to a
4-level one sounds bad.

N
|
|
From: Julian S. <js...@ac...> - 2004-07-16 13:50:09
|
On Friday 16 July 2004 14:46, Nicholas Nethercote wrote:

> On Fri, 16 Jul 2004, Julian Seward wrote:
> > I agree. A single mechanism is much better, and I would prefer the
> > ENSURE_MAPPABLE scheme, as it doesn't make any assumptions about the
> > underlying OS. But: isn't the allocate-on-SEGV approach required for
> > tools which don't do shadow memory?
>
> No, this is all about allocating shadow pages; you can either get the
> tools to do it themselves (ENSURE_MAPPABLE) or just let them assume the
> pages exist and let the core allocate-on-SEGV.

Ah, ok, thanks.

J
|
|
From: Nicholas N. <nj...@ca...> - 2004-07-16 13:46:18
|
On Fri, 16 Jul 2004, Julian Seward wrote:

> I agree. A single mechanism is much better, and I would prefer the
> ENSURE_MAPPABLE scheme, as it doesn't make any assumptions about the
> underlying OS. But: isn't the allocate-on-SEGV approach required for
> tools which don't do shadow memory?

No, this is all about allocating shadow pages; you can either get the
tools to do it themselves (ENSURE_MAPPABLE) or just let them assume the
pages exist and let the core allocate-on-SEGV.

N
|
|
From: Julian S. <js...@ac...> - 2004-07-16 13:20:18
|
> * I was an early advocate of the addr*scale+offset direct-mapping
> approach, but I've now changed my mind. Partly because it didn't, as you
> say, help performance very much. And partly because direct-mapping
> requires shadow memory to be in a single block, which constrains memory
> layout too much. With the memory layout changes I'm considering, Valgrind
> memory, tool memory and shadow memory will all be intermingled and not in
> any particular order.
>
> * As for the ENSURE_MAPPABLE vs. allocate-on-SEGV approaches for shadow
> page allocation, I'm not too fussed either way. But I'd like to choose
> one, and remove the other. I don't like having two ways of doing
> something. It's confusing and increases code size. Having code that is
> never used is a liability. There was a paper written a couple of years
> ago about why GCC is so hard to maintain, and having more than one way of
> doing things was identified as a major factor. If pressed, I'd go for
> ENSURE_MAPPABLE because I think it's simpler and has less code and won't
> be noticeably slower.

I agree. A single mechanism is much better, and I would prefer the
ENSURE_MAPPABLE scheme, as it doesn't make any assumptions about the
underlying OS. But: isn't the allocate-on-SEGV approach required for
tools which don't do shadow memory? Apologies if this is a dumb
question, I didn't follow all this thread in detail.

> How does that sound? What does everyone think?

I think a major coup would be to arrive at a design which
(a) reduces, perhaps, or localises, any Linux-specific
assumptions about layout, to make porting to other OSs easier, and
more (b) works uniformly well on both 32- and 64-bit platforms.
Achieving both would be a majorly Good Thing.

J
|
|
From: Jani M. <ja...@iv...> - 2004-07-16 13:18:47
|
Hello,
with this patch against today's CVS HEAD the impossible does not happen
anymore in a proprietary app I am valgrinding.
The new opcodes are dealing with BCD extended doubles.
Does it look ok?
thanks
Jani
--- coregrind/vg_to_ucode.c 16 Jun 2004 20:51:45 -0000 1.139
+++ coregrind/vg_to_ucode.c 16 Jul 2004 13:11:13 -0000
@@ -2545,8 +2545,12 @@
return dis_fpu_mem(cb, sorb, 2, wr, eip, first_byte);
case 3: /* FISTP word-integer */
return dis_fpu_mem(cb, sorb, 2, wr, eip, first_byte);
+ case 4: /* FBLD extended-real */
+ return dis_fpu_mem(cb, sorb, 10, rd, eip, first_byte);
case 5: /* FILD qword-integer */
return dis_fpu_mem(cb, sorb, 8, rd, eip, first_byte);
+ case 6: /* FBSTP extended-real */
+ return dis_fpu_mem(cb, sorb, 10, wr, eip, first_byte);
case 7: /* FISTP qword-integer */
return dis_fpu_mem(cb, sorb, 8, wr, eip, first_byte);
default:
|
|
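For context on Jani's patch above: FBLD and FBSTP move an 80-bit (10-byte) packed-BCD operand, which is why dis_fpu_mem is called with size 10 rather than the 8 used for qword integers. A toy encoder for that format follows — illustrative only, and it ignores the sign-byte details beyond writing 0 for positive.

```c
#include <assert.h>
#include <stdint.h>

/* Encode a non-negative value as x87 packed BCD: 18 decimal digits,
   two per byte (low digit in the low nibble), least significant byte
   first, plus a final sign byte (0 here means positive). */
void encode_bcd(uint64_t value, uint8_t out[10])
{
    for (int i = 0; i < 9; i++) {
        out[i] = (uint8_t)((value % 10) | ((value / 10 % 10) << 4));
        value /= 100;
    }
    out[9] = 0;  /* sign byte: positive */
}
```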
From: Nicholas N. <nj...@ca...> - 2004-07-16 13:01:44
|
On Thu, 15 Jul 2004, Jeremy Fitzhardinge wrote:

> The SIGSEGV mechanism is a result of a discussion we had ages ago about
> direct-mapping the shadow memory from the client address - basically so
> that there's a simple addr*scale+offset to map from a client address to
> a shadow address.
>
> The theory was that this computation is simple enough to do inline, so a
> number of the shadow memory accesses wouldn't require calls to helpers.
> It would also be, in theory, faster because it removes a memory
> reference indirecting through the page.
>
> There are a couple of problems in practice. The direct-mapped approach
> also needs a bounds check to make sure that the pointer is actually a
> user-address-space pointer - if it isn't the computed shadow pointer
> could end up in the middle of the Valgrind address space. This bounds
> check approximately doubles the size of the address calculation, and
> probably undermines any performance improvement.
>
> Which leads the other problem: I never actually measured a clear
> performance improvement. I'm not really sure why. It wasn't because of
> the overhead of the SIGSEGV handler, since the fault rate dropped a lot
> once the working set was established. But I was never really sure of
> what it did to cache access patterns, and the address calculation often
> ended up being quite fiddly.
>
> So, no, nothing is using this at the moment. It looks like it should be
> a useful mechanism, but it isn't clear that any of the existing tools
> can easily use it. On the other hand, there's no huge performance hit,
> so if the code turns out to be simpler with direct mapping, it might be
> the way to go (ie, it isn't worth changing existing tools, but it might
> make sense for new ones).

Here are my thoughts:

* I was an early advocate of the addr*scale+offset direct-mapping
approach, but I've now changed my mind. Partly because it didn't, as
you say, help performance very much. And partly because direct-mapping
requires shadow memory to be in a single block, which constrains memory
layout too much. With the memory layout changes I'm considering,
Valgrind memory, tool memory and shadow memory will all be intermingled
and not in any particular order.

* As for the ENSURE_MAPPABLE vs. allocate-on-SEGV approaches for shadow
page allocation, I'm not too fussed either way. But I'd like to choose
one, and remove the other. I don't like having two ways of doing
something. It's confusing and increases code size. Having code that is
never used is a liability. There was a paper written a couple of years
ago about why GCC is so hard to maintain, and having more than one way
of doing things was identified as a major factor. If pressed, I'd go
for ENSURE_MAPPABLE because I think it's simpler and has less code and
won't be noticeably slower.

How does that sound? What does everyone think?

N
|
|
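Jeremy's addr*scale+offset mapping, together with the bounds check he says roughly doubles the cost of the inline calculation, can be sketched like this. All constants are invented for illustration; where the client and shadow regions actually sit is exactly the layout question the thread is debating.

```c
#include <assert.h>
#include <stdint.h>

#define CLIENT_BASE  0x00010000u   /* hypothetical client address range */
#define CLIENT_END   0xB0000000u
#define SHADOW_BASE  0xB0000000u   /* hypothetical contiguous shadow block */
#define SHADOW_SCALE 4             /* one shadow byte per 4 client bytes */

/* Direct-mapped shadow address for a client address.  Returns 0 for
   addresses outside client space -- without this bounds check, the
   computed pointer could land inside Valgrind's own address space. */
uint32_t shadow_addr(uint32_t a)
{
    if (a < CLIENT_BASE || a >= CLIENT_END)
        return 0;
    return SHADOW_BASE + (a - CLIENT_BASE) / SHADOW_SCALE;
}
```

Note the trade-off Nicholas raises: this only works if shadow memory is one contiguous block at SHADOW_BASE, which constrains the overall memory layout.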
From: Nicholas N. <nj...@ca...> - 2004-07-16 09:11:13
|
On Fri, 16 Jul 2004, Tom Hughes wrote:

> Implement support for the async I/O system calls in 2.6 kernels. This
> requires padding of the address space around calls to io_setup in order
> to constrain the kernel's choice of address for the I/O context.
>
> Based on patch from Scott Smith <sco...@ge...> with various
> enhancements, this fixes bug #83060.

Tom,

Can you please add a couple of brief comments to explain why the
padding is necessary and how it works? Thanks.

N
|
|
From: Tom H. <th...@cy...> - 2004-07-16 06:03:33
|
CVS commit by thughes:
Commit missing kernel interface definitions for async I/O calls.
M +58 -0 vg_kerneliface.h 1.19
--- valgrind/include/vg_kerneliface.h #1.18:1.19
@@ -883,4 +883,62 @@ struct elf_prpsinfo
};
+/*
+ * linux/aio_abi.h
+ */
+
+typedef struct {
+ unsigned id; /* kernel internal index number */
+ unsigned nr; /* number of io_events */
+ unsigned head;
+ unsigned tail;
+
+ unsigned magic;
+ unsigned compat_features;
+ unsigned incompat_features;
+ unsigned header_length; /* size of aio_ring */
+} vki_aio_ring ;
+
+typedef vki_aio_ring *vki_aio_context_t;
+
+typedef struct {
+ ULong data;
+ ULong obj;
+ Long result;
+ Long result2;
+} vki_io_event;
+
+typedef struct {
+ /* these are internal to the kernel/libc. */
+ ULong aio_data; /* data to be returned in event's data */
+ ULong aio_key;
+ /* the kernel sets aio_key to the req # */
+
+ /* common fields */
+ UShort aio_lio_opcode; /* see IOCB_CMD_ above */
+ UShort aio_reqprio;
+ UInt aio_fildes;
+
+ ULong aio_buf;
+ ULong aio_nbytes;
+ Long aio_offset;
+
+ /* extra parameters */
+ ULong aio_reserved2; /* TODO: use this for a (struct sigevent *) */
+ ULong aio_reserved3;
+} vki_iocb; /* 64 bytes */
+
+enum {
+ VKI_IOCB_CMD_PREAD = 0,
+ VKI_IOCB_CMD_PWRITE = 1,
+ VKI_IOCB_CMD_FSYNC = 2,
+ VKI_IOCB_CMD_FDSYNC = 3,
+ /* These two are experimental.
+ * IOCB_CMD_PREADX = 4,
+ * IOCB_CMD_POLL = 5,
+ */
+ VKI_IOCB_CMD_NOOP = 6,
+};
+
+
#endif /* __VG_KERNELIFACE_H */
|
|
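The trailing "64 bytes" comment on vki_iocb in the commit above can be checked by rebuilding the same layout with <stdint.h> types. This replica assumes ULong/Long are 64-bit, UShort 16-bit, and UInt 32-bit, matching the header's intent; it is a check of the layout claim, not code from the tree.

```c
#include <assert.h>
#include <stdint.h>

/* Field-for-field replica of the vki_iocb layout in vg_kerneliface.h. */
typedef struct {
    uint64_t aio_data;        /* data to be returned in event's data */
    uint64_t aio_key;         /* the kernel sets aio_key to the req # */

    uint16_t aio_lio_opcode;  /* see IOCB_CMD_ values */
    uint16_t aio_reqprio;
    uint32_t aio_fildes;

    uint64_t aio_buf;
    uint64_t aio_nbytes;
    int64_t  aio_offset;

    uint64_t aio_reserved2;
    uint64_t aio_reserved3;
} iocb_replica;
```

With natural alignment the three small fields pack into one 8-byte slot (offsets 16, 18 and 20), so the ten fields occupy exactly 64 bytes, as the comment says.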
From: <js...@ac...> - 2004-07-16 02:52:04
|
Nightly build on nemesis ( SuSE 9.1 ) started at 2004-07-16 03:50:00 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
vg_syscalls.c:5456: error: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5456: error: parse error before ')' token
vg_syscalls.c:5459: error: `cb' undeclared (first use in this function)
vg_syscalls.c:5459: error: parse error before ')' token
vg_syscalls.c:5463: error: `VKI_IOCB_CMD_PREAD' undeclared (first use in this function)
vg_syscalls.c:5468: error: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c: In function `before_io_cancel':
vg_syscalls.c:5485: error: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5487: error: `vki_io_event' undeclared (first use in this function)
vg_syscalls.c: In function `after_io_cancel':
vg_syscalls.c:5493: error: `vki_io_event' undeclared (first use in this function)
make[4]: *** [vg_syscalls.o] Error 1
make[4]: Leaving directory `/home/sewardj/ValgrindABT/valgrind/coregrind'
make[3]: *** [check-recursive] Error 1
make[3]: Leaving directory `/home/sewardj/ValgrindABT/valgrind/coregrind'
make[2]: *** [check] Error 2
make[2]: Leaving directory `/home/sewardj/ValgrindABT/valgrind/coregrind'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/home/sewardj/ValgrindABT/valgrind'
make: *** [check] Error 2
|
|
From: Tom H. <to...@co...> - 2004-07-16 02:21:28
|
Nightly build on dunsmere ( Fedora Core 2 ) started at 2004-07-16 03:20:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
vg_syscalls.c:5456: error: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5456: error: syntax error before ')' token
vg_syscalls.c:5459: error: `cb' undeclared (first use in this function)
vg_syscalls.c:5459: error: syntax error before ')' token
vg_syscalls.c:5463: error: `VKI_IOCB_CMD_PREAD' undeclared (first use in this function)
vg_syscalls.c:5468: error: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c: In function `before_io_cancel':
vg_syscalls.c:5485: error: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5487: error: `vki_io_event' undeclared (first use in this function)
vg_syscalls.c: In function `after_io_cancel':
vg_syscalls.c:5493: error: `vki_io_event' undeclared (first use in this function)
make[4]: *** [vg_syscalls.o] Error 1
make[4]: Leaving directory `/tmp/valgrind.24861/valgrind/coregrind'
make[3]: *** [check-recursive] Error 1
make[3]: Leaving directory `/tmp/valgrind.24861/valgrind/coregrind'
make[2]: *** [check] Error 2
make[2]: Leaving directory `/tmp/valgrind.24861/valgrind/coregrind'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.24861/valgrind'
make: *** [check] Error 2
|
|
From: Tom H. <th...@cy...> - 2004-07-16 02:16:26
|
Nightly build on audi ( Red Hat 9 ) started at 2004-07-16 03:15:03 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

vg_syscalls.c:5434: `VKI_IOCB_CMD_PREAD' undeclared (first use in this function)
vg_syscalls.c:5435: `vev' undeclared (first use in this function)
vg_syscalls.c:5439: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c: In function `before_io_submit':
vg_syscalls.c:5457: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5457: parse error before ')' token
vg_syscalls.c:5459: `cb' undeclared (first use in this function)
vg_syscalls.c:5459: parse error before ')' token
vg_syscalls.c:5463: `VKI_IOCB_CMD_PREAD' undeclared (first use in this function)
vg_syscalls.c:5468: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c: In function `before_io_cancel':
vg_syscalls.c:5486: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5488: `vki_io_event' undeclared (first use in this function)
vg_syscalls.c: In function `after_io_cancel':
vg_syscalls.c:5493: `vki_io_event' undeclared (first use in this function)
make[2]: *** [vg_syscalls.o] Error 1
make[2]: Leaving directory `/tmp/valgrind.14503/valgrind/coregrind'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.14503/valgrind/coregrind'
make: *** [check-recursive] Error 1
|
From: Tom H. <th...@cy...> - 2004-07-16 02:11:14
|
Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-07-16 03:10:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

vg_syscalls.c:5434: `VKI_IOCB_CMD_PREAD' undeclared (first use in this function)
vg_syscalls.c:5435: `vev' undeclared (first use in this function)
vg_syscalls.c:5439: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c: In function `before_io_submit':
vg_syscalls.c:5457: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5457: parse error before ')' token
vg_syscalls.c:5459: `cb' undeclared (first use in this function)
vg_syscalls.c:5459: parse error before ')' token
vg_syscalls.c:5463: `VKI_IOCB_CMD_PREAD' undeclared (first use in this function)
vg_syscalls.c:5468: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c: In function `before_io_cancel':
vg_syscalls.c:5486: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5488: `vki_io_event' undeclared (first use in this function)
vg_syscalls.c: In function `after_io_cancel':
vg_syscalls.c:5493: `vki_io_event' undeclared (first use in this function)
make[2]: *** [vg_syscalls.o] Error 1
make[2]: Leaving directory `/tmp/valgrind.14348/valgrind/coregrind'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.14348/valgrind/coregrind'
make: *** [check-recursive] Error 1
|
From: Tom H. <th...@cy...> - 2004-07-16 02:06:10
|
Nightly build on alvis ( Red Hat 7.3 ) started at 2004-07-16 03:05:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

vg_syscalls.c:5439: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c:5442: default label not within a switch statement
vg_syscalls.c: In function `before_io_submit':
vg_syscalls.c:5456: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5456: parse error before `)'
vg_syscalls.c:5459: `cb' undeclared (first use in this function)
vg_syscalls.c:5459: parse error before `)'
vg_syscalls.c:5463: `VKI_IOCB_CMD_PREAD' undeclared (first use in this function)
vg_syscalls.c:5468: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c:5473: default label not within a switch statement
vg_syscalls.c: In function `before_io_cancel':
vg_syscalls.c:5485: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5487: `vki_io_event' undeclared (first use in this function)
vg_syscalls.c: In function `after_io_cancel':
vg_syscalls.c:5493: `vki_io_event' undeclared (first use in this function)
make[2]: *** [vg_syscalls.o] Error 1
make[2]: Leaving directory `/tmp/valgrind.627/valgrind/coregrind'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.627/valgrind/coregrind'
make: *** [check-recursive] Error 1
|
From: Tom H. <th...@cy...> - 2004-07-16 02:01:16
|
Nightly build on standard ( Red Hat 7.2 ) started at 2004-07-16 03:00:01 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

vg_syscalls.c:5434: `VKI_IOCB_CMD_PREAD' undeclared (first use in this function)
vg_syscalls.c:5435: `vev' undeclared (first use in this function)
vg_syscalls.c:5439: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c: In function `before_io_submit':
vg_syscalls.c:5457: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5457: parse error before ')' token
vg_syscalls.c:5459: `cb' undeclared (first use in this function)
vg_syscalls.c:5459: parse error before ')' token
vg_syscalls.c:5463: `VKI_IOCB_CMD_PREAD' undeclared (first use in this function)
vg_syscalls.c:5468: `VKI_IOCB_CMD_PWRITE' undeclared (first use in this function)
vg_syscalls.c: In function `before_io_cancel':
vg_syscalls.c:5486: `vki_iocb' undeclared (first use in this function)
vg_syscalls.c:5488: `vki_io_event' undeclared (first use in this function)
vg_syscalls.c: In function `after_io_cancel':
vg_syscalls.c:5493: `vki_io_event' undeclared (first use in this function)
make[2]: *** [vg_syscalls.o] Error 1
make[2]: Leaving directory `/tmp/valgrind.13642/valgrind/coregrind'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.13642/valgrind/coregrind'
make: *** [check-recursive] Error 1
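All six hosts fail identically: the new syscall wrappers in vg_syscalls.c (`before_io_submit`, `before_io_cancel`, `after_io_cancel`) reference Valgrind kernel-interface (`vki_`) types for the Linux asynchronous-I/O syscalls, but `vki_iocb`, `vki_io_event`, and the `VKI_IOCB_CMD_*` constants are nowhere declared, so compilation dies before any tests can run. As a rough illustration of what the wrappers expect to be in scope, here is a minimal sketch modeled on the kernel's `<linux/aio_abi.h>`; the field subset and layout below are assumptions for illustration, not Valgrind's actual header:

```c
#include <stdint.h>

/* AIO opcodes; the values mirror the kernel's IOCB_CMD_* enum
   in <linux/aio_abi.h> (PREAD = 0, PWRITE = 1). */
enum {
    VKI_IOCB_CMD_PREAD  = 0,
    VKI_IOCB_CMD_PWRITE = 1
};

/* Cut-down AIO control block: only the fields a pre-syscall
   wrapper would plausibly inspect (hypothetical subset). */
struct vki_iocb {
    uint64_t aio_data;       /* copied back in the io_event on completion */
    uint16_t aio_lio_opcode; /* VKI_IOCB_CMD_PREAD / VKI_IOCB_CMD_PWRITE  */
    int16_t  aio_reqprio;    /* request priority                          */
    uint32_t aio_fildes;     /* file descriptor to operate on             */
    uint64_t aio_buf;        /* user buffer the kernel reads or writes    */
    uint64_t aio_nbytes;     /* length of that buffer                     */
};

/* Completion record returned by io_getevents / io_cancel. */
struct vki_io_event {
    uint64_t data; /* aio_data from the originating iocb      */
    uint64_t obj;  /* address of the originating iocb         */
    int64_t  res;  /* result: bytes transferred or -errno     */
    int64_t  res2; /* secondary result                        */
};
```

With declarations like these in scope, a `before_io_submit` wrapper can, for example, check that `cb->aio_buf .. cb->aio_buf + cb->aio_nbytes` is addressable before the kernel touches it, branching on `cb->aio_lio_opcode` to decide read versus write semantics.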