|
From: Greg P. <gp...@us...> - 2005-09-10 04:53:15
|
There's a capability in Mac OS X's Mach VM that basically looks like new memory mappings appearing as a result of a syscall (other than mmap). There isn't a way to specify where any such mapping should be inserted (nothing like MAP_FIXED); indeed, in the worst case, the mapping just appears without any explicit request by the process.

In the current Valgrind memory model, there's the client space and Valgrind's space, and never the twain shall meet. To prevent rogue mappings, it would be possible for Valgrind to keep all of Valgrind's space mapped, and leave the unused parts with no permissions. This requires a bit of code change, but should be straightforward.

Does aspacemgr still have address ranges that are not mapped, but must not be used for client allocations? If so, it would also need some notion of "mapped to prevent client use but not used by Valgrind". If not, then the entire problem goes away and everybody's happy.

(I'm hoping that the truly spontaneous mapping case doesn't actually occur in real programs. The window server is a risk, but so far I've only seen it insert memory regions during window server requests. If spontaneous mappings do occur in real life, then memcheck might need to verify Valgrind's memory map against the kernel's before it actually flags an access error. Determining the initialized-ness of such a lazily-discovered mapping is left as an exercise for the reader.)

--
Greg Parker    gp...@us...
|
From: Tom H. <to...@co...> - 2005-09-10 06:39:17
|
In message <171...@ka...>
Greg Parker <gp...@us...> wrote:
> There's a capability in Mac OS X's Mach VM that basically looks like
> new memory mappings appearing as a result of a syscall (other than mmap).
> There isn't a way to specify where any such mapping should be inserted
> (nothing like MAP_FIXED); indeed, in the worst case, the mapping just
> appears without any explicit request by the process.
There are some system calls in linux that do that as well.
> In the current Valgrind memory model, there's the client space and
> Valgrind's space, and never the twain shall meet. To prevent rogue
> mappings, it would be possible for Valgrind to keep all of Valgrind's
> space mapped, and leave the unused parts with no permissions. This
> requires a bit of code change, but should be straightforward.
That's basically how we handle it at the moment - there is code to pad
the address space, which is invoked before the problematic system calls
to ensure that the mapping goes where we want it.
Keeping large chunks of address space mapped is problematic, however,
which is one reason for the rewrite of the address space manager.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2005-09-10 09:34:54
|
> > In the current Valgrind memory model, there's the client space and
> > Valgrind's space, and never the twain shall meet. To prevent rogue
> > mappings, it would be possible for Valgrind to keep all of Valgrind's
> > space mapped, and leave the unused parts with no permissions. This
> > requires a bit of code change, but should be straightforward.
>
> That's basically we handle it at the moment - there is code to pad
> the address space which is invoked before the problem system calls
> to ensure that the mapping goes where we want it.
>
> Keeping large chunks of address space mapped is problematic however
> which is one reason for the rewrite of the address space manager.
Where I'm going (I think) is: the address space manager behaves as an
observer of what the kernel does, and tries to influence layout where
it can. In the end however it has to go along with what the kernel
does. It seems to be impossible to write a manager which is the sole
dictator of layout, since it can always be defeated by a sufficiently
uncooperative kernel. The best you can hope for is to veto (fail)
client fixed mmap requests in inconvenient places.
The new manager, like the current one, is based around maintaining a
list of segments describing what the current layout is. It uses this
list to generate advisory placements ("please put this new mapping at
address X if you can"). Unlike the current manager, the list explicitly
represents free areas as that makes it easier to iterate over them,
and so it should account for *every* address in (Addr)0 through
(Addr)(-1) inclusive.
The new manager also has the concept of a reservation segment. Such a
segment is not mapped, and so is similar to a free-space segment, but
with the difference that the manager will never attempt to allocate
anything in that space.
So I think this helps in 2 ways:
- It allows the kernel to do what the hell it likes, and will record
the outcome, provided there is a way to find out what happened.
- It allows you to create reservation sections, which I believe give
you "mapped to prevent client use but not used by Valgrind"
semantics. In fact what it gives you is "I will never hand this
out to anyone of my own accord, but I am prepared to let the kernel
do so" semantics. I think this is what you need?
Reservation segments also seem like a general mechanism for implementing
grow-down stacks. You create an initial stack mapping, and immediately
below that put a reservation section marked as having a shrinkable upper
end. This prevents aspacem from allocating new stuff in that area
(again, the kernel can do what it likes, but there's nothing we can
do about that). Same idea for implementing brk sections (the resvn
goes after the brk section in this case).
I think the real issue re spontaneous mappings is to have a
reliable way to know they have happened, and where the new mapping
is. On Linux that comes down to rescanning /proc/self/maps
after any event which might create such a mapping.
Anyway, that's the theory. Let me know ASAP if it is not what
Darwin needs. It's taken me most of this week to understand in detail
how the aspacemgr needs to be connected to the rest of the system.
I hope to have something starting to work in the next couple of days.
J
------------
Here's an example of the segment list on x86-linux at startup, just
after reading /proc/self/maps to get the initial layout:
<<< SHOW_SEGMENTS: With contents of /proc/self/maps (0 segments, 1 segnames)
( 0) /home/sewardj/VgASPACEM/aspacem/Inst/lib/valgrind/memcheck
0: rsvn 0x00000000-0x03FFFFFF 64m ---- (Fixed,Fixed,0)
1: FREE 0x04000000-0xAFFFFFFF 2752m
2: file 0xB0000000-0xB00ECFFF 970752 r-x- d=0x802 i=363997 o=0
3: file 0xB00ED000-0xB00EDFFF 4096 rw-- d=0x802 i=363997 o=966656 (0)
4: anon 0xB00EE000-0xB08A9FFF 8110080 rwx- d=0x000 i=0 o=0 (-1)
5: FREE 0xB08AA000-0xBFFFEFFF 247m
6: anon 0xBFFFF000-0xBFFFFFFF 4096 rw-- d=0x000 i=0 o=0 (-1)
7: rsvn 0xC0000000-0xFFFFDFFF 1023m ---- (Fixed,Fixed,0)
8: anon 0xFFFFE000-0xFFFFEFFF 4096 ---- d=0x000 i=0 o=0 (-1)
9: rsvn 0xFFFFF000-0xFFFFFFFF 4096 ---- (Fixed,Fixed,0)
>>>
You can see for example that there's a reservation for 0xC0000000-0xFFFFFFFF
since that's unavailable to us. The reservation (7,9) is interrupted
by the sysinfo page (8). Similarly I (somewhat arbitrarily) placed a
reservation in the lowest 64M since I didn't think allocating down there
was a good idea.
|
|
From: Greg P. <gp...@us...> - 2005-09-11 03:44:21
|
Julian Seward writes:

> The new manager also has the concept of a reservation segment. Such a
> segment is not mapped, and so is similar to a free-space segment, but
> with the difference that it will not attempt to allocate anything in
> that space.

This is perfect for Mac OS X's special "shared regions". Basically, this is a chunk of address space reserved for mapping some system shared libraries, if the process uses them. You don't want to try to put anything else there - either the kernel won't let you, or you'll really confuse the system if a library gets loaded later.

> I think the real issue re spontaneous mappings is to have a
> reliable way to know they have happened, and where the new mapping
> is. On Linux that comes down to rescanning /proc/self/maps
> after any event which might create such a mapping.

I haven't seen any truly spontaneous mappings; there's always something in the transaction that says at least where the mapping is, even if it's inconvenient. Mac OS X has a kernel API to enumerate the mappings, not unlike /proc/self/maps.

> Anyway, that's the theory. Let me know ASAP if it is not what
> Darwin needs. It's taken me most of this week to understand in detail
> how the aspacemgr needs to be connected to the rest of system.
> I hope to have something starting to work in the next couple of days.

I like everything I've heard so far.

--
Greg Parker    gp...@ap...    Runtime Wrangler
|
From: Nicholas N. <nj...@cs...> - 2005-09-11 20:52:14
|
On Sat, 10 Sep 2005, Julian Seward wrote:

> Unlike the current manager, the list explicitly
> represents free areas as that makes it easier to iterate over them,
> and so it should account for *every* address in (Addr)0 through
> (Addr)(-1) inclusive.
>
> The new manager also has the concept of a reservation segment. Such a
> segment is not mapped, and so is similar to a free-space segment, but
> with the difference that it will not attempt to allocate anything in
> that space.

Ah, those both sound very good.

N
|
From: Tom H. <to...@co...> - 2005-09-12 09:51:00
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> You can see for example that there's a reservation for 0xC0000000-0xFFFFFFFF
> since that's unavailable to us. The reservation (7,9) is interrupted
> by the sysinfo page (8). Similarly I (somewhat arbitrarily) placed a
> reservation in the lowest 64M since I didn't think allocating down there
> was a good idea.
Reserving the bottom 64M doesn't work on amd64 though as, at least on
my box, the default load address for an executable is 0x400000 which is
only 4M.
I had to reduce spacem_minAddr to 0x400000 to make it work on amd64.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2005-09-12 10:39:28
|
> Reserving the bottom 64M doesn't work on amd64 though as, at least on
> my box, the default load address for an executable is 0x400000 which is
> only 4M.
>
> I had to reduce spacem_minAddr to 0x400000 to make it work on amd64.

Ok, interesting. What do you get from

  ./none/none -d -d --trace-signals=yes --trace-syscalls=yes date

?

J
|
From: Tom H. <to...@co...> - 2005-09-12 10:57:09
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
>> Reserving the bottom 64M doesn't work on amd64 though as, at least on
>> my box, the default load address for an executable is 0x400000 which is
>> only 4M.
>>
>> I had to reduce spacem_minAddr to 0x400000 to make it work on amd64.
>
> Ok, interesting.
>
> What do you get from
>
> ./none/none -d -d --trace-signals=yes --trace-syscalls=yes date
Well, I have also had to hack VG_(main_thread_wrapper_NORETURN) in
the amd64 code to match what you did in the x86 code, and, since
your data segment commit this morning, I have had to tweak map_base
in load_client. With those hacks and the above change to the amount
of reserved memory I get:
--5358:1:debuglog DebugLog system started by Stage 2 (main), level 2 logging requested
--5358:1:main Welcome to Valgrind version 3.1.ASPACEM debug logging
--5358:1:main Checking current stack is plausible
--5358:1:main Checking initial stack was noted
--5358:1:main Starting the address space manager
--5358:2:aspacem sp_at_startup = 0x7FFFFFED7AB0 (supplied)
--5358:2:aspacem minAddr = 0x00400000 (computed)
--5358:2:aspacem maxAddr = 0x3FFFFFFFF (computed)
--5358:2:aspacem cStart = 0x00400000 (computed)
--5358:2:aspacem vStart = 0x200200000 (computed)
--5358:2:aspacem suggested_clstack_top = 0x3FF000FFF (computed)
--5358:2:aspacem <<< SHOW_SEGMENTS: Initial layout (0 segments, 0 segnames)
--5358:2:aspacem 0: RSVN 00000000-003FFFFF 4194304 ---- SmFixed
--5358:2:aspacem 1: 00400000-2001FFFFF 8190m
--5358:2:aspacem 2: RSVN 200200000-200200FFF 4096 ---- SmFixed
--5358:2:aspacem 3: 200201000-3FFFFFFFF 8189m
--5358:2:aspacem 4: RSVN 400000000-FFFFFFFFFFFFFFFF 17592186028032m ---- SmFixed
--5358:2:aspacem >>>
--5358:2:aspacem Reading /proc/self/maps
--5358:2:aspacem 0: FILE 70000000-7011CFFF 1167360 r-x- d=0xFD00 i=7704126 o=0 (0)
--5358:2:aspacem 0: FILE 7021C000-70225FFF 40960 rw-- d=0xFD00 i=7704126 o=1163264 (0)
--5358:2:aspacem 0: ANON 70226000-708E9FFF 7094272 rw-- d=0x000 i=0 o=0 (-1)
--5358:2:aspacem 0: ANON 7FFFFFEC4000-7FFFFFED8FFF 86016 rw-- d=0x000 i=0 o=0 (-1)
--5358:2:aspacem 0: ANON FFFFFFFFFF600000-FFFFFFFFFFDFFFFF 8388608 ---- d=0x000 i=0 o=0 (-1)
--5358:2:aspacem <<< SHOW_SEGMENTS: With contents of /proc/self/maps (0 segments, 1 segnames)
--5358:2:aspacem ( 0) /home/thh/src/valgrind-aspacem/none/none
--5358:2:aspacem 0: RSVN 00000000-003FFFFF 4194304 ---- SmFixed
--5358:2:aspacem 1: 00400000-6FFFFFFF 1788m
--5358:2:aspacem 2: FILE 70000000-7011CFFF 1167360 r-x- d=0xFD00 i=7704126 o=0 (0)
--5358:2:aspacem 3: 7011D000-7021BFFF 1044480
--5358:2:aspacem 4: FILE 7021C000-70225FFF 40960 rw-- d=0xFD00 i=7704126 o=1163264 (0)
--5358:2:aspacem 5: ANON 70226000-708E9FFF 7094272 rw-- d=0x000 i=0 o=0 (-1)
--5358:2:aspacem 6: 708EA000-2001FFFFF 6393m
--5358:2:aspacem 7: RSVN 200200000-200200FFF 4096 ---- SmFixed
--5358:2:aspacem 8: 200201000-3FFFFFFFF 8189m
--5358:2:aspacem 9: RSVN 400000000-7FFFFFEC3FFF 134201342m ---- SmFixed
--5358:2:aspacem 10: ANON 7FFFFFEC4000-7FFFFFED8FFF 86016 rw-- d=0x000 i=0 o=0 (-1)
--5358:2:aspacem 11: RSVN 7FFFFFED9000-FFFFFFFFFF5FFFFF 17592051826679m ---- SmFixed
--5358:2:aspacem 12: ANON FFFFFFFFFF600000-FFFFFFFFFFDFFFFF 8388608 ---- d=0x000 i=0 o=0 (-1)
--5358:2:aspacem 13: RSVN FFFFFFFFFFE00000-FFFFFFFFFFFFFFFF 2097152 ---- SmFixed
--5358:2:aspacem >>>
--5358:1:main Address space manager is running
--5358:1:main Starting the dynamic memory manager
--5358:1:mallocfr newSuperblock at 0x200201000, for VALGRIND, 1048552 payload bytes
--5358:1:main Dynamic memory manager is running
--5358:1:main Doing scan_auxv()
--5358:1:main Preprocess command line opts
--5358:1:main Loading client
--5358:1:main Setup client env
--5358:1:main preload_string = /tmp/valgrind-aspacem/lib/valgrind/vg_preload_core.so:/tmp/valgrind-aspacem/lib/valgrind/vgpreload_memcheck.so
--5358:1:main Setup client stack
--5358:2:main Client info: entry=0x400009E0 client_SP=0x3FEFFEC80 vg_argc=5 brkbase=0x50B000
--5358:1:main Setup client data (brk) segment
--5358:0:aspacem <<< SHOW_SEGMENTS: Before reserving data segment (0 segments, 1 segnames)
--5358:0:aspacem ( 0) /home/thh/src/valgrind-aspacem/none/none
--5358:0:aspacem 0: RSVN 00000000-003FFFFF 4194304 ---- SmFixed
--5358:0:aspacem 1: file 00400000-00409FFF 40960 r-x- d=0x000 i=0 o=0 (-1)
--5358:0:aspacem 2: 0040A000-00508FFF 1044480
--5358:0:aspacem 3: file 00509000-0050AFFF 8192 rw-- d=0x000 i=0 o=36864 (-1)
--5358:0:aspacem 4: 0050B000-3FFFFFFF 1018m
--5358:0:aspacem 5: file 40000000-40019FFF 106496 r-x- d=0x000 i=0 o=0 (-1)
--5358:0:aspacem 6: 4001A000-40118FFF 1044480
--5358:0:aspacem 7: file 40119000-4011AFFF 8192 rw-- d=0x000 i=0 o=102400 (-1)
--5358:0:aspacem 8: 4011B000-6FFFFFFF 766m
--5358:0:aspacem 9: FILE 70000000-7011CFFF 1167360 r-x- d=0xFD00 i=7704126 o=0 (0)
--5358:0:aspacem 10: 7011D000-7021BFFF 1044480
--5358:0:aspacem 11: FILE 7021C000-70225FFF 40960 rw-- d=0xFD00 i=7704126 o=1163264 (0)
--5358:0:aspacem 12: ANON 70226000-708E9FFF 7094272 rw-- d=0x000 i=0 o=0 (-1)
--5358:0:aspacem 13: 708EA000-2001FFFFF 6393m
--5358:0:aspacem 14: RSVN 200200000-200200FFF 4096 ---- SmFixed
--5358:0:aspacem 15: ANON 200201000-200300FFF 1048576 rwx- d=0x000 i=0 o=0 (-1)
--5358:0:aspacem 16: 200301000-3FE800FFF 8165m
--5358:0:aspacem 17: RSVN 3FE801000-3FEFFDFFF 8376320 ---- SmUpper
--5358:0:aspacem 18: anon 3FEFFE000-3FF000FFF 12288 rwx- d=0x000 i=0 o=0 (-1)
--5358:0:aspacem 19: 3FF001000-3FFFFFFFF 15m
--5358:0:aspacem 20: RSVN 400000000-7FFFFFEC3FFF 134201342m ---- SmFixed
--5358:0:aspacem 21: ANON 7FFFFFEC4000-7FFFFFED8FFF 86016 rw-- d=0x000 i=0 o=0 (-1)
--5358:0:aspacem 22: RSVN 7FFFFFED9000-FFFFFFFFFF5FFFFF 17592051826679m ---- SmFixed
--5358:0:aspacem 23: ANON FFFFFFFFFF600000-FFFFFFFFFFDFFFFF 8388608 ---- d=0x000 i=0 o=0 (-1)
--5358:0:aspacem 24: RSVN FFFFFFFFFFE00000-FFFFFFFFFFFFFFFF 2097152 ---- SmFixed
--5358:0:aspacem >>>
--5358:1:main Setup file descriptors
--5358:1:main Initialise the tool
==5358== Nulgrind, a binary JIT-compiler.
==5358== Copyright (C) 2002-2005, and GNU GPL'd, by Nicholas Nethercote.
==5358== Using LibVEX rev 1363, a library for dynamic binary translation.
==5358== Copyright (C) 2004-2005, and GNU GPL'd, by OpenWorks LLP.
==5358== Using valgrind-3.1.ASPACEM, a dynamic binary instrumentation framework.
==5358== Copyright (C) 2000-2005, and GNU GPL'd, by Julian Seward et al.
--5358:1:main Initialise scheduler
--5358:1:main Initialise thread 1's state
--5358:1:main Initialise signal management
--5358-- Max kernel-supported signal is 64
--5358:1:main Initialise TT/TC
--5358:1:main Initialise redirects
--5358:1:mallocfr newSuperblock at 0x200301000, for VALGRIND, 1048552 payload bytes
--5358:1:main Tell tool about permissions for asm helpers
==5358== For more details, rerun with: -v
==5358==
--5358:1:mallocfr newSuperblock at 0x200401000, for VALGRIND, 1048552 payload bytes
--5358:1:main
--5358:1:main
--5358:1:aspacem <<< SHOW_SEGMENTS: Memory layout at client startup (0 segments, 1 segnames)
--5358:1:aspacem ( 0) /home/thh/src/valgrind-aspacem/none/none
--5358:1:aspacem 0: RSVN 00000000-003FFFFF 4194304 ---- SmFixed
--5358:1:aspacem 1: file 00400000-00409FFF 40960 r-x- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 2: 0040A000-00508FFF 1044480
--5358:1:aspacem 3: file 00509000-0050AFFF 8192 rw-- d=0x000 i=0 o=36864 (-1)
--5358:1:aspacem 4: anon 0050B000-0050BFFF 4096 rwx- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 5: RSVN 0050C000-00D0AFFF 8384512 ---- SmLower
--5358:1:aspacem 6: 00D0B000-3FFFFFFF 1010m
--5358:1:aspacem 7: file 40000000-40019FFF 106496 r-x- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 8: 4001A000-40118FFF 1044480
--5358:1:aspacem 9: file 40119000-4011AFFF 8192 rw-- d=0x000 i=0 o=102400 (-1)
--5358:1:aspacem 10: 4011B000-6FFFFFFF 766m
--5358:1:aspacem 11: FILE 70000000-7011CFFF 1167360 r-x- d=0xFD00 i=7704126 o=0 (0)
--5358:1:aspacem 12: 7011D000-7021BFFF 1044480
--5358:1:aspacem 13: FILE 7021C000-70225FFF 40960 rw-- d=0xFD00 i=7704126 o=1163264 (0)
--5358:1:aspacem 14: ANON 70226000-708E9FFF 7094272 rw-- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 15: 708EA000-2001FFFFF 6393m
--5358:1:aspacem 16: RSVN 200200000-200200FFF 4096 ---- SmFixed
--5358:1:aspacem 17: ANON 200201000-200300FFF 1048576 rwx- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 18: ANON 200301000-200400FFF 1048576 rwx- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 19: ANON 200401000-200500FFF 1048576 rwx- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 20: 200501000-3FE800FFF 8163m
--5358:1:aspacem 21: RSVN 3FE801000-3FEFFDFFF 8376320 ---- SmUpper
--5358:1:aspacem 22: anon 3FEFFE000-3FF000FFF 12288 rwx- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 23: 3FF001000-3FFFFFFFF 15m
--5358:1:aspacem 24: RSVN 400000000-7FFFFFEC3FFF 134201342m ---- SmFixed
--5358:1:aspacem 25: ANON 7FFFFFEC4000-7FFFFFED8FFF 86016 rw-- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 26: RSVN 7FFFFFED9000-FFFFFFFFFF5FFFFF 17592051826679m ---- SmFixed
--5358:1:aspacem 27: ANON FFFFFFFFFF600000-FFFFFFFFFFDFFFFF 8388608 ---- d=0x000 i=0 o=0 (-1)
--5358:1:aspacem 28: RSVN FFFFFFFFFFE00000-FFFFFFFFFFFFFFFF 2097152 ---- SmFixed
--5358:1:aspacem >>>
--5358:1:main
--5358:1:main
--5358:1:main Running thread 1
--5358:1:syswrap- entering VG_(main_thread_wrapper_NORETURN)
--5358:1:syswrap- run_a_thread_NORETURN(tid=1): ML_(thread_wrapper) called
--5358:1:core_os ML_(thread_wrapper)(tid=1): entry
--5358:1:transtab allocate sector 0
SYSCALL[5358,1]( 12) sys_brk ( 0x0 ) --> [pre-success] Success(0x50B000)
SYSCALL[5358,1]( 9) sys_mmap2 ( 0x0, 4096, 3, 34, -1, 0 )
--5358:0:aspacem Valgrind: FATAL: find_map_space
--5358:0:aspacem Exiting now.
The file mappings at 0x40000000 are ld.so because that is what I have
set map_base to in load_client.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Tom H. <to...@co...> - 2005-09-12 11:47:59
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> Well, that's promising. It gets as far as the x86 one does, and
> memory layout is roughly as expected -- artificially constrained
> to the lowest 16G for the most part.
Presumably that constraint is to make memcheck efficient and stop
it using auxiliary maps? In which case it is only client allocations
that need to be below the 16G boundary? Currently it tries to put
everything under that.
I have a small patch here to move vStart up to 16G on amd64 if
you're interested.
> Why did you need to change map_base? I had the idea that m_ume
> should load the executable at the place where the executable wants
> to be loaded, but that it could load the interpreter (ld.so) anywhere
> it feels like.
Because, as it stood, map_base was zero, which meant that the
interpreter got mapped immediately above the client (because the client
loads at 0x400000, so there is no space below it); then, when you try
to create the data segment immediately above the client, the memory
you want is already in use.
I suspect you are getting away with it on x86 because there is
normally space below the client for the interpreter.
Really map_base should be somewhere in the middle of the address
space to allow room for the data segment underneath. We should also
really respect the load address from the interpreter if that is
possible - we currently calculate it in interp_addr but then ignore
it for some reason and just use map_base.
That doesn't help on amd64 though, because the interpreter's preferred
load address is well above the 16G boundary.
> One thing I couldn't figure out in ume.c is how the interpreter
> knows where the executable is. Presumably the interpreter, when
> started (on the VCPU), reads the executable's program header so
> as to find its immediate dependencies, and works from there.
> Any ideas?
Is it not in the auxiliary vector? I think it uses some combination
of AT_PHDR, AT_PHNUM and AT_ENTRY to find what it needs.
> Today am working to get client mmap/munmap/mprotect working, so
> programs can actually run. I have a long list of stuff which is
> now broken or needs reconsidering. What I plan to do is get basic
> functionality restored, tidy up and document how it works, and
> then we can look at whether the design needs revision or not.
> (Fred Brooks lurks in the background :-) I imagine that will take
> me most of this week.
Excellent.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Julian S. <js...@ac...> - 2005-09-12 12:33:21
|
> > Well, that's promising. It gets as far as the x86 one does, and
> > memory layout is roughly as expected -- artificially constrained
> > to the lowest 16G for the most part.
>
> Presumably that constraint is to make memcheck efficient and stop
> it using auxiliary maps?

Yup.

> In which case it is only client allocations that need to be below
> the 16G boundary?

True. Good point.

The layout policy is controlled by just one function, VG_(aspacem_getAdvisory). It considers the space request contained in *req and forClient, and produces a suggested (advisory) address (or it decides "no, I'm going to veto that").

I was going to say: if you wanted to look at playing with layout policy, this is the place to start. But then I realised that it's more complicated, because VG_(aspacem_getAdvisory) observes the constraints imposed by reserved segments, and so there also needs to be some consideration of the initial placement of reservations.

> I have a small patch here to move vStart up to 16G on amd64 if
> you're interested.

Thanks, but for the moment I need to stagger dazedly around the minefield for a while longer. I don't yet have a system which can run any program successfully to completion. Perhaps later in the week.

J
|
From: Nicholas N. <nj...@cs...> - 2005-09-12 13:31:43
|
On Mon, 12 Sep 2005, Tom Hughes wrote:

> Really map_base should be somewhere in the middle of the address
> space to allow room for the data segment underneath.

Yes. On x86/Linux it is 0x40000000. In the trunk's code we approximate that by measuring the size of the hole between the executable and the top of the client's space, and then put map_base 1/3 of the way along. Something similar would be good for ASPACEM.

N
|
From: Julian S. <js...@ac...> - 2005-09-12 11:29:55
|
> Well I have also had to hack VG_(main_thread_wrapper_NORETURN) in > the amd64 code to match what you did in the x86 code and also, since > your data segment commit this morning I have had to tweak map_base > in load_client but with those hacks and the above change to the amount > of reserved memory I get: Well, that's promising. It gets as far as the x86 one does, and memory layout is roughly as expected -- artificially constrained to the lowest 16G for the most part. Why did you need to change map_base? I had the idea that m_ume should load the executable at the place where the executable wants to be loaded, but that it could load the interpreter (ld.so) anywhere it feels like. One thing I couldn't figure out in ume.c is how the interpreter knows where the executable is. Presumably the interpreter, when started (on the VCPU), reads the executable's program header so as to find its immediate dependencies, and works from there. Any ideas? Today am working to get client mmap/munmap/mprotect working, so programs can actually run. I have a long list of stuff which is now broken or needs reconsidering. What I plan to do is get basic functionality restored, tidy up and document how it works, and then we can look at whether the design needs revision or not. (Fred Brooks lurks in the background :-) I imagine that will take me most of this week. J |