From: Jeremy F. <je...@go...> - 2004-07-05 17:18:59
On Mon, 2004-07-05 at 10:11 +0100, Julian Seward wrote:
> This is something we discussed a few months ago. On the plus side
> it completely decouples the client and V's address space and so
> sidesteps what I see (perhaps incorrectly) as the somewhat fragile
> scheme we have at the moment. On the minus side there is some
> overhead, plus there is complication at syscall boundaries of
> doing the relevant address translation and scatter/gather
> operations (copy_to_user, copy_from_user in effect).
Unfortunately I think a VMMU would be even more fragile. It would mean
being able to reliably intercept every single memory address passing
over the user/kernel boundary. At present we make a best attempt, but
if we miss something it's no big deal. With a VMMU we would have to be
100% accurate (this is the same reason I'm not keen on completely
virtualizing the file-descriptor space).
Aside from that, there are some much more common syscalls which would be
impossible to deal with in a VMMU scheme, like the SHM ones. They don't
allow a shared memory segment to be broken into separate pieces in the
user virtual address space, so you wouldn't be able to map them into the
Valgrind client's address space linearly. It may not even help with
this aio problem, if the shared aio memory has pointers in it.
The static address space partition is a pain, I agree. But I think it's
pain we can deal with, given the advantages of protecting Valgrind from
the client, being able to enforce the client's address space
limitations, and being able to scale to a 64-bit address space easily.
After all, all this is just a limitation of the 32-bit address space; in
a 64-bit space we have plenty of space to fit everything in, and a
simple address space partition is the simplest way to go.
I think for now, the best way to squeeze everything into the address space
is:
* take advantage of the 4G/4G patches which some distros are
shipping; that gives us an extra Gbyte of address space to
play with
* use less memory internally; the big hog is the debug stuff.
stabs makes it hard to do anything but load everything at once,
but stabs is obsolete and dwarf allows pretty fine-grained
incremental loading
* fix things as they come up
J