From: Julian S. <js...@ac...> - 2004-07-05 09:16:50

On Monday 05 July 2004 09:47, Nicholas Nethercote wrote:
> On Sun, 4 Jul 2004, Tom Hughes wrote:
> > As a result it typically seems to get allocated in the valgrind part
> > of the address space, which can't be accessed by the client while
> > pointer checking is turned on. As libaio tries to peek at it in order
> > to avoid entering the kernel when io_getevents is called and there
> > are no events pending, this is a problem...
> >
> > Anybody got any suggestions for a way of handling this?
>
> Something Julian suggested: instead of partitioning the address space, as
> we currently do -- part for the client, part for Valgrind+tool -- instead
> virtualize it, by introducing a software MMU. Every address would have to
> be converted before use, though... it would be a lot of infrastructure and
> probably quite tricky, and possibly very slow. However, it would get rid
> of the problems with the client and Valgrind banging heads in the address
> space.

This is something we discussed a few months ago. On the plus side, it
completely decouples the client's and V's address spaces, and so sidesteps
what I see (perhaps incorrectly) as the somewhat fragile scheme we have at
the moment. On the minus side there is some overhead, plus the complication
at syscall boundaries of doing the relevant address translation and
scatter/gather operations (copy_to_user and copy_from_user, in effect).

From the performance point of view, it might be possible to somehow
piggyback on what is effectively a simulated MMU maintained anyway by
memcheck/helgrind/addrcheck.

In any case, I would be interested to know the performance and complexity
impact of a soft MMU. Without that data, we don't really know what
tradeoff we're making.

J