From: Jeremy F. <je...@go...> - 2004-09-08 20:51:31
On Wed, 2004-09-01 at 22:44 +0100, Nicholas Nethercote wrote:
> >> The downside of switching to incremental shadow memory is that it makes
> >> direct-offset shadow addressing impossible, at least on 32-bit.
> >
> > The trouble is that memcheck&co do have a fixed ratio of shadow memory
> > to real memory used.  If the client uses its address space sparsely then
> > it causes sparse (wasteful) use of the shadow memory,
>
> Exactly, that's why I'm arguing against direct-offset shadow addressing.
>
> > but since we get
> > to place all the mmaps, we needn't make it have sparse memory use.  The
> > exception is if the client explicitly places its mappings, but I don't
> > think that's common.
> >
> > So I know that people are running into memory problems, but it isn't
> > clear to me that we can't solve them by using the address space more
> > densely.
>
> (I assume you mean the client portion of the address space?)
>
> How do you propose to use the client address space more densely?  I can't
> see how this would work.
>
> I'm not sure if we're all on the same wavelength with this stuff.

OK, here's how I'm looking at it.

The worst case is that the client uses two pages: one at 0x0, and one at
client_end.  This is an extremely sparse use of the address space, and
while we only need two pages of shadow, the shadow mapping occupies a lot
of address space.

The best case is that the client uses every byte of its mapped address
space.  In this case, the incremental shadow allocation will use the same
amount of memory as the shadow mapping scheme, since every byte needs N
bits of shadow.

Now, since clients almost never use MAP_FIXED, the address of every memory
mapping is under our control.  This means that if we place them in memory
in a dense fashion, we can approach the best-case memory usage density.
This means that our original estimate of the amount of shadow memory
needed (client size * N bits/byte) will be accurate, and we're making
best use of the overall address space.  So even if we allow the client
address space to grow up as new things are mapped, and the shadow
allocations to grow down, they'll always end up meeting at the same
place anyway.

> > Tools which don't have a fixed ratio (cachegrind) are another issue.
> > They're not, technically, using shadow memory (since there isn't the 1:1
> > relationship between client addresses and shadow addresses), but
> > Valgrind heap.
>
> I'm not sure why you say "technically"; Cachegrind (and Calltree,
> Nulgrind, and Massif) don't use shadow memory at all.  Much of the
> discussion doesn't apply to them.  However, they are still affected by
> some of the rigidity problems, eg. Calltree suffers from Problem P4.

Wasn't Cachegrind changed to allocate its stuff out of the shadow memory
anyway?

> (And "valgrind heap" is misleading because Valgrind no longer has a heap
> as such; I took it out when I rejigged the memory layout stuff.  It now
> only allocates via maps.)

Well, I'd still call that a heap, since it's the thing under VG_(malloc).
It doesn't really matter how the actual memory is requested from the
kernel.

> A problem with that is that on x86-64 executables are mapped very low,
> 0x400000 I think (4MB), which doesn't leave enough room for even an 8MB
> stack.

I like the idea of putting it just below client_mapbase better.

It would be interesting to know where Solaris-x86-64 puts things.

	J