On Thu, 2005-08-04 at 09:16 -0400, Rob MacLachlan wrote:
> Dave Roberts wrote:
> > Hi, Rob,
> > On Wed, 2005-08-03 at 07:53 -0400, Rob MacLachlan wrote:
> >>Maybe nobody is worried about this kind of efficiency issue anymore, but
> >>I don't like the idea of relocating the heap on two grounds:
> >> 1] the startup time required to relocate everything
> > This is no more than the GC time of the memory occupied by the core
> > file, right?
> It's a lot worse, more like purify, since you're dirtying the entire
> heap. In normal full GC (as opposed to incremental, which does even
> better) you only have to copy dynamic space and scan static space.
> Read-only space (the largest, due to code and constants) is entirely ignored.
Gotcha. I'll have to go read that code. I'll admit to not really
understanding what happens during purify.
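Here's how I picture the relocation cost, though, as a minimal C sketch
(my own illustration, not SBCL's actual code; the tag test is a
placeholder for the real tagged-pointer scheme):

#include <stdint.h>
#include <stddef.h>

#define LOWTAG_MASK 7  /* hypothetical: low bits hold type tags */

static int is_lisp_pointer(uintptr_t word) {
    return (word & LOWTAG_MASK) != 0;  /* placeholder predicate */
}

/* Walk the whole heap and displace every pointer.  Each store dirties
 * the page it lands on, so almost nothing stays a clean, shareable
 * file mapping. */
void relocate_heap(uintptr_t *start, size_t nwords, intptr_t delta) {
    for (size_t i = 0; i < nwords; i++) {
        if (is_lisp_pointer(start[i]))
            start[i] += delta;
    }
}

Even at 20% pointer density you touch essentially every page, which I
take to be the purify-like cost you're describing.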
> >>To me, the idea of getting the unix loader to do the mapping sounds
> >>promising, though I have no idea if it will work.
> > ...you need a 32-bit offset in the relocation
> > table for every pointer in the core file. And in a Lisp system, that
> > could be close to every other word of the core (okay, not that bad, but
> > *lots* of pointers). Thus, your relocation table could be as large as
> > your core file is, effectively doubling the size of a core file
> The pointer content is more like 20%, at least when I last checked. But
> I thought someone was raising the possibility that we could still get
> fixed allocations if we did it via the unix loader.
Okay, that's better. We still may be better off doing it ourselves.
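Back-of-the-envelope, to sanity-check my earlier worry: a 100 MB core on
a 32-bit system is about 25 million words; at 20% pointer density that is
roughly 5 million pointers, so about 20 MB of 4-byte relocation offsets.
A ~20% size hit rather than a doubling, but still not free.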
That said, there are several techniques I can think of. Let me try to
summarize what has been proposed:
1. We request a memory region via mmap and relocate the core file as we
load it (first sketch below). Dumping a core works roughly as it does
today, modulo any small changes needed to ease the later relocation.
2. We modify the SBCL ELF executable to reserve the memory spaces we
need, but we don't actually put anything into the file: the spaces are
declared as @nobits sections (second sketch below). This keeps the
kernel's memory randomizer off our backs for the space we want. We load
the core file into the reserved memory as we do today. Juho Snellman said
he tried this and that it causes problems on a system with less RAM than
we want to reserve unless you set /proc/sys/vm/overcommit_memory. IMO,
this is just trading one problem for another.
3. When we dump a core, we actually link the core file into an SBCL
executable with an ELF segment that specifies the properties we want.
The segment is non-relocatable. The issue I see here is that if we
reserve one or more large segments, large enough for the whole heap and
not just the actual core file, we suffer the same problem as Juho found
with #2.
4. As in #3, but the SBCL memory regions are stored in relocatable ELF
segments, and we use the ELF loader to do our pointer arithmetic for us.
Again, this may suffer the same problem as #2.
5. As in #4, but we let the ELF loader relocate things where it wants,
then we go through and perform the pointer arithmetic ourselves. Again,
this may suffer the same problem as #2.
Are there any more that I have missed?
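Here is a rough sketch of #1, assuming made-up names (DUMP_ADDR, and the
relocate_heap() fix-up walk from my sketch above); a real loader would
parse the core header instead of blindly mapping the whole file:

/* Try to map the core at the address it was dumped at; if the kernel
 * (e.g. the randomizer) moves us, fall back to a pointer fix-up pass. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define DUMP_ADDR ((void *)0x09000000)  /* hypothetical dump address */

extern void relocate_heap(uintptr_t *start, size_t nwords, intptr_t delta);

void *load_core(const char *path, size_t core_size) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }

    /* Ask for the dumped address, but without MAP_FIXED, so the
     * kernel is free to put us somewhere else. */
    void *base = mmap(DUMP_ADDR, core_size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); exit(1); }
    close(fd);

    if (base != DUMP_ADDR) {
        /* We got moved: displace every pointer.  This write pass is
         * exactly the purify-like dirtying Rob is worried about. */
        intptr_t delta = (char *)base - (char *)DUMP_ADDR;
        relocate_heap((uintptr_t *)base,
                      core_size / sizeof(uintptr_t), delta);
    }
    return base;
}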
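And a sketch of the reservation in #2, with a made-up size: in assembler
this would be a .section directive with @nobits, but a big uninitialized
C array gets the same treatment, since the toolchain puts it in .bss
(address space in the process image, no bytes in the file):

#define DYNAMIC_SPACE_SIZE (512UL * 1024 * 1024)  /* hypothetical size */

/* Lands in .bss: the segment occupies address space but no file space,
 * and the kernel's randomizer has to route around it.  On a box with
 * less RAM+swap than this, exec can fail under the default overcommit
 * heuristic, hence Juho's /proc/sys/vm/overcommit_memory caveat. */
static char dynamic_space[DYNAMIC_SPACE_SIZE]
    __attribute__((used, aligned(4096)));

At startup the core would then be read (or mmap'ed with MAP_FIXED) into
dynamic_space, as we do today.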
> I admit I don't understand the randomization approach, but this sounds
> insane to me. Is lisp really the only application getting gored? I
> would think that any application that used a heap-save approach would be
> in deep doo-doo. What about emacs, for example? What about any number
> of persistent object systems?
Emacs got screwed, too, but only when performing the dump itself. I guess
they handle the reloading of their heap differently; I don't understand
why, exactly.
Dave Roberts <ldave@...>