Juho Snellman wrote on Mon, Nov 28, 2011 at 05:29:29AM +0100:
> > If nothing unexpected happens, I'll be releasing a new SBCL next weekend.
> > Until then, testing and fixes for regressions will be appreciated more than
> > radical code changes :-)
>
> The new heap management doesn't quite work for me. I can't make sense
> of this relation between bytes allocated and max heap size:
When saving a core you might need a dynamic space of up to double the high-water memory usage, due to the kludgy way the core is compacted.

Basically, at that point the GC acts like a non-conservative semi-space collector: first it does a GC into memory above last_free_page, and then another one to relocate the data to the start of the heap. This has been the case for a long time, so a failure to save a core with 2.7G of live data in a 4G heap should not be a new failure.
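The doubling arithmetic can be sketched as follows (save_core_fits is a hypothetical helper written for illustration, not part of SBCL):

```python
# Back-of-the-envelope check for the two-pass compacting save described
# above: in the worst case the old copy and the relocated copy of the
# live data coexist in dynamic space at the same time.

GB = 1 << 30  # bytes in a gigabyte

def save_core_fits(live_bytes, dynamic_space_bytes):
    """Return True if dynamic space can hold two copies of the live data,
    which is what the semi-space-style save may require at its peak."""
    return 2 * live_bytes <= dynamic_space_bytes

# 2.7G of live data in a 4G heap needs ~5.4G at the peak, so the save
# can fail without anything being wrong.
print(save_core_fits(int(2.7 * GB), 4 * GB))  # -> False
```

By this estimate, saving 2.7G of live data would want roughly a 5.4G dynamic space, which is consistent with the save failing in a 4G heap.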
The part of that output that looks dodgy is that in one place it claims 1.7G of heap in use, and in another 2.7G. How large a core do you expect this to produce?