The current way PDL allocates memory for a new piddle
is via the SvGROW perl API call. The problem with this
is that 'Out of memory!' is a fatal error for the perl interpreter,
so the first allocation failure exits perl itself
(kind of like the end of the universe as far as program
execution goes). That makes it impossible to respond to
memory limits in any smart way. If the constructor used
die instead, a program could catch the failure and retry
with a different-sized piddle.
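To make the idea concrete, here is a minimal C sketch of the catch-and-retry pattern the ticket asks for. This is not PDL's actual allocator; the names (alloc_data, alloc_with_retry) are hypothetical. The point is that an allocator which reports failure (here by returning NULL, analogous to die at the perl level) lets the caller back off to a smaller size instead of losing the whole session:

```c
#include <stdlib.h>

/* Hypothetical allocator: signals failure by returning NULL
   instead of aborting the interpreter, so the caller can react. */
static double *alloc_data(size_t nelems) {
    return malloc(nelems * sizeof(double));
}

/* Catch the failure and retry with progressively smaller sizes. */
static double *alloc_with_retry(size_t want, size_t *got) {
    while (want > 0) {
        double *p = alloc_data(want);
        if (p) {
            *got = want;    /* report the size we actually obtained */
            return p;
        }
        want /= 2;          /* back off and try again */
    }
    *got = 0;
    return NULL;
}
```

At the perl level the same pattern would be an eval BLOCK around the constructor, retrying on $@.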
An additional point: often the memory is there, just not
available as a single contiguous region. It would be very
nice if dataflow handling could chain together smaller
allocations to build an array of the needed size.
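A rough sketch of what such a chunked data area could look like, assuming double data and ignoring how dataflow would stitch the pieces together. All names here (chunked_data, chunked_alloc, chunked_at) are illustrative, not PDL API:

```c
#include <stdlib.h>

/* A piddle data area built from several smaller blocks instead
   of one contiguous allocation. */
typedef struct {
    size_t   nchunks;
    size_t   chunk_elems;   /* elements per chunk */
    double **chunks;
} chunked_data;

static chunked_data *chunked_alloc(size_t nelems, size_t chunk_elems) {
    size_t nchunks = (nelems + chunk_elems - 1) / chunk_elems;
    chunked_data *cd = malloc(sizeof *cd);
    if (!cd) return NULL;
    cd->nchunks = nchunks;
    cd->chunk_elems = chunk_elems;
    cd->chunks = calloc(nchunks, sizeof *cd->chunks);
    if (!cd->chunks) { free(cd); return NULL; }
    for (size_t i = 0; i < nchunks; i++) {
        cd->chunks[i] = malloc(chunk_elems * sizeof(double));
        if (!cd->chunks[i]) {          /* unwind cleanly on failure */
            while (i--) free(cd->chunks[i]);
            free(cd->chunks);
            free(cd);
            return NULL;
        }
    }
    return cd;
}

/* Address element i across chunk boundaries. */
static double *chunked_at(chunked_data *cd, size_t i) {
    return &cd->chunks[i / cd->chunk_elems][i % cd->chunk_elems];
}
```

The cost is an extra indirection on element access; the benefit, per the numbers below, is being able to use fragmented address space that a single malloc/SvGROW cannot.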
On Strawberry Perl, adding new piddles in a loop:
  1MB at a time:   up to 1973MB
  10MB at a time:  up to 1980MB
  100MB at a time: up to 1400MB
  200MB at a time: up to 1000MB
  300MB at a time: up to 900MB
  400MB at a time: up to 800MB...
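The attached test script isn't reproduced here, but the probe loop behind those numbers can be sketched in C: keep grabbing blocks of a fixed size until allocation fails, and report the total reached. (The cap parameter just keeps this demo small; the real experiment runs until malloc fails.)

```c
#include <stdlib.h>

/* Allocate block_bytes at a time until failure or cap_bytes,
   recording each block; returns the total successfully obtained.
   Frees everything before returning so the probe is repeatable. */
static size_t probe_total(size_t block_bytes, size_t cap_bytes,
                          void **blocks, size_t max_blocks) {
    size_t total = 0, n = 0;
    while (total + block_bytes <= cap_bytes && n < max_blocks) {
        void *p = malloc(block_bytes);
        if (!p) break;              /* hit the real limit */
        blocks[n++] = p;
        total += block_bytes;
    }
    for (size_t i = 0; i < n; i++) free(blocks[i]);
    return total;
}
```

Running this with different block sizes shows the same effect as above: the smaller the piece, the more of a fragmented address space you can actually use.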
So you can see the ability to build a new piddle from
smaller pieces of memory could more than double the
available space (1973MB at 1MB pieces vs. 800MB at
400MB pieces). And making allocation failures non-lethal
would keep temporaries generated mid-computation from
killing your PDL session.
Test script attached to ticket...