From: Brian H. <bri...@ql...> - 2003-04-14 15:58:02
On Mon, 14 Apr 2003, John Max Skaller wrote:

> 1) the strategy is specified by a user supplied function
> with signature
>
> strategy: int -> int -> int -> int

I like it.  Allows the user much better customization of allocation
routines.

> 2) It is nice to provide a default function which has
> reasonable performance.  A memory efficient algorithm is:
> [ snip- algorithm that increases size in chunks ]

I'm not sure hysteresis is the best algorithm.  Here's the problem.
Assume that we double the array size every time we need to increase its
size, and start at an array size of 1.  We then add n elements to an
empty array- how much copying do we need to do?  Well, when we add the
second element, we need to copy one element to a new array (of length
2), when we add the third we need to copy two elements to a new array
(of length 4), when we add the fifth element we need to copy 4 elements
to a new array (of length 8), etc.  So we end up copying 2^0 + 2^1 + 2^2
+ 2^3 + 2^4 + ... elements.  If n is a power of 2 (or just shy of one),
we have to copy about n-1 elements total.  If n is just over a power of
two, we have to copy about 2n elements total.  In either case, adding n
elements to an empty array requires O(n) elements being copied.

Now, let's consider the same situation, adding n elements, but with a
hysteresis algorithm.  Every time we need to increase the array size, we
increase it by a fixed amount- say c elements.  To put numbers on it,
let's assume c=10 and n=1000.  The first 10 elements get added for free
(no copying), but at element 11 we have to copy the first 10 elements.
Then at element 21 we end up having to copy the first 20 elements, at
element 31 the first 30 elements, and so on.  The total number of
elements we have to copy is 10+20+30+..., or to make the series more
obvious, 10*(1+2+3+...).  So using the simple formula sum 1..m =
m*(m+1)/2, with m = n/c - 1 terms in the series, we get:

    c * (n/c - 1) * (n/c) / 2  =  n * (n - c) / (2*c)  =  (n^2 - n*c) / (2*c)

In other words, we have to copy O(n^2) elements to add n elements.  In
our example of n=1000 and c=10, we have to copy 49,500 elements, while
with the doubling algorithm, since 1000 < 1024, we only have to copy
1023 elements, or 2.07% of the work that hysteresis requires.  Going to
1025 elements requires one more doubling, reallocating and copying 1024
elements, bringing the total to 2047 elements copied.  But hysteresis
requires an extra 3,030 copies- almost three times the extra work
doubling requires.
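To sanity-check the arithmetic, here's a quick throwaway sketch (purely
illustrative- the function names are made up) that just counts element
copies under each policy, appending one element at a time and copying
the whole array at every reallocation:

let copies_doubling n =
  (* grow by doubling, starting from a 1-element array; count the
     elements copied at each reallocation while appending n elements *)
  let rec go size copied =
    if size >= n then copied else go (2 * size) (copied + size)
  in
  go 1 0

let copies_fixed c n =
  (* grow by a fixed increment of c slots instead *)
  let rec go size copied =
    if size >= n then copied else go (size + c) (copied + size)
  in
  go c 0

let () =
  Printf.printf "doubling,   n=1000: %d\n" (copies_doubling 1000);  (* 1023  *)
  Printf.printf "fixed c=10, n=1000: %d\n" (copies_fixed 10 1000);  (* 49500 *)
  Printf.printf "doubling,   n=1025: %d\n" (copies_doubling 1025);  (* 2047  *)
  Printf.printf "fixed c=10, n=1025: %d\n" (copies_fixed 10 1025)   (* 52530 *)

It prints 1023 vs. 49,500 copies for n=1000, and 2047 vs. 52,530 for
n=1025- that last difference is the 3,030 extra copies mentioned above.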
> A faster routine uses a doubling factor instead,
> but HYSTERESIS is vital to prevent thrashing.

You prevent thrashing by letting the array empty out further before you
shrink it.  Here's the idea: you don't shrink the array until it falls
below 25% capacity, and then you halve the size (repeatedly, if need be)
until the used count is back up above 25% of capacity (but it'll still
be below 50% of capacity).  Thrashing is still a possibility, but only
if the size is bouncing over a very wide range- at which point I
question whether it's really thrashing.

This is also why I like the idea of having a user-definable reallocation
strategy.  Don't like the default?  Write your own.  Even better, I
could see users writing custom reallocation strategies that "hook into"
how the arrays are being used, for example:

let my_realloc_strategy currsize used =
  if my_algorithm_phase = heavy_allocation_and_deallocation then
    (* Aggressively upsize the array to avoid copying, and rarely
     * if ever downsize the array, I'll need those slots.  Probably
     * use a doubling algorithm. *)
    ...
  else
    (* Conservatively upsize the array, but aggressively downsize.
     * Probably use a hysteresis algorithm. *)
    ...
;;

Brian