From: Brian H. <bh...@sp...> - 2004-08-24 17:40:07
On Tue, 24 Aug 2004, Diego Olivier Fernandez Pons wrote:

> Bonjour,
>
> Versioned arrays have to be stamped. The stamp is a global value, hence
> anyone can know whether the cache he is using reflects the most recent
> version of the array.
>
> The problems faced with stamping are:
> - tree-like stamp structure (what if another worker still uses an old
>   version of the array in a read-write way?)

So now all accesses to the array are O(log N), where N is the number of
"live" versions of the array still around. So arrays that go through a
lot of versions but remain more or less constant in size (for example,
arrays being used as queues) would get hit worse.

> - garbage collection (there is always a pointer to a value, since you
>   need to be able to restore it in a cache. How will the GC know that
>   it is a 'restoration' pointer and not a 'user' one?)

This can be handled with weak pointers. Which, I comment, OCaml has.

> Many implementations allow only read operations on old versions, so as
> not to have to cope with the first problem.

So the data structures aren't purely applicative. There are reasons I
like purely applicative data structures - it's not just elitism
(although there is some of that... :-). I like it that when function a
passes a list to function b, it knows function b can't change the list
without a's consent. I've been bitten once too often by changes to
function b suddenly causing it to change the list - screwing up
function a.

This solution reintroduces the problem: function b may not be able to
modify the contents of the array a passed in, but b can make it
impossible for a to modify its list anymore - a change that is visible
to a. Worse yet, a can depend upon being able to modify the list you
pass into it, but you can now pass in arrays that cannot be modified.

> Most of the work of Melissa O'Neill is to try to solve the stamp and
> garbage-collection problems. The binary tree part being almost
> obvious.

I think a dynarr implementation might be better.
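For concreteness, here is one standard way the versioned arrays under
discussion get implemented - the "diff array" (shallow binding) scheme,
where the newest version holds a flat array and older versions hold a
chain of reverted cells. This is my own minimal sketch, not code from
any of the implementations mentioned, and it elides the stamp/cache and
weak-pointer machinery; note how reads on an old version cost a chain
walk, which is exactly the non-constant access cost complained about
above:

```ocaml
(* A version is a mutable reference: either the flat, up-to-date array,
   or a single reverted cell (index, old value) pointing at a newer
   version.  All names here are mine, for illustration only. *)
type 'a t = 'a node ref
and 'a node =
  | Flat of 'a array            (* current version: O(1) access *)
  | Diff of int * 'a * 'a t     (* older version: one reverted cell *)

let make n x = ref (Flat (Array.make n x))

(* Re-root so that [t] holds the flat array.  Cost is proportional to
   the length of the diff chain, i.e. how far [t] is from the version
   most recently accessed. *)
let rec reroot t =
  match !t with
  | Flat _ -> ()
  | Diff (i, v, t') ->
      reroot t';
      (match !t' with
       | Flat a ->
           let old = a.(i) in
           a.(i) <- v;
           t := Flat a;
           t' := Diff (i, old, t)
       | Diff _ -> assert false)

let get t i =
  reroot t;
  match !t with Flat a -> a.(i) | Diff _ -> assert false

(* [set] is persistent: it returns a new version and turns [t] into a
   diff against it, so old versions remain readable (and writable). *)
let set t i v =
  reroot t;
  match !t with
  | Flat a ->
      let old = a.(i) in
      a.(i) <- v;
      let t' = ref (Flat a) in
      t := Diff (i, old, t');
      t'
  | Diff _ -> assert false
```

Usage: `let a0 = make 3 0 in let a1 = set a0 0 42 in get a0 0` still
returns `0`, but only after re-rooting the flat array back to `a0` -
the hidden mutation that makes the "purely applicative" interface leak.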
One other thing I worry about is constant factors. When is an O(log N)
algorithm faster than an O(1) algorithm? When the constant factor of the
O(1) algorithm is more than log N times the constant factor of the
O(log N) algorithm. We're talking about a very complicated O(1)
algorithm vs. a very simple O(log N) algorithm.

This, plus the concerns of getting the algorithm correct and avoiding
"surprising" failure modes, makes me disinclined to go for the fancy
solution.

--
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it."
- Gene Spafford

Brian
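P.S. The "very simple O(log N) algorithm" can be sketched in a few
lines as a purely applicative array over a balanced tree - here OCaml's
`Map` functor keyed by `int` (`Map.Make (Int)` assumes a modern OCaml
stdlib; the module and field names are mine):

```ocaml
(* A purely applicative array: every version is a separate immutable
   value, so there are no stamps, no caches, no weak pointers, and no
   way for one caller to invalidate another caller's version. *)
module IntMap = Map.Make (Int)

type 'a t = { len : int; data : 'a IntMap.t }

let make n x =
  let rec fill i m =
    if i >= n then m else fill (i + 1) (IntMap.add i x m)
  in
  { len = n; data = fill 0 IntMap.empty }

let length t = t.len

(* O(log N) lookup; raises Not_found on an out-of-range index. *)
let get t i = IntMap.find i t.data

(* O(log N) persistent update: returns a new version, sharing most of
   the tree with the old one, which remains fully usable. *)
let set t i v = { t with data = IntMap.add i v t.data }
```

Here `set a0 0 42` leaves `a0` untouched and readable at full speed
forever - the property the versioned-array schemes above give up.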