From: Brian H. <bh...@sp...> - 2004-07-28 22:24:38
|
Switching lists to one where it's on-topic. I've been pushing for applicative
enums for a while- with "destructive" enums being forced to emulate
applicative semantics. If I wrote it, would you support it? Or would it get
shot down again? I'm willing to do a structure-of-functions approach- I'm not
married to the object-based approach. This problem should be the sweet spot
of enums- if they're not useful here, I question their usefulness.

I thought enums had a filter function (also lazily applied)? If there is
some other behavior you need that can't be (efficiently) implemented with
the API already provided, let us know- it can be fixed.

As for pattern matching- it's a useful bit of syntactic sugar. But lacking
it isn't that ugly. The advantages of enums should far outweigh this minor
disadvantage.

On 29 Jul 2004, skaller wrote:
> On Thu, 2004-07-29 at 00:36, Brian Hurt wrote:
> > On 28 Jul 2004, skaller wrote:
> > > On Wed, 2004-07-28 at 11:43, Brian Hurt wrote:
> > > > On Tue, 27 Jul 2004 br...@ar... wrote:
> > > > > Very long lists are a sign that you're using the wrong data
> > > > > structure.
> > > >
> > > What would you recommend for a sequence of tokens?
> > > Streams are slow and hard to match on.. bucket lists
> > > have lower storage overhead but hard to match on.
> >
> > Extlib Enumerations. For short lists, yeah they're slower than lists.
>
> That doesn't matter -- the lists are long by specification.
>
> > But for long lists, I could see them being a lot faster. Don't forget
> > cache effects- streaming processing can have much better cache behavior
> > than repeatedly walking a long list (too large to fit into cache).
>
> Can't pattern match on them. One reason for building a list is I filter
> it, for example, in Felix I strip out white space tokens, in Vyper (Python
> interpreter written in Ocaml) I did something like 13 separate passes to
> handle the indentation and other quirks to precondition the input to the
> parser so it became LALR(1).
>
> So, I'd have to use a list as a buffer for the head of the stream anyhow..
>
> Also, there is a serious design problem with ExtLib Enums. Although the
> data structure appears functional, it doesn't specify when things happen
> precisely.
>
> In particular if the input is a stream, that is, uses mutators to extract
> elements, then instead of using the persistence and laziness so you can
> use the Enums as forward iterators -- for example in a backtracking parser
> -- the Enums actually degrade to uncopyable input iterators.
>
> Since Ocamllex uses a mutable lex buffer, the Enums based on them are also
> non-functional input iterators .. [I can get around that by calling
> 'force()' but that totally defeats the purpose of using Enums .. :]
>
> Whereas, a plain old list is a purely functional forward iterator, and
> unquestionably works with a backtracking parser.
>
> As an example of a simple modification I could do that won't work easily
> with uncontrolled control inversion: suppose I cache the token stream on
> disk, and in particular Marshal file 'fred.flx' out as 'fred.tokens'.
> [Now you *have* to force() all the iterators, or each one inside the
> #include will write the file to disk at the end of the sub-file .. but
> that should only be done once -- it's quite slow writing a file to disk ..
> forcing all the enums makes separate copies of the tokens .. argggg .. ]
>
> The problem goes away when I manually build lists and preprocess them
> because I have explicit control.
>
> Bottom line is that Enums work fine to integrate purely functional data
> structures together but they're not very useful mixing coupled streams
> together.
>
> Crudely -- if you have a hierarchy of streams you may need to read them in
> a particular order due to the coupling .. with STL input iterators you can
> do that, with hand written Ocaml you can do that -- with Enums you can't.
> > -- "Usenet is like a herd of performing elephants with diarrhea -- massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it." - Gene Spafford Brian |
From: skaller <sk...@us...> - 2004-07-28 23:41:13
|
On Thu, 2004-07-29 at 08:32, Brian Hurt wrote:
> Switching lists to one where it's on-topic.
>
> I've been pushing for applicative enums for a while- with "destructive"
> enums being forced to emulate applicative semantics.

I am wondering exactly what that means though .. :)

> If I wrote it, would you support it? Or would it get shot down again?

There is a tension here between compatibility with existing Enums and a
newer design. My personal position is roughly:

(a) the newer design must preserve most of the good properties of the
    existing design;
(b) it must introduce new good properties the existing design lacks;
(c) if these conditions are met, I'd be willing to break the existing
    interface.

However I must say this will have zero immediate impact on my code base,
because I'm not actually using ExtLib. However it will have an important
impact in that I might actually be able to use it if Enums are fixed: I'd
like to.

> I'm willing to do a structure of functions approach- I'm not married to
> the object based approach. This problem should be the sweet spot of enums-
> if they're not useful here, I question their usefulness.

Enums are clearly useful for purely functional data structures, since they
integrate the generic notion of 'inductive data type' with that of
'sequence' via some kind of arbitrary visitation algorithm. That's useful!
It's a limited and messy form of polyadic programming, but it's better than
naive Ocaml, where mixing up multiple representations of some sequences
(such as lists, sets, arrays etc) in either space (at once in one program)
or in time (where you want to change your representation) is really quite
messy.

It isn't that Enums are not a useful concept: they are. It's that the design
has been pushed to try to cover coinductive data types like streams, and
that just doesn't work.
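[The inductive/coinductive distinction above can be made concrete with a
minimal sketch -- hypothetical code, not the ExtLib API. An enum built over
a list is persistent and can be traversed twice; the same type built over a
mutable source (here a counter standing in for a lex buffer) silently
resumes where the previous traversal left off:]

```ocaml
(* A minimal lazy-cons enum. Over a list it is persistent; over a
   mutable source it degrades to a one-shot input iterator. *)
type 'a enum = Nil | Cons of 'a * (unit -> 'a enum)

let rec of_list = function
  | [] -> Nil
  | x :: xs -> Cons (x, fun () -> of_list xs)

(* An imperative source: each call to the thunk consumes state. *)
let of_counter limit =
  let n = ref 0 in
  let rec step () =
    if !n >= limit then Nil
    else begin incr n; Cons (!n, step) end
  in
  step ()

let rec to_list = function
  | Nil -> []
  | Cons (x, rest) -> x :: to_list (rest ())
```

[Traversing an `of_list` enum twice yields the same elements both times;
traversing the same `of_counter` enum twice does not, because the shared
counter has already been consumed -- exactly the "uncopyable input
iterator" degradation skaller complains about.]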
The properties 'I' which apply to inductive data types and 'C' which apply
to coinductive ones are dual but distinct, and if you want a common
interface you can only use the intersection. To restore the extra bits for
each kind you'd normally use a simple design like:

  kind Iterator
  kind Inductive : Iterator
  kind CoInductive : Iterator

> I thought enums had a filter function (also lazily applied)? If there is
> some other behavior you need that can't be (efficiently) implemented with
> the API already provided, let us know- it can be fixed.

I'm one of 'us' -- see the SF list of developers :)

> As for pattern matching- it's a useful bit of syntactic sugar. But lacking
> it isn't that ugly. The advantages of enums should far outweigh this minor
> disadvantage.

It's not a minor issue in my application. Correctness is important, and it's
trivial for me to use HOFs to sequence repeated pattern-match-based folds on
a list to get the right output. It may be slow, but it isn't even marginally
slow compared with my lookup algorithm :)

--
John Skaller, mailto:sk...@us...
voice: 061-2-9660-0850, snail: PO BOX 401 Glebe NSW 2037 Australia
Checkout the Felix programming language http://felix.sf.net
|
From: Brian H. <bh...@sp...> - 2004-07-29 04:27:33
|
On 29 Jul 2004, skaller wrote:
> On Thu, 2004-07-29 at 08:32, Brian Hurt wrote:
> > Switching lists to one where it's on-topic.
> >
> > I've been pushing for applicative enums for a while- with "destructive"
> > enums being forced to emulate applicative semantics.
>
> I am wondering exactly what that means though .. :)

The "core" functionality I'm thinking of is something like this:

val curr: 'a Enum.t -> 'a option
(* Returns Some x if x is the current value of the enumeration, None if the
   enumeration doesn't have a current value. *)

val next: 'a Enum.t -> 'a Enum.t
(* Returns the enumeration of the rest of the elements. *)

You could use it like:

let list_of_enum e =
    let rec loop e accu =
        match Enum.curr e with
        | None -> List.rev accu
        | Some x -> loop (Enum.next e) (x :: accu)
    in
    loop e []
;;

Except that we'd implement this function with Obj.magic, but that's a
different topic.

Now, for enums of applicative data structures, this works just fine. But for
imperative enums, where getting the current element changes global state,
they'd have to fake applicative behavior. What this means is that they'd
have to keep track of what they returned as next. Basically, they'd have to
keep an 'a Enum.t option ref around. If it's None, the next enum has not
been created yet, and you create the next enum and update the reference. If
it's Some x, you return x, the same value you did last time. You'd have to
do the same thing with the value returned from curr.

This is a little extra overhead for imperative enums, but it keeps the enum
applicative. And note- since all the references are unidirectional, if
you're really just stepping through the list and discarding the enums as you
go, the GC will clean the enums up behind you, even if you're imperative.
Ocaml is actually quite good at not keeping false references to dead data
around. The complexity of handling the imperative-to-applicative interface
could be put in the library.
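[The memoizing scheme described in the last two paragraphs might look like
the following minimal sketch. All names (`of_get`, `curr_c`, `next_c`) are
hypothetical illustration, not the proposed ExtLib API; the underlying `get`
is assumed to keep returning None once exhausted:]

```ocaml
(* An applicative-looking enum over a destructive [get] function.
   Each node caches both its current value and its successor, so
   calling [curr] or [next] twice returns the same answer both times,
   even though [get] mutates external state. *)
type 'a t = {
  mutable curr_c : 'a option option;  (* None = not fetched yet *)
  mutable next_c : 'a t option;       (* cached successor node *)
  get : unit -> 'a option;            (* the destructive reader *)
}

let of_get get = { curr_c = None; next_c = None; get }

let curr e =
  match e.curr_c with
  | Some v -> v                        (* replay the cached answer *)
  | None -> let v = e.get () in e.curr_c <- Some v; v

let next e =
  match e.next_c with
  | Some e' -> e'                      (* replay the cached successor *)
  | None ->
    ignore (curr e);                   (* ensure elements are consumed in order *)
    let e' = of_get e.get in
    e.next_c <- Some e'; e'
```

[As the GC point above suggests: the caches only point forward, so a
straight-line traversal that drops earlier nodes lets them be collected.]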
You'd have a separate creation function for imperative enums vs. applicative
ones, that might look something like:

val enum_applicative: (~curr:unit -> 'a option) ->
    (~next:unit -> 'a Enum.t) -> (?count: unit -> int) -> 'a Enum.t
val enum_imperative: (~get: unit -> 'a option) ->
    (?count: unit -> int) -> 'a Enum.t

You could create an imperative enumeration of all the lines of a file just
by:

let enum_lines chan =
    enum_imperative
        ~get:(fun () ->
                try Some (input_line chan) with End_of_file -> None)

Actually, I'd go one step further. I define my enums as ('a, 'b) Enum.t. The
'b is the "internal" type. I redefine the interface to be:

val curr: ('a, 'b) Enum.t -> 'a option
val next: ('a, 'b) Enum.t -> ('a, 'b) Enum.t
val enum_applicative: (~curr:unit -> 'a option) ->
    (~next:unit -> 'a Enum.t) -> (?count: unit -> int) -> ('a, 'a) Enum.t
val enum_imperative: (~get: unit -> 'a option) ->
    (?count: unit -> int) -> ('a, 'a) Enum.t

I.e. initially the internal type is meaningless. But then I could define:

val map: ('a -> 'b) -> ('a, 'c) Enum.t -> ('b, 'c) Enum.t

This allows me to short-circuit map functions. Mapping enums multiple times
doesn't matter; I never allocate more than two Some blocks per call to curr.
But I do this without the annoying option/exception confusion the current
enums have.

The advantages of this approach are, IMHO:
1) Applicative semantics make it easier to write correct code (it works
   more like a list).
2) Cleaner API (no clone, no exception/option switches).

The disadvantages:
1) Possibly higher instruction count per operation for applicative enums-
   on each next you have to allocate a new enumeration or two (no more than
   two- map can short-circuit that as well).
2) Imperative enums will be "significantly" (for anal enough definitions of
   significantly) slower, take more memory, and be overall more complex.
   We're still not talking more than a few dozen instructions.

That's the sticking point.
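[A minimal sketch of the ('a, 'b) Enum.t idea, simplified to a list source
to keep it short (names hypothetical). The key point is that map never
builds an intermediate enum: it only composes the conversion function, so a
stack of maps costs one fused closure call and, in this simplified version,
one Some block per call to curr regardless of how many maps are stacked:]

```ocaml
(* 'b is the "internal" (raw source) type; 'a is the element type the
   user sees after all the stacked conversions. *)
type ('a, 'b) t = { raw : 'b list; conv : 'b -> 'a }

let of_list l = { raw = l; conv = (fun x -> x) }       (* ('b, 'b) t *)

let curr e = match e.raw with [] -> None | x :: _ -> Some (e.conv x)
let next e = match e.raw with [] -> e | _ :: tl -> { e with raw = tl }

(* map touches no elements: it just composes conversions, so
   [map f (map g e)] walks the source once with [fun x -> f (g x)]. *)
let map f e = { raw = e.raw; conv = (fun x -> f (e.conv x)) }
```

[The phantom-style 'b never escapes into results the user sees, which is
what lets map change 'a while reusing the same raw source.]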
The current library is optimized for ultimate speed- at the cost of making
it harder to use. So it doesn't get used in places where IMHO it should.
That's the debate as I see it- ultimate performance vs. usability. IMHO, the
performance hit for enums of applicative data structures is damned close to
zero- remember that we currently have the cost of setting up a try/with
block. I haven't actually implemented it, so I don't know. And the
performance hit of imperative enums will only be taken for stuff that'll
have other, much more significant, performance costs (i.e. file I/O).

> I'm one of 'us' -- see the SF list of developers :)

This actually makes it worse :-).

> > As for pattern matching- it's a useful bit of syntactic sugar. But
> > lacking it isn't that ugly. The advantages of enums should far outweigh
> > this minor disadvantage.
>
> It's not a minor issue in my application. Correctness is important, and
> it's trivial for me to use HOFs to sequence repeated pattern-match-based
> folds on a list to get the right output. It may be slow, but it isn't even
> marginally slow compared with my lookup algorithm :)

Um, if you're looking up items in a long list, you really want to be using
some sort of tree structure. Also, this is a nice feature of enums- create
the enum, map it multiple times, then fold over it, putting the elements
into a tree. It's not either-or. I'm with you- correctness is goal number
one.

--
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it." - Gene Spafford

Brian
|
From: skaller <sk...@us...> - 2004-07-29 09:19:50
|
On Thu, 2004-07-29 at 14:35, Brian Hurt wrote:
> On 29 Jul 2004, skaller wrote:
>
> Now, for enums of applicative data structures, this works just fine. But
> for imperative enums, where getting the current element changes global
> state, they'd have to fake applicative behavior. What this means is that
> they'd have to keep track of what they returned as next. Basically, they'd
> have to keep an 'a Enum.t option ref around. If it's None, the next enum
> has not been created yet, and you create the next enum and update the
> reference. If it's Some x, you return x, the same value you did last time.
> You'd have to do the same thing with the value returned from curr.

This idea was implemented by me for STL many years ago: basically you have
an input iterator type I, and build an iterator adaptor container F, which
converts I to a forward iterator by simply buffering the input from the
'earliest' iterator to the current position.

However, this technique *still* has a problem: the actual stream destructor
is lazy, in the sense that it is only called when needed to fetch an element
never fetched before. Unfortunately the lazy evaluation in the Enum package
itself means that time is indeterminate, so you still lose control over
coupled streams.

Just as an example -- and a very nasty one -- consider a tokeniser and a
parser streamed together, and suppose that after some reduction the parser
modifies the tokeniser to change what it tokenises... then you simply cannot
buffer the tokens, because when you read forward after modifying the
tokeniser, you get the wrong tokens, since the ones in the buffer used the
old rules.

Before you laugh too hard at this example -- THE most common lexer and
parser combination in the world must do precisely what I described; it's
called a C compiler :) In particular the lexer is modified as soon as a
typename is defined, so the next identifier with the same spelling is lexed
as a typename keyword, not an ordinary identifier..
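[The buffering adaptor described at the top of this message can be sketched
in OCaml with a memoized lazy list -- hypothetical names, and with the
caveat raised just above: it only works when the source is not retroactively
modified, so the typedef-feedback case is out of scope:]

```ocaml
(* Turn a destructive reader [get] into a persistent forward iterator
   by buffering each fetched element in a memoized lazy list.  Lazy.t
   memoizes forcing, so [get] is called at most once per position;
   every copy of a node replays the buffer, which is what a
   backtracking parser needs to rewind. *)
type 'a node = Nil | Cons of 'a * 'a fwd
and 'a fwd = 'a node Lazy.t

let buffer (get : unit -> 'a option) : 'a fwd =
  let rec node () =
    lazy (match get () with
          | None -> Nil
          | Some x -> Cons (x, node ()))
  in
  node ()
```

[Holding on to an earlier `'a fwd` value is the "rewind point": forcing it
again replays buffered elements instead of consuming the source.]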
Frontc does this, and so do most C lexer/parser systems AFAIK :)

Anyhow, I think you are right: this technique will solve a lot of problems,
and what's more, there is no general technique that CAN solve them all.
Instead we just specify that all imperative enums must be uncoupled. This is
a restriction, and it may be that if you break the rules, Enum still works
fine. The important thing is we don't promise it.

The key thing is to be very clear when the specified interface semantics
hold. For me to use Enums, even if I know the implementation code backwards,
I need to verify the documented requirements are met by my application --
for example deciding that the C lexer/parser would break the rules, and
would NOT be a suitable combination to glue together with Enums. This is
fine .. there are plenty of things Enums will work just fine for.

Here's another example that doesn't work well: Hashtbl. Weird -- but what
happens if you iterate a hashtable and insert and delete inside the argument
function? The result may include 'deleted' elements, and fail to include
some inserted ones.

I actually have this problem in the Felix compiler; I had to convert to a
Set to control the process. Hashtbl.iter just isn't all that useful:
Set.choose is a much better function: it's a stream iterator, whereas all
those functions in Ocaml called 'iter' are NOT in fact iterators at all --
the control relation is inside out.

This is again a coupling problem .. and the purely functional approach
neatly sidesteps it, despite the algorithm actually being procedural (choose
an element from a set .. assign the set to a mutable field with the element
removed .. if more elements are then added, well, Set.choose will choose
them, and if they're removed it won't -- the evaluation order is under
control!)

[Something about adding a level of indirection solving all problems in
CS .. :]

--
John Skaller, mailto:sk...@us...
voice: 061-2-9660-0850, snail: PO BOX 401 Glebe NSW 2037 Australia
Checkout the Felix programming language http://felix.sf.net
|
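[The Set.choose pattern skaller describes (choose an element, remove it,
possibly add more work as you go) is a worklist loop. A hypothetical sketch,
using integers to stand in for the functions being inlined; `drain` and
`process` are illustration only:]

```ocaml
(* Drain a set with Set.choose/remove instead of iterating it.
   Because each step re-reads the current set, elements added during
   processing are picked up and removed ones are skipped: the
   evaluation order stays under explicit control, unlike Hashtbl.iter. *)
module IntSet = Set.Make (Int)

let drain ~(process : int -> int list) (init : IntSet.t) : int list =
  let work = ref init in
  let seen = ref [] in
  while not (IntSet.is_empty !work) do
    let x = IntSet.choose !work in
    work := IntSet.remove x !work;
    seen := x :: !seen;
    (* [process] may produce new items (e.g. newly created functions);
       they simply join the worklist, unless already handled. *)
    List.iter
      (fun y -> if not (List.mem y !seen) then work := IntSet.add y !work)
      (process x)
  done;
  List.rev !seen
```

[Note Set.choose returns an unspecified element, so the processing order is
not guaranteed; what is guaranteed is that every live element is processed
exactly once, which is the closure property the inliner needs.]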
From: Brian H. <bh...@sp...> - 2004-07-29 15:56:38
|
On 29 Jul 2004, skaller wrote:
> This idea was implemented by me for STL many years ago: basically you have
> an input iterator type I, and build an iterator adaptor container F, which
> converts I to a forward iterator by simply buffering the input from the
> 'earliest' iterator to the current position.

I actually wouldn't have a problem with this. But I'd have functions to
convert in both directions (turn an I into an F and vice versa). Both
directions can be faked.

> Before you laugh too hard at this example -- THE most common lexer and
> parser combination in the world must do precisely what I described, it's
> called a C compiler :)

Actually, in the case of C this isn't true. There is enough information in
the grammar to be able to differentiate type names from variable names. You
generally need to sanity-check after the fact that the name they're using as
a type name really is a type name, but you don't have to modify the token
stream. This is demonstrable, as most C compilers use lex and yacc to build
their parsers. There are other languages for which your example is true- so
the point stands. But C isn't a good example.

> Here's another example that doesn't work well: Hashtable.

Iterating over imperative data structures has always been a sore spot. One
of the many advantages of functional programming :-). Most programmers in
imperative languages, however, know enough not to modify the data structures
they're iterating over. I'd be inclined to say that modifying a data
structure while iterating over it causes undefined behavior- in the
terminology of the C standard, undefined behavior means that any behavior is
acceptable. If you invoke undefined behavior, the code can whistle "Night in
Tunisia", make a pot of coffee, and explode, and still be conformant.
-- "Usenet is like a herd of performing elephants with diarrhea -- massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it." - Gene Spafford Brian |
From: Nicolas C. <war...@fr...> - 2004-07-29 18:47:36
|
> > This idea was implemented by me for STL many years ago: basically you
> > have an input iterator type I, and build an iterator adaptor container
> > F, which converts I to a forward iterator by simply buffering the input
> > from the 'earliest' iterator to the current position.
>
> I actually wouldn't have a problem with this. But I'd have functions to
> convert in both directions (turn an I into an F and vice versa). Both
> directions can be faked.

This would be nice. I wonder if this would actually be feasible using the
current imperative enums. Any implementation proposal?

Regards,
Nicolas Cannasse
|
From: skaller <sk...@us...> - 2004-07-29 20:36:46
|
On Fri, 2004-07-30 at 02:04, Brian Hurt wrote:
> On 29 Jul 2004, skaller wrote:
> > Before you laugh too hard at this example -- THE most common lexer and
> > parser combination in the world must do precisely what I described, it's
> > called a C compiler :)
>
> Actually, in the case of C this isn't true. There is enough information in
> the grammar to be able to differentiate type names from variable names.

I'm afraid not. Counter-example:

    (A)(B)(C)

Here's an actual example, which compiles and runs fine under gcc:

#include <stdio.h>

int f(int x){ return 1; }
typedef int (*fp)(int);
fp g(fp x){ return x; }
int main() {
    int x = (int)(f)(1); /* parsed as (int)(f(1)) */
    int y = (g)(f)(1);   /* parsed as (g(f))(1) */
    printf("%d, %d\n", x, y);
}

Application has a higher precedence than casts in C :)

There are other cases which are merely LR(1)-ambiguous but which can be
parsed correctly, as you claim, provided you can look ahead to the end of
the statement.

--
John Skaller, mailto:sk...@us...
voice: 061-2-9660-0850, snail: PO BOX 401 Glebe NSW 2037 Australia
Checkout the Felix programming language http://felix.sf.net
|
From: skaller <sk...@us...> - 2004-07-29 20:51:53
|
On Fri, 2004-07-30 at 02:04, Brian Hurt wrote:
> On 29 Jul 2004, skaller wrote:
>
> I'd be inclined to say that modifying a data structure while iterating
> over it causes undefined behavior-

You could also say 'it is unspecified whether the inserted or deleted
element is included in the iteration', and the same for the value of a
particular key that is changed during the iteration. Unspecified behaviour
is not the same thing as undefined behaviour. In particular, Xavier did say
that this is the case for Hashtbl: it won't crash if you modify the table
while iterating.

This would be useful to me -- for example, I have a function that does
'inlining' in Felix by running through all the functions in a hashtable and
doing inlining into them .. which involves further lookup in the same table.
The process ALSO adds new functions to the table as it goes.

The correctness of the result is not changed no matter whether newly
inserted functions are seen or not, nor whether a function to be inlined is
already inlined into or not. It does impact efficiency though, and it also
means an added function might not be inlined into (affecting the efficiency
of generated code).

So the distinction does have an impact .. although the best solution is to
use a technique that guarantees closure, it is sometimes useful to have a
less-than-perfect algorithm to start, so you can check some other piece of
code actually works (in this case the individual function inliner needed
checking before checking closure).

--
John Skaller, mailto:sk...@us...
voice: 061-2-9660-0850, snail: PO BOX 401 Glebe NSW 2037 Australia
Checkout the Felix programming language http://felix.sf.net
|
From: Brian H. <bh...@sp...> - 2004-07-29 21:30:51
|
On 30 Jul 2004, skaller wrote:
> On Fri, 2004-07-30 at 02:04, Brian Hurt wrote:
> > I'd be inclined to say that modifying a data structure while iterating
> > over it causes undefined behavior-
>
> You could also say 'it is unspecified whether the inserted or deleted
> element is included in the iteration', and the same for the value of a
> particular key that is changed in the iteration.

Can you see the same element twice? Imagine the following situation: the
hashtable is built on top of a resizable array, with each entry a bucket.
We start with a hash table with 16 buckets in it, including element x in
bucket 3. We iterate through the hashtable, seeing element x when we go past
bucket 3. Then, sometime before we're done iterating, we add an element to
the hashtable that causes it to resize the underlying array. In doing so, it
moves element x from bucket 3 to bucket 19. So when we resume iterating,
when we get to bucket 19, we see element x a second time.

We can also miss any given element- including ones that were already in the
hashtable. Instead of adding elements we remove them, and element x moves
down from bucket 19 to bucket 3 after we've already passed bucket 3.

--
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it." - Gene Spafford

Brian
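[The bucket arithmetic behind Brian's scenario, assuming the usual
hash-mod-capacity placement: an element whose hash is 19 sits in bucket 3 of
a 16-bucket table, and moves to bucket 19 when the table doubles to 32
buckets -- ahead of a scan that already passed bucket 3:]

```ocaml
(* Bucket index under the common hash-mod-capacity scheme. *)
let bucket ~capacity h = h mod capacity

let () =
  assert (bucket ~capacity:16 19 = 3);   (* before the resize *)
  assert (bucket ~capacity:32 19 = 19)   (* after doubling *)
```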
From: skaller <sk...@us...> - 2004-07-29 22:40:29
|
On Fri, 2004-07-30 at 07:38, Brian Hurt wrote:
> On 30 Jul 2004, skaller wrote:
> > You could also say 'it is unspecified whether the inserted or deleted
> > element is included in the iteration', and the same for the value of a
> > particular key that is changed in the iteration.
>
> Can you see the same element twice?

Possibly -- it would depend on the exact statement of behaviour.

  get_first()
  get_after()
  get_after_or_equal()

would fix this :)

--
John Skaller, mailto:sk...@us...
voice: 061-2-9660-0850, snail: PO BOX 401 Glebe NSW 2037 Australia
Checkout the Felix programming language http://felix.sf.net
|
From: Nicolas C. <war...@fr...> - 2004-07-29 07:46:56
|
> Switching lists to one where it's on-topic.
>
> I've been pushing for applicative enums for a while- with "destructive"
> enums being forced to emulate applicative semantics. If I wrote it, would
> you support it? Or would it get shot down again? I'm willing to do a
> structure of functions approach- I'm not married to the object based
> approach. This problem should be the sweet spot of enums- if they're not
> useful here, I question their usefulness.

Well, I guess I need to answer this one :)

As I already told John Skaller, I'm not against modifying the current Enum
implementation. I agree there is a problem with input iterators vs. forward
iterators as Enums. However, I'm not satisfied with the proposals being made
to correct this. We should keep:

- maximum compatibility with current enums : the "applicative enums" (let's
  call them purely functional) proposal is not going this way, since it's
  modifying the current enums quite a lot
- one type to rule them all (no shadow types, no subtyping, both kinds of
  iterators unified as a single enum)

In the following sample:

let e = Std.input_lines ch in
let e2 = Enum.map (fun x y -> (x,y)) e e in
...

where two input iterators are used together, only the purely functional
style would actually succeed, but there is a big speed tradeoff.... I would
like to see some benchmarks of an Array.of_enum (List.enum big_list) with
the purely functional vs. the imperative style.

> I thought enums had a filter function (also lazily applied)? If there is
> some other behavior you need that can't be (efficiently) implemented with
> the API already provided, let us know- it can be fixed.

Well, to get an idea of what is needed, you should look at the Enum API and
try to implement it using purely functional enums. Only once you have done
that can you see whether the design is good or not.

Nicolas Cannasse
|
From: Brian H. <bh...@sp...> - 2004-07-29 15:32:58
|
On Thu, 29 Jul 2004, Nicolas Cannasse wrote:
> - maximum compatibility with current enums : the "applicative enums"
>   (let's call them purely functional) proposal is not going this way,
>   since it's modifying the current enums quite a lot

In other words, functional enums are never, ever, going to be acceptable.

> - one type to rule them all (no shadow types, no subtyping, both kinds of
>   iterators unified as a single enum)

I'm assuming this is referring to the ('a, 'b) Enum.t idea. Personally, I
consider this less ugly than the exception/option switch the current library
uses.

> > I thought enums had a filter function (also lazily applied)? If there is
> > some other behavior you need that can't be (efficiently) implemented
> > with the API already provided, let us know- it can be fixed.
>
> Well, to get an idea of what is needed, you should look at the Enum API
> and try to implement it using purely functional enums. Only once you have
> done that can you see whether the design is good or not.

Bluntly, I've gotten more than a little bit tired of having the rug pulled
out from under me. I've been around this block before, and I'm disinclined
to spend the time and effort to write code that'll simply get tossed out.

And the concern for backwards compatibility rings hollow in my ears as well.
We're quite willing to break compatibility with the INRIA standard library
when it suits us- witness the demodularized Set and Map libraries. Bluntly,
these are probably hurting our chances of getting folded into the standard
library. But that's perfectly OK. To the point where if you actually
submitted code that was backwards compatible with the standard Map library,
it'd get bounced because it used modules. Demonstrably. But breaking
backwards compatibility with the Enum library- now, that would be bad.

So unless I'm pretty well convinced that the code will be accepted before I
start writing, I am strongly disinclined to start writing. As, based upon my
previous experiences, I currently believe that the code won't be accepted.
And no, "write it up and we'll think about it" isn't helping.

--
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it." - Gene Spafford

Brian
|
From: Nicolas C. <war...@fr...> - 2004-07-29 18:44:39
|
> > - maximum compatibility with current enums : the "applicative enums"
> >   (let's call them purely functional) proposal is not going this way,
> >   since it's modifying the current enums quite a lot
>
> In other words, functional enums are never, ever, going to be acceptable.

That's not exactly what I said (see below).

> > - one type to rule them all (no shadow types, no subtyping, both kinds
> >   of iterators unified as a single enum)
>
> I'm assuming this is referring to the ('a, 'b) Enum.t idea. Personally, I
> consider this less ugly than the exception/option switch the current
> library uses.

Shadow types have a different usage, which is the possibility of subtyping
without objects. This would give us a compiler check in order to distinguish
between forward and input iterators, and then allow some operations (such as
count) on the former but not the latter.

The exception/option problem you're raising is - if I remember correctly -
the fact that "Enum.create" asks for a unit -> 'a next method eventually
raising an exception, while "Enum.get" returns an 'a option. IMHO, this
really makes sense for performance reasons, and also because the people
creating the enums and the people using them are "most of the time"
different : for example you'll not rewrite your List/Array enum bindings,
but simply blindly use them, so you only need the best performance, whatever
the tricks of the implementation.

> > > I thought enums had a filter function (also lazily applied)? If there
> > > is some other behavior you need that can't be (efficiently)
> > > implemented with the API already provided, let us know- it can be
> > > fixed.
> >
> > Well, to get an idea of what is needed, you should look at the Enum API
> > and try to implement it using purely functional enums. Only once you
> > have done that can you see whether the design is good or not.
>
> Bluntly, I've gotten more than a little bit tired of having the rug pulled
> out from under me.
> I've been around this block before, and I'm disinclined to spend the time
> and effort to write code that'll simply get tossed out.
>
> And the concern for backwards compatibility rings hollow in my ears as
> well. We're quite willing to break compatibility with the INRIA standard
> library when it suits us- witness the demodularized Set and Map libraries.
> Bluntly, these are probably hurting our chances of getting folded into the
> standard library. But that's perfectly OK. To the point where if you
> actually submitted code that was backwards compatible with the standard
> Map library, it'd get bounced because it used modules. Demonstrably. But
> breaking backwards compatibility with the Enum library- now, that would be
> bad.
>
> So unless I'm pretty well convinced that the code will be accepted before
> I start writing, I am strongly disinclined to start writing. As, based
> upon my previous experiences, I currently believe that the code won't be
> accepted. And no, "write it up and we'll think about it" isn't helping.

I'm sorry if you took that remark badly; it was not the goal. It's actually
my way of programming : when some difficult problem arises - and enums are a
difficult problem, or we wouldn't be talking now - the best is often to try
to implement it and see how it works. I did that first with Enums, and later
with IO. Then for IO I realized my design was not appropriate, so I made the
changes : it was better when done, so I was satisfied.

I don't think we should keep strict backward compatibility, so if I were
sure that purely functional enums would correct all the problems, I'd say
let's go for it ! But I'm not sure, maybe because I didn't try to implement
them this way. I don't have a lot of time, but I might try to do so. I was
just making the proposal that you provide us working sources, so that we can
directly look at the impact of the changes on the API and see how it works
better.
Concerning some code you previously posted, I don't recall saying 'no' to some piece of code that would have fit perfectly in ExtLib. If you or other people here think I didn't take enough time to look at some proposal, then please remind me and post it again. But since right now I'm kind of the "benevolent dictator of ExtLib" (although I dislike the expression) I sometimes end up making the choice of not including some code unless I'm convinced, because I think it's better for ExtLib not to grow uncontrolled. Regards, Nicolas Cannasse |
From: Brian H. <bh...@sp...> - 2004-07-29 20:10:09
|
On Thu, 29 Jul 2004, Nicolas Cannasse wrote: > > > - maximum compliance with current enums : the "applicative enums" (let's > > > call them purely functional) proposal is not going this way since it's > > > modifying quite a lot current enums > > > > In other words, functional enums are never, ever, going to be acceptable. > > That's not exactly what I said (see below). Exactly how much compatibility is required? My position on this is that I want a clear idea of where the line is drawn. At which point I will consider whether it's worth my while to write the code. I have better things to do than to throw code against a wall and see if it sticks. > The exception/option problem you're raising is - if I remember correctly - > the fact that "Enum.create" asks for a unit -> 'a next function eventually raising > an exception while "Enum.get" returns an 'a option. IMHO, this > really makes sense for performance reasons and also because the people > creating the enums and the people using them are "most of the time" > different : for example you won't rewrite your List/Array enum bindings, > but simply blindly use them, so you only need the best performance, > whatever the tricks of the implementation. Yep. You've hit the nail on the head- and here is where we part company. I will gladly sacrifice a fair bit of performance for design cleanliness. The *ONLY* argument in favor of the current exception/option switch (which is exactly what you think it is) is performance. So how much wriggle room do I have on performance? Note that the proposal I've given actually makes getting the current element *faster*- IIRC from the benchmarks I did on the first go around, a try block cost 2.5 times as much as allocating a Some block- so allocating 2 Some blocks is about 4/7th the cost of a try + one Some block (the current implementation). But my implementation allocates a new enumeration every step- which slows that part down. Total slowdown? I don't know. 
But the last time we went around this block, the answer came back that any slowdown was unacceptable. Performance at all costs. Fine. Except now we know what one of the costs is- people aren't using enums, and instead are using long lists, because they value correctness over wringing that last ounce of performance out of the code. > I'm sorry if you took that remark badly, that was not the goal. > It's actually my way of programming : when some difficult problem arises - > and enums are a difficult problem or we wouldn't be talking now - the best > is often to try to implement it, and see how it works. I did that first > with Enums, and later with IO. Then for IO I realized my design was not > appropriate so I made the changes : it was better when done so I was > satisfied. "Plan to throw one away" isn't a bad idea. I've already thrown one away. The lessons I've learned lead me to believe that were I to implement this idea, it'd get thrown away too. > > I don't think we should keep strict backward compatibility, so if I were sure > that purely functional enums would correct all the problems I'd say > let's go for it! But I'm not sure, maybe because I didn't try to implement > them this way. I don't have a lot of time but I might try to do so; I was > just making the proposal that you provide us working sources so that we > can directly have a look at the impact of the changes on the API and how it > works better. Feel free to steal my ideas and implement the code. The three main areas of improvement (in rough order of importance) are: 1) Functional, not imperative, semantics 2) Get rid of the exception/option switch 3) Separate the "move within the enum" behavior from the "get the head element of the enum" behavior. > > Concerning some code you previously posted, I don't recall saying 'no' to > some piece of code that would have fit perfectly in ExtLib. 
If you or > other people here think I didn't take enough time to look at some proposal, > then please remind me and post it again. But since right now I'm kind of > the "benevolent dictator of ExtLib" (although I dislike the expression) I > sometimes end up making the choice of not including some code unless I'm > convinced, because I think it's better for ExtLib not to grow uncontrolled. > Has your position that modules are bad changed? Or would I be submitting the code just to have it bounce again? Under what circumstances would code *not* be bounced for using modules? Compatibility with an existing standard library? Because the semantics of one or more operations stand to benefit from them (by enforcing ordering, etc)? Because the author of the library felt that they were a good interface? Or never? -- "Usenet is like a herd of performing elephants with diarrhea -- massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it." - Gene Spafford Brian |
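For readers following along, here is a minimal sketch of the exception/option switch Brian describes. The names (`No_more_elements`, `make_next`, `get`) are illustrative, not the actual ExtLib API: the producer-side next function signals end of input by raising, and the consumer-side get converts that into an option.

```ocaml
(* Illustrative sketch of the two styles under discussion;
   names are hypothetical, not the real ExtLib interface. *)
exception No_more_elements

(* Producer style: next raises when the input is exhausted. *)
let make_next r () =
  match !r with
  | [] -> raise No_more_elements
  | h :: t -> r := t; h

(* Consumer style: get catches the exception and returns an option.
   This try block is the per-element cost Brian is benchmarking. *)
let get next =
  try Some (next ()) with No_more_elements -> None

let () =
  let next = make_next (ref [1; 2]) in
  assert (get next = Some 1);
  assert (get next = Some 2);
  assert (get next = None)
```

The point of contention is exactly this boundary: the producer and consumer see two different end-of-enum conventions, bridged by a try block on every read.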
From: skaller <sk...@us...> - 2004-07-29 21:37:51
|
On Fri, 2004-07-30 at 06:18, Brian Hurt wrote: > On Thu, 29 Jul 2004, Nicolas Cannasse wrote: > Yep. You've hit the nail on the head- and here is where we part company. > I will gladly sacrifice a fair bit of performance for design cleanliness. Nah, there is no conflict! "Trust me" the cleanest and most elegant and correct design WILL also be the fastest :) What I suspect is .. to get performance, it WILL be necessary to have more than one module: either you use RTTI-like dynamic checking such as 'fast_count', to gain speed dynamically, or you provide the bifurcation in the interface. The point is that there IS a 'kind' distinction, the semantics aren't the same for lists and streams. There simply isn't any possible way to get high performance and correct behaviour for both kinds without bifurcation, the choice is simply whether to make the split statically or at run time. I prefer a static split if possible, however this may lead to a combinatorial explosion of kinds, which is sometimes better handled dynamically (where you can navigate the combination tree in time instead of exploding your code base infinitely ..) I actually think Nicolas has the right attitude -- "it's broke but I can't see a really good alternative so I won't fix it" -- but in email I think it comes across more hardline than he intends. BTW: at least part of the problem is Ocaml itself. Ocaml makes a limited number of choices at run time, to provide a unified static interface. Unfortunately, that's really hard to specialise when you want better performance for some cases -- such as arrays of unboxed float. In fact C++ handles this far better than Ocaml, since specialisation is builtin and fundamental to the whole template concept. -- John Skaller, mailto:sk...@us... voice: 061-2-9660-0850, snail: PO BOX 401 Glebe NSW 2037 Australia Checkout the Felix programming language http://felix.sf.net |
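The run-time "kind" split skaller mentions can be illustrated with a hypothetical capability field in the enum record: a `fast_count` function that is present only when counting is cheap. This is a sketch of the dynamic-dispatch idea, not ExtLib's actual implementation.

```ocaml
(* Hypothetical enum with an optional O(1) counting capability. *)
type 'a enum = {
  next : unit -> 'a option;
  fast_count : (unit -> int) option;  (* Some f when counting is cheap *)
}

(* count takes the cheap path when available, otherwise it must
   consume the whole enum to count it. *)
let count e =
  match e.fast_count with
  | Some f -> f ()
  | None ->
      let n = ref 0 in
      let rec loop () =
        match e.next () with
        | Some _ -> incr n; loop ()
        | None -> !n
      in
      loop ()

(* Arrays know their length, so they advertise fast_count. *)
let of_array a =
  let i = ref 0 in
  { next = (fun () ->
      if !i < Array.length a then (incr i; Some a.(!i - 1)) else None);
    fast_count = Some (fun () -> Array.length a - !i) }

let () = assert (count (of_array [| 1; 2; 3 |]) = 3)
```

A static split would instead put counting enums and plain enums in separate module types; the dynamic version above keeps one type at the price of a run-time check.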
From: Nicolas C. <war...@fr...> - 2004-07-29 20:23:39
Attachments:
test.ml
|
> I'm sorry if you took that remark badly, that was not the goal. > It's actually my way of programming : when some difficult problem arises - > and enums are a difficult problem or we wouldn't be talking now - the best > is often to try to implement it, and see how it works. I did that first > with Enums, and later with IO. Then for IO I realized my design was not > appropriate so I made the changes : it was better when done so I was > satisfied. Here, I ran some benchmarks to compare purely functional Enums with the existing ones. The source code could be improved, but here are my first thoughts : - writing correct code with lazy evaluation is quite tricky. You can easily run infinite loops if you don't take care - performance is awful : 15 times slower in native code. My source might offer room for improvement, but not of that order. Please tell me after reading it. Regards, Nicolas Cannasse |
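For context, a purely functional enum built on the `Lazy` module, in the general style Nicolas describes benchmarking, might look roughly like this. This is an illustrative sketch, not the contents of the attached test.ml: each node caches its head and tail once forced, which is where both the applicative behaviour and the allocation cost come from.

```ocaml
(* A lazy cons list: forcing a node computes it once and caches it. *)
type 'a node = Nil | Cons of 'a * 'a t
and 'a t = 'a node Lazy.t

let rec of_list = function
  | [] -> lazy Nil
  | h :: t -> lazy (Cons (h, of_list t))

(* map builds a new lazy enum; nothing is computed until forced. *)
let rec map f (e : 'a t) : 'b t =
  lazy (match Lazy.force e with
        | Nil -> Nil
        | Cons (h, t) -> Cons (f h, map f t))

let rec to_list e =
  match Lazy.force e with
  | Nil -> []
  | Cons (h, t) -> h :: to_list t

let () =
  assert (to_list (map succ (of_list [1; 2; 3])) = [2; 3; 4])
```

Every element read here allocates a lazy thunk and pays a `Lazy.force` check, which is consistent with the large slowdown reported against the mutable implementation.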
From: Nicolas C. <war...@fr...> - 2004-07-29 20:28:25
Attachments:
test.ml
|
> > I'm sorry if you took that remark badly, that was not the goal. > > It's actually my way of programming : when some difficult problem arises - > > and enums are a difficult problem or we wouldn't be talking now - the best > > is often to try to implement it, and see how it works. I did that first > > with Enums, and later with IO. Then for IO I realized my design was not > > appropriate so I made the changes : it was better when done so I was > > satisfied. > > Here, I ran some benchmarks to compare purely functional Enums with the > existing ones. > The source code could be improved, but here are my first thoughts : > - writing correct code with lazy evaluation is quite tricky. You can easily > run infinite loops if you don't take care > - performance is awful : 15 times slower in native code. My source might offer > room for improvement, but not of that order. Damn, looks like I attached the wrong "test.ml" file :) Here's the right one; please ignore the previous one. NC |
From: Brian H. <bh...@sp...> - 2004-07-29 21:21:40
|
On Thu, 29 Jul 2004, Nicolas Cannasse wrote: > Here, I ran some benchmarks to compare purely functional Enums with the > existing ones. > The source code could be improved, but here are my first thoughts : > - writing correct code with lazy evaluation is quite tricky. You can easily > run infinite loops if you don't take care > - performance is awful : 15 times slower in native code. My source might offer > room for improvement, but not of that order. > > Please tell me after reading it. You're using lazy evaluation from the Lazy module, so of course performance sucks. You don't need lazy evaluation to get applicative semantics. -- "Usenet is like a herd of performing elephants with diarrhea -- massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it." - Gene Spafford Brian |
From: Nicolas C. <war...@fr...> - 2004-07-29 21:34:22
|
> > Here, I ran some benchmarks to compare purely functional Enums with the > > existing ones. > > The source code could be improved, but here are my first thoughts : > > - writing correct code with lazy evaluation is quite tricky. You can easily > > run infinite loops if you don't take care > > - performance is awful : 15 times slower in native code. My source might offer > > room for improvement, but not of that order. > > > > Please tell me after reading it. > > You're using lazy evaluation from the Lazy module, so of course > performance sucks. You don't need lazy evaluation to get applicative > semantics. From your previous post : "Now, for enums of applicative data structures, this works just fine. But for imperative enums, where getting the current element changes global state, they'd have to fake applicative behavior. What this means is that they'd have to keep track of what it returned as next." To some extent, if an Enum is purely functional, it has to remember its current value and its next enum (once computed). That's exactly what lazy does. Or maybe I got your idea of applicative enums wrong? Regards, Nicolas Cannasse |
From: Brian H. <bh...@sp...> - 2004-07-29 22:07:56
|
On Thu, 29 Jul 2004, Nicolas Cannasse wrote: > > > Here, I ran some benchmarks to compare purely functional Enums with > the existing > > > ones. > > > The source code could be improved, but here are my first thoughts : > > > - writing correct code with lazy evaluation is quite tricky. You can > easily > > > run infinite loops if you don't take care > > > - performance is awful : 15 times slower in native code. My source might > offer > > > room for improvement, but not of that order. > > > > > > Please tell me after reading it. > > > > You're using lazy evaluation from the Lazy module, so of course > > performance sucks. You don't need lazy evaluation to get applicative > > semantics. > > > From your previous post : > > "Now, for enums of applicative data structures, this works just fine. But > for imperative enums, where getting the current element changes global > state, they'd have to fake applicative behavior. What this means is that > they'd have to keep track of what it returned as next." > > To some extent, if an Enum is purely functional, it has to remember > its current value and its next enum (once computed). That's exactly what > lazy does. Or maybe I got your idea of applicative enums wrong? type 'a foo = { mutable has_curr: bool; mutable curr_elem: 'a option; mutable has_next: bool; mutable next_enum: 'a Enum.t option; get: unit -> 'a };; let curr f () = if f.has_curr then f.curr_elem else begin f.has_curr <- true; f.curr_elem <- Some (f.get ()); f.curr_elem end ;; etc. Note that this is just to fake applicative semantics around an imperative generator. 
For applicative data structures, life would be easier- probably something similar to: type 'a t = { c: unit -> 'a option; n: unit -> 'a t; };; let make ~curr ~next = { c=curr; n=next };; let curr e = e.c ();; let next e = e.n ();; let list_enum lst = let curr lst () = match lst with | [] -> None | h :: _ -> Some h in let rec next lst () = match lst with | [] -> assert false (* or some other exception *) | _ :: t -> make ~curr:(curr t) ~next:(next t) in make ~curr:(curr lst) ~next:(next lst) ;; -- "Usenet is like a herd of performing elephants with diarrhea -- massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it." - Gene Spafford Brian |
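For concreteness, Brian's applicative sketch can be exercised as follows. This is a self-contained restatement under the same assumed interface, showing the key property: advancing yields a new enum while the old one is untouched.

```ocaml
(* Restatement of the applicative sketch above. Advancing returns a
   fresh enum; no existing enum is ever mutated. *)
type 'a t = { c : unit -> 'a option; n : unit -> 'a t }

let make ~curr ~next = { c = curr; n = next }
let curr e = e.c ()
let next e = e.n ()

let list_enum lst =
  let curr lst () = match lst with [] -> None | h :: _ -> Some h in
  let rec next lst () =
    match lst with
    | [] -> failwith "next of empty enum"  (* or some other exception *)
    | _ :: t -> make ~curr:(curr t) ~next:(next t)
  in
  make ~curr:(curr lst) ~next:(next lst)

let () =
  let e = list_enum [1; 2; 3] in
  let e' = next e in
  assert (curr e = Some 1);   (* e is unchanged by advancing *)
  assert (curr e' = Some 2)
```

No `Lazy` is involved: each step just closes over the remaining list, which is the point Brian makes above about not needing lazy evaluation for applicative semantics.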
From: skaller <sk...@us...> - 2004-07-29 20:55:17
|
On Fri, 2004-07-30 at 04:44, Nicolas Cannasse wrote: > > > - maximum compliance with current enums : the "applicative enums" (let's > > > call them purely functional) proposal is not going this way since it's > > > modifying quite a lot current enums > > > > In other words, functional enums are never, ever, going to be acceptable. > > That's not exactly what I said (see below). That's true Nicolas, but what you said comes very close to saying you aren't willing to change the interface. Unfortunately, the interface *is* the problem. -- John Skaller, mailto:sk...@us... voice: 061-2-9660-0850, snail: PO BOX 401 Glebe NSW 2037 Australia Checkout the Felix programming language http://felix.sf.net |
From: skaller <sk...@us...> - 2004-07-29 21:20:12
|
On Fri, 2004-07-30 at 04:44, Nicolas Cannasse wrote: > I'm sorry if you took that remark badly. I think the conditions you stated are too strict. I had the same problem in C++ -- the committee wouldn't consider fixing things that were broken if it required a rewrite; they'd accept a minor patch, or an extension, or a constraint. Unfortunately when something *fundamental* is broken that some other major facility depends on .. you will just create a complete mess if you don't actually fix the fundamentals 'no matter what the cost': the cost will be higher down the track. The broken type system in C/C++ was never fixed, and so C++ templates just don't work properly -- they magnify all the inconsistencies in the underlying type system from 'bad code for which there is a workaround' to 'there isn't any way to do this'. > It's actually my way of programming : when some difficult problem arises - > and enums are a difficult problem or we wouldn't be talking now - the best > is often to try to implement it, and see how it works. I did that first > with Enums, and later with IO. Then for IO I realized my design was not > appropriate so I made the changes : it was better when done so I was > satisfied. I program the same way. It would be better to be a genius and know in advance what the correct design was -- but I'm not :) I usually push the existing code until I have a better design, and then do a rewrite. That sometimes changes compatibility. Whilst ExtLib does have users, most would be willing to change their code if necessary I think -- and I think it will be, if we want to get some or all of Extlib into the standard distro. ExtLib mainly consists of decoupled components, so we could just grab bits and propose them for the standard distro -- with one important exception, Enums. Of course it is precisely the Enums which *provide* some of the decoupling -- which makes it even more critical to get them right. 
Because they're so critical to the whole design, there comes a point where you might seriously consider 'slash and burn' as the best solution. This means -- simply delete all the suspect functions in the interface. Eliminate 'fast_count' for example and also the constructor which takes a generator function. Specify the whole thing *only works for purely functional containers*. Make people fix their code so they simply can't use any feature that isn't certain. THEN go back and re-provide functionality which is desirable, but this time making sure the interface is right and won't change any of the specified guarantees. This process WILL BE PAINFUL for existing users. Bad luck .. but it's probably the only way. If you conduct an experiment, you have to be willing to actually 'experiment' .. meaning change things -- and this is a message to the users of ExtLib, not to Nicolas. In C/C++ committee parlance it is reasonable to say we should not *gratuitously* break client code -- but what I'm suggesting above isn't a gratuitous change. Perhaps a way to handle this is to create a branch in CVS. It's *really really hard* for someone like Brian Hurt to propose an alternative without actually implementing it in the code base and then seeing what happens -- which is just the style of programming you use yourself -- I'm sure Brian would be willing to throw out his branch if it didn't actually work well enough, but it is really hard to judge without trying it. Furthermore -- it is often wise to make a completely new *totally* incompatible interface, see if it works, and THEN argue how it can be morphed into the existing interface to minimise the impact. It usually turns out there is a way to do this which allows for a migration path, but it is very complex to simultaneously examine a new proposal for correctness and ALSO try to make it as compatible as possible .. the latter being a waste of time if the first examination isn't successful. -- John Skaller, mailto:sk...@us... 
voice: 061-2-9660-0850, snail: PO BOX 401 Glebe NSW 2037 Australia Checkout the Felix programming language http://felix.sf.net |
From: Nicolas C. <war...@fr...> - 2004-07-29 21:10:14
|
> > > > - maximum compliance with current enums : the "applicative enums" (let's > > > > call them purely functional) proposal is not going this way since it's > > > > modifying quite a lot current enums > > > > > > In other words, functional enums are never, ever, going to be acceptable. > > > > That's not exactly what I said (see below). > > That's true Nicolas, but what you said comes very close > to saying you aren't willing to change the interface. > > Unfortunately, the interface *is* the problem. > I'll clarify my thinking then : What kind of modifications are needed to support - for example - purely functional enums ? These: val iter : ('a -> unit) -> 'a t -> unit val iter2 : ('a -> 'b -> unit) -> 'a t -> 'b t -> unit val fold : ('a -> 'b -> 'b) -> 'b -> 'a t -> 'b val fold2 : ('a -> 'b -> 'c -> 'c) -> 'c -> 'a t -> 'b t -> 'c val iteri : (int -> 'a -> unit) -> 'a t -> unit val iter2i : (int -> 'a -> 'b -> unit) -> 'a t -> 'b t -> unit val foldi : (int -> 'a -> 'b -> 'b) -> 'b -> 'a t -> 'b val fold2i : (int -> 'a -> 'b -> 'c -> 'c) -> 'c -> 'a t -> 'b t -> 'c val map : ('a -> 'b) -> 'a t -> 'b t val mapi : (int -> 'a -> 'b) -> 'a t -> 'b t val filter : ('a -> bool) -> 'a t -> 'a t val filter_map : ('a -> 'b option) -> 'a t -> 'b t val append : 'a t -> 'a t -> 'a t val concat : 'a t t -> 'a t val empty : unit -> 'a t val from : (unit -> 'a) -> 'a t val init : int -> (int -> 'a) -> 'a t are part of the Enum API, and I think that even with purely functional enums, we don't need to modify their signatures. I'm open to any proposal of Enum improvement that would keep this kind of Enum behavior (the ability to lazily stack operations such as maps and filters) as long as performance doesn't degrade too much (a good design/performance tradeoff is a winning choice). That's why I'm really open to Brian's proposal, not just rejecting it blindly, as long as he/we come up with a working implementation that satisfies these goals. Regards, Nicolas Cannasse |
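The "lazily stack operations" behaviour Nicolas wants to preserve can be sketched as follows. This is a hypothetical, minimal API in the current imperative style: `map` and `filter` only wrap the underlying next function, so nothing is traversed until the enum is actually consumed.

```ocaml
(* Hypothetical minimal imperative enum; not the real ExtLib interface. *)
type 'a t = { next : unit -> 'a option }

let of_list l =
  let r = ref l in
  { next = (fun () -> match !r with [] -> None | h :: t -> r := t; Some h) }

(* map allocates one closure, touches no elements. *)
let map f e =
  { next = (fun () ->
      match e.next () with None -> None | Some x -> Some (f x)) }

(* filter skips rejected elements at consumption time. *)
let filter p e =
  let rec loop () =
    match e.next () with
    | None -> None
    | Some x when p x -> Some x
    | Some _ -> loop ()
  in
  { next = loop }

let to_list e =
  let rec loop acc =
    match e.next () with None -> List.rev acc | Some x -> loop (x :: acc)
  in
  loop []

let () =
  (* Stacking: filter then map; evaluated only by to_list. *)
  let e = map (fun x -> x * 10)
            (filter (fun x -> x mod 2 = 0) (of_list [1; 2; 3; 4])) in
  assert (to_list e = [20; 40])
```

Any functional redesign would need to keep this property: building a pipeline of maps and filters stays O(1) regardless of the enum's length.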
From: Brian H. <bh...@sp...> - 2004-07-29 21:37:07
|
On Thu, 29 Jul 2004, Nicolas Cannasse wrote: > I'll clarify my thinking then : > What kind of modifications are needed to support - for example - purely > functional enums ? > > These: > > val iter : ('a -> unit) -> 'a t -> unit > val iter2 : ('a -> 'b -> unit) -> 'a t -> 'b t -> unit > val fold : ('a -> 'b -> 'b) -> 'b -> 'a t -> 'b > val fold2 : ('a -> 'b -> 'c -> 'c) -> 'c -> 'a t -> 'b t -> 'c > val iteri : (int -> 'a -> unit) -> 'a t -> unit > val iter2i : (int -> 'a -> 'b -> unit) -> 'a t -> 'b t -> unit > val foldi : (int -> 'a -> 'b -> 'b) -> 'b -> 'a t -> 'b > val fold2i : (int -> 'a -> 'b -> 'c -> 'c) -> 'c -> 'a t -> 'b t -> 'c > val map : ('a -> 'b) -> 'a t -> 'b t > val mapi : (int -> 'a -> 'b) -> 'a t -> 'b t > val filter : ('a -> bool) -> 'a t -> 'a t > val filter_map : ('a -> 'b option) -> 'a t -> 'b t > val append : 'a t -> 'a t -> 'a t > val concat : 'a t t -> 'a t > > val empty : unit -> 'a t All of the above could and should be supported without a change to the signature. > val from : (unit -> 'a) -> 'a t > val init : int -> (int -> 'a) -> 'a t These two functions imply imperative semantics. init could be salvaged, if I could call the function multiple times with the same index. from would have to create an adaptor. Both could be supported. Except that I'd change from to: val from: (unit -> 'a option) -> ('a, 'a) t > That's why I'm really open to Brian's proposal, not just rejecting it blindly, > as long as he/we come up with a working implementation that satisfies these > goals. > If I write the code, one of two things will happen: 1) it'll be accepted as the new enum implementation 2) I'll fork the project It's not happening this week, in any case. -- "Usenet is like a herd of performing elephants with diarrhea -- massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it." - Gene Spafford Brian |
From: Francisco V. <fv...@ts...> - 2004-08-02 17:36:40
|
Hello, could I suggest that you not throw away previous versions of the datatype? If I got it right, you are providing an (improved) opaque implementation of Enum.... Could I suggest that you encapsulate and rename the old one as Lazy (or whatever the author(s) (Nicolas?) deem best...) These two implementations are bound to have different costs for each operation and some users may prefer one to the other for different applications (the laziness in *very* big implementations might be an asset in spite of tail-recursion, etc.) Also, it would help users if you declared the cost of each operation (to the best of your knowledge) in the implementation docs. But I also understand that single-implementation-for-single-interface is the discipline implicitly endorsed by the Standard library and that keeping several implementations for the same datatype complicates maintenance and usability. So I'll take whatever decision you finally hit upon. ;) Anyway thanks for the interest in improving performance... It benefits us all. Regards, Francisco Valverde - DTSC - Universidad Carlos III de Madrid (Visiting at ICSI, Berkeley) Brian Hurt wrote: >On Thu, 29 Jul 2004, Nicolas Cannasse wrote: > > > >>I'll clarify my thinking then : >>What kind of modifications are needed to support - for example - purely >>functional enums ? 
>> >>These: >> >>val iter : ('a -> unit) -> 'a t -> unit >>val iter2 : ('a -> 'b -> unit) -> 'a t -> 'b t -> unit >>val fold : ('a -> 'b -> 'b) -> 'b -> 'a t -> 'b >>val fold2 : ('a -> 'b -> 'c -> 'c) -> 'c -> 'a t -> 'b t -> 'c >>val iteri : (int -> 'a -> unit) -> 'a t -> unit >>val iter2i : (int -> 'a -> 'b -> unit) -> 'a t -> 'b t -> unit >>val foldi : (int -> 'a -> 'b -> 'b) -> 'b -> 'a t -> 'b >>val fold2i : (int -> 'a -> 'b -> 'c -> 'c) -> 'c -> 'a t -> 'b t -> 'c >>val map : ('a -> 'b) -> 'a t -> 'b t >>val mapi : (int -> 'a -> 'b) -> 'a t -> 'b t >>val filter : ('a -> bool) -> 'a t -> 'a t >>val filter_map : ('a -> 'b option) -> 'a t -> 'b t >>val append : 'a t -> 'a t -> 'a t >>val concat : 'a t t -> 'a t >> >>val empty : unit -> 'a t >> >> > >All of the above could and should be supported without a change to the >signature. > > > >>val from : (unit -> 'a) -> 'a t >>val init : int -> (int -> 'a) -> 'a t >> >> > >These two functions imply imperative semantics. init could be salvaged, >if I could call the function multiple times with the same index. from >would have to create an adaptor. Both could be supported. > >Except that I'd change from to: >val from: (unit -> 'a option) -> ('a, 'a) t > > > >>That's why I'm really open to Brian's proposal, not just rejecting it blindly, >>as long as he/we come up with a working implementation that satisfies these >>goals. >> >> >> > >If I write the code, one of two things will happen: >1) it'll be accepted as the new enum implementation >2) I'll fork the project > >It's not happening this week, in any case. > > > |