From: Peter S. <se...@di...> - 2001-09-27 15:58:34
On Thu, 27 Sep 2001, Andrew Kennedy wrote:

> Then let me try and bring some cheer to the discussion.

I don't see anything to be depressed about either.

> (C) Structure sharing, vector expressions/patterns, or-patterns,
>     higher-order functors, non-expansive raise, laziness,
>     withtype in signatures, updateable records, polymorphic
>     recursion

Quotations and anti-quotations also belong in this category. SML/NJ, Moscow ML and the ML Kit (and probably Poly/ML and others) implement them, they behave exactly the same in those implementations, and they are disabled by default (as far as I know). Such agreement is what we must strive for.

> Finally, under the heading
>
>   (D) "changes to SML that break existing code"
>
> we have:
>
>   (D) Removal of abstype, removal of equality polymorphism,
>       fixing surface syntax problems, wraparound arithmetic

This collection of proposals was far more depressing and more dangerous to the health of SML. It is just unbearable for users (including myself) to have to apply `fixes' whenever we compile an existing program. Especially so since the change process would go on for years, given the number of radical proposals that were put forward.

> (1) Any extension first gets discussed on this mailing list. That
>     way we won't end up with multiple designs for the same feature.
>     (Am I right in thinking that this has already happened for
>     higher-order modules?)

The syntax for higher-order functors is different between SML/NJ and Moscow ML, yes.

> (3) Compilers should have a compatibility mode in which extensions
>     are rejected or deprecated. This could be controllable at the
>     level of individual extensions.

Moscow ML by default warns about all non-SML Modules extensions, and has an option -orthodox that rejects them outright.

Peter
From: Andrew K. <ak...@mi...> - 2001-09-27 14:56:16
I wrote:

> There are a surprising number of live implementations or
> nearly-ready implementations of the language:
>
> (In no particular order)
> Moscow ML, SML/NJ, TILT, Church Project, MLton, Poly-ML,
> MLj, SML.NET

Of course, I forgot to add: the ML Kit. Any more to take the count to ten?

- Andrew.
From: Andrew K. <ak...@mi...> - 2001-09-27 14:23:11
Matthias Blume wrote:

> Well, not having further changes will certainly create
> perfect harmony: All users and all implementers will agree
> perfectly (since there will be none). Unfortunately, it will
> be the harmony one usually experiences only in graveyards...
>
> (Sorry for sounding so depressed, but this discussion _is_
> depressing!)

Then let me try and bring some cheer to the discussion. This is the first time I can remember a significant number of SML implementers discussing the language in such detail at all. Some points:

SML is not yet dead
-------------------

There are a surprising number of live implementations or nearly-ready implementations of the language:

(In no particular order)
Moscow ML, SML/NJ, TILT, Church Project, MLton, Poly-ML, MLj, SML.NET

I'd really like to see the "nearly-ready" implementations released (and here I include the "real soon now" version 111 of SML/NJ). Such a breadth of implementations, each with its own focus, is surely a sign of life in a language.

Our proposals are modest
------------------------

Almost all of the proposals so far discussed come under one of the headings

  (A) "matters not covered by the Definition of Standard ML",
  (B) "mistakes in the Definition" or
  (C) "changes to SML that don't break existing code".

Here's a quick categorization:

  (A) Libraries, compilation management, FFI, the "top level",
      scope of flex-record/overloading resolution

  (B) Free type variables at top level

  (C) Structure sharing, vector expressions/patterns, or-patterns,
      higher-order functors, non-expansive raise, laziness,
      withtype in signatures, updateable records, polymorphic
      recursion

Finally, under the heading

  (D) "changes to SML that break existing code"

we have:

  (D) Removal of abstype, removal of equality polymorphism,
      fixing surface syntax problems, wraparound arithmetic

Unless one is really hard-line (implementations must be bug-for-bug compatible with the Definition) it's hard to argue with (A) and (B). The danger with (C) is that each implementation has its own ad-hoc extensions; then programmers write code that uses the extensions and is therefore tied to a single implementation.

How about adopting the following procedures:

(1) Any extension first gets discussed on this mailing list. That way we
    won't end up with multiple designs for the same feature. (Am I right
    in thinking that this has already happened for higher-order modules?)

(2) Ideally extensions should be specified in the style of the
    Definition, thus providing a solid reference for compiler writers.

(3) Compilers should have a compatibility mode in which extensions are
    rejected or deprecated. This could be controllable at the level of
    individual extensions.

(4) Where possible we should provide source-to-source translation tools
    that rewrite extensions as SML'97 code. A common parsing and
    layout-preserving pretty-printing infrastructure would be very
    helpful here, but with it I believe that some of the proposed
    extensions can be translated away (e.g. or-patterns, vector syntax,
    updateable records).

Comments?

- Andrew.
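Andrew's point (4) can be made concrete. As a hedged sketch (the datatype and function names here are invented for illustration, not taken from any implementation), an or-pattern — not part of SML'97 — can be translated away by duplicating the clause once per alternative:

```sml
(* Hypothetical extension syntax (NOT legal SML'97):
     fun isOp (PLUS | MINUS | TIMES | DIV) = true
       | isOp _ = false
   A source-to-source translation into SML'97 duplicates the
   right-hand side for each alternative of the or-pattern: *)
datatype token = PLUS | MINUS | TIMES | DIV | NUM of int

fun isOp PLUS  = true
  | isOp MINUS = true
  | isOp TIMES = true
  | isOp DIV   = true
  | isOp _     = false
```

When the shared right-hand side is large, a translator would presumably bind it to a helper function rather than duplicate it textually.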
From: David M. <Dav...@pr...> - 2001-09-27 13:21:44
On Wednesday, September 26, 2001 7:41 PM, Matthias Blume [SMTP:bl...@re...] wrote:

> As for the simultaneous type-checking-and-constraint-applying approach:
> Why can one not work with unbound type variables (just like when
> type-checking a "let")? As I understand, that's what OCaml does. Of
> course, I do not agree with OCaml's solution of simply leaving unbound
> type variables in the final type; I have explained why. (It would not
> be a good idea in the context of separate compilation.) What I am
> suggesting here is to leave the variable _tentatively_ unbound. At the
> end of the day, when the whole topdec including all its signature
> constraints etc. has been considered and there are _still_ free type
> variables in the result, then we reject the program. But not before
> that.
>
> Matthias

When I implemented this in Poly/ML I thought along similar lines, and this is a fairly accurate description of the current implementation. The note to rules 87-89 says that no free type variables enter the basis, and this is enforced in the compiler by a final walk over all the structures, functors and values. Until then the variables are unbound. Variables that don't appear in the result signature, such as those that have been shadowed, are simply ignored.

  > structure S: sig val x: int list end =
  #     struct
  #       val x = ref nil
  #       val x = nil @ nil
  #     end;
  structure S : sig val x : int list end

The original implementation in Poly/ML produced an error message and failed if a free type variable was found. I changed that in a later release to a warning, and the type variables are unified with new unique types. The reason was much as Bob described. To avoid confusion with existing types or with type variables the types print with the (illegal) identifiers _a, _b etc. Without the signature:

  > structure S = struct
  #     val x = ref nil
  #     val x = nil @ nil
  #   end;
  Warning: The type of (S.x) contains a free type variable.
  Setting it to a unique monotype.
  structure S : sig val x : _a list end

David.
From: Dave M. <db...@cs...> - 2001-09-27 12:48:02
As one who has invested a lot of effort in Standard ML, I agree wholeheartedly with Bob's reply to Robin and Mads. A living language will evolve -- only dead languages remain fixed.

I should point out that the adjective "Standard" turned out to be an unfortunate choice. It represented a hope that has clearly not been borne out by later developments in the ML community, which is now regrettably divided into two camps. Perhaps we should let Robin and Mads keep "Standard", and choose a better adjective to reflect a renewal of purpose.

While evolution, and hopefully growth, must happen, I agree that we must go forward together, which requires careful discussion and open minds, followed up with joint effort. The current dialogs on sml-implementers are a good start. We must at least avoid further fragmentation of the ???? ML camp if we are going to have a chance of practical success.

Dave MacQueen

> From: Robert Harper <Rob...@cs...>
> To: sml...@li...
> Cc: Rob...@cl...
> Date: Tue, 25 Sep 2001 22:26:28 EDT
> Subject: RE: [Sml-implementers] Subject: Standard ML
>
> No language, natural or artificial, can remain static, if it is used
> at all; it will evolve whether we like it or not. The question is how
> we manage that evolution.
>
> There are at least six serious implementations of Standard ML,
> representing a substantial commitment by a substantial community to
> its continuing vitality. At present these implementations differ in
> important ways, particularly in the treatment of separate compilation
> and in libraries, but also in correcting deficiencies and making
> extensions to the language. It is vital that we harmonize them. It
> is also vital that we, as a community, discuss possible paths of
> evolution to keep pace with the world around us.
>
> The community has a choice. We can either go our separate ways,
> dissipating our efforts, and bring a rapid end to there being any
> community at all. Or we can decide that it is in our best interest to
> work together to guide the ongoing and inevitable evolution of
> Standard ML through a community process that preserves the existence
> of a common language, preserves the community that has arisen around
> it, and nurtures a set of ideas that have inspired and substantially
> engaged us all.
>
> It is not for me, nor any one person, to decide, but for us all
> collectively to determine whether we will go forward separately or
> together. For my part, I can see no profit in attempting to suppress
> the process, but enormous gain in fostering and guiding it.
>
> Bob Harper
>
> -----Original Message-----
> From: Robin Milner [mailto:Rob...@cl...]
> Sent: Tuesday, September 25, 2001 8:22 AM
> To: sml...@li...
> Cc: Rob...@cl...
> Subject: [Sml-implementers] Subject: Standard ML
>
> We are authors of Standard ML, both 1990 and 1997; we also wrote
> the Commentary.
>
> We welcome standardisation of matters not covered by the
> Definition of Standard ML, such as libraries and separate compilation.
>
> We also welcome language designs built on Standard ML; they may
> proclaim membership of the ML family, as for example CAML does.
>
> However, we will not accept further revision of Standard ML. It
> is natural that people discuss changes and new features, and thus
> that successors of the language will arise. We expect, however,
> the designers of any successor language to make it clear that
> their language is not Standard ML.
>
> With regards
> Robin Milner, Mads Tofte
>
> _______________________________________________
> Sml-implementers mailing list
> Sml...@li...
> https://lists.sourceforge.net/lists/listinfo/sml-implementers
From: Andreas R. <ros...@ps...> - 2001-09-27 10:32:46
Peter Sestoft wrote:

> However, it is confusing (and probably a mistake) that Moscow ML
> currently accepts free type variables in structure declarations in
> the interactive system:
>
>   - structure S = struct val i = rev o rev end;
>   ! Warning: Value polymorphism:
>   ! Free type variable(s) at top level in structure identifier S
>   > structure S : {val i : 'b list -> 'b list}
>
> Note that 'b in the type of S.i is not universally quantified.

IMHO the confusion could be reduced by not using ordinary type variable syntax for reporting "undetermined types". But apart from such cosmetics, the behaviour of Moscow ML's toplevel is absolutely reasonable, IMO. And I believe, as I tried to explain in my post, that a strict reading of the Definition does not allow any other behaviour (although that might not have been the intention of the authors).

OTOH, for compilation units, Andrew's proposal and Moscow ML's approach of rejecting undetermined types seem the only sensible thing to do. This is something that has to be specified along with the separate compilation mechanism. I even think it should be enforced that every compilation unit has a syntactic signature (i.e. one that can be expressed within the language, precluding e.g. hidden types etc.).

> The question of how much context to take into account has existed for
> years, namely, when resolving operator overloading and flex records.
> The Definition only specifies that at most the enclosing
> structure-level declaration should be taken into account.

For flexible records the Definition does not even say that the context for resolving them might be limited by an implementation, although unrestricted context is incompatible with let polymorphism. This arguably is more or less a bug. In any case I strongly vote for fixing any such source of implementation-dependent behaviour. The natural choice in the case of overloading and flexible records seems to be the smallest enclosing value declaration.

- Andreas
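The flexible-record resolution question above can be illustrated with a small sketch (the function names are invented for illustration). A flexible record pattern must have its full shape determined from context, and the proposal is to look no further than the smallest enclosing value declaration:

```sml
(* The pattern/selector #x is "flexible": the full set of record
   fields must be inferred from context. Here a type annotation
   inside the same value declaration resolves it: *)
fun getX (r : {x : int, y : int}) = #x r

val three = getX {x = 3, y = 4}   (* three = 3 *)

(* With no constraint inside the smallest enclosing value
   declaration, as in
     fun getX' r = #x r
   the record type stays unresolved, and under the rule discussed
   above the declaration would be rejected. *)
```

This is only a sketch of the behaviour under the "smallest enclosing value declaration" rule Andreas advocates; implementations that consult a wider context may accept `getX'` when a later use determines the record's shape.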
From: Peter S. <se...@di...> - 2001-09-26 20:46:45
On Wed, 26 Sep 2001, Andrew Kennedy wrote:

> (1) What does one do about types that are still "undetermined"
>     at the level of the unit-of-compilation (e.g. "..." or "SIG"
>     above do not resolve the type of "settings"). Instantiating
>     with something arbitrary (e.g. unit) is just too confusing so
>     I would vote for rejecting such programs as ambiguous.

I agree with that and believe that the compilation should be rejected; separately compiled entities with free ("undetermined") type variables raise unpleasant questions about their propagation (and possibly distinct instantiations, yrk) when compiling other compilation units. Moscow ML rejects free top-level type variables in compilation units.

> (2) What does one do in a top-level-loop environment?

Here it makes sense to accept free top-level type variables, as in OCaml and Moscow ML. The latter system issues a warning, and the type report shows the list of universally quantified type variables (the 'b in the type of i2):

  - val i1 = rev o rev;
  ! Warning: Value polymorphism:
  ! Free type variable(s) at top level in value identifier i1
  > val i1 = fn : 'a list -> 'a list

  - val i2 = fn xs => (rev o rev) xs;
  > val 'b i2 = fn : 'b list -> 'b list

However, it is confusing (and probably a mistake) that Moscow ML currently accepts free type variables in structure declarations in the interactive system:

  - structure S = struct val i = rev o rev end;
  ! Warning: Value polymorphism:
  ! Free type variable(s) at top level in structure identifier S
  > structure S : {val i : 'b list -> 'b list}

Note that 'b in the type of S.i is not universally quantified.

The question of how much context to take into account has existed for years, namely, when resolving operator overloading and flex records. The Definition only specifies that at most the enclosing structure-level declaration should be taken into account. Also in this respect Moscow ML and SML/NJ differ somewhat, but this very rarely matters in practice.

Peter

--
Department of Mathematics and Physics * http://www.dina.kvl.dk/~sestoft/
Royal Veterinary and Agricultural University * Tel +45 3528 2334
Thorvaldsensvej 40, DK-1871 Frederiksberg C, Denmark * Fax +45 3528 2350
From: Matthias B. <bl...@re...> - 2001-09-26 18:41:55
Robert Harper wrote:

> The possibility to use signature information to constrain declarations
> has often been suggested, but it is difficult to put into practice.
> For example, as you go along processing a structure, a binding for x
> might later be shadowed, so any constraint on x's type in the
> signature would not affect the earlier one. This makes it tricky to
> try to simultaneously type check a structure and impose a signature on
> it. Thus most implementations that I am aware of opt for synthesizing
> the principal signature of the structure, then matching it against the
> target signature. However, this can have an unpleasant effect of the
> kind you mention.

I don't understand what the problem might be: If a variable's binding gets shadowed later, then whether or not its (tentative -- see below) type has a free type variable in it does not matter -- the compiler can simply choose an arbitrary type for that variable and no-one will ever know. (That's AFAIK the same situation as in "let val x = [] @ [] in 1 end", which is a perfectly valid SML program.)

As for the simultaneous type-checking-and-constraint-applying approach: Why can one not work with unbound type variables (just like when type-checking a "let")? As I understand, that's what OCaml does. Of course, I do not agree with OCaml's solution of simply leaving unbound type variables in the final type; I have explained why. (It would not be a good idea in the context of separate compilation.) What I am suggesting here is to leave the variable _tentatively_ unbound. At the end of the day, when the whole topdec including all its signature constraints etc. has been considered and there are _still_ free type variables in the result, then we reject the program. But not before that.

Matthias
From: Robert H. <Rob...@cs...> - 2001-09-26 18:17:18
I think implementations differ on the top level. NJ instantiates the free type variables to a hidden type. One reason to do this is to make it more convenient to use the top level as a "calculator". E.g., students find it confusing (and many find it peculiar) if evaluating "nil" is OK, but "nil @ nil" is rejected because of some incomprehensible type error. The NJ approach means that "nil @ nil" evaluates (to nil!) but with a weird-looking type. The value is not much use, however, because of the hidden type.

The possibility to use signature information to constrain declarations has often been suggested, but it is difficult to put into practice. For example, as you go along processing a structure, a binding for x might later be shadowed, so any constraint on x's type in the signature would not affect the earlier one. This makes it tricky to try to simultaneously type check a structure and impose a signature on it. Thus most implementations that I am aware of opt for synthesizing the principal signature of the structure, then matching it against the target signature. However, this can have an unpleasant effect of the kind you mention.

Bob

-----Original Message-----
From: Andrew Kennedy [mailto:ak...@mi...]
Sent: Wednesday, September 26, 2001 9:34 AM
To: Andreas Rossberg; sml...@li...
Subject: RE: [Sml-implementers] Free type variables on toplevel

Andreas,

Whatever the glitches in the Definition, as a user of SML as well as an implementer I would like to see fewer of my programs rejected by the type-checker. In particular, I dislike the fact that some implementations won't accept

  structure STR :> SIG = struct
    val settings = ref []
    ...
  end

despite the constraints on "settings" imposed by the rest of the structure ("...") or the signature SIG.

Furthermore, I find it unpleasant that the typing of a program might be influenced by the presence of a semicolon separator which otherwise can be regarded as optional.

There are two questions:

(1) What does one do about types that are still "undetermined" at the
    level of the unit-of-compilation (e.g. "..." or "SIG" above do not
    resolve the type of "settings"). Instantiating with something
    arbitrary (e.g. unit) is just too confusing so I would vote for
    rejecting such programs as ambiguous.

(2) What does one do in a top-level-loop environment?

- Andrew.
From: John H. R. <jh...@re...> - 2001-09-26 15:27:57
In message <715...@ap...>, David Matthews writes:

> ...
>
> First, is there any reason to believe that programming languages must
> change? Certainly languages must respond to their environment to the
> extent of adding new libraries and modules. I think everyone accepts
> that. Where there is disagreement is whether it is necessary to
> change the fundamentals, the elements with which the Definition is
> concerned. If one looks at other widely used and certainly viable
> languages such as C I cannot see any evidence that they are changing
> in that area.

C is a poor choice of example. A substantial revision of the specification of C (called C99) was just completed a year or so ago. In fact, I doubt that you can name any language in widespread use that hasn't had periodic revision.

Implementations are going to experiment with extensions. SML/NJ, for example, has laziness, or-patterns, vector patterns and expressions, and soon functional record update. These are all features that aid in programming and should be considered for standardization across implementations. I would hope that we can continue to call this evolving specification "Standard ML."

- John
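Of the extensions John lists, functional record update is perhaps the least familiar. As a hedged sketch (the type, function names, and concrete syntax are assumptions for illustration, not taken from any release), the extension lets one copy a record while changing a field, whereas SML'97 forces the record to be rebuilt field by field:

```sml
(* SML'97 has no record-update form, so every unchanged field must
   be mentioned explicitly when "updating" one field: *)
type point = {x : int, y : int, z : int}

fun withX ({x = _, y, z} : point) newX : point =
  {x = newX, y = y, z = z}

val p  = {x = 1, y = 2, z = 3}
val p' = withX p 5   (* p' = {x = 5, y = 2, z = 3} *)
```

A functional-record-update extension would generate something like `withX` automatically; note that the SML'97 workaround is tied to one concrete record type, which is part of why the extension is attractive.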
From: Matthias B. <bl...@re...> - 2001-09-26 15:03:37
David Matthews wrote:

> First, is there any reason to believe that programming languages must
> change? Certainly languages must respond to their environment to the
> extent of adding new libraries and modules. I think everyone accepts
> that. Where there is disagreement is whether it is necessary to
> change the fundamentals, the elements with which the Definition is
> concerned. If one looks at other widely used and certainly viable
> languages such as C I cannot see any evidence that they are changing
> in that area.

C is a very fine example in support of Bob's analysis. Witness K&R C, ANSI C, C++ (arguably a "different" language altogether) and now C'99.

> Second, is there any pressure for change? Ken Friis Larsen pointed
> out that even among the implementers responding to these messages
> there seemed to be a majority against change.

Making such claims (about "majority" and such) would require taking a vote. Plus, you might even be right: Many of those disappointed with the current state of the language have already left us for the greener pastures of OCaml etc. Of course, we can let this continue to happen (and it will if SML proves unable to adapt to its users' needs). Ultimately this will lead to SML's death. (Arguably, it is very sick already.)

> If Standard ML is to be treated as a serious language the most
> important community is that of the users of the language. I get no
> impression that they are calling out for changes, rather the
> opposite. A quick look at the Isabelle theorem prover suggests that
> it contains around 200 000 lines of Standard ML. I am sure Larry and
> his colleagues would object to having to change this to accommodate
> another change in the language definition. The effect is that even
> if the definition of the language changed Poly/ML would have to
> continue to support SML97 for the foreseeable future.

Bob was quite explicit about the goals of the immediate endeavor: Moderate changes that clarify what hasn't been clarified so far, and minor changes that are hopefully even backward-compatible. And if there end up being one or two minor incompatibilities, so be it: Even a 200,000-line ML program can be brought up to speed very quickly.

> Finally, is it possible to manage the diversity? There is a
> fundamental difference of approach between implementers when it comes
> to the language. Some see the Definition as a guideline to be
> followed as far as possible but where they perceive deficiencies or
> where they wish to make extensions they are happy to diverge from it.

I would want to make one thing clear here: This is not something driven by implementers; this is driven by _users_ of the language (of which implementers are a part). Most of the changes that I would like to see are those for which I find the current workarounds to be too bothersome when _programming_ in ML.

> Others, and I would count myself among them, see the Definition as a
> form of contract with the users and attempt to follow it as closely
> as possible.

I see it the same way. But when following the existing contract is no longer desirable, then one can try to revise it. This requires some form of agreement among the parties, and that's what we are trying to get. We are not talking about some Holy Scripture here; ML is not a religion (despite the many religious wars that are being fought over programming languages).

> I don't see it is possible to "manage" this difference in philosophy.
> Some implementers will wish to experiment with new language features
> beyond any new Definition while others will feel that it is more
> important to continue to support the older versions of the language.

Bob was quite explicit about that, too. The changes we are talking about (in the framework of revising Standard ML) are not of the "language research and experiments" nature. For the latter (something that researchers will always be interested in), a standardized language is, indeed, not a good vehicle. But that's not the issue here.

> Far from creating harmony between the implementations I can see
> further change in the Definition as leading to even greater
> divergence.

Well, not having further changes will certainly create perfect harmony: All users and all implementers will agree perfectly (since there will be none). Unfortunately, it will be the harmony one usually experiences only in graveyards...

(Sorry for sounding so depressed, but this discussion _is_ depressing!)

Matthias
From: David M. <Dav...@pr...> - 2001-09-26 14:36:29
I have to say I fundamentally disagree with this analysis. If I have understood Bob's argument correctly, it is that languages must change to be viable, that there is pressure within the Standard ML community for change, and that by responding positively to this pressure it is possible to "manage" it and retain cohesiveness within the Standard ML community. I believe that all of these are false.

First, is there any reason to believe that programming languages must change? Certainly languages must respond to their environment to the extent of adding new libraries and modules. I think everyone accepts that. Where there is disagreement is whether it is necessary to change the fundamentals, the elements with which the Definition is concerned. If one looks at other widely used and certainly viable languages such as C I cannot see any evidence that they are changing in that area.

Second, is there any pressure for change? Ken Friis Larsen pointed out that even among the implementers responding to these messages there seemed to be a majority against change. If Standard ML is to be treated as a serious language the most important community is that of the users of the language. I get no impression that they are calling out for changes, rather the opposite. A quick look at the Isabelle theorem prover suggests that it contains around 200 000 lines of Standard ML. I am sure Larry and his colleagues would object to having to change this to accommodate another change in the language definition. The effect is that even if the definition of the language changed Poly/ML would have to continue to support SML97 for the foreseeable future.

Finally, is it possible to manage the diversity? There is a fundamental difference of approach between implementers when it comes to the language. Some see the Definition as a guideline to be followed as far as possible but where they perceive deficiencies or where they wish to make extensions they are happy to diverge from it. Others, and I would count myself among them, see the Definition as a form of contract with the users and attempt to follow it as closely as possible. I don't see it is possible to "manage" this difference in philosophy. Some implementers will wish to experiment with new language features beyond any new Definition while others will feel that it is more important to continue to support the older versions of the language. Far from creating harmony between the implementations I can see further change in the Definition as leading to even greater divergence.

This disagreement is not about the ends but about the means of achieving them. I am sure we all want to see Standard ML and languages like it used more extensively. It may be that the Standard ML community will split into those who are interested in language design and in experimenting with new features and those whose interests lie more in providing a broader support for the existing language. Maybe it is too much to expect a single group to follow both paths.

David.

On Wednesday, September 26, 2001 3:26 AM, Robert Harper [SMTP:Rob...@cs...] wrote:

> No language, natural or artificial, can remain static, if it is used
> at all; it will evolve whether we like it or not. The question is how
> we manage that evolution.
>
> There are at least six serious implementations of Standard ML,
> representing a substantial commitment by a substantial community to
> its continuing vitality. At present these implementations differ in
> important ways, particularly in the treatment of separate compilation
> and in libraries, but also in correcting deficiencies and making
> extensions to the language. It is vital that we harmonize them. It
> is also vital that we, as a community, discuss possible paths of
> evolution to keep pace with the world around us.
>
> The community has a choice. We can either go our separate ways,
> dissipating our efforts, and bring a rapid end to there being any
> community at all. Or we can decide that it is in our best interest to
> work together to guide the ongoing and inevitable evolution of
> Standard ML through a community process that preserves the existence
> of a common language, preserves the community that has arisen around
> it, and nurtures a set of ideas that have inspired and substantially
> engaged us all.
>
> It is not for me, nor any one person, to decide, but for us all
> collectively to determine whether we will go forward separately or
> together. For my part, I can see no profit in attempting to suppress
> the process, but enormous gain in fostering and guiding it.
>
> Bob Harper
>
> -----Original Message-----
> From: Robin Milner [mailto:Rob...@cl...]
> Sent: Tuesday, September 25, 2001 8:22 AM
> To: sml...@li...
> Cc: Rob...@cl...
> Subject: [Sml-implementers] Subject: Standard ML
>
> We are authors of Standard ML, both 1990 and 1997; we also wrote
> the Commentary.
>
> We welcome standardisation of matters not covered by the
> Definition of Standard ML, such as libraries and separate compilation.
>
> We also welcome language designs built on Standard ML; they may
> proclaim membership of the ML family, as for example CAML does.
>
> However, we will not accept further revision of Standard ML. It
> is natural that people discuss changes and new features, and thus
> that successors of the language will arise. We expect, however,
> the designers of any successor language to make it clear that
> their language is not Standard ML.
>
> With regards
> Robin Milner, Mads Tofte
From: Matthias B. <bl...@re...> - 2001-09-26 13:53:53
|
Andrew Kennedy wrote: > > Andreas, > > Whatever the glitches in the Definition, as a user of SML > as well as an implementer I would like to see fewer of > my programs rejected by the type-checker. In particular, I > dislike the fact that some implementations won't accept > > structure STR :> SIG = struct > > val settings = ref [] > > ... > > end > > despite the constraints on "settings" imposed by the rest > of the structure ("...") or the signature SIG. > > Furthermore, I find it unpleasant that the typing of a > program might be influenced by the presence of a semicolon > separator which otherwise can be regarded as optional. > > There are two questions: > > (1) What does one do about types that are still "undetermined" > at the level of the unit-of-compilation (e.g. "..." or "SIG" > above do not resolve the type of "settings"). Instantiating > with something arbitrary (e.g. unit) is just too confusing so > I would vote for rejecting such programs as ambiguous. > > (2) What does one do in a top-level-loop environment? I share these thoughts. Here is my take on it: 1. What is a topdec? Important case: programs consisting of several separately compiled compilation units (e.g., ML source files): A topdec is the contents of the _entire_ compilation unit (i.e., regardless of any semicolons within). Less important case: interactive toplevel: I don't really care about this one, but I'd say that topdecs are separated by semicolons as they are now. 2. Important design decision: No free type variables may "escape" a topdec. While in an interactive setting one can imagine such free type variables to be bound "later" depending on how the values in question get used, this is not acceptable in the separately compiled case. The reason being the fact that different compilation units might (try to) bind such a free type variable differently, which would require tedious and non-obvious additional bookkeeping which does not follow traditional notions of dependency. 3. 
Reject topdecs as late as possible: I find the current behavior of SML/NJ extremely unpleasant. This is along the lines of Andrew's signature constraint example above and many of the other examples that people have been giving in this thread. Matthias |
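[Editorial illustration of Matthias's point 2, with hypothetical file names and contents that are not from the thread: if the undetermined type in one compilation unit could escape, later units could each try to resolve it differently.]

```sml
(* a.sml: under the value restriction, r cannot be generalised, so its
   type is ref applied to a yet-undetermined element type. *)
val r = ref nil

(* b.sml: would resolve the element type to bool ... *)
val () = r := [true]

(* c.sml: ... while this unit would resolve it to int.  If the
   undetermined type were allowed to escape a.sml, whether b.sml or
   c.sml elaborates would depend on which unit the compiler happens to
   process first -- the non-obvious bookkeeping, outside traditional
   notions of dependency, that the proposal rules out. *)
val () = r := [1]
```

Under the proposal, a.sml by itself is rejected (its topdec lets a free type variable escape), and the conflict between b.sml and c.sml never arises.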
From: Andrew K. <ak...@mi...> - 2001-09-26 13:37:36
|
Andreas, Whatever the glitches in the Definition, as a user of SML as well as an implementer I would like to see fewer of my programs rejected by the type-checker. In particular, I dislike the fact that some implementations won't accept structure STR :> SIG = struct val settings = ref [] ... end despite the constraints on "settings" imposed by the rest of the structure ("...") or the signature SIG. Furthermore, I find it unpleasant that the typing of a program might be influenced by the presence of a semicolon separator which otherwise can be regarded as optional. There are two questions: (1) What does one do about types that are still "undetermined" at the level of the unit-of-compilation (e.g. "..." or "SIG" above do not resolve the type of "settings"). Instantiating with something arbitrary (e.g. unit) is just too confusing so I would vote for rejecting such programs as ambiguous. (2) What does one do in a top-level-loop environment? - Andrew. |
From: Andreas R. <ros...@ps...> - 2001-09-26 09:47:37
|
"Derek R. Dreyer" wrote: > > At first glance, these > two examples might appear to elaborate the same way, but I claim that according > to the Definition it's not clear that they need to. Right. Program 1 definitely has to elaborate (although it does not on some of the systems I tried, namely SML/NJ). I was considering examples like program 2. > In Program 2, however, the two val's are in different topdec's. A careful > reading of Section 8 on the semantics of Programs seems to indicate that there > are two possible outcomes for the elaboration and execution of Program 2. Under > Rule 188, we could guess the type of x as being "bool list" as in Program 1, and > thus elaborate the second val binding fine. However, it is also possible to > guess the type of x as being "int list". In this case, when we get around to > elaborating the second val binding, it will fail to elaborate. This is ok > because this causes Rule 187 to apply, resulting in the abortion of execution of > the second val binding. The ambiguity is due to the negative premise of Rule > 187, which is matched when a topdec does *not* elaborate. It is rather > concerning that there is such an ambiguity, so either I'm wrong about Rule 187 > applying or the Definers could maybe comment on this? I believe this is not a valid interpretation of rule 187. For most programs, there are arbitrarily many non-elaborations, AFAICS. Thus, if the rule had the meaning you suggest, it would follow that almost all programs could be legally rejected by an implementation. So I think that the correct interpretation of the negative premise is: there does not exist any elaboration (if I understand correctly, your interpretation would be: there exists a non-elaboration). > 3) A related issue that I observed while playing around was that NJ and Moscow > have very different behavior with respect to failed elaborations in general, and > as usual Moscow ML is the one that is in line with the Definition. The specific > question is: When you hit a topdec that does not elaborate, do you stop or do > you keep elaborating and executing the program? The Definition (rule 187 as > before) is quite clear on this point: you keep going. Thus, for instance, > > val x = 3; val y = x + true; val z = 4; > > causes NJ to give an error at the second val binding and stop, not elaborating > the third one. Moscow ML, however, gives an error at the second val binding and > continues, elaborating "val z = 4" and adding z to the environment. The > question is: which is really better? Although I appreciate that Moscow ML > implements the Definition faithfully, I sort of think NJ is right: it makes more > sense for the compiler to stop evaluating the program once it hits a type > error. The program rules mainly describe the behaviour of the interactive toplevel. For this, it is of course important that the user can continue typing and executing programs after encountering an erroneous input. Everybody seems to agree that this focus on an interactive toplevel is an anachronism, though. Best regards, - Andreas |
From: Robert H. <Rob...@cs...> - 2001-09-26 02:25:45
|
No language, natural or artificial, can remain static, if it is used at all; it will evolve whether we like it or not. The question is how we manage that evolution. There are at least six serious implementations of Standard ML, representing a substantial commitment by a substantial community to its continuing vitality. At present these implementations differ in important ways, particularly in the treatment of separate compilation and in libraries, but also in correcting deficiencies and making extensions to the language. It is vital that we harmonize them. It is also vital that we, as a community, discuss possible paths of evolution to keep pace with the world around us. The community has a choice. We can either go our separate ways, dissipating our efforts, and bring a rapid end to there being any community at all. Or we can decide that it is in our best interest to work together to guide the ongoing and inevitable evolution of Standard ML through a community process that preserves the existence of a common language, preserves the community that has arisen around it, and nurtures a set of ideas that have inspired and substantially engaged us all. It is not for me, nor any one person, to decide, but for us all collectively to determine whether we will go forward separately or together. For my part, I can see no profit in attempting to suppress the process, but enormous gain in fostering and guiding it. Bob Harper -----Original Message----- From: Robin Milner [mailto:Rob...@cl...] Sent: Tuesday, September 25, 2001 8:22 AM To: sml...@li... Cc: Rob...@cl... Subject: [Sml-implementers] Subject: Standard ML We are authors of Standard ML, both 1990 and 1997; we also wrote the Commentary. We welcome standardisation of matters not covered by the Definition of Standard ML, such as libraries and separate compilation. We also welcome language designs built on Standard ML; they may proclaim membership of the ML family, as for example CAML does. 
However, we will not accept further revision of Standard ML. It is natural that people discuss changes and new features, and thus that successors of the language will arise. We expect, however, the designers of any successor language to make it clear that their language is not Standard ML. With regards Robin Milner, Mads Tofte _______________________________________________ Sml-implementers mailing list Sml...@li... https://lists.sourceforge.net/lists/listinfo/sml-implementers |
From: Derek R. D. <dr...@cs...> - 2001-09-25 22:31:23
|
Andreas raises an interesting question. Although the point appears of minor import at best, it's not totally trivial since SML/NJ and Moscow ML seem to behave differently with regard to it. After studying the Definition carefully, here's my take. (If you don't care about this issue, you may still want to skip to bullet 3 below, where I point out a related and somewhat more important distinction between NJ and Moscow.) 1) I think you're right that the informal discussion in G.8 is somewhat wrong: since you can always replace the free type variables introduced by a "topdec" with some arbitrary monotype like "int", it's wrong to refuse to elaborate a "topdec" on the grounds that you don't want to make a choice. In fact, both NJ and Moscow elaborate val x = ref nil; albeit with warnings. 2) I think the formal situation is more complex and subtle than you give it credit for. The complexity stems from whether the place where such a val binding as "val x = ref nil" is defined and the place where the free type variable in the type of x gets resolved are in the same "topdec". Consider the following two programs: Program 1 --------- val x = ref nil signature S = sig end val y = (x := [true]); Program 2 --------- val x = ref nil; signature S = sig end; val y = (x := [true]); You can pretty much ignore the signature binding. I just put it there because I find the syntax of "topdec"'s (see page 14) to be a bit ambiguous. Putting the signature binding in forces the two val bindings in Program 2 to be in *different* topdec's. The two val bindings in Program 1 are in the *same* topdec because there are no semicolons separating them. At first glance, these two examples might appear to elaborate the same way, but I claim that according to the Definition it's not clear that they need to. Here's why. In Program 1, the two val's are in the same topdec. If x can correctly be given a type so that the whole topdec elaborates, then you have to give it that type. 
Therefore, in this case, I agree with Andreas that we should be able to "guess" the type of x as being "bool list" to make the program elaborate. This seems to me to be incontrovertible. In Program 2, however, the two val's are in different topdec's. A careful reading of Section 8 on the semantics of Programs seems to indicate that there are two possible outcomes for the elaboration and execution of Program 2. Under Rule 188, we could guess the type of x as being "bool list" as in Program 1, and thus elaborate the second val binding fine. However, it is also possible to guess the type of x as being "int list". In this case, when we get around to elaborating the second val binding, it will fail to elaborate. This is ok because this causes Rule 187 to apply, resulting in the abortion of execution of the second val binding. The ambiguity is due to the negative premise of Rule 187, which is matched when a topdec does *not* elaborate. It is rather concerning that there is such an ambiguity, so either I'm wrong about Rule 187 applying or the Definers could maybe comment on this? On both Programs 1 and 2, SML/NJ (110.9.1) fails to elaborate, and Moscow ML (2.00) succeeds. Given that Program 1 must succeed according to the Definition and Program 2 may, Moscow ML can be said to be implementing the Definition on this point, while NJ can't. Perhaps NJ implementers could comment why they chose to diverge from the Definition on this point? 3) A related issue that I observed while playing around was that NJ and Moscow have very different behavior with respect to failed elaborations in general, and as usual Moscow ML is the one that is in line with the Definition. The specific question is: When you hit a topdec that does not elaborate, do you stop or do you keep elaborating and executing the program? The Definition (rule 187 as before) is quite clear on this point: you keep going. 
Thus, for instance, val x = 3; val y = x + true; val z = 4; causes NJ to give an error at the second val binding and stop, not elaborating the third one. Moscow ML, however, gives an error at the second val binding and continues, elaborating "val z = 4" and adding z to the environment. The question is: which is really better? Although I appreciate that Moscow ML implements the Definition faithfully, I sort of think NJ is right: it makes more sense for the compiler to stop evaluating the program once it hits a type error. Otherwise, you could end up with a slew of type errors if the rest of the program depends on a variable bound by a buggy val binding. Thoughts? Derek Andreas Rossberg wrote: > > Long posting, this time not about language evolution but about > interpretation of the current Definition :-) > > I would like to reraise an issue in this forum I already discussed with > Stephen Weeks on comp.lang.ml recently. Since we did not entirely agree > over the exact interpretation of the rules I hope that some more people, > in particular the authors, can comment on it. > > At the end of section G.8 (Principal Environments) the Definition > explains that the intent of the restrictions on free type variables at > the toplevel (side-conditions in rules 87 and 89) is to avoid reporting > free type variables to the user. However, I believe that this paragraph > confuses two notions of type variable: type variables as semantic > objects, as appearing in the formal rules of the Definition, and yet > undetermined types during Hindley/Milner type inference, which are also > commonly represented by type variables. However, these are variables on > a completely different level. The former are part of the formal > framework of the Definition, while the latter are an `implementation > detail' that lies outside the scope of the Definition's formalism. Let > us distinguish both by referring to the former as _semantic type > variables_ and to the latter as _undetermined types_. 
> > Using this terminology, and judging from the remainder of the same > paragraph in the Definition, the main purpose of the aforementioned > restrictions obviously is to avoid reporting _undetermined types_ to the > user. However, they fail to achieve that. In fact, it seems impossible > to enforce such behaviour within the formal framework of the Definition, > since it essentially would require formalising type inference (the > current formalism has no notion of undetermined type). Consequently, the > comment about the possibility of relaxing the restrictions by > substituting arbitrary monotypes misses the point as well. > > In fact, I interpret the formal rules of the Definition to actually > imply the exact opposite, namely that an implementation may _never_ > reject a program that results in undetermined types at the toplevel, and > is thus compelled to report them. The reason is explicitly given in the > same section: "implementations should not reject programs for which > successful elaboration is possible". Consider the following well-known > example: > > val r = ref nil; > r := [true]; > > Rule 2 has to non-deterministically choose some type (tau list) for the > occurrence of nil. The choice of tau is not determined by the declaration > itself: it is not used, nor can it be generalised, due to the value > restriction. However, bool is a perfectly valid choice for tau, and this > choice will allow the entire program to elaborate. So according to the > quote above, an implementation has to make exactly that choice. Now, if > both declarations are entered separately into an interactive toplevel > the implementation obviously has to delay commitment to that choice > until it has actually seen the second declaration. Consequently, it can > do nothing else but report an undetermined type for the first > declaration. 
The only effect the side conditions in rules 87 and 89 have > on this is that the types committed to later may not contain free > semantic type variables -- but there is no way this could happen anyway > in a practical implementation, considering the way such variables are > introduced in type inference (namely only during generalisation). > > There are two possibilities of dealing with this matter: (1) take the > formal rules as they are and ignore the comment in the appendix, or (2) > view the comment as an informal "further restriction" and fix its actual > formulation to match the obvious intent. Since the comments in appendix > G are not supposed to be a normative part of the Definition but merely > explanatory, and moreover are somewhat inconsistent, strict reading of > the Definition as is should give the formal rules priority and choose > option 1, IMHO. > > Furthermore, this observation gives rise to the question whether the > claim about the existence of principal environments in section 4.12 of > the SML'90 Definition was valid in the first place. I believe it wasn't: > a declaration like the one of r has no principal environment that would > be expressible within the formalism of the Definition, despite allowing > different choices of free imperative type variables. The reasoning that > this relaxation was sufficient to regain principality is based on the > same mix-up of semantic type variables and undetermined types as above. > The relaxation does not solve the problem with expansive declarations, > since semantic type variables are rather unrelated to it -- choosing a > semantic type variable for an undetermined type is no more principal > than choosing any particular monotype. > > I hope I was making sense and hope for some comments. If I am correct > with these observations, should it be fixed in future versions? > > Best regards, > > - Andreas > > -- > Andreas Rossberg, ros...@ps... 
> > "Computer games don't affect kids; I mean if Pac Man affected us > as kids, we would all be running around in darkened rooms, munching > magic pills, and listening to repetitive electronic music." > - Kristian Wilson, Nintendo Inc. > > _______________________________________________ > Sml-implementers mailing list > Sml...@li... > https://lists.sourceforge.net/lists/listinfo/sml-implementers |
From: Andreas R. <ros...@ps...> - 2001-09-25 15:06:27
|
Matthias Blume wrote: > > Well, that settles that. Sounds like "Standard ML" is soon to be officially > dead. Ideas for a new name, anyone? How about continuing a good tradition in naming PLs and call it "Robin"? :-) - Andreas |
From: Matthias B. <bl...@re...> - 2001-09-25 14:09:52
|
Robin Milner wrote: > > We are authors of Standard ML, both 1990 and 1997; we also wrote > the Commentary. > > We welcome standardisation of matters not covered by the > Definition of Standard ML, such as libraries and separate compilation. > > We also welcome language designs built on Standard ML; they may > proclaim membership of the ML family, as for example CAML does. > > However, we will not accept further revision of Standard ML. It > is natural that people discuss changes and new features, and thus > that successors of the language will arise. We expect, however, > the designers of any successor language to make it clear that > their language is not Standard ML. > > With regards > Robin Milner, Mads Tofte Well, that settles that. Sounds like "Standard ML" is soon to be officially dead. Ideas for a new name, anyone? Matthias |
From: Robin M. <Rob...@cl...> - 2001-09-25 12:22:12
|
We are authors of Standard ML, both 1990 and 1997; we also wrote the Commentary. We welcome standardisation of matters not covered by the Definition of Standard ML, such as libraries and separate compilation. We also welcome language designs built on Standard ML; they may proclaim membership of the ML family, as for example CAML does. However, we will not accept further revision of Standard ML. It is natural that people discuss changes and new features, and thus that successors of the language will arise. We expect, however, the designers of any successor language to make it clear that their language is not Standard ML. With regards Robin Milner, Mads Tofte |
From: Andreas R. <ros...@ps...> - 2001-09-25 09:42:40
|
Long posting, this time not about language evolution but about interpretation of the current Definition :-) I would like to reraise an issue in this forum I already discussed with Stephen Weeks on comp.lang.ml recently. Since we did not entirely agree over the exact interpretation of the rules I hope that some more people, in particular the authors, can comment on it. At the end of section G.8 (Principal Environments) the Definition explains that the intent of the restrictions on free type variables at the toplevel (side-conditions in rules 87 and 89) is to avoid reporting free type variables to the user. However, I believe that this paragraph confuses two notions of type variable: type variables as semantic objects, as appearing in the formal rules of the Definition, and yet undetermined types during Hindley/Milner type inference, which are also commonly represented by type variables. However, these are variables on a completely different level. The former are part of the formal framework of the Definition, while the latter are an `implementation detail' that lies outside the scope of the Definition's formalism. Let us distinguish both by referring to the former as _semantic type variables_ and to the latter as _undetermined types_. Using this terminology, and judging from the remainder of the same paragraph in the Definition, the main purpose of the aforementioned restrictions obviously is to avoid reporting _undetermined types_ to the user. However, they fail to achieve that. In fact, it seems impossible to enforce such behaviour within the formal framework of the Definition, since it essentially would require formalising type inference (the current formalism has no notion of undetermined type). Consequently, the comment about the possibility of relaxing the restrictions by substituting arbitrary monotypes misses the point as well. 
In fact, I interpret the formal rules of the Definition to actually imply the exact opposite, namely that an implementation may _never_ reject a program that results in undetermined types at the toplevel, and is thus compelled to report them. The reason is explicitly given in the same section: "implementations should not reject programs for which successful elaboration is possible". Consider the following well-known example: val r = ref nil; r := [true]; Rule 2 has to non-deterministically choose some type (tau list) for the occurrence of nil. The choice of tau is not determined by the declaration itself: it is not used, nor can it be generalised, due to the value restriction. However, bool is a perfectly valid choice for tau, and this choice will allow the entire program to elaborate. So according to the quote above, an implementation has to make exactly that choice. Now, if both declarations are entered separately into an interactive toplevel the implementation obviously has to delay commitment to that choice until it has actually seen the second declaration. Consequently, it can do nothing else but report an undetermined type for the first declaration. The only effect the side conditions in rules 87 and 89 have on this is that the types committed to later may not contain free semantic type variables -- but there is no way this could happen anyway in a practical implementation, considering the way such variables are introduced in type inference (namely only during generalisation). There are two possibilities of dealing with this matter: (1) take the formal rules as they are and ignore the comment in the appendix, or (2) view the comment as an informal "further restriction" and fix its actual formulation to match the obvious intent. 
Since the comments in appendix G are not supposed to be a normative part of the Definition but merely explanatory, and moreover are somewhat inconsistent, strict reading of the Definition as is should give the formal rules priority and choose option 1, IMHO. Furthermore, this observation gives rise to the question whether the claim about the existence of principal environments in section 4.12 of the SML'90 Definition was valid in the first place. I believe it wasn't: a declaration like the one of r has no principal environment that would be expressible within the formalism of the Definition, despite allowing different choices of free imperative type variables. The reasoning that this relaxation was sufficient to regain principality is based on the same mix-up of semantic type variables and undetermined types as above. The relaxation does not solve the problem with expansive declarations, since semantic type variables are rather unrelated to it -- choosing a semantic type variable for an undetermined type is no more principal than choosing any particular monotype. I hope I was making sense and hope for some comments. If I am correct with these observations, should it be fixed in future versions? Best regards, - Andreas -- Andreas Rossberg, ros...@ps... "Computer games don't affect kids; I mean if Pac Man affected us as kids, we would all be running around in darkened rooms, munching magic pills, and listening to repetitive electronic music." - Kristian Wilson, Nintendo Inc. |
From: Ken F. L. <kf...@it...> - 2001-09-24 11:19:01
|
Matthias Blume <bl...@re...> writes: > Please, no name-calling! No name-calling was intended! If I called Bob any names (other than Bob or "an author of the Definition") then I apologize and play the "I'm not a native English speaker" card. > If Bob is willing to listen to SML implementers and users when they > ask for revisions, But many implementers did *not* ask for revisions! Here is my count of the replies from the maintainers of the different SML compilers: Two said please make the changes in a new language, two said please only make minor changes, one was enthusiastic about changes, one I had difficulties interpreting (could be both for and against changes), and one has been suspiciously quiet. > Asking Bob, and perhaps Robin or Mads or Dave to sit back and watch > is not realistic, IMO. I was not asking Bob (or any of the other authors) to stay out of the discussion. I was just pointing out that not all of us think that revisions are just good. > > That revisions are do-able does not necessitate revisions. > While this is true, the revisions are necessary anyway. I'm not sure I agree with the "necessary" part. Personally I have a few quibbles with the Definition, but I think that the stability of the language is more important. > Nearly every major implementation of SML has already departed in > more or less significant ways from the text of the definition. But they do it with different grace. Moscow ML, for example, warns loudly if you use any of the extensions (even withtype in signatures). > > But we don't want to repeat the fiasco of the Standard Basis Library > > (for those involved, apologies for being so blunt). > What exactly is the "fiasco"? I have elaborated for John in a private mail. My main complaints are that the spec for SBL isn't available on paper (preferably downloadable on the web) and that the library is designed in a closed manner. > > ... This is the first revised version, and we foresee no others. 
> This was written in 1997 when there was still a strong expectation that SML > would be succeeded by ML2000. We all know what happened to the latter, so > things have changed. Sorry, I don't know what happened to ML2000 (except that it is delayed judging from the name), has the effort been dropped? Whatever happened (since it sounds bad), I would hate to see the same thing happen to SML. --Ken |
From: Matthias B. <bl...@re...> - 2001-09-21 17:31:39
|
"Derek R. Dreyer" wrote: > > I don't understand the problem with using where type in your example. Define > signature E as follows: > > signature E = sig > structure C : C > structure D : D where type d = C.B.b > end > > This is precisely an example of where a symmetric sharing constraint doesn't > work but an asymmetric where type constraint works fine. Where type allows you > to introduce a type sharing between a flexible type in the signature and *any > other type* in the context of the signature. Am I missing something here? Well, you are right, the example was not quite what I wanted it to be. (Also, see the followup to my own message.) As long as one of the two sides of the equation is still flexible, you can add a definitional spec. However, sometimes this is not the case. Moreover, when it is the case, then it forces an ordering on other things: C _must_ come before D. If there is another pair of types for which you also want equality but where the flexible type is in C, then you have a problem. You can always get it to work by applying the recursive algorithm that I outlined "by hand". But that is not very convenient, it may obscure what you are really trying to say, and it is fragile with respect to future revisions. Say D.d was defined to be some D.d' * D.d' (D.d' still being flexible). Then all you can do is express the desired type equality by saying "C.A.a = D.d'" -- which completely obscures the original purpose of saying "D.d = C.B.b" and which will break if the definition of D.d is changed to, say D.d'' * D.d'' at some later time. Anyway, I don't see what the _advantage_ of the assymetric solution is. I definitely see its disadvantages, even though others might disagree with me as to whether these disadvantages are really a big problem or not. So all else being equal, I don't see why we should not adopt a symmetric approach. Matthias > Matthias Blume wrote: > > > > I am still not convinced about the need for an asymetric "where type". 
> > Consider the following scenario (which, in fact, has happened to us > > in similar form, and it was a very annoying experience). > > > > Suppose we have the following signatures A, B, C, and D: > > > > signature A = sig type a end > > > > signature B = sig type b end > > > > signature C = sig > > structure A : A > > structure B : B where type b = A.a * A.a > > end > > > > signature D = sig type d > > val f : d -> unit > > end > > > > Now, suppose further that we want to construct a signature E (perhaps > > the formal argument of a functor) consisting of two substructures that > > match C and D, respectively: > > > > signature E = sig > > structure C : C > > structure D : D > > end > > > > Further, we want to be able to apply D.f to values of type C.B.b, so > > we must specify that D.d and C.B.b are the same type. All I want to > > convey to the compiler is that D.d = C.B.b, but I can't do this -- > > neither using "sharing" nor using "where type" because C.B.b > > is not flexible anymore! So I have to say something like: > > > > signature E = sig > > structure C : C > > structure D : D where type d = C.A.a * C.A.a > > end > > > > In other words, the language forces me to trace back what C.B.b was > > defined to be -- even though this information is completely > > irrelevant to what I am trying to express. Moreover, it is > > intuitively clear that the compiler could infer the above automatically > > from the equation D.d = C.B.b. > > > > I see not conceptual difficulty with making "where type" (or any > > type abbreviation in signatures) symmetric. The meaning should be this: > > An attempt is made to "unify" the two types in question, with currently > > "flexible" type names playing the roles of type variables. 
> > > > - "unifying" two flexible type names generates a traditional sharing > > constraint for them, i.e., it throws the two names into an equivalence class > > - "unifying" a rigid type name, i.e., one that already has a definitonal > > spec, applies recursively to the RHS of its spec > > - "unifying" a flexible type name with a type constructor application > > generates a definitional spec for the name (and its equivalence class) > > - "unifying" two type constructor applications with equal head constructor > > causes the respective arguments to be "unified" recursively > > - "unifying" two type constructor applications with unequal head constructor > > causes elaboration to fail with an error message > > > > Why can't we have this? It seems simple and intuitive and expressive. > > > > Matthias > > > > _______________________________________________ > > Sml-implementers mailing list > > Sml...@li... > > https://lists.sourceforge.net/lists/listinfo/sml-implementers > > _______________________________________________ > Sml-implementers mailing list > Sml...@li... > https://lists.sourceforge.net/lists/listinfo/sml-implementers |
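Blume's fragility scenario can be spelled out in a short sketch. The revised signature D below is hypothetical, following his description (D.d defined as D.d' * D.d', with d' still flexible); the other signature names are those from the thread:

```sml
(* Hypothetical revision of D: d is now rigid, defined in terms
   of a still-flexible d'. *)
signature D = sig
  type d'
  type d = d' * d'
  val f : d -> unit
end

(* The intended constraint is simply D.d = C.B.b, but both sides
   now carry definitional specs, so neither "sharing type" nor
   "where type" accepts the equation directly.  The only way
   through is to constrain the underlying flexible type: *)
signature E = sig
  structure C : C
  structure D : D where type d' = C.A.a
end
(* This obscures the intent (D.d = C.B.b) and breaks if D's
   definition of d later changes to, say, d'' * d''. *)
```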
From: Derek R. D. <dr...@cs...> - 2001-09-21 16:42:55
|
I don't understand the problem with using where type in your example.
Define signature E as follows:

signature E = sig
  structure C : C
  structure D : D where type d = C.B.b
end

This is precisely an example of where a symmetric sharing constraint
doesn't work but an asymmetric where type constraint works fine.
Where type allows you to introduce a type sharing between a flexible
type in the signature and *any other type* in the context of the
signature.  Am I missing something here?

Derek

Matthias Blume wrote:
>
> I am still not convinced about the need for an asymmetric "where type".
> Consider the following scenario (which, in fact, has happened to us
> in similar form, and it was a very annoying experience).
>
> Suppose we have the following signatures A, B, C, and D:
>
> signature A = sig type a end
>
> signature B = sig type b end
>
> signature C = sig
>   structure A : A
>   structure B : B where type b = A.a * A.a
> end
>
> signature D = sig
>   type d
>   val f : d -> unit
> end
>
> Now, suppose further that we want to construct a signature E (perhaps
> the formal argument of a functor) consisting of two substructures
> that match C and D, respectively:
>
> signature E = sig
>   structure C : C
>   structure D : D
> end
>
> Further, we want to be able to apply D.f to values of type C.B.b, so
> we must specify that D.d and C.B.b are the same type.  All I want to
> convey to the compiler is that D.d = C.B.b, but I can't do this --
> neither using "sharing" nor using "where type" -- because C.B.b is
> not flexible anymore!  So I have to say something like:
>
> signature E = sig
>   structure C : C
>   structure D : D where type d = C.A.a * C.A.a
> end
>
> In other words, the language forces me to trace back what C.B.b was
> defined to be -- even though this information is completely
> irrelevant to what I am trying to express.  Moreover, it is
> intuitively clear that the compiler could infer the above
> automatically from the equation D.d = C.B.b.
>
> I see no conceptual difficulty with making "where type" (or any type
> abbreviation in signatures) symmetric.  The meaning should be this:
> an attempt is made to "unify" the two types in question, with
> currently "flexible" type names playing the roles of type variables.
>
> - "unifying" two flexible type names generates a traditional sharing
>   constraint for them, i.e., it throws the two names into an
>   equivalence class
> - "unifying" a rigid type name, i.e., one that already has a
>   definitional spec, applies recursively to the RHS of its spec
> - "unifying" a flexible type name with a type constructor application
>   generates a definitional spec for the name (and its equivalence
>   class)
> - "unifying" two type constructor applications with equal head
>   constructors causes the respective arguments to be "unified"
>   recursively
> - "unifying" two type constructor applications with unequal head
>   constructors causes elaboration to fail with an error message
>
> Why can't we have this?  It seems simple and intuitive and expressive.
>
> Matthias
|
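The contrast Dreyer draws can be made explicit in a sketch, using the signatures from the quoted example. The behavior shown follows from the thread's own premise that a sharing constraint requires both type names to be flexible, while "where type" only constrains its left-hand side:

```sml
(* Rejected: "sharing type" needs both names to be flexible, but
   C.B.b already has a definitional spec (A.a * A.a). *)
signature E_bad = sig
  structure C : C
  structure D : D
  sharing type D.d = C.B.b   (* elaboration error *)
end

(* Accepted: "where type" only requires its left-hand side (d) to
   be flexible; the right-hand side may be any type in scope. *)
signature E = sig
  structure C : C
  structure D : D where type d = C.B.b
end
```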
From: Matthias B. <bl...@re...> - 2001-09-21 16:37:37
|
Matthias Blume wrote:
>
> I am still not convinced about the need for an asymmetric "where type".
> Consider the following scenario (which, in fact, has happened to us
> in similar form, and it was a very annoying experience).
>
> Suppose we have the following signatures A, B, C, and D:
>
> signature A = sig type a end
>
> signature B = sig type b end
>
> signature C = sig
>   structure A : A
>   structure B : B where type b = A.a * A.a
> end
>
> signature D = sig
>   type d
>   val f : d -> unit
> end
>
> Now, suppose further that we want to construct a signature E (perhaps
> the formal argument of a functor) consisting of two substructures
> that match C and D, respectively:
>
> signature E = sig
>   structure C : C
>   structure D : D
> end
>
> Further, we want to be able to apply D.f to values of type C.B.b, so
> we must specify that D.d and C.B.b are the same type.  All I want to
> convey to the compiler is that D.d = C.B.b, but I can't do this --
> neither using "sharing" nor using "where type" -- because C.B.b is
> not flexible anymore!  So I have to say something like:
>
> signature E = sig
>   structure C : C
>   structure D : D where type d = C.A.a * C.A.a
> end
>
> In other words, the language forces me to trace back what C.B.b was
> defined to be -- even though this information is completely
> irrelevant to what I am trying to express.  Moreover, it is
> intuitively clear that the compiler could infer the above
> automatically from the equation D.d = C.B.b.

Reviewing what I wrote, I realize that D.d was still flexible, so I
could have said "... D where type d = C.B.b".  But imagine D.d was
also rigid already for some reason, e.g., because D was defined as:

signature D = sig
  type d0
  type d = d0
  val f : d -> unit
end

Now I would have to "know" that I have to apply the "where type" spec
to d0 instead of d.  Changes to D which change how d is defined would
break clients -- even though clients really didn't care about such
details.  All they wanted to say is that C.B.b = D.d.

Anyway, the rest of what I wrote stands...

> I see no conceptual difficulty with making "where type" (or any type
> abbreviation in signatures) symmetric.  The meaning should be this:
> an attempt is made to "unify" the two types in question, with
> currently "flexible" type names playing the roles of type variables.
>
> - "unifying" two flexible type names generates a traditional sharing
>   constraint for them, i.e., it throws the two names into an
>   equivalence class
> - "unifying" a rigid type name, i.e., one that already has a
>   definitional spec, applies recursively to the RHS of its spec
> - "unifying" a flexible type name with a type constructor application
>   generates a definitional spec for the name (and its equivalence
>   class)
> - "unifying" two type constructor applications with equal head
>   constructors causes the respective arguments to be "unified"
>   recursively
> - "unifying" two type constructor applications with unequal head
>   constructors causes elaboration to fail with an error message
>
> Why can't we have this?  It seems simple and intuitive and expressive.
>
> Matthias
|
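The five rules Blume proposes amount to a first-order unification over type expressions. A rough sketch in SML follows; the ty datatype and the share/define callbacks are invented for illustration (a real elaborator would work over its internal semantic objects, carry arities, and handle equivalence classes properly):

```sml
(* A crude model of type expressions during signature elaboration. *)
datatype ty
  = Flex of string          (* flexible type name *)
  | Rigid of string * ty    (* name carrying a definitional spec (its RHS) *)
  | App of string * ty list (* type constructor application *)

exception Unify of string

(* share (a, b): record a traditional sharing constraint between two
   flexible names.  define (a, t): add a definitional spec for a
   flexible name (and its equivalence class). *)
fun unify share define (t1, t2) =
    case (t1, t2) of
        (Flex a, Flex b) => share (a, b)
      | (Rigid (_, rhs), t) => unify share define (rhs, t)  (* expand spec *)
      | (t, Rigid (_, rhs)) => unify share define (t, rhs)
      | (Flex a, t) => define (a, t)
      | (t, Flex a) => define (a, t)
      | (App (c1, args1), App (c2, args2)) =>
          if c1 = c2
          then ListPair.appEq (unify share define) (args1, args2)
          else raise Unify ("head constructors differ: " ^ c1 ^ " vs " ^ c2)
```

Under this reading, elaborating "D.d = C.B.b" in signature E would expand both rigid sides to C.A.a * C.A.a, unify the arguments pairwise, and succeed without the programmer ever tracing back the definitions by hand.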