readable-discuss Mailing List for Readable Lisp S-expressions (Page 3)
Readable Lisp/S-expressions with infix, functions, and indentation
Brought to you by: dwheeler
From: David A. W. <dwh...@dw...> - 2014-11-22 00:32:02
David A. Wheeler:
> > If wisp interpreted neoteric-expressions by default,
> > then many more expressions work in both systems...

On Fri, 21 Nov 2014 22:38:13 +0100, Arne Babenhauserheide <arn...@we...> wrote:
> That’s true, but then lines with a single element would be treated
> differently than lines with multiple elements, and that is a gotcha I want to avoid.

As I've commented before, I think the wisp rule *seems* simpler ("each line is new list"), but in practice it is *itself* a gotcha, because it leads to bizarre behavior like this. Which is why both SRFI-49 and sweet-expressions don't do it. But suspending that old discussion, let's focus on the example you mentioned...

> It hits you with things like newline
>
> wisp:
>   define : hello
>     display "Hello World!"
>     newline
>   define : hello2 who
>     format #t "Hello ~A!\n" who

If you're using wisp you probably do *not* want to use a neoteric expression as the *first* element on a line (unless you're actually calculating what function/procedure to call). So teach that style rule, and you avoid that (wisp) gotcha.

However, in *both* wisp and sweet-expressions there are MANY uses for neoteric-expressions in the REST of the line. For example, here's a line from math.slisp:

  cons car(lyst) flatten-operation(op cdr(lyst))

It's pretty common to have several short parameters on a line; neoteric-expressions are quite useful in this case. A quick grep finds many examples. You *can* do it using traditional s-expression notation, of course:

  cons (car lyst) (flatten-operation op (cdr lyst))

However, I think the former is more readable. In particular, the "car(lyst)" format is the same as mathematics and nearly all other programming languages, making it much more familiar. I always use the "car(lyst)" form when it's a call, never the "(car lyst)" form, so there's no problem of "which format do I use". Being readable in great part depends on building on what people already know, and this is the more familiar notation. Besides, neoteric-expressions are *already* supported in curly-infix.

--- David A. Wheeler
From: David A. W. <dwh...@dw...> - 2014-11-21 23:48:58
It is obviously possible to change the semantics of leading period. I am hesitant to add yet another operator; you may disagree, but I really tried to make it a short list. I also really wanted to fix the notation, but leading period is basically never used, so that is probably not really a problem. Let me think about it.

On November 21, 2014 4:38:13 PM EST, Arne Babenhauserheide <arn...@we...> wrote:
>Am Mittwoch, 19. November 2014, 18:34:25 schrieb David A. Wheeler:
>> It's possible to write code that is interpreted *identically*
>> on both wisp and sweet when indentation is enabled.
>
>That’s cool!
>
>> In sweet, a "." at the
>> beginning of a line post-indent is basically ignored.
>
>Would it be possible to generalize this, so sweet would also make the
>full line a continuation instead of only ignoring the dot?
>
>That would make many uses of \\ unnecessary, and wisp would then be
>almost a subset of sweet.
>
>> Thus, in both sweet and wisp:
>>   a b c
>>     d e
>>     . f
>>     g h
>> becomes:
>>   (a b c
>>     (d e)
>>     f
>>     (g h))
>
>> If wisp interpreted neoteric-expressions by default,
>> then many more expressions work in both systems, e.g.:
>>   defun factorial()
>>     if {n <= 1}
>>       . 1
>>       {n * factorial{n - 1}}
>
>That’s true, but then lines with a single element would be treated
>differently than lines with multiple elements, and that is a gotcha I
>want to avoid.
>
>It hits you with things like newline
>
>wisp:
>  define : hello
>    display "Hello World!"
>    newline
>  define : hello2 who
>    format #t "Hello ~A!\n" who
>  hello2 "wisp"
>
>sweet:
>  define hello()
>    display "Hello World!"
>    newline()
>  hello()
>  define hello(who)
>    format #t "Hello ~A!\n" who
>  hello2 "sweet"
>  ; or
>  hello2("sweet")
>
>> So while neoteric-expressions provide two ways to write something,
>> in practice, there's a "more readable" way that better expresses the purpose
>> in each case.
>
>It’s almost as if you had intentionally motivated a quote I found
>yesterday but didn’t share because I didn’t know whether it would come
>off as offensive. With that kind of (unintentional?) prep-work:
>
>    wisp-expressions are not as sweet as readable, but they KISS.
>
>:-)
>
>Best wishes,
>Arne

--- David A. Wheeler
From: Arne B. <arn...@we...> - 2014-11-21 21:38:28
Am Mittwoch, 19. November 2014, 18:34:25 schrieb David A. Wheeler:
> It's possible to write code that is interpreted *identically*
> on both wisp and sweet when indentation is enabled.

That’s cool!

> In sweet, a "." at the
> beginning of a line post-indent is basically ignored.

Would it be possible to generalize this, so sweet would also make the full line a continuation instead of only ignoring the dot?

That would make many uses of \\ unnecessary, and wisp would then be almost a subset of sweet.

> Thus, in both sweet and wisp:
>   a b c
>     d e
>     . f
>     g h
> becomes:
>   (a b c
>     (d e)
>     f
>     (g h))

> If wisp interpreted neoteric-expressions by default,
> then many more expressions work in both systems, e.g.:
>   defun factorial()
>     if {n <= 1}
>       . 1
>       {n * factorial{n - 1}}

That’s true, but then lines with a single element would be treated differently than lines with multiple elements, and that is a gotcha I want to avoid.

It hits you with things like newline

wisp:
  define : hello
    display "Hello World!"
    newline
  define : hello2 who
    format #t "Hello ~A!\n" who
  hello2 "wisp"

sweet:
  define hello()
    display "Hello World!"
    newline()
  hello()
  define hello(who)
    format #t "Hello ~A!\n" who
  hello2 "sweet"
  ; or
  hello2("sweet")

> So while neoteric-expressions provide two ways to write something,
> in practice, there's a "more readable" way that better expresses the purpose
> in each case.

It’s almost as if you had intentionally motivated a quote I found yesterday but didn’t share because I didn’t know whether it would come off as offensive. With that kind of (unintentional?) prep-work:

    wisp-expressions are not as sweet as readable, but they KISS.

:-)

Best wishes,
Arne
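To make the hello/newline comparison concrete: under the rules described in this thread, both the wisp and the sweet snippets above denote the same underlying s-expressions (assuming the second sweet definition is meant to be hello2, matching the wisp version), roughly:

  (define (hello)
    (display "Hello World!")
    (newline))

  (define (hello2 who)
    (format #t "Hello ~A!\n" who))

  (hello2 "wisp")   ; or (hello2 "sweet")

The gotcha under discussion is the bare newline line: wisp wraps a lone element into the call (newline), while sweet leaves a lone symbol alone, so sweet needs the explicit newline().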
From: Arne Bab. <Arn...@we...> - 2014-11-21 09:41:58
At Wed, 19 Nov 2014 15:46:54 -0500 (EST), dwheeler wrote:
> > On Wed, 19 Nov 2014 20:48:28 +0100, Arne Babenhauserheide <arn...@we...> wrote:
> > It’s crazy to think that nowadays it’s actually possible to do
> >
> >   guile -L . --language=wisp tests/factorial.w
> >
> > and have guile execute the file as real code.
> > I guess you know the feeling ☺
>
> Right, we got curly-infix in, and that was a great feeling.
>
> Sweet-expressions are still an external library, and not available through --language.
> Any suggestions on the best way to get them into guile that way?

I’d just s̶t̶e̶a̶l̶ build on the wisp code:

  https://bitbucket.org/ArneBab/wisp/src/v0.8.1/wisp-reader.w

That gets parsed and copied to language/wisp/spec.scm:

  https://bitbucket.org/ArneBab/wisp/src/v0.8.1/bootstrap.sh?at=default#cl-41c

It’s not really hard.

Best wishes,
Arne
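For readers who have not looked at the linked files: such a language/<name>/spec.scm boils down to a small Guile language-definition module. The following is only a rough sketch along those lines for a hypothetical "sweet" language; the module layout follows Guile 2.x conventions, and read-one-sweet-expression is a made-up placeholder for the real readable/wisp reader procedure, so the actual wisp code linked above differs in detail:

  ;; Sketch of a Guile language spec module (assumptions: Guile 2.x
  ;; (system base language) API; read-one-sweet-expression is hypothetical).
  (define-module (language sweet spec)
    #:use-module (system base language)
    #:use-module (language scheme compile-tree-il)
    #:export (sweet))

  (define (sweet-reader port env)
    ;; Must return one datum per call, or the EOF object when input is exhausted.
    (read-one-sweet-expression port))

  (define-language sweet
    #:title     "Sweet-expression Scheme"
    #:reader    sweet-reader
    #:compilers `((tree-il . ,compile-tree-il))
    #:printer   write)

With a module like that on guile's load path, guile --language=sweet somefile.sweet would pick up the new reader, which is essentially what wisp's bootstrap.sh wires up for --language=wisp.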
From: Jörg F. W. <Joe...@so...> - 2014-11-20 13:26:24
Am 19.11.2014 um 20:48 schrieb Arne Babenhauserheide:
> And for example today Mu Lei (Nala Ginrut) had the idea of
> representing sxml templates as wisp - a case where I think the sweet
> <* *> syntax could come in really handy.

I can agree. Using SXML with sweet (in that case) works quite well. Here is a ~100 kbyte example application (be sure to read it in whitespace-preserving mode; my browser gets it wrong):

  http://ball.askemos.org/A60aa8b838c61b0de7e9f3cfd5d3ea0c1

(BTW: This is a payment system based on Ricardian contracts; quite different from bitcoin. Currently being documented. More here:

  http://ball.askemos.org/?_v=search&_id=1856
  http://ball.askemos.org/A5023d27b0e3fce3ee0b12b79e7e337ce

Comments welcome.)

Side note: The <* *> can be problematic if your LISP code is itself embedded in XML-formatted source code, which for me is the case. (To escape that, I allowed {* and *} as aliases for those.)

/Jörg
From: David A. W. <dwh...@dw...> - 2014-11-19 23:34:33
BTW, it's possible to write code that is interpreted *identically* on both wisp and sweet when indentation is enabled.

In sweet, a "." at the beginning of a line post-indent is basically ignored. This was for consistency with neoteric-expressions, and left there in part to be consistent with sweet. In wisp, a leading "." is NECESSARY to disable automatic list-wrapping. So if you begin any line with "." when it has a single element, and avoid the additional markers like ":", "$", "\\", and <*...*>, they're identical. Thus, in both sweet and wisp:

  a b c
    d e
    . f
    g h

becomes:

  (a b c
    (d e)
    f
    (g h))

Of course, once you open a list (...) and format it normally, or start a curly-infix expression {...}, they are identical.

If wisp interpreted neoteric-expressions by default, then many more expressions would work in both systems, e.g.:

  defun factorial()
    if {n <= 1}
      . 1
      {n * factorial{n - 1}}

In general I find that if the first element is a symbol, I normally write it using f(...), e.g., cos(). That is ALWAYS true if it's a procedure I'm calling. However, if the first element is not a symbol, e.g., a number, then I write a normal list, e.g., '(1 2 3). The pretty-printer exploits this: if something is a symbol, and the list is not too long (e.g., 16 items or so), it's presented in f(...) format.

So while neoteric-expressions provide two ways to write something, in practice there's a "more readable" way that better expresses the purpose in each case.

--- David A. Wheeler
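For anyone less used to these notations, here is a sketch of the ordinary s-expression the factorial example above denotes, assuming the intended parameter n (the snippet above omits it) and Scheme's define in place of defun, so it runs in plain Scheme:

  ;; Plain-Scheme equivalent of the indentation/infix example above
  ;; (assumptions: parameter n added, define used instead of defun).
  (define (factorial n)
    (if (<= n 1)
        1
        (* n (factorial (- n 1)))))

  (display (factorial 5))   ; prints 120
  (newline)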
From: David A. W. <dwh...@dw...> - 2014-11-19 22:49:20
Version 1.0.6 of the "readable" library has been released. This is mainly minor improvements for the Common Lisp implementation:

* Bug fix for a subtle, rarely-encountered error in the Common Lisp implementation of the sweet-expression reader. Previously, the reader would not work correctly on an n-expression if it was not a symbol, indentation processing was active, and it began with the punctuation symbols "$", "\", "<", "*", or ".". For example, (* 4 5) and {4 * 5} worked fine, but *(4 5) did not. This bug took a long time to detect, because it didn't affect infix or traditional s-expression notation, and that is normally how such expressions would be used. Also, the neoteric reader worked just fine.

* In Common Lisp, maintain the readtable-case setting with enable-sweet (it already did so for other notations). That way you can both type and show lowercase symbolic input (by using the Common Lisp standard's "invert" setting).

* Modify sweet-sbcl to use the readtable-case invert setting.

* Added "math.slisp", a symbolic math simplifier in Common Lisp that demonstrates the readable notations.

--- David A. Wheeler
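For clarity, the three spellings involved in that bug-fix bullet all denote the same datum once a sweet/neoteric reader is active (a sketch; how the reader is enabled is implementation-specific and not shown here):

  (* 4 5)   ; traditional prefix notation     => the list (* 4 5); 20 when evaluated
  {4 * 5}   ; curly-infix (SRFI-105)          => also reads as (* 4 5)
  *(4 5)    ; neoteric call notation          => also reads as (* 4 5); this was the broken case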
From: Arne B. <arn...@we...> - 2014-11-19 19:48:41
Hi David,

Am Dienstag, 18. November 2014, 22:28:11 schrieb David A. Wheeler:
> > It comes down to personal preferences: The weight we give to different
> > aesthetic aspects of programming languages. For me, the syntactic
> > simplicity is one of the main selling points of lisp and scheme, and
> > sweet departs from that by adding more than the absolute minimum of
> > the required syntax elements for creating a general, indentation-based
> > representation of scheme-code.
>
> Neither Scheme nor Common Lisp are so simple to parse once you
> consider their full generality (e.g., number types).

That hit me with the pure preprocessor… one of the reasons why I switched to using (read) was that I had some longstanding parsing bugs for which I did not have obvious fixes. But that doesn’t mean that the syntax is very complex, just that there are lots of details to take care of. If you hit # a special form begins, strings have some escaping, and otherwise there are the quotes.

But maybe my view is a bit biased, because I compare it to Python, Java and C++. Especially C++ ☺

> > Best wishes,
>
> You too! I view this as a friendly competition.
> We both agree that there's a need for a Lisp syntax that is
> general and homoiconic, and that indentation can help.
> We differ on how to best exploit that, that's all.

That’s how I see it, too - and we’re also using similar resources (like GNU Guile). In the long run I hope that having two different flavors will help indentation-based syntax for Lisps, because it shows that it’s not just a personal pet project but rather something of broader appeal.

And for example today Mu Lei (Nala Ginrut) had the idea of representing sxml templates as wisp - a case where I think the sweet <* *> syntax could come in really handy.

> Thanks.

Thank you! Without you I likely would never have been able to reach the point where I can actually write wisp code in the REPL and execute wisp files directly from guile! It’s crazy to think that nowadays it’s actually possible to do

  guile -L . --language=wisp tests/factorial.w

and have guile execute the file as real code. I guess you know the feeling ☺

- Arne

--
1w6 sie zu achten, sie alle zu finden,
in Spiele zu leiten und sacht zu verbinden.
→ http://1w6.org
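As an illustration, here is a sketch of what a tests/factorial.w along those lines might contain, written using the wisp rules discussed in this thread (":" opens a nested call, a leading "." marks a plain single element, and curly-infix handles the math); this is a guess at the file's contents, not a copy of the real one:

  define : factorial n
    if {n <= 1}
      . 1
      * n : factorial {n - 1}

  display : factorial 5
  newline

With wisp installed on the load path, the guile command quoted above should then print 120.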
From: David A. W. <dwh...@dw...> - 2014-11-19 03:28:19
On Tue, 18 Nov 2014 23:46:44 +0100, Arne Babenhauserheide <arn...@we...> wrote:
> ... Yes, if you know
> all the sweet syntax, quite a few code snippets will likely look more
> readable than those in wisp. But the additional syntax elements
> provide a high barrier for learning, because they require retraining
> the eyes.

I think that's a trade worth making. More syntax *does* take more training, but if you look at a lot of code, then that additional syntax can pay back large dividends.

> > "Less bad" is not exactly a high aspiration :-).
>
> It’s a different trade-off: Simplicity against less rough edges. ...

> > Of course, this is why I decided to NOT have all lines begin a function call in sweet-expressions.
> > Both sweet-expressions and I-expressions (SRFI-49) have a different semantic, because it
> > seems to be "what humans assume" in practice.
>
> It’s what lisp and scheme do, but for example it isn’t what the shell
> does. I agree that lisp and scheme programmers tend to assume this,
> but I don’t think that it is right for indentation-based scheme. But
> since this is the main forking point between sweet and wisp, I guess
> it isn’t a point which can be changed by argument ☺

Fair enough :-).

> It comes down to personal preferences: The weight we give to different
> aesthetic aspects of programming languages. For me, the syntactic
> simplicity is one of the main selling points of lisp and scheme, and
> sweet departs from that by adding more than the absolute minimum of
> the required syntax elements for creating a general, indentation-based
> representation of scheme-code.

Neither Scheme nor Common Lisp are so simple to parse once you consider their full generality (e.g., number types).

> The cost I pay for that is that there will be code snippets which will
> look less elegant in wisp than in sweet. You could say, that they look
> less sweet ☺ ...
>
> Best wishes,
> Arne

You too! I view this as a friendly competition. We both agree that there's a need for a Lisp syntax that is general and homoiconic, and that indentation can help. We differ on how to best exploit that, that's all.

Thanks.

--- David A. Wheeler
From: Arne B. <arn...@we...> - 2014-11-18 23:57:06
Hi David,

Am Donnerstag, 13. November 2014, 18:23:01 schrieb David A. Wheeler:
> On Thu, 13 Nov 2014 21:56:19 +0100, Arne Babenhauserheide <arn...@we...> wrote:
> > I considered it, but decided against it, because in wisp that provides
> > less advantages than in sweet expressions, while adding ambiguity (one
> > more way to spell the same code).
>
> I don't think the ability to spell something more than one way is usually termed "ambiguous";
> I don't know if there's a term for that; perhaps "multiple spellings"?

That sounds better, yes.

> In any case, "multiple spellings" (or whatever it's called) is inherent in Scheme and Common Lisp.
> For example, 'x and (quote x) are already two ways to write the same thing.

These are a bootstrap way and a reader way. 'x actually gets translated to (quote x). In wisp however, `a : b` and `a b()` would both be translated to `a (b)`. They are equivalent spellings which become a common lower-level spelling.

I don’t mind having the possibility to write `a b()`, and since wisp requires SRFI-105 for curly-infix, all wisp users should have the option to enable neoteric expressions in any source file without increasing the requirements for implementors. I just don’t think that it makes sense to push the added complexity on every wisp user *by default*.

> Yes, it adds a different way to spell the same code, but in many
> cases it would be the more common way to do it (e.g., the normal way
> in math and other programming languages).

I think that for neoteric expressions the cost outweighs the gain, while for curly-infix the gain is greater than the cost.

A few months ago I had a colleague look at some wisp code I wrote and he said “I forgot how ugly lisp is”. It turned out that with that he was referring only to prefix math.

> > That expression in wisp is simply
> >   stuff : cos a
> …
> > In sweet that would be a problem, as far as I know, because if you want to do
> >
> >   (stuff (cos a) b c d e)
> …
> > Here neoteric expressions help a lot:
> >
> >   stuff cos(a) b c d e
>
> That would be the normal (and recommended) way to do it.
>
> There are alternatives if you hate neoteric expressions in sweet-expressions, e.g.:
>   stuff
>     cos a \\ b \\ c \\ d \\ e

This is part of what made me start to work on wisp in the first place ☺ Too many ways to express something, which I think mainly comes from not being able to continue the argument list easily. Yes, if you know all the sweet syntax, quite a few code snippets will likely look more readable than those in wisp. But the additional syntax elements provide a high barrier for learning, because they require retraining the eyes.

> But it seems to me that the "obvious" way is the right way:
>   stuff cos(a) b c d e

> > But in wisp you’d just do
> >
> >   stuff : cos a
> >     . b c d e
>
> which, though still harder to read than sweet-expressions, is less bad :-).
>
> "Less bad" is not exactly a high aspiration :-).

It’s a different trade-off: Simplicity against less rough edges.

> I think wisp is the wrong trade-off anyway, but since you're working on it,
> contrast that to:
>
> >   stuff cos(a)
> >     . b c d e
>
> which is still harder to read, but less harder :-).

It actually looks pretty good, I think. But it does not need to go into the spec, because it is (and should remain) trivial to integrate it later, if it turns out to be needed: Just say “requires activation of neoteric expressions in the reader”. It’s a single line of code for the implementations, so I think it can be done later, if my intuition turns out to be wrong.

> > On the other hand, this difference makes neoteric expressions much less elegant in wisp than in sweet. In sweet you can just do
> >
> >   stuff
> >     cos(a)
> >
> > because a single element on a line is treated as that element, not as a function call. In wisp you’d have to do
> >
> >   stuff
> >     . cos(a)
> >
> > because a line always begins a function call, except when started with a period.
>
> Of course, this is why I decided to NOT have all lines begin a function call in sweet-expressions.
> Both sweet-expressions and I-expressions (SRFI-49) have a different semantic, because it
> seems to be "what humans assume" in practice.

It’s what lisp and scheme do, but for example it isn’t what the shell does. I agree that lisp and scheme programmers tend to assume this, but I don’t think that it is right for indentation-based scheme. But since this is the main forking point between sweet and wisp, I guess it isn’t a point which can be changed by argument ☺

It comes down to personal preferences: The weight we give to different aesthetic aspects of programming languages. For me, the syntactic simplicity is one of the main selling points of lisp and scheme, and sweet departs from that by adding more than the absolute minimum of the required syntax elements for creating a general, indentation-based representation of scheme code.

The cost I pay for that is that there will be code snippets which will look less elegant in wisp than in sweet. You could say that they look less sweet ☺

I think that for new code their number will be small, because programmers will instinctively write code which looks elegant. The programming style will evolve (and if no one but me uses it, then at least my style will evolve - it already does). I think that every language promotes its own style. Java programmers write very verbose programs with long variable and function names and deeply nested package structures. Python programmers write concise programs with very short identifiers. From the scheme code I already know, I’d say that scheme programmers often nest many functions on the same line (especially when using lambda). And lots of apply and map. I’m still learning there, though. Let’s see where it takes me :)

Best wishes,
Arne
From: David A. W. <dwh...@dw...> - 2014-11-14 03:42:47
I intend to update the 'readable' library soon. It fixes a bug in the Common Lisp sweet-expression reader, and adds an example "math.slisp" Common Lisp demo. If you have last-minute comments, please post!

--- David A. Wheeler
From: David A. W. <dwh...@dw...> - 2014-11-13 23:23:10
I said:
> > Very cool! Have you considered making neoteric active by default as well?

On Thu, 13 Nov 2014 21:56:19 +0100, Arne Babenhauserheide <arn...@we...> wrote:
> I considered it, but decided against it, because in wisp that provides
> less advantages than in sweet expressions, while adding ambiguity (one
> more way to spell the same code).

I don't think the ability to spell something more than one way is usually termed "ambiguous"; I don't know if there's a term for that; perhaps "multiple spellings"?

In any case, "multiple spellings" (or whatever it's called) is inherent in Scheme and Common Lisp. For example, 'x and (quote x) are already two ways to write the same thing.

Yes, it adds a different way to spell the same code, but in many cases it would be the more common way to do it (e.g., the normal way in math and other programming languages).

> That expression in wisp is simply
>   stuff : cos a
>
> or
>   stuff
>     cos a

Sure. You can also do that easily in Sweet-expressions:

  stuff $ cos a

... or ...

  stuff
    cos a

> In sweet that would be a problem, as far as I know, because if you want to do
>
>   (stuff (cos a) b c d e)
>
> without neoteric expressions, you have to do it as
>
>   stuff
>     cos a
>     b
>     c
>     d
>     e
>
> Here neoteric expressions help a lot:
>
>   stuff cos(a) b c d e

That would be the normal (and recommended) way to do it.

There are alternatives if you hate neoteric expressions in sweet-expressions, e.g.:

  stuff
    cos a \\ b \\ c \\ d \\ e

But it seems to me that the "obvious" way is the right way:

  stuff cos(a) b c d e

> But in wisp you’d just do
>
>   stuff : cos a
>     . b c d e
>
> or
>   stuff
>     cos a
>     . b c d e

which, though still harder to read than sweet-expressions, is less bad :-). "Less bad" is not exactly a high aspiration :-).

I think wisp is the wrong trade-off anyway, but since you're working on it, contrast that to:

>   stuff cos(a)
>     . b c d e

which is still harder to read, but less harder :-).

> On the other hand, this difference makes neoteric expressions much less elegant in wisp than in sweet. In sweet you can just do
>
>   stuff
>     cos(a)
>
> because a single element on a line is treated as that element, not as a function call. In wisp you’d have to do
>
>   stuff
>     . cos(a)
>
> because a line always begins a function call, except when started with a period.

Of course, this is why I decided to NOT have all lines begin a function call in sweet-expressions. Both sweet-expressions and I-expressions (SRFI-49) have a different semantic, because it seems to be "what humans assume" in practice.

I think you're right that in the wisp semantics, using a neoteric expression at the *beginning* of a line would be especially confusing. But not everything is at the beginning of a line, and using them afterwards would (I think) be sensible. E.g.:

  sqrt cos(a) sin(a)

--- David A. Wheeler
From: Arne B. <arn...@we...> - 2014-11-13 20:56:30
Hi David,

Am Sonntag, 9. November 2014, 19:10:32 schrieb David A. Wheeler:
> > With the new release of wisp, curly infix is active by default:
> > http://draketo.de/light/english/wisp-lisp-indentation-preprocessor#v0.8.0
>
> Very cool! Have you considered making neoteric active by default as well?
>
> Then you can do:
>   stuff cos(a)

I considered it, but decided against it, because in wisp that provides less advantages than in sweet expressions, while adding ambiguity (one more way to spell the same code).

That expression in wisp is simply

  stuff : cos a

or

  stuff
    cos a

In sweet that would be a problem, as far as I know, because if you want to do

  (stuff (cos a) b c d e)

without neoteric expressions, you have to do it as

  stuff
    cos a
    b
    c
    d
    e

Here neoteric expressions help a lot:

  stuff cos(a) b c d e

But in wisp you’d just do

  stuff : cos a
    . b c d e

or

  stuff
    cos a
    . b c d e

On the other hand, this difference makes neoteric expressions much less elegant in wisp than in sweet. In sweet you can just do

  stuff
    cos(a)

because a single element on a line is treated as that element, not as a function call. In wisp you’d have to do

  stuff
    . cos(a)

because a line always begins a function call, except when started with a period.

So I’d rather have users enable that manually, if they want it. By enabling curly-infix, wisp requires SRFI-105, so neoteric expressions should always be available with wisp (so people can experiment with them and decide whether they want them, and they don’t have to worry that that brings them incompatibilities in other wisp implementations).

Curly infix is another matter, because it allows using the language from the domain of the problem (math):

  stuff {3 + 4}

This makes the basics of a very important domain look natural, which everybody learns in school. Function syntax with f(x) = ... is only taught much later, so I don’t consider it as similarly essential as infix math notation.

Best wishes,
Arne
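For readers new to SRFI-105: the curly-infix form in that last example maps mechanically onto ordinary prefix calls. A few illustrative cases, following the SRFI-105 rules:

  {3 + 4}          ; reads as (+ 3 4)
  {n <= 1}         ; reads as (<= n 1)
  {a + b + c}      ; reads as (+ a b c)       - one repeated operator may take several operands
  {a + {b * c}}    ; reads as (+ a (* b c))   - mixed operators must be nested explicitly
  stuff {3 + 4}    ; so the wisp line above denotes (stuff (+ 3 4))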
From: David A. W. <dwh...@dw...> - 2014-11-10 00:10:40
On Mon, 10 Nov 2014 00:27:02 +0100, Arne Babenhauserheide <arn...@we...> wrote:
> Hi,
>
> With the new release of wisp, curly infix is active by default:
>
> http://draketo.de/light/english/wisp-lisp-indentation-preprocessor#v0.8.0
>
> Also the implementation became more similar to readable (using read
> wherever possible).

Very cool! Have you considered making neoteric active by default as well? Then you can do:

  stuff cos(a)

--- David A. Wheeler
From: Arne B. <arn...@we...> - 2014-11-09 23:27:12
Hi,

With the new release of wisp, curly infix is active by default:

  http://draketo.de/light/english/wisp-lisp-indentation-preprocessor#v0.8.0

Also the implementation became more similar to readable (using read wherever possible).

Best wishes,
Arne
From: David A. W. <dwh...@dw...> - 2014-11-09 21:53:04
FYI, here's a new demo file for readable notations, using Common Lisp. It implements a basic math expression simplifier:

  http://sourceforge.net/p/readable/code/ci/develop/tree/math.slisp

Suggestions (or patches) welcome. To my knowledge the Common Lisp reader works on all Common Lisp implementations as-is, though it's been tested more on clisp and sbcl.

--- David A. Wheeler
From: David A. W. <dwh...@dw...> - 2014-11-09 21:32:13
On Sun, 9 Nov 2014 07:54:11 +0800, Alan Manuel Gloria <alm...@gm...> wrote:
> Ugh. Non-ASCII is hard to type on a majority of keyboards unless you
> add special stuff. I don't think that'll increase acceptance.

The only reasonable option I see is #[...]. I don't think changing the semantics of {...} would increase its likelihood of acceptance in Clojure. Also, the BDFL of Clojure objected to *any* infix support a few years ago. Don't know if that's still true or not.

--- David A. Wheeler
From: Alan M. G. <alm...@gm...> - 2014-11-08 23:54:18
Ugh. Non-ASCII is hard to type on a majority of keyboards unless you add special stuff. I don't think that'll increase acceptance.

On Thu, Oct 30, 2014 at 12:58 AM, David A. Wheeler <dwh...@dw...> wrote:
> It appears that Clojure normally loads source files assuming they are UTF-8,
> which makes supporting Unicode much easier. This suggests that using a non-ASCII
> character might not be too hard for them to support.
>
> Source file src/jvm/clojure/lang/Compiler.java routine "loadFile" has this Java line,
> which I believe forces reading of source code as UTF-8:
>   return load(new InputStreamReader(f, RT.UTF8), new File(file).getAbsolutePath(), (new File(file)).getName());
>
> It's possible to do indirect loading where additional magic is necessary to force
> configuration of the encoding, as discussed here:
> https://stackoverflow.com/questions/1431008/enabling-utf-8-encoding-for-clojure-source-files
>
> --- David A. Wheeler
From: Alan M. G. <alm...@gm...> - 2014-11-08 23:53:24
In Clojure [...] is lexically read as a vector (similar to Scheme #(..)). The Clojure eval then requires lambda arguments to be in a vector rather than in a list.

From what I notice, Clojure uses [ ] rather sparingly, so I think indent should be ( )

On Wed, Oct 29, 2014 at 7:18 AM, David A. Wheeler <dwh...@dw...> wrote:
> On Mon, 27 Oct 2014 02:04:42 +0100, martijn brekelmans <tij...@ms...> wrote:
>> Hello everybody,
>>
>> I'm fiddling around with clojure, and I'd like to use readable with clojure.
>
> I've looked a little more at implementing "readable" (at least some tiers) in Clojure, beyond http://clojure.org/reader.
>
> Without changing the core code you could implement basic curly infix with a somewhat different syntax and Clojure's tagged literals. Tagged literals let you do "#my/tag element" - the reader then reads element, and passes it through my/tag. Reader tags without namespace qualifiers are reserved for Clojure, however, so the tag will be multiple characters. So the best you could do without changing the reader, as far as I can tell, would be something like #n/fx (i >= 0), where "#n/fx" is an infix processor for curly-infix. That's ugly, especially if they are embedded: #n/fx (i >= #n/fx (a + b)).
>
> Clojure has nothing user-accessible like the Common Lisp readtable.
>
> Implementing any of the readable tiers with a nicer-looking syntax will require modifying the Clojure reader's source code. I took a quick peek at src/jvm/clojure/lang/LispReader.java - that appears to be where the key reader functionality is implemented. It doesn't look like it'd be hard to add a variation of curly-infix or neoteric expressions, but getting those changes *accepted* might be another matter.
>
> I'm sure backwards-compatibility is critical for them, so using #[...] instead of {...} is probably the only practical approach for them.
>
> It might be sensible to start simple, just try to get #[...] accepted as a notation for basic curly-infix (with NO neoteric support). That has NO possibility of conflict with existing code. We could warn users to NOT include syntax in there of the form a(b), with the assumption that it would be interpreted as "a (b)". The next step would be to get neoteric supported inside #[...], at least in the sense of supporting a(....) as a synonym for (a ...). Maybe that version of neoteric could be supported at all times; the problem is not writing the code, it's getting such a change *accepted*. It would be possible to also interpret "x[y...]" as "(x #[y...])", which would be full neoteric with a slightly different syntax. Note that unprefixed [...] would continue to have its current meaning.
>
> Any indentation-sensitive syntax would be a much bigger step - again, not because it's hard to implement, but because it has to be *accepted*.
>
> Example of infix notation in this situation:
>   #[#[a + b] > #[c * d]]
> That is not as nice as {{a + b} > {c * d}}, but I don't see a nicer alternative unless they're willing to use non-ASCII pairing characters (!).
>
> In Clojure [...] and (...) have a different meaning, but it seems to me that we should just leave [...] as-is. Parens are way more common for enclosing larger scopes, as far as I can tell.
>
> --- David A. Wheeler
From: David A. W. <dwh...@dw...> - 2014-10-30 02:04:49
I claimed:
> > «x + 1» : Left/right-pointing double-angle quotation mark,
> > U+AB/U+BB. These are very well-supported (e.g., they are used for
> > French quotations and are in Latin-1), and in many cases are the easiest
> > to enter. However, do they look too similar to the comparison operators <
> > and >?

On Wed, 29 Oct 2014 20:10:36 -0400, John Cowan <co...@me...> wrote:
> The barrier is input, not display. I have a keyboard driver for Windows
> that allows you to type about 1000 characters[1], but most people don't.
> I would say these are the only brackets that are easy for anyone to type,
> and even then they don't work for people using the U.S. keyboard and
> standard drivers. (On X, there is a compose key, but only a few people
> are using Linux on their desktops.)

Input is obviously critical. I think the options are in place, though. There's a lot of info here:

  https://en.wikipedia.org/wiki/Unicode_input

So - regarding Windows. Many applications have application-specific mechanisms. Windows programs that use the RichEdit control (like WordPad) let you enter the hex digits followed by Alt-X. So "a" "b" "Alt-X" will create U+AB. Emacs and vim have other mechanisms.

You can also hold down ALT, and while holding it down, type 0 followed by the DECIMAL value on the numeric keypad, then release. This depends on the input language. So "Hold-ALT 0 1 7 1 Release-ALT" inserts the left character, and "Hold-ALT 0 1 8 7 Release-ALT" inserts the right one. I just tried it out on a laptop without a separate keypad, and it still worked (I had to use Fn with the keys it represented, but it did work).

The most general solution in Windows is to first set the registry key HKEY_CURRENT_USER\Control Panel\Input Method, value EnableHexNumpad, string type (REG_SZ), to 1. Then reboot. Then you can type "Hold-ALT + hex-of-Unicode Release-ALT" (only the + on the numeric keypad works). This works on laptops without a numeric keypad too; just use the Fn key. That really should be the default; it's much faster and simpler.

An annoying problem is that you have to set your editors to actually save as UTF-8 on Windows. Thankfully, that's a one-time step. If you want other applications to know it's UTF-8, you may need to insert the UTF-8 BOM, an abomination that's only useful on Windows but seems to be gracefully handled in many cases. I found that vim quietly keeps the UTF-8 BOM if it's there, and I suspect other applications do the same.

There's no doubt that it is easier to use ASCII than anything else. On the other hand, we've been working to move to a Unicode world for years. Perhaps the world is finally ready :-).

--- David A. Wheeler
From: John C. <co...@me...> - 2014-10-30 00:10:44
David A. Wheeler scripsit:
> «x + 1» : Left/right-pointing double-angle quotation mark,
> U+AB/U+BB. These are very well-supported (e.g., they are used for
> French quotations and are in Latin-1), and in many cases are the easiest
> to enter. However, do they look too similar to the comparison operators <
> and >?

The barrier is input, not display. I have a keyboard driver for Windows that allows you to type about 1000 characters[1], but most people don't. I would say these are the only brackets that are easy for anyone to type, and even then they don't work for people using the U.S. keyboard and standard drivers. (On X, there is a compose key, but only a few people are using Linux on their desktops.)

[1] http://recycledknowledge.blogspot.com/2013/09/us-moby-latin-keyboard-for-windows.html

--
John Cowan    http://www.ccil.org/~cowan    co...@cc...
As you read this, I don't want you to feel sorry for me, because, I believe
everyone will die someday.  --From a Nigerian-type scam spam
From: David A. W. <dwh...@dw...> - 2014-10-29 16:58:09
It appears that Clojure normally loads source files assuming they are UTF-8, which makes supporting Unicode much easier. This suggests that using a non-ASCII character might not be too hard for them to support.

Source file src/jvm/clojure/lang/Compiler.java routine "loadFile" has this Java line, which I believe forces reading of source code as UTF-8:

  return load(new InputStreamReader(f, RT.UTF8), new File(file).getAbsolutePath(), (new File(file)).getName());

It's possible to do indirect loading where additional magic is necessary to force configuration of the encoding, as discussed here:

  https://stackoverflow.com/questions/1431008/enabling-utf-8-encoding-for-clojure-source-files

--- David A. Wheeler
From: David A. W. <dwh...@dw...> - 2014-10-29 03:05:15
Both Scheme and Common Lisp do not assign a meaning to { and }, characters that naturally pair and are available in ASCII. So they are the obvious choice for surrounding infix lists. Unfortunately, Clojure (and probably others) assign (), [], {}, and <> to pre-existing meanings. (The < and > are used for comparisons.) The best backward-compatible syntax I see for Clojure using just ASCII is #[...], which is ugly.

So I'd like to hear some opinions on this proposition: In Lisps where {} and [] are already used, would a Unicode non-ASCII pair be okay instead? If so, what pair?

First: is the world ready for Unicode in code? In particular, is support for input, processing, and display easy enough?

Second, if so, what pair would serve best? These pages presume to list pairing characters in Unicode:

* http://xahlee.info/comp/unicode_matching_brackets.html
* http://www.unicode.org/Public/UNIDATA/BidiBrackets.txt

There are a lot of characters, but issues with many:

* Many Chinese punctuation chars are full-width, which look odd when combined with the so-called half-width characters in Western fonts.
* Support for some of the mathematical characters in some fonts seems dicey. That said, it may be easier to get people to fix their fonts.
* Some characters are hard to distinguish from others. For example, the "left/right angle bracket with dot" pair has such a tiny dot in some fonts that it would be missed.

These look like the best options (if your display can handle them!):

«x + 1» : Left/right-pointing double-angle quotation mark, U+AB/U+BB. These are very well-supported (e.g., they are used for French quotations and are in Latin-1), and in many cases are the easiest to enter. However, do they look too similar to the comparison operators < and >?

⦃x + 1⦄ : Left/right white curly bracket, U+2983/U+2984. These are nice-looking, they are similar to {}, and yet easily distinguished from them and other characters. However, I have some concerns that they aren't uniformly supported in fonts.

⟪x + 1⟫ : Mathematical left/right double angle bracket, U+27EA/U+27EB. Look good, but may not be universally supported in fonts.

⟦x + 1⟧ : Mathematical left/right white square bracket, U+27E6/U+27E7. Look good, but may not be universally supported in fonts.

Some other options don't look so good:

⦑x + 1⦒ : Left/right angle bracket with dot, U+2992/U+2993. Not universally supported in fonts. The dot is hard to see, so this is probably a bad choice.

【x + 1】: Left/right black lenticular bracket, U+3010/U+3011. Chinese, so they are "full width" (and thus space oddly with Western letters).

《x + 1》 : Left/right double angle bracket, U+300A/U+300B. Chinese, so they are "full width" (and thus space oddly with Western letters).

--- David A. Wheeler
From: David A. W. <dwh...@dw...> - 2014-10-28 23:18:35
On Mon, 27 Oct 2014 02:04:42 +0100, martijn brekelmans <tij...@ms...> wrote:
> Hello everybody,
>
> I'm fiddling around with clojure, and I'd like to use readable with clojure.

I've looked a little more at implementing "readable" (at least some tiers) in Clojure, beyond http://clojure.org/reader.

Without changing the core code you could implement basic curly infix with a somewhat different syntax and Clojure's tagged literals. Tagged literals let you do "#my/tag element" - the reader then reads element, and passes it through my/tag. Reader tags without namespace qualifiers are reserved for Clojure, however, so the tag will be multiple characters. So the best you could do without changing the reader, as far as I can tell, would be something like #n/fx (i >= 0), where "#n/fx" is an infix processor for curly-infix. That's ugly, especially if they are embedded: #n/fx (i >= #n/fx (a + b)).

Clojure has nothing user-accessible like the Common Lisp readtable.

Implementing any of the readable tiers with a nicer-looking syntax will require modifying the Clojure reader's source code. I took a quick peek at src/jvm/clojure/lang/LispReader.java - that appears to be where the key reader functionality is implemented. It doesn't look like it'd be hard to add a variation of curly-infix or neoteric expressions, but getting those changes *accepted* might be another matter.

I'm sure backwards-compatibility is critical for them, so using #[...] instead of {...} is probably the only practical approach for them.

It might be sensible to start simple, just try to get #[...] accepted as a notation for basic curly-infix (with NO neoteric support). That has NO possibility of conflict with existing code. We could warn users to NOT include syntax in there of the form a(b), with the assumption that it would be interpreted as "a (b)". The next step would be to get neoteric supported inside #[...], at least in the sense of supporting a(....) as a synonym for (a ...). Maybe that version of neoteric could be supported at all times; the problem is not writing the code, it's getting such a change *accepted*. It would be possible to also interpret "x[y...]" as "(x #[y...])", which would be full neoteric with a slightly different syntax. Note that unprefixed [...] would continue to have its current meaning.

Any indentation-sensitive syntax would be a much bigger step - again, not because it's hard to implement, but because it has to be *accepted*.

Example of infix notation in this situation:

  #[#[a + b] > #[c * d]]

That is not as nice as {{a + b} > {c * d}}, but I don't see a nicer alternative unless they're willing to use non-ASCII pairing characters (!).

In Clojure [...] and (...) have a different meaning, but it seems to me that we should just leave [...] as-is. Parens are way more common for enclosing larger scopes, as far as I can tell.

--- David A. Wheeler
From: David A. W. <dwh...@dw...> - 2014-10-27 22:33:13
FYI, I have posted a simple math expression simplifier written in Common Lisp. It is written using sweet-expressions and itself reads and writes sweet-expressions:

  http://sourceforge.net/p/readable/code/ci/develop/tree/math.slisp

--- David A. Wheeler