wheat-developer Mailing List for Wheat
Status: Pre-Alpha
Brought to you by: mark_lentczner
From: Mark L. <ma...@gl...> - 2005-06-28 17:36:12
Friends -

I have begun work on R2 -- okay, I can't really justify it other than on
emotional grounds, but it seems time to bump the major release number.

I have decided to develop this release with darcs as the source code control
mechanism. Here's how I see it in use:

1) On the wheatfarm.org server will be the "master" development repository.
2) Folks with r/w access will pull and push changes to it over ssh.
3) The world that wants to see the bleeding edge can get it over http.
4) This repository will be mirrored into sourceforge's cvs nightly (or more
   often?) in the module "r2". No one else should ever check into this
   module.
5) Any current work in sourceforge's cvs in the module "r1" can continue,
   and I've left tags in both cvs and darcs so that I can **manually**
   incorporate those changes into the new line when ready. Please don't do
   too much of this: the "r1" module is being shut down.

Everything except part 4 has been implemented, though I haven't tried to do
a part 5 style integration yet...

-=-

The repos are available here:

    http://www.wheatfarm.org/repos/r2/
        - browsable source in your browser, gettable with darcs
    da...@ns...:repos/Wheat/r2
        - available if you have an authorized ssh key

If you want to get the http version for now, you'll still be able to push
(and pull) later via ssh. In the meantime, you can always do a "darcs send"
to generate a patch and e-mail it to me.

-=-

You can find out more about darcs here: http://abridgegame.org/darcs/
There are distributions available for most platforms. If you need help with
a specific one, I've done OS X, Fedora Core (via yum), and Windows
(w/cygwin).

-=-

A good practice is to keep two repos on your local machine: one which
corresponds to the server version, and one to work in.

To set this up you'd run something like this in your shell:

    cd ~
    mkdir Wheat
    cd Wheat
    darcs get --repo-name r2-main http://www.wheatfarm.org/repos/r2/
    darcs get --repo-name r2-dev r2-main

Now you'll have two directories, r2-main and r2-dev. This way you can easily
stage and preview patches to and from the main line:

    # getting changes
    cd r2-main
    darcs pull        # pulls from server
    ... check out the changes, see if you want 'em in your dev right now
    cd ../r2-dev
    darcs pull        # pull the patches from r2-main
    ... interactively pick just the ones you want

    # sending a patch
    cd r2-dev
    ... work on files ...
    darcs record      # perhaps do this multiple times
    # now decide to push
    darcs push        # pushes to r2-main
    cd ../r2-main
    ... check that all is fine ...
    darcs push        # pushes to server

Of course you can just work with one local repo instead of two if you like
-- and the great thing about darcs is that at any time you can switch to the
two (or three or four...) repo version.

I've been using darcs for two months now and am quite happy with it. Feel
free to ask me if you run into stumbling blocks.

- Mark
From: Jim K. <ki...@pa...> - 2005-05-10 05:42:47
I'm working on trying to get relative pathnames to work in prototype
declarations. It seems to work fine for wheatscript. However, XML Media
operates differently. In util/testutil.cpp, Setup::Setup first builds up a
bunch of objects, and only then calls NameSpace::mount (I'm thinking the
non-test version operates similarly but don't remember for sure whether I
looked at that case).

What I need instead is to create an empty root object, call NameSpace::mount
on it, and only then start building objects inside it. Are there problems
with this approach? Is it considered desirable that an object tree in memory
can be remounted without changing the objects therein?

Index: root/library/compiler.ws
===================================================================
RCS file: /cvsroot/wheat/r1/root/library/compiler.ws,v
retrieving revision 1.1
diff -u -r1.1 compiler.ws
--- root/library/compiler.ws	5 May 2005 17:42:25 -0000	1.1
+++ root/library/compiler.ws	10 May 2005 05:35:44 -0000
@@ -71,4 +71,22 @@
         #assert-error("system/vm/exception/not-found(++)", error);
     }
 
+    point2d: { x: 5; y: 6 }
+``  absolute: { :'/library/compiler/tests/point2d': }
+    relative: { :point2d: }
+
+    test-prototype(): {
+        point2d := $point2d.absolute-path.as-string;
+``      #assert-equals(point2d, $absolute.prototype().as-string);
+        #assert-equals(point2d, $relative.prototype().as-string);
+        copy := $relative;
+        #assert-equals(point2d, copy.prototype().as-string);
+    }
+
+    test-inheritance(): {
+        #assert-equals(5, $relative.x);
+        copy := $relative;
+        #assert-equals(5, copy.x);
+    }
+
 }

Index: wheat/memobject.cpp
===================================================================
RCS file: /cvsroot/wheat/r1/wheat/memobject.cpp,v
retrieving revision 1.17
diff -u -r1.17 memobject.cpp
--- wheat/memobject.cpp	18 Feb 2005 16:10:40 -0000	1.17
+++ wheat/memobject.cpp	10 May 2005 05:35:46 -0000
@@ -12,6 +12,8 @@
 
 #include "media.hpp"
 
+#include <iostream> // TEMPORARY, for debugging
+
 using namespace Wheat;
 
 namespace {
@@ -358,8 +360,21 @@
     clearValue(v);
 
     MemoryObject* m = new MemoryObject;
-    m->prototype =
-        prototype.isEmpty() ? standardPrototype(TypeObject) : prototype;
+
+    if (prototype.isEmpty()) {
+        m->prototype = standardPrototype(TypeObject);
+    }
+    else {
+        Path base = NameSpace::pathTo(p.container());
+        Path absolute(base, prototype);
+        if (prototype.isRelative()) {
+            std::cout << "relative path " << prototype.debugName() <<
+                " absolutified to \n" <<
+                "    " << absolute.debugName() << "\n";
+        }
+        m->prototype = absolute;
+    }
+
     m->capacity = 0;
     m->keySize = 0;
     m->arraySize = 0;
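The heart of the memobject.cpp change above is resolving a relative prototype path against the path of the containing object (the Path absolute(base, prototype) construction). A rough model of that resolution rule in Python, assuming Wheat paths behave like POSIX paths (the function name here is invented, not Wheat's API):

```python
import posixpath

def absolutify(container_path, prototype_path):
    """Resolve a prototype path against its container's path.

    An absolute prototype (leading '/') is used as-is; a relative one is
    joined onto the container path, mirroring Path absolute(base, prototype)
    in the patch.
    """
    if prototype_path.startswith('/'):
        return prototype_path
    return posixpath.normpath(posixpath.join(container_path, prototype_path))

# A relative reference written inside /library/compiler/tests:
print(absolutify('/library/compiler/tests', 'point2d'))
# -> /library/compiler/tests/point2d
```

Under this rule the `relative: { :point2d: }` declaration in the test and the absolute form name the same object, which is exactly what test-prototype asserts.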
From: Mark L. <ma...@gl...> - 2005-04-22 17:09:32
Here is my list of things I want to talk about this afternoon.... I'll bring
paper copies as well.

- Mark

Language Changes

Syntax

  Links & Paths
  ??? Need a syntax for them: is it -> followed by a path expression? Is
      this a link or a path?

  Object Construction
  ??? Switch from { ...; ... } to [ ...; ... ] for all objects, not just
      arrays
  ??? Support commas as well as semicolons as separators?
  ??? Syntax for inheriting vs. instancing: Currently it is { <:/foo: ... }
      vs. { :/foo: ... }. I'm not entirely certain we need both... In any
      case, do we like the overuse of colons, or is it time for something
      else?
  ??? Syntax for indexed members. Do we allow index specifications?
          [ 3: "three"; 12: "dozen" ]
  ??? Arrays currently use a different default prototype. How to specify
      that without making everyone write [ :/library/base/array: ... ] ?

  Use Declarations
  ??? Support use /foo/bar and use /foo/bar as baz to mean that you can
      refer to bar (in the first case) or baz (in the second) as a stand-in
      for $/foo/bar. Is this syntax correct? Or should it be
      use /foo/bar = baz or some such?

  Copy
  ??? If we change to asis as default semantics (see below), then what
      becomes the syntax for forcing a copy?

  Operators
  ??? Drop prefix operators, or only allow a few built-in ones.
  ??? Drop user-spelled operators and have a fixed set.

Semantics

  Object Construction
  ??? What is the semantics of object construction? Currently there are two:

          x := { :/foo/bar: name: "widget"; size: 42 }
      -- is equivalent to --
          x := primitive object new w/prototype $/foo/bar
          x.name := "widget"
          x.size := 42

      -- whereas --

          x := { <:/foo/bar: name: "widget"; size: 42 }
      -- is equivalent to --
          x := primitive object new w/prototype $/foo/bar
          x.add-member("name", "widget")
          x.add-member("size", 42)

      One is used for instancing, one for sub-classing.
  ??? Should arrays continue to have a separate prototype from object? Or
      should all the array methods just be in object?

  Assignment Semantics
  ??? Are we going to change to asis as the default?

  Tests
  ??? The ? test seems backwards - it is true for all but undefined. But I
      always seem to be writing ~x? as I want the opposite. Perhaps it
      should be true only for undefined (and perhaps error too?).

Thoughts on Ajax

  interactive feel is the key
  - not refreshing the whole page if possible
  - generally non-blocking feel to UI

  session state is an issue and maybe we can do something about it
  - stored in Wheat tree on client?
  - shipped back and forth with each request?
  - how to make this automatic?

  enabling the creation of REST applications
  - we've made generating the page easier (I think)
  - we've not made the connection between URL/POST and action easier

  client side rendering
  - can be seen as an extension of <img>
  - in fact you want this to operate the same way
  - can/should we use <iframe> ?

  enabling shared workspaces
  - should be the key driving customer story
  - automatic change stuff?
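On the ? test question above: the two candidate polarities are easy to state precisely. A sketch in Python, with a sentinel standing in for Wheat's undefined (the names are illustrative only, not Wheat semantics):

```python
UNDEFINED = object()  # stand-in for Wheat's undefined value

def is_defined(x):
    """Current semantics of '?': true for all but undefined."""
    return x is not UNDEFINED

def is_undefined(x):
    """Proposed flip: true only for undefined (the check Mark keeps
    writing as ~x? today)."""
    return x is UNDEFINED

# Under the current rule, zero and empty string still count as defined:
assert is_defined(0) and is_defined("") and not is_defined(UNDEFINED)
# Under the proposed rule, the common "is it missing?" check is direct:
assert is_undefined(UNDEFINED) and not is_undefined(0)
```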
From: Mark L. <ma...@gl...> - 2005-04-22 01:21:18
Friends -

I'm thinking that we need a little direction setting and design pow-wow
tomorrow. We can meet at zLabs at 1:30 (w/Kragen) and discuss some items.

I'd like to set the project priorities. In particular, I think it is
important to agree on some critical language issues soon (since they will be
hard to change later) -- we can leave more weird language things that are
just add-ons for later. I'd also like to discuss and review our experience
so far and get on the table what we think Wheat needs. In particular, I'd
like to present my take on the AJAX stuff and see what we want to take on.
I'll try to send out some lists of stuff tonight.

I'd also like to keep this planning and discussion portion of tomorrow
short: an hour and a half at most. While last week's 5 hour free-ranging
discussion about AJAX was really good and gave lots of food for thought - I
don't think we need to repeat it.

- Mark
From: Kragen S. <ksi...@co...> - 2005-04-21 05:07:09
This is slightly off-topic. I had forgotten about this hack I made a few
years ago:

http://lists.canonical.org/pipermail/kragen-hacks/2002-February/000317.html

I built a prototype-oriented object system based on Self (you know, a slot
foo gives you getter and setter methods foo: and foo) with a hierarchical
memory system using symbolic links for non-hierarchical references,
including a symbolic link in each object to its prototype. This was inspired
by Martin Hinsch's hack "woosh", which is now found at
http://132.187.24.1/~martin/woosh/

I didn't build a language for it, though. I used bash. Here's the definition
of "basicobject/getprop":

    #!/bin/bash -e
    # Intended to be called by other names.
    # defprop makes symlinks to this script.
    cat "$METHODOBJ/.$METHOD"

And here's "basicobject/new":

    #!/bin/bash -e
    # 'new' --- normal way to make a new object like an existing one
    if [[ $# -lt 2 ]]; then
        echo "Usage: oo obj new dest [args...] creates a copy of 'obj' in 'dest'." >&2
        exit 5
    fi
    obj="$1"
    dest="$2"
    shift; shift
    if [[ -e "$dest" ]]; then echo "$dest already exists" >&2; exit 5; fi
    trap 'rm -rf "$dest"' EXIT
    cp -a "$obj" "$dest"
    oo "$dest" init "$@"
    trap '' EXIT

Pretty creepy, eh? Wheat has a lot more promise as a programming environment
because it has a reasonable programming language and stands a chance of
having decent performance. Still, it's kind of weird that I was thinking
along these lines in 2002, and I'd completely forgotten about it by late
2004 when Mark came along. Some unfortunate person in New York searched
Google for "object oriented bash shell" and found this post, which reminded
me that it happened. It's too bad all the code is trapped in the antiquated
uuencode format...
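The scheme Kragen describes (slot lookup falls through the prototype link; "new" copies the object wholesale, cp -a style, keeping the same prototype link) can be modeled in a few lines of Python without the filesystem. The class and method names here are invented for illustration:

```python
class ProtoObject:
    """Prototype-based object: unknown slots delegate upward through the
    prototype link, like the symlink-to-prototype in the bash hack."""
    def __init__(self, prototype=None, **slots):
        self.prototype = prototype
        self.slots = dict(slots)

    def get(self, name):
        obj = self
        while obj is not None:
            if name in obj.slots:
                return obj.slots[name]
            obj = obj.prototype  # delegate to the prototype
        raise AttributeError(name)

    def new(self, **slots):
        """Like 'basicobject/new' (cp -a): copy the slots, keep the same
        prototype link, then apply init arguments."""
        child = ProtoObject(prototype=self.prototype)
        child.slots.update(self.slots)
        child.slots.update(slots)
        return child

base = ProtoObject(x=5, y=6)
point = ProtoObject(prototype=base)  # delegates x and y to base
copy = point.new(x=10)               # copied object, same prototype
print(copy.get('x'), copy.get('y'))  # -> 10 6
```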
From: Kragen S. <ksi...@co...> - 2005-04-18 17:59:34
On Sat, 2005-04-16 at 13:59 -0400, Jim Kingdon wrote:
> [quoting someone else]
> > Okay, okay, this is getting rather ahead of ourselves..... But it is
> > a curious idea, no?
>
> Isn't it more interesting to give wheat something truly novel now,
> rather than fleshing out details first? The details won't do any good
> unless wheat has some compelling advantage.

Being able to build Ajax apps without the Solomonic split between the parts
written in JavaScript and the parts written in some other language --- as
described in the opening paragraphs of
<http://lists.canonical.org/pipermail/kragen-tol/2005-April/000769.html> ---
would be a truly novel compelling advantage.

Philip Wadler (et multa alia) is also trying to achieve this in Links ---
<http://lambda-the-ultimate.org/node/view/634>
From: Jim K. <ki...@pa...> - 2005-04-16 17:59:33
> What is most interesting to me about this is that I think it can be
> done in a way that is very clean to the programmer, and degrades to
> normal round trips when the browser can't handle it.

Hmm, I think I see how that works. You need to pass the subject object over
the wire so the client has it, right? And if it has a path or link which
points somewhere else in the object space, it won't work (or won't execute
all in the browser, at least)? This may be another argument against lexical
closures - although it might be that closures could be made to work OK.

The degradation to normal round trips might be very interesting, both for
the javascript-turned-off type cases, but also perhaps in some low-bandwidth
conditions.

> Okay, okay, this is getting rather ahead of ourselves..... But it is
> a curious idea, no?

Isn't it more interesting to give wheat something truly novel now, rather
than fleshing out details first? The details won't do any good unless wheat
has some compelling advantage.

Anyway, I'll be free for hacking this Friday (22 Apr).
From: Mark L. <ma...@gl...> - 2005-04-14 17:20:32
There has been lots of rumbling around the web about AJAX. After all, once
anyone has played with Google Maps, why would they want to go back? In
addition, talks with Rohit Khare, Donovan Preston and Kragen Sitaker have
explored how best to build richer applications with systems like Wheat and
Nevow.

One set of ideas centers around how, if you delegate part of your page to
another object:

    tt-foo(): {
        #expand(subject: #subject.sub-thing)
    }

that expansion could be migrated to take place in the client (!). In fact,
as that object, through its templates, interacts with the user, all of that
interaction might take place inside the already loaded page. What is most
interesting to me about this is that I think it can be done in a way that is
very clean to the programmer, and degrades to normal round trips when the
browser can't handle it.

Meanwhile I've been thinking crazier thoughts: Moving part of the
application to the client is a pain: The language is different (JavaScript),
the environment different (the DOM, or rather the intersection/union of
various DOM implementations), and the programming metaphor a total switch
(from the data objects themselves to once again essentially procedural
coding based on the UI).

What if... What if we ran Wheat in the client? Really! Treat the per-client
state, that we'd rather keep on the client (rather than in the session), as
a Wheat tree of objects rooted at the client. Treat interaction between
client and server as interaction between Wheat objects in different trees
(though all rooted in the same, global Internet tree!)

And since a client based Wheat implementation wouldn't need all the fancy
mount point stuff, I think an implementation wouldn't be that hard in
JavaScript (!). As for compiling methods, we could either 1) write an
interpreter in JavaScript (there are only, what, 16 bytecodes?), or 2)
compile to JavaScript!

Okay, okay, this is getting rather ahead of ourselves..... But it is a
curious idea, no?

- Mark
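On option 1 above: an interpreter for a small bytecode set really is tiny. As a rough illustration only — in Python rather than JavaScript, and with an invented instruction set, not Wheat's actual bytecodes — a stack machine fits in a couple dozen lines:

```python
def run(code, env):
    """Evaluate a tiny stack-machine program.
    Instructions (invented for illustration): PUSH, LOAD, ADD, MUL."""
    stack = []
    for op, *args in code:
        if op == 'PUSH':
            stack.append(args[0])          # push a literal
        elif op == 'LOAD':
            stack.append(env[args[0]])     # push a variable's value
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'MUL':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown opcode: %s" % op)
    return stack.pop()

# x * 2 + 1, with x bound in the environment:
program = [('LOAD', 'x'), ('PUSH', 2), ('MUL',), ('PUSH', 1), ('ADD',)]
print(run(program, {'x': 20}))  # -> 41
```

The same dispatch loop translates almost mechanically to JavaScript, which is the point: the hard part of "Wheat in the client" is the object tree, not the VM.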
From: Mark L. <ma...@gl...> - 2005-04-14 17:09:13
Philip Wadler, a C.S. professor at the University of Edinburgh, is currently
researching Links, "a programming language for web application development,
building on my experience with XML, Java, and Haskell". The idea is that if
functional programming can prove its worth in this arena, then that will
give it the leg up to dominate the future of C.S. ... or some such... see:

    http://homepages.inf.ed.ac.uk/wadler/

Most recently, he held a one day conference and invited all the big
functional big-wigs (not to mention the guys behind BigWig) to come talk.
Their slides are now available here:

    http://homepages.inf.ed.ac.uk/wadler/linksetaps/

I read all the non-powerpoint ones (sorry, Peyton Jones of Microsoft
Research -- see, proprietary systems are antithetical to an open exchange of
ideas!) Of interest are the language Scala, and JWig / Xact's XML
composition system, which bears close conceptual kinship with our template
system.

I don't think these guys have it. They are a little too wrapped up in
bolstering functional programming and strong typing to think about how to
actually help the problems of internet programming. But I'm sure we can
learn a thing or two or three from them.

- Mark
From: Kragen S. <ksi...@co...> - 2005-04-07 20:26:02
On Thu, 2005-04-07 at 10:32 -0700, Mark Lentczner wrote:
> On Wed, 2005-04-06 at 18:39 -0700, Donovan Preston wrote:
> > This is why I was advocating applying ids to the nodes you want to test
>
> Then the XHTML page would have a level of semantic markup in it --
> which is sort of cool. BUT, it dawns on me that this would
> automatically give you the ids you need.

It is likely to help significantly, but sometimes people do split a single
template variable into two, combine two into one, or eliminate them
entirely. Automated tests are exactly the sort of thing this is intended to
help with.
From: Donovan P. <wh...@ul...> - 2005-04-07 18:48:46
On Apr 7, 2005, at 10:32 AM, Mark Lentczner wrote:
> Kragen had this idea that TinyTemplate should (or could) leave the
> tt:name fields intact after expansion. Then the XHTML page would
> have a level of semantic markup in it -- which is sort of cool. BUT,
> it dawns on me that this would automatically give you the ids you
> need. So, rather than fetching nodes by id attribute, surely you can
> code a utility routine that can fetch nodes by tt:name attribute.

This is a good idea but... Heh. This is the w3c DOM we're talking about
here. ids are pretty much the only sane way to do it. We may be able to cook
something up using the NodeIterator or TreeWalker or whatever DOM extension
that is, but I'm not sure how well supported it is cross-browser. Also, I
have a fuzzy memory of some browsers just silently discarding node
attributes they don't recognize from the DOM (Safari?).

Anyway, we'll figure something out! :-)

dp
From: Mark L. <ma...@gl...> - 2005-04-07 17:32:28
On Wed, 2005-04-06 at 18:39 -0700, Donovan Preston wrote:
> This is why I was advocating applying ids to the nodes you want to test

Kragen had this idea that TinyTemplate should (or could) leave the tt:name
fields intact after expansion. Then the XHTML page would have a level of
semantic markup in it -- which is sort of cool. BUT, it dawns on me that
this would automatically give you the ids you need. So, rather than fetching
nodes by id attribute, surely you can code a utility routine that can fetch
nodes by tt:name attribute.

The only hitches in this scheme are:
1) tt:name values don't have to be unique, as the same expansion could be
   used twice (as in <title tt:name="title">foo</title> ...
   <h1 tt:name="title">foo</h1>)
2) tt:name values could be repeated multiple times (as in table rows)
3) tt:name values may have to be considered hierarchically

These don't seem to pose too much of a problem to me: The tests would be
coded as one of:
1) a single replacement (tt:name="title" should expand to "Bob's Blog"), in
   which case it must expand to the same thing everywhere it appears
2) a list of replacements (tt:name="item"/0 should expand to "apple",
   tt:name="item"/1 should expand to "orange"...)

The third one is a little more tricky, but we just need some way for the
test writer to capture the expected nested expansions:

    inside tt:name="item"/0 there should be
        tt:name="name" expanded to "hammer"
        tt:name="price" expanded to "12.50"
    inside tt:name="item"/1 there should be
        tt:name="name" expanded to "wrench"
        tt:name="price" expanded to "21.00"
    etc...

On Wed, 2005-04-06 at 18:39 -0700, Donovan Preston wrote:
> Should I plan on coming down Friday?

On Apr 7, 2005, at 9:37 AM, Kragen Sitaker wrote:
> I'd love to have the opportunity to work with you on Friday.

I think this is excellent. I'll be there too - but you two should pair this
week. I might be able to get my friend Bruce to come down and pair with
me....

- Mark
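A utility routine that fetches nodes by tt:name attribute, plus the "single replacement must expand identically everywhere" test, could look something like this. The namespace URI and the sample page are made up for illustration; they are not TinyTemplate's actual output:

```python
import xml.etree.ElementTree as ET

# Hypothetical expanded page that kept its tt:name attributes:
PAGE = """<html xmlns:tt="http://example.org/tt">
  <head><title tt:name="title">Bob's Blog</title></head>
  <body><h1 tt:name="title">Bob's Blog</h1></body>
</html>"""

TT_NAME = '{http://example.org/tt}name'  # ElementTree's expanded form

def nodes_by_tt_name(root, name):
    """Collect every element whose tt:name matches, in document order.
    Repeats are expected: the same expansion may appear several times."""
    return [el for el in root.iter() if el.get(TT_NAME) == name]

root = ET.fromstring(PAGE)
titles = nodes_by_tt_name(root, 'title')

# Single-replacement test: every occurrence must expand identically.
assert len(titles) == 2
assert all(el.text == "Bob's Blog" for el in titles)
```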
From: Kragen S. <ksi...@co...> - 2005-04-07 16:38:52
On Wed, 2005-04-06 at 18:39 -0700, Donovan Preston wrote:
> Should I plan on coming down Friday? I would like to work on this, but
> I don't want to be pushy, and besides I have lots of other stuff I
> could be doing instead :-)

I'd love to have the opportunity to work with you on Friday.
From: Jim K. <ki...@pa...> - 2005-04-07 06:34:07
> http://www.wheatfarm.org/wiki/TinyTemplateAttributes
> I'm leaning toward the alternate proposal (the one modeled on Nevow),
> with both short-cuts.

I like the alternate proposal. The short-cuts are OK, I guess, although we
had better only go with this if we like the full form well enough, because
we use prefix a fair bit now, and those usages won't be able to use the
short-cuts in the new scheme.

The medium-verbosity option

    <tt:attribute for="title" name="next-month-title" />

strikes me as unobjectionable. The low-verbosity option

    tt:attributes="title=next-month-title;href=next-month-url"

is hardly thrilling given that it needs all that non-self-explanatory syntax
in the attribute (is the tt name the one before or after the equals?), but
it does have advantages in addition to brevity: it allows one to replace
nested elements with an attribute, which I suspect might be handy sometimes.
So I'll offer this one a 0 on a scale of [-1, 0, +1], I guess.
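Pinning down a parse for the low-verbosity form answers the "before or after the equals" question directly. A sketch, assuming the HTML attribute name sits before the = and the tt name after (this ordering is an assumption, not settled TinyTemplate syntax):

```python
def parse_tt_attributes(value):
    """Parse 'title=next-month-title;href=next-month-url' into a dict
    mapping HTML attribute name -> tt expansion name.  Whitespace around
    the pieces is tolerated, since attribute values get line-wrapped."""
    pairs = {}
    for item in value.split(';'):
        attr, _, tt_name = item.strip().partition('=')
        pairs[attr.strip()] = tt_name.strip()
    return pairs

print(parse_tt_attributes("title=next-month-title;href= next-month-url"))
# -> {'title': 'next-month-title', 'href': 'next-month-url'}
```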
From: Jim K. <ki...@pa...> - 2005-04-07 06:17:41
I could go on at some length about http/html testing (httpunit vs. Selenium
or whatever), but I actually don't think I need to dive too deeply into test
philosophy and how-to to say: TinyTemplateUnit is cool. Let's implement it.

I personally don't expect it to replace all other ways of testing web sites.
But Mark can write cardfile with it, and it can replace the assert-contains
tests we now have.
From: Donovan P. <wh...@ul...> - 2005-04-07 01:39:35
On Apr 6, 2005, at 2:54 PM, Mark Lentczner wrote:
> Donovan:
> > All use some sort of DOM structure to locate subsections of a page,
> > and all forced you to tag nodes for test with unique ids.
> Kragen:
> > My experience ... was that the "invisible" details of web
> > pages changed a lot more than the "visible" parts. Divs would become
> > paragraphs; single pages would split into frames; links would become
> > form submit buttons;
>
> So, these seem at odds with each other: We test by comparing against
> the XHTML structure, but that structure is the thing that keeps
> changing. For the person coding the model objects, and how those
> objects render parts of themselves, this is awful: Every time the
> designer does one of those "invisible" detail changes, the tests for
> the model object fail, even though nothing is really wrong: The test
> just tests the wrong level.

This is why I was advocating applying ids to the nodes you want to test and
testing that the contents are what you expect. Any other mechanism for
locating nodes to test depends on the structure of the DOM, which is bad.
For example:

    <span id="firstName">Donovan</span>

Applying ids by hand is tedious, though, which is why I was suggesting it
would be nice to have uniform automatic node id generation. (May not be
possible, but it would be nice.)

> Now, I still believe that tests at the XHTML level are important - but
> those are more tests of the templates themselves than how the model
> objects render.

I see them more as a test of whether the app works at all. If you try to
render a page or an object in a page and get an exception or get totally
the wrong output, these tests should at least be able to catch that. The
model objects can't render without a template, so I don't really see why
you are making a distinction there.

> Donovan:
> > The API I have settled on as lowest common denominator required
> > functionality is:
> > assert(nodeId, nodeContents)
> > follow(nodeId) -- follows a link
> > post(formId, {fieldName: fieldValue, fieldName2: fieldValue2}) --
> > posts a form
>
> I noticed this about your in-browser tests. These very much have the
> feel of acceptance tests, not unit tests. Mostly because rather than
> explore the cases of how individual objects render, it follows a
> narrative of a user experience.

Yes. I never, ever call these unit tests. Unit tests should test that your
units of functionality fit together as expected, which is what you are
proposing about running a template and making sure the right expanders ran.
Unit tests are important, but nothing which renders pages as a whole and
makes assertions about the pages as a whole should be called a unit test --
you're not testing a unit, you are testing whether the system functions.
Thus, I call these functional tests. They test whether the system functions
as a whole when put together, from a user interface perspective, rather
than whether the units of code fulfill the contracts they expose at the
boundaries of abstraction.

Acceptance tests in my mind are a big long list of things that people check
by hand. If some of the acceptance tests can be automated, that's cool, but
the distinction with acceptance tests for me is that the list is really
long and exhaustive.

> Indeed, when I looked at the examples for things like XMLUnit,
> HTMLUnit, JWebTest and Canoo Webtest, all of them have the same taste:
> In fact, often the examples only superficially tested that the
> content of the page contained what was expected (testing only the
> title or a string or two.)

Seems pretty useless to me, although more useful than nothing, which is why
people write them, I guess.

> Now, I realize there is a fuzzy line between acceptance test and unit
> test in this case, but the differences in terms of who writes the
> tests and what sort of things they cover are still there.
>
> So, I think I still want to do the "template trace" style of tests so
> that the model object author can write server side unit tests that
> prove that:
> a) the object implements all the tt- names the way they are expected
> (replacing with strings, skipping, keeping, or expanding in a loop
> properly).
> b) the template and the object agree on the set of tt- names
> (template doesn't ask for things the object doesn't expand, and the
> template does ask for everything the object expects it to)
> There is no need for this kind of test to involve an end-user browser,
> since the browser doesn't influence this level of rendering.

Absolutely. Unit tests.

> In addition, we also want to ensure the application as a whole works,
> and this kind of test is written by the template designer and customer
> to ensure that the links work as expected, and that the pages
> represent the correct things. This seems to be the kind of test that
> I saw in nevow, and I agree that it makes lots of sense to have this
> sort of thing actually driven through/by a live browser. I love that
> it gives you a great way to test cross-browser compatibility of the
> application.

Yep. Functional tests :-)

> I wonder if perhaps this splitting into two layers of tests is what's
> missing in HTTP/HTML application frameworks. As I said, everything I
> found had people throwing their hands up. Perhaps, by introducing a
> layer here, we can have what we need in each piece, rather than trying
> to come up with one piece that does it all.

Totally agree, and I'm sorry I wasn't clear that this was my stance
earlier. It's been clear to me for a few years that the distinction is
necessary, and that both kinds are necessary.

> Looking at Selenium, why did nevow need its own version? Why couldn't
> Wheat just use Selenium?

Selenium has some architectural flaws that I initially thought I could work
around, but then decided were insurmountable. I may come back to Selenium
after I have more experience in the area and attempt to provide patches for
some of the more egregious flaws, but for my purposes livetest was easier
to write from scratch. livetest has more features that I needed in terms of
browser-client coordination, to allow the server to wait for things in the
client and the tests to assert things about the server state. If you want,
we can try to use Selenium with Wheat but I have a hunch it'll actually be
easier to write something specific for Wheat, which is why I proposed it.

Should I plan on coming down Friday? I would like to work on this, but I
don't want to be pushy, and besides I have lots of other stuff I could be
doing instead :-)

Donovan
From: Mark L. <ma...@gl...> - 2005-04-06 21:54:31
Donovan:
> All use some sort of DOM structure to locate subsections of a page,
> and all forced you to tag nodes for test with unique ids.

Kragen:
> My experience ... was that the "invisible" details of web
> pages changed a lot more than the "visible" parts. Divs would become
> paragraphs; single pages would split into frames; links would become
> form submit buttons;

So, these seem at odds with each other: We test by comparing against the
XHTML structure, but that structure is the thing that keeps changing. For
the person coding the model objects, and how those objects render parts of
themselves, this is awful: Every time the designer does one of those
"invisible" detail changes, the tests for the model object fail, even
though nothing is really wrong: The test just tests the wrong level.

Now, I still believe that tests at the XHTML level are important - but
those are more tests of the templates themselves than how the model objects
render.

Donovan:
> The API I have settled on as lowest common denominator required
> functionality is:
> assert(nodeId, nodeContents)
> follow(nodeId) -- follows a link
> post(formId, {fieldName: fieldValue, fieldName2: fieldValue2}) --
> posts a form

I noticed this about your in-browser tests. These very much have the feel
of acceptance tests, not unit tests. Mostly because rather than explore the
cases of how individual objects render, they follow a narrative of a user
experience. Indeed, when I looked at the examples for things like XMLUnit,
HTMLUnit, JWebTest and Canoo Webtest, all of them have the same taste: In
fact, often the examples only superficially tested that the content of the
page contained what was expected (testing only the title or a string or
two.) Now, I realize there is a fuzzy line between acceptance test and unit
test in this case, but the differences in terms of who writes the tests and
what sort of things they cover are still there.

So, I think I still want to do the "template trace" style of tests so that
the model object author can write server side unit tests that prove that:
a) the object implements all the tt- names the way they are expected
   (replacing with strings, skipping, keeping, or expanding in a loop
   properly).
b) the template and the object agree on the set of tt- names (template
   doesn't ask for things the object doesn't expand, and the template does
   ask for everything the object expects it to)
There is no need for this kind of test to involve an end-user browser,
since the browser doesn't influence this level of rendering.

In addition, we also want to ensure the application as a whole works, and
this kind of test is written by the template designer and customer to
ensure that the links work as expected, and that the pages represent the
correct things. This seems to be the kind of test that I saw in nevow, and
I agree that it makes lots of sense to have this sort of thing actually
driven through/by a live browser. I love that it gives you a great way to
test cross-browser compatibility of the application.

I wonder if perhaps this splitting into two layers of tests is what's
missing in HTTP/HTML application frameworks. As I said, everything I found
had people throwing their hands up. Perhaps, by introducing a layer here,
we can have what we need in each piece, rather than trying to come up with
one piece that does it all.

Looking at Selenium, why did nevow need its own version? Why couldn't Wheat
just use Selenium?

- Mark
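Point (b) of the "template trace" tests above reduces to a set comparison between the tt- names the template requests and those the object expands. A sketch with hypothetical names (nothing here is Wheat's actual test API):

```python
def check_trace(template_names, object_expansions):
    """Compare the tt- names a template requests against those an object
    knows how to expand, reporting both kinds of mismatch."""
    requested = set(template_names)
    provided = set(object_expansions)
    return {
        'unexpanded': sorted(requested - provided),  # template asks, object lacks
        'unused': sorted(provided - requested),      # object offers, template ignores
    }

report = check_trace(
    template_names=['title', 'item', 'price'],
    object_expansions=['title', 'item', 'name'],
)
print(report)  # -> {'unexpanded': ['price'], 'unused': ['name']}
```

A passing trace test is simply one where both lists come back empty, and it runs entirely server-side, with no browser involved.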
From: Kragen S. <ksi...@co...> - 2005-04-06 20:48:41
On Wed, 2005-04-06 at 11:49 -0700, Donovan Preston wrote:
> On Apr 6, 2005, at 9:10 AM, Mark Lentczner wrote:
> > Everyone seems to say that of course you should do it, then throws
> > their hands up and says "oh well, the best we can do is lots of
> > string matches". At best some people do lots of XPath or DOM based
> > tests. They all acknowledge how brittle it all is, and simply
> > shrug. Or they suggest that the final template layer be very thin
> > and not tested. I think we have the opportunity to do something
> > much better here.
>
> I have implemented 4 functional testing frameworks for woven/nevow
> from the ground up:
>
> - One which used twisted.web.client and twisted.web.microdom (for
>   woven, unreleased (probably included in very old quotient))
> - One which used newclient (never finished) and twisted.web.microdom
>   (for nevow, unreleased (probably included in old quotient))
> - One which used mechanize_ (quotient.test.webtest for quotient)
> - One based on ideas from selenium_ which uses livepage and a bunch
>   of javascript (nevow.livetest for nevow 0.4)
> ...
> The first three frameworks used some sort of HTTP client library and
> parsed the resulting HTML. The selenium/livetest approach is to
> instead tell the browser to fetch a URL and perform some assertion
> for you. There are big advantages and some disadvantages to this
> approach, but I believe it is the only sane way to perform web
> functional testing.

I was pretty happy with WWW::Mechanize in Perl, better than any alternative I'd tried. I'm delighted to see a Python version.

My experience with writing HTTP functional tests, for one particular large application and for a bunch of embedded web-server systems that did basically the same thing, was that the "invisible" details of web pages changed a lot more than the "visible" parts.
Divs would become paragraphs; single pages would split into frames; links would become form submit buttons; template variables would be renamed or recombined; form fields would be renamed; etc.

I spent a little time comparing WinRunner against Mechanize. WinRunner drives an Internet Explorer browser, much as Selenium does, watching what you click on and type into. It remembers some large set of properties of the things you click on, and when running the test script later, it uses this large set of properties to guess which object on the new page corresponds to the one you clicked on while recording the script, if any.

I think I would have recommended using WinRunner, despite its inability to abstract out parts of a test, if we could have run it using Mozilla under Linux (in, say, Xvnc or Xvfb).
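Kragen's description of WinRunner's guessing could be sketched as a property-scoring lookup: record several properties of the element you interacted with, then on a later page pick the candidate that shares the most of them, so the test survives some "invisible" drift. This is an illustration of the idea only, not WinRunner's actual algorithm.

```python
# Sketch of WinRunner-style fuzzy element matching: score each candidate
# element (represented as a dict of properties) against the recorded one
# and pick the best, tolerating changes in any single property.
def best_match(recorded, candidates):
    """Return the candidate sharing the most property values with the
    recorded element, or None if no candidate shares any."""
    def score(candidate):
        return sum(1 for key, value in recorded.items()
                   if candidate.get(key) == value)
    best = max(candidates, key=score, default=None)
    return best if best is not None and score(best) > 0 else None

# A recorded link that later became a form submit button: the tag
# changed, but the surviving properties still identify it.
recorded = {"tag": "a", "text": "Check out", "name": "checkout"}
page = [
    {"tag": "button", "text": "Check out", "name": "checkout"},
    {"tag": "a", "text": "Home", "name": "home"},
]
assert best_match(recorded, page) is page[0]
```

The appeal over exact id matching is visible in the example: the link-to-button change Kragen mentions defeats a tag-based or id-based lookup but only costs one point here.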
From: Donovan P. <wh...@ul...> - 2005-04-06 18:49:31
On Apr 6, 2005, at 9:10 AM, Mark Lentczner wrote:
> Folks -
>
> I'm looking for some comments on
> http://www.wheatfarm.org/wiki/TinyTemplateAttributes
> I'm leaning toward the alternate proposal (the one modeled on Nevow),
> with both short-cuts.

I have already explained why I favor the verbose form (uniformity), and I like the proposed shortcuts even though they are not uniform. So this looks good.

> I think it is time to change the attribute naming syntax in Tiny
> Template: I've implemented some of the ideas about testing templates
> (see: http://www.wheatfarm.org/wiki/TinyTemplateUnit), and it isn't
> worth the complications to make them support the current method of
> handling attributes.
>
> By the way, I'm proceeding implementing the CardFile application and
> using it as a driving customer story for implementing changes to the
> system. Right now I'm thwarted because writing test cases for
> generated pages is such a tedious and error prone affair. I really
> think getting this template test unit stuff correct will make writing
> applications with proper tests much easier both now and in the future.
>
> Also, having surveyed what's written out there about testing HTML/HTTP
> applications, I have to say that I'm generally underwhelmed. Everyone
> seems to say that of course you should do it, then throws their hands
> up and says "oh well, the best we can do is lots of string matches".
> At best some people do lots of XPath or DOM based tests. They all
> acknowledge how brittle it all is, and simply shrug. Or those suggest
> that the final template layer be very thin and not tested. I think we
> have the opportunity to do something much better here.
I have implemented 4 functional testing frameworks for woven/nevow from the ground up:

- One which used twisted.web.client and twisted.web.microdom (for woven, unreleased (probably included in very old quotient))
- One which used newclient (never finished) and twisted.web.microdom (for nevow, unreleased (probably included in old quotient))
- One which used mechanize_ (quotient.test.webtest for quotient)
- One based on ideas from selenium_ which uses livepage and a bunch of javascript (nevow.livetest for nevow 0.4)

All use some sort of DOM structure to locate subsections of a page, and all forced you to tag nodes for test with unique ids.

The API I have settled on as the lowest-common-denominator required functionality is:

    assert(nodeId, nodeContents)
    follow(nodeId) -- follows a link
    post(formId, {fieldName: fieldValue, fieldName2: fieldValue2}) -- posts a form

Selenium allows you to locate nodes using XPath or javascript expressions (document.form.field).

The first three frameworks used some sort of HTTP client library and parsed the resulting HTML. The selenium/livetest approach is to instead tell the browser to fetch a URL and perform some assertion for you. There are big advantages and some disadvantages to this approach, but I believe it is the only sane way to perform web functional testing.
Disadvantages:

- Requires a real browser present to run
- Requires writing JavaScript, which may fail
- Requires writing cross-browser compatibility code

Advantages:

- Can actually assert whether the app works in multiple browsers
- Can execute JavaScript
- Can extract information from dynamically generated or mutated portions of a page
- An advantage of livetest but not selenium is the ability for the client and server to coordinate test continuation; for example, the server can pause running tests until some event occurs in the browser

There may be another solution: writing a server-side testing framework which incorporates spidermonkey_ and possibly the DOM-construction code from Mozilla (no idea where this is or if it is feasible to extract it?) to give the javascript-execution and DOM-examination capabilities of the client-side approach without requiring a browser to run the tests. But this sounds like a lot of work for little benefit, and indeed the loss of the ability to run the same test suite in multiple browsers to test cross-browser compatibility.

An advantage of my current livetest module is that it is very short. I believe I could produce a livetest clone for Wheat with very little work, and would be very interested in doing this. I am down in Palo Alto again this week; however, I have a very full schedule. Perhaps I could take time off on Friday afternoon again to help design or implement whatever approach is decided upon. This is a very difficult problem, but it is one which is essential to solve in order to have a sane development environment.

.. _mechanize: http://wwwsearch.sourceforge.net/mechanize/
.. _selenium: http://selenium.thoughtworks.com/index.html
.. _spidermonkey: http://www.mozilla.org/js/spidermonkey/

Donovan
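Donovan's lowest-common-denominator API (assert, follow, post) could be sketched like this. The class and method names are mine, not Wheat's or nevow's, and the page lookup is a crude regex against a pluggable fetch function; a real harness would fetch over HTTP (or drive a browser) and use a proper HTML parser.

```python
# Minimal sketch of the assert/follow/post functional-test API, keyed on
# node ids, over any fetch(url, data=None) function that returns markup.
import re

class PageDriver:
    def __init__(self, fetch, start_url):
        self.fetch = fetch
        self.html = fetch(start_url)

    def _node(self, node_id):
        # Crude id-based lookup of an element and its text content
        pattern = r'<(\w+)[^>]*\bid="%s"[^>]*>(.*?)</\1>' % re.escape(node_id)
        match = re.search(pattern, self.html, re.S)
        if match is None:
            raise AssertionError("no node with id %r" % node_id)
        return match

    def assert_node(self, node_id, contents):
        # assert(nodeId, nodeContents)
        actual = self._node(node_id).group(2).strip()
        assert actual == contents, "%r != %r" % (actual, contents)

    def follow(self, node_id):
        # follow(nodeId) -- follows the link in the identified node
        href = re.search(r'href="([^"]+)"', self._node(node_id).group(0))
        self.html = self.fetch(href.group(1))

    def post(self, form_id, fields):
        # post(formId, {field: value}) -- posts to the form's action URL
        action = re.search(r'action="([^"]+)"', self._node(form_id).group(0))
        self.html = self.fetch(action.group(1), data=fields)
```

Because the fetch function is pluggable, the same three-method surface can sit over an HTTP client for headless runs or over a browser-driving backend like livetest, which is the layering the thread is circling around.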
From: Mark L. <ma...@gl...> - 2005-04-06 16:09:58
Folks -

I'm looking for some comments on http://www.wheatfarm.org/wiki/TinyTemplateAttributes
I'm leaning toward the alternate proposal (the one modeled on Nevow), with both short-cuts.

I think it is time to change the attribute naming syntax in Tiny Template: I've implemented some of the ideas about testing templates (see: http://www.wheatfarm.org/wiki/TinyTemplateUnit), and it isn't worth the complications to make them support the current method of handling attributes.

By the way, I'm proceeding with implementing the CardFile application and using it as a driving customer story for implementing changes to the system. Right now I'm thwarted because writing test cases for generated pages is such a tedious and error-prone affair. I really think getting this template test unit stuff correct will make writing applications with proper tests much easier, both now and in the future.

Also, having surveyed what's written out there about testing HTML/HTTP applications, I have to say that I'm generally underwhelmed. Everyone seems to say that of course you should do it, then throws their hands up and says "oh well, the best we can do is lots of string matches". At best some people do lots of XPath or DOM based tests. They all acknowledge how brittle it all is, and simply shrug. Or they suggest that the final template layer be very thin and not tested. I think we have the opportunity to do something much better here.

- Mark
From: Mark L. <ma...@gl...> - 2005-04-05 02:53:34
On Apr 4, 2005, at 7:24 PM, Kragen Sitaker wrote:
> This does a reasonable job of syntax-highlighting Wheat code in Vim.

Cool!

> Where should things like this go inside the source repository?

How about in /misc? I'll add the SubEthaEdit mode files there too.

> And where on the web site?

Well, that is harder, since the web site doesn't yet have any real depth or organization. Before our next major release we'll need to clean that up.

> " Filetype autodetection. My .vimrc says:
> "
> " au BufRead,BufNewFile *.ws setf wheat
> " au! Syntax wheat source $HOME/wheat.vim
> " Should probably also do something along the lines of:
> " if did_filetype()
> "   finish
> " endif
> " if getline(1) =~ '^wheat(version:'
> "   setf wheat
> " endif

Might want to add something like:

    " Wheat files all use spaces, not tabs
    autocmd BufRead,BufNewFile *.ws set expandtab

- Mark
From: Kragen S. <ksi...@co...> - 2005-04-05 02:25:53
This does a reasonable job of syntax-highlighting Wheat code in Vim. It even handles some of the hard stuff, like multiline strings, multiline comments, nested multiline strings, and nested multiline comments. It also allows Vim to do a reasonable job of folding Wheat code according to syntax, and makes the #, *, and ^] commands do pretty much the right thing. (It doesn't, sadly, make ctags do the right thing.) For whatever reason, syncing on comments and handling of the "." character seem a little flaky.

Where should things like this go inside the source repository? And where on the web site?

" Vim syntax file
" Language:    Wheat
" Maintainer:  Kragen Sitaker <kr...@po...>
" Last Change: 2005-04-04
" Location:    http://wheatfarm.org/wheat.vim
" Remark:      Incomplete but better than nothing.

setlocal iskeyword+=-

syn keyword Conditional if else
syn keyword Repeat while
syn keyword Keyword return
syn keyword Boolean true false
syn match Number "[0-9]\+"
syn match Float "\.[0-9]\+"
syn match Float "[0-9]\+\.[0-9]*"
syn match Delimiter "[:;]"
" what to do with #, \, and $? Or .?
" Clearly [:*+-/%]= is too restrictive, but I don't actually know Wheat's
" grammar well enough to know what the grammar for new in-place operators is.
syn match Operator "!\|!!\|?\|??\|->\|&&\|||\|[*+-/%]=\?\|:=\|[!=]=\|\~"
syn match Constant "???\|!!!"
syn match Identifier "[-a-zA-Z][-a-zA-Z0-9]*"
syn match Identifier "'[^']*'"
syn match Comment "``.*"
syn region wheatBlockComment start="``(" end="``)" contains=wheatBlockComment fold
hi def link wheatBlockComment Comment

" For folding, we define these separately from the other delimiters. To use
" this, :set foldmethod=syntax and move around doing zo, zc, and maybe zm and
" zr.
syn region wheatSexpr matchgroup=Delimiter start="{" end="}" contains=ALL fold
syn region wheatSexpr matchgroup=Delimiter start="\[" end="]" contains=ALL fold
syn region wheatSexpr matchgroup=Delimiter start="(" end=")" contains=ALL fold
syn region String start=+"+ end=+"+ skip=+\\"+ fold
syn region wheatString start=+""(+ end=+"")+ contains=wheatString fold
hi def link wheatString String

" TODO:
" ellipses?
" more operators
" Should we really treat !!! as a constant? Incidentally the !!/!!! ambiguity
" confuses Vim.

" Filetype autodetection. My .vimrc says:
"
"   au BufRead,BufNewFile *.ws setf wheat
"   au! Syntax wheat source $HOME/wheat.vim
" Should probably also do something along the lines of:
"   if did_filetype()
"     finish
"   endif
"   if getline(1) =~ '^wheat(version:'
"     setf wheat
"   endif
From: Mark L. <ma...@gl...> - 2005-03-31 05:33:23
Friends -

The time has come to tackle closures... Brace (or square bracket) yourself for the discussion at:

http://www.wheatfarm.org/wiki/MarkThinksAboutClosures

I tried to motivate it in the context of the real-world expander situation. Please comment....

- Mark
From: Jim K. <ki...@pa...> - 2005-03-28 02:32:04
> some sketching out of a task-card style system...
> ...open up stack.html in a browser and check it out!

Interesting. There's the usual problem with making assumptions about the browser window width. If CSS is unable to let us adapt to the actual window width (a subject about which I mostly profess ignorance), I suppose the next best thing would be some kind of setting for "2 wide", "3 wide", "4 wide", etc.

The cards probably take up more space than you want for a "shuffle them around" type page. I'm thinking just the title ("Sew Sails", etc.) would appear on the stack page, with some way of getting at the text. Maybe a mouseover (although it isn't clear to me there is a good way to do that). Or a details pane (one per page) or something.

Can a card be in more than one stack? For an open source project, I've been wondering about a design where there is a central repository of cards, and each developer (or interested user) has one or more stacks. For example: "jim's high priority cards", "things jim is unlikely to work on but wants to keep an eye on", "jim's wild ideas", "release goals for r2", etc.

I'm not sure what should happen when the user edits a card: whether it would be copy-on-write (with some kind of link between the two, perhaps), or whether the user has to explicitly choose between "edit shared card" or "make a new card, leaving the old one alone", or what.
From: Jim K. <ki...@pa...> - 2005-03-17 07:23:46
I have another contract for the next 4 Fridays, so I won't be getting together to do Wheat at those times. Oddly enough, I'm not sure my output of Wheat code has declined in the few days I've had this work, despite drastically reduced time available to work on Wheat. There's some kind of paradox here about what gets me coding...