From: Tara S. <te...@cl...> - 2001-11-06 18:21:47
|
I've been digging around a bit in the script files, especially the parts that output html (templates et al) - because I'd like to customize the templates to output xhtml strict.

The main templates I have found, as well as the stylesheets, but I'm in more trouble over the transformation rules - the markup the wiki generates depending on user input. Where will I find that? Is it all in transform.php (and I just have to stare at it long enough to let it sink into my brain), or is it scattered amongst other files?

I also took a look at the linking scheme. I personally like WikiWords a lot, and would like to modify the script so that it is not possible to create a page with a title like "this is my page" or "whatever" => it would have to be "ThisIsMyPage" and "Whatever". The idea being that the name of the page must start with a capital letter, and cannot contain any spaces. What part of transform.php should I be digging in for that? I'm a bit confused, because it seems to be just around the settings for allowing HTML (which I don't want), and I don't want to mess things up too much ;)

Other than that, if you think my above idea is a bad one, do let me know!

Thanks a lot for help, comments, suggestions, etc.

-- 
Je réponds au mieux de mes connaissances
Climb to the Stars! - http://climbtothestars.org/
no tables: http://climbtothestars.org/coding/tableless/
Pompeurs Associés - http://pompage.net/ |
From: Adam S. <ad...@pe...> - 2001-11-06 20:08:31
|
> I've been digging around a bit in the script files, especially the
> parts that output html (templates et al) - because I'd like to
> customize the templates to output xhtml strict.

that sounds like a good thing, can you mail them back to the list when you're done making them xhtml compliant?

> The main templates I have found, as well as the stylesheets, but I'm
> in more trouble over the transformation rules - the markup the wiki
> generates depending on user input. Where will I find that? Is it all
> in transform.php (and I just have to stare at it long enough to let it
> sink into my brain), or is it scattered amongst other files?

i believe it's all in transform.php but i'm very much not the expert.

> I also took a look at the linking scheme. I personally like WikiWords
> a lot, and would like to modify the script so that it is not possible
> to create a page with a title like "this is my page" or "whatever" =>
> it would have to be "ThisIsMyPage" and "Whatever".

this gets tricky pretty fast. should "ShouldIBe" be allowed as a wiki word? i'd generally encourage you to enforce this by editorial control rather than forcing it in the wiki unless you're really sure you want it.

adam. |
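The restriction Tara proposes can be sketched as a small validator (a hypothetical helper, not part of PhpWiki itself; the rule is her "must start with a capital letter, no spaces"):

```php
<?php
// Hypothetical helper -- not part of PhpWiki.
// Rule as proposed: the name must start with a capital letter
// and contain no spaces (here: letters and digits only).
function is_valid_pagename($name) {
    return preg_match('/^[A-Z][A-Za-z0-9]*$/', $name) === 1;
}

var_dump(is_valid_pagename('ThisIsMyPage'));    // bool(true)
var_dump(is_valid_pagename('this is my page')); // bool(false)
var_dump(is_valid_pagename('ShouldIBe'));       // bool(true)
?>
```

Note that a rule like this cannot tell "ShouldIBe" apart from an ordinary title, which is exactly the trickiness Adam points out.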
From: Tara S. <te...@cl...> - 2001-11-06 20:35:01
|
Adam Shand wrote:

>> I've been digging around a bit in the script files, especially the
>> parts that output html (templates et al) - because I'd like to
>> customize the templates to output xhtml strict.
>
> that sounds like a good thing, can you mail them back to the list when
> you're done making them xhtml compliant?

If I manage to get there, I certainly will (though from what I've seen, there are some <br> and <hr> thingies in other files too - I'll have to do a thorough search to make sure I haven't forgotten anything).

But before that, I need to get the wiki running - been at it for a couple of days, with help from various mysql-literate people (I'm not, unfortunately), and it still won't work: http://tara.scdi.org/newwiki/

Can anybody help? Is PearDB really really broken?

thanks

Tara

-- 
Je réponds au mieux de mes connaissances
Climb to the Stars! - http://climbtothestars.org/
no tables: http://climbtothestars.org/coding/tableless/
Pompeurs Associés - http://pompage.net/ |
From: Jeff D. <da...@da...> - 2001-11-06 20:55:37
|
On Tue, 06 Nov 2001 21:31:13 +0100 "Tara Star" <te...@cl...> wrote:

> Can anybody help? is PearDB really really broken?

I sent the following reply to you awhile ago, but apparently due to your DNS transients or whatever, it's still in the sendmail queue at this end. (Do you have another e-mail address besides @climbtothestars.org I could use in the mean time?)

My best guess (which could well be wrong) is that you're running a very old version of MySQL --- at this point I don't think that the PEAR code is the cause of the problem.

===Here's the message waiting in my sendmail queue:===

Hi Tara,

It looks like your mysql doesn't like the "USING(id)" syntax, but then again who knows? What version of mysql are you running? Can you try running that query manually (using the 'mysql' program)? "That query" being:

  select phpwiki_page.id from phpwiki_nonempty
    inner join phpwiki_page using(id)
    where pagename='HomePage';

If that fails, try:

  select phpwiki_page.id from phpwiki_nonempty
    inner join phpwiki_page on phpwiki_page.id=phpwiki_nonempty.id
    where pagename='HomePage';

On Tue, 06 Nov 2001 17:58:46 +0100 "Tara Star" <te...@cl...> wrote:
> Tara Star wrote:
> >
> > http://www.climbtothestars.org/newwiki/ (if you don't get a nasty nasty
> > mysql error, it means the problem is fixed)
>
> dns is propagating sluggishly. Try http://tara.scdi.org/newwiki/ instead.
>
> Thanks for your help, I'm still stuck! |
From: Tara S. <te...@cl...> - 2001-11-06 21:06:26
|
Thanks a lot, Jeff : )

seems I'm getting my mail now, at least. Otherwise, st...@po... works : ) (though I'm sure I've lost loads of mail during these last days, but that's another story)

http://tara.scdi.org/phpinfo.php will give you my version of mysql. I'll try running the query manually (I've found someone to walk me through it!) and I'll let you know if it works or not.

The db name is tara and not phpwiki, unfortunately (sysadmin did it for me), but I specified "tara" in index.php, so I guess that should be ok?

Steph aka Tara

-- 
Je réponds au mieux de mes connaissances
Climb to the Stars! - http://climbtothestars.org/
no tables: http://climbtothestars.org/coding/tableless/
Pompeurs Associés - http://pompage.net/ |
From: Reini U. <ru...@x-...> - 2001-11-07 14:14:42
|
Tara Star schrieb:
> seems I'm getting my mail now, at least. Otherwise, st...@po...
> works : ) (though I'm sure I've lost loads of mail during these last
> days, but that's another story)
>
> http://tara.scdi.org/phpinfo.php will give you my version of mysql. I'll
> try running the query manually (I've found someone to walk me through
> it!) and I'll let you know if it works or not.

mysql 3.22.30 is too old. we are at 3.23.x now. remove USING(id), as Jeff suggested.

-- 
Reini Urban
http://xarch.tu-graz.ac.at/home/rurban/ |
From: Jeff D. <da...@da...> - 2001-11-07 15:38:12
|
On Wed, 07 Nov 2001 13:19:24 +0000 "Reini Urban" <ru...@x-...> wrote:

> mysql 3.22.30 is too old. we are at 3.23.x
> remove USING(id) as Jeff sent you.

Though I don't think there's any fundamental reason we can't or shouldn't support mysql 3.22.30.

Tara/Steph: if you figure out anything, do let me know...

My current hunch is that the older mysql has a problem specifically with the INNER JOIN ... USING(...) syntax and that "LEFT JOIN ... USING(...)" would work fine. I'd like some confirmation of that, though, before I act on it.

Perhaps, for maximum portability, we/I should convert all the joins to plain cross joins. Instead of

  SELECT FROM x INNER JOIN y USING(id) WHERE ...

use the archaic:

  SELECT FROM x,y WHERE x.id=y.id AND ...

? |
From: Reini U. <ru...@x-...> - 2001-11-08 14:22:13
|
Jeff Dairiki schrieb:
> On Wed, 07 Nov 2001 13:19:24 +0000
> "Reini Urban" <ru...@x-...> wrote:
>
> > mysql 3.22.30 is too old. we are at 3.23.x
> > remove USING(id) as Jeff sent you.
>
> Though I don't think there's any fundamental reason we can't or
> shouldn't support mysql 3.22.30.
>
> Tara/Steph: if you figure out anything, do let me know...
>
> My current hunch is that the older mysql has a problem
> specifically with the INNER JOIN ... USING(...) syntax
> and that "LEFT JOIN ... USING(...)" would work fine.
> I'd like some confirmation of that, though, before I act
> on it.

Results on mysql 3.22.25:

[ok] SELECT topage, count(*) AS nlinks FROM wikilinks
     LEFT JOIN wiki AS dest ON topage=dest.pagename
     WHERE dest.pagename IS NULL GROUP BY topage ORDER BY nlinks DESC;

[ok] select distinct pagename from wiki left join wikilinks
     on pagename=topage where topage is NULL order by pagename;
     (but achingly slow! 13 sec on 500/3200 pages/links)

[ok] select w.pagename,s.score from wiki as w
     left join wikiscore s on w.pagename=s.pagename;

[ok] select w.pagename,s.score from wiki as w
     left join wikiscore s using (pagename);

These fail:

mysql> SELECT topage, count(*) AS nlinks FROM wikilinks inner JOIN wiki AS dest ON topage=dest.pagename WHERE dest.pagename IS NULL GROUP BY topage ORDER BY nlinks DESC;
ERROR 1064: You have an error in your SQL syntax near 'ON topage=dest.pagename WHERE dest.pagename IS NULL GROUP BY topage ORDER BY nli' at line 1

mysql> select w.pagename,s.score from wiki as w inner join wikiscore s using (pagename);
ERROR 1064: You have an error in your SQL syntax near 'inner join wikiscore s using (pagename)' at line 1

mysql> select w.pagename,s.score from wiki as w join wikiscore s using (pagename);
ERROR 1064: You have an error in your SQL syntax near 'using (pagename)' at line 1

mysql> select w.pagename,s.score from wiki as w outer join wikiscore s using (pagename);
ERROR 1064: You have an error in your SQL syntax near 'outer join wikiscore s using (pagename)' at line 1
I see no advantage in JOIN ... USING, besides shorter strings. but left join ... using is okay.

> Perhaps, for maximum portability, we/I should convert all
> the joins to plain cross joins. Instead of
>   SELECT FROM x INNER JOIN y USING(id) WHERE ...
> use the archaic:
>   SELECT FROM x,y WHERE x.id=y.id AND ...

Agreed. There's no speed improvement in USING, just a portability problem. In another big php project of mine which still supports php3, we use

  SELECT FROM x,y WHERE x.id=y.id ...

and better

  SELECT FROM x LEFT JOIN y ON x.id = y.id WHERE ...

http://www.theexchangeproject.org

-- 
Reini Urban
http://xarch.tu-graz.ac.at/home/rurban/ |
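Taken together with Jeff's proposal, the HomePage lookup from earlier in the thread could be written in any of these forms (a sketch using the table names from Jeff's message; only the last two parse on the old MySQL):

```sql
-- 1. INNER JOIN ... USING: fails with a syntax error on mysql 3.22.
SELECT phpwiki_page.id
  FROM phpwiki_nonempty INNER JOIN phpwiki_page USING (id)
 WHERE pagename = 'HomePage';

-- 2. LEFT JOIN ... USING: parses on 3.22, per the tests above.
SELECT phpwiki_page.id
  FROM phpwiki_nonempty LEFT JOIN phpwiki_page USING (id)
 WHERE pagename = 'HomePage';

-- 3. The "archaic" cross join: the most portable form.
SELECT phpwiki_page.id
  FROM phpwiki_nonempty, phpwiki_page
 WHERE phpwiki_page.id = phpwiki_nonempty.id
   AND pagename = 'HomePage';
```

Form 2 is not strictly equivalent to an inner join (unmatched rows survive with NULLs), though here the WHERE clause on pagename filters them out anyway.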
From: Gary B. <ga...@in...> - 2001-11-07 01:00:51
|
On Tue, 6 Nov 2001, Tara Star wrote:

> I've been digging around a bit in the script files, especially the parts
> that output html (templates et al) - because I'd like to customize the
> templates to output xhtml strict.

What would be really cool would be for the engine to output XML of some kind and then slap that through an XSL template to create the final page (rather than using the html templates with ###TAGS### in them). |
From: Steve W. <sw...@pa...> - 2001-11-07 03:41:05
|
On Wed, 7 Nov 2001, Gary Benson wrote:

> What would be really cool would be for the engine to output XML of some
> kind and then slap that through an XSL template to create the final page
> (rather than using the html templates with ###TAGS### in them).

Looks like this will be feasible in the future:

http://www.php.net/manual/en/ref.xslt.php

However, I don't think anyone will be wanting to edit Wiki pages on their WAP phones or Palm Pilots anytime soon. Who knows? At the moment it sounds to me like gold plating, which we developers love to do ;-)

I wonder how slow it would be.

~swain
---
http://www.panix.com/~swain/
"Without music to decorate it, time is just a bunch of boring production deadlines or dates by which bills must be paid." -- Frank Zappa |
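For reference, the extension on that manual page (the Sablotron-based xslt extension of the PHP 4.1 era) is used roughly like this; treat the exact signature and file names as assumptions to check against the linked docs:

```php
<?php
// Sketch: transforming engine output with PHP's xslt extension.
// $xml and wiki.xsl are made-up placeholders, not PhpWiki artifacts.
$xml = '<page><title>HomePage</title></page>';  // hypothetical engine output
$xsl = join('', file('wiki.xsl'));              // hypothetical site template

$xh = xslt_create();
// The 'arg:' scheme feeds in-memory buffers instead of files.
$html = xslt_process($xh, 'arg:/_xml', 'arg:/_xsl', NULL,
                     array('/_xml' => $xml, '/_xsl' => $xsl));
xslt_free($xh);

echo $html;
?>
```

The speed question comes down to how much of this runs in C versus reparsing the stylesheet on every request.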
From: Tara S. <te...@cl...> - 2001-11-07 06:27:12
|
Steve Wainstead wrote:
> On Wed, 7 Nov 2001, Gary Benson wrote:
>
> > What would be really cool would be for the engine to output XML of some
> > kind and then slap that through an XSL template to create the final page
> > (rather than using the html templates with ###TAGS### in them).
>
> Looks like this will be feasible in the future:
>
> http://www.php.net/manual/en/ref.xslt.php
>
> However, I don't think anyone will be wanting to edit Wiki pages on their
> WAP phones or Palm Pilots anytime soon. Who knows? At the moment it sounds
> to me like gold plating, which we developers love to do ;-)
>
> I wonder how slow it would be.

Producing output in xml isn't just for WAP and PalmPilot users. It makes life much simpler for webmasters - see for example http://whump.com which runs on XML (there are countless other sites that do too).

As for Palm... when I talked about wikis at work, one of the first questions I was asked was "can you access it and edit it with a Palm?" (of course, I work in a mobile phone company, so they are interested in anything "mobile") - but I don't think it's so far-fetched to think people will be doing it some time from now.

Tara

-- 
Je réponds au mieux de mes connaissances
Climb to the Stars! - http://climbtothestars.org/
no tables: http://climbtothestars.org/coding/tableless/
Pompeurs Associés - http://pompage.net/ |
From: Steve W. <sw...@pa...> - 2001-11-07 06:44:30
|
On Wed, 7 Nov 2001, Tara Star wrote:

> Steve Wainstead wrote:
>
> > However, I don't think anyone will be wanting to edit Wiki pages on their
> > WAP phones or Palm Pilots anytime soon. Who knows? At the moment it sounds
> > to me like gold plating, which we developers love to do ;-)
>
> Producing output in xml isn't just for WAP and PalmPilot users. It makes
> life much simpler for webmasters - see for example http://whump.com
> which runs on XML (there are countless other sites that do too).
>
> As for Palm... when I talked about wikis at work, one of the first
> questions I was asked was "can you access it and edit it with a Palm?"

Figures! :-)

I didn't mean to imply that XML is only for WAP and PDAs. I've also wanted an XML dump feature for PhpWiki; that is, all the pages can be dumped as one huge XML file. But Jeff's zip dumps work so well there's been no need.

~swain
---
http://www.panix.com/~swain/
"Without music to decorate it, time is just a bunch of boring production deadlines or dates by which bills must be paid." -- Frank Zappa |
From: Tara S. <te...@cl...> - 2001-11-07 06:51:40
|
Steve Wainstead wrote:
> I didn't mean to imply that XML is only for WAP and PDAs. I've also wanted
> an XML dump feature for PhpWiki; that is, all the pages can be dumped as
> one huge XML file. But Jeff's zip dumps work so well there's been no need.

I think the suggestion here was more to make xml the "natural output markup" for the wiki - instead of HTML 4.01 Transitional as it is now. For example, I'm going to have to hack through all the markup production to tailor the wiki's looks to my liking - whereas if the production was xml, I could do it simply by modifying the stylesheet.

In the present case, I can't obtain the desired result just by modifying the css files - I need to go into the templates, and into transform.php too (and GodKnowsWhereElse).

See what I mean?

Tara

-- 
Je réponds au mieux de mes connaissances
Climb to the Stars! - http://climbtothestars.org/
no tables: http://climbtothestars.org/coding/tableless/
Pompeurs Associés - http://pompage.net/ |
From: Steve W. <sw...@pa...> - 2001-11-07 07:06:26
|
On Wed, 7 Nov 2001, Tara Star wrote:

> In the present case, I can't obtain the desired result just by modifying
> the css files - I need to go in the templates, and in transform.php too
> (and GodKnowsWhereElse).
>
> See what I mean?

Yes, CSS is not enough. I know, I've worked on content management systems for five years now. :-)

I'll continue to think about it. It depends on what we get with a "typical" PHP install. My goal has always been to make PhpWiki as accessible as possible: to admins, to programmers, and to end users. Your problem is an admin problem: you can't tailor the output at the low level you want to. When XSLT is shipped with PHP and comes in a default installation, it would make sense to go this route.

What I want to avoid is making the end user rebuild PHP in order to use PhpWiki. It would never get used in that case. I have never used Galeon for this very reason: it wants all the latest Gnome stuff and I have no patience to install it all.

I've wanted to do XHTML output for a while. That might be a good middle step for now.

cheers
~swain
---
http://www.panix.com/~swain/
"Without music to decorate it, time is just a bunch of boring production deadlines or dates by which bills must be paid." -- Frank Zappa |
From: Jeff D. <da...@da...> - 2001-11-07 15:15:47
|
On Wed, 7 Nov 2001 02:06:24 -0500 (EST) "Steve Wainstead" <sw...@pa...> wrote:

> I've wanted to do XHTML output for a while. That might be a good middle
> step for now.

I don't know much about XHTML, so fill me in.

Steve, Tara, those who know: If we move to XHTML what are the implications for those who use older browsers?

What exactly are the practicalities involved in converting to XHTML? Is it really much of a new paradigm (if so, I don't see it yet) or is it just syntactical cleanup?

It would be nice to clean up the output (HTML) generation some. As Tara has discovered, the HTML is scattered, mostly between the templates, lib/transform.php, and various functions in lib/stdlib.php, with the remainder everywhere else.

But I agree with Steve, that until/unless there's a widespread, easily accessible XSLT converter available for PHP it's too early to move completely in that direction.

> I've also wanted an XML dump feature for PhpWiki; that is,
> all the pages can be dumped as one huge XML file.

(Or individual XML files.)

> But Jeff's zip dumps work so well there's been no need.

Since i dooed it, I can say it: the use of MIME for storing metadata in the zip files was/is somewhat poorly conceived. I'm all for moving to XML for page dumpage. The only drawback is that (for awhile, at least) we'd have to support three methods of undumping (zip, serialized pages, XML). (The other drawback, of course, is that someone would have to write the code.) |
From: Gary B. <ga...@in...> - 2001-11-08 20:25:13
|
On Wed, 7 Nov 2001, Jeff Dairiki wrote:

> What exactly are the practicalities involved in converting to XHTML?
> Is it really much of a new paradigm (if so, I don't see it yet) or is
> it just syntactical cleanup?

It's a syntactical cleanup. Chief differences are that:

a) there is no such thing as an empty tag: <hr> in HTML becomes <hr></hr> in XHTML. Since this would be a PITA, XHTML has the form <hr /> which is equivalent to <hr></hr>

b) parsing is much stricter:

| <ul>
| <li>foo
| <li>bar
| </ul>

is illegal in XHTML -- you must close the <li> tags:

| <ul>
| <li>foo</li>
| <li>bar</li>
| </ul>

The rules of which tags may go where are more strictly enforced.

c) tag attributes must be quoted. <img src=foo> is illegal: it should say <img src="foo"> or <img src='foo'>.

d) it is case sensitive (all tags are lower case)

Everything on inauspicious.org -- apart from the PhpWiki-based wiki :) -- is valid XHTML -- view the source :)

> Steve, Tara, those who know: If we move to XHTML what are the
> implications for those who use older browsers?

Absolutely nothing -- XHTML was designed so that HTML-compliant browsers wouldn't choke on it. The only problem you get is that some seriously broken browsers display <hr /> as if it was <hr>/ (ie the slash is visible), but since such browsers are unlikely to render HTML properly anyway it isn't worth worrying about.

Gary

[ ga...@in... ][ GnuPG 85A8F78B ][ http://inauspicious.org/ ] |
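Gary's four rules, applied to a single fragment (a before/after sketch; the file names are made up):

```html
<!-- Loose HTML as a wiki might emit it: -->
<UL>
  <LI>foo
  <LI>bar
</UL>
<HR>
<IMG SRC=foo.png ALT=foo>

<!-- The same fragment as valid XHTML: closed elements,
     quoted attributes, lower-case tag names. -->
<ul>
  <li>foo</li>
  <li>bar</li>
</ul>
<hr />
<img src="foo.png" alt="foo" />
```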
From: Adam S. <ad...@pe...> - 2001-11-08 05:04:08
|
> What I want to avoid is making the end user rebuild PHP in order to use
> PhpWiki. It would never get used in that case.

i like this decision, i get really frustrated with projects that spend all their time rewriting the backend to be better when it doesn't have any (or minimal) tangible benefit to the end user (me! :-).

> I have never used Galeon for this very reason: it wants all the latest
> Gnome stuff and I have no patience to install it all.

see? you need debian ...

  # apt-get install galeon

i just swapped over about a month ago and am very happy with it. it's faster than mozilla, has more features than mozilla and actually has some cool new stuff (smart bookmarks are great).

> I've wanted to do XHTML output for a while. That might be a good
> middle step for now.

i like the idea of xml file storage. this might make sense for the text file backend as well. convert wiki markup to xml and then xml to html.

adam. |
From: Steve W. <sw...@pa...> - 2001-11-08 08:30:20
|
On Wed, 7 Nov 2001, Adam Shand wrote:

> > I've wanted to do XHTML output for a while. That might be a good
> > middle step for now.
>
> i like the idea of xml file storage. this might make sense for the text
> file backend as well. convert wiki markup to xml and then xml to html.

Hmm. You remind me of an optimization I've thought about in the past: when a page is saved after editing, convert it to xhtml and store it. When pages are served, there's no transformation to do. When a page is pulled for editing, you have to convert it to wiki markup again. The downside, of course, is twice as much code (converting between the two formats).

~swain
---
http://www.panix.com/~swain/
"Without music to decorate it, time is just a bunch of boring production deadlines or dates by which bills must be paid." -- Frank Zappa |
From: Adam S. <ad...@pe...> - 2001-11-08 17:55:22
|
> Hmm. You remind me of an optimization I've thought about in the past:
> when a page is saved after editing, convert it to xhtml and store it.
> When pages are served, there's no transformation to do. When a page is
> pulled for editing, you have to convert it to wiki markup again. The
> downside, of course, is twice as much code (converting between the two
> formats).

the really good thing about this though is that you get a static page cache. for busy wiki sites, having a static page cache becomes a necessity pretty quickly if you want to survive a slashdotting. :-(

adam. |
From: Gary B. <ga...@in...> - 2001-11-08 20:13:13
|
On Thu, 8 Nov 2001, Adam Shand wrote:

> > Hmm. You remind me of an optimization I've thought about in the past:
> > when a page is saved after editing, convert it to xhtml and store it.
> > When pages are served, there's no transformation to do.

Almost right. If someone creates or deletes pages which are linked to from a cached page, then the cached page will be stale. You'd have to rebuild them then as well.

> > When a page is pulled for editing, you have to convert it to wiki
> > markup again. The downside, of course, is twice as much code
> > (converting between the two formats).

Or you could just store the HTML and the wiki markup in the DB, with the wiki version being the authoritative version.

> the really good thing about this though is that you get the static page
> cache. for busy wiki sites having a static page cache becomes a necessity
> pretty quickly if you want to survive a slashdotting. :-(

If you are running your PhpWiki on your own webserver then you can put a reverse proxy in front of it. This is what /. does if I remember correctly -- all pages have a five minute expiry time or something.

Gary

[ ga...@in... ][ GnuPG 85A8F78B ][ http://inauspicious.org/ ] |
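The reverse-proxy idea can be sketched as an Apache 1.3 mod_proxy configuration; the hostnames, paths, and cache sizes below are placeholders, and the directive units should be checked against the mod_proxy docs:

```apache
# Sketch: caching reverse proxy in front of the wiki (placeholder names).
ProxyPass        /wiki/ http://backend.example.org/phpwiki/
ProxyPassReverse /wiki/ http://backend.example.org/phpwiki/

# mod_proxy's cache; CacheDefaultExpire is in hours,
# so 0.083 is roughly the five-minute expiry mentioned above.
CacheRoot          /var/cache/apache-proxy
CacheSize          5000
CacheDefaultExpire 0.083
```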
From: Adam S. <ad...@pe...> - 2001-11-08 20:15:34
|
> If you are running your PhpWiki on your own webserver then you can put > a reverse-proxy in front of it. This is what /. does if I remember > correctly -- all pages have a five minute expiry time or something. actually that's a good point, and should be fairly easy to do. adam. |
From: Gary B. <ga...@in...> - 2001-11-08 20:26:43
|
On Thu, 8 Nov 2001, Adam Shand wrote: > > If you are running your PhpWiki on your own webserver then you can put > > a reverse-proxy in front of it. This is what /. does if I remember > > correctly -- all pages have a five minute expiry time or something. > > actually that's a good point, and should be fairly easy to do. It is -- give me a shout off the list if you want a hand. Gary |
From: Gary B. <ga...@in...> - 2001-11-08 23:50:48
|
On Thu, 8 Nov 2001, Gary Benson wrote:

> If you are running your PhpWiki on your own webserver then you can put
> a reverse-proxy in front of it. This is what /. does if I remember
> correctly -- all pages have a five minute expiry time or something.

In fact, you could do away with the expiry time altogether if you hacked PhpWiki to emit the relevant cache control headers and respond to the requests appropriately. Look at http://www.w3.org/Protocols/rfc2068/rfc2068 for the full gory details, but basically you add ETag: and Last-Modified: headers on pages you send, and respond to If-Modified-Since: and If-None-Match: headers in the requests (by returning a 304 Not Modified instead of the page itself).

Oh, and /. doesn't have a proxy but they are considering it:

http://slashdot.org/article.pl?sid=01/09/13/154222&mode=thread |
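A minimal sketch of those headers in PHP (the variable names are made up; $page_mtime would come from the page's last-edit timestamp in the database):

```php
<?php
// Sketch: conditional GET per RFC 2068 (If-Modified-Since / If-None-Match).
$page_mtime = 1005177600;   // made-up last-edit timestamp
$pagename   = 'HomePage';

$lastmod = gmdate('D, d M Y H:i:s', $page_mtime) . ' GMT';
$etag    = '"' . md5($pagename . $page_mtime) . '"';

header("Last-Modified: $lastmod");
header("ETag: $etag");

// If the client already has this version, send 304 and skip the body.
if ((isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])
     && $_SERVER['HTTP_IF_MODIFIED_SINCE'] == $lastmod)
    || (isset($_SERVER['HTTP_IF_NONE_MATCH'])
        && $_SERVER['HTTP_IF_NONE_MATCH'] == $etag)) {
    header('HTTP/1.0 304 Not Modified');
    exit;
}
// ...otherwise transform and send the page as usual.
?>
```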
From: Reini U. <ru...@x-...> - 2001-11-09 10:24:58
|
Steve Wainstead schrieb:
> On Wed, 7 Nov 2001, Adam Shand wrote:
> > > I've wanted to do XHTML output for a while. That might be a good
> > > middle step for now.
> >
> > i like the idea of xml file storage. this might make sense for the text
> > file backend as well. convert wiki markup to xml and then xml to html.
>
> Hmm. You remind me of an optimization I've thought about in the past: when
> a page is saved after editing, convert it to xhtml and store it. When
> pages are served, there's no transformation to do.

http://phpwiki.sourceforge.net/phpwiki/index.php?PerformanceHacks

> When a page is pulled for editing, you have to convert it to wiki markup
> again. The downside, of course, is twice as much code (converting between
> the two formats).

not really. xhtml is backwards compatible. you can simply output xhtml and every browser will render it.

you probably mean to store only xhtml and not the wiki markup in the db? hmm, weird idea.

-- 
Reini Urban
http://xarch.tu-graz.ac.at/home/rurban/ |
From: Jeff D. <da...@da...> - 2001-11-09 15:18:39
|
On Fri, 09 Nov 2001 10:24:46 +0000 "Reini Urban" <ru...@x-...> wrote:

> not really. xhtml is backwards compatible.
> you can simply output xhtml and every browser will render it.

Thanks Gary & Reini for the XHTML primer. It seems XHTML is backwards compatible with HTML 4, if one takes certain precautions. See

http://www.w3.org/TR/2001/WD-xhtml1-20011004/#guidelines

As long as older browsers will have only minor-ish cosmetic problems with XHTML, I do think it's time to make that move. (The new CSS stuff exhibits significant cosmetic problems on older (and some newer) browsers, anyway.)

The biggest problem I see is in the transform code. I know there's some pretty kludgy code currently in 1.3 which is concerned with making sure the <p>s are properly </p>ed. It would be nice to clean that up.

(I am planning on re-writing the transform code sometime, to make fancy marked-up diff output possible, among other things. But, as usual, it might be awhile before I get to it.) |