From: Tony L. <la...@is...> - 2002-10-26 11:14:17

Greetings. ISSHO Kikaku, a non-profit that studies diversity,
multicultural and multilingual issues, has begun using the Postnuke
phpWiki module (phpWiki module v0.1, an implementation of phpWiki v1.3.1)
on a soon-to-be-launched multilingual site whose data is all stored as
UTF-8. All the data has been converted to UTF-8, and charsets have been
changed where appropriate. If we recall correctly, no changes were made to
the charset settings in phpWiki.

We were able to save small strings of non-English data on the homepage and
in the Sandbox, with no problem. Those pages can be seen here:

http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=HomePage
http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=SandBox

However, we experience a major problem when we do the following:

 * Define a term on the homepage (by clicking on the question mark) and
   input text that is all non-English (Japanese, in this case).

The problem: screen garbage on preview, and after saving. Other
characteristics: when going back to edit that page again, the characters
display correctly. That page can be seen here:

http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=IsshoSelfSupport_en

Constructive hints on what needs to be done to alleviate this problem
would be very much appreciated. Thanks very much in advance.

Tony Laszlo
Director, ISSHO Kikaku
http://www.issho.org/laszlo.html

From: Tony L. <la...@is...> - 2002-10-27 07:50:38

Further to this issue. We have read through the utf8migration and
doublebyte documents, and through the archives of phpwiki-talk. We have
also downloaded index.php and the template files (browse.html, et al.)
from CVS, and hacked them to get them to work with Postnuke. We tried
adding the header lines mentioned on the above-mentioned wiki pages. We
were not able to get the UTF-8 data that was screen garbage to display
correctly (though other pages with UTF-8 were displaying properly).

Then we experimented a bit more and discovered that certain characters in
the strings being input were being mangled by the wiki. Thus, we
determined that the UTF-8 in the Sandbox and homepage is being displayed
largely due to luck; the characters being used were somehow passing
through unscathed - probably the exception rather than the rule. We also
compared the source of the pages that were being displayed properly and
those that were not, and found the header information to be identical. The
page itself is displayed properly (the Postnuke-generated UTF-8 data is
fine), but the content in between the <div></div> that phpWiki generates
is getting mangled.

Wiki clones in Asia have made refinements to their wiki code (or designed
their code from the start) so that this doesn't happen. We are looking
into what they are doing, and hope to see if it can be applied to this
rather specific problem, i.e., the code for the phpWiki module for
Postnuke. Quite possibly it will involve the mb_* functions, or maybe
rawurlencode() or something of the sort. However, we are quite puzzled,
and suspect that there is an easy answer within the domain and experience
of phpWiki users that might be had. What might that be?

For those of you not familiar with it, the phpWiki module for Postnuke
includes the latter's header.php file, and so allows the CMS to deal with
the header for each page generated. Thus, the module's index.php is
missing all the header code that is in phpWiki proper. The relevant lines
from Postnuke's header.php are as follows:

    else {
        echo "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n";
        echo "<html>\n<head>\n";
        if (defined("_CHARSET") && _CHARSET != "") {
            echo "<meta http-equiv=\"Content-Type\" ".
                 "content=\"text/html; charset="._CHARSET."\">\n";
        }

There is no mention of mb_* anywhere in the code, yet there is no problem
with screen garbage. That _CHARSET, by the way, is set to utf-8 on our
system, in all instances. These lines are the only relevant code that
exists within the Postnuke that we are running. Thus, these lines and
these lines alone are able to ensure that all pages display all the UTF-8
data correctly in Postnuke. For our purposes, the reported limitations
that PHP and MySQL have regarding UTF-8 are not an issue. That is also
true of all the modules we use with Postnuke - except, unfortunately, for
this one (so far).

So, we suspect that something is causing the data in between the
<div></div> that phpWiki makes to get mangled, while the page itself is
displayed properly. If we can locate it, and tweak that, we should be in
business. But we haven't managed to find out what that mechanism might be
just yet.

Looking forward to advice from both the Wabi and Sabi experts among you.
Thanks.

Tony Laszlo, ISSHO
http://www.issho.org

On Sat, 26 Oct 2002, Tony Laszlo wrote:
> on a soon-to-be-launched multilingual site whose data is
> all stored as utf-8.
> We were able to save small strings of non-English data on the
> homepage and in the Sandbox, with no problem.
> Those pages can be seen here:
> http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=HomePage

However, [we have major problems otherwise, see here]:

> http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=IsshoSelfSupport_en

From: Jeff D. <da...@da...> - 2002-10-27 16:34:18

Here's a guess: the code in lib/transform.php (the "old" transform code)
mangles one "magic" byte value. Which character gets mangled is determined
by the setting of the global $FieldSeparator. By default, it is the value
'\x81'. (This value is picked because it is a non-printing control
character in all the ISO-8859-x encodings.) (The transform.php code uses
this byte to insert temporary markers within the page text as it processes
it...)

(In PhpWiki 1.3.3 there are two separate markup engines: the "old" one in
transform.php, and the new one in BlockParser.php and InlineParser.php.
The new one gets used if you check the "use new markup" box when you're
editing the page. In the most recent CVS code, only the new markup engine
is used. I have no idea how recent the Postnuke module code is, so I can't
tell you where you are within this timeline.)

This is hackish, but: you might try changing $FieldSeparator to '\xff'
(which is an illegal byte in UTF-8) or to one of the lesser-used ASCII
control characters: maybe '\x01' (SOH).

If that doesn't help, then this is a red herring. In that case, if you can
determine exactly which byte values/characters are being mangled, it might
help diagnose the problem...

Jeff

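The change Jeff suggests amounts to one line. A minimal sketch, assuming
the assignment lives in lib/config.php (as later messages in this thread
confirm); note that PHP expands the \xff escape only inside double quotes,
while single-quoted '\xff' is the literal four characters:

    // lib/config.php -- marker byte the old transform engine uses for
    // temporary in-text markers. The default is "\x81" (non-printing in
    // all ISO-8859-x encodings). For UTF-8 pages, the byte 0xFF can
    // never occur in well-formed text, so it is a safe marker there:
    $FieldSeparator = "\xff";
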
From: Carsten K. <car...@us...> - 2002-10-28 19:13:07

Do you think it would cause any problems to just commit this change of
$FieldSeparator to '\xff'? I'm going to leave it this way on my wiki for a
while to see how it works out as a permanent change. If this is not a good
idea, maybe add a check in config.php for CHARSET == utf-8 and set
$FieldSeparator accordingly?

I'm amazed that I'm now able to use UTF-8 (Japanese etc.) text in my
personal wiki, even though my system has only MySQL 3.23.49, which does
not support UTF-8 and is actually configured for iso-8859-1. The last time
I tried to use UTF-8 in PhpWiki (over a year ago) I had to use PostgreSQL,
and there were still lots of garbage display problems then.

So far, the only (minor) problems I see running PhpWiki CVS in UTF-8 are
the field names on the DebugInfo page and a character translation problem
on the Sign In page. I'm sure these two minor problems can be worked out,
and if anyone volunteers to translate, we could have Japanese/Chinese
localizations included with PhpWiki too; the admin would just have to
change CHARSET to utf-8 in index.php during installation.

Carsten

On Sunday, October 27, 2002, at 11:34 am, Jeff Dairiki wrote:
> This is hackish, but: you might try changing
> $FieldSeparator to '\xff' (which is an illegal
> byte in UTF-8) or to one of the lesser used ASCII

From: Jeff D. <da...@da...> - 2002-10-28 21:29:11

> Do you think it would cause any problems to just commit this change of
> FieldSeparator to '\xff'?

Yes, it would. '\xff' is a printing character in the ISO-8859-x encodings.
(In iso-8859-1 it's a "latin small letter y with diaeresis".)

> I'm going to leave it this way on my Wiki for a while to see how it
> works out as a permanent change. If this is not a good idea, maybe add
> a check in config.php for CHARSET==utf-8 and set FieldSeparator
> accordingly?

That would be okay, I think. Switching to one of the unused ASCII control
characters (in the 0x01 - 0x1f range) would probably be okay, too (as long
as care is taken to strip that character from form input, etc.). If you
want to do anything, it's probably safer to stick with our current setting
except for UTF-8.

But note that in the current CVS code, lib/transform.php is no longer used
(though the code is still there). Both old and new markup get run through
the new markup engines (old markup goes through a pre-processor to hack it
up into new markup...). IIRC, the new markup code doesn't use any magic
marker characters ($FieldSeparator), so the issue is mostly moot.

> So far the only (minor) problems I see running PhpWiki CVS in utf-8 are
> the field names on the DebugInfo page and character translation problem
> on the Sign In page.

Since PhpWiki was never designed with multi-byte character encodings in
mind (really, it's only been well(?) tested under iso-8859-1), I suspect
that numerous small problems will show up (most, probably, could be easily
fixed).

There may be less-minor problems with the searching functionality. PHP
regexps are not Unicode-aware... (does the latest PHP have Unicode support
yet?) MySQL knows nothing about Unicode, so any pattern/string matching
done in MySQL queries is problematic.

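A possible shape for the CHARSET-conditional default that Carsten proposes
and Jeff refines above; a sketch only, assuming it runs in lib/config.php
after the CHARSET constant has been defined:

    // Sketch: pick a marker byte that cannot occur in valid page text
    // for the configured charset.
    if (defined('CHARSET') && strtolower(CHARSET) == 'utf-8') {
        $FieldSeparator = "\xff";  // 0xFF is an illegal byte in UTF-8
    } else {
        $FieldSeparator = "\x81";  // current default: non-printing in ISO-8859-x
    }
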
From: Carsten K. <car...@us...> - 2002-10-28 22:51:03

On Monday, October 28, 2002, at 04:29 pm, Jeff Dairiki wrote:
> IIRC, the new markup code doesn't use any magic marker characters
> (FieldSeparator), so the issue is mostly moot.

Hehe, I didn't realize this; thanks for pointing it out. So I changed that
magic marker back, and of course it has no effect: UTF-8 is still working
in the latest CVS. ^^

> There may be less-minor problems with the searching functionality.
> PHP regexps are not Unicode-aware... (does the latest PHP have
> Unicode support yet?) MySQL knows nothing about Unicode, so any
> pattern/string matching done in MySQL queries is problematic.

Yes, the regexp stuff could definitely be a problem. I'll look at the PHP
website to see how the mb_* functions are coming along. Preliminary
informal tests so far (read: goofing around) with UTF-8 show that
searching for Japanese words works, and surprisingly, syntax-highlighting
in a FullText search looks just fine too. Logins with UTF-8 Japanese text
don't work, probably due to BumpyWords checking. In diffs, the line
prefixes for unchanged lines and the line endings show up with a garbage
character, but this might just be my browser. Otherwise, the only issue I
noticed is that square brackets must be used for Japanese text. This is
expected, because Japanese has no BumpyWords, and so it's not really a
problem. I don't want to jump to any conclusions, because I can't really
read Japanese at all.

I'm interested in how well this works for other people using the latest
CVS PhpWiki. Change your CHARSET to utf-8 in index.php so your browser
knows which charset to use. Try pasting in some text from
http://www.yahoo.co.jp/ or something and linkify a few words with [ ].

Carsten

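For reference, the setting Carsten mentions is a single line; a sketch,
assuming CHARSET is set with define() in index.php as his wording
suggests (the exact form in a stock file may differ):

    // index.php -- declare the character set phpWiki advertises to
    // browsers; switch the value to utf-8 for multilingual content:
    define("CHARSET", "utf-8");
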
From: <la...@us...> - 2002-10-27 19:44:44

Tony, I am away from home at the moment, and have just picked up your
messages. I have to say (as the author of the PN module) that I don't
think the problem is caused by anything PN-specific in the code. Is it
possible for you to try a standalone phpWiki installation, using the code
from CVS? If you could make the site accessible to us, that would help.

Lawrence

From: Tony L. <la...@is...> - 2002-10-28 01:58:41

On Sun, 27 Oct 2002, Jeff Dairiki wrote:
> This is hackish, but: you might try changing
> $FieldSeparator to '\xff' (which is an illegal
> byte in UTF-8) or to one of the lesser-used ASCII
> control characters: maybe '\x01' (SOH).

I changed the relevant line in lib/config.php so that $FieldSeparator =
"\xff". That seems to have done the trick, at least with regard to this
particular page (so far, so good). Thank you!

While we are on this subject, could I trouble you for advice on how to
insert the control character \xff directly into the MySQL database via the
command line? I tried inserting '\\xff', but that just inserts the literal
string, doesn't it? I would like to see if anything different happens
using that method vs. the above method.

One last thing. Are there any things to look out for when putting brackets
[] around a UTF-8 string in phpWiki, so as to define it? I notice that one
such string is getting defined in one place, but that string is not being
highlighted when it turns up in a different location (the definition is
not being picked up). Also, I guess phpWiki is set not to ignore
underscores _ in such a string, by default? A string with such an
underscore was successfully defined, but that definition also did not
stick. Thanks!

> I am away from home at the moment, and have just picked up your
> messages.

Thank you for your PN implementation, Lawrence. Works great!

Tony Laszlo, Tokyo
http://www.issho.org/laszlo.html

From: Tony L. <la...@is...> - 2002-10-28 02:19:23

On Mon, 28 Oct 2002, Tony Laszlo wrote:
> One last thing. Are there any things to look out for
> when putting brackets [] around a UTF-8 string in phpWiki,
> so as to define it? I notice that one such string is getting
> defined in one place, but that string is not being highlighted
> when it turns up in a different location (the definition is
> not being picked up). Also, I guess phpWiki is set not to
> ignore underscores _ in such a string, by default? A string
> with such an underscore was successfully defined, but that
> definition also did not stick.

Strike this last bit; I understand what is happening now. phpWiki will
highlight terms that have been defined, but not if the terms were defined
via the bracket [] method. So, one needs to put brackets around the
defined terms every time they are used, rather than having automatic
highlighting. We can live with this, I think. :)

Great tool! Kudos to all.

From: Jeff D. <da...@da...> - 2002-10-28 15:19:03

> Thank you!

NP.

> While we are on this subject, could I trouble you for
> advice on how to insert the control character \xff
> directly into the MySQL database via the command line?
> I tried inserting '\\xff', but that just inserts the literal
> string, doesn't it?

Reading the MySQL docs, the only way I could figure to generate '\xff' is
CHAR(255) (which you would then need to CONCAT() with other strings if you
wanted to make a string longer than one character). See:

http://www.mysql.com/doc/en/String_functions.html#IDX1161

That said, I don't see why you would want to insert \xff's into the MySQL
database. Since '\xff' isn't valid within UTF-8 strings, I think that will
just break things more than they are already broken.

> ... but that string is not being highlighted
> when it turns up in a different location ...

Yes, it sounds like you figured it out. Strings are not "hilit" (or
"linkified") based on whether they exist as pages or not; rather, whether
they are hilit depends on whether they match certain regexps. In the wiki
tradition, CamelCase words (multiple capitalized words run together with
no spaces) are what normally gets linkified --- and square brackets can be
used to "force" linkification of anything that doesn't match the CamelCase
pattern.

Jeff

PS: From the "it's no big deal, but you might like to know" department:
your ISP won't allow me to send e-mail directly to you, Tony. My mail
server runs on an AT&T cable modem connection --- apparently, as an
anti-spam measure, your ISP blocks mail from all such hosts. This is the
first time I've encountered such a restriction...

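To make the CHAR(255)/CONCAT() suggestion concrete: a hypothetical sketch
using PHP's mysql extension, with placeholder table and column names (not
phpWiki's real schema). The same CONCAT() expression can also be typed
verbatim at the mysql command-line prompt:

    // Sketch only (assumes an open mysql connection): the mysql client
    // has no \xff escape, so build the byte server-side. CHAR(255)
    // yields the single byte 0xFF; CONCAT() splices it into a string.
    $sql = "UPDATE some_table
              SET some_column = CONCAT('left', CHAR(255), 'right')
            WHERE id = 1";
    mysql_query($sql) or die(mysql_error());
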
From: Tony L. <la...@is...> - 2002-10-30 22:11:24

On Mon, 28 Oct 2002, Jeff Dairiki wrote:
> connection --- apparently, as an anti-spam measure, your
> ISP blocks mail from all such hosts. This is the first
> time I've encountered such a restriction...

Thanks for the heads-up. It looks like my ISP's spam filter is
over-zealous; I've disengaged it now (I hope).

Postnuke's code for reading RSS doesn't quite grok the phpWiki
RecentChanges feed, whether the feed is from the phpWiki site on
SourceForge or generated by the module on the ISSHO Kikaku site (running
Postnuke). There are some notes about that here:

http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=MultilingualWikiTalk

The Postnuke folks may come up with a revision of the code that can handle
phpWiki's feed. In the meantime, might there be some older phpWiki code
that I could use to generate something in a shade of 0.9, or otherwise
less state-of-the-art?

Also, note that at least one validator didn't like what it saw in the
current phpWiki feed. Interestingly enough, I learned of this from the
Envolution site (a CMS still quite similar to Postnuke, I think), whose
phpWiki feed _is_ being picked up and parsed a bit better than Postnuke's.
Whether the validator has a point or not, I don't know, but I thought you
might like to know.

-T.L.

From: Jeff D. <da...@da...> - 2002-10-30 22:57:47

There are two nearly completely orthogonal branches of RSS:

 o RSS 0.9x and RSS 2.0 (the "Dave Winer" specs; non-RDF).
 o RSS 1.0 (a subset of RDF).

Note that the numbering scheme is not completely chronological (nor, even,
logical).

> Postnuke's code for reading RSS doesn't quite grok the phpWiki
> RecentChanges feed, whether the feed is from the phpWiki site on
> SourceForge or generated by the module on the ISSHO Kikaku site
> (running Postnuke).

Is Postnuke supposed to be able to grok RSS 1.0?

> In the meantime, might there be some older phpWiki code
> that I could use to generate something in a shade of 0.9,
> or otherwise less state-of-the-art?

Support for RSS 0.91 has been in PhpWiki since 1.3.3, but it looks like
the Postnuke version was forked before that. (It's accessed using a
format=rss091 rather than format=rss query arg.)

> Also, note that at least one validator didn't like
> what it saw in the current phpWiki feed.

At first look, that looks like a 0.9x/2.0-centric validator, so it's no
surprise it doesn't like RSS 1.0. I'll look into it a bit more, though...
Thanks for the heads-up.

From: Jeff D. <da...@da...> - 2002-10-30 23:28:49

> > Also, note that at least one validator didn't like
> > what it saw in the current phpWiki feed.
>
> At first look, that looks like a 0.9x/2.0-centric
> validator, so it's no surprise it doesn't like RSS 1.0.
> I'll look into it a bit more, though... Thanks for the heads-up.

As it turns out, the validator did have a valid (but minor) beef. I doubt
it has anything to do with your Postnuke problems.

(Gory details: <description> is a required element of <channel> in RSS
1.0 --- we (I) were using <dc:description> instead of <description>...)

I just checked in the fixes to CVS.

From: Tony L. <la...@is...> - 2002-10-31 04:14:23

On Wed, 30 Oct 2002, Jeff Dairiki wrote:
> As it turns out, the validator did have a valid (but minor) beef.
> I doubt it has anything to do with your Postnuke problems.
>
> (Gory details: <description> is a required element of <channel>
> in RSS 1.0 --- we (I) were using <dc:description> instead of
> <description>...)
> I just checked in the fixes to CVS.

I saw only the UN_en fix, in a file that does not exist in the PN module
setup. Was there something else, in RssWriter.php perhaps?

> Is Postnuke supposed to be able to grok RSS 1.0?

I hear that it may not. This is what is being used, for your reference:

http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=rss.php

On a different matter: I noticed that some double-byte characters got
chewed up pretty significantly when I used diff earlier today. I had been
lucky up until then. I tried to use page history to generate a URL where
one could see how the characters were being spindled, but ran into a
"cannot access that file directly" nymph. Or maybe it was an imp.

http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=MultilingualWikiTalk

From: Jeff D. <da...@da...> - 2002-10-31 16:27:46

> I saw only the UN_en fix, in a file that does not
> exist in the PN module setup. Was there something
> else, in RssWriter.php perhaps?

lib/plugin/RecentChanges.php

> > Is Postnuke supposed to be able to grok RSS 1.0?
>
> I hear that it may not.

Sounds like that's right. It may be that you can just grab the latest
lib/plugin/RecentChanges.php and lib/RSSWriter091.php (or perhaps the
versions from the 1.3.3 distribution), drop them into your installation,
and have RSS 0.91 support. Lawrence might be able to tell you more...

> On a different matter: I noticed that some double-byte characters
> got chewed up pretty significantly when I used diff earlier today.

Yes, that's probably the word-level diff engine doing that (splitting
words in the middle of multi-byte characters).

A quick fix^H^H^Hhack would be to disable the word-level diffing. I think
you can do that by deleting the method definitions of _pack(), _split()
and _changed() in class HtmlUnifiedDiffFormatter (file lib/diff.php).
(HtmlUnifiedDiffFormatter will then inherit the _changed() method from
UnifiedDiffFormatter.)

It may be possible to fix the word-level diff algorithm so that it doesn't
split up UTF-8 characters (by adjusting the regexp used in
HtmlUnifiedDiffFormatter::_split()), but I haven't had enough coffee yet
today to be able to figure that out... (Of course, the fix would be
trivial if PHP's regexp engine was/were UTF-8 aware...)

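A sketch of roughly what should remain in lib/diff.php after that
deletion, assuming (as Jeff's note implies) that HtmlUnifiedDiffFormatter
extends UnifiedDiffFormatter; the real class has other members that stay
untouched:

    class HtmlUnifiedDiffFormatter extends UnifiedDiffFormatter
    {
        // ... remaining methods unchanged ...

        // _pack(), _split() and _changed() deleted here. _changed() is
        // now inherited from UnifiedDiffFormatter, so diffs fall back
        // to line-level output and multi-byte characters are not split.
    }   // keep this closing brace (see the parse error reported below)
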
From: Lawrence A. <la...@us...> - 2002-10-31 17:03:38

Quoting Jeff Dairiki <da...@da...>:
> It may be that you can just grab the latest lib/plugin/RecentChanges.php
> and lib/RSSWriter091.php (or perhaps the versions from the 1.3.3
> distribution), drop them into your installation, and have RSS 0.91
> support. Lawrence might be able to tell you more...

Tony, I've already emailed you about this, but for anyone else reading
this: no, you can't. You need the as-yet-unreleased-but-written-(honest)
fix files, which I will probably get around to releasing on the Postnuke
site in the next week or so. For some reason, a number of people have
requested this recently, so I've actually done something about it (for a
change).

Lawrence

From: Tony L. <la...@is...> - 2002-10-31 21:04:31

Hey there, Jeff.

On Thu, 31 Oct 2002, Jeff Dairiki wrote:
> Yes, that's probably the word-level diff engine doing that (splitting
> words in the middle of multi-byte characters).
>
> A quick fix^H^H^Hhack would be to disable the word-level diffing.
>
> I think you can do that by deleting the method definitions of _pack(),
> _split() and _changed() in class HtmlUnifiedDiffFormatter (file
> lib/diff.php). (HtmlUnifiedDiffFormatter will then inherit the
> _changed() method from UnifiedDiffFormatter.)

Thanks. Not that easy, it seems. I tried that, but this error was
generated when trying to diff:

    Parse error: parse error, expecting `T_OLD_FUNCTION' or `T_FUNCTION'
    or `T_VAR' or `'}'' in
    /web/premium/www.issho.org/modules/phpWiki/lib/diff.php on line 69

    Fatal error: Call to undefined function: showdiff() in
    /web/premium/www.issho.org/modules/phpWiki/lib/main.php on line 139

On Thu, 31 Oct 2002, Lawrence Akka wrote:
> No, you can't. You need the as-yet-unreleased-but-written-(honest) fix
> files, which I will probably get around to releasing on the Postnuke
> site in the next week or so. For some reason, a number of people have
> requested this recently, so I've actually done something about it (for
> a change).

Super, Lawrence. Looking forward to it! Thanks.

From: Jeff D. <da...@da...> - 2002-10-31 21:26:57

On Fri, 1 Nov 2002 06:08:49 +0900 (JST)
Tony Laszlo <la...@is...> wrote:
> Thanks. Not that easy, it seems. I tried that, but this error was
> generated when trying to diff:
>
> Parse error: parse error, expecting `T_OLD_FUNCTION' or `T_FUNCTION'
> or `T_VAR' or `'}'' in
> /web/premium/www.issho.org/modules/phpWiki/lib/diff.php on line 69

I suspect you deleted one too many closing curly braces. Make sure the
closing brace for the class definition is there.

If you're still stuck, send me your diff.php (via private e-mail).

From: Tony L. <la...@is...> - 2002-10-31 21:36:57

On Thu, 31 Oct 2002, Jeff Dairiki wrote:
> > Parse error: parse error, expecting `T_OLD_FUNCTION' or `T_FUNCTION'
> > or `T_VAR' or `'}'' in
> > /web/premium/www.issho.org/modules/phpWiki/lib/diff.php on line 69
>
> I suspect you deleted one too many closing curly braces.
> Make sure the closing brace for the class definition is there.

Silly of me. In fact, it cleared up the problem, at least in this
particular instance:

http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=MultilingualWikiTalk&action=diff&version=18&previous=major

Thank you!

From: Tony L. <la...@is...> - 2002-11-17 06:41:46

Beautiful work, folks. phpWiki has really done wonders for the ISSHO
Kikaku site.

On Thu, 31 Oct 2002 17:03:26 +0000, Lawrence Akka wrote:
> > It may be that you can just grab the latest
> > lib/plugin/RecentChanges.php and lib/RSSWriter091.php
>
> No, you can't. You need the as-yet-unreleased-but-written-(honest) fix
> files, which I will probably get around to releasing on the Postnuke
> site in the next week or so. For some reason, a number of people have
> requested this recently, so I've actually done something about it (for
> a change).

* Fix files for PN

Please send a pointer; very much looking forward to that fix.

* RSS beauty is in the eye of the aggregator(?)

As noted here:

http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=WikipnRss

MyHeadlines 4.0.7 (a CMS module that handles RSS feeds) could only display
the logo of the wiki RecentChanges feeds, including that from phpWiki. The
new version, 4.1, treats the feed as "bad"; one can't even display the
logo. That probably says a lot more about the shortcomings of that program
than of the wiki, but I thought I would mention it.

Very Best,
Tony Laszlo, Tokyo
Journalist
http://www.issho.org/laszlo.html

From: Tony L. <la...@is...> - 2002-11-17 09:14:49

On Sun, 17 Nov 2002, Tony Laszlo wrote:
> * RSS beauty is in the eye of the aggregator(?)
> http://www.issho.org/modules.php?op=modload&name=phpWiki&file=index&pagename=WikipnRss

...syndic8 also finds fault with the phpWiki RecentChanges feed from the
ISSHO site... :( That's a bit of a bummer. I note that the feed from
stand-alone phpWiki was approved recently. :)

Sounds like the module's RSS feed issue is not only a matter of RSS 0.9
vs. 1.0. It looks like something is not right with the feed that's being
generated...

From: Jeff D. <da...@da...> - 2002-11-18 05:08:24

> ...syndic8 also finds fault with the phpWiki RecentChanges feed from
> the ISSHO site... :( That's a bit of a bummer.

Hi Tony,

I think this patch fixes the warning:

http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/phpwiki/phpwiki/lib/plugin/RecentChanges.php.diff?r1=1.67&r2=1.68

(Just change 'dc:description' to 'description' in
lib/plugin/RecentChanges.php.)

(I don't think that's what's causing the rest of your troubles with PN...)

> Sounds like the module's RSS feed issue is not only a matter of RSS
> 0.9 vs. 1.0.

How so?

From: <la...@us...> - 2002-10-31 12:22:28

I looked into this myself for Tony a couple of days ago (or yesterday -
they all seem the same...). For the record: Postnuke can't (yet) do RSS
1.0, and doesn't even do RSS 0.91 very well. So it's not a phpWiki problem
(phew).

Also, is there strictly anything wrong with using dc:description instead
of description? I thought that the first was just a fully
namespace-qualified version of the latter. The parser ought to be able to
cope with it.

Lawrence

Quoting Jeff Dairiki <da...@da...>:
> > > Also, note that at least one validator didn't like
> > > what it saw in the current phpWiki feed.
> >
> > At first look, that looks like a 0.9x/2.0-centric
> > validator, so it's no surprise it doesn't like RSS 1.0.
> > I'll look into it a bit more, though... Thanks for the heads-up.
>
> As it turns out, the validator did have a valid (but minor) beef.
> I doubt it has anything to do with your Postnuke problems.
>
> (Gory details: <description> is a required element of <channel>
> in RSS 1.0 --- we (I) were using <dc:description> instead of
> <description>...)
>
> I just checked in the fixes to CVS.

From: Jeff D. <da...@da...> - 2002-10-31 16:01:38

> Also, is there strictly anything wrong with using dc:description
> instead of description? I thought that the first was just a fully
> namespace-qualified version of the latter. The parser ought to be able
> to cope with it.

No, they're different. The default namespace is rss
(http://purl.org/rss/1.0/); dc ("Dublin Core",
http://purl.org/dc/elements/1.1/) is a different namespace, so
<dc:description> and <description> are different elements. I am, however,
clueless as to what, precisely, that implies. It seems reasonable, however
(and the validator concurs), that if we include only a single
'description', it ought to be <rss:description> rather than
<dc:description>.
