From: Didier B. <di...@br...> - 2003-02-24 07:31:04
|
Hello, I am trying to install phpwiki-1.3.4 on http://d.bretin.free.fr/phpwiki/index.php and I get this error:

FileFinder.php:403: Warning[2]: php_uname() has been disabled for security reasons
Request.php:240: Warning[2]: Cannot send session cookie - headers already sent by (output started at /var/www/free.fr/3/d/b/r/d.bretin/phpwiki/lib/XmlElement.php:239)
Request.php:240: Warning[2]: Cannot send session cache limiter - headers already sent (output started at /var/www/free.fr/3/d/b/r/d.bretin/phpwiki/lib/XmlElement.php:239)
display.php:135: Warning[2]: Cannot add header information - headers already sent by (output started at /var/www/free.fr/3/d/b/r/d.bretin/phpwiki/lib/XmlElement.php:239)

And when I want to modify a page, it doesn't work :o(. Does the server need something special for the 1.3.4 version? Regards. -- Didier BRETIN di...@br... http://www.bretin.net/ ICQ: 46032186 |
From: Jeff D. <da...@da...> - 2003-02-24 02:12:02
|
> file as database. Ahh. I thought you had said you were using dba, so that's what I was testing with. I hadn't tried the flat-file backend at all until now. The problem (at least the TitleSearch problem) is specific to the flat-file backend.... I think I've fixed that problem now. That might have fixed some of the HTML dump problems too, but I think some problems will remain with filenames/urls... Anyhow, give it a try when you get the chance... There's a problem with using urlencoding to generate filenames for HTML output. That's because: how do you link to filenames with '%'s in them? Well ... it depends on whether you're going through a webserver, or just getting the files off of local disk. I think the answer is to switch to some other encoding scheme for filenames (probably of our own devising). That would get around the '/' in filenames problem too. Anyhoo, I'm probably not going to get to looking at it for a bit. If someone else wants to take a crack at it, feel free. (Also, the HTML dumps fail horribly if USE_PATH_INFO is false.) |
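The encoding problem described above can be sketched outside PHP. The following Python sketch is illustrative only: it is not PhpWiki's eventual scheme, and the `=XX` escape convention is an arbitrary choice. It shows one reversible filename encoding that never emits `%` or `/`, so the resulting files can be linked both through a webserver and from local disk:

```python
# Sketch of a custom, reversible filename encoding for HTML dump output.
# Hypothetical scheme for illustration: '=' is the escape character, so
# '%' and '/' never appear in the result.

SAFE = set("abcdefghijklmnopqrstuvwxyz"
           "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
           "0123456789-_.")

def encode_pagename(name):
    """Escape every unsafe byte as '=XX' (two hex digits)."""
    out = []
    for byte in name.encode("utf-8"):
        ch = chr(byte)
        out.append(ch if ch in SAFE else "=%02X" % byte)
    return "".join(out)

def decode_pagename(encoded):
    """Invert encode_pagename()."""
    out = bytearray()
    i = 0
    while i < len(encoded):
        if encoded[i] == "=":
            out.append(int(encoded[i + 1:i + 3], 16))
            i += 3
        else:
            out.append(ord(encoded[i]))
            i += 1
    return out.decode("utf-8")
```

Because `=` itself is escaped (to `=3D`), the mapping stays unambiguous and round-trips cleanly, which is exactly what the `%`-based scheme fails to do once a second encoding pass gets involved.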
From: Jeff D. <da...@da...> - 2003-02-24 01:19:24
|
On Sun, 23 Feb 2003 18:32:57 -0500 Carsten Klapp <car...@us...> wrote:

> (Can fmt() be used inside of trigger_error? Removing that at least
> should fix the "Object to string conversion" problem and display the
> real "can't set locale" error.)

Doh! Of course! (I was thinking that $loc was the offending object.) I've just played around with PHP a bit, and it seems that PHP always coerces the first argument of trigger_error to a string. I've just changed that fmt to a sprintf() (in CVS). Now maybe Johannes (or whoever else is seeing the warning) can see if the error message becomes any more informative... |
From: Klaus - G. L. <Le...@we...> - 2003-02-23 23:03:59
|
> I can't duplicate that here. I created a MESH+CreditUnions page,
> and it works fine for me (I haven't tried an (X)HTML dump yet,
> but it zips fine, and shows up in a TitleSearch just fine.)

The normal functions, browsing, editing and so on, are OK. I didn't try a zip yet, since I am only importing from a 1.2.2 wiki. The zipped (X)HTML did give the difficulties.

> What platform & PHP version are you running on?

Suse Linux 8.1, Apache 1.3.26-82, mod-php 4.2.2-82, file as database. But if I remember correctly I also tried dba once, to verify it is not purely related to the file database, and got the same problem: warnings in the output during the creation of the web pages. But I can't remember with which version, and I didn't look at the pages. I could try it again with dba if you want. Maybe it is a matter of having more than one such file. Try to import the wiki_sample.zip that I sent privately to you. If you have other suggestions I will try them tomorrow after work. Klaus Leiss |
From: Carsten K. <car...@us...> - 2003-02-23 23:00:40
|
(Sorry I meant to say PHP 4.3.0 not 3.0 from Marc Liyanage's web site <http://www.entropy.ch/software/macosx/php/>). |
From: Jeff D. <da...@da...> - 2003-02-23 22:27:01
|
> You are partly right: some are there under their original edit date.
> I'm not sure if that is a bug or a feature; what would happen if
> the page exists in the wiki and I then restore the page?

If the page already exists in the wiki, the modification time of the new version is adjusted to ensure that revision modification times are monotonic. (I can't remember off-hand whether the current time is used, or just the time of the most-recent revision.) ... At least, that's what (I think) is supposed to happen...

> This is the output of FindPage (title search):
>
> * ?MESH CreditUnions
> * ?MESH Energy
> * ?MESH EnergyMovement
> * ?MESH Quarternary
> * ?MESH Quintinary
> * ?MESH Secondary
> * ?MESH Tertiary
> * MESH=Energy
> * ?MESH Tech

I can't duplicate that here. I created a MESH+CreditUnions page, and it works fine for me (I haven't tried an (X)HTML dump yet, but it zips fine, and shows up in a TitleSearch just fine.) What platform & PHP version are you running on? |
From: Carsten K. <car...@us...> - 2003-02-23 21:54:20
|
I have been having the same problem too in 10.2.4. For now I have just been suppressing that error by adding "@", and everything seems to work fine anyway.

@trigger_error(fmt("Can't set locale: '%s'", $loc), E_USER_NOTICE);

Looks like Mac OS X's gettext may still have some issues, although I am using the latest version from Fink, so I'm not sure what's happening. Also had to revert to PHP3.0pre1; the final release version from Marc Liyanage's web site is always crashing for me now with the latest PhpWiki from CVS. Carsten

On Sunday, February 23, 2003, at 03:05 pm, Bill Whitacre wrote:

> I get the same warnings after 'updating' my Mac OS X version to 10.2.4
> and doing a
>
> mv /etc/httpd/httpd.conf.applesaved /etc/httpd/httpd.conf
>
> to get back to the 'old' httpd.conf file.
>
> I DID NOT have these warnings with my original installation under 10.2.3
>
> Things still seem to work OK though.
>
> bw
>
> On Friday, February 21, 2003, at 12:58 PM, Johannes Rumpf wrote:
>
>> I got the following Warnings in my Wiki:
>>
>> PHP Warnings
>> lib/config.php:145: Notice[8]: Object to string conversion
>> lib/config.php:145: Notice[1024]: Object
>>
>> I'd experimented with the authorisations, but I didn't get it to work.
>>
>> So if you've got some suggestions, I'll be a happy man...
>>
>> Joe |
From: Klaus - G. L. <Le...@we...> - 2003-02-23 21:36:15
|
> Hi Klaus,
>
> Are you sure they're not there? The pages are probably listed
> under their original edit date (i.e. when they were last edited
> on the 1.2.2 wiki). Are you sure you looked far enough back in
> the RecentChanges? (Try http://path.to.your/wiki/RecentChanges?days=-1)
>
> Jeff

You are partly right: some are there under their original edit date. I'm not sure if that is a bug or a feature; what would happen if the page already exists in the wiki and I then restore the page? Would that show in the RecentChanges of the day? If yes, it is a feature to me, else a bug. But now to the missing pages in the RecentChanges. These are all pages that have a "+" in the page name; since they did not show on the RecentChanges I did a search. This is the output of FindPage (title search):

* ?MESH CreditUnions
* ?MESH Energy
* ?MESH EnergyMovement
* ?MESH Quarternary
* ?MESH Quintinary
* ?MESH Secondary
* ?MESH Tertiary
* MESH=Energy
* ?MESH Tech

It shows the same problem as the HTML dump: the + from the title is changed to a space. In the source of the page one gets

<span class="wikiunknown"> <a href="MESH%20CreditUnions?action=edit"

The existing pages are not found, but I can follow the links from pages in the wiki to them. They are also in the wiki. I have nothing against mangling page names, but it should be consistent in all parts of the wiki. Klaus Leiss |
From: Jean-Philippe G. <jpg...@ou...> - 2003-02-23 20:56:54
|
Jeff Dairiki <da...@da...> a écrit :

> On 23 Feb 2003 20:08:14 +0100
> Jean-Philippe Georget <jpg...@ou...> wrote:
>
> > I didn't see how to tell my browser (Mozilla 1.0 Linux) to use the
> > "Printer" stylesheet. It seems that this option isn't very common for
> > browsers.

[...]

> The option to include a button or link to a printable version of the page
> is a good idea. (However, I'm probably not going to work on it right
> now.)

I understand that it's not a priority. I see a patch called "Fix for default printer stylesheet": http://sourceforge.net/tracker/index.php?func=detail&aid=669563&group_id=6121&atid=306121 I'm not a programmer, but perhaps it's interesting. -- Jean-Philippe Georget jpg...@ou... - http://jpgeorget.ouvaton.org/ |
From: Jean-Philippe G. <jpg...@ou...> - 2003-02-23 20:16:37
|
Jeff Dairiki <da...@da...> a écrit :

> On 23 Feb 2003 20:08:14 +0100
> Jean-Philippe Georget <jpg...@ou...> wrote:
>
> > I didn't see how to tell my browser (Mozilla 1.0 Linux) to use the
> > "Printer" stylesheet. It seems that this option isn't very common for
> > browsers.
>
> I've only got Mozilla 1.2 (Linux) installed right now.
> It's under the pulldown "View->Use Style->Printer".
> I'm pretty sure Moz 1.0 had the same control, but I could be wrong.

You're right. I searched in Edit/Preferences... :-(

[...]

-- Jean-Philippe Georget jpg...@ou... - http://jpgeorget.ouvaton.org/ |
From: Bill W. <bw...@hi...> - 2003-02-23 20:06:02
|
I get the same warnings after 'updating' my Mac OS X version to 10.2.4 and doing a

mv /etc/httpd/httpd.conf.applesaved /etc/httpd/httpd.conf

to get back to the 'old' httpd.conf file. I DID NOT have these warnings with my original installation under 10.2.3. Things still seem to work OK though. bw

---

On Friday, February 21, 2003, at 12:58 PM, Johannes Rumpf wrote:

> I got the following Warnings in my Wiki:
>
> PHP Warnings
> lib/config.php:145: Notice[8]: Object to string conversion
> lib/config.php:145: Notice[1024]: Object
>
> I'd experimented with the authorisations, but I didn't get it to work.
>
> So if you've got some suggestions, I'll be a happy man...
>
> Joe |
From: Jeff D. <da...@da...> - 2003-02-23 19:45:25
|
On 23 Feb 2003 20:08:14 +0100 Jean-Philippe Georget <jpg...@ou...> wrote:

> I didn't see how to tell my browser (Mozilla 1.0 Linux) to use the
> "Printer" stylesheet. It seems that this option isn't very common for
> browsers.

I've only got Mozilla 1.2 (Linux) installed right now. It's under the pulldown "View->Use Style->Printer". I'm pretty sure Moz 1.0 had the same control, but I could be wrong. The option to include a button or link to a printable version of the page is a good idea. (However, I'm probably not going to work on it right now.) |
From: Jean-Philippe G. <jpg...@ou...> - 2003-02-23 19:15:51
|
Jeff Dairiki <da...@da...> a écrit :

> > Is there an easy way of having a version of a page for printing (for
> > example, without the buttons for edit, diff...)?
>
> Telling your browser to use the "Printer" stylesheet is supposed to do
> exactly that --- but I see that at the moment, that doesn't seem to
> work.
>
> I guess your note should be considered a bug report...

Why not, but perhaps there is a simple way of having a button for a "ready to print" version in the phpwiki interface, as I can see on some web sites. I didn't see how to tell my browser (Mozilla 1.0 Linux) to use the "Printer" stylesheet. It seems that this option isn't very common for browsers. -- Jean-Philippe Georget jpg...@ou... - http://jpgeorget.ouvaton.org/ |
From: Jeff D. <da...@da...> - 2003-02-23 18:46:08
|
On Sun, 23 Feb 2003 01:06:41 +0100 Martin Geisler <gim...@gi...> wrote:

> Jeff Dairiki <da...@da...> writes:
> > One of these days, (after user-auth and other things have
> > stabilized) it might be good to refactor the backends and SQL schema
> > a bit.
>
> I think that would be nice. I noticed how the database is locked and
> unlocked with each operation, even on backends like PostgreSQL (and
> now also MySQL with InnoDB) that support transactions. This seems to
> be something that could benefit from a cleanup.

(Of course one of the big headaches is to optimize the schema while keeping the backend API general enough that we can write and maintain the non-SQL backends (dba, flat-file) as well. Also, it's good to share code as much as possible between the different flavors of SQL, otherwise we get all kinds of funny bugs creeping into the lesser-used backends...)

> A similar thing would be a plugin that generates an index like the
> ones you find in the back of most books. I'm not sure if this can be
> done automatically, but the plugin could skip words that appear on
> more than perhaps 10% of the pages or something like that. And it
> should also skip words on a stoplist.

That's an interesting idea. I'm beginning to think about a more general API to allow caching of plugin output (this would be integrated with the caching of marked-up page content). Once that's in place, a plugin like that would be viable. (Until then... it's worth playing with, but is going to be slow.) A related idea would be a way to manually enter search terms on pages, e.g. something like "<?plugin Keywords platypus, funny animals ?>". These could be used to form a real book-style index, and to generate a meta keywords tag for search engines... Basically the same as Category pages, I guess... (Maybe we should generate a keywords meta tag from Category links on each page?) Okay, so now I'm just rambling.... |
From: Jeff D. <da...@da...> - 2003-02-23 18:16:36
|
> I would like to use phpwiki-1.3.4 for my own private site. I don't
> want to use phpwiki for its sharing feature; I would like to use it
> because it would be easy for me to maintain and to build my web site.
> So I would like to know if it is possible to block ALL the pages for
> modification except for me, of course :o). Perhaps I need to lock each
> page one by one?

You might try setting (in index.php):

ALLOW_BOGO_LOGIN to false.
REQUIRE_SIGNIN_BEFORE_EDIT to true.

That used to be sufficient to ensure that only the admin (with the admin password) could edit pages (regardless of whether they were "locked" or not). I'm not completely sure that it still works --- so you'll need to play with it a bit. Locking pages one by one won't really work (at least without hacking code), since people will still be able to create pages (i.e. edit non-existent pages). Another (untested) idea is to use the authentication mechanisms of Apache (or whatever webserver you use) to keep anonymous people from being able to POST to the wiki. (I think any operation that modifies the wiki is done via POST, while most or all of the non-modifying functions (e.g. searching, RecentChanges) are done via GET.) |
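The webserver-level idea sketched above might look roughly like this for Apache 1.3 (untested, as Jeff says; the directory path, realm name, and password file are placeholders, not anything PhpWiki ships):

```apache
# Hypothetical snippet: require a login only for POST requests (edits),
# leaving read-only GETs open to everyone.
<Directory /var/www/phpwiki>
    <Limit POST>
        AuthType Basic
        AuthName "Wiki editing"
        AuthUserFile /etc/httpd/wiki.passwd
        Require valid-user
    </Limit>
</Directory>
```

The caveat in the message applies: this only helps if every modifying operation really goes through POST.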
From: Didier B. <di...@br...> - 2003-02-23 15:54:24
|
Hello, I would like to use phpwiki-1.3.4 for my own private site. I don't want to use phpwiki for its sharing feature; I would like to use it because it would be easy for me to maintain and to build my web site. So I would like to know if it is possible to block ALL the pages for modification except for me, of course :o). Perhaps I need to lock each page one by one? Regards. -- Didier BRETIN di...@br... http://www.bretin.net ICQ: 46032186 |
From: Klaus - G. L. <Le...@we...> - 2003-02-23 15:04:09
|
Hello, I have found 3 things that appear to me to be errors.

1. The XHTML dump seems to have been broken since at least 31.01.2003. I tried it with three nightly snapshots: 31.01.2003, 17.02.2003, 22.02.2003. I used file and dba as database and get some errors with a virgin database. I don't know if 22.02.2003 is better, since there the page InterWikiSearch is broken. The wiki dumps pages until InterWikiSearch, and then comes part of the page and the following error message:

Fatal error: Class searcableinterwikimappagetype: Cannot inherit from undefined class interwikimappagetype in /.../lib/plugin/InterWikiSearch.php on line 63

In the older versions most pages will be dumped, but not all. There seems to be an error with non-alpha characters in page names during the dump. I noticed the error since some newer pages in my wiki contain page names with characters like +{}. Some got dumped, but the filenames and/or links were wrong. I will write a second message to clarify this a bit.

2. The broken InterWikiSearch page.

3. If one uses file or dba there is a warning: lib/config.php:405: Notice[8]: Undefined index: dsn. I have set it to a dummy value and it seems to run afterwards, but I don't know if there are other side effects.

I did not investigate further since I know at the moment very little about PHP and phpwiki. So I have a question: is there something like a debugger for PHP where I could do traces and so on? Klaus Leiss |
From: Klaus - G. L. <Le...@we...> - 2003-02-23 15:04:08
|
This is the second mail I promised in Bug Report 1. I restored some pages from a 1.2.2 page dump to a virgin 1.3.4 wiki (nightly 20030131). Apart from the problems I described with non-alpha chars, I noticed that these pages are not on the RecentChanges page. I'm not sure if this is a feature or a bug. But now to the problem with non-alpha chars in page names. The page names had been:

MESH+CreditUnions
MESH+Energy
MESH+EnergyMovement
MESH+Quarternary
MESH+Quintinary
MESH+Secondary
MESH+Tertiary
MESH=Energy
MESH+Tech

stored as

page_data/MESH%2BCreditUnions
page_data/MESH%2BEnergy
page_data/MESH%2BEnergyMovement
page_data/MESH%2BQuarternary
page_data/MESH%2BQuintinary
page_data/MESH%2BSecondary
page_data/MESH%2BTertiary
page_data/MESH%3DEnergy
page_data/MESH%2BTech

Output during the XHTML dump:

MESH CreditUnions ... saved as MESH%20CreditUnions.html ... Object
MESH Energy ... saved as MESH%20Energy.html ... Object
MESH EnergyMovement ... saved as MESH%20EnergyMovement.html ... Object
MESH Quarternary ... saved as MESH%20Quarternary.html ... Object
MESH Quintinary ... saved as MESH%20Quintinary.html ... Object
MESH Secondary ... saved as MESH%20Secondary.html ... Object
MESH Tertiary ... saved as MESH%20Tertiary.html ... Object
MESH=Energy ... saved as MESH%3DEnergy.html ... Object
MESH Tech ... saved as MESH%20Tech.html ... Object

some of the resulting filenames:

MESH%20Tech.html
MESH%3DEnergy.html

links in the page that links to the above pages:

<a href="MESH%252BEnergy.html" class="named-wiki" title="MESH+Energy">
<a href="MESH%252BTech.html" class="named-wiki" title="MESH+Tech">

Here the %2B of the original filename is escaped to %252B, since %25 is %. If I do a zip dump there are files missing. I assume the reason for this is the other errors I described in the other mail. If I open the archive with a text editor, there are error messages interspersed with the zip content. Klaus Leiss |
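The `%252B` links reported above are the signature of double URL-encoding: the page name is percent-encoded once to build the filename on disk, and the already-encoded filename is then percent-encoded again when the link is generated. A small Python illustration of the effect (not the PhpWiki code itself):

```python
# Demonstrate the double-encoding bug: encoding an already-encoded
# name escapes the '%' itself to '%25'.
from urllib.parse import quote

page = "MESH+Tech"
filename = quote(page, safe="")   # 'MESH%2BTech'   -- the file on disk
href = quote(filename, safe="")   # 'MESH%252BTech' -- the broken link
```

The link points at `MESH%252BTech.html`, but the file that was written is `MESH%2BTech.html`, so the reference dangles.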
From: Martin G. <gim...@gi...> - 2003-02-23 02:54:26
|
Jeff Dairiki <da...@da...> writes:

(Shouldn't that reply go to the list? If so, then you can just post your reply to this mail there...)

>> Also, the LIKE search is expensive if it cannot use an index. The
>> manual for MySQL actually says that it can't:
>
> Interesting (& good) points. I still suspect it's a fairly large
> performance win to have mysql do the iterative search rather than doing
> it in PHP. (You avoid the mysql->PHP communication and the mysql code,
> while the same basic search algorithm, is written in C rather
> than PHP.)

Yes, that should give MySQL better performance... It also depends on where the MySQL and web server are located --- on the same machine, so that they can communicate using a local unix socket, or on different machines, which means real network traffic...

> On the other hand, maybe PhpWiki over-all is slow enough that the
> difference is unimportant. I guess it would take some experiments to
> find out. (The answer, I suspect, depends heavily on the size of
> the wiki, too...)

Yes, I'm not sure either that it would make much of a difference with all those other lines of PHP code that are executed all the time :-) I've played a bit with the new full-text index in MySQL and it works OK. It was just a quick hack where I stored the original search query in the TextSearchQuery class and then added a text_search() to the WikiDB_backend_mysql class. So the highlighting is wrong.

> Anyhow, none of that is currently high on my list of priorities...
>
> One of these days, (after user-auth and other things have
> stabilized) it might be good to refactor the backends and SQL schema
> a bit.

I think that would be nice. I noticed how the database is locked and unlocked with each operation, even on backends like PostgreSQL (and now also MySQL with InnoDB) that support transactions. This seems to be something that could benefit from a cleanup.

> (Pagetype and the cached markup should each be in their own column,
> rather than stored in the general meta-data hash.) But that's for
> some other time...

Yes, there's plenty of other things to hack on :-)

> (Another project would be to implement a real word index, so that
> the SQL searches would be indexed.)

That sounds like a good idea --- the more work we can do when saving the pages, the better. A similar thing would be a plugin that generates an index like the ones you find in the back of most books. I'm not sure if this can be done automatically, but the plugin could skip words that appear on more than perhaps 10% of the pages or something like that. And it should also skip words on a stoplist. Such a plugin would be cool for fast, exported sites where you cannot do a dynamic search. I'm beginning to think of PhpWiki as a tool that can be used to quickly build static sites: it's quick and easy to update the contents, and the linking capabilities are great. And the look of everything is controlled by the template system. -- Martin Geisler My GnuPG Key: 0xF7F6B57B See http://gimpster.com/ and http://phpweather.net/ for: PHP Weather => Shows the current weather on your webpage and PHP Shell => A telnet-connection (almost :-) in a PHP page. |
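For reference, the "quick hack" full-text experiment mentioned above might look roughly like this in SQL. The table and column names are guesses for illustration, not PhpWiki's actual schema; MySQL's FULLTEXT indexes require MyISAM tables (and were introduced in the 3.23 series):

```sql
-- Build a full-text index once, paying the cost at page-save time:
ALTER TABLE page ADD FULLTEXT INDEX ft_page (pagename, content);

-- An indexed search, replacing the unindexable LIKE '%word%' scan:
SELECT pagename
  FROM page
 WHERE MATCH (pagename, content) AGAINST ('hello world');
```

As Martin notes, MATCH ... AGAINST returns its own relevance-ranked matches, so the PHP-side highlighting logic (which assumes the LIKE semantics) no longer lines up with what was actually matched.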
From: Aredridel <are...@nb...> - 2003-02-22 17:29:06
|
On Fri, Feb 21, 2003 at 11:40:25PM -0500, Carsten Klapp wrote: > I just read about some new improvements to PHP. Apparently as of PHP > 4.3.0, the option --enable-mbstring is the default which means mbstring > functions should be present unless PHP is explicitly compiled otherwise. > > The good news here is that various PHP string/regex functions (if I > read this correctly) can be automatically overloaded with multi-byte > string functions: The standard posix functions are, but pcre UTF-8 is still really dubious. I've enabled it in the wiki I'm maintaining, which is fully UTF-8, and it doesn't work perfectly yet. All in all, UTF-8 is the best internal encoding -- most search algorithms don't have to be modified, and false-positives with a non-UTF aware search are very infrequent. Any database that can handle ISO8859-1 can handle UTF-8 for storage, so that's not really an issue. Browsers don't always send UTF-8 /back/ when you request it, that's one major issue. You do have to validate form submission. Ari |
From: Martin G. <gim...@gi...> - 2003-02-22 10:12:15
|
Jeff Dairiki <da...@da...> writes:

>> Could you store the records as binary data?
>
> Yes, but then you can't use the database text-search functionality
> (which would be a big performance penalty.) (E.g. you can't do
> case-insensitive searches on binary data.) This is {only,primarily}
> an issue with the SQL databases, of course, since file and dbm
> database don't have text search ability (so PhpWiki has to iterate
> over each page itself to do the search.)

Isn't this almost what the database has to do today? A FullTextSearch for 'Hello World' uses this SQL as the $search_clause:

(LOWER(pagename) LIKE '%hello%' OR content LIKE '%hello%')
AND (LOWER(pagename) LIKE '%world%' OR content LIKE '%world%')

I don't think any database will be able to optimize such a query very much, because it contains the computed value LOWER(pagename), which means that an index for pagename probably cannot be used. Also, the LIKE search is expensive if it cannot use an index. The manual for MySQL actually says that it can't:

  The following SELECT statements will not use indexes:

  mysql> SELECT * FROM tbl_name WHERE key_col LIKE "%Patrick%";
  mysql> SELECT * FROM tbl_name WHERE key_col LIKE other_col;

  In the first statement, the LIKE value begins with a wildcard
  character. In the second statement, the LIKE value is not a constant.

That was from http://www.mysql.com/doc/en/MySQL_indexes.html#IDX879. When the database is done searching through all the text in the wiki, the result is processed further with regular expressions to do the highlighting... So perhaps it would be almost as fast if we retrieved all the text from the database, and then searched and highlighted in one step? Then we could store the data in UTF-8 in the database (which would be extremely cool!), because it no longer has to deal with the data, just store it for us.

I agree that this sounds a little ugly --- databases are meant to speed up such searches through large masses of data, but since the current code already forces the database to do a slow search, and we then also search (highlight) ourselves afterwards, perhaps it isn't that much uglier than the current scheme... -- Martin Geisler My GnuPG Key: 0xF7F6B57B See http://gimpster.com/ and http://phpweather.net/ for: PHP Weather => Shows the current weather on your webpage and PHP Shell => A telnet-connection (almost :-) in a PHP page. |
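The "retrieve everything and search ourselves" idea can be sketched in a few lines. This is an illustration in Python rather than PHP, and it holds the pages in a dict for simplicity; real code would stream pages from the backend:

```python
import re

def search_and_highlight(pages, terms):
    """Case-insensitive AND-search over all pages, with highlighting.

    'pages' maps page name -> page text.  A single pass does both the
    matching and the highlighting, instead of a LIKE query followed by
    a separate regex pass over the results.
    """
    word = re.compile("|".join(map(re.escape, terms)), re.IGNORECASE)
    results = {}
    for name, text in pages.items():
        haystack = name + "\n" + text
        # Every term must appear somewhere in the name or body (AND).
        if all(re.search(re.escape(t), haystack, re.IGNORECASE)
               for t in terms):
            results[name] = word.sub(lambda m: "<b>%s</b>" % m.group(0),
                                     text)
    return results
```

Because the same code decides both the match and the highlight, the two can never disagree, which is exactly the consistency problem the two-phase SQL approach has.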
From: Carsten K. <car...@us...> - 2003-02-22 04:40:23
|
Using UTF-8 would be the ideal solution. PostgreSQL support for UTF-8 worked well last time I tried it. Only MySQL newer than 4.1.x (alpha) has UTF-8, but I think most people still use 3.23, including myself. I just read about some new improvements to PHP. Apparently as of PHP 4.3.0, the option --enable-mbstring is the default, which means mbstring functions should be present unless PHP is explicitly compiled otherwise. The good news here is that various PHP string/regex functions (if I read this correctly) can be automatically overloaded with multi-byte string functions: http://www.php.net/manual/en/ref.mbstring.php

> The multibyte extension (mbstring) also supports 'function overloading' to
> add multibyte string functionality without code modification. Using
> function overloading, some PHP string functions will be overloaded with
> multibyte string functions. For example, mb_substr() is called instead
> of substr() if function overloading is enabled. Function overloading
> makes it easy to port an application supporting only single-byte
> encodings to a multibyte application. mbstring.func_overload in php.ini
> should be set to some positive value to use function overloading.

Carsten

On Friday, February 21, 2003, at 04:09 pm, Jeff Dairiki wrote:

> The real solution to that is to switch to using UTF-8 internally,
> so that we can store all those nice characters in a uniform manner.
> The problem is that MySQL and PHP support for this is not
> (last I checked) good (or universal) enough to do this. |
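For anyone wanting to try this, the php.ini settings involved would presumably look like the following sketch. The bitmask values come from the manual page cited above (1 = mail functions, 2 = string functions, 4 = regex functions), and this requires a PHP built with mbstring (the 4.3.0 default):

```ini
; Overload single-byte string/regex functions with their mb_* versions.
; 7 = 1 + 2 + 4 enables all three groups.
mbstring.func_overload = 7
mbstring.internal_encoding = UTF-8
```

Note that overloading applies globally to the whole PHP installation, so other applications on the same server would be affected too.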
From: Jeff D. <da...@da...> - 2003-02-21 23:11:46
|
> Could you store the records as binary data. Yes, but then you can't use the database text-search functionality (which would be a big performance penalty.) (E.g. you can't do case-insensitive searches on binary data.) This is {only,primarily} an issue with the SQL databases, of course, since file and dbm database don't have text search ability (so PhpWiki has to iterate over each page itself to do the search.) |
From: Klaus - G. L. <Le...@we...> - 2003-02-21 22:04:19
|
> On Fri, 21 Feb 2003 21:33:51 +0100
> "Klaus - Guenter Leiss" <Le...@we...> wrote:
>
> For form input, (like when editing a page) phpwiki sets
> the accept-charset attribute of the <form> tag. This should
> make browsers submit the form input in the proper character set.
>
> (Probably there are some forms that phpwiki generates which
> don't have the proper accept-charset attribute. That's a phpwiki
> bug... Report those if you find them.)

As I mentioned in my post, I have no experience with phpwiki; another system had this error. If I ever get this problem I will report it as a bug. But if most modern browsers support this, then it should be no problem for me.

> Or, as a hack, I guess we could store everything in US-ASCII,
> with all characters above 127 converted to some canonical entity.
> (i.e. Ü always needs to be stored the same way: either &Uuml;
> or &#220;)
>
> Another option would be to internally use UTF-8 (in PHP), and
> (assuming enough people have UTF-8-enabled PHPs) convert
> to US-ASCII as described above in the backend, for those databases
> which don't support unicode.

Could you store the records as binary data? I have worked only with file as a database, and have seen that you store serialized data. But I don't know which databases would let you do this. Klaus Leiss

P.S. Is the 1.2 series actively maintained? At the moment I'm running 1.2.2, since I can't get 1.3.4 running because my webhoster runs PHP in safe mode and until now we didn't get the access rights right. I'm asking this because I added a RemovePage function to db_filesystem.php. |
From: Jeff D. <da...@da...> - 2003-02-21 21:09:17
|
On Fri, 21 Feb 2003 21:33:51 +0100 "Klaus - Guenter Leiss" <Le...@we...> wrote:

> There had been users
> with different computer platforms and different local charsets,
> and the characters above 127 were different. I assume that
> is a problem of the browser.

Yes, that's probably right. For phpwiki output, the character set is specified in the Content-Type: HTTP header. Browsers are supposed to respect that. For form input, (like when editing a page) phpwiki sets the accept-charset attribute of the <form> tag. This should make browsers submit the form input in the proper character set. (Probably there are some forms that phpwiki generates which don't have the proper accept-charset attribute. That's a phpwiki bug... Report those if you find them.)

> I think if you wanted to
> discuss music or language you would need many characters
> that are not in ISO-8859-1.

... or math... But then that requires things not even in UTF.

> (Searches for "Häschen" won't find the text "H&auml;schen".)
> This could mean that entities have to be allowed also in searches.

The real solution to that is to switch to using UTF-8 internally, so that we can store all those nice characters in a uniform manner. The problem is that MySQL and PHP support for this is not (last I checked) good (or universal) enough to do this. Or, as a hack, I guess we could store everything in US-ASCII, with all characters above 127 converted to some canonical entity. (i.e. Ü always needs to be stored the same way: either &Uuml; or &#220;) Another option would be to internally use UTF-8 (in PHP), and (assuming enough people have UTF-8-enabled PHPs) convert to US-ASCII as described above in the backend, for those databases which don't support unicode. Just thinking aloud... or visibly, rather...

As for replying to the list or to the sender (or both): I say do whatever you think appropriate for the message. My MUA (sylpheed) seems to reply to the list by default. It must be picking the address from the List-Post: header, since there doesn't seem to be a Reply-To:, and From: lists the sender, not the list. |
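The "canonical entity" hack Jeff describes is easy to sketch. This is an illustration of the idea in Python for brevity, not PhpWiki code; it picks numeric entities as the single canonical form, so the same character can never end up stored two different ways:

```python
import re

def to_canonical_ascii(text):
    """Store only US-ASCII: every character above 127 becomes a numeric
    entity, so e.g. 'Ü' is always '&#220;', never '&Uuml;'."""
    return "".join(c if ord(c) < 128 else "&#%d;" % ord(c) for c in text)

def from_canonical_ascii(stored):
    """Reverse the canonicalization for display and editing."""
    return re.sub(r"&#(\d+);", lambda m: chr(int(m.group(1))), stored)
```

With one canonical form, a search for a stored page only has to canonicalize the query the same way, which addresses the "searches won't find entity-encoded text" objection quoted above.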