From: Steve W. <sw...@wc...> - 2000-07-12 02:52:33

On Mon, 10 Jul 2000, Jeff Dairiki wrote:

> 1. The string 'HammondWiki' is in the URL for my PhpWiki. It is also the
>    name of a page within my PhpWiki. Lines which contain an old-style
>    (bumpy-word) link followed by an old-style link to HammondWiki get
>    munged. E.g., the line:
>
>        Use FindPage to search HammondWiki.
>
>    gets transformed to:
>
>        Use <a href="http://www.dairiki.org:80/<a href="http://www.dairiki.org:80/HammondWiki/index.php3?HammondWiki">HammondWiki</a>/index.php3?FindPage">FindPage</a> to search <a href="http://www.dairiki.org:80/HammondWiki/index.php3?HammondWiki">HammondWiki</a>

I'm trying to duplicate this; can you reproduce it on
http://phpwiki.sourceforge.net/phpwiki/ ?

sw

From: Nicholas <nic...@sy...> - 2000-07-11 05:03:30

I've noticed the lag too; maybe SourceForge is suffering a SuccessCrisis.

At 09:51 PM 7/10/00 -0700, Jeff Dairiki wrote:
>In message <Pin...@bo...>, Steve Wainstead writes:
>>I want to set it up so that we only need one call to OpenDataBase(), ...
>
>I agree, and have been thinking along the same lines.
>
>We should anticipate the addition of version control too. I see something
>like (letting my bias towards objects show through):
>
>In wiki_config.php3:
>
>    $dbi = new WikiMySqlDB('localhost','test','guest','');
>
>Then in wiki_xxx.php3:
>
>    // Get current version of page
>    $current_page = $dbi->retrievePage($pagename);
>
>    // Get most recent archived page
>    $archived_page = $dbi->retrievePage($pagename, -1);
>
>    // Get second most recent archived page (or false if there is none)
>    $old_archived_page = $dbi->retrievePage($pagename, -2);
>
>    // Get version 12 of the page (or false if version 12 is not in the database).
>    $page_version12 = $dbi->retrievePage($pagename, 12);
>
>This would be a good time to clean up the InitXXX(), XXXNextMatch()
>interface too. As stated in a previous post, I suggest iterators:
>
>    $search = $dbi->fullSearch("findme");
>    while ($page = $search->next()) {
>        // do something;
>    }
>
>or similar...
>
>Jeff
>
>PS Anyone else notice anything funny going on with phpwiki-talk? I posted
>two notes earlier today (one re: ZIP file stuff, and one about bugs in
>wiki_transform) and I've only seen one come back. Also I can't get into
>the archives (I get the oh-so-informative "An error occurred.")

-N
--
Nicholas Roberts, Webmaster/Director
mailto:Nic...@SY...
Synarchy Australia Pty Ltd  ACN: 052 408 849
11/281a Edgecliff Rd, Woollahra 2025, NSW, Australia
http://SYNARCHY.NET  Mob: 0414 642 316  Ph/Fax: +612 9475 4399
--
SYDNIC ARCHITECTURE: A New Architectural Style with the Sydney Opera House
as a Signature Building  http://synarchy.net/Sydnic
--
Evolution and the Polymath Entrepreneur
http://phpwiki.sourceforge.net/1.1.6/index.php3?NicholasRoberts
--- Open Editing - Have Your Say! ---

From: Jeff D. <da...@da...> - 2000-07-11 04:57:22

In message <Pin...@bo...>, Steve Wainstead writes:

>I want to set it up so that we only need one call to OpenDataBase(), ...

I agree, and have been thinking along the same lines.

We should anticipate the addition of version control too. I see something
like (letting my bias towards objects show through):

In wiki_config.php3:

    $dbi = new WikiMySqlDB('localhost','test','guest','');

Then in wiki_xxx.php3:

    // Get current version of page
    $current_page = $dbi->retrievePage($pagename);

    // Get most recent archived page
    $archived_page = $dbi->retrievePage($pagename, -1);

    // Get second most recent archived page (or false if there is none)
    $old_archived_page = $dbi->retrievePage($pagename, -2);

    // Get version 12 of the page (or false if version 12 is not in the database).
    $page_version12 = $dbi->retrievePage($pagename, 12);

This would be a good time to clean up the InitXXX(), XXXNextMatch()
interface too. As stated in a previous post, I suggest iterators:

    $search = $dbi->fullSearch("findme");
    while ($page = $search->next()) {
        // do something;
    }

or similar... (a sketch of such an iterator follows this message).

Jeff

PS Anyone else notice anything funny going on with phpwiki-talk? I posted
two notes earlier today (one re: ZIP file stuff, and one about bugs in
wiki_transform) and I've only seen one come back. Also I can't get into
the archives (I get the oh-so-informative "An error occurred.")

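For concreteness, here is one way such an iterator could be laid out over
the mysql_* API of the period (since removed from PHP). The class name,
table layout, and query are illustrative assumptions, not PhpWiki's
actual code:

    <?php
    // Sketch only: wraps a MySQL result set behind a next() interface.
    class WikiSearchIterator {
        var $res;

        function WikiSearchIterator($res) {   // PHP4-style constructor
            $this->res = $res;
        }

        function next() {
            // Returns the next matching row, or false when exhausted.
            return mysql_fetch_array($this->res, MYSQL_ASSOC);
        }
    }

    function fullSearch($table, $needle) {
        $q = "SELECT pagename, content FROM $table"
           . " WHERE content LIKE '%" . addslashes($needle) . "%'";
        return new WikiSearchIterator(mysql_query($q));
    }

    // Usage, matching the interface proposed above:
    $search = fullSearch('wiki', 'findme');
    while ($page = $search->next()) {
        echo $page['pagename'] . "\n";
    }
    ?>
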
From: Steve W. <sw...@wc...> - 2000-07-11 04:12:50

I added two new markup rules tonight. I want to do away with the use of
tabs in the markup language, since tabs are too difficult to use in
Windows browsers. Right now we have:

    * one level
    ** two levels
    *** three levels

    # one
    # two
    ## one
    ## two

and it seems to be working. It was a trivial change (a transform sketch
follows this message), but I think it will be a breath of fresh air. I am
not going to implement the <dt><dd> tag set (term/definition) since no
one uses them (even in HTML no one ever used them). The old rules are
still in there as well.

I've added TestPage, which I hope will be a page that tests all the
markup rules, so we'll always have a quick and easy way to verify it all
works.

sw

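For illustration, a depth-based transform for the asterisk rules could
look roughly like this. It is a sketch, not the actual
wiki_transform.php3 code:

    <?php
    // Sketch: map the number of leading *'s to nested <ul> levels.
    function transform_lists($lines) {
        $depth = 0;
        $out = array();
        foreach ($lines as $line) {
            if (preg_match('/^(\*+)\s*(.*)/', $line, $m)) {
                $new = strlen($m[1]);
                for (; $depth < $new; $depth++) $out[] = '<ul>';
                for (; $depth > $new; $depth--) $out[] = '</ul>';
                $out[] = '<li>' . $m[2] . '</li>';
            } else {
                for (; $depth > 0; $depth--) $out[] = '</ul>';
                $out[] = $line;
            }
        }
        for (; $depth > 0; $depth--) $out[] = '</ul>';
        return implode("\n", $out);
    }

    echo transform_lists(array('* one level', '** two levels', '* back out'));
    ?>
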
From: Steve W. <sw...@wc...> - 2000-07-11 03:28:38

Hola,

Something has been bugging me for a while now. Consider:

    $wiki = RetrievePage($dbi, $pagename);
    $dba = OpenDataBase($ArchiveDataBase);
    $archive = RetrievePage($dba, $pagename);

This is out of wiki_diff.php3. What's wrong here? PhpWiki 1.03 was based
on two DBM files, and only one at a time was opened (except when a copy
was saved to the archive, I think). It's not right to make the relational
implementations "fake" this behavior because of the DBM heritage; it will
confuse other programmers, and it needlessly gets a second database
handle. The relational database implementations actually pass the table
name around in $WikiDataBase and $ArchiveDataBase, which is very
misleading.

I want to add more DBM files to support the new functionality we've been
adding, and I will change the way the DBM files are opened. I want to set
it up so that we only need one call to OpenDataBase(), and we can do away
with this (from index.php3) as well:

    // All requests require the database
    if ($copy) {
        // we are editing a copy and want the archive
        $dbi = OpenDataBase($ArchiveDataBase);
        include "wiki_editpage.php3";
        CloseDataBase($dbi);
        exit();
    } else {
        // live database
        $dbi = OpenDataBase($WikiDataBase);
    }

I don't think this means a lot of hacking... well, outside of the changes
to the DBM implementation. Once I have the new DBM code worked out, we
can go back and weed out the extra OpenDataBase() calls. One
OpenDataBase() call should serve the rest of the invocation for the most
part (see the sketch after this message).

Also, there's an interesting comparison of MySQL and PostgreSQL on
PHPBuilder.com right now, if you are interested in how they stack up.

And last, I have been experiencing weird behavior with the mSQL version I
have set up at http://wcsb.org/~swain/phpwiki/. Most of the time I get an
"mSQL database has gone away" error even though it never goes away (the
radio station's program schedule runs off that same database and is never
down). I wonder if there isn't some bug in the msql_pconnect() call? At
home I use PHP4+mSQL and never see this error.

sw

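A sketch of the single-open pattern described above. The third
RetrievePage() argument selecting the archive store is hypothetical,
shown only to convey the shape of the idea, not PhpWiki's shipped API:

    <?php
    // One handle serves the whole request; the store (live vs. archive)
    // is chosen per call instead of per connection.
    $dbi = OpenDataBase($WikiDataBase);               // once, early in index.php3

    $wiki    = RetrievePage($dbi, $pagename);             // live page
    $archive = RetrievePage($dbi, $pagename, 'archive');  // archived copy (hypothetical flag)

    // ... render the diff, edit form, etc. ...

    CloseDataBase($dbi);                              // once, at the end
    ?>
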
From: Jeff D. <da...@da...> - 2000-07-10 20:24:50

I've uncovered a few bugs in wiki_transform.php3.

1. The string 'HammondWiki' is in the URL for my PhpWiki. It is also the
   name of a page within my PhpWiki. Lines which contain an old-style
   (bumpy-word) link followed by an old-style link to HammondWiki get
   munged. E.g., the line:

       Use FindPage to search HammondWiki.

   gets transformed to:

       Use <a href="http://www.dairiki.org:80/<a href="http://www.dairiki.org:80/HammondWiki/index.php3?HammondWiki">HammondWiki</a>/index.php3?FindPage">FindPage</a> to search <a href="http://www.dairiki.org:80/HammondWiki/index.php3?HammondWiki">HammondWiki</a>

2. The line

       [[Link] produces [Link].

   gets munged.

3. '''''Bold italic''' and italic'' yields:

       <strong><em>Bold italic</strong> and italic</em>.

   (Tags not nested properly.) '''Bold and ''bold-italic''''' has the
   same problem.

Fixes for bugs 1 & 2, I think, should be straightforward. (Though I
haven't stared at the wiki_transform code long enough to come up with
one.) Bug 3 is somewhat insidious and may not be easily fixable.

Jeff

From: Jeff D. <da...@da...> - 2000-07-10 19:26:19

In message <Pin...@bo...>, Steve Wainstead writes:

>Hmm. It might be overkill. How long will it take for you to do?

The zipping is done. The CVS version works. Unzipping will be easy enough
to implement --- I'm just not sure precisely what to do with the data
(how to restore the Wiki) once it's unzipped.

>Since this is an admin feature speed is not important;

Good point.

>but if I read you right, you're going to implement zip in PHP?

Yes, but only in a limited way. The unzip code will only be able to unzip
archives which were generated by PhpWiki. (Not all (de)compression
methods will be supported, and the special headers containing page
meta-data must be present.)

Jeff.

From: Steve W. <sw...@wc...> - 2000-07-08 03:27:48

Hmm. It might be overkill. How long will it take for you to do? I'm
worried that you might spend 100 hours on something that might not have a
great demand... but it would be great to have a secure way to dump a
Wiki. Since this is an admin feature speed is not important; but if I
read you right, you're going to implement zip in PHP? Is there anything
you can't do? :-)

sw

On Fri, 7 Jul 2000, Jeff Dairiki wrote:

> I've just checked in to the CVS the beginnings of on-the-fly ZIP file
> creation. To use, just click on the link near the bottom of
> admin/index.php3.
>
> Current Features:
>
> o If PHP has zlib compiled in, pages are compressed (deflated). (If PHP
>   doesn't have zlib, pages are just stored --- also ZIP production is
>   quite a bit slower on account of CRC32 computation in PHP.) (And I've
>   just discovered that my web host's PHP doesn't have zlib :-/)
>
> o Page meta-data (author, version, etc.) is saved in a special custom
>   header field in the zip file. This information is not accessible via
>   any standard zip tools, but I plan on writing an unzipper which can
>   use this information to restore a Wiki from the zip file. (The zip
>   file is (should be) still readable using any unzipper.)
>
> o Currently, only the most recent version of a page is archived. If
>   this is an issue, we can easily add an option for including all saved
>   versions of every page.
>
> Known Issues:
>
> o Speed and volume of output might be an issue for large Wikis
>   (especially when PHP doesn't have zlib support). Certainly the PHP
>   execution timeout should be increased.
>
> o I still need to write the unzipper.
>
> What's the consensus? Is this cool or just overkill?
>
> Jeff

From: Jeff D. <da...@da...> - 2000-07-07 20:19:45

I've just checked in to the CVS the beginnings of on-the-fly ZIP file
creation. To use, just click on the link near the bottom of
admin/index.php3.

Current Features:

o If PHP has zlib compiled in, pages are compressed (deflated). (If PHP
  doesn't have zlib, pages are just stored --- also ZIP production is
  quite a bit slower on account of CRC32 computation in PHP.) (And I've
  just discovered that my web host's PHP doesn't have zlib :-/)

o Page meta-data (author, version, etc.) is saved in a special custom
  header field in the zip file. This information is not accessible via
  any standard zip tools, but I plan on writing an unzipper which can use
  this information to restore a Wiki from the zip file. (The zip file is
  (should be) still readable using any unzipper.)

o Currently, only the most recent version of a page is archived. If this
  is an issue, we can easily add an option for including all saved
  versions of every page.

Known Issues:

o Speed and volume of output might be an issue for large Wikis
  (especially when PHP doesn't have zlib support). Certainly the PHP
  execution timeout should be increased.

o I still need to write the unzipper.

What's the consensus? Is this cool or just overkill?

Jeff

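The deflate-or-store decision described above can be sketched as follows.
pack_page() is an illustrative name, not PhpWiki code; note also that
current PHP ships a built-in crc32(), which removes the checksum cost
mentioned in the announcement:

    <?php
    // Sketch of the per-page step: deflate when zlib is present,
    // store otherwise.
    function pack_page($content) {
        $crc = crc32($content);                 // built into PHP nowadays
        if (function_exists('gzdeflate')) {
            return array('method' => 8,         // 8 = deflate, per the ZIP spec
                         'crc'    => $crc,
                         'data'   => gzdeflate($content));
        }
        return array('method' => 0,             // 0 = stored (no compression)
                     'crc'    => $crc,
                     'data'   => $content);
    }
    ?>
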
From: Steve W. <sw...@wc...> - 2000-07-07 03:26:51

After some grievous hacking I got the two older Wikis on SourceForge
merged into the one at http://phpwiki.sourceforge.net/phpwiki/. A few
notes:

* The 1.03 porting script worked very nicely.
* DO NOT try to use it with a 1.1.4-1.1.5 PhpWiki. Bad things happen.
* References were lost going from 1.1.6b -> 1.1.x, but I'm not sure why.
  It might be because at one point I loaded 1.03 into 1.1.6b, and then
  got the previous version of the FrontPage from the archive, and
  Nicholas' Open Source links were gone.

Otherwise, I think I will switch the link from 1.1.6 -> phpwiki, and all
that needs to be done to bring it up to date is to run cvs update -d
periodically. I should probably only do that with tagged releases, but
I'm impulsive sometimes. ;-)

sw

From: Jeff D. <da...@da...> - 2000-07-06 20:48:35

>> What's the best output format? Tar? Or zip for windows friendliness?
>> Or something else I haven't thought of?
>
>Best: user chooses .zip, .Z, .gz.
>
>Minimum: .zip, since almost everyone can decompress them
>Philosophically: gz and bz2 :-)

I think we need to use some kind of archive format --- i.e. we want to
pack multiple files (pages) into one file. This rules out straight .Z and
.gz. Tar.gz is fine.

I was thinking that zip would be the most portable, so I started looking
into the format. There are CRC32 checksums in the file headers --- I
suspect these will be expensive to compute in PHP, as AFAIK there is no
built-in function to do it. So now I'm looking at the tar file format,
which is looking much more straightforward to implement --- I think I'll
start with that.

Jeff.

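The classic (v7-style) tar header is indeed simple enough to emit by
hand. A sketch, assuming page names under 100 bytes and sizes that fit
the octal size field; a real implementation needs more care:

    <?php
    // Sketch of a minimal tar entry writer.
    function tar_entry($name, $content) {
        $size = strlen($content);
        $header = str_pad($name, 100, "\0")        // name   (offset 0)
                . sprintf("%07o\0", 0644)          // mode   (offset 100)
                . sprintf("%07o\0", 0)             // uid    (offset 108)
                . sprintf("%07o\0", 0)             // gid    (offset 116)
                . sprintf("%011o\0", $size)        // size   (offset 124)
                . sprintf("%011o\0", time())       // mtime  (offset 136)
                . str_repeat(" ", 8)               // chksum placeholder (offset 148)
                . "0";                             // typeflag: regular file
        $header = str_pad($header, 512, "\0");
        // The checksum is the byte sum of the header, with the checksum
        // field itself counted as spaces.
        $sum = 0;
        for ($i = 0; $i < 512; $i++) {
            $sum += ord($header[$i]);
        }
        $header = substr_replace($header, sprintf("%06o\0 ", $sum), 148, 8);
        // File data is padded out to a 512-byte boundary.
        return $header . $content . str_repeat("\0", (512 - $size % 512) % 512);
    }

    // An archive is just concatenated entries plus two zero-filled blocks.
    $tar = tar_entry('FrontPage', 'Hello wiki') . str_repeat("\0", 1024);
    file_put_contents('wikidump.tar', $tar);
    ?>
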
From: Steve W. <sw...@wc...> - 2000-07-06 18:19:06

On Thu, 6 Jul 2000, Jeff Dairiki wrote:

> It might be interesting to do a user survey to find out just what
> environments phpwiki is being run in. What do you and Arno deal with?

Dunno about Arno; I use Red Hat 6.2 at home with two Apache servers:

1. the default Apache+PHP+PostgreSQL, which RH ships with
2. Apache 1.3.12+PHP4+mSQL2+PostgreSQL

On wcsb.org: RH 4.2 + mSQL2.
At SourceForge: whatever they have, probably Apache 1.3+MySQL on Debian.

I *usually* test on all these environments, though I've never found a
difference in any of them...

> (Confused yet?)

auth gives me headaches :-)

> What's the best output format? Tar? Or zip for windows friendliness?
> Or something else I haven't thought of?

Best: user chooses .zip, .Z, .gz.

Minimum: .zip, since almost everyone can decompress them.
Philosophically: gz and bz2 :-)

> See ignore_user_abort(). See also register_shutdown_function() which
> allows you to continue executing PHP after the html output stream has
> been closed. (There may be other better ways to do this, but I don't
> know them.) (There's also flush() which flushes the output, allowing
> you to add to it later.)
>
> I was thinking of an update count stored in the DBM. When it exceeds a
> threshold, the DBM gets rebuilt.

Excellent idea. This would be cleaner than using lynx+cron, by far!

sw

From: Jeff D. <da...@da...> - 2000-07-06 16:20:35

In message <Pin...@bo...>, Steve Wainstead writes:

>It would only be as secure for as many people out there who don't know
>there is a lib/, and I'm sure lots of script kiddies read Freshmeat on a
>daily basis...

With apache, if one creates a lib/.htaccess that reads:

    order allow,deny
    deny from all

then no one can get at anything in lib via http. (The same should be done
for templates/.htaccess too.) Other httpds (including at least Netscape's
and NCSA's) offer similar abilities, though the actual directives may
differ. I'd be surprised (I often am) if most httpds don't allow the user
some kind of similar control.

When I put together HammondWiki, the first thing I did was move the
wiki_*.php3's into a non-readable lib directory. In my case (apache) this
required no source code tweaks at all, as I set the php include path in
HammondWiki/.htaccess.

>Any changes that mean something won't work "out of the box" is going to
>meet stiff opposition from me :-)

I agree completely. That's one of the advantages of using PHP, in my
opinion. On the other hand, I'm sure there are plenty of non-portable
things you can do in PHP --- and I don't have much experience other than
in an apache environment.

It might be interesting to do a user survey to find out just what
environments phpwiki is being run in. What do you and Arno deal with?

>> When apache is configured to do external authorization,...
>
>(Is that, btw, the same as the .htaccess file?)

Uh, sort of. One needs the following (or similar) directives in
httpd.conf or an .htaccess file. (These directives can be disallowed in
.htaccess files by httpd.conf.)

    # User and group password databases.
    AuthUserFile /some/password
    AuthGroupFile /some/group

Then, either in a <Directory> (or similar) section or in .htaccess, you
put something like:

    require user dairiki

Then only dairiki can access that stuff. The confusing thing is that once
you've set an AuthUserFile (and I can't find a way to unset it locally in
an .htaccess file), even in a directory with no 'require' directives
(i.e. no authentication required), neither the username nor password will
ever make it through httpd. (Confused yet?)

>> Files and directories which are writeable through httpd make me
>> nervous,...
>
>Whenever I've added something that means "the server can write to it,"
>be it a DBM file or a directory, I think very carefully about it. Being
>able to dump Wiki pages to a directory doesn't sound too dangerous,
>since the input has to be a dbm file (or it fails completely). ....

Yes, it seems pretty safe. I can't think of any exploits other than
disk-filling DoS-type attacks (which are possible anyhow...)

>Also, a friend recently asked me what's to stop him from uploading a
>uuencoded warez file and I said nothing; but maybe we want to set a hard
>limit on page sizes anyway, like 1M or maybe 500,000K. That could be a
>define() in the config file and a check on the size of the page in
>wiki_savepage.php3.

Good point, a hard limit sounds like a good idea. I would make it
considerably smaller than 500,000K (or even 500K :-). It would be
interesting to find the biggest current Wiki page.

>> Another slick alternative might be a PHP script which creates a tar-
>> (or zip-) file dump of the wiki on the fly
>
>THAT is a very interesting idea...

I'll look into it a bit more. Compression will be a problem without local
temporary files (which I suppose aren't a big problem). (Of course
compression will be a bigger problem if your PHP doesn't have zlib...)

What's the best output format? Tar? Or zip for windows friendliness? Or
something else I haven't thought of?

>> Maybe wiki_dbmlib can do this automatically every once in awhile?
>
>It would have to be an admin function because if you decide that the
>first user after 4am every day triggers the rebuild, you are counting on
>that user not to stop the transaction before it's done (i.e., click the
>"stop" button and interrupt the rebuild). That makes me nervous.

See ignore_user_abort(). See also register_shutdown_function(), which
allows you to continue executing PHP after the html output stream has
been closed. (There may be other, better ways to do this, but I don't
know them.) (There's also flush(), which flushes the output, allowing you
to add to it later.)

I was thinking of an update count stored in the DBM. When it exceeds a
threshold, the DBM gets rebuilt.

Jeff

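The update-count idea, sketched with the dbm* functions of that era (they
were removed in PHP 5). The '_update_count' key, the threshold, and
RebuildDBMFile() are assumptions, not existing PhpWiki symbols:

    <?php
    // Sketch of a threshold-triggered DBM rebuild.
    define('REBUILD_THRESHOLD', 500);

    function NoteUpdateAndMaybeRebuild($dbmh) {
        // dbmfetch() returns false for a missing key; (int) makes that 0.
        $count = 1 + (int) dbmfetch($dbmh, '_update_count');
        dbmreplace($dbmh, '_update_count', (string) $count);
        if ($count >= REBUILD_THRESHOLD) {
            ignore_user_abort(1);    // a Stop click won't abort the rebuild
            RebuildDBMFile($dbmh);   // hypothetical: copy live keys to a fresh file
            dbmreplace($dbmh, '_update_count', '0');
        }
    }
    ?>
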
From: Steve W. <sw...@wc...> - 2000-07-06 14:05:45

On Wed, 5 Jul 2000, Jeff Dairiki wrote:

> Why is this in its own subdirectory? Why not just an admin.php3 in the
> main directory?

I wanted to keep files grouped according to their function, more or less.
Putting everything in admin/ solved this, from my point of view...

> I think all the files which get included or required should be moved
> into a 'lib' subdirectory. This is mostly a security issue as it makes
> it much easier to prevent people from directly browsing e.g.
> http://blah/wiki_display.php3. (Not that this necessarily does anything
> bad, but there's no reason for that to be a valid URL at all.)

It would only be as secure for as many people out there who don't know
there is a lib/, and I'm sure lots of script kiddies read Freshmeat on a
daily basis...

> I suggest that only index.php3 and admin.php3 should be in the top
> level directory. These will 'include "lib/wiki_config.php3"' (or maybe
> 'include "wikilib/config.php3"'?)

That might be a cleaner solution; but I wanted to start coding admin
stuff now, and not clutter the main directory. I dunno. What do you
think, Arno? admin/ or lib/?

> (PHP, as you probably know, does support an include search path via the
> configuration variable php_include_path. When PHP is run as an Apache
> module, this path can be set in the local .htaccess file. With other
> servers, this is probably not so easy. One could write one's own
> version of include using file_exists().)

I'm very flexible on the directory issue, but I won't back down on ease
of installation. A lot of people who've installed PhpWiki don't have
access to the server itself. (Myself included, on SourceForge!) Any
change that means something won't work "out of the box" is going to meet
stiff opposition from me :-)

(Try installing a few other Wiki clones and see how long it takes to get
a Wiki up and running to the point where PhpWiki is...)

> When apache is configured to do external authorization, the variables
> $PHP_AUTH_USER and $PHP_AUTH_PW never get set. The solution, in this
> case, is just to delete the authorization stuff from admin/index.php3,
> since the httpd is handling this anyway.
>
> This is confusing though (for admins setting up a phpwiki, that is).
> To maintain maximum plug-and-playness, it might be better to implement
> authentication entirely within php. The drawback to this is, as always,
> added complication: it probably requires cookies and some sort of
> session management.

I wasn't aware of this, thanks for bringing it up... I intended to put a
comment in the code inviting a better solution to authentication. Perhaps
we will just have to document the problem if the user wants to run
PhpWiki on an Apache server that does auth. (Is that, btw, the same as
the .htaccess file? I haven't set up one of those since 1997.)

> Files and directories which are writeable through httpd make me
> nervous, and I try to minimize their number. (Of course the main
> databases need to be writeable, so maybe my fear is moot.)

Agreed. Ideally PhpWiki is run by someone who has a clue; I suppose
that's wishful thinking on my part :-) At this stage we've been really
lucky, because people interested in Wikis in general are pretty
intelligent people (present company included :-), so it's been pretty
smooth.

Whenever I've added something that means "the server can write to it," be
it a DBM file or a directory, I think very carefully about it. Being able
to dump Wiki pages to a directory doesn't sound too dangerous, since the
input has to be a dbm file (or it fails completely). Where it's being
written has to be writable by the server, so if some bad guy in a black
hat decides to hack a PhpWiki and dump all the pages to /tmp, he can do
so... and if the server runs as root (bad!) he could name a page "passwd"
and dump the pages to /etc. So a careful check of the directory the user
provides will be crucial. We can start by limiting it to anything under
/tmp and anything under the server root; nothing with ".." will be
allowed; and if the out-of-the-box constraints are too stiff, they have
GPL'd source code that they are free to do with as they please!

That said, I'm sure another security vulnerability lurks somewhere. It
might be better to ship PhpWiki with admin functions disabled, so the
user has to enable them. (It has no login/password now and won't work
until they edit the script.)

Also, a friend recently asked me what's to stop him from uploading a
uuencoded warez file, and I said nothing; but maybe we want to set a hard
limit on page sizes anyway, like 1M or maybe 500,000K. That could be a
define() in the config file and a check on the size of the page in
wiki_savepage.php3 (see the sketch after this message).

> Another slick alternative might be a PHP script which creates a tar-
> (or zip-) file dump of the wiki on the fly (to be saved on the
> web-client's, rather than the web-server's, disk.)

THAT is a very interesting idea...

> >The Perl script shrank the DBM file on wcsb.org from 2,464,640 bytes
> >to 117,574 (there are 91 pages in it).
>
> Maybe wiki_dbmlib can do this automatically every once in awhile?

It would have to be an admin function, because if you decide that the
first user after 4am every day triggers the rebuild, you are counting on
that user not to stop the transaction before it's done (i.e., click the
"stop" button and interrupt the rebuild). That makes me nervous.

sw

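The hard limit proposed above would only take a few lines. MAX_PAGE_SIZE,
its value, and the error path are illustrative, not actual PhpWiki 1.1.x
symbols:

    <?php
    // In wiki_config.php3 (illustrative name and value):
    define('MAX_PAGE_SIZE', 65536);   // bytes; far below the 1M worst case

    // In wiki_savepage.php3, before the page is written:
    function CheckPageSize($content) {
        if (strlen($content) > MAX_PAGE_SIZE) {
            echo "Sorry, this page exceeds the maximum size of "
               . MAX_PAGE_SIZE . " bytes.";
            exit;
        }
    }
    ?>
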
From: Jeff D. <da...@da...> - 2000-07-05 23:47:31

In message <Pin...@bo...>, Steve Wainstead writes:

>OK, here's my first pass at an administrative module for PhpWiki.
>
>I made a new subdirectory, admin/, which has three files in it. One is
>index.php3, which will work much like the main index.php3: it opens the
>database and goes through an if/elseif/elseif/else block to decide which
>file to load.

Some comments:

First point: Why is this in its own subdirectory? Why not just an
admin.php3 in the main directory? Which leads to (quoted from
admin/index.php3):

    // temporarily go up to the main directory. is there a way around this?
    chdir("..");
    include "wiki_config.php3";
    include "wiki_stdlib.php3";
    chdir("admin");

I think all the files which get included or required should be moved into
a 'lib' subdirectory. This is mostly a security issue, as it makes it
much easier to prevent people from directly browsing e.g.
http://blah/wiki_display.php3. (Not that this necessarily does anything
bad, but there's no reason for that to be a valid URL at all.)

I suggest that only index.php3 and admin.php3 should be in the top level
directory. These will 'include "lib/wiki_config.php3"' (or maybe 'include
"wikilib/config.php3"'?)

(PHP, as you probably know, does support an include search path via the
configuration variable php_include_path. When PHP is run as an Apache
module, this path can be set in the local .htaccess file. With other
servers, this is probably not so easy. One could write one's own version
of include using file_exists(); a sketch follows this message.)

Second point: When apache is configured to do external authorization, the
variables $PHP_AUTH_USER and $PHP_AUTH_PW never get set. The solution, in
this case, is just to delete the authorization stuff from
admin/index.php3, since the httpd is handling this anyway.

This is confusing though (for admins setting up a phpwiki, that is). To
maintain maximum plug-and-playness, it might be better to implement
authentication entirely within php. The drawback to this is, as always,
added complication: it probably requires cookies and some sort of session
management.

>The files it will choose from will be:
>
>* serialize all pages
>* dump all pages as HTML
>* load a set of serialized pages

Files and directories which are writeable through httpd make me nervous,
and I try to minimize their number. (Of course the main databases need to
be writeable, so maybe my fear is moot.) Mostly because of this, I
personally favor using perl scripts to do the dumping sorts of things.

Another slick alternative might be a PHP script which creates a tar- (or
zip-) file dump of the wiki on the fly (to be saved on the web-client's,
rather than the web-server's, disk.)

>* rebuild the DB files (for DBM-based Wikis)
>Third is a Perl script that reduces the size of a DBM file. I will write
>all of it in PHP later but wanted to prove I was right about how DBM
>files lose memory first, and I was... for the savvy sysadmin a Perl
>script will be faster or more flexible a solution (and can be easily
>cron'd.)
>
>The Perl script shrank the DBM file on wcsb.org from 2,464,640 bytes to
>117,574 (there are 91 pages in it).

Maybe wiki_dbmlib can do this automatically every once in awhile?

Jeff

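The file_exists()-based include mentioned parenthetically above, made
concrete. The search order is an assumption; note also that include
inside a function keeps the included file's variables local, so a real
version of this for config files would need globals or return values:

    <?php
    // Sketch of a portable include that searches a path with
    // file_exists(), for servers where php_include_path can't be set.
    function wiki_include($file) {
        $dirs = array('.', 'lib', 'admin');   // assumed search order
        foreach ($dirs as $dir) {
            if (file_exists("$dir/$file")) {
                include "$dir/$file";
                return true;
            }
        }
        return false;
    }

    wiki_include('wiki_stdlib.php3');
    ?>
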
From: Steve W. <sw...@wc...> - 2000-07-05 22:15:40

Hi Markus!

Thanks for the kind words. It makes it all worthwhile to hear PhpWiki
helps you!

I started on the script that will move pages in a 1.0 .. 1.1.5 PhpWiki
DBM file and load them into a 1.1.6 or later PhpWiki. Arno fleshed out
some important parts today, so it's almost finished. It will be part of
the 1.1.7 release, which will be out soon.

cheers,
sw

On Wed, 5 Jul 2000, Markus Guske wrote:

> Hi,
>
> > Are you trying to use the same database? The schema changed from
> > 1.1.5 to 1.1.6. Page data is now written to columns instead of
> > storing everything in one serialized hash.
>
> This is just fixed, thanks Arno.
>
> > I am working on a new script that might help you move your pages from
> > 1.1.5 to 1.1.6, if you are concerned about saving them.
>
> Yes, I really appreciate the idea. Otherwise I have to load and copy
> all of the 1.1.5 Wiki pages on the next weekend :-)
>
> - Markus
>
> ---
> BITPlan GmbH smart solutions - Meerbuscher Str. 58-60 - 40670 Meerbusch-Osterath
> Tel +49 2159 5236-0 - Fax +49 2159 5236-10 - mg...@bi... - http://www.bitplan.de
>
> Home Office: Markus Guske - Erftstrasse 17 - D-41564 Kaarst
> Tel +49 173 946 1880 - Fax +49 2131 769-195 - mg...@gu...

From: Jeff D. <da...@da...> - 2000-07-05 21:44:27

In message <146...@da...>, Arno Hollosi writes:

>Here are some thoughts/questions:
>
>- the db interface will become very large. I realized that when I
>  added functions for MostPopular. For every such query we need
>  two new functions. I don't like this.

I agree.

>  Possible solution: all search functions return a $pagehash array.
>  For some searches the hash may only be sparsely populated, e.g.
>  when doing a title or mostpopular search, it's unnecessary to
>  set $pagehash['content'].

I think this is basically the right idea. Now I will let my bias towards
classes run free: this is where (at least the way I see it) making
$pagehash into a class (concrete type) would make things cleaner.
$pagehash['content'] turns into $page->content(), which can do something
smart (either signal an error, or fetch the content) if the content isn't
there.

>  There could be one general NextMatch function in this case.
>  For the DBM interface that might be impossible - maybe that function
>  has to have a switch() structure of some kind.

The db search/scan functions should return some kind of iterator. For
example, some usage like:

    $hotlist = FindMostPopular($dbi, $limit);
    // Better yet: $hotlist = $dbi->FindMostPopular($limit);  :-)
    while ($page = $hotlist->next())
        echo $page->pagename() . " " . $page->hitcount() . " hits\n";

>- template facility:
>  A template class that does the translation from $content to $html.
>  Placeholder objects register with that class, and then get called
>  from there.

Excellent! Some way to get arguments to placeholders would be nice as
well. For example, currently the ###IFCOPY### placeholder has as its
argument the remainder of the line --- however, this is clumsy. As
another example, it would be nice to have an ###INCLUDE### placeholder
(taking a file name as an argument) that could be used to suck in a
sub-template.

Jeff

From: <Mar...@t-...> - 2000-07-05 21:31:52

Hi,

> Are you trying to use the same database? The schema changed from 1.1.5
> to 1.1.6. Page data is now written to columns instead of storing
> everything in one serialized hash.

This is just fixed, thanks Arno.

> I am working on a new script that might help you move your pages from
> 1.1.5 to 1.1.6, if you are concerned about saving them.

Yes, I really appreciate the idea. Otherwise I have to load and copy all
of the 1.1.5 Wiki pages on the next weekend :-)

- Markus

---
BITPlan GmbH smart solutions - Meerbuscher Str. 58-60 - 40670 Meerbusch-Osterath
Tel +49 2159 5236-0 - Fax +49 2159 5236-10 - mg...@bi... - http://www.bitplan.de

Home Office: Markus Guske - Erftstrasse 17 - D-41564 Kaarst
Tel +49 173 946 1880 - Fax +49 2131 769-195 - mg...@gu...

From: <Mar...@t-...> - 2000-07-05 21:28:34

Hi,

[...]

> 1.1.6 was released without serious testing.

It doesn't matter. Hope I will find most of them :-)

> The fix for your problem is most likely the following:
>
> Add the following in wiki_savepage.php3 around line 39

Thanks, it works!!!

> for other known errors and their cure see:
> http://phpwiki.sourceforge.net/1.1.6/index.php3?Known%20bugs%20in%201.1.6

That's a good point :-) I just bookmarked these pages. Sorry for the
delay, but I found no time to check it out (the work).

Again, thank you for the support,

- Markus

P.S.: I love this tool; it helps me and my friends save a lot of time!!!
I stay tuned.

---
BITPlan GmbH smart solutions - Meerbuscher Str. 58-60 - 40670 Meerbusch-Osterath
Tel +49 2159 5236-0 - Fax +49 2159 5236-10 - mg...@bi... - http://www.bitplan.de

Home Office: Markus Guske - Erftstrasse 17 - D-41564 Kaarst
Tel +49 173 946 1880 - Fax +49 2131 769-195 - mg...@gu...

From: Steve W. <sw...@wc...> - 2000-07-05 00:45:14

On Mon, 3 Jul 2000, Jeff Dairiki wrote:

> I just checked a new wiki_diff.php3 into the CVS repo. It's faster than
> the previous one (which did turn out to be an issue.) Also I've added
> the compose() (and reverse()) method to class WikiDiff, so it's all
> ready to be used in a full versioning system.

Great work! :-)

> "All" that needs to be done now is to fix the db access API and schema
> so that multiple backup versions can be saved. I can work on that
> (particularly for the MySQL and DBM drivers) if I'm not stepping on any
> toes.

Not at all. I think the day a project stops refactoring code because the
developer whose code gets changed throws tantrums is the day that project
dies. Always feel free to question design choices; I do :-)

For the record, I also want total versioning down to the first version.
However, the point I differ on (my personal opinion) is that storage
space is cheap and Wikis are just text, so why not save the whole thing?
Not every little edit change, though. Let me explain.

Right now we have a store of live pages and a store of their last
version. When a new author saves the page, the new version goes live, the
current live page goes into the archive, and the last version is lost
forever. What if the last version is not lost, but stays in the archive?
We keep all the previous versions by previous authors; little edits the
current author makes don't get archived (I know when I edit a Wiki page I
make at least two saves, and usually more).

How much room does this take up? A fairly long page on c2.com is
http://c2.com/cgi-bin/wiki?LordOfTheFlies. It's 14234 bytes according to
Netscape. Say the page is very active and sees 100 edits over its
lifetime... that's over a meg of data. That's probably a worst-case
scenario; most pages don't see that much activity. Another good yardstick
is Ari's Wiki. It's using 23 megs right now, and given that DBM files
keep every copy they ever had, and that Ari's wiki is *really* active, I
think that's pretty good in terms of space (sadly, all previous versions
are lost in the DBM file).

Now, how much work is it to:

* store only the diffs of the pages, and
* recombine them when someone wants to see the differences between
  version 1.1 and version 1.99?

If it's not much work at all (on the programmer (Jeff :-) and the server)
then I think only storing diffs is a Good Thing. If 50-100 megs for a
Wiki sounds excessive, storing only diffs is a Good Thing. If the problem
simply interests you and you don't care what benefit it has for PhpWiki,
it's also probably a Good Thing (for you :-)

I think we can stick with the "archive" table for this, and the user will
see no change in behavior other than being able to retrieve any previous
version of the page.

I am going to go into this more in a separate post, but first I have to
check with my friends to see if we are still going to see the fireworks.
I had jury duty last week and took one of my favorite Wiki idea books
along, Jon Udell's "Practical Internet Groupware." (See my review at
http://www.amazon.com/exec/obidos/ASIN/1565925378/qid%3D962757399/104-0556265-3752700 .)
It gave me an epiphany regarding saving versions...

sw

p.s. My other favorite idea book is Philip and Alex's Guide to Web
Publishing, which is more of an opinion book than a technical book:
http://www.amazon.com/exec/obidos/ASIN/1558605347/qid%3D962757580/104-0556265-3752700

From: Steve W. <sw...@wc...> - 2000-07-05 00:08:03

On Wed, 5 Jul 2000, Arno Hollosi wrote:

> I just committed the following changes:
>
> - some bug fixes
> - added GetAllWikiPageNames($dbi) for mySQL
> - dumpserial & loadserial now rawurlen/decode() the filename
> - added support for locked pages

All now live on http://phpwiki.sourceforge.net/phpwiki/ too.

> Editing locked pages doesn't work yet. This is because $ScriptURL
> points to admin/index.php3 instead of the main index.php3. An ugly hack
> would be to str_replace /admin/ with "". But we aim for better
> solutions, don't we?
> Note: if ServerAddress is set manually and not by the if/else clause,
> then editing works. Comment out the part from wiki_lockpage and try it
> out.

Hrm. I was reading Ari's code over the weekend (Ari simply emailed me a
login/passwd to the server, very generous!) and he (she?) uses $PHP_SELF
now... dunno if that will be any help here. Ari also uses $PATH_INFO for
the page name instead of the CGI variables, so right off, when we want to
change from using CGI strings to pathinfo, we have a conversion function
we can borrow.

> Also, ###LOGO### doesn't work from inside admin/ when using templates.
> This is because there's no absolute path used in the default config.

Oh, did you add templating to the admin forms? (looks...) Oh, you must
mean when you edit a locked page... that won't herald the end of the
earth, fortunately ;-)

> So, lock pages works more or less. It's not 100% secure though -- to
> make it water-proof one would have to test in save_page as well. But I
> didn't bother right now. I mean, what are the odds that someone is
> going to do some URL hacking?

Famous last words ;-) And besides, I don't know how much louder I can
scream "it's BETA!"

sw

From: Arno H. <aho...@in...> - 2000-07-04 22:49:10

I just committed the following changes:

- some bug fixes
- added GetAllWikiPageNames($dbi) for mySQL
- dumpserial & loadserial now rawurlen/decode() the filename
- added support for locked pages

Editing locked pages doesn't work yet. This is because $ScriptURL points
to admin/index.php3 instead of the main index.php3. An ugly hack would be
to str_replace /admin/ with "", but we aim for better solutions, don't
we? (One alternative is sketched below.)

Note: if ServerAddress is set manually and not by the if/else clause,
then editing works. Comment out the part from wiki_lockpage and try it
out.

Also, ###LOGO### doesn't work from inside admin/ when using templates.
This is because there's no absolute path used in the default config.

So, lock pages works more or less. It's not 100% secure though -- to make
it water-proof one would have to test in save_page as well. But I didn't
bother right now. I mean, what are the odds that someone is going to do
some URL hacking?

/Arno

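One less ugly alternative to the str_replace hack, assuming $PHP_SELF is
available (Steve notes above that Ari's code relies on it): derive the
parent directory of admin/ instead of string-replacing it away. A sketch,
not tested against PhpWiki's actual config:

    <?php
    // Compute the main script URL from inside admin/. Assumes $PHP_SELF
    // is e.g. "/phpwiki/admin/index.php3"; rtrim() covers the case where
    // the wiki lives at the web root.
    $ScriptURL = rtrim(dirname(dirname($PHP_SELF)), '/') . "/index.php3";
    ?>
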
From: J C L. <cl...@ka...> - 2000-07-04 22:49:07

On Tue, 4 Jul 2000 22:10:17 +0200 (MEST), Arno Hollosi
<aho...@in...> wrote:

>> Perhaps I should restate my original post. What I listed are
>> some of the things I'll be doing. I'm starting from a base of
>> PHPWiki.
>
> Understood - I thought commenting from our standpoint makes sense
> in order to help you see differences and common ground.

Oh, absolutely, and I agree. That was the intent of my original post --
to see if there was some basis of cooperation between the two projects,
given that they share a common root. <bow>

> I now better understand the intent of your changes. Quite
> interesting. Actually, I'm all in favour of wiki taking over the
> web :o)

<dissemble> Well, umm, golly gosh. </dissemble>

Actually, the intent of this project from my end is to act as the
foundation for what in essence becomes a human-centric knowledgebase,
where the concentration is on individual evaluation and discrimination
among items, without distinction between original items and meta-items.

>> Agreed. When viewed a particular way, what I'm really doing is
>> adding the ability for the author of a WikiItem to define an
>> explicit context for his WikiItem (as versus being a purely
>> abstracted node)
>
> I'm sure you have looked at the node typing of everything2? Some
> good ideas in there too.

Actually, until you mentioned it, no. Yes, some very good ideas in there.
That's going to take some study. Thanks!

--
"Finally coming home"              Home: cl...@ka...
J C Lawrence                       Other: co...@ka...
----------(*)                      Keys etc: finger cl...@ka...
--=| A man is as sane as he is dangerous to his environment |=--

From: Arno H. <aho...@in...> - 2000-07-04 20:42:46

Hi all,

ever since Steve forwarded Ari's email to the list I've been thinking
about restructuring phpwiki. Here are some thoughts/questions:

- The db interface will become very large. I realized that when I added
  functions for MostPopular. For every such query we need two new
  functions. I don't like this.

  Possible solution: all search functions return a $pagehash array. For
  some searches the hash may only be sparsely populated; e.g. when doing
  a title or mostpopular search, it's unnecessary to set
  $pagehash['content']. There could be one general NextMatch function in
  this case. For the DBM interface that might be impossible - maybe that
  function has to have a switch() structure of some kind.

- Template facility: wouldn't it be neat to be able to add new
  placeholders and their functions by simply including a program file
  with that additional functionality? This can either be done by having
  those placeholders and functions added to an array, or by implementing
  them as objects: a base class for placeholders, which contains the name
  of the placeholder and a function call. That function call is
  overloaded by actual implementations of placeholder objects. A template
  class does the translation from $content to $html. Placeholder objects
  register with that class, and then get called from there. (A sketch of
  this registration idea follows this message.)

- The same could be used for wiki_transform. Maybe the array (class) also
  has to provide priorities, so that some functions are executed before
  others.

The above would implicitly define APIs for phpwiki modules. The actual
core could be reduced in size, while modules could simply extend the
functionality by adding their functions to provided hooks.

This needs some discussion first, as it would be a major modification,
but I think it's Very Nice (tm). I think Ari came up with something
similar (more or less), so we could learn (read: copy ;o) a lot from his
code.

What do you think?

/Arno

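A sketch of the registration idea from the second bullet, including the
argument-taking placeholders Jeff asks for elsewhere in this thread. The
class, the ###NAME arg### syntax, and the handlers are all illustrative,
written in later-PHP style (closures) rather than PHP3:

    <?php
    // Placeholder handlers register with the template class and are
    // called during expansion; modules extend it just by registering.
    class WikiTemplate {
        var $handlers = array();

        function register($name, $fn) {
            $this->handlers[$name] = $fn;
        }

        function expand($template) {
            $out = $template;
            foreach ($this->handlers as $name => $fn) {
                // ###NAME### with an optional argument,
                // e.g. ###INCLUDE footer.tpl###
                while (preg_match("/###$name( [^#]*)?###/", $out, $m)) {
                    $arg = isset($m[1]) ? trim($m[1]) : '';
                    $out = str_replace($m[0], call_user_func($fn, $arg), $out);
                }
            }
            return $out;
        }
    }

    // A module adds a placeholder simply by registering it:
    $tpl = new WikiTemplate();
    $tpl->register('LOGO', function ($arg) { return '<img src="logo.png">'; });
    echo $tpl->expand('<body>###LOGO###');
    ?>
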
From: Arno H. <aho...@in...> - 2000-07-04 20:21:46

> I just checked a new wiki_diff.php3 into the CVS repo. It's faster than
> the previous one (which did turn out to be an issue.) Also I've added
> the compose() (and reverse()) method to class WikiDiff, so it's all
> ready to be used in a full versioning system.

Great.

> "All" that needs to be done now is to fix the db access API and schema
> so that multiple backup versions can be saved. I can work on that
> (particularly for the MySQL and DBM drivers) if I'm not stepping on any
> toes.

You won't be stepping on mine, and I'm sure Steve doesn't mind either. Go
ahead :o)

> > > -- Inter-version colourised diff-style views available of edited
> > > pages
> >
> > Nice gimmick. Maybe Jeff is willing to implement this.
>
> Unless I misunderstand, this is there now (in the CVS version.)

Your latest changes are indeed a step towards that direction. What I was
thinking about is, e.g., diffs of version 5 against version 17 (not only
version 16 against 17). Also, I understood the above as a diff page that
looks like the original wiki page (through wiki_transform), only with the
diff changes highlighted inside the wiki page. Currently wiki_diff shows
diffs of the wiki source markup. I'm OK with that. The other way would
need some major modifications to wiki_transform and some to wiki_stdlib,
and I don't really think it's necessary. There are more important things
to be implemented.

/Arno

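Given compose() and reverse(), the version-5-against-17 case falls out of
chaining stored per-step diffs. A sketch, assuming $diffs[$v] holds the
WikiDiff taking version $v-1 to $v, and that compose() returns the
combined diff rather than mutating in place (the actual semantics are
Jeff's to confirm):

    <?php
    // Combine stored per-step diffs into one cross-version diff.
    function DiffBetween($diffs, $from, $to) {
        $combined = $diffs[$from + 1];
        for ($v = $from + 2; $v <= $to; $v++) {
            $combined = $combined->compose($diffs[$v]);
        }
        return $combined;
    }

    // e.g. the version 5 -> 17 diff asked about above:
    // $d = DiffBetween($diffs, 5, 17);
    ?>
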