From: Steve W. <sw...@wc...> - 2000-07-15 02:28:37
I've tarred the source code to the NBTSC PhpWiki with Ari's permission. I combed through it carefully to remove anything incriminating (logins, passwords, easter eggs). Available here:

ftp://phpwiki.sourceforge.net/pub/phpwiki/nbtsc.phpwiki.tar.gz

It's based on the 1.0 series... but I think you'll find the changes interesting.

sw
...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-14 22:38:32
In message <Pin...@bo...>, Steve Wainstead writes:
>On Fri, 14 Jul 2000, Arno Hollosi wrote:
>
>> Hm, I don't think using special fields within the ZIP is a good idea.
>> That way, if someone should touch the ZIP for whatever reason, that
>> data will be lost. I suggest using an extra file, or a meta-file for every
>> page-file.
>
>I didn't catch this before, but now I do, and I agree with Arno.. the less
>we rely on proprietary solutions the better. I'd rather hack a loader to
>read two files per page than suffer Jeff with mucking with Zip files too
>much. It hides the information from the user as well (a separate metadata
>file can be edited in a text editor, can be cat'd, grep'd and so on).
>Putting it in the Zip file means it's almost human-inaccessible.
Yes, I see your point --- however php-serialized() meta-data is basically
human-inaccessible anyway. I don't see much point in making the meta-data
more easily accessible unless it's in some human-friendly format.
So, I think we'd need to come up with some sort of metadata file format
(XML comes to mind, but as lots of PHP installations don't have XML support compiled in,
something simpler is probably called for.)
An Internet-messageish format might work well, for example:
---Snip---
Author: 12.34.56.78
Version: 23
Flags: 0
Lastmodified: 2000-07-14T21:39:08Z
Created: 2000-07-02T12:01:22Z
!!!Sample Page
Here's the page contents, with a WikiLink.
---Snip---
(If we're devising our own metadata format, I see no reason to separate the
metadata and file content into two separate files.)
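Parsing such a format back in is only a few lines of PHP. A hedged sketch (the function name and field handling here are illustrative, not anything in CVS):

// Sketch only: split a dumped page into meta-data headers and content.
// Header lines look like "Name: value"; the first line that doesn't
// match is taken as the start of the page content.
function ParseDumpedPage($text) {
    $meta  = array();
    $lines = explode("\n", $text);
    $n     = count($lines);
    for ($i = 0; $i < $n; $i++) {
        if (ereg("^([A-Za-z]+):[ \t]*(.*)$", $lines[$i], $m))
            $meta[strtolower($m[1])] = $m[2];
        else
            break;
    }
    $content = "";
    while ($i < $n)
        $content .= $lines[$i++] . "\n";
    $meta['content'] = $content;
    return $meta;
}

(A page whose first line happens to look like "Word: value" would confuse this; a blank separator line between headers and content would remove the ambiguity.)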
Is this a good idea? Is it worth the effort? (Actually it's probably
not that much effort...)
Also, what's the thinking about whether we should include all the archived
versions of a page (rather than just the most recent) in the ZIP?
I.e.: do we want to be able to:
1) Make a snapshot of the complete state of the database (all versions
of all pages)?
2) Make a snapshot of the current state of the Wiki (most recent version
of all pages)?
3) Have the option to do either 1 or 2?
If you chose 2 or 3, a secondary question is: what are the semantics of
"restoring" from a type 2 snapshot? Some choices:
A) Wipe the entire wiki, reinitialize from the snapshot.
o Archived pages are lost.
B) Essentially edit each page in the wiki so that it coincides with
the page in the snapshot:
o Resulting page version number won't necessarily agree with snapshot.
o Lastmodified date should probably be set to time of restore,
rather than the time in the snapshot.
o Current (pre-restore) version of the page gets archived?
Jeff
PS
In other news: the bug bit me, so I've started working on a modularized,
OOPified version of wiki_transform and GeneratePage() (a la Arno's suggestions).
When I get it to where I'm happy with it I'll post it here for comments
before CVSing it.
From: Steve W. <sw...@wc...> - 2000-07-14 18:03:55
Welcome back! :-)

On Fri, 14 Jul 2000, Arno Hollosi wrote:

> > o Page meta-data (author, version, etc...) is saved in a special custom
> > header field in the zip file. This information is not accessible via
>
> Hm, I don't think using special fields within the ZIP is a good idea.
> That way, if someone should touch the ZIP for whatever reason, that
> data will be lost. I suggest using an extra file, or a meta-file for every
> page-file.

I didn't catch this before, but now I do, and I agree with Arno... the less we rely on proprietary solutions the better. I'd rather hack a loader to read two files per page than suffer Jeff with mucking with Zip files too much. It hides the information from the user as well (a separate metadata file can be edited in a text editor, can be cat'd, grep'd and so on). Putting it in the Zip file means it's almost human-inaccessible.

> I would like to be able to mix those two, e.g.
>
> # one
> #* some here
> #* more there
>
> Should be quite easy to do, no?

Yes, but I'm confused. Is there a reason to allow arbitrary mixing?

> About /lib vs. /admin:
> I think /lib is cleaner and reduces the clutter in the main
> directory. But it's not an urgent issue.

Hmm. I guess I'm in the minority on this one!

> When will 1.1.7 be shipped?
> Where is Ari's code?

Always one step ahead of me! I was waiting for you to return before calling for 1.1.7. Right now it's stable and I see no reason not to release 1.1.7 right away -- this will be the Jeff release! It includes all his cool diff stuff, the zip stuff, your additions to the admin/ files, and the few things I added. That said, if you want something in 1.1.7 that isn't there now, speak up; otherwise I will release 1.1.7 over the weekend.

I invited Ari to join us but never heard back. If there's something specific you want to see in the nbtsc.org Wiki's source, I have access to it. Ari wanted to clean up the code and publicly release it, but I'll ask if I can make a tarball of it as-is, since we are all professionals and we all do strange things in the privacy of our own servers, and Ari has nothing to be ashamed of :-)

Things we've discussed, but have not reached consensus on (or maybe we did; phpwiki-talk has been extremely active), include:

* moving to PATH_INFO. Ari has code we can use. This involves more than you think, because we get things like $copy, $edit, $info for free now from PHP, and using PATH_INFO means we'll have to roll them by hand;
* moving files to lib/, which has the benefit of allowing better security like Jeff set up;
* refactoring the database interface, which will start with the DBM changes and then involve a lot of search/replace of variable names (renaming all instances of $ArchiveDataBase to $ArchiveTable, or something better, for example);
* possibly moving to an OO approach to the database interface after that;
* I'm sure I missed something; and there are a number of features we discussed a couple of months ago that I would have to search the mail for, like all the pages we added tables to the database for, etc.

However, these things can go in 1.1.8 or later; I think we are on track to release 1.2 in a couple of months (depending on how much time we all have) and there will probably be two or three releases between now and then.

sw
...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Arno H. <aho...@in...> - 2000-07-14 16:58:09
Hi there,
I'm back and try to catch up with you guys.
> It might be interesting to do a user survey to find out just
> what environments phpwiki is being run in.
> What do you and Arno deal with?
I'm using a vanilla installation from SuSE Linux: Apache+php3+mysql.
MySQL is updated to the latest stable version, which is 3.22.32.
> o Page meta-data (author, version, etc...) is saved in a special custom
> header field in the zip file. This information is not accessible via
> any standard zip tools, but I plan on writing an unzipper which can
> use this information to restore a Wiki from the zip file. (The zip file
> is (should be) still readable using any unzipper.)
Hm, I don't think using special fields within the ZIP is a good idea.
That way, if someone should touch the ZIP for whatever reason, that
data will be lost. I suggest using an extra file, or a meta-file for every
page-file.
> I want to add more DBM files to add the new functionality we've been
> adding, and I will change the way the DBM files are opened... I want to
> set it up so that we only need one call to OpenDataBase(), and we can do
Good idea.
> I added two new markup rules tonight. I want to do away with the use of
> tabs in the markup language since tabs are too difficult to use in Windows
> browsers. Right now we have:
> * one level, ** two levels, *** three levels
> # one, # two, ## one, ## two
I would like to be able to mix those two, e.g.
# one
#* some here
#* more there
#** even more here
#** even more there
#* some there
#*# some there . one
#*# some there . two
# two
Should be quite easy to do, no?
Use a regexp like "^([#*]*)([#*])" - the last char determines the
list type. The size of \1 plus \2 determines the level. If the level or
type changes, then close the current list and issue the appropriate HTML tags.
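A rough sketch of that loop in PHP (illustrative only - the variable handling and HTML details would surely differ in the real wiki_transform):

// $stack holds the currently open list tags, outermost first.
function TransformListLine($line, &$stack) {
    $out = "";
    if (ereg("^([#*]*)([#*])(.*)", $line, $m)) {
        $level = strlen($m[1]) + 1;
        $type  = ($m[2] == "#") ? "ol" : "ul";
        // close lists deeper than the new level
        while (count($stack) > $level)
            $out .= "</" . array_pop($stack) . ">";
        // same level but different type: close and reopen
        if (count($stack) == $level && $stack[$level - 1] != $type)
            $out .= "</" . array_pop($stack) . ">";
        // open lists until we reach the new level
        while (count($stack) < $level) {
            $out .= "<$type>";
            $stack[] = $type;
        }
        $out .= "<li>" . $m[3];
    } else {
        // ordinary line: close everything that's still open
        while (count($stack))
            $out .= "</" . array_pop($stack) . ">";
        $out .= $line;
    }
    return $out;
}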
> 2. The line
> [[Link] produces [Link].
> gets munged.
Oddly enough I already had fixed this one a while ago.
Apparently some changes outside wiki_transform invalidated my fix.
> 3. '''''Bold italic''' and italic''
This is tricky. I suggest using the new markup instead:
''__Bold italic__ and italic''
About /lib vs. /admin:
I think /lib is cleaner and reduces the clutter in the main
directory. But it's not an urgent issue.
When will 1.1.7 be shipped?
Where is Ari's code?
/Arno
From: Steve W. <sw...@wc...> - 2000-07-13 03:49:54
I updated the site today (http://phpwiki.sourceforge.net/phpwiki/) and tested the zip feature, which works great!

sw
...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-12 19:00:35
>That's weird, but the proof is in the pudding. I found that this works:
>
>Use [FindPage] to search [HammondWiki].

Yes, I've discovered that too. I've just fixed the "[[Link] [Link]" bug and the "HammondWiki" bug. It's in the CVS.

Jeff.
From: Steve W. <sw...@wc...> - 2000-07-12 15:26:06
That's weird, but the proof is in the pudding. I found that this works:

Use [FindPage] to search [HammondWiki].

sw

On Tue, 11 Jul 2000, Jeff Dairiki wrote:

> >I'm trying to duplicate this, can you reproduce it on
> >http://phpwiki.sourceforge.net/phpwki/ ?
>
> No, I don't think I can. The URL for the wiki has to have a BumpyWord in it.
>
> All the bugs in my note are demonstrated on
>
> http://www.dairiki.org/HammondWiki/index.php3?PhpWikiBugs
>
> which is running an unmodified version of the latest CVS version
> of wiki_transform.php3 (1.12).
>
> Jeff

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-12 05:14:28
>I'm trying to duplicate this, can you reproduce it on
>http://phpwiki.sourceforge.net/phpwki/ ?

No, I don't think I can. The URL for the wiki has to have a BumpyWord in it.

All the bugs in my note are demonstrated on

http://www.dairiki.org/HammondWiki/index.php3?PhpWikiBugs

which is running an unmodified version of the latest CVS version of wiki_transform.php3 (1.12).

Jeff
From: Steve W. <sw...@wc...> - 2000-07-12 02:52:33
On Mon, 10 Jul 2000, Jeff Dairiki wrote:

> 1. The string 'HammondWiki' is in the URL for my PhpWiki. It is also the name
> of a page within my PhpWiki. Lines which contain an old-style (bumpy-word)
> link followed by an old-style link to HammondWiki get munged. E.g., the line:
>
> Use FindPage to search HammondWiki.
>
> gets transformed to:
>
> Use <a href="http://www.dairiki.org:80/<a href="http://www.dairiki.org:80/HammondWiki/index.php3?HammondWiki">HammondWiki</a>/index.php3?FindPage">FindPage</a> to search <a href="http://www.dairiki.org:80/HammondWiki/index.php3?HammondWiki">HammondWiki</a>

I'm trying to duplicate this; can you reproduce it on http://phpwiki.sourceforge.net/phpwki/ ?

sw
...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Nicholas <nic...@sy...> - 2000-07-11 05:03:30
I've noticed the lag too; maybe SourceForge is suffering SuccessCrisis.
At 09:51 PM 7/10/00 -0700, Jeff Dairiki wrote:
>In message <Pin...@bo...>, Steve Wainstead writes:
> >I want to set it up so that we only need one call to OpenDataBase(), ...
>
>I agree, and have been thinking along the same lines.
>
>We should anticipate the addition of version control too. I see something
>like (letting my bias towards objects show through):
>
>In wiki_config.php3:
>
>$dbi = new WikiMySqlDB('localhost','test','guest','');
>
>
>Then in wiki_xxx.php3:
>
>// Get current version of page
>$current_page = $dbi->retrievePage($pagename);
>
>// Get most recent archived page
>$archived_page = $dbi->retrievePage($pagename, -1);
>
>// Get second most recent archived page (or false if there is none)
>$old_archived_page = $dbi->retrievePage($pagename, -2);
>
>// Get version 12 of the page (or false if version 12 is not in the database).
>$page_version12 = $dbi->retrievePage($pagename, 12);
>
>
>This would be a good time to clean up the InitXXX(), XXXNextMatch() interface
>too. As stated in a previous post, I suggest iterators:
>
>$search = $dbi->fullSearch("findme");
>while ($page = $search->next()) {
> // do something;
>}
>
>or similar...
>
>Jeff
>
>PS Anyone else notice anything funny going on with phpwiki-talk? I posted
>two notes earlier today (one re: ZIP file stuff, and one about bugs in
>wiki_transform) and I've only seen one come back. Also I can't get into the
>archives (I get the oh-so-informative "An error occured.")
-N
--
Nicholas Roberts, Webmaster/Director
mailto:Nic...@SY...
--
Synarchy Australia Pty Ltd
ACN: 052 408 849
11/281a Edgecliff Rd,
Woollahra 2025,
NSW, Australia
http://SYNARCHY.NET
Mob: 0414 642 316
Ph/Fax: +612 9475 4399
--
SYDNIC ARCHITECTURE:
A New Architectural Style with the Sydney Opera House as a Signature Building
http://synarchy.net/Sydnic
--
Evolution and the Polymath Entrepreneur
http://phpwiki.sourceforge.net/1.1.6/index.php3?NicholasRoberts --- Open Editing - Have Your Say!
--
From: Jeff D. <da...@da...> - 2000-07-11 04:57:22
In message <Pin...@bo...>, Steve Wainstead writes:
>I want to set it up so that we only need one call to OpenDataBase(), ...
I agree, and have been thinking along the same lines.
We should anticipate the addition of version control too. I see something
like (letting my bias towards objects show through):
In wiki_config.php3:
$dbi = new WikiMySqlDB('localhost','test','guest','');
Then in wiki_xxx.php3:
// Get current version of page
$current_page = $dbi->retrievePage($pagename);
// Get most recent archived page
$archived_page = $dbi->retrievePage($pagename, -1);
// Get second most recent archived page (or false if there is none)
$old_archived_page = $dbi->retrievePage($pagename, -2);
// Get version 12 of the page (or false if version 12 is not in the database).
$page_version12 = $dbi->retrievePage($pagename, 12);
This would be a good time to clean up the InitXXX(), XXXNextMatch() interface
too. As stated in a previous post, I suggest iterators:
$search = $dbi->fullSearch("findme");
while ($page = $search->next()) {
// do something;
}
or similar...
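A minimal sketch of what such an iterator might look like for the MySQL back end (all names here are invented for illustration; this is not proposed CVS code):

class WikiDBSearchIter {
    var $res;                          // MySQL result handle

    function WikiDBSearchIter($res) {
        $this->res = $res;
    }

    function next() {
        // an associative row array, or false when the results run out
        return mysql_fetch_array($this->res);
    }
}

function FullSearch($dbi, $term) {
    $res = mysql_query("SELECT * FROM wiki WHERE content LIKE '%"
                       . addslashes($term) . "%'", $dbi);
    return new WikiDBSearchIter($res);
}

The nice part is that the DBM implementation can return an object with the same next() method, so callers never see the difference.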
Jeff
PS Anyone else notice anything funny going on with phpwiki-talk? I posted
two notes earlier today (one re: ZIP file stuff, and one about bugs in
wiki_transform) and I've only seen one come back. Also I can't get into the
archives (I get the oh-so-informative "An error occured.")
From: Steve W. <sw...@wc...> - 2000-07-11 04:12:50
I added two new markup rules tonight. I want to do away with the use of tabs in the markup language, since tabs are too difficult to use in Windows browsers. Right now we have:

* one level
** two levels
*** three levels

# one
# two
## one
## two

and it seems to be working. It was a trivial change, but I think it will be a breath of fresh air. I am not going to implement the <dt><dd> tag set (term/definition) since no one uses them (even in HTML no one ever used them). The old rules are still in there as well.

I've added TestPage, which I hope will be a page that tests all the markup rules, so we'll always have a quick and easy way to verify it all works.

sw
...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Steve W. <sw...@wc...> - 2000-07-11 03:28:38
Hola,
Something has been bugging me for a while now. Consider:
$wiki = RetrievePage($dbi, $pagename);
$dba = OpenDataBase($ArchiveDataBase);
$archive = RetrievePage($dba, $pagename);
This is out of wiki_diff.php3. What's wrong here?
PhpWiki 1.03 was based on two DBM files, and only one at a time was opened
(except when a copy was saved to the archive, I think).
It's not right to make the relational implementations "fake" this
behavior because of the DBM heritage; it will be confusing to other
programmers and it needlessly gets a second database handle. The
relational database implementations actually pass the table name around in
$WikiDataBase and $ArchiveDataBase, which is very misleading.
I want to add more DBM files to add the new functionality we've been
adding, and I will change the way the DBM files are opened... I want to
set it up so that we only need one call to OpenDataBase(), and we can do
away with:
// All requests require the database
if ($copy) {
// we are editing a copy and want the archive
$dbi = OpenDataBase($ArchiveDataBase);
include "wiki_editpage.php3";
CloseDataBase($dbi);
exit();
} else {
// live database
$dbi = OpenDataBase($WikiDataBase);
}
as well. (from index.php3)
I don't think this means a lot of hacking... well, outside of the changes
to the DBM implementation. Once I have the new DBM code worked out we can
go back and weed out the extra OpenDataBase calls. One OpenDataBase() call
should serve the rest of the invocation for the most part.
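To make the intent concrete, something like this (a sketch only; the extra table argument is one possible shape, not settled design):

// open every DBM file (or one SQL connection) exactly once
$dbi = OpenDataBase();

$wiki    = RetrievePage($dbi, $pagename, "wiki");     // live page
$archive = RetrievePage($dbi, $pagename, "archive");  // archived copy

CloseDataBase($dbi);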
Also, there's an interesting comparison of MySQL and PostgreSQL on
PHPbuilder.com right now, if you are interested in how they stack up.
And last, I have been experiencing weird behavior with the mSQL version I
have set up at http://wcsb.org/~swain/phpwiki/. Most of the time I get an
"mSQL database has gone away" error even though it never goes away (the
radio station's program schedule runs off that same database and is never
down.) I wonder if there isn't some bug in the msql_pconnect() call? At
home I use PHP4+mSQL and never see this error.
sw
...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-10 20:24:50
I've uncovered a few bugs in wiki_transform.php3.
1. The string 'HammondWiki' is in the URL for my PhpWiki. It is also the name
of a page within my PhpWiki. Lines which contain an old-style (bumpy-word)
link followed by an old-style link to HammondWiki get munged. E.g., the
line:
Use FindPage to search HammondWiki.
gets transformed to:
Use <a href="http://www.dairiki.org:80/<a href="http://www.dairiki.org:80/HammondWiki/index.php3?HammondWiki">HammondWiki</a>/index.php3?FindPage">FindPage</a> to search <a href="http://www.dairiki.org:80/HammondWiki/index.php3?HammondWiki">HammondWiki</a>
2. The line
[[Link] produces [Link].
gets munged.
3. '''''Bold italic''' and italic''
yields: <strong><em>Bold italic</strong> and italic</em>.
(Tags not nested properly.)
'''Bold and ''bold-italic'''''
has the same problem.
Fixes for bugs 1 & 2, I think, should be straightforward. (Though I haven't
stared at the wiki_transform code long enough to come up with one.) Bug 3
is somewhat insidious and may not be easily fixable.
Jeff
From: Jeff D. <da...@da...> - 2000-07-10 19:26:19
In message <Pin...@bo...>, Steve Wainstead writes:

>Hmm. It might be overkill. How long will it take for you to do?

The zipping is done. The CVS version works. Unzipping will be easy enough to implement --- I'm just not sure precisely what to do with the data (how to restore the Wiki) once it's unzipped.

>Since this is an admin feature speed is not important;

Good point.

>but if I read you right, you're going to implement zip in PHP?

Yes, but only in a limited way. The unzip code will only be able to unzip archives which were generated by PhpWiki. (Not all (de)compression methods will be supported, and the special headers containing page meta-data must be present.)

Jeff.
From: Steve W. <sw...@wc...> - 2000-07-08 03:27:48
Hmm. It might be overkill. How long will it take for you to do? I'm worried that you might spend 100 hours on something that might not have a great demand... but it would be great to have a secure way to dump a Wiki. Since this is an admin feature speed is not important; but if I read you right, you're going to implement zip in PHP? Is there anything you can't do? :-)

sw

On Fri, 7 Jul 2000, Jeff Dairiki wrote:

> I've just checked in to the CVS the beginnings of on-the-fly ZIP file
> creation. To use, just click on the link near the bottom of admin/index.php3.
>
> Current Features:
>
> o If PHP has zlib compiled in, pages are compressed (deflated).
> (If PHP doesn't have zlib, pages are just stored --- also ZIP production
> is quite a bit slower on account of CRC32 computation in PHP.)
> (And I've just discovered that my web host's PHP doesn't have zlib :-/)
>
> o Page meta-data (author, version, etc...) is saved in a special custom
> header field in the zip file. This information is not accessible via
> any standard zip tools, but I plan on writing an unzipper which can
> use this information to restore a Wiki from the zip file. (The zip file
> is (should be) still readable using any unzipper.)
>
> o Currently, only the most recent version of a page is archived.
> If this is an issue, we can easily add an option for including
> all saved versions of every page.
>
> Known Issues:
>
> o Speed and volume of output might be an issue for large Wikis (especially
> when PHP doesn't have zlib support). Certainly the PHP execution timeout
> should be increased.
>
> o I still need to write the unzipper.
>
> What's the consensus? Is this cool or just overkill?
>
> Jeff

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-07 20:19:45
I've just checked in to the CVS the beginnings of on-the-fly ZIP file
creation. To use, just click on the link near the bottom of admin/index.php3.
Current Features:
o If PHP has zlib compiled in, pages are compressed (deflated).
(If PHP doesn't have zlib, pages are just stored --- also ZIP production
is quite a bit slower on account of CRC32 computation in PHP.)
(And I've just discovered that my web host's PHP doesn't have zlib :-/)
o Page meta-data (author, version, etc...) is saved in a special custom
header field in the zip file. This information is not accessible via
any standard zip tools, but I plan on writing an unzipper which can
use this information to restore a Wiki from the zip file. (The zip file
is (should be) still readable using any unzipper.)
o Currently, only the most recent version of a page is archived.
If this is an issue, we can easily add an option for including
all saved versions of every page.
Known Issues:
o Speed and volume of output might be an issue for large Wikis (especially
when PHP doesn't have zlib support). Certainly the PHP execution timeout
should be increased.
o I still need to write the unzipper.
What's the consensus? Is this cool or just overkill?
Jeff
From: Steve W. <sw...@wc...> - 2000-07-07 03:26:51
After some grievous hacking I got the two older Wikis on SourceForge merged into the one at http://phpwiki.sourceforge.net/phpwiki/. A few notes:

* the 1.03 porting script worked very nicely
* DO NOT try to use it with a 1.1.4-1.1.5 PhpWiki. Bad things happen.
* References were lost going from 1.1.6b -> 1.1.x, but I'm not sure why. It might be because at one point I loaded 1.03 into 1.1.6b, and then got the previous version of the FrontPage from the archive, and Nicholas' Open Source links were gone.

Otherwise I think I will switch the link from 1.1.6 -> phpwiki, and all that needs to be done to bring it up to date is cvs update -d periodically. I should probably only do that with tagged releases, but I'm impulsive sometimes. ;-)

sw
...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-06 20:48:35
>> What's the best output format? Tar? Or zip for windows friendliness?
>> Or something else I haven't thought of?
>
>Best: user chooses .zip, .Z, .gz.
>
>Minimum: .zip, since almost everyone can decompress them
>Philosophically: gz and bz2 :-)

I think we need to use some kind of archive format --- i.e. we want to pack multiple files (pages) into one file. This rules out straight .Z and .gz. Tar then .gz is fine.

I was thinking that zip would be the most portable, so I started looking into the format. There are CRC32 checksums in the file headers --- I suspect these will be expensive to compute in PHP, as AFAIK there is no built-in function to do it.

So now I'm looking at the tar file format, which is looking much more straightforward to implement --- I think I'll start with that.

Jeff.
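For reference, a tar entry really is simple: a 512-byte header (name, then size, mtime, etc. as octal strings, plus a byte-sum checksum) followed by the data padded to a 512-byte boundary. A hedged sketch, not the code that actually went into CVS:

// Build one tar entry (header block + padded data).
function TarEntry($name, $data) {
    $size = strlen($data);
    $hdr  = str_pad($name, 100, "\0")      // file name
          . sprintf("%07o\0", 0644)        // mode
          . sprintf("%07o\0", 0)           // uid
          . sprintf("%07o\0", 0)           // gid
          . sprintf("%011o\0", $size)      // size
          . sprintf("%011o\0", time())     // mtime
          . "        "                     // checksum: blanks while summing
          . "0"                            // typeflag: regular file
          . str_pad("", 100, "\0")         // linkname
          . "ustar  \0"                    // magic
          . str_pad("", 247, "\0");        // remaining fields, zero-filled
    $sum = 0;                              // checksum is a simple byte sum
    for ($i = 0; $i < 512; $i++)
        $sum += ord(substr($hdr, $i, 1));
    $hdr = substr($hdr, 0, 148) . sprintf("%06o\0 ", $sum) . substr($hdr, 156);
    return $hdr . $data . str_pad("", (512 - $size % 512) % 512, "\0");
}

A complete archive is just the concatenated entries followed by two 512-byte blocks of NULs.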
From: Steve W. <sw...@wc...> - 2000-07-06 18:19:06
On Thu, 6 Jul 2000, Jeff Dairiki wrote:

> It might be interesting to do a user survey to find out just what environments
> phpwiki is being run in. What do you and Arno deal with?

Dunno about Arno; I use Red Hat 6.2 at home with two Apache servers:

1. the default Apache+PHP+Postgresql, which RH ships with
2. Apache 1.3.12+PHP4+mSQL2+Postgresql

On wcsb.org: RH 4.2 + mSQL2

At SourceForge: whatever they have, probably Apache 1.3+MySQL on Debian

I *usually* test on all these environments, though I've never found a difference in any of them...

> (Confused yet?)

auth gives me headaches :-)

> What's the best output format? Tar? Or zip for windows friendliness?
> Or something else I haven't thought of?

Best: user chooses .zip, .Z, .gz.

Minimum: .zip, since almost everyone can decompress them.

Philosophically: gz and bz2 :-)

> See ignore_user_abort(). See also register_shutdown_function() which allows
> you to continue executing PHP after the html output stream has been closed.
> (There may be other better ways to do this, but I don't know them.)
> (There's also flush() which flushes the output, allowing you to add to it
> later.)
>
> I was thinking of an update count stored in the DBM. When it exceeds a
> threshold, the DBM gets rebuilt.

Excellent idea. This would be cleaner than using lynx+cron, by far!

sw
...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-06 16:20:35
In message <Pin...@bo...>, Steve Wainstead writes:

>It would only be as secure for as many people out there who don't know
>there is a lib/ and I'm sure lots of script kiddies read Freshmeat on a
>daily basis...

With apache, if one creates a lib/.htaccess that reads:

order allow,deny
deny from all

then no one can get at anything in lib via http. (The same should be done for templates/.htaccess too.) Other httpds (including at least Netscape's and NCSA's) offer similar abilities, though the actual directives may differ. I'd be surprised (I often am) if most httpds don't allow the user some kind of similar control.

When I put together HammondWiki, the first thing I did was move the wiki_*.php3's into a non-readable lib directory. In my case (apache) this required no source code tweaks at all, as I set the php include path in HammondWiki/.htaccess.

>Any changes that mean something won't work "out of the box" is going to
>meet stiff opposition from me :-)

I agree completely. That's one of the advantages of using PHP, in my opinion. On the other hand, I'm sure there are plenty of non-portable things you can do in PHP --- and I don't have much experience other than in an apache environment.

It might be interesting to do a user survey to find out just what environments phpwiki is being run in. What do you and Arno deal with?

>> When apache is configured to do external authorization,...
>
>(Is that, btw, the same as the .htaccess file?)

Uh, sort of. One needs the following (or similar) directives in httpd.conf or an .htaccess file. (These directives can be disallowed in .htaccess files by httpd.conf.)

# User and group password databases.
AuthUserFile /some/password
AuthGroupFile /some/group

Then either in a <Directory> (or similar) section or in .htaccess you put something like:

require user dairiki

Then only dairiki can access that stuff. The confusing thing is that once you've set an AuthUserFile (and I can't find a way to unset it locally in an .htaccess file), even in a directory with no 'require' directives (i.e. no authentication required), neither the username nor password will ever make it through httpd. (Confused yet?)

>> Files and directories which are writeable through httpd make me nervous,...
>
>Whenever I've added something that means "the server can write to it," be
>it a DBM file or a directory, I think very carefully about it. Being able
>to dump Wiki pages to a directory doesn't sound too dangerous, since the
>input has to be a dbm file (or it fails completely). ....

Yes, it seems pretty safe. I can't think of any exploits other than fill-the-disk DOS-type attacks (which are possible anyhow...)

>Also, a friend recently asked me what's to stop him from uploading a
>uuencoded warez file and I said nothing; but maybe we want to set a hard
>limit on page sizes anyway, like 1M or maybe 500,000K. That could be a
>define() in the config file and a check on the size of the page in
>wiki_savepage.php3.

Good point, a hard limit sounds like a good idea. I would make it considerably smaller than 500,000K (or even 500K :-). It would be interesting to find the biggest current Wiki page.

>> Another slick alternative might be a PHP script which creates a tar- (or zip-)
>> file dump of the wiki on the fly
>
>THAT is a very interesting idea...

I'll look into it a bit more. Compression will be a problem without local temporary files (which I suppose aren't a big problem.) (Of course compression will be a bigger problem if your PHP doesn't have zlib...)

What's the best output format? Tar? Or zip for windows friendliness? Or something else I haven't thought of?

>> Maybe wiki_dbmlib can do this automatically every once in awhile?
>
>It would have to be an admin function because if you decide that the first
>user after 4am every day triggers the rebuild, you are counting on that
>user to not stop the transaction before it's done. (i.e., click the "stop"
>button and interrupt the rebuild.) That makes me nervous.

See ignore_user_abort(). See also register_shutdown_function(), which allows you to continue executing PHP after the html output stream has been closed. (There may be other better ways to do this, but I don't know them.) (There's also flush(), which flushes the output, allowing you to add to it later.)

I was thinking of an update count stored in the DBM. When it exceeds a threshold, the DBM gets rebuilt.

Jeff
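The update-count idea is a one-function affair with PHP's dbm calls; a sketch (RebuildDBMFiles() and the threshold value are invented for illustration):

define("REBUILD_THRESHOLD", 500);   // invented number

function CountUpdateAndMaybeRebuild($dbi) {
    $count = (int) dbmfetch($dbi, "_update_count") + 1;
    dbmreplace($dbi, "_update_count", (string) $count);
    if ($count >= REBUILD_THRESHOLD) {
        ignore_user_abort(1);   // a "stop" button won't interrupt the rebuild
        RebuildDBMFiles($dbi);  // hypothetical: copy pages into a fresh DBM
        dbmreplace($dbi, "_update_count", "0");
    }
}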
From: Steve W. <sw...@wc...> - 2000-07-06 14:05:45
On Wed, 5 Jul 2000, Jeff Dairiki wrote:

> Why is this in its own subdirectory? Why not just an admin.php3 in the
> main directory?

I wanted to keep files grouped according to their function, more or less. Putting everything in admin/ solved this, from my point of view...

> I think all the files which get included or required should be moved into
> a 'lib' subdirectory. This is mostly a security issue as it makes it much
> easier to prevent people from directly browsing eg.
> http://blah/wiki_display.php3.
> (Not that this necessarily does anything bad, but there's no reason for
> that to be a valid URL at all.)

It would only be as secure for as many people out there who don't know there is a lib/, and I'm sure lots of script kiddies read Freshmeat on a daily basis...

> I suggest that only index.php3 and admin.php3 should be in the top level
> directory. These will 'include "lib/wiki_config.php3"' (or maybe 'include
> "wikilib/config.php3"'?)

That might be a cleaner solution; but I wanted to start coding admin stuff now, and not clutter the main directory. I dunno. What do you think, Arno? admin/ or lib/?

> (PHP, as you probably know, does support an include search path via the
> configuration variable php_include_path. When PHP is run as an Apache
> module, this path can be set in the local .htaccess file. With other
> servers, this is probably not so easy. One could write one's own version
> of include using file_exists().)

I'm very flexible on the directory issue but I won't back down on ease of installation. A lot of people who've installed PhpWiki don't have access to the server itself. (Myself included, on SourceForge!) Any changes that mean something won't work "out of the box" are going to meet stiff opposition from me :-) (Try installing a few other Wiki clones and see how long it takes to get a Wiki up and running to the point where PhpWiki is...)

> When apache is configured to do external authorization, the variables
> $PHP_AUTH_USER and $PHP_AUTH_PW never get set. The solution, in this
> case, is just to delete the authorization stuff from admin/index.php3,
> since the httpd is handling this anyway.
>
> This is confusing though (for admins setting up a phpwiki, that is).
> To maintain maximum plug-and-playness, it might be better to implement
> authentication entirely within php. The drawback to this is, as always,
> added complication: it probably requires cookies and some sort of session
> management.

I wasn't aware of this, thanks for bringing it up... I intended to put in a comment in the code inviting a better solution to authentication. Perhaps we will just have to document the problem if the user wants to run PhpWiki on an Apache server that does auth. (Is that, btw, the same as the .htaccess file? I haven't set up one of those since 1997.)

> Files and directories which are writeable through httpd make me nervous,
> and I try to minimize their number. (Of course the main databases need to
> be writeable, so maybe my fear is moot.)

Agreed. Ideally PhpWiki is run by someone who has a clue; I suppose that's wishful thinking on my part :-) At this stage we've been really lucky, because people interested in Wikis in general are pretty intelligent people (present company included :-) so it's been pretty smooth.

Whenever I've added something that means "the server can write to it," be it a DBM file or a directory, I think very carefully about it. Being able to dump Wiki pages to a directory doesn't sound too dangerous, since the input has to be a dbm file (or it fails completely). Where it's being written has to be writable by the server, so if some bad guy in a black hat decides to hack a PhpWiki and dump all the pages to /tmp he can do so... if the server runs as root (bad!) he could name a page "passwd" and dump the pages to /etc. So a careful check of the directory the user provides will be crucial. We can start by limiting it to anything under /tmp and anything under the server root; nothing with ".." will be allowed; and if the out-of-the-box constraints are too stiff, they have GPL'd source code that they are free to do with as they please!

That said, I'm sure another security vulnerability lurks somewhere. It might be better to ship PhpWiki with admin functions disabled, so the user has to enable them. (It has no login/password now and won't work until they edit the script.)

Also, a friend recently asked me what's to stop him from uploading a uuencoded warez file, and I said nothing; but maybe we want to set a hard limit on page sizes anyway, like 1M or maybe 500,000K. That could be a define() in the config file and a check on the size of the page in wiki_savepage.php3.

> Another slick alternative might be a PHP script which creates a tar- (or zip-)
> file dump of the wiki on the fly (to be saved on the web-client's, rather than
> the web-server's, disk.)

THAT is a very interesting idea...

> >The Perl script shrank the DBM file on wcsb.org from 2,464,640 bytes to
> >117,574 (there are 91 pages in it).
>
> Maybe wiki_dbmlib can do this automatically every once in awhile?

It would have to be an admin function, because if you decide that the first user after 4am every day triggers the rebuild, you are counting on that user to not stop the transaction before it's done (i.e., click the "stop" button and interrupt the rebuild). That makes me nervous.

sw
...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
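The hard page-size limit Steve mentions above would be a two-line change. A sketch with invented names (MAX_PAGE_SIZE is not an existing PhpWiki define):

// in wiki_config.php3:
define("MAX_PAGE_SIZE", 65536);   // bytes; pick whatever seems sane

// in wiki_savepage.php3, before the page is written:
if (strlen($content) > MAX_PAGE_SIZE) {
    echo "Page exceeds the " . MAX_PAGE_SIZE . "-byte limit; not saved.";
    exit;
}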
From: Jeff D. <da...@da...> - 2000-07-05 23:47:31
In message <Pin...@bo...>, Steve Wainstead writes:
>
>OK, here's my first pass at an administrative module for PhpWiki.
>
>I made a new subdirectory, admin/, which has three files in it. One is
>index.php3, which will work much like the main index.php3: it opens the
>database and goes through an if/elseif/elseif/else block to decide which
>file to load.
Some comments:
First point:
Why is this in its own subdirectory? Why not just an admin.php3 in the
main directory?
Which leads to (quoted from admin/index.php3):
// temporarily go up to the main directory. is there a way around this?
chdir("..");
include "wiki_config.php3";
include "wiki_stdlib.php3";
chdir("admin");
I think all the files which get included or required should be moved into
a 'lib' subdirectory. This is mostly a security issue as it makes it much
easier to prevent people from directly browsing eg.
http://blah/wiki_display.php3.
(Not that this necessarily does anything bad, but there's no reason for
that to be a valid URL at all.)
I suggest that only index.php3 and admin.php3 should be in the top level
directory. These will 'include "lib/wiki_config.php3"' (or maybe 'include
"wikilib/config.php3"'?)
(PHP, as you probably know, does support an include search path via the
configuration variable php_include_path. When PHP is run as an Apache
module, this path can be set in the local .htaccess file. With other
servers, this is probably not so easy. One could write one's own version
of include using file_exists().)
Second point:
When apache is configured to do external authorization, the variables
$PHP_AUTH_USER and $PHP_AUTH_PW never get set. The solution, in this
case, is just to delete the authorization stuff from admin/index.php3,
since the httpd is handling this anyway.
This is confusing though (for admins setting up a phpwiki, that is).
To maintain maximum plug-and-playness, it might be better to implement
authentication entirely within php. The drawback to this is, as always,
added complication: it probably requires cookies and some sort of session
management.
>The files it will choose from will be:
>
>* serialize all pages
>* dump all pages as HTML
>* load a set of serialized pages
Files and directories which are writeable through httpd make me nervous,
and I try to minimize their number. (Of course the main databases need to
be writeable, so maybe my fear is moot.)
Mostly because of this, I, personally, favor using perl scripts to do the
dumping sorts of things.
Another slick alternative might be a PHP script which creates a tar- (or zip-)
file dump of the wiki on the fly (to be saved on the web-client's, rather than
the web-server's, disk.)
>* rebuild the DB files (for DBM-based Wikis)
>Third is a Perl script that reduces the size of a DBM file. I will write
>all of it in PHP later but wanted to prove I was right about how DBM files
>lose memory first, and I was... for the savvy sysadmin a Perl script will
>be faster or more flexible a solution (and can be easily cron'd.)
>
>The Perl script shrank the DBM file on wcsb.org from 2,464,640 bytes to
>117,574 (there are 91 pages in it).
Maybe wiki_dbmlib can do this automatically every once in awhile?
Jeff
From: Steve W. <sw...@wc...> - 2000-07-05 22:15:40
Hi Markus! Thanks for the kind words. It makes it all worthwhile to hear PhpWiki helps you!

I started on the script that will move pages in a 1.0 .. 1.1.5 PhpWiki DBM file and load them into a 1.1.6 or later PhpWiki. Arno fleshed out some important parts today, so it's almost finished. It will be part of the 1.1.7 release, which will be out soon.

cheers,
sw

On Wed, 5 Jul 2000, Markus Guske wrote:

> Hi,
>
> > Are you trying to use the same database? The schema changed from 1.1.5 to
> > 1.1.6. Page data is now written to columns instead of storing everything
> > in one serialized hash.
>
> This is just fixed, thanks Arno.
>
> > I am working on a new script that might help you move your pages from
> > 1.1.5 to 1.1.6, if you are concerned about saving them.
>
> Yes, I really appreciate the idea. Otherwise I have to load and copy all of
> the 1.1.5 Wiki pages on the next weekend :-)
>
> - Markus
>
> ---
> BITPlan GmbH smart solutions - Meerbuscher Str. 58-60 - 40670 Meerbusch-Osterath
> Tel +49 2159 5236-0 - Fax +49 2159 5236-10 - mg...@bi... - http://www.bitplan.de
>
> Home Office: Markus Guske - Erftstrasse 17 - D-41564 Kaarst
> Tel +49 173 946 1880 - Fax +49 2131 769-195 - mg...@gu...

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-05 21:44:27
In message <146...@da...>, Arno Hollosi writes:
>Here are some thoughts/questions:
>
>- the db interface will become very large. I realized that when I
> added functions for MostPopular. For every such query we need
> two new functions. I don't like this.
I agree.
> Possible solution: all search functions return a $pagehash array.
> For some searches the hash may only be sparsely populated, e.g.
> when doing a title or mostpopular search, it's unnecessary to
> set $pagehash['content'].
I think this is basically the right idea. Now I will let my bias
towards classes run free:

This is where (at least the way I see it) making $pagehash into a class
(concrete type) would make things cleaner. $pagehash['content'] turns into
$page->content(), which can do something smart (either signal an error, or
fetch the content) if the content isn't there.
> There could be one general NextMatch function in this case.
> For the DBM interface that might be impossible - maybe that function
> has to have a switch() structure of some kind.
The db search/scan functions should return some kind of iterator. For
example, some usage like:
$hotlist = FindMostPopular($dbi, $limit);
// Better yet: $hotlist = $dbi->FindMostPopular($limit); :-)
while ($page = $hotlist->next())
    echo $page->pagename() . " " . $page->hitcount() . " hits\n";
>- template facility:
> A template class that does the translation from $content to $html.
> Placeholder objects register with that class, and then get called
> from there.
Excellent! Some way to get arguments to placeholders would be nice as
well. For example, currently the ###IFCOPY### placeholder has as its argument
the remainder of the line --- however this is clumsy. As another example,
it would be nice to have an "###INCLUDE###" placeholder (taking a file name
as an argument) that could be used to suck in a sub-template.
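The registration idea might look something like this as a bare sketch (argument handling is ignored, and every name here is invented):

class Template {
    var $handlers = array();

    // $obj must provide a value($page) method returning replacement HTML
    function register($token, &$obj) {
        $this->handlers[$token] = &$obj;
    }

    function expand($html, &$page) {
        reset($this->handlers);
        while (list($token, $obj) = each($this->handlers)) {
            $html = str_replace("###$token###", $obj->value($page), $html);
        }
        return $html;
    }
}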
Jeff