From: Steve W. <sw...@wc...> - 2000-07-15 21:28:08

I'm uploading the tar ball... carefully tested, no CVS files this time,
wiki_config.php3 is vanilla, etc...

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Steve W. <sw...@wc...> - 2000-07-15 21:21:59

On Sat, 15 Jul 2000, Arno Hollosi wrote:

> Quite useful. Nice output format.
> Have a look yourself:
>
> http://www.red-bean.com/~kfogel/cvs2cl.shtml

Thanks, I'll try it!

> Btw, could you clarify what you mean by your last entry in the TODO:
>
> > The History list in the browser no longer tells you where an edit page
> > is; this happened when we switched to templating.
>
> What is an edit page?

I had thought that when you view a page, edit a page, save the changes,
and go to the page again, the web browser's Go dropdown (the web
browser's History list) would look like:

  FrontPage
  Thanks for FrontPage edits
  Edit FrontPage
  FrontPage

In Netscape anyway, but I was mistaken; I just tested this on a 1.03
PhpWiki and the behavior is the same (apologies for implying you broke
something :-) But since we don't want the user to go back anymore to
make further edits (they get the message that someone else edited the
page, but it was in fact them) it's just as well.

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Arno H. <aho...@in...> - 2000-07-15 21:19:30

> > Or do you think we should store the intermediate state instead?
>
> Yes, this is an idea I had a long time ago.
> I thought that [...] it would open up some interesting possibilities

Sure :o) I'm in favour of this change. We have to think about some side
effects like links to pages not yet existing.

Jeff: I guess you have to take this into account when creating your
classes for pages and stuff. Basically it means we also need a routine
to transform the intermediate state back to 'normal' wiki markup for
EditPages. I guess this will lead to some major refactoring.

> Yes, got it, it will go out in the 1.1.7 release, which I am close to
> doing. Are you going to make any more commits?

No. Go ahead :o)

/Arno
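For concreteness, a minimal sketch of that round trip, in modern PHP for
brevity: the %n% placeholder scheme and every function name below are
illustrative assumptions, not code from PhpWiki.

---Snip---
<?php
// Hypothetical intermediate state: the page text with numbered
// placeholders, plus a parallel array holding the original link markup.
function TransformToIntermediate($rawtext) {
    $links = array();
    $text = preg_replace_callback(
        // [bracketed links] or BumpyWord links
        '/\[[^\]]+\]|\b(?:[A-Z][a-z]+){2,}\b/',
        function ($m) use (&$links) {
            $links[] = $m[0];
            return '%' . count($links) . '%';
        },
        $rawtext);
    return array('text' => $text, 'links' => $links);
}

// The routine Arno asks for: turn the intermediate state back into
// plain wiki markup so EditPages can show the original source.
function TransformToWikiMarkup($page) {
    $text = $page['text'];
    foreach ($page['links'] as $i => $link) {
        $text = str_replace('%' . ($i + 1) . '%', $link, $text);
    }
    return $text;
}
?>
---Snip---

With the links held externally like this, renaming a page reduces to
rewriting entries in the links array (and the wikilinks table) without
touching the page text itself.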
From: Arno H. <aho...@in...> - 2000-07-15 21:12:44

> Here's what I have for the HISTORY file, please let me know if I
> overlooked something:
>
> 1.1.7 07/15/00:

Seems complete.

> I culled this list from
> http://phpwiki.sourceforge.net:80/phpwiki/index.php3?CVS%20log%20entries%20for%20release-1_1_6%20-%20release-1_1_7.
>
> which is what you get from:
>
> cvs log -d">2000-06-25"

Ouch - quite a list to walk through. I use cvs2cl.pl to create a
ChangeLog file. Quite useful. Nice output format. Have a look yourself:

http://www.red-bean.com/~kfogel/cvs2cl.shtml

Btw, could you clarify what you mean by your last entry in the TODO:

> The History list in the browser no longer tells you where an edit page
> is; this happened when we switched to templating.

What is an edit page?

/Arno
From: Steve W. <sw...@wc...> - 2000-07-15 21:07:40

On Sat, 15 Jul 2000, Arno Hollosi wrote:

> > > One wish I have is this: make the wiki_transform generic enough, so
> > > that it can be used for savepage as well.
> >
> > This would allow the renaming of pages as well, and all links for that
> > page would be changed automatically, e.g. SteveWainsteadEatsWorms to
> > SteveWainsteadIsANiceGuy.
>
> I haven't thought of this one :o)
> Wiki pages would still get saved as they are now, only that we save
> links in the wikilinks table as well. So, renaming a page would need a
> larger function than simply replacing the pagename in the wikilinks table.
> Or do you think we should store the intermediate state instead?

Yes, this is an idea I had a long time ago. I had a problem at work
where we wanted to merge two separate Wikis and there were all sorts of
namespace collisions I had to think about. I thought that if all page
links were stored externally, and the pages were stored with tokens,
that it would open up some interesting possibilities, among them the
renaming of pages and therefore all the links to that page. I had some
other ideas about the possibility of this architecture which I hope I
wrote down somewhere... if not we'll rediscover them! :-)

> Btw, I committed a changed UpdateRecentChanges() which adds a diff
> link :o) And I fixed two more magic_quotes_gpc=1 bugs. This time in
> wiki_pageinfo and wiki_diff

Yes, got it, it will go out in the 1.1.7 release, which I am close to
doing. Are you going to make any more commits?

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Arno H. <aho...@in...> - 2000-07-15 21:02:11

> > One wish I have is this: make the wiki_transform generic enough, so
> > that it can be used for savepage as well.
>
> This would allow the renaming of pages as well, and all links for that
> page would be changed automatically, e.g. SteveWainsteadEatsWorms to
> SteveWainsteadIsANiceGuy.

I haven't thought of this one :o)

Wiki pages would still get saved as they are now, only that we save
links in the wikilinks table as well. So, renaming a page would need a
larger function than simply replacing the pagename in the wikilinks
table. Or do you think we should store the intermediate state instead?

Btw, I committed a changed UpdateRecentChanges() which adds a diff
link :o) And I fixed two more magic_quotes_gpc=1 bugs. This time in
wiki_pageinfo and wiki_diff.

/Arno
From: Steve W. <sw...@wc...> - 2000-07-15 18:14:04

On Sat, 15 Jul 2000, Arno Hollosi wrote:

> One wish I have is this: make the wiki_transform generic enough, so
> that it can be used for savepage as well. Let me explain: sooner or
> later we want to populate the wikilinks database table. In order to
> get the outgoing links they have to be extracted from the document at
> save time. Extracting those links is exactly what wiki_transform does
> right now (among other things).
>
> So some sort of PreparePage() or AnalyzePage() which transforms the
> wiki markup into a semi-state with placeholders and separate arrays
> for wiki links and external links would be most useful. It could then
> be reused in wiki_savepage.

This would allow the renaming of pages as well, and all links for that
page would be changed automatically, e.g. SteveWainsteadEatsWorms to
SteveWainsteadIsANiceGuy.

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Steve W. <sw...@wc...> - 2000-07-15 18:12:24

On Sat, 15 Jul 2000, Jeff Dairiki wrote:

> Next question:
>
> What should the structure of a type 1 ZIP archive be?
>
> 2. One file for each page. Each file contains all versions of the page.
>    If we use the internet message format, this becomes an 'mbox format'
>    file.

I like this one... also, you could parse a file that contains 20
versions of a page, and start using InsertPage() from version 1 through
version 20, as a cheap solution to replicating the database. However if
we store diffs... things will get hairy.

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
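A minimal sketch of that replay, assuming an mbox-style history file
with 'From ' separator lines; InsertPage() is the loader the list
discusses, but its ($dbi, $pagename, $pagehash) signature here is an
assumption, as is the header layout.

---Snip---
<?php
// Replay every archived version of a page, oldest first, through the
// normal insert path: a cheap way to replicate the database.
function ReplayPageHistory($dbi, $pagename, $mboxfile) {
    $raw = file_get_contents($mboxfile);
    // Each version starts with an mbox "From " separator line.
    // (Real mbox readers must handle '>From ' escaping; omitted here.)
    $versions = preg_split('/^From .*$/m', $raw, -1, PREG_SPLIT_NO_EMPTY);
    foreach ($versions as $msg) {
        // Headers, blank line, then the raw page content.
        list($head, $body) = preg_split("/\r?\n\r?\n/", ltrim($msg), 2);
        $pagehash = array('content' => $body);
        foreach (explode("\n", $head) as $line) {
            if (preg_match('/^([A-Za-z]+):\s*(.*)$/', $line, $m))
                $pagehash[strtolower($m[1])] = trim($m[2]);
        }
        InsertPage($dbi, $pagename, $pagehash);  // version 1 ... version N
    }
}
?>
---Snip---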
From: Steve W. <sw...@wc...> - 2000-07-15 18:08:02

On Sat, 15 Jul 2000, Jeff Dairiki wrote:

> On what to archive:
>
> I think we all agree that we don't want to archive everything. The
> question is how to filter out the small edits.
>
> When the author changes we should force the old version to be archived.
> This keeps competing authors from wiping each other's work.

I agree... author change == "commit"

> So the question remains what to do when an author is editing his own work:
>
> Personally, there have been many times I've checked something into CVS
> (or some other version control system) when I've almost immediately
> discovered some braino and wanted to erase the checkin.
>
> I envision a check-box on the edit form labelled something like
> "This is a minor edit. Do not archive previous version." The default
> state of this button could be made to depend on the time since the
> last edit (if it's been awhile, the default would be to make a backup).
>
> To me it seems better to decide at edit time whether a change is "minor",
> than to try to guess whether the next change will be "minor".
>
> I suppose this is not incompatible with another checkbox labelled "commit",
> which would set a PRECIOUS bit in $pagehash['flags'].

Doable; I would be happy with a "commit" button, but people would
probably just commit every change they make... hmm... maybe store whole
versions when authors change, and diffs when an author edits their own
work? Just a thought.

> Cool idea which just popped into my head:
>
> Once the version history stuff is in place, must add 'diffs' link to
> RecentChanges entries.

I am drooling... :-)

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
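A minimal sketch combining the two policies (always archive on author
change, time-based default for the "minor edit" box otherwise); the
field names and the one-hour window are assumptions for illustration.

---Snip---
<?php
// Should the previous version be archived before this edit is saved?
function ShouldArchive($pagehash, $author, $now) {
    if ($pagehash['author'] != $author)
        return true;                      // author change == "commit"
    // Same author: archive by default only if the last edit is old.
    return ($now - $pagehash['lastmodified']) > 3600;  // 1 hour, arbitrary
}
?>
---Snip---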
From: Steve W. <sw...@wc...> - 2000-07-15 17:57:00

I couldn't sit in my apartment all day working on the project and
listening to CDs if it weren't raining. Here's what I have for the
HISTORY file, please let me know if I overlooked something:

1.1.7 07/15/00:

* Page diffs, with color
* MostPopular page added which dynamically tracks most viewed pages
  (MySQL only so far)
* Admin functions: page dumps, page loads, Zip dumps, page locking
* MySQL, DBM, mSQL and Postgresql support all functional and appear stable
* Full HTML compliance in page output
* Fixed raw HTML exploit in [info] page
* Perl script included to reduce the size of a DBM file
* documentation updates
* page source updates
* gazillion feature enhancements and bug fixes, no doubt necessitating
  another gazillion feature enhancements and bug fixes ;-)

I culled this list from
http://phpwiki.sourceforge.net:80/phpwiki/index.php3?CVS%20log%20entries%20for%20release-1_1_6%20-%20release-1_1_7.

which is what you get from:

cvs log -d">2000-06-25"

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Arno H. <aho...@in...> - 2000-07-15 17:43:46

Jeff,

> > In other news: the bug bit me, so I've started working on a modularized,
> > OOPified version of wiki_transform and GeneratePage() (a la Arno's
> > suggestions).

could you post your class definition, before you start implementing?
Maybe the list can give you some valuable feedback.

One wish I have is this: make the wiki_transform generic enough, so
that it can be used for savepage as well. Let me explain: sooner or
later we want to populate the wikilinks database table. In order to
get the outgoing links they have to be extracted from the document at
save time. Extracting those links is exactly what wiki_transform does
right now (among other things).

So some sort of PreparePage() or AnalyzePage() which transforms the
wiki markup into a semi-state with placeholders and separate arrays
for wiki links and external links would be most useful. It could then
be reused in wiki_savepage.

/Arno
From: Jeff D. <da...@da...> - 2000-07-15 17:19:23

In message <Pin...@bo...>, Steve Wainstead writes:

>> An Internet-messageish format might work well, for example:
>
>This works for me. While XML would be keen, "don't let the best be the
>enemy of the good." (best=XML, good=simple reliable solution)
>
>Human-readable and human-editable are good things...

I 'gree.

>One file probably is better... and the mail-message format is simple and
>tried-and-true. NNTP also uses a similar format.

>> Also, what's the thinking about whether we should include all the archived
>> versions of a page (rather than just the most recent) in the ZIP?
>>
>> I.e.: do we want to be able to:
>>  1) Make a snapshot of the complete state of the database (all versions
>>     of all pages)?
>>  2) Make a snapshot of the current state of the Wiki (most recent version
>>     of all pages)?
>>  3) Have the option to do either 1 or 2?
>
>I suppose in an ideal world, you get a nice wizard box that steps you
>through all the choices for getting a snapshot, from the "whole thing" to
>"just the pages, no metadata." But in a practical sense, the thing that
>concerns me is having a simple way to make backup copies, and a way
>to port from one Wiki to another.

Yes, it would also be nice to use the ZIP archive to move a PhpWiki from
one database format to another. I'd want to do this without losing
archived versions, so this requires a type 1 archive. I can definitely
see the value of a type 2 archive as well. So I'll proceed towards
option 3...

>Anyway, my suggestion is: do the simplest thing first, which in fact
>you've already done since I can make Zip files of PhpWiki.. your choice of
>what comes next! I favor #3 and #2 in that order. Restoring a Wiki should
>work through the InsertPage() interface, so unzipping and loading a Wiki
>would not overwrite existing pages but move them to an archive.

My take is that unzipping a type 2 archive should work as you say.
Unzipping a type 1 archive should wipe the existing database. This is
probably too dangerous to be done over the web interface. I envision
being able to specify a ZIP archive to initialize from (instead of
pgsrc/*) in wiki_config. In fact, once the unzip mechanism is in place,
I think we should eliminate pgsrc/* in favor of a pgsrc.zip.

Next question:

What should the structure of a type 1 ZIP archive be?

1. One file for each version of each page; either in:
   1a. Subdirectories: 'FrontPage/1', 'FrontPage/2', ...
   1b. Funny-named files: 'FrontPage~1~', 'FrontPage~2~', ...

2. One file for each page. Each file contains all versions of the page.
   If we use the internet message format, this becomes an 'mbox format'
   file.

>Excellent! You know, CVS has the ability to allow you to develop your own
>private experimental "branch" and tinker away w/o losing the benefits and
>freedom CVS provide. I haven't tried it myself yet but if you used this
>approach, others could check out your work and test it.

Yeah, I know that that's possible. I'll look into it and see if I can
figure out how.

Jeff
From: Jeff D. <da...@da...> - 2000-07-15 17:00:19

In message <Pin...@bo...>, Steve Wainstead writes:

>And that's when it struck me that if she posted the page to a newsgroup,
>that this was the same thing as doing a "commit" in CVS. And that's when I
>had my moment of "Aha! | Doh!"
>
>This is my vision for version control for PhpWiki: the ability to commit
>changes; the ability to retrieve any previous version. Not too difficult.

Here's my thoughts:

On how many archived pages to keep:

Some wikis may have storage space considerations, so I think there
should be some way of configuring how many archived versions to keep.
I definitely agree that all archived versions of a page should be
accessible to everyone.

On what to archive:

I think we all agree that we don't want to archive everything. The
question is how to filter out the small edits.

When the author changes we should force the old version to be archived.
This keeps competing authors from wiping each other's work.

So the question remains what to do when an author is editing his own work:

Personally, there have been many times I've checked something into CVS
(or some other version control system) when I've almost immediately
discovered some braino and wanted to erase the checkin.

I envision a check-box on the edit form labelled something like
"This is a minor edit. Do not archive previous version." The default
state of this button could be made to depend on the time since the
last edit (if it's been awhile, the default would be to make a backup).

To me it seems better to decide at edit time whether a change is "minor",
than to try to guess whether the next change will be "minor".

I suppose this is not incompatible with another checkbox labelled "commit",
which would set a PRECIOUS bit in $pagehash['flags'].

Cool idea which just popped into my head:

Once the version history stuff is in place, must add 'diffs' link to
RecentChanges entries.

>At this point I wanted to apologize to Jeff for what I think might have
>been a miscommunication. When we had this thread I posed a few questions
>to Jeff about doing diffs between versions: was storage an issue? Is doing
>a diff between version 2 and version 99 hard/expensive?
>
>Jeff, I think I might have scared you or intimidated you out of doing what
>you were working on... storing the diffs of pages. If I did I'm sorry. I
>was asking questions so we could discuss the possibilities, not to
>question your objectives. I'd like to revisit that discussion again.

No, no ... no apology needed, no offense taken, no problem!

I think your points are quite valid, and I think you have completely
convinced me that we should get version control working first; then
maybe add the storing of diffs. Applying 100 patch sets to generate
version N-100 may well get to be too expensive. It might be best (in
the long run) to save small changes as diffs, but save the whole page
when large changes are made.

Not to worry, I'm still planning on working on the version control
stuff, but since this is going to involve changes to the database API,
I've sort of been waiting for a consensus to develop on how that's
going to go.

Jeff
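A minimal sketch of that hybrid storage scheme, using the optional
xdiff PECL extension; the record layout and the one-third threshold are
illustrative assumptions, not anything from PhpWiki.

---Snip---
<?php
// Store a full snapshot when the change is large, a diff when it is
// small. The very first version always ends up stored in full, since
// a diff against the empty string is longer than the page itself.
function ArchiveVersion($old, $new) {
    $diff = xdiff_string_diff($old, $new);
    if (strlen($diff) > strlen($new) / 3)
        return array('type' => 'full', 'data' => $new);
    return array('type' => 'diff', 'data' => $diff);
}

// Rebuilding version N means starting from the last full snapshot and
// applying each stored diff in order; this is the "applying 100 patch
// sets" cost Jeff worries about, bounded here by the full snapshots.
function RestoreVersion($records) {
    $text = '';
    foreach ($records as $rec) {
        if ($rec['type'] == 'full')
            $text = $rec['data'];
        else
            $text = xdiff_string_patch($text, $rec['data']);
    }
    return $text;
}
?>
---Snip---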
From: Steve W. <sw...@wc...> - 2000-07-15 16:55:07

On Fri, 14 Jul 2000, Jeff Dairiki wrote:

> Yes, I see your point --- however php-serialized() meta-data is basically
> human-inaccessible anyway. I don't see much point in making the meta-data
> more easily accessible unless it's in some human-friendly format.

Very true.. does this make me a hypocrite? ;-)

> An Internet-messageish format might work well, for example:
>
> ---Snip---
> Author: 12.34.56.78
> Version: 23
> Flags: 0
> Lastmodified: 2000-07-14T21:39:08Z
> Created: 2000-07-02T12:01:22Z
>
> !!!Sample Page
>
> Here's the page contents, with a WikiLink.
> ---Snip---

This works for me. While XML would be keen, "don't let the best be the
enemy of the good." (best=XML, good=simple reliable solution)

Human-readable and human-editable are good things... if the metadata is
in the Zip file it's hard to see, hard to edit. With simple plain text
files you can write ex scripts to manipulate them, Perl scripts to
transform them, change them in Wordpad, etc. (Admittedly I've been
reading "The Pragmatic Programmer" again ;-)

> (If we're devising our own metadata format, I see no reason to separate
> the metadata and file content into two separate files.)
>
> Is this a good idea? Is it worth the effort? (Actually it's probably
> not that much effort...)

One file probably is better... and the mail-message format is simple
and tried-and-true. NNTP also uses a similar format.

> Also, what's the thinking about whether we should include all the archived
> versions of a page (rather than just the most recent) in the ZIP?
>
> I.e.: do we want to be able to:
>  1) Make a snapshot of the complete state of the database (all versions
>     of all pages)?
>  2) Make a snapshot of the current state of the Wiki (most recent version
>     of all pages)?
>  3) Have the option to do either 1 or 2?
>
> If you chose 2 or 3, a secondary question is: what are the semantics of
> "restoring" from a type 2 snapshot? Some choices:
>
>  A) Wipe the entire wiki, reinitialize from the snapshot.
>     o Archived pages are lost.
>
>  B) Essentially edit each page in the wiki so that it coincides with
>     the page in the snapshot:
>     o Resulting page version number won't necessarily agree with snapshot.
>     o Lastmodified date should probably be set to time of restore,
>       rather than the time in the snapshot.
>     o Current (pre-restore) version of the page gets archived?

This parallels the question "How do you restore a project to a
particular release tag (in CVS)?"

I suppose in an ideal world, you get a nice wizard box that steps you
through all the choices for getting a snapshot, from the "whole thing"
to "just the pages, no metadata." But in a practical sense, the thing
that concerns me is having a simple way to make backup copies, and a
way to port from one Wiki to another.

Anyway, my suggestion is: do the simplest thing first, which in fact
you've already done since I can make Zip files of PhpWiki.. your choice
of what comes next! I favor #3 and #2 in that order. Restoring a Wiki
should work through the InsertPage() interface, so unzipping and
loading a Wiki would not overwrite existing pages but move them to an
archive.

> In other news: the bug bit me, so I've started working on a modularized,
> OOPified version of wiki_transform and GeneratePage() (a la Arno's
> suggestions). When I get it to where I'm happy with it I'll post it here
> for comments before CVSing it.

Excellent! You know, CVS has the ability to allow you to develop your
own private experimental "branch" and tinker away w/o losing the
benefits and freedom CVS provide. I haven't tried it myself yet but if
you used this approach, others could check out your work and test it.

cheers!
sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
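A minimal sketch of the dump side of that format; the $pagehash keys
mirror Jeff's example headers, and the function name is made up for
illustration. The parse side is the same split-on-blank-line logic
shown in the replay sketch earlier in the thread.

---Snip---
<?php
// Serialize a page as RFC-822-ish headers, a blank line, then the raw
// wiki markup, per the format Jeff proposed.
function MessageizePage($pagehash) {
    $out  = 'Author: '       . $pagehash['author']  . "\n";
    $out .= 'Version: '      . $pagehash['version'] . "\n";
    $out .= 'Flags: '        . $pagehash['flags']   . "\n";
    $out .= 'Lastmodified: '
          . gmdate('Y-m-d\TH:i:s\Z', $pagehash['lastmodified']) . "\n";
    $out .= 'Created: '
          . gmdate('Y-m-d\TH:i:s\Z', $pagehash['created']) . "\n";
    return $out . "\n" . $pagehash['content'];
}
?>
---Snip---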
From: Steve W. <sw...@wc...> - 2000-07-15 16:01:40

A while back, I mentioned that I had an epiphany of sorts while I was
on jury duty... after I explain it, you won't think it's so remarkable
but for me it was a combination of the "Aha!" experience, and "Doh! Why
didn't I see that before?"

I was sitting in the waiting room of the Supreme Court Building here in
Manhattan, which is filled with lovely WPA paintings of the city. I was
reading "Practical Internet Groupware" by Jon Udell, which is about
using SMTP, HTTP and NNTP in combination, across the intranet, extranet
and internet spaces. He goes on at length about using private NNTP
servers and posting in HTML, since Netscape Communicator and IE can
read/write HTML in newsgroups and mail (a powerful combination).

I stopped and thought about a problem we had with the Wikis at the New
York Times. My project leader Noreen was really enthusiastic about
using the Wiki for project documentation and she was diligent in
editing, adding and updating. But as more people used the Wiki it got
harder to keep track of pages, what was out of date, who wrote what,
and so on. In particular we needed a form of version control. And I
thought, if Noreen edits a page and she feels it's complete and up to
date, she could then post it to a private (intranet) news server and
this would be a cheap form of version control. Once it's on the news
server only she (or an admin) could "cancel" the post.

And that's when it struck me that if she posted the page to a
newsgroup, that this was the same thing as doing a "commit" in CVS. And
that's when I had my moment of "Aha! | Doh!"

A while back, we (this list) started discussing archive issues and
version control, and I put off this post for a long time. But what I
wanted to say is: with source code and CVS, you make a lot of little
edits, test it, and then commit your changes. With a Wiki, you make
lots of little edits, test it (in the browser) and ... well, you're
done. But we could add a "commit" feature and that would put a new
version in the archive. This way, I could make an edit to FrontPage,
commit it, edit it again a few months later and commit that, and I
have two versions now. (Not possible under the current arrangement,
unless I came from two different URLs.)

How many versions do we want to make available to the user? Apparently,
all of them:

"Though malicious damage to ?CoWeb pages was infrequent, accidental
damage was more common. A frequent cause was users overwriting one
another's changes during heated contributions to a single page.

"From the very beginning, ?CoWeb stored all versions of a page. Early
users then made administrator requests to get pages restored. Later
versions of PWS ?CoWeb allowed users to access the last three versions
of a page on-line, but even that proved insufficient. The latest
version of ?CoWeb allows access to all previous versions of the page."

(from the forthcoming book "The Wiki Way," Addison-Wesley Press:
http://leuf.net/cgi/wikiway?SwikiGoesEdu. CoWeb is the Squeak Wiki,
formerly Swiki, at Georgia Tech.)

This is my vision for version control for PhpWiki: the ability to
commit changes; the ability to retrieve any previous version. Not too
difficult.

At this point I wanted to apologize to Jeff for what I think might have
been a miscommunication. When we had this thread I posed a few
questions to Jeff about doing diffs between versions: was storage an
issue? Is doing a diff between version 2 and version 99 hard/expensive?

Jeff, I think I might have scared you or intimidated you out of doing
what you were working on... storing the diffs of pages. If I did I'm
sorry. I was asking questions so we could discuss the possibilities,
not to question your objectives. I'd like to revisit that discussion
again.

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Steve W. <sw...@wc...> - 2000-07-15 15:27:42

On Sat, 15 Jul 2000, Arno Hollosi wrote:

> > That said, if you want something in 1.1.7 that isn't there now speak up;
>
> Maybe Jeff should clear up the ZIP issue first.
> As soon as 1.1.7 is out we can start TheBigRefactoringOfPhpWiki.

I've tried it and it works... you get a dump of all existing pages,
sans metadata, plus the other dumping features work reasonably well
(for a beta release), but I'll wait until tomorrow to see if Jeff wants
to add something more. (Or sooner if you reply now, Jeff! :-)

> A more rigorous testing of phpwiki would benefit us all :o)
> Maybe I just write a small perl script using wget and netcat.

Yeah, testing is always an issue. I don't know of anything that does
what we need; haven't looked, though.

> E.g. you have a list of "10 steps to do this&that"
> So you have a numbered list for those steps (level 1).
> Point 1 checks if everything is here "* tool x, * tool y, * tool z"
> The tools don't have any order - so I'd like to use simple bulleted lists.
> Voila - bulleted list inside numbered list.

What came to mind, reading your example, is that it's easy to switch
back and forth between UL and OL while editing (that is, if the user
changes their mind). For your example it would suffice to do:

# point one
** item one
** item two
[...]
** item ten
# point two
# point three
[...]
# point ten

But I'm splitting hairs here... it's just as simple to do mixed [#*]
anyway, and we're in agreement. Whether my example works is an issue
of testing :-)

> I'm not sure I'm for a 100% oop approach - I'll give this some more thought.

When I set out in December, I decided on a non-OOP approach at the time
because I didn't fully understand the Wiki problem yet. I read once
that you don't really understand a problem until you've solved it once,
and I would like to think I've solved it now... There have been times
in the last few months where I found myself wanting a DB object and a
page object, because objects are easier to work with than a data
structure and function library (sometimes).

OTOH, good OO design is *hard*. I've had to work with some really *bad*
OO designs (in Perl, at the NYT) and it can be a total nightmare. This
is the single most compelling counterargument I have: PhpWiki is a
scripted Web application that is completely procedural in design,
fairly small, and easy to maintain. For a lot of users, webmasters,
sysadmins, and programmers that means it's easy to tinker with. Not a
lot of people know OOP and OOD.

I think I am definitely in favor of a DB class, somewhat in favor of a
page class, and I'm not sure about anything more after that. I'd like
to see some proposed classes and we can discuss it. (I wonder if we
can model the classes in the Wiki?)

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Arno H. <aho...@in...> - 2000-07-15 08:14:51

> That said, if you want something in 1.1.7 that isn't there now speak up;
> otherwise I will release 1.1.7 over the weekend.

No special wishes this time :o)
Maybe Jeff should clear up the ZIP issue first.
As soon as 1.1.7 is out we can start TheBigRefactoringOfPhpWiki.

Because of 1.1.6 I had a look into testing suites for web applications.
So far I haven't found anything proper. I found tools that check for
links and stuff. But what we'd need is a tool that requests specific
URLs, does POSTs as well, and then checks not only that a page is
returned, but verifies the returned HTML as well (simple diff or by
using regexp rules). Does anyone know of a beast like that? A more
rigorous testing of phpwiki would benefit us all :o) Maybe I just write
a small perl script using wget and netcat.

> > [...lists...]
> Yes, but I'm confused. Is there a reason to allow arbitrary mixing?

E.g. you have a list of "10 steps to do this&that"
So you have a numbered list for those steps (level 1).
Point 1 checks if everything is here "* tool x, * tool y, * tool z"
The tools don't have any order - so I'd like to use simple bulleted lists.
Voila - bulleted list inside numbered list.

> > About /lib vs. /admin:
> > I think /lib is cleaner and reduces the clutter in the main
>
> Hmm. I guess I'm in the minority on this one!

As I said: it isn't urgent. We have more important things to do.
In the long run /lib seems more appropriate.

> * moving to PATH_INFO. Ari has code we can use. This involves more than
>   you think because we get things like $copy, $edit, $info for free now
>   from PHP and using PATH_INFO means we'll have to roll them by hand;

Shouldn't be too hard.

> * refactoring the database interface, which will start with the DBM
>   changes and then involve a lot of search/replace of variable names
>   (renaming all instances of $ArchiveDataBase to $ArchiveTable, or
>   something better, for example);

I think this should go into 1.1.8.
I'm not sure I'm for a 100% OOP approach - I'll give this some more thought.

/Arno
From: Steve W. <sw...@wc...> - 2000-07-15 02:28:37

I've tarred the source code to the NBTSC PhpWiki with Ari's permission.
I combed through it carefully to remove anything incriminating (logins,
passwords, easter eggs). Available here:

ftp://phpwiki.sourceforge.net/pub/phpwiki/nbtsc.phpwiki.tar.gz

It's based on the 1.0 series... but I think you'll find the changes
interesting.

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-14 22:38:32

In message <Pin...@bo...>, Steve Wainstead writes:

>On Fri, 14 Jul 2000, Arno Hollosi wrote:
>
>> Hm, I don't think using special fields within the ZIP is a good idea.
>> That way, if someone should touch the ZIP for whatever reason, that
>> data will be lost. I suggest using an extra file, or a meta-file for
>> every page-file.
>
>I didn't catch this before, but now I do, and I agree with Arno.. the less
>we rely on proprietary solutions the better. I'd rather hack a loader to
>read two files per page than suffer Jeff with mucking with Zip files too
>much. It hides the information from the user as well (a separate metadata
>file can be edited in a text editor, can be cat'd, grep'd and so on).
>Putting it in the Zip file means it's almost human-inaccessible.

Yes, I see your point --- however php-serialized() meta-data is
basically human-inaccessible anyway. I don't see much point in making
the meta-data more easily accessible unless it's in some human-friendly
format.

So, I think we'd need to come up with some sort of metadata file format
(XML comes to mind, but as lots of PHPs don't have XML support compiled
in, something simpler is probably called for.) An Internet-messageish
format might work well, for example:

---Snip---
Author: 12.34.56.78
Version: 23
Flags: 0
Lastmodified: 2000-07-14T21:39:08Z
Created: 2000-07-02T12:01:22Z

!!!Sample Page

Here's the page contents, with a WikiLink.
---Snip---

(If we're devising our own metadata format, I see no reason to separate
the metadata and file content into two separate files.)

Is this a good idea? Is it worth the effort? (Actually it's probably
not that much effort...)

Also, what's the thinking about whether we should include all the
archived versions of a page (rather than just the most recent) in the
ZIP?

I.e.: do we want to be able to:
 1) Make a snapshot of the complete state of the database (all versions
    of all pages)?
 2) Make a snapshot of the current state of the Wiki (most recent version
    of all pages)?
 3) Have the option to do either 1 or 2?

If you chose 2 or 3, a secondary question is: what are the semantics of
"restoring" from a type 2 snapshot? Some choices:

 A) Wipe the entire wiki, reinitialize from the snapshot.
    o Archived pages are lost.

 B) Essentially edit each page in the wiki so that it coincides with
    the page in the snapshot:
    o Resulting page version number won't necessarily agree with snapshot.
    o Lastmodified date should probably be set to time of restore,
      rather than the time in the snapshot.
    o Current (pre-restore) version of the page gets archived?

Jeff

PS In other news: the bug bit me, so I've started working on a
modularized, OOPified version of wiki_transform and GeneratePage() (a
la Arno's suggestions). When I get it to where I'm happy with it I'll
post it here for comments before CVSing it.
From: Steve W. <sw...@wc...> - 2000-07-14 18:03:55

Welcome back! :-)

On Fri, 14 Jul 2000, Arno Hollosi wrote:

> > o Page meta-data (author, version, etc...) is saved in a special custom
> >   header field in the zip file. This information is not accessible via
>
> Hm, I don't think using special fields within the ZIP is a good idea.
> That way, if someone should touch the ZIP for whatever reason, that
> data will be lost. I suggest using an extra file, or a meta-file for
> every page-file.

I didn't catch this before, but now I do, and I agree with Arno.. the
less we rely on proprietary solutions the better. I'd rather hack a
loader to read two files per page than suffer Jeff with mucking with
Zip files too much. It hides the information from the user as well (a
separate metadata file can be edited in a text editor, can be cat'd,
grep'd and so on). Putting it in the Zip file means it's almost
human-inaccessible.

> I would like to be able to mix those two, e.g.
>
> # one
> #* some here
> #* more there
>
> Should be quite easy to do, no?

Yes, but I'm confused. Is there a reason to allow arbitrary mixing?

> About /lib vs. /admin:
> I think /lib is cleaner and reduces the clutter in the main
> directory. But it's not an urgent issue.

Hmm. I guess I'm in the minority on this one!

> When will 1.1.7 be shipped?
> Where is Ari's code?

Always one step ahead of me! I was waiting for you to return before
calling for 1.1.7. Right now it's stable and I see no reason not to
release 1.1.7 right away -- this will be the Jeff release! It includes
all his cool diff stuff, the zip stuff, your additions to the admin/
files, and the few things I added.

That said, if you want something in 1.1.7 that isn't there now speak
up; otherwise I will release 1.1.7 over the weekend.

I invited Ari to join us but never heard back. If there's something
specific you want to see in the nbtsc.org Wiki's source I have access
to it. Ari wanted to clean up the code and publicly release it, but
I'll ask if I can make a tarball of it as-is since we are all
professionals and we all do strange things in the privacy of our own
servers, and Ari has nothing to be ashamed of :-)

Things we've discussed, but have not reached consensus on (or maybe we
did, phpwiki-talk has been extremely active) include:

* moving to PATH_INFO. Ari has code we can use. This involves more than
  you think because we get things like $copy, $edit, $info for free now
  from PHP, and using PATH_INFO means we'll have to roll them by hand;
* moving files to lib/ which has the benefit of allowing better
  security like Jeff set up;
* refactoring the database interface, which will start with the DBM
  changes and then involve a lot of search/replace of variable names
  (renaming all instances of $ArchiveDataBase to $ArchiveTable, or
  something better, for example);
* possibly moving to an OO approach to the database interface after that;
* I'm sure I missed something; and there are a number of features we
  discussed a couple of months ago that I would have to search the mail
  for, like all the pages we added tables to the database for etc.

However these things can go in 1.1.8 or later; I think we are on track
to release 1.2 in a couple of months (depending on how much time we all
have) and there will probably be two or three releases between now and
then.

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Arno H. <aho...@in...> - 2000-07-14 16:58:09

Hi there, I'm back and trying to catch up with you guys.

> It might be interesting to do a user survey to find out just
> what environments phpwiki is being run in.
> What do you and Arno deal with?

I'm using a vanilla installation from SuSE Linux: Apache+php3+mysql.
MySQL is updated to the latest stable version, which is 3.22.32.

> o Page meta-data (author, version, etc...) is saved in a special custom
>   header field in the zip file. This information is not accessible via
>   any standard zip tools, but I plan on writing an unzipper which can
>   use this information to restore a Wiki from the zip file. (The zip file
>   is (should be) still readable using any unzipper.)

Hm, I don't think using special fields within the ZIP is a good idea.
That way, if someone should touch the ZIP for whatever reason, that
data will be lost. I suggest using an extra file, or a meta-file for
every page-file.

> I want to add more DBM files to add the new functionality we've been
> adding, and I will change the way the DBM files are opened... I want to
> set it up so that we only need one call to OpenDataBase(), and we can do

Good idea.

> I added two new markup rules tonight. I want to do away with the use of
> tabs in the markup language since tabs are too difficult to use in Windows
> browsers. Right now we have:
> * one level, ** two levels, *** three levels
> # one, # two, ## one, ## two

I would like to be able to mix those two, e.g.

# one
#* some here
#** even more here
#** even more there
#* some there
#*# some there . one
#*# some there . two
# two

Should be quite easy to do, no? Use a regexp like "^([#*]*)([#*])" -
the last char determines the list type. The size of \1 plus \2
determines the level. If level or type changes then close the current
list and issue the appropriate HTML tags.

> 2. The line
> [[Link] produces [Link].
> gets munged.

Oddly enough I had already fixed this one a while ago. Apparently some
changes outside wiki_transform invalidated my fix.

> 3. '''''Bold italic''' and italic''

This is tricky. I suggest using the new markup instead:
''__Bold italic__ and italic''

About /lib vs. /admin: I think /lib is cleaner and reduces the clutter
in the main directory. But it's not an urgent issue.

When will 1.1.7 be shipped?
Where is Ari's code?

/Arno
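A minimal sketch of that rule in modern PHP; the real wiki_transform
keeps its own per-line state, so the function name and structure below
are illustrative only.

---Snip---
<?php
// Nested/mixed list transform following Arno's rule: for each line,
// match ^([#*]*)([#*]) -- the last marker char picks the list type
// (# => <ol>, * => <ul>) and the marker-prefix length picks the level.
function TransformLists($lines) {
    $stack = array();                     // open list types, innermost last
    $out = array();
    foreach ($lines as $line) {
        if (preg_match('/^([#*]*)([#*])\s*(.*)$/', $line, $m)) {
            $level = strlen($m[1]) + 1;
            $type  = ($m[2] == '#') ? 'ol' : 'ul';
            // Close lists that are too deep or of the wrong type.
            while (count($stack) > $level
                   || (count($stack) == $level && end($stack) != $type)) {
                $out[] = '</' . array_pop($stack) . '>';
            }
            // Open lists until we reach the requested level.
            while (count($stack) < $level) {
                $stack[] = $type;
                $out[] = '<' . $type . '>';
            }
            $out[] = '<li>' . $m[3] . '</li>';
        } else {
            // Non-list line: close everything that is still open.
            while ($stack) $out[] = '</' . array_pop($stack) . '>';
            $out[] = $line;
        }
    }
    while ($stack) $out[] = '</' . array_pop($stack) . '>';
    return implode("\n", $out);
}
?>
---Snip---

Feeding Arno's "# one / #* some here / ..." example through this yields
nested <ol>/<ul> blocks, closing and reopening lists exactly where the
marker prefix changes type or length.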
From: Steve W. <sw...@wc...> - 2000-07-13 03:49:54

I updated the site today (http://phpwiki.sourceforge.net/phpwiki/) and
tested the zip feature, which works great!

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-12 19:00:35

>That's weird, but the proof is in the pudding. I found that this works:
>
>Use [FindPage] to search [HammondWiki].

Yes, I've discovered that too. I've just fixed the "[[Link] [Link]" bug
and the "HammondWiki" bug. It's in the CVS.

Jeff.
From: Steve W. <sw...@wc...> - 2000-07-12 15:26:06

That's weird, but the proof is in the pudding. I found that this works:

Use [FindPage] to search [HammondWiki].

sw

On Tue, 11 Jul 2000, Jeff Dairiki wrote:

> > I'm trying to duplicate this, can you reproduce it on
> > http://phpwiki.sourceforge.net/phpwki/ ?
>
> No, I don't think I can. The URL for the wiki has to have a BumpyWord
> in it.
>
> All the bugs in my note are demonstrated on
>
> http://www.dairiki.org/HammondWiki/index.php3?PhpWikiBugs
>
> which is running an unmodified version of the latest CVS version
> of wiki_transform.php3 (1.12).
>
> Jeff
>
> _______________________________________________
> Phpwiki-talk mailing list
> Php...@li...
> http://lists.sourceforge.net/mailman/listinfo/phpwiki-talk

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Jeff D. <da...@da...> - 2000-07-12 05:14:28

>I'm trying to duplicate this, can you reproduce it on
>http://phpwiki.sourceforge.net/phpwki/ ?

No, I don't think I can. The URL for the wiki has to have a BumpyWord
in it.

All the bugs in my note are demonstrated on

http://www.dairiki.org/HammondWiki/index.php3?PhpWikiBugs

which is running an unmodified version of the latest CVS version
of wiki_transform.php3 (1.12).

Jeff