From: Steve W. <sw...@wc...> - 2000-11-21 23:07:02

Hi Arno,

This is a bit off-topic, but I have been thinking about the problem on SourceForge with the query:

    select distinct hitcount.pagename, hitcount.hits
    from wikilinks, hitcount
    where (wikilinks.frompage=hitcount.pagename and wikilinks.topage='AddingPages')
       or (wikilinks.topage=hitcount.pagename and wikilinks.frompage='AddingPages')
    order by hitcount.hits desc, hitcount.pagename;

which returns that error about the size being too large. It's not a problem since you set the variable in MySQL, but I've been trying to think of a way around the Cartesian join, if one indeed occurs.

I was reading the docs for PostgreSQL this weekend and it supports UNION and INTERSECT. I think UNION might have worked in this case, but unfortunately MySQL doesn't support it. We could have done:

    select topage from wikilinks where frompage='AddingPages'
    union
    select frompage from wikilinks where topage='AddingPages'

to get all the page names that link to AddingPages. That could be an inner select, and then we do:

    select pagename, hits from hitcount
    where pagename in (
        select topage from wikilinks where frompage='AddingPages'
        union
        select frompage from wikilinks where topage='AddingPages'
    )
    order by hits desc, pagename;

I think subselects are more efficient, if the database knows how to optimize for them. Just thinking out loud,

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
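For what it's worth, here is one way the UNION could have been emulated on the MySQL of that era, which lacked UNION but supported TEMPORARY tables from 3.23 on. This is only a sketch against the wikilinks/hitcount schema quoted above; the scratch-table name and connection parameters are made up for illustration:

    <?php
    // Sketch: emulate the UNION with a TEMPORARY table (MySQL 3.23+).
    // Connection parameters are placeholders, not the real SF setup.
    $conn = mysql_connect('localhost', 'phpwiki', 'secret');
    mysql_select_db('phpwiki', $conn);

    // Collect both halves of the would-be UNION into one scratch table...
    mysql_query("CREATE TEMPORARY TABLE linked (pagename VARCHAR(100))");
    mysql_query("INSERT INTO linked
                 SELECT topage FROM wikilinks WHERE frompage='AddingPages'");
    mysql_query("INSERT INTO linked
                 SELECT frompage FROM wikilinks WHERE topage='AddingPages'");

    // ...then join against hitcount, sidestepping the OR-driven cross join.
    $res = mysql_query("SELECT h.pagename, h.hits
                        FROM hitcount h, linked l
                        WHERE h.pagename = l.pagename
                        ORDER BY h.hits DESC, h.pagename");
    while ($row = mysql_fetch_row($res))
        print "$row[0] ($row[1])\n";
    ?>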
From: Steve W. <sw...@wc...> - 2000-11-17 02:47:10

On Mon, 6 Nov 2000, Arno Hollosi wrote:

> Hi all,
>
> a friend of mine had an interesting idea about how to build a real
> multilingual wiki. What do you think about it? Would something like
> this work?
>
> ------------------
>
> It could be very nice if the Wiki concept was enhanced with
> multi-language abilities, so people can add their translation
> to some pages. But the problem is that if one of the translations
> is modified the others must be....
> I have maybe an idea: create a database of translations of lines
> (the original one, not the displayed one) and not of files.
> So when a page is computed, the translation is done line by line,
> with the untranslated parts left ``as is''.
> So the text file is stored in one or more languages.
> All the possible translations of lines are stored.
> The translation to a language will translate the text file
> into the target language; untranslatable lines will stay
> untranslated.

The mSQL implementation is partly this way now... lines from the page are stored in individual rows of a page table. (This is because mSQL cannot search text blob types.) So it wouldn't be hard to hack in; probably just add a new column indicating what language the line is in, defaulting to the local language.

> There is only one thing to add in the user interface,
> it is the ``translate page'' command.

This might be the tricky part to implement though...

> It could be usable, just an example:
> - Somebody writes an English page
>   (the text file is in English)
> - Somebody views it as French (but it is in English on the screen
>   because it is untranslated)
> - He translates the page to write it in French WITHOUT
>   changing the line breaking (with a form with many ENTRIES).

If I follow, at runtime a form is generated with a TEXTAREA for every line in the page, along with a second TEXTAREA for translating the line.

>   (The text file could be in English or French, it doesn't matter)
> - The English page is changed: for the French reader
>   there are some lines of English in the French text
>   (the text file is in English because that is the
>   language it was edited in)
> - A Spanish translation is done
>   (the text file could be in English or Spanish. It could
>   also be in French with some lines in English)
> - The French page is edited by adding some lines
>   (the text file is in French and English)
> - The Spanish and English readers see some text in French
> - The French page is edited to remove some lines.
>   The lines vanish in the other languages.
>   (the text file is in French and English)
>
> The point is that the translator does it line by line and that
> lines are not too short.
> A plus is that translation is done for all the files;
> it is not linked to one file.

It's interesting to think that all language versions of the pages are stored together, but in the long run keeping all the versions in sync would be hard. However, if the Wiki were not too big, and the focus were on only two languages, it might not be that hard.

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
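The "add a language column" hack Steve describes might look roughly like this. The table and column names are invented for illustration (they do not match the real schema), and MySQL functions are used for concreteness even though the message is about the mSQL backend; assume a connection is already open as in the earlier sketch:

    <?php
    // Hypothetical per-line page table with a language tag.
    mysql_query("CREATE TABLE wikilines (
                     pagename VARCHAR(100) NOT NULL,
                     lineno   INT          NOT NULL,
                     lang     CHAR(2)      NOT NULL DEFAULT 'en',
                     line     TEXT,
                     PRIMARY KEY (pagename, lineno, lang)
                 )");

    // Render line $i of a page in the requested language, falling
    // back to the stored original when no translation exists yet.
    function fetch_line($page, $i, $want = 'fr', $fallback = 'en') {
        $q = mysql_query("SELECT line FROM wikilines
                          WHERE pagename='$page' AND lineno=$i
                            AND lang='$want'");
        if (mysql_num_rows($q) == 0)
            $q = mysql_query("SELECT line FROM wikilines
                              WHERE pagename='$page' AND lineno=$i
                                AND lang='$fallback'");
        $row = mysql_fetch_row($q);
        return $row[0];
    }
    ?>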
From: Steve W. <sw...@wc...> - 2000-11-16 22:01:41

Today SF closed all my bug reports without resolving them, though the MySQL server is still unstable/unusable. I have filed a newer, more detailed report in case there was some confusion, plus a link to a test site that uses MySQL on SF so they could see the problems themselves. (The live site is still running on DBM files.)

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain

---------- Forwarded message ----------
Date: Thu, 16 Nov 2000 13:34:21 -0800
From: no...@so...
To: sw...@wc..., no...@so..., st...@so...
Subject: [Bug #122627] MySQL server still unusable

Bug #122627, was updated on 2000-Nov-16 13:34
Here is a current snapshot of the bug.

Project: SourceForge
Category: Project Database Server
Status: Open
Resolution: None
Bug Group: PHP Programming
Priority: 5
Summary: MySQL server still unusable

Details: Thanks for closing all my bugs, but the problem was not resolved. Here is a link to demonstrate the problem:

http://phpwiki.sourceforge.net/test/phpwiki/

I get this error:

----------------------------------
Warning: MySQL Connection Failed: Lost connection to MySQL server during query in /home/groups/phpwiki/htdocs/test/phpwiki/lib/mysql.php on line 32
WikiFatalError
Cannot establish connection to database, giving up.
MySQL error:
----------------------------------

That is not the only error, though; when it does connect to MySQL I get a missing-file error, as previously reported:

----------------------------------
Inserting page AddingPages, version 1 from text file
WikiFatalError
Error writing page 'AddingPages'
MySQL error: Can't find file: './phpwiki/wiki.frm' (errno: 24)
----------------------------------

Also I cannot connect from the command line on orbital:

----------------------------------
[wainstead@orbital lib]$ !mysql
mysql -h moby.p.sourceforge.net -u phpwiki -pnotshown phpwiki
ERROR 2013: ^GLost connection to MySQL server during query
[wainstead@orbital lib]$
----------------------------------

These problems have been going on since last Friday. There were no problems in the months before. Something is wrong with the MySQL server installation.

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=122627&group_id=1
From: Arno H. <aho...@in...> - 2000-11-16 16:01:55

Jeff,

> $pagehash["content"] = preg_split('/[ \t\r]*\n/', chop($content));
>
> The problem seems to be that preg_split returns an empty array when the
> regex does not match anything in the string.

I cannot reproduce this on my machine. My PHP version (4.00) returns an array with one entry for the first line. Which PHP version are you using?

/Arno
From: Jeff M. <jef...@sy...> - 2000-11-16 11:17:11

I've just set up phpwiki 1.1.9 and it seems to be working really well apart from one thing. When pages are entered on a single line they don't get saved to the DB properly; instead it just saves an empty string. I've tracked this down to line 85 in savepage.php, which is the following:

    $pagehash["content"] = preg_split('/[ \t\r]*\n/', chop($content));

The problem seems to be that preg_split returns an empty array when the regex does not match anything in the string. I've added the following just after this line, and it solves the problem by checking whether the array is empty and, if it is, setting it to the unsplit content:

    if( count($pagehash["content"])<1){
        $pagehash["content"][1]=$content;
    }

Do you see this as a reasonable fix, or is there a better approach?
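A slightly tidier variant of the same guard, assigning via array() so the fallback line lands at index 0 and the numbering stays contiguous; this is a sketch, not the fix that was actually committed:

    $pagehash["content"] = preg_split('/[ \t\r]*\n/', chop($content));
    // Some PHP versions reportedly return an empty array when the page
    // is a single line with no trailing newline; fall back to the raw text.
    if (count($pagehash["content"]) < 1) {
        $pagehash["content"] = array($content);
    }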
From: Steve W. <sw...@wc...> - 2000-11-14 23:44:01

For your viewing pleasure, here is the bug report I've filed. At this point I've switched the wiki to DBM files, since there seems to be no immediate hope of resolution. I'm not too discouraged, since we share a database server with a lot of other development projects, and a lot of bad code is probably hitting MySQL.

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain

---------- Forwarded message ----------
Date: Tue, 14 Nov 2000 15:39:11 -0800
From: no...@so...
To: sw...@wc..., no...@so...
Subject: [Bug #122321] Web server cannot connect to database

Bug #122321, was updated on 2000-Nov-13 08:51
Here is a current snapshot of the bug.

Project: SourceForge
Category: Project Database Server
Status: Open
Resolution: None
Bug Group: PHP Programming
Priority: 5
Summary: Web server cannot connect to database

Details: Hi, for the last few days the MySQL server has been mostly unavailable for my project (phpwiki). We print out the error whenever there is some database problem, and in this case the web server is simply not connecting to MySQL.

http://phpwiki.sourceforge.net/phpwiki/

Also we cannot connect from the command line on orbital:

[wainstead@orbital phpwiki]$ !727
mysql -h moby.p.sourceforge.net -u phpwiki -pnotshown phpwiki
ERROR 2013: ^GLost connection to MySQL server during query
[wainstead@orbital phpwiki]$

thanks guys... great article in PHPbuilder on MySQL/PostgreSQL.

Follow-Ups:

Date: 2000-Nov-13 12:08  By: Wainstead
Comment: This is up again. Thanks.
-------------------------------------------------------
Date: 2000-Nov-14 08:09  By: Wainstead
Comment: This continues to be a major problem.
-------------------------------------------------------
Date: 2000-Nov-14 15:39  By: Wainstead
Comment: Since the database is continually not available, thus making our site unavailable, I've switched to DBM files for the data store.
-------------------------------------------------------

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=122321&group_id=1
From: Steve W. <sw...@wc...> - 2000-11-13 20:03:33

After three service requests with the SourceForge staff, they finally have shell access to orbital up again; the MySQL server appears stable again; and I was able to connect with the mysql client and repair the problem with the wikiscore table. It looks like an internal MySQL file for the wikiscore table was lost. I dropped the table and recreated it, which finally solved the errors we were getting.

There is some mention of the problems they've had recently with MySQL in an article comparing MySQL to PostgreSQL: http://www.phpbuilder.com/

cheers
sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Markus G. <mg...@bi...> - 2000-11-13 14:14:00

Hello Steve,

I discussed the German translation process with Arno and we decided that I will help Arno translate the full phpwiki pages. I hope to get most of the things translated this week. By the way, the problems with the German wiki are solved, thanks to Arno and you.

- Markus

--
BITPlan GmbH smart solutions - Meerbuscher Str. 58-60 - 40670 Meerbusch-Osterath
Tel +49 2159 5236-0 - Fax +49 2159 5236-10 - mg...@bi... - http://www.bitplan.de
Home Office: Markus Guske - Erftstrasse 17 - D-41564 Kaarst
Tel +49 173 946 1880 - Fax +49 2131 769-195 - mg...@gu...
Would you like to join the team, too? http://www.bitplan.com/de/Jobs.html
From: Arno H. <aho...@in...> - 2000-11-13 11:31:48

> Arno is indeed working on a German translation. You can see the progress
> either in the CVS repository or pull a nightly build off Sourceforge.
> I'm sure he'd like some help, if he hasn't finished yet.

I am not too eager to translate those pages myself, so any help is greatly appreciated. As Steve said: check the CVS for the latest version.

/Arno
From: Steve W. <sw...@wc...> - 2000-11-12 19:22:38

Hi Markus,

Arno is indeed working on a German translation. You can see the progress either in the CVS repository (http://cvs.sourceforge.net/cgi-bin/cvsweb.cgi/phpwiki/locale/de/?cvsroot=phpwiki&sortby=date) or pull a nightly build off Sourceforge (http://phpwiki.sourceforge.net/nightly/phpwiki.nightly.tar.gz). I'm sure he'd like some help, if he hasn't finished yet.

thx
sw

On Sun, 12 Nov 2000, Markus Guske wrote:

> Hello,
>
> is anybody working on a full German translation of the phpwiki initial pages?
>
> Thanks in advance,
>
> - Markus
> --
> BITPlan GmbH smart solutions - Meerbuscher Str. 58-60 - 40670 Meerbusch-Osterath
> Tel +49 2159 5236-0 - Fax +49 2159 5236-10 - mg...@bi... - http://www.bitplan.de
>
> Home Office: Markus Guske - Erftstrasse 17 - D-41564 Kaarst
> Tel +49 173 946 1880 - Fax +49 2131 769-195 - mg...@gu...
>
> Would you like to join the team, too? http://www.bitplan.com/de/Jobs.html
>
> _______________________________________________
> Phpwiki-talk mailing list
> Php...@li...
> http://lists.sourceforge.net/mailman/listinfo/phpwiki-talk

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: <Mar...@t-...> - 2000-11-12 18:42:10

Hello,

is anybody working on a full German translation of the phpwiki initial pages?

Thanks in advance,

- Markus

--
BITPlan GmbH smart solutions - Meerbuscher Str. 58-60 - 40670 Meerbusch-Osterath
Tel +49 2159 5236-0 - Fax +49 2159 5236-10 - mg...@bi... - http://www.bitplan.de
Home Office: Markus Guske - Erftstrasse 17 - D-41564 Kaarst
Tel +49 173 946 1880 - Fax +49 2131 769-195 - mg...@gu...
Would you like to join the team, too? http://www.bitplan.com/de/Jobs.html
From: Steve W. <sw...@wc...> - 2000-11-09 04:23:29

On Tue, 7 Nov 2000, Arno Hollosi wrote:

> I think we should rethink the wikiscore table.
>
> While the metric used may be quite useful in a web-like environment, within
> a wiki it is questionable. The problem is that people usually sign their
> contributions with their WikiName, and thus users' wiki homepages get a
> very high wikiscore. And generally speaking homepages are not that
> important. So the wikiscore metric fails.
>
> For more info see MeatballWiki
> http://www.usemod.com/cgi-bin/mb.pl?IndexingScheme
> and look at MostReferencedPages / MostLinkedPages / ShortestPathPages

(The following started as a reply but becomes more and more pedantic as it goes, so I apologize for the tone... but sometimes you work these things out in your head as you write.)

This relates to the Semantic Web article on the O'Reilly Network (http://www.xml.com/pub/2000/11/01/semanticweb/index.html?wwwrrr_rss). The problem is Wiki does not distinguish between pages: all pages are the same and have equal meaning, more or less. We are trying to make Wiki (the program, the machine) give meaning to pages created by humans. So we try graphing problems: how many pages does this page link to? How many pages link to this one? How many times has this page been edited? When was this page last edited? How long ago was this page created? How many times has this page been viewed? How many degrees of separation are there between this page and that page? How many link paths are there from this page to that page (if they do not link directly)? What is the shortest path from this page to that page? What pages have names similar to this one?

But the machine cannot know that ArnoHollosi is a personal page while WhyWikiWorks is a discussion, that DesignPatterns describes an abstract concept and DesignPatternsBook describes a textbook published in 1994. So these approaches (and several more listed by Nicolas Roberts) have shortcomings. On c2.com they introduced CategoryDesignPatterns and TopicExtremeProgramming (the Category- and Topic- prefixes) to get around this problem; in the large, then, any Wiki needs a WikiLibrarian, someone who sorts, labels and classifies information. An information architect. With a Wiki, everyone who edits pages has to be a WikiLibrarian; i.e. it's a community effort.

So we come back to another problem (really, an interesting aspect/feature, but also a usability problem) of a WikiWikiWeb: a lot of the organization of the information is by social contract. People agree to social conventions like adding a Category- link at the bottom of their page. I read an article on Lotus Notes recently that reinforced a perception I've had for the last year, ever since I read Jon Udell's "Practical Internet Groupware": it's really hard to get people to adopt new social conventions, or in the case of Notes, learn new ways of doing things. It took email a relatively long time to make it into the workplace because people were used to phones, voice mail, faxes, post-it notes and so on. (Granted, email penetrated the workplace pretty fast for a new technology, but then most businesses today still don't have email. My corner deli doesn't.)

So we can perhaps write a set of guidelines for using a Wiki, include it in pgsrc/, and trust the universe. We can provide a certain number of clues to the user through hitcount, wikiscore and so on. But I think our current model is limited to just that (and any groupware system, ultimately, is too).

I remember reading Steven Levy's "Hackers: Heroes of the Computer Revolution," and one of the MIT hackers from the 1960s later remarks that he couldn't believe what they were trying to do on the hardware of the era; he felt they had been naive about what they could accomplish on a PDP-11. Perhaps we are looking at the limitations of a Wiki as well.

> ShortestPath seems to be most interesting but too expensive to compute.
> (unless someone comes up with a good incremental algorithm)

If I read the page correctly it's the Traveling Salesman problem :-)

> Thus I suggest we change the following for 1.2.0:
> * drop wikiscore table
> * related pages are reduced to incoming/outgoing links which are sorted by
>   hitcount.

Now that I've read your code (finally!), and really understand what it's doing, I think we might want to keep it after all. All the count (the number in the parentheses) is, is:

    select the incoming links for this page and rank them by how many pages link to it
    select the outgoing links for this page and rank them by how many pages link to it

but then again, I might be confused once again. I confess that ever since you added these, I have to stop and reason out what it is they are doing... which leads to your next question:

> Btw, some people find the terms "incoming/outgoing links" confusing. Is
> there a better way to describe these?

They confuse me too :-) Let me see if I get it right:

----
Five pages that link to this one, that are themselves linked to the most by other pages:

Five pages this links to, ranked by how many pages link to those pages:

Five most popular pages that either link to this one or are linked to by this one:
----

I think the trouble is we don't know why ErikBagfors links to the PhpWiki page. We can only guess. I like the idea more and more of conventions like Category- and Topic-. No one can sort info like a human. We can alternately provide tools to try to help users be good WikiLibrarians, and then the machine can infer meaning from the metadata they provide. That has its own pitfalls, like trying to get everyone to use the same keywords for <META> tags (so search engines can infer from those too).

Rather than think about how to make PhpWiki compute meaningful numbers based on the data, I am going to doodle for a while and think about what information is going to help users find useful information in a Wiki. That's what this is all about, in the end. Right off, a more sophisticated search engine comes to mind. The Meatball Wiki is quite interesting.

cheers
sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
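Arno's reduced proposal (incoming/outgoing links sorted by hitcount) translates into fairly plain SQL. A sketch against the wikilinks/hitcount tables named earlier in the thread, not PhpWiki's actual query; assume an open connection as in the earlier sketches:

    <?php
    // Incoming: pages that link to this one, ranked by their hit counts.
    $incoming = mysql_query(
        "SELECT w.frompage, h.hits
         FROM wikilinks w, hitcount h
         WHERE w.topage = 'AddingPages' AND h.pagename = w.frompage
         ORDER BY h.hits DESC LIMIT 5");

    // Outgoing: pages this one links to, ranked the same way.
    $outgoing = mysql_query(
        "SELECT w.topage, h.hits
         FROM wikilinks w, hitcount h
         WHERE w.frompage = 'AddingPages' AND h.pagename = w.topage
         ORDER BY h.hits DESC LIMIT 5");
    ?>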
From: Arno H. <aho...@in...> - 2000-11-08 16:04:25

Hi there,

I have committed the new admin structure (the original idea for this is from Jeff). Admins just use admin.php as the entry point to the wiki. Please test it out and post your findings here. Note that there are changes in /templates and /pgsrc as well. I'd appreciate it if translators send me the necessary updates.

If you are using MySQL, you can remove pages now. Admin is not yet complete -- dumpHTML, convert1.0, and rebuildDBM are missing.

There are changes to the template syntax of ###IF### too. I'll update the template README soon.

/Arno
From: Arno H. <aho...@in...> - 2000-11-07 11:06:52

I think we should rethink the wikiscore table.

While the metric used may be quite useful in a web-like environment, within a wiki it is questionable. The problem is that people usually sign their contributions with their WikiName, and thus users' wiki homepages get a very high wikiscore. And generally speaking, homepages are not that important. So the wikiscore metric fails.

For more info see MeatballWiki http://www.usemod.com/cgi-bin/mb.pl?IndexingScheme and look at MostReferencedPages / MostLinkedPages / ShortestPathPages.

ShortestPath seems to be the most interesting but too expensive to compute (unless someone comes up with a good incremental algorithm).

Thus I suggest we change the following for 1.2.0:

* drop the wikiscore table
* related pages are reduced to incoming/outgoing links, sorted by hitcount.

Btw, some people find the terms "incoming/outgoing links" confusing. Is there a better way to describe these?

/Arno
From: Sandino A. <sa...@sa...> - 2000-11-07 05:10:46

Steve Wainstead wrote:
>
> What would be infinitely better is if we used some kind of generic DB
> library, like Perl's DBI/DBD.

PEAR DB (or any other DB abstraction lib) can be used, but most of the time the DB portability problem resides in the SQL differences between databases, so a DBMS-specific driver is always needed.

--
Sandino Araico Sánchez
Say no to piracy, use free software.
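For a taste of what the PEAR class looks like in use (connection parameters are placeholders, and the wikipages table name is invented; PEAR DB was still young at the time, so details may have shifted):

    <?php
    require_once 'DB.php';

    $db = DB::connect('mysql://wikiuser:secret@localhost/phpwiki');
    if (DB::isError($db))
        die($db->getMessage());

    // The connect/query API is portable; the SQL string itself still
    // has to be, too -- which is exactly Sandino's caveat about dialects.
    $res = $db->query('SELECT pagename FROM wikipages');
    while ($row = $res->fetchRow())
        print "$row[0]\n";
    ?>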
From: Steve W. <sw...@wc...> - 2000-11-07 04:19:07

Hi Arno,

There is a request for more character support right now on http://phpwiki.sourceforge.net:80/phpwiki/index.php?PhpWikiBrainstorm. It's something I think you're more qualified for, since I only use the pithy little American character set. ;-) I think it's a good idea, but there must be a cleaner way than just listing all the characters in the regexp.

cheers
sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
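One hedged possibility, rather than enumerating every character inline: build the pattern from locale-specific ranges. Purely illustrative; this is not the pattern PhpWiki actually uses, and it assumes a single-byte (Latin-1-style) locale:

    <?php
    // Upper/lower sets for a German locale; extend per language.
    $U = 'A-ZÄÖÜ';
    $L = 'a-zäöüß';
    // A WikiName: two or more capitalized words run together.
    $wikiname = "(?:[$U][$L]+){2,}";

    $word = 'SchöneSeite';
    if (preg_match("/^$wikiname\$/", $word))
        print "$word looks like a WikiName\n";
    ?>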
From: Arno H. <aho...@in...> - 2000-11-06 11:11:26

Hi all,

a friend of mine had an interesting idea about how to build a real multilingual wiki. What do you think about it? Would something like this work?

------------------

It could be very nice if the Wiki concept was enhanced with multi-language abilities, so people can add their translation to some pages. But the problem is that if one of the translations is modified the others must be....

I have maybe an idea: create a database of translations of lines (the original one, not the displayed one) and not of files. So when a page is computed, the translation is done line by line, with the untranslated parts left ``as is''. So the text file is stored in one or more languages. All the possible translations of lines are stored. The translation to a language will translate the text file into the target language; untranslatable lines will stay untranslated.

There is only one thing to add in the user interface: the ``translate page'' command.

It could be usable, just an example:

- Somebody writes an English page
  (the text file is in English)
- Somebody views it as French (but it is in English on the screen because it is untranslated)
- He translates the page to write it in French WITHOUT changing the line breaking (with a form with many ENTRIES).
  (The text file could be in English or French, it doesn't matter)
- The English page is changed: for the French reader there are some lines of English in the French text
  (the text file is in English because that is the language it was edited in)
- A Spanish translation is done
  (the text file could be in English or Spanish. It could also be in French with some lines in English)
- The French page is edited by adding some lines
  (the text file is in French and English)
- The Spanish and English readers see some text in French
- The French page is edited to remove some lines. The lines vanish in the other languages.
  (the text file is in French and English)

The point is that the translator does it line by line and that lines are not too short. A plus is that translation is done for all the files; it is not linked to one file.
From: Ori F. <or...@co...> - 2000-11-04 22:57:20

Hi,

As you and Arno said, OOP is not appropriate for a wiki. In fact, I have ceased development of my own version and hope to contribute to phpwiki instead.

What I hoped to achieve with my wiki, and this has little to do with OOP, is modularity. I not only wanted a different data storage approach, but also a different parser and a few more advanced features. I felt that a wiki which is easy to install and configure and yet offers a lot of the possible wiki features would be a good idea.

As to an abstract database interface, I find that the one that comes with PHPLib is easy to use. My wiki should work fine with any of the RDBMSs supported by PHPLib (well, maybe the random-page feature will fail, since it relies on a specific MySQL feature). There is also a DBI-like class being worked on as part of PEAR, the future PHP version of CPAN; you might want to look for that.

- Calanya, aka Ori Folger

"Don't anthropomorphize computers -- they hate it."
From: Arno H. <aho...@in...> - 2000-11-03 23:12:09

> We have an extensive rewrite in OO as well, done by Jeff Dairiki. Have
> you seen it? It has all the features (I think) of the 1.1.7 release plus
> some improvements.

Yes, Jeff has done some outstanding work within a very short time.

> Couple this with the problem that object systems can be much harder to
> understand than procedural ones, and that gives me second thoughts about
> introducing OO into PhpWiki.

The problem I see with OO in phpwiki is this: most of the time we only have one instance per class -- so what do we gain by using OO? What makes $data->function() better than function($data)?

In other cases, like e.g. the iterator class in Jeff's branch, OO can be very useful. And I guess in 1.3.x we will use OO to some extent where it makes sense, but keep to the procedural approach otherwise. OO excels when you can take advantage of inheritance and other more sophisticated features. But those cases are hardly needed in phpwiki. Unless you can convince me otherwise, I think a 100% OO approach has more cons than pros.

/Arno
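Arno's question in concrete form: with one instance and no inheritance, the two spellings below are interchangeable, and the class only starts to pay off once a subclass overrides something. The function and class names here are invented for illustration, not taken from the tree:

    <?php
    $page = array('content' => 'HomePage text');

    // Procedural spelling:
    function render_page($pagehash) { print $pagehash['content']; }
    render_page($page);

    // OO spelling of the same thing (PHP4-style constructor):
    class WikiPage {
        var $hash;
        function WikiPage($hash) { $this->hash = $hash; }
        function render()        { print $this->hash['content']; }
    }
    $p = new WikiPage($page);
    $p->render();
    // Identical behavior; the class earns its keep only when, say,
    // per-backend subclasses start overriding render().
    ?>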
From: Steve W. <sw...@wc...> - 2000-11-03 21:46:13

I'm offline again until Sunday night. I think we have plenty of new things in the code that we should get 1.1.9 out. I can do this Sunday night. Commit what you want and complain otherwise... I'll see you all Sunday.

cheers
sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Steve W. <sw...@wc...> - 2000-11-03 20:34:50

Hi Ori!

Sorry for the long wait in replying. I'd love to see it. I hope I have time to read it properly. I'm interested in your comments on PHP's object model too; I have read recently that it's too immature (after the PHP-Kongress in Cologne). We have an extensive rewrite in OO as well, done by Jeff Dairiki. Have you seen it? It has all the features (I think) of the 1.1.7 release plus some improvements.

I've added the task to our Task List on SourceForge to objectify the database access. This generally makes things easier for the developer: packaging data and the functions that operate on that data. But PHP doesn't seem to offer much more after that. Couple this with the problem that object systems can be much harder to understand than procedural ones, and that gives me second thoughts about introducing OO into PhpWiki. In a well-designed OO system, the classes are loosely coupled, they form subsystems that are easy to understand, and they are easy to discover and modify. If we only objectify one interface, we would have to do quite a bit of surgery on the code (if we stick with the current code base, and I don't see why not) for what might be little gain.

What would be infinitely better is if we used some kind of generic DB library, like Perl's DBI/DBD. Then PhpWiki could run on any relational database, and we would only have to maintain one interface and the various schemas. If we were really lucky, this library would support flat-file and DBM access as well, but we can wish all day for PHP to be Perl and it's not going to happen ;-)

cheers
sw

On Wed, 1 Nov 2000, Ori Folger wrote:

> Hi
>
> I have written an OOP-based Wiki in PHP a while back, before I knew of
> phpwiki. It's not as feature rich as phpwiki, but I think it's a good start.
>
> Would you be interested in seeing it?
>
> Ori

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
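A minimal sketch of what such a DBI-like layer might look like in PHP4: one small driver class per backend, all exposing the same two methods, so callers never touch mysql_* or dbm* directly. Class and method names are invented; this is not proposed code from the tree:

    <?php
    class WikiDB_mysql {
        var $conn;
        function WikiDB_mysql($host, $user, $pass, $dbname) {
            $this->conn = mysql_connect($host, $user, $pass);
            mysql_select_db($dbname, $this->conn);
        }
        function retrievePage($name) { /* SELECT by pagename ... */ }
        function insertPage($name, $hash) { /* REPLACE INTO ... */ }
    }

    class WikiDB_dbm {
        var $dbm;
        function WikiDB_dbm($file) { $this->dbm = dbmopen($file, 'c'); }
        function retrievePage($name) {
            return unserialize(dbmfetch($this->dbm, $name));
        }
        function insertPage($name, $hash) {
            dbmreplace($this->dbm, $name, serialize($hash));
        }
    }
    // Swapping backends then means changing one constructor call in the
    // config, since callers only ever see the two shared methods.
    ?>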
From: Steve W. <sw...@wc...> - 2000-11-03 19:43:53

As a test I downloaded the latest nightly build and edited the config to use Spanish. You can see it at:

http://wcsb.org:80/~swain/es/phpwiki/index.php

It's running on DBM.

sw

...............................ooo0000ooo.................................
Hear FM quality freeform radio through the Internet: http://wcsb.org/
home page: www.wcsb.org/~swain
From: Arno H. <aho...@in...> - 2000-11-02 21:22:20

> Completed the translation of the Spanish file names and some contents.

I have committed the patch. Thanks a lot, Sandino. Check out the latest CVS to test it out -- or wait till tomorrow and download the nightly tarball. Also, I have committed the initial set for a German translation.

To the lurkers on this list: if you speak Dutch, Spanish, or German, then you can help us translate. If you speak another language you would like to see included in phpwiki, then just go ahead and send in your patch.

/Arno
From: Arno H. <aho...@in...> - 2000-11-02 20:01:29

Neil, could you point me to a patch so that I can look at your changes?

> So, I would prefer that list items could continue onto multiple
> lines. I have implemented a change that achieves this and it works quite
> well. When transform.php finds a line that starts with normal text,
> it just includes it in the current context. To terminate a list, you
> just need a blank line, and this feels quite natural.

Hm, but you still could not indent the following lines, right? Or is this allowed as well? If so, is it then a <pre> block, or just normal text? If you can't indent the lines, what's the difference? It wouldn't be visually more appealing, or am I wrong?

> ... add a new markup which means "paragraph break at this level"
> 1/ ";;;" is currently line break, so maybe ";;;;" could be new-para,

You mean "%%%", right?

> 2/ "." on a line by itself or at the start of a line might be good
> as it is visually similar to the required concept:

Looks better, but generally speaking I'm no fan of creating too much new markup. After all, we don't want to reinvent HTML. We will modularize lib/transform.php in 1.3.x so it will be easier to integrate such custom markup.

> Somewhat related to this, consider the lists in TextFormattingRules.
> They are actually sequences of singleton lists.

That's true. Actually, I'm not too happy about the current state of SetHTMLOutputMode(), but for different reasons. Apart from that, why is the current HTML that bad? It makes no difference to browsers, unless you are handicapped in some way and rely on proper lists. In that case just include a style sheet that spreads list items. Not everything has to be done within wiki.

/Arno
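Purely as an illustration of Neil's "." proposal -- not the real transform.php, whose SetHTMLOutputMode() works differently -- the rule could be a single early case in the per-line loop:

    <?php
    // Toy version of the per-line pass; $source and the surrounding
    // markup handling are stand-ins for the real transform code.
    $source = array('* item one', '.', 'still item one', '');
    $html = '';
    foreach ($source as $line) {
        if (preg_match('/^\.\s*$/', $line)) {
            // A lone "." forces a paragraph break at the current
            // level without closing the enclosing list item.
            $html .= "</p>\n<p>";
            continue;
        }
        $html .= htmlspecialchars($line) . "\n";  // real markup pass goes here
    }
    print $html;
    ?>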
From: Sandino A. <sa...@sa...> - 2000-11-02 10:35:31

Some postgres fixes, which Arno already included. Completed the translation of the Spanish file names and some contents.

Patch against 1.1.8:
http://sandino.net/patch/phpwiki-1.1.8-pgsql-patch-8.patch

--
Sandino Araico Sánchez
Say no to piracy, use free software.