From: Jeff D. <da...@da...> - 2003-03-08 20:56:35
> First, when I access the phpwiki site locally, there is no problem. When I access the site remotely from the internet there is a problem. The links created using [ and ] point to "http://mycomputername.local/phpwiki/index.php/thenameofthewikipage". As I said, this works on my local network, but obviously not from the web. If I manually type in "http://myipaddress/phpwiki/index.php/thenameofthewikipage" the page does come up fine.

It's really a problem with your http server configuration. By default, PhpWiki deduces the server host name from the $SERVER_NAME environment variable, which is set by the http server (e.g. Apache). So the best way to fix the problem is to fix your web server configuration so that it reports the correct server name.

> Is there a way to change the "mycomputername.local" in the index.php so that it just points to the local server instead of the name my computer has in the local network? What part of index.php do I need to change, and how do I need to change it?

As a work-around you can manually set SERVER_NAME in index.php. Uncomment the line which says:

    if (!defined('SERVER_NAME')) define('SERVER_NAME', 'some.host.com');

and change 'some.host.com' to something more appropriate.

> My second question is, where are the new pages being created? In the "INSTALL" file it says that new pages are created in a "tmp" folder. I do not have a "tmp" folder in my phpwiki folder. So where are these pages being created?

The pages are stored in the MySQL database (if you're using the MySQL backend, which you are).

--
Jeff Dairiki <da...@da...>
From: Alexander D. <me...@ea...> - 2003-03-08 19:51:53
As a follow up to my previous question about trouble accessing my phpwiki files from the internet... Could changing the line

    'dsn' => 'mysql://wikiuser:mypassword@localhost/phpwiki',

to

    'dsn' => 'mysql://wikiuser:mypassword@/phpwiki',

be the answer?

To clarify, I had trouble accessing my phpwiki site from the internet because the links would put "http://mycomputername.local/" before the "phpwiki/nameofmywikipage" instead of putting "http://myipaddress/". This allowed me to access my phpwiki from my local network, but the links would not work if I accessed my phpwiki from the internet. From the internet, I could access my wiki pages by manually typing in "http://myipaddress/phpwiki/nameofmywikipage", but the mechanism to have the links work correctly was not working right.

Alexander
From: Alexander D. <me...@ea...> - 2003-03-08 19:37:32
I have installed phpwiki and am using it with MySQL. I am hosting the site on my own computer. Part of it runs just fine, but I have two questions.

First, when I access the phpwiki site locally, there is no problem. When I access the site remotely from the internet there is a problem. The links created using [ and ] point to "http://mycomputername.local/phpwiki/index.php/thenameofthewikipage". As I said, this works on my local network, but obviously not from the web. If I manually type in "http://myipaddress/phpwiki/index.php/thenameofthewikipage" the page does come up fine. Is there a way to change the "mycomputername.local" in index.php so that it just points to the local server instead of the name my computer has on the local network? What part of index.php do I need to change, and how do I need to change it?

My second question is, where are the new pages being created? In the "INSTALL" file it says that new pages are created in a "tmp" folder. I do not have a "tmp" folder in my phpwiki folder. So where are these pages being created?

The first question is more practical and the second question is more academic. Thank you for any help you can offer me.

Alexander
From: Jeff D. <da...@da...> - 2003-03-07 22:52:26
Hi Andrey

> does it possible to replace index.php:544 line
> define("CHARSET", "iso-8859-1");
> with the
> if (!defined("CHARSET")) define("CHARSET", "iso-8859-1");
> ?

Okay, done.

> And would be great to have possibility to override COMPRESS_OUTPUT variable and $DBParams array.

COMPRESS_OUTPUT is commented out in the distributed index.php, so if you want to define it in your custom config file, there should be no problem. (Note that the default behavior can only be obtained by leaving COMPRESS_OUTPUT undefined.)

As for DBParams, I'd rather not add that logic/clutter to index.php right now. (Joby Walker is working on a whole new configuration scheme which should make the kind of customization you're doing easier.)

> (I may post this to 'feature request' form, but my English is too poor :(

Your English is just fine!

Jeff
From: Jeff D. <da...@da...> - 2003-03-04 16:06:37
On Tue, 04 Mar 2003 20:03:02 +1000 Cameron Brunner <ga...@in...> wrote:

> These diff's removed 3 warnings on my phpWiki setup.
>
> turns out the configurator's config doesnt seem to contain
>
> if (!defined('RECENT_CHANGES')) define ('RECENT_CHANGES', 'RecentChanges');

The configurator script is not kept as up-to-date as might be desirable. (When in doubt, the distributed index.php is still the authoritative source for configuration information.) (Joby's working on a new scripted configuration system that will (when it works) probably replace index.php.)

I'll go stick RECENT_CHANGES and the CACHE_CONTROL defines into either the configurator script or lib/config.php right now... Does anybody know of any others that are missing?
From: Jeff D. <da...@da...> - 2003-03-04 15:29:14
On Tue, 04 Mar 2003 19:18:03 +1000 Cameron Brunner <ga...@in...> wrote:

>> I suspect my encoding is faster than your ord() method since in mine the inner loops are all done in C code.
>
> Your function certainly is faster.

Woohoo! I win, I win! (What do I get?)

> The reason I was suggesting ord() is that I avoid regex's because nearly every time I use them they are slower than doing it the simplest way in PHP.

As long as the "simplest way" is a built-in PHP function, then yes. But if the regexp allows you to avoid inner loops in PHP, then it's probably a good thing.

> OK, I'll agree to that, obviously there have been a lot of 'only a few more ms' added into wiki tho over time.

Yes. The motivation (particularly in 1.3.x) has been for features rather than for speed. If you want something light-weight, then PhpWiki is probably not for you. (I'm not saying that profiling/optimization wouldn't be helpful, but we've got enough lines of code that we're never going to be blazingly fast.)

> Profiling is god when it comes to cleaning up your code. So many applications I have managed to triple or better their speed with careful use of APD. APD has since moved to pear (only found this out when i went to reinstall it). I can't get it to compile AND work tho so I dont know whats up with that. http://pear.php.net/package-info.php?pacid=118 has the latest versions.

Oof. I'll take a quick look into it and see if I can get it to work.

> I really could have done WITHOUT knowing that, now to rewrite a lot of code. Thanks for the info tho. Well the way I set it then its a case of setting to serialize and then falling back to begin/commit?

Yes, that's the safe, paranoid approach. BUT note that if you use "serialize", then your UPDATEs and DELETEs will fail with "ERROR: Can't serialize access due to concurrent update" upon concurrent updates. (That doesn't happen if you don't use "serialize" transaction mode.) So you'll have to check for and handle that. (I believe at that point, you have to ROLLBACK and start the whole transaction again...) (I'm now remembering why we just lock the tables...)

If you're really interested in performance, then you need to take a closer look at your code and figure out exactly which level of concurrency protection you do need.

> I could scream now, turns out it REFUSES to output anything when using suexec cgi-bin php, module php works tho, does anyone know the reason for this? Is this done on purpose or what?

Oh. Take a look at the giant funky nested IF tangle at the bottom of index.php. It's supposed to call main() when index.php is invoked as the top level PHP script, but not when index.php is included by some other script. (Someone added that to support image caching by plugins --- and it does need to be fixed.) Make sure 'lib/main.php' is actually being included. (If you're not using the image caching you can just get rid of all the ifs.)

Then again, if you're getting SQL action out the back (like you said you were) I think that means it must be getting past index.php....
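Jeff's ROLLBACK-and-retry advice can be sketched as follows. This is a minimal illustration in Python against a hypothetical connection object, not PhpWiki's actual WikiDB API: the point is only that under SERIALIZABLE isolation a serialization failure invalidates the whole transaction, so recovery means rolling back and re-running the entire body.

```python
class SerializationError(Exception):
    """Stands in for PostgreSQL's 'Can't serialize access due to
    concurrent update' error."""

def run_serializable(conn, transaction_body, max_retries=5):
    """Run transaction_body inside a SERIALIZABLE transaction,
    rolling back and retrying from scratch on serialization failure."""
    for _attempt in range(max_retries):
        conn.execute("BEGIN")
        conn.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
        try:
            result = transaction_body(conn)
            conn.execute("COMMIT")
            return result
        except SerializationError:
            # The whole transaction is now invalid; undo it and retry.
            conn.execute("ROLLBACK")
    raise RuntimeError("gave up after %d serialization failures" % max_retries)
```

A real driver would raise its own error class (in PostgreSQL, SQLSTATE 40001); the retry shape is the same either way.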
From: Cameron B. <ga...@in...> - 2003-03-04 10:02:35
These diffs removed 3 warnings on my phpWiki setup. I don't know if the request patch is as good as it could be, but I could not find anywhere else that called the second function that I cleaned up, so I assumed on the input. Hopefully I was right.

That said, I have 1 more warning; here it is below:

    In template 'head' (In template 'html'?):58: Notice[8]: Use of undefined constant RECENT_CHANGES - assumed 'RECENT_CHANGES':
    * <link rel="alternate" type="application/rss+xml" title="RSS" href="<?=WikiURL(RECENT_CHANGES, array('format' => 'rss'))?>" />

Turns out the configurator's config doesn't seem to contain

    if (!defined('RECENT_CHANGES')) define ('RECENT_CHANGES', 'RecentChanges');

and also the template doesn't do a defined (tho I assume that should be assumed). So far so good; the only remaining bug I know of is the cgi php errors.

Cameron Brunner
inetsalestech.com
From: Cameron B. <ga...@in...> - 2003-03-04 09:17:38
Jeff Dairiki wrote:

>>> function pagename_to_filename ($pagename) {
>>>     return preg_replace('/(?<! % | %.)([A-Z])/x', '^$1',
>>>                         rawurlencode($pagename));
>>> }
>>>
>>> This should escape "~PageName" to "%7E^Page^Name"
>>
>> That's 1 way but the reason i suggested ord() was it will be quite fast, consider that filename length limit on windows is 128 or 256 (forget) its not much of an issue unless pages are 64/128 chars long. As for readability, it wouldnt be hard to make a little script that opened the dir and listed like this
>
> The current official page name limit in PhpWiki is 100 characters.
>
> I suspect my encoding is faster than your ord() method since in mine the inner loops are all done in C code. If you want to use ord, the inner loop (over each of the characters in the page name) is in PHP. (Ord() only works on one character at a time.)
>
> However, again, I suspect either way would be plenty fast enough.

Your function certainly is faster:

    10000 loops with your regex: real 0m0.379s
    10000 loops with ord:        real 0m0.629s

The reason I was suggesting ord() is that I avoid regex's because nearly every time I use them they are slower than doing it the simplest way in PHP.

>> Just retested it now (last time it wouldnt work at all, blank page all the time) and I got the same as I'm getting in PgSQL, blank pages after loading the virgin wiki.
>
> Okay, so the problem you're seeing now (blank pages) probably is not tied specifically to either backend.

>> As for performance, you cant just write off optimizing it knowing that wiki is slow, you need to do what you can with all the little things first, if its as simple as profiling each function and seeing whats the slowest then so be it. (http://apd.communityconnect.com/ for profiling abilities)
>
> When I wrote the gzcompress code I did profile it. On my (not very fast) machine, the time to gzcompress a few kbytes of marked up data was barely measurable (a few milliseconds). That's truly insignificant compared to the total page save time.

OK, I'll agree to that, obviously there have been a lot of 'only a few more ms' added into wiki tho over time.

> You're right in that more careful profiling of PhpWiki would be highly productive. And you're right --- if one is to start worrying about non-trivial optimization the only way to start is by profiling.
>
> I was unaware of APD. Thanks for pointing that out.

Profiling is god when it comes to cleaning up your code. So many applications I have managed to triple or better their speed with careful use of APD. APD has since moved to pear (only found this out when i went to reinstall it). I can't get it to compile AND work tho so I dont know whats up with that. http://pear.php.net/package-info.php?pacid=118 has the latest versions.

>>> I'm no pgsql expert. Are those begin/commits sufficient to prevent concurrent PhpWikis from getting partially updated data? (When the update happens over several UPDATE operations within a single begin/commit, that is.)
>>
>> Yes, no data is updated that the other clients can read it until the commit is sent. It will also allow multiple people to hit a table at once updating not stop the others from doing their thing before they can write.
>
> That's not correct (at least by default). In this morning's readings, I learned: By default, you get "'Read Committed' Isolation" which does indeed guarantee that you see no uncommitted data. But that's not equivalent to a write lock, in that the state of the database can still change in the middle of your transaction (through no action of yours).
>
> It's a subtle enough problem that I'd be very leery about changing the locking code on a "production" PhpWiki without some very careful inspection and testing of the WikiDB code.
>
> Ref: http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=transaction-iso.html

I really could have done WITHOUT knowing that, now to rewrite a lot of code. Thanks for the info tho. Well the way I set it then its a case of setting to serialize and then falling back to begin/commit?

>> It will also allow multiple people to hit a table at once updating not stop the others from doing their thing before they can write.
>
> Actually, no it's not.

>> Enlighten me, I dont have much time but for worthwhile projects and various things I'll use daily im willing to spend a bit of time on them.
>
> Well, if you really want to get into the guts of the WikiDB, "use the source, Luke!"

I'm trying but with how insanely OOP it is its hard sometimes. 1 minute I have PEAR open, the next i have a wiki lib, the next a config file. Whole lot of fun there.

> If you wanted to contribute to PhpWiki, a project which is probably pretty finite in scope, which would be useful would be to get APD running and give us a report of some profiling statistics...

Once I get it working I'll organize some profiles of various tasks and see whats so slow (suspecting the regex's for page highlighting)

>> I'll look into where its dying more today, I've been quite busy.
>
> That should be your first priority, PhpWiki-wise, I should think.

I could scream now, turns out it REFUSES to output anything when using suexec cgi-bin php, module php works tho, does anyone know the reason for this? Is this done on purpose or what?

Cameron Brunner
inetsalestech.com
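The trade-off benchmarked above (one regex pass versus a per-character ord() loop) can be illustrated with a small sketch. This is an illustrative Python transcription of the idea, not the PHP code Cameron timed; both functions produce the same escaping, and the point of the thread is that the first keeps the inner loop inside the regex engine's C code rather than the interpreter.

```python
import re

def escape_caps_regex(name):
    # One pass; the per-character loop runs inside the regex engine.
    return re.sub(r'([A-Z])', r'^\1', name)

def escape_caps_loop(name):
    # Equivalent per-character loop in interpreted code, along the
    # lines of the ord()-based approach: visit each character yourself.
    out = []
    for ch in name:
        if 'A' <= ch <= 'Z':
            out.append('^')
        out.append(ch)
    return ''.join(out)
```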
From: Jeff D. <da...@da...> - 2003-03-04 03:41:43
Okay, I think I've fixed the postgres binary data problem (in CVS). (But I'm still clueless as to the cause of Cameron's blank page problem --- though currently that appears to be unrelated to postgres.)

Jeff

On Sat, 01 Mar 2003 20:30:00 +1000 Cameron Brunner <ga...@in...> wrote:

> i am using the latest phpwiki from cvs as of about 2 hours ago (thats how long its taken me to get this far)
>
> lib/WikiDB/backend/PearDB.php:681: Fatal[256]: wikidb_backend_pgsql: fatal database error
>
> * DB Error: unknown error
> * (UPDATE page SET hits=0, pagedata='a:1:{s:12:"_cached_html";s:875:"[compressed binary data, mangled in transit]
>
> i get that when it does "Loading up virgin wiki"
From: Jeff D. <da...@da...> - 2003-03-04 00:01:23
>> function pagename_to_filename ($pagename) {
>>     return preg_replace('/(?<! % | %.)([A-Z])/x', '^$1',
>>                         rawurlencode($pagename));
>> }
>>
>> This should escape "~PageName" to "%7E^Page^Name"
>
> That's 1 way but the reason i suggested ord() was it will be quite fast, consider that filename length limit on windows is 128 or 256 (forget) its not much of an issue unless pages are 64/128 chars long. As for readability, it wouldnt be hard to make a little script that opened the dir and listed like this

The current official page name limit in PhpWiki is 100 characters.

I suspect my encoding is faster than your ord() method since in mine the inner loops are all done in C code. If you want to use ord, the inner loop (over each of the characters in the page name) is in PHP. (Ord() only works on one character at a time.)

However, again, I suspect either way would be plenty fast enough.

> Just retested it now (last time it wouldnt work at all, blank page all the time) and I got the same as I'm getting in PgSQL, blank pages after loading the virgin wiki.

Okay, so the problem you're seeing now (blank pages) probably is not tied specifically to either backend.

> As for performance, you cant just write off optimizing it knowing that wiki is slow, you need to do what you can with all the little things first, if its as simple as profiling each function and seeing whats the slowest then so be it. (http://apd.communityconnect.com/ for profiling abilities)

When I wrote the gzcompress code I did profile it. On my (not very fast) machine, the time to gzcompress a few kbytes of marked up data was barely measurable (a few milliseconds). That's truly insignificant compared to the total page save time.

You're right in that more careful profiling of PhpWiki would be highly productive. And you're right --- if one is to start worrying about non-trivial optimization the only way to start is by profiling.

I was unaware of APD. Thanks for pointing that out.

>> I'm no pgsql expert. Are those begin/commits sufficient to prevent concurrent PhpWikis from getting partially updated data? (When the update happens over several UPDATE operations within a single begin/commit, that is.)
>
> Yes, no data is updated that the other clients can read it until the commit is sent. It will also allow multiple people to hit a table at once updating not stop the others from doing their thing before they can write.

That's not correct (at least by default). In this morning's readings, I learned: By default, you get "'Read Committed' Isolation" which does indeed guarantee that you see no uncommitted data. But that's not equivalent to a write lock, in that the state of the database can still change in the middle of your transaction (through no action of yours).

It's a subtle enough problem that I'd be very leery about changing the locking code on a "production" PhpWiki without some very careful inspection and testing of the WikiDB code.

Ref: http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=transaction-iso.html

> It will also allow multiple people to hit a table at once updating not stop the others from doing their thing before they can write.

Actually, no it's not.

> Enlighten me, I dont have much time but for worthwhile projects and various things I'll use daily im willing to spend a bit of time on them.

Well, if you really want to get into the guts of the WikiDB, "use the source, Luke!"

If you wanted to contribute to PhpWiki, a project which is probably pretty finite in scope, which would be useful would be to get APD running and give us a report of some profiling statistics...

> I'll look into where its dying more today, I've been quite busy.

That should be your first priority, PhpWiki-wise, I should think.
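Jeff's escaping scheme can be sanity-checked with a quick transcription. This is an illustrative Python version, not PhpWiki code; note that modern URL-escaping functions (both PHP's rawurlencode since 5.3 and Python's quote) leave '~' unescaped per RFC 3986, so the example below uses a space rather than '~' to exercise the %XX case. The two fixed-width lookbehinds play the role of Jeff's single variable-width one.

```python
import re
import urllib.parse

def pagename_to_filename(pagename):
    # rawurlencode-style escape, then put a carat before each capital
    # letter so names that differ only in case stay distinct on
    # case-insensitive filesystems. The two lookbehinds skip capital
    # hex digits inside %XX escape sequences.
    quoted = urllib.parse.quote(pagename, safe='')
    return re.sub(r'(?<!%)(?<!%.)([A-Z])', r'^\1', quoted)

def filename_to_pagename(filename):
    # The escape is reversible: drop the carats, then URL-decode.
    # (A literal '^' in a page name is itself %-encoded, so this is safe.)
    return urllib.parse.unquote(filename.replace('^', ''))
```

The round trip preserves the original page name, which is the property Cameron wanted from his ord()-based scheme without giving up legible filenames.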
From: Zot O'C. <zo...@wh...> - 2003-03-03 23:16:42
On Mon, 2003-03-03 at 12:59, Jeff Dairiki wrote:

> QUESTIONS FOR THOSE WHO USE OR KNOW ABOUT POSTGRES:
> ==================================================
>
> 1. Am I missing something?

Binary data can be a pain no matter what. If the connection to the DB goes through a driver (ODBC) or other proxy, then binary data is one more hurdle.

> 2. All you PhpWiki postgres users: what version of postgres do you run? Does it support BYTEA?

1.3.4 I got working because dbm support was so broken.

> 3. Do you have opinions regarding solution #1 vs #2 (or #3)?

#2. #1 will not work on many installations out there, and one more dependency is a pain. Also as mentioned above, it will work with driver and things like pgaccess/phpMyAdmin without upgrading.

--
Zot O'Connor
http://www.ZotConsulting.com
http://www.WhiteKnightHackers.com
From: Cameron B. <ga...@in...> - 2003-03-03 22:18:58
|
Jeff Dairiki wrote:

>On Mon, 03 Mar 2003 08:56:18 +1000
>Cameron Brunner <ga...@in...> wrote:
>
>>>(For a fairly well tested lighter-weight backend try the dba backend.)
>>
>>That requires recompiling php.
>
>Okay, never mind then.
>
>>>The flat-file backend is a recent addition by Jochen Kalmbach
>>>(<Jo...@ka...>). That said, the only bug I know of in it
>>>is that it won't work on case-insensitive file systems (Windows).
>>
>>Simple enough, but just sitting here I had an idea how to fix that:
>>instead of the filename, ord() each character of the filename so it
>>just becomes a big int string, which makes case insensitivity a
>>non-issue. I would suggest md5, but with just ord/chr you can reverse
>>the process and get the real name back from the file.
>
>Yeah, I was thinking along similar lines. Currently the file names
>are rawurlencoded to convert non-alphanumerics to %xx codes.
>On those systems which have case-insensitive file systems, add
>one more step to put a caret (or something) in front
>of capital letters. (An underscore would be a better choice,
>except that underscores are not escaped by rawurlencode.)
>I think this is a good choice, both since it tends to preserve
>the legibility of file names, and doesn't lengthen filenames
>unnecessarily. (As I suspect filename length may become
>an issue on some systems...)
>
>Untested code:
>
>function pagename_to_filename ($pagename) {
>    return preg_replace('/(?<! % | %.)([A-Z])/x', '^$1',
>                        rawurlencode($pagename));
>}
>
>(The look-behind assertion in the regexp prevents caret-ifying of
>capital letters in (%7E) rawurl escape sequences.)
>
>This should escape "~PageName" to "%7E^Page^Name"

That's one way, but the reason I suggested ord() is that it would be quite fast. Considering that the filename length limit on Windows is 128 or 256 (I forget), length is not much of an issue unless page names are 64/128 chars long.

As for readability, it wouldn't be hard to make a little script that opened the dir and listed entries like this:

    realfilename    intstringhere

Then again, there's nothing to say you couldn't make a plugin function to do the filename encoding, and in that, detect the platform: if it's unix, don't encode; if win32, whatever you prefer.

>(But since it looks like you're running on a unix server,
>is there some other problem which is keeping you from using
>the flat-file backend?)

Just retested it now (last time it wouldn't work at all, blank page all the time) and I got the same as I'm getting in PgSQL: blank pages after loading the virgin wiki.

>>>There is an unadvertised install-time config value which (is supposed
>>>to) defeat the caching of the marked-up data altogether.
>>
>>ummmm, I'm not saying don't cache, I'm saying don't compress it when
>>you write it to the db/filesystem, for performance reasons. As for
>>suggesting bzip: if people are worried about disk usage, bzip will be
>>better than gzip. There should also be a configurable compression
>>level IMO; 9 takes a lot of cpu, and most of the time I find 2 is fine
>>for on-the-fly stuff and uses a fair bit less cpu.
>
>I understood what you meant. I think with gzip, performance
>considerations are a non-issue. The zipping is much faster than the
>rest of the PhpWiki code. Bzip would be fine --- however I suspect the
>space savings are not large, and I didn't do that to avoid code
>complications/complexity.

As for performance, you can't just write off optimizing it knowing that the wiki is slow; you need to do what you can with all the little things first. If it's as simple as profiling each function and seeing what's the slowest, then so be it. (See http://apd.communityconnect.com/ for profiling abilities.)

>>>>Also I am curious why the LOCK TABLE's in the code? Feel free.
>>
>>Simplest way is just to make lock() in pgsql do nothing; it's already
>>surrounded by begin/commit because of pear, it seems.
>
>No, we do the begin/commits (in lib/WikiDB/backend/pgsql.php).
>
>I'm no pgsql expert. Are those begin/commits sufficient to prevent
>concurrent PhpWikis from getting partially updated data? (When the
>update happens over several UPDATE operations within a single
>begin/commit, that is.)

Yes; no data that the other clients can read is updated until the commit is sent. It will also allow multiple people to hit a table at once, updating without stopping the others from doing their thing before they can write.

>>when you call getpage, why not just have an extra flag on it to grab
>>the latest version or something? it would only need to be simple.
>>maybe make an extended getpage that you could feed the revision you
>>wanted as well as the page? should be simple enough
>
>It's not that simple. And the two selects, I suspect, are among the
>least of the SQL efficiency concerns in the current code.

Enlighten me. I don't have much time, but for worthwhile projects and various things I'll use daily, I'm willing to spend a bit of time on them.

>>>Do you get HTTP headers back?
>
>Those headers don't show any obvious problems. (However there should
>be ETag and Last-Modified headers too if PhpWiki was running to
>normal completion on a page view --- of course we already knew it
>wasn't.)

I'll look into where it's dying more today; I've been quite busy.

>I still plan on looking at the original postgres bug later today.
>(I have to install postgres first --- among other things.)

If you would prefer a shell to work on, let me know and I can organize it.

Cameron Brunner
inetsalestech.com
 |
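Cameron's ord()-based idea above can be sketched quickly. A minimal sketch in Python (PhpWiki itself is PHP; these helper names are hypothetical, not part of any codebase): each character becomes a fixed-width decimal code, so the result contains only digits, which sidesteps case-insensitive file systems at the cost of roughly tripling the name length, exactly the trade-off discussed in the thread.

```python
# Sketch of the reversible ord()-style filename encoding suggested above.
# Hypothetical helpers, not part of PhpWiki. Each character is written as
# a fixed-width (3-digit) decimal code so decoding is unambiguous; this
# limits it to code points below 1000, enough for Latin-1 page names.

def name_to_digits(pagename: str) -> str:
    """'HomePage' -> '072111109101080097103101' (digits only, case-proof)."""
    return ''.join('%03d' % ord(c) for c in pagename)

def digits_to_name(encoded: str) -> str:
    """Inverse of name_to_digits: split into 3-digit groups and chr() them."""
    return ''.join(chr(int(encoded[i:i + 3])) for i in range(0, len(encoded), 3))

if __name__ == '__main__':
    name = 'HomePage'
    enc = name_to_digits(name)
    assert enc.isdigit()                # no letters, so file-name case is moot
    assert digits_to_name(enc) == name  # fully reversible, as Cameron notes
    assert len(enc) == 3 * len(name)    # the length cost he weighs against limits
```

The 3x growth is why the Windows path-length limit Cameron mentions becomes the binding constraint with this scheme.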
From: Jeff D. <da...@da...> - 2003-03-03 20:59:03
|
After spending some time reading the postgres docs, here's what I think I've figured out.

It appears that a postgres TEXT is not binary-safe. TEXTs have associated character encodings, and null bytes are explicitly not allowed within TEXTs. Recent postgreses have a BYTEA type which is meant for storing binary data.

I see two solutions to the current problem:

1) Change the pgsql schema to use BYTEAs for the page and version metadata fields. This brings some compatibility concerns, since it appears that only recent versions of postgres support BYTEAs. (It first shows up in the 7.2 docs, though my 7.1 installation seems to have basic support for it.) BYTEAs are also a bit of a pain to deal with, since data has to be double-escaped:

    select octet_length('a\0b'::bytea);     => 1
    select octet_length('a\\000b'::bytea);  => 3

2) Encode binary data (in this case the compressed cached page markup data) in some ASCII encoding (probably base64). This has the advantages of maximum backwards compatibility, and of not requiring schema changes. (The disadvantage is a 33% larger storage requirement for the binary data.)

For now, I favor approach #2.

QUESTIONS FOR THOSE WHO USE OR KNOW ABOUT POSTGRES:
===================================================

1. Am I missing something?

2. All you PhpWiki postgres users: what version of postgres do you run? Does it support BYTEA?

3. Do you have opinions regarding solution #1 vs. #2 (or some third approach)?

Jeff
 |
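Jeff's two observations, that compressed markup contains bytes a TEXT column may reject and that base64 costs about a third more space, are easy to check. A sketch in Python (assumption: the actual PhpWiki code is PHP, and zlib here stands in for PHP's gzcompress):

```python
import base64
import zlib

# Simulated cached page markup: repetitive serialized text, like the
# _cached_html pagedata in the failing UPDATE (an assumption, for scale).
data = b'<p>wiki markup</p>' * 64
compressed = zlib.compress(data, 9)

# The compressed stream is binary, not pure printable ASCII, which is
# what a postgres TEXT column (with its character-encoding rules) rejects.
assert not all(32 <= byte < 127 for byte in compressed)

# Solution #2: base64 yields pure ASCII at exactly 4 output bytes per 3
# input bytes, i.e. the roughly 33% overhead mentioned above.
encoded = base64.b64encode(compressed)
assert max(encoded) < 128
assert len(encoded) == 4 * ((len(compressed) + 2) // 3)

# Round trip: decoding then decompressing recovers the original markup.
assert zlib.decompress(base64.b64decode(encoded)) == data
```

The exact 4/3 ratio is why the "33% larger storage" figure above is not an estimate but a property of base64 itself (plus a byte or two of padding).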
From: Jeff D. <da...@da...> - 2003-03-03 17:55:03
|
On Mon, 03 Mar 2003 08:56:18 +1000
Cameron Brunner <ga...@in...> wrote:

> >(For a fairly well tested lighter-weight backend try the dba backend.)
>
> That requires recompiling php.

Okay, never mind then.

> >The flat-file backend is a recent addition by Jochen Kalmbach
> >(<Jo...@ka...>). That said, the only bug I know of in it
> >is that it won't work on case-insensitive file systems (Windows).
>
> Simple enough, but just sitting here I had an idea how to fix that:
> instead of the filename, ord() each character of the filename so it
> just becomes a big int string, which makes case insensitivity a
> non-issue. I would suggest md5, but with just ord/chr you can reverse
> the process and get the real name back from the file.

Yeah, I was thinking along similar lines. Currently the file names
are rawurlencoded to convert non-alphanumerics to %xx codes.
On those systems which have case-insensitive file systems, add
one more step to put a caret (or something) in front
of capital letters. (An underscore would be a better choice,
except that underscores are not escaped by rawurlencode.)
I think this is a good choice, both since it tends to preserve
the legibility of file names, and doesn't lengthen filenames
unnecessarily. (As I suspect filename length may become
an issue on some systems...)

Untested code:

function pagename_to_filename ($pagename) {
    return preg_replace('/(?<! % | %.)([A-Z])/x', '^$1',
                        rawurlencode($pagename));
}

(The look-behind assertion in the regexp prevents caret-ifying of
capital letters in (%7E) rawurl escape sequences.)

This should escape "~PageName" to "%7E^Page^Name"

(But since it looks like you're running on a unix server,
is there some other problem which is keeping you from using
the flat-file backend?)

> >There is an unadvertised install-time config value which (is supposed
> >to) defeat the caching of the marked-up data altogether.
>
> ummmm, I'm not saying don't cache, I'm saying don't compress it when
> you write it to the db/filesystem, for performance reasons. As for
> suggesting bzip: if people are worried about disk usage, bzip will be
> better than gzip. There should also be a configurable compression
> level IMO; 9 takes a lot of cpu, and most of the time I find 2 is fine
> for on-the-fly stuff and uses a fair bit less cpu.

I understood what you meant. I think with gzip, performance
considerations are a non-issue. The zipping is much faster than the rest
of the PhpWiki code. Bzip would be fine --- however I suspect the space
savings are not large, and I didn't do that to avoid code
complications/complexity.

> >>Also I am curious why the LOCK TABLE's in the code? Feel free.
>
> Simplest way is just to make lock() in pgsql do nothing; it's already
> surrounded by begin/commit because of pear, it seems.

No, we do the begin/commits (in lib/WikiDB/backend/pgsql.php).

I'm no pgsql expert. Are those begin/commits sufficient to prevent
concurrent PhpWikis from getting partially updated data? (When the
update happens over several UPDATE operations within a single
begin/commit, that is.)

> when you call getpage, why not just have an extra flag on it to grab
> the latest version or something? it would only need to be simple.
> maybe make an extended getpage that you could feed the revision you
> wanted as well as the page? should be simple enough

It's not that simple. And the two selects, I suspect, are among the
least of the SQL efficiency concerns in the current code.

> >Do you get HTTP headers back?

Those headers don't show any obvious problems. (However there should
be ETag and Last-Modified headers too if PhpWiki was running to
normal completion on a page view --- of course we already knew it
wasn't.)

I still plan on looking at the original postgres bug later today.
(I have to install postgres first --- among other things.)
 |
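Jeff's look-behind trick above translates to other regex engines. A sketch in Python (an illustration only; Python's re module requires fixed-width look-behinds, so the single variable-width `(?<! % | %.)` becomes two chained assertions, and the input is assumed to be already rawurlencoded):

```python
import re

def caretify(encoded_pagename: str) -> str:
    """Put ^ before capitals, skipping the hex digits of %XX escapes.

    Mirrors the untested PHP pagename_to_filename() above: a capital is
    left alone when the previous char is '%' (it is the first hex digit)
    or the char two back is '%' (it is the second hex digit).
    """
    return re.sub(r'(?<!%)(?<!%.)([A-Z])', r'^\1', encoded_pagename)

# "~PageName" rawurlencodes to "%7EPageName"; the E of %7E must survive
# untouched while the capitals of the page name itself get carets.
assert caretify('%7EPageName') == '%7E^Page^Name'
assert caretify('HomePage') == '^Home^Page'
```

Lowercasing is never applied, so decoding is just dropping each `^` and reversing the URL-encoding, which preserves the legibility Jeff wants.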
From: Asheesh L. <as...@as...> - 2003-03-02 21:06:27
|
On Sun, 2 Mar 2003, harobed wrote:

> Hello,
>
> I have many questions about the configuration of phpwiki.
>
> 1. I don't succeed in editing pages if I don't connect from localhost.
> I searched in index.php to resolve this problem but with no success.

This is probably a problem with your Apache configuration. If Apache
does not know your fully-qualified domain name, because you don't
specify it, any page that redirects will redirect to the one name it
does know: localhost. Edit pages redirect, I believe, and that's why you
get this.

See http://httpd.apache.org/docs/mod/core.html#usecanonicalname for more
details and how to disable it.

> 2. How to define the first page to show?
>
> 3. How to log in to a wiki page?

Other takers?

-- Asheesh.

--
economics, n.:
    Economics is the study of the value and meaning of J.K. Galbraith.
        -- Mike Harding, "The Armchair Anarchist's Almanac"
 |
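The redirect-to-localhost symptom usually goes away with either of the following directives. This is a hypothetical httpd.conf fragment (the host name is a placeholder; see the UseCanonicalName documentation linked above):

```apache
# Option 1: tell Apache its real name, so self-referential redirects
# use it instead of whatever the local hostname lookup returns.
ServerName wiki.example.com

# Option 2: build self-referential URLs from the client-supplied
# Host: header instead of the canonical server name.
UseCanonicalName Off
```

Option 1 is the safer choice when the server is reached under one well-known name; option 2 matches what Asheesh suggests disabling.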
From: harobed <ma...@ha...> - 2003-03-02 20:49:51
|
Hello,

I have many questions about the configuration of phpwiki.

1. I don't succeed in editing pages if I don't connect from localhost. I
searched in index.php to resolve this problem but with no success.

2. How to define the first page to show?

3. How to log in to a wiki page?

Thank you,
harobed
 |
From: Jeff D. <da...@da...> - 2003-03-02 17:03:32
|
(Points answered in order from simplest to hardest...)

Cameron Brunner <ga...@in...> wrote:

> And last but not least, the file storage format won't work for me :(
> figured it might be an easy replacement for pgsql and be lightweight in
> comparison but obviously not yet. If anyone wants help with the file
> class I might have some time to work on them and help get it working.

(For a fairly well tested lighter-weight backend try the dba backend.)

The flat-file backend is a recent addition by Jochen Kalmbach
(<Jo...@ka...>). That said, the only bug I know of in it
is that it won't work on case-insensitive file systems (Windows).
That said, if you can either find or fix bugs in it, do let one of us
know (preferably Jochen).

> I'm not certain on this but i THINK pgsql requires to be sent a query
> that tells it to use 'sql standard' quoting rather than addslashes so
> i think this is where the issue comes in. TEXT columns have no issue
> storing binary data. (I dont normally use PEAR so I dont know exactly
> whats causing the problems)

Thanks. I'll look into that (probably tomorrow). (If you feel like
taking a crack at it, feel free!)

> I highly suggest making that an install-time configuration value, its
> good for disk space but not brilliant for speed in most cases and also
> bad for stuff like this. PHP obviously also supports bzip2 as well as
> gzip, might be worth adding for those with limited space.

There is an unadvertised install-time config value which (is supposed
to) defeat the caching of the marked-up data altogether. I was going to
suggest you try that until I noticed that it's currently broken and
wouldn't help in your case (currently it prevents the use of the cached
data, but not the generation of it. :-/). I'll fix that, and make it an
advertised feature next week.

I think speed is a non-issue for gzip compression. The time to parse and
mark up the page text is two or three orders of magnitude more than the
gzip time. Same for gzip decompression vs. the rest of the display code.
PhpWiki is not fast.

Bzip compression is slower than gzip. I don't know if it's enough slower
to be an issue (probably not). I didn't add bzip support just because I
figured the compression gain wasn't worth the added code complexity. The
data being compressed, in this case, is very compressible (rather
repetitive text), and I don't think bzip will compress it much better
than gzip, in any case.

> Also I am curious why the LOCK TABLE's in the code? I would have
> thought a transaction would have been sufficient? (by all means
> correct me if im wrong)

It's because:

1) most of the developers (especially me) use mostly MySQL.

2) all (two) of the SQL backends share most of their code --- this
   favors using the lowest-common-denominator locking paradigm.

It could certainly be fixed. Given how slow the PHP part of the PhpWiki
code is, I'm not sure it's an issue worth worrying about. If you feel
like taking it on, however, please feel free.

> Also just a note, for optimize() in pgsql.php it would be better to do
> VACUUM FULL $table then ANALYZE $table, vacuum full will lock the
> table completely tho and is slower,
> http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=sql-vacuum.html
> for more info.

Okay. Any idea how long "takes much longer" is? Are we talking a second
or two? If it could be more than that, the delay might start to become
significant. If not, then it's probably better to do the less obtrusive
ANALYZE automatically, and let the admin do the FULL manually (or via
cron)...

> Just a suggestion on the above, you can probably do
> SELECT * FROM page, version WHERE page.id=version.id AND
> pagename='HomePage' ORDER BY version DESC LIMIT 1
> to get the latest version of a document and save a query.

Yes. The precomputed latestversion is an optimization so that SQL
doesn't have to sort the versions every time. It also allows one to
fetch information on the latest version of each page in one query.

Those two selects you note, though, could certainly be combined into a
single select (still using the precomputed latestversion). This would
take a lot of work though --- the reason for the separate selects has to
do with the WikiDB API. The first SELECT results from the
$page = $dbi->getPage() call, and the second results from the
$version = $page->getCurrentRevision() call.

> Next issue on the list, i now goto index.php after loading the virgin
> wiki as it so put it and i get nothing (blank document, no html at
> all). I added an echo into simpleQuery() in pear's pgsql.php and got
> this
>
> SELECT * FROM page WHERE pagename='global_data'
> BEGIN WORK
> LOCK TABLE page
> LOCK TABLE version
> LOCK TABLE link
> LOCK TABLE recent
> LOCK TABLE nonempty
> SELECT latestversion FROM page, recent WHERE page.id=recent.id AND
> pagename='HomePage'
> SELECT * FROM page, version WHERE page.id=version.id AND
> pagename='HomePage' AND version=1
> COMMIT WORK

Hmmm. I'm stumped on that one. Do you get HTTP headers back?

The best way to debug these things is to (looking at the source) trace
the program flow and add echo's along the way to figure out where things
crap out. (My guess is that you're getting stuck somewhere in
displayPage() (in lib/display.php).)
 |
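The two strategies discussed here, Cameron's ORDER BY ... LIMIT 1 versus Jeff's precomputed latestversion column, can be compared on a toy schema. A sketch using SQLite from Python (the tables are simplified stand-ins, not PhpWiki's actual schema):

```python
import sqlite3

# Toy versions of the page/version tables (schema deliberately simplified).
db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE page (id INTEGER PRIMARY KEY, pagename TEXT,
                       latestversion INTEGER);
    CREATE TABLE version (id INTEGER, version INTEGER, content TEXT);
    INSERT INTO page VALUES (1, 'HomePage', 3);
    INSERT INTO version VALUES (1, 1, 'v1'), (1, 2, 'v2'), (1, 3, 'v3');
""")

# Cameron's suggestion: let SQL sort and pick the newest version.
sort_row = db.execute("""
    SELECT version, content FROM page, version
    WHERE page.id = version.id AND pagename = 'HomePage'
    ORDER BY version DESC LIMIT 1
""").fetchone()

# Jeff's optimization: join on the stored latestversion, so no per-query
# sort is needed, and it extends naturally to fetching many pages at once.
precomp_row = db.execute("""
    SELECT version, content FROM page, version
    WHERE page.id = version.id AND pagename = 'HomePage'
      AND version = latestversion
""").fetchone()

assert sort_row == precomp_row == (3, 'v3')
```

Both return the same row; the precomputed column trades an extra UPDATE on every save for cheaper, batchable reads.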
From: Cameron B. <ga...@in...> - 2003-03-02 00:01:13
|
> > i am using the latest phpwiki from cvs as of about 2 hours ago
> > (that's how long it's taken me to get this far)
> >
> > lib/WikiDB/backend/PearDB.php:681: Fatal[256]: wikidb_backend_pgsql:
> > fatal database error
> >
> >  * DB Error: unknown error
> >  * (UPDATE page SET hits=0,
> >    pagedata='a:1:{s:12:"_cached_html";s:875:"[... binary gzip-compressed data elided ...]
> >
> > i get that when it does "Loading up virgin wiki"
>
> Thanks for the report. Sorry for the trouble.

fact of life with free software

> It may be a quote problem as you suggest or may be a character
> encoding problem. (Pagedata is currently a TEXT, so postgres might be
> trying to interpret it as UTF-8? (I'm not a postgres expert))

I'm not certain on this, but I THINK pgsql requires to be sent a query
that tells it to use 'sql standard' quoting rather than addslashes, so I
think this is where the issue comes in. TEXT columns have no issue
storing binary data. (I don't normally use PEAR so I don't know exactly
what's causing the problems.)

> I'll look into the problem further, but probably not until monday.
>
> As a quick hack, if you want to get something up, you can try
> disabling compression of the cached data. If you look near the top of
> lib/CachedMarkup.php, there are two tests for
> function_exists('gzcompress').
> If you replace those by false's, that should defeat the compression.
>
> (Then wipe your SQL database, and try again...)

I highly suggest making that an install-time configuration value; it's
good for disk space but not brilliant for speed in most cases, and also
bad for stuff like this. PHP obviously also supports bzip2 as well as
gzip, which might be worth adding for those with limited space.

Next issue on the list: I now go to index.php after loading the virgin
wiki, as it so put it, and I get nothing (blank document, no html at
all). I added an echo into simpleQuery() in pear's pgsql.php and got
this:

SELECT * FROM page WHERE pagename='global_data'
BEGIN WORK
LOCK TABLE page
LOCK TABLE version
LOCK TABLE link
LOCK TABLE recent
LOCK TABLE nonempty
SELECT latestversion FROM page, recent WHERE page.id=recent.id AND pagename='HomePage'
SELECT * FROM page, version WHERE page.id=version.id AND pagename='HomePage' AND version=1
COMMIT WORK

It's obviously doing something, just not sure what *grin*. Adding
ob_implicit_flush() gives me nothing useful. Also nothing is found in
error_log.

Just a suggestion on the above: you can probably do

SELECT * FROM page, version WHERE page.id=version.id AND
pagename='HomePage' ORDER BY version DESC LIMIT 1

to get the latest version of a document and save a query.

Also just a note: for optimize() in pgsql.php it would be better to do
VACUUM FULL $table then ANALYZE $table; vacuum full will lock the table
completely tho and is slower. See
http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=sql-vacuum.html
for more info.

Also I am curious why the LOCK TABLE's are in the code? I would have
thought a transaction would have been sufficient? (By all means correct
me if I'm wrong.)

And last but not least, the file storage format won't work for me :(
I figured it might be an easy replacement for pgsql and be lightweight
in comparison, but obviously not yet. If anyone wants help with the file
class, I might have some time to work on them and help get it working.

Cameron Brunner
inetsalestech.com
 |
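Cameron's point about compression levels and bzip2 can be sanity-checked. A sketch comparing Python's zlib and bz2 modules (standing in, by assumption, for PHP's gzcompress/bzcompress) on repetitive markup-like text:

```python
import bz2
import zlib

# Repetitive text, loosely like cached page markup (an assumption).
data = b'<li><a href="HomePage">HomePage</a></li>\n' * 200

gz_fast = zlib.compress(data, 2)   # cheap level, "fine for on-the-fly stuff"
gz_best = zlib.compress(data, 9)   # maximum-effort level
bz_best = bz2.compress(data, 9)

# All three round-trip, and all shrink this kind of input dramatically,
# which is Jeff's point that the space argument mostly decides itself.
assert zlib.decompress(gz_fast) == data
assert zlib.decompress(gz_best) == data
assert bz2.decompress(bz_best) == data
assert len(gz_fast) < len(data) // 10
assert len(gz_best) < len(data) // 10
assert len(bz_best) < len(data) // 10
```

Whether bzip2's extra gain on real page data justifies its extra cpu is exactly the open question in the thread; on text this repetitive, even a low gzip level already compresses by more than 10x.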
From: Carsten K. <car...@us...> - 2003-03-01 16:48:33
|
Haven't had a chance to fully investigate the setlocale problem on Mac OS X. I saw this PHP warning today when loading up a virgin wiki. May or may not be directly related: /Library/WebServer/Documents/finkwiki/lib/FileFinder.php:248: Warning[2]: setlocale() [<a href='http://www.php.net/function.setlocale'>function.setlocale</a>]: Passing locale category name as string is deprecated. Use the LC_* -constants instead. Using 4.3.0pre1. Carsten |
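The warning above is about passing the category as a string instead of an LC_* constant. The same constant-based pattern exists in Python's locale module; the sketch below only illustrates the shape of the fix (the actual fix belongs in PhpWiki's lib/FileFinder.php, in PHP, as setlocale(LC_ALL, ...)):

```python
import locale

# Use the LC_* constant, not a string like "all": this mirrors what the
# deprecation warning in FileFinder.php is asking for on the PHP side.
result = locale.setlocale(locale.LC_ALL, 'C')
assert result == 'C'  # the minimal "C" locale is always available
```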
From: Jeff D. <da...@da...> - 2003-03-01 16:36:26
|
On Sat, 01 Mar 2003 20:30:00 +1000
Cameron Brunner <ga...@in...> wrote:

> i am using the latest phpwiki from cvs as of about 2hours ago (thats
> how long its taken me to get this far)
>
> lib/WikiDB/backend/PearDB.php:681: Fatal[256]: wikidb_backend_pgsql:
> fatal database error
>
>  * DB Error: unknown error
>  * (UPDATE page SET hits=0,
>    pagedata='a:1:{s:12:"_cached_html";s:875:"[... binary gzip-compressed data elided ...]
>
> i get that when it does "Loading up virgin wiki"

Thanks for the report. Sorry for the trouble.

It may be a quote problem as you suggest or may be a character encoding
problem. (Pagedata is currently a TEXT, so postgres might be trying
to interpret it as UTF-8? (I'm not a postgres expert))

I'll look into the problem further, but probably not until monday.

As a quick hack, if you want to get something up, you can try disabling
compression of the cached data. If you look near the top of
lib/CachedMarkup.php, there are two tests for
function_exists('gzcompress').
If you replace those by false's, that should defeat the compression.

(Then wipe your SQL database, and try again...)
 |
From: Cameron B. <ga...@in...> - 2003-03-01 10:30:02
|
i am using the latest phpwiki from cvs as of about 2 hours ago (that's
how long it's taken me to get this far)

lib/WikiDB/backend/PearDB.php:681: Fatal[256]: wikidb_backend_pgsql:
fatal database error

 * DB Error: unknown error
 * (UPDATE page SET hits=0,
   pagedata='a:1:{s:12:"_cached_html";s:875:"[... binary gzip-compressed data elided ...]

i get that when it does "Loading up virgin wiki"

set to use postgresql, pgsql version 7.3.2 with the schema loaded. there
was a user connection error before, but i fixed that, and there is 1 row
in the pages table, so i know for a fact it's connecting; it just seems
as tho the string quoting is bad. the problem is that modifying the
string quote function in pear is a BIG task, because i'd have to modify
it on every select as well, as far as i can see. anyone got any
suggestions or bugfixes for this? if someone needs access to the server
to fix this i'm willing to set it up.

please reply to all, not just the list; i'm not currently subscribed.

anything else that you might need about the setup should be at
http://www.inetsalestech.com/statistics/

Cameron Brunner
inetsalestech.com
 |
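The underlying diagnosis here, that hand-escaping binary strings for every query is fragile, is the classic argument for parameter binding. A sketch with SQLite from Python (PhpWiki's PEAR DB layer is PHP; this only illustrates the principle, not the actual fix):

```python
import sqlite3
import zlib

# Binary pagedata like the gzcompressed markup in the failing UPDATE
# (the serialized string here is a made-up stand-in).
blob = zlib.compress(b'a:1:{s:12:"_cached_html";...}' * 30, 9)

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE page (id INTEGER PRIMARY KEY, pagedata BLOB)')

# Parameter binding hands the bytes to the driver unmodified: there is
# no addslashes-style quoting to get wrong and no null-byte mangling.
db.execute('INSERT INTO page (id, pagedata) VALUES (?, ?)', (1, blob))

stored = db.execute('SELECT pagedata FROM page WHERE id = 1').fetchone()[0]
assert stored == blob  # the binary survives byte-for-byte
```

With string-interpolated SQL, the same blob would need driver-specific escaping at every call site, which is exactly the "modify it on every select" problem described above.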
From: Jeff D. <da...@da...> - 2003-02-27 00:01:04
|
> Here's a small patch to fix linkUnknownWikiWord in the MacOSX and > Portland themes. Thanks! It's been applied to CVS. |
From: Todd M. <tm...@ne...> - 2003-02-26 23:20:47
|
Here's a small patch to fix linkUnknownWikiWord in the MacOSX and Portland themes. -- Todd Mokros <tm...@ne...> |
From: Jeff D. <da...@da...> - 2003-02-26 23:15:36
|
On Wed, 26 Feb 2003 12:35:59 -0700
Aredridel <are...@nb...> wrote:

> May I suggest a "versioncompat.php" that, for example, moves
> HTTP_POST_VARS to _POST, HTTP_GET_VARS to _GET, if is_a is not
> defined, defines one (it's trivial to make a function that does the
> same), and various other things like that.
> If functions need _GET, one should global it, as it's not autoglobal
> in < 4.3, but there's no reason not to use the new names.

The reason we haven't moved to _GET in PhpWiki is exactly the autoglobal
semantics. Since you'd have to "global $_GET;" anyway, we might as well
do

    $_GET = &$GLOBALS['HTTP_GET_VARS'];

(or just use HTTP_GET_VARS.) It makes it explicit that we're not using a
real (autoglobal) $_GET.

As far as implementing replacement functions, a compat.php is a fine
idea. (We already define functions like this when they're needed, but
their definitions are scattered all over the place. Grep for
'function_exists'.) It's probably a good idea to normalize some of the
names, too. E.g. we have isa() instead of is_a() for historical reasons
(we invented it before PHP did, I think).
 |