From: Jeff D. <da...@da...> - 2003-03-02 17:03:32
(Points answered in order from simplest to hardest...)

Cameron Brunner <ga...@in...> wrote:

> And last but not least, the file storage format wont work for me :(
> figured it might be an easy replacement for pgsql and be lightweight in
> comparison but obviously not yet. If anyone wants help with the file
> class I might have some time to work on them and help get it working.

(For a fairly well-tested, lighter-weight backend, try the dba backend.)

The flat-file backend is a recent addition by Jochen Kalmbach
(<Jo...@ka...>).  The only bug I know of in it is that it won't work
on case-insensitive file systems (Windows).  That said, if you can
either find or fix bugs in it, do let one of us know (preferably
Jochen).

> I'm not certain on this but i THINK pgsql requires to be sent a query
> that tells it to use 'sql standard' quoting rather than addslashes so i
> think this is where the issue comes in. TEXT columns have no issue
> storing binary data. (I dont normally use PEAR so I dont know exactly
> whats causing the problems)

Thanks.  I'll look into that (probably tomorrow).  (If you feel like
taking a crack at it, feel free!)

> I highly suggest making that a install time configuration value, its
> good for disk space but not brilliant for speeds in most cases and also
> bad for stuff like this. PHP obviously also supports bzip2 as well as
> gzip, might be worth adding for those with limited space.

There is an unadvertised install-time config value which (is supposed
to) defeat the caching of the marked-up data altogether.  I was going
to suggest you try that, until I noticed that it's currently broken and
wouldn't help in your case (currently it prevents the use of the cached
data, but not the generation of it :-/).  I'll fix that, and make it an
advertised feature, next week.

I think speed is a non-issue for gzip compression.  The time to
parse/mark-up the page text is two or three orders of magnitude more
than the gzip time.  Same for gzip decompression vs. the rest of the
display code.  PhpWiki is not fast.

Bzip compression is slower than gzip.  I don't know if it's enough
slower to be an issue (probably not).  I didn't add bzip support just
because I figured the compression gain wasn't worth the added code
complexity.  The data being compressed, in this case, is very
compressible (rather repetitive text), and I don't think bzip will
compress it much better than gzip, in any case.

> Also I am curious why the LOCK TABLE's in the code? I would have
> thought a transaction would have been sufficient? (by all means
> correct me if im wrong)

It's because: 1) most of the developers (especially me) use mostly
MySQL, and 2) all (two) of the SQL backends share most of their code
--- which favors using the lowest-common-denominator locking paradigm.

It could certainly be fixed.  Given how slow the PHP part of the
PhpWiki code is, I'm not sure it's an issue worth worrying about.  If
you feel like taking it on, however, please feel free.

> Also just a note, for optimize() in pgsql.php it would be better to do
> VACUUM FULL $table then ANALYZE $table, vacuum full will lock the
> table completely tho and is slower,
> http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=sql-vacuum.html
> for more info.

Okay.  Any idea how long "takes much longer" is?  Are we talking a
second or two?  If it could be more than that, the delay might start to
become significant.  If not, then it's probably better to do the less
obtrusive ANALYZE automatically, and let the admin run the VACUUM FULL
manually (or via cron)...
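Something along these lines is what I have in mind --- just a sketch
(the table list comes from your trace below; $this->_dbh as the
backend's PEAR DB handle is an assumption on my part):

    // Sketch only: run the cheap ANALYZE on every optimize() call and
    // leave the slow, table-locking VACUUM FULL to the admin or cron.
    function optimize() {
        $tables = array('page', 'version', 'link', 'recent', 'nonempty');
        foreach ($tables as $table) {
            // ANALYZE just refreshes the planner statistics; it doesn't
            // take the exclusive lock that VACUUM FULL does.
            $this->_dbh->query("ANALYZE $table");
        }
    }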
> Just a suggestion on above, you can probably
> SELECT * FROM page, version WHERE page.id=version.id AND
> pagename='HomePage' ORDER BY version DESC LIMIT 1
> to get latest version of a document and save a query.

Yes.  The precomputed latestversion is an optimization, so that SQL
doesn't have to sort the versions every time.  Also, it allows one to
fetch information on the latest version of each page in one query.

Those two selects you note, though, could certainly be combined into a
single select, still using the precomputed latestversion (see the P.S.
below for a sketch of the sort of query I mean).  This would take a
lot of work though --- the reason for the separate selects has to do
with the WikiDB API.  The first SELECT results from the
$page = $dbi->getPage() call, and the second results from the
$version = $page->getCurrentRevision() call.

> Next issue on the list, i now goto index.php after loading the virgin
> wiki as it so put it and i get nothing (blank document, no html at
> all).
>
> I added an echo into simpleQuery() into pear's pgsql.php and got this:
>
> SELECT * FROM page WHERE pagename='global_data'
> BEGIN WORK
> LOCK TABLE page
> LOCK TABLE version
> LOCK TABLE link
> LOCK TABLE recent
> LOCK TABLE nonempty
> SELECT latestversion FROM page, recent WHERE page.id=recent.id AND
>   pagename='HomePage'
> SELECT * FROM page, version WHERE page.id=version.id AND
>   pagename='HomePage' AND version=1
> COMMIT WORK

Hmmm.  I'm stumped on that one.  Do you get HTTP headers back?

The best way to debug these things is to (looking at the source) trace
the program flow and add echoes along the way to figure out where
things crap out.  (My guess is that you're getting stuck somewhere in
displayPage() (in lib/display.php).)
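E.g., scatter trace lines like this through lib/display.php (just a
sketch --- where exactly to put them is up to you):

    // Temporary trace output.  flush() asks PHP to push it out
    // immediately, so you still see it even if the script dies
    // before the output buffer would normally be sent.
    echo "trace: got to displayPage()<br>\n";
    flush();

Then reload the page and see which trace lines make it out.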
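P.S.  Re combining those two selects: I'd envision something like this
(a guess based on the queries in your trace --- I haven't checked it
against the actual schema):

    SELECT version.*
      FROM page, recent, version
     WHERE page.id = recent.id
       AND page.id = version.id
       AND version.version = recent.latestversion
       AND page.pagename = 'HomePage';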