From: Jeff D. <da...@da...> - 2003-03-04 00:01:23
> > function pagename_to_filename ($pagename) {
> >     return preg_replace('/(?<! % | %.)([A-Z])/x', '^$1',
> >                         rawurlencode($pagename));
> > }
> >
> > This should escape "~PageName" to "%7E^Page^Name".
>
> That's one way, but the reason I suggested ord() was it will be quite
> fast. Consider that the filename length limit on Windows is 128 or 256
> (forget which); it's not much of an issue unless pages are 64/128 chars
> long. As for readability, it wouldn't be hard to make a little script
> that opened the dir and listed them like this.

The current official page name limit in PhpWiki is 100 characters.

I suspect my encoding is faster than your ord() method, since in mine the
inner loops are all done in C code. If you want to use ord(), the inner
loop (over each of the characters in the page name) has to be written in
PHP, since ord() only works on one character at a time. However, again, I
suspect either way would be plenty fast enough.

> Just retested it now (last time it wouldn't work at all, blank page all
> the time) and I got the same as I'm getting in PgSQL: blank pages after
> loading the virgin wiki.

Okay, so the problem you're seeing now (blank pages) probably is not tied
specifically to either backend.

> As for performance, you can't just write off optimizing it knowing that
> wiki is slow; you need to do what you can with all the little things
> first. If it's as simple as profiling each function and seeing what's
> the slowest, then so be it. (http://apd.communityconnect.com/ for
> profiling abilities)

When I wrote the gzcompress code I did profile it. On my (not very fast)
machine, the time to gzcompress a few kbytes of marked-up data was barely
measurable (a few milliseconds). That's truly insignificant compared to
the total page save time.

You're right that more careful profiling of PhpWiki would be highly
productive, and you're right that if one is to start worrying about
non-trivial optimization, the only way to start is by profiling. I was
unaware of APD; thanks for pointing that out.

> > I'm no pgsql expert. Are those begin/commits sufficient to prevent
> > concurrent PhpWikis from getting partially updated data? (When the
> > update happens over several UPDATE operations within a single
> > begin/commit, that is.)
>
> Yes, no data that is updated can be read by the other clients until the
> commit is sent. It will also allow multiple people to hit a table at
> once updating, and not stop the others from doing their thing before
> they can write.

That's not correct (at least not by default). In this morning's reading I
learned that, by default, you get "Read Committed" isolation, which does
indeed guarantee that you see no uncommitted data. But that's not
equivalent to a write lock, in that the state of the database can still
change in the middle of your transaction (through no action of yours).
It's a subtle enough problem that I'd be very leery about changing the
locking code on a "production" PhpWiki without some very careful
inspection and testing of the WikiDB code. (There's a sketch of the kind
of race I mean below.)

Ref: http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=transaction-iso.html

> It will also allow multiple people to hit a table at once updating, and
> not stop the others from doing their thing before they can write.

Actually, no, it won't.

> Enlighten me. I don't have much time, but for worthwhile projects and
> various things I'll use daily I'm willing to spend a bit of time on
> them.

Well, if you really want to get into the guts of the WikiDB, "use the
source, Luke!"
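
To make that isolation point concrete, here's roughly the kind of
read-modify-write I'm worried about. This is not the actual WikiDB code;
the table and column names are made up purely for illustration. Under the
default Read Committed isolation, two clients can both read version 5
inside their own transactions and both write version 6; something like
SELECT ... FOR UPDATE (or serializable isolation) is what actually makes
the second writer wait:

<?php
// Hypothetical read-modify-write of a page row in PostgreSQL.
// Table and column names are invented for illustration only.
$dbh = pg_connect('dbname=wiki user=wiki');

pg_query($dbh, 'BEGIN');

// A plain SELECT here would *not* block anyone else: under Read
// Committed, a second client in its own transaction could read the
// same version and later overwrite our update.  FOR UPDATE locks the
// row, so the second client's SELECT ... FOR UPDATE waits until we
// commit.
$name = pg_escape_string('HomePage');
$res  = pg_query($dbh, "SELECT version, content FROM page
                        WHERE pagename = '$name' FOR UPDATE");
$row  = pg_fetch_assoc($res);

$version = $row['version'] + 1;
$content = pg_escape_string($row['content'] . "\n(edited)");

pg_query($dbh, "UPDATE page SET version = $version,
                content = '$content' WHERE pagename = '$name'");

pg_query($dbh, 'COMMIT');
pg_close($dbh);
?>

It's exactly this sort of interaction between concurrent transactions
that I'd want to test carefully before touching the WikiDB locking code.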
If you wanted to contribute to PhpWiki, one project which is probably
pretty finite in scope, and which would be useful, would be to get APD
running and give us a report of some profiling statistics...

> I'll look into where it's dying more today; I've been quite busy.

That should be your first priority, PhpWiki-wise, I should think.
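
For what it's worth, getting a first profile out of APD doesn't take
much. A minimal sketch, assuming the APD extension is installed and
loaded; /tmp as the dump directory and index.php as the entry point are
just examples:

<?php
// Put this at the very top of the wiki's entry point (e.g. index.php),
// before any real work happens, to dump an APD profile for the request.
// Assumes the apd extension is loaded and the dump directory is
// writable by the web server.
if (function_exists('apd_set_pprof_trace')) {
    apd_set_pprof_trace('/tmp');
}
?>

Then load a few pages and run pprofp (the report script that ships with
APD) over the resulting /tmp/pprof.* files; a summary of which functions
dominate the wall-clock time is exactly the kind of report that would be
useful here.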