pas-dev Mailing List for Perl Application Server
Status: Beta
Brought to you by: mortis
From: Kyle R . B. <mo...@vo...> - 2002-04-29 18:09:25

> As soon as I scrape off the bitter, I'll help. How do you wanna do this?
> Outline what needs done, or just have at it?

I'd like to look at using DocBook [http://www.docbook.org], as that will allow us to target txt, html, ps, and pdf all at once.

I'd like to approach this a few ways. First, by finishing all of the internals documentation (the in-line POD API documentation). Then trying to come up with a table of contents for three major user-style manuals. I think a good approach would be to write an Administrator's Guide (installation, configuration, on-going maintenance, external software dependencies, system issues, etc.), a Developer's Guide (a user's guide - how the software is organized, a walk-through of how to develop PSP pages and Page objects [request handlers], a chapter on 'best practices', how to redirect, how to forward a request [and so on], a module reference [the API docs]), and an Internals Developer's Guide (for the people on this list - code organization, minimum coding/documentation standards, etc.). How does that sound?

Your idea of developing an outline is probably a good one. I'm up for doing that on the mailing list (which will help get input from multiple people, as well as be an archive of what we come up with). So, shall we comment on the 3 major user manuals as described above? Is that a good place to start? Does the description of their coverage sound appropriate?

If we [most of us] agree on those 3 major manuals as a starting point, then the next issue will be the TOCs for them. I think we should probably start with the Admin Guide and the Developer's Guide, as those will probably be the most used. The Internals Developer's Guide will probably be just fine as the POD API documentation.

What say ye?

Kyle

--
------------------------------------------------------------------------------
Wisdom and Compassion are inseparable.  -- Christmas Humphreys
mo...@vo...
http://www.voicenet.com/~mortis
------------------------------------------------------------------------------
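[Editor's note: the in-line POD API documentation mentioned above might look like the following sketch. The module and method names here are illustrative only, not taken from the Pas source.]

```perl
package Org::Bgw::Pas::Example;
use strict;
use warnings;

=head1 NAME

Org::Bgw::Pas::Example - illustrative module showing the in-line POD style

=head1 METHODS

=head2 handle($request)

Processes a single request and returns a response object.
Throws an exception on failure.

=cut

sub handle {
    my($self, $request) = @_;
    # ... build and return the response ...
    return;
}

1;
```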
From: John B. <jo...@po...> - 2002-04-29 14:27:41

> Comments or suggestions?

Do it, Kyle!

John Bywater.
From: Kyle R . B. <mo...@vo...> - 2002-04-29 14:06:11

To all who may be listening.

I was up in New York over the weekend hanging out with a colleague/friend of mine that works for a large financial institution. He does web development for them in Perl and Java. We worked together on building a large ecommerce site, so he's familiar with at least the basic design that Pas is based on. We got to talking, and he brought up that he had considered Pas (at least the PSP part of it) for a project at work. He said that even though Pas would have been a good fit, he could not get approval to use it because it is documented with alpha status, with a version number of less than 1.0.

I have used Pas for projects where I work, and have not experienced any issues with it. The other users that I have talked to (not that there are that many) have had similar experiences.

I'd like to propose that we make an effort to complete documenting the software, including a manual (with administrator, developer, and internals developer sections [if not as separate manuals]). If there are any kinks that can be worked out without having to modify run-time code (installation issues, etc.), I'd like to have those be made, and I'd like to move to at least a 3.0 version number -- 3.0 because frankly this version of Pas _is_ the third major version of the internal architecture of the software.

Comments or suggestions?

Thanks,

Kyle R. Burton
From: Kyle R . B. <mo...@vo...> - 2002-03-13 18:46:14

From ChangeLog:

* Minor optimization for most subroutines: change {my $self = shift;} to {my($self) = @_;}

I did this in stuff like Base.pm, Base/Common.pm and Pas/*.pm -- as time allows, I'll do it in the rest of the codebase.

I also removed sub _get_file from Base/Common.pm in favor of the newer (and more efficient) getFileAsScalar(). The only places where I saw it getting used were in Compiler.pm and in Config.pm.

Kyle R. Burton
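[Editor's note: for readers unfamiliar with the idiom change described above, here is a minimal sketch. The class and method names are made up for illustration; both forms behave identically, the difference is a minor one in how the invocant is unpacked from @_.]

```perl
package Widget;
use strict;
use warnings;

sub new { my($pkg) = @_; return bless { name => 'widget' }, $pkg; }

# Before: shift pops the invocant off @_ one element at a time.
sub name_shift {
    my $self = shift;
    return $self->{name};
}

# After: a single list assignment unpacks @_ in one operation.
sub name_list {
    my($self) = @_;
    return $self->{name};
}

package main;
my $w = Widget->new;
print $w->name_shift, " ", $w->name_list, "\n";   # widget widget
```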
From: Kyle R . B. <mo...@vo...> - 2002-03-13 15:18:50

> Last night I spent some time and got started on the dependency object
> creation. Any objections to RequestHandler using Data::Dumper? I didnt
> see a generic serialize_data method, but I may have been grepping in the
> wrong place. Session.pm had some stuff that was close to what I wanted,
> but its not reusable.

Since the compiler is the entity that discovers the dependency list, perhaps we should add a method to the compiler which constructs our dependency object for us?

  my $depInfo = $compiler->getDepInfo( $pspFile );

Which (at least for now) could be as simple as:

  sub getDepInfo {
    my($self,$file) = @_;
    my $dep;
    eval { $dep = do $file; };
    if($@) {
      $self->throw("Exception loading dependency info from '$file' : $@\n");
    }
    return $dep;
  }

hrm...maybe that should instead be:

  sub getDepInfo {
    my($self,$file) = @_;
    return Org::Bgw::Pas::Psp::Dependency->newFromFile( $file );
  }

Where:

  sub newFromFile {
    my($pkg,$file) = @_;
    my $dep;
    eval { $dep = do $file; };
    if($@) {
      die("Exception loading dependency info from '$file' : $@\n");
    }
    bless $dep, $pkg;
    return $dep;
  }

Serialization, IMO, should also be encapsulated in the Dep object:

  use Data::Dumper;

  sub saveToFile {
    my($self,$file) = @_;
    $self->writeFile( $file, Dumper(\$self) );
  }

> Would we benefit by having a $base->serialize_data(\$obj,\$d); method?

You can basically call Data::Dumper in 1 line of code (even if you want to twiddle its parameters). I don't see serialization being core enough to the application to warrant it being in the base class. Where else can we see it being used? Session, Dependency, and what else?

> Pass it a thingy and a scalar ref, it fills the ref with a serialized
> thingy. This would be a more generic thing that the other objects could
> take advantage of. This way RequestHandler doesnt 'know' about what
> we're doing to the dependency object,

I'm all for keeping the request handler in the dark about as much as possible -- unless it's something the request handler is directly responsible for. It'll keep the code simpler, cleaner and more maintainable. The less coupling we have in the objects in Pas the better.

> and if I ever want to store a
> serialized thing in a database or ldap, there's a simple way to get the
> data formatted. If we go this route, we could refactor Session.pm to use
> it. Along with a $base->deserialize_data(\$ref).
>
> What do you think? Am I over complicating this? Should I just do a
> simple Dump, write file?
>
> Nice trick with ->readFile btw. I like it.

I think we also did a ->writeFile(), and a [my $fh = $self->openFile()]. All of which throw exceptions when things go wrong. It's just basic abstraction/refactoring, but it helps keep lines of code to a minimum, which makes things easier to read and maintain.

I think you're on the right track -- thanks for bringing it up on the list for discussion before diving in. Regardless of how you decide to do it, at least we've got a handle on how it will fit in to the rest of the code.

Kyle R. Burton
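[Editor's note: the serialize_data/deserialize_data pair discussed above can be sketched with Data::Dumper as below. The function names come from the proposal in the thread, not from the Pas source; the leading '+' on the eval forces Perl to parse the dumped braces as a hashref rather than a code block.]

```perl
use strict;
use warnings;
use Data::Dumper;

# Serialize a structure to Perl source text.
sub serialize_data {
    my($obj) = @_;
    local $Data::Dumper::Terse    = 1;  # drop the leading '$VAR1 ='
    local $Data::Dumper::Sortkeys = 1;  # stable, diff-friendly output
    return Dumper($obj);
}

# Rebuild the structure by evaluating the dumped text.
sub deserialize_data {
    my($text) = @_;
    my $obj = eval '+' . $text;         # '+' => parse braces as a hashref
    die "Exception deserializing: $@" if $@;
    return $obj;
}

my $dep  = { includes => ['common.psp'], source => 'index.psp' };
my $copy = deserialize_data( serialize_data($dep) );
print $copy->{source}, "\n";   # index.psp
```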
From: Mental <Me...@Ne...> - 2002-03-13 15:03:10

Last night I spent some time and got started on the dependency object creation. Any objections to RequestHandler using Data::Dumper? I didn't see a generic serialize_data method, but I may have been grepping in the wrong place. Session.pm had some stuff that was close to what I wanted, but it's not reusable.

Would we benefit by having a $base->serialize_data(\$obj,\$d); method? Pass it a thingy and a scalar ref, it fills the ref with a serialized thingy. This would be a more generic thing that the other objects could take advantage of. This way RequestHandler doesn't 'know' about what we're doing to the dependency object, and if I ever want to store a serialized thing in a database or ldap, there's a simple way to get the data formatted. If we go this route, we could refactor Session.pm to use it. Along with a $base->deserialize_data(\$ref).

What do you think? Am I over complicating this? Should I just do a simple Dump, write file?

Nice trick with ->readFile btw. I like it.

--
Mental (Me...@Ne...)

I got tired of calling the movies to listen to what is playing so I bought the album.  --Steven Wright

GPG public key: http://www.neverlight.com/Mental.asc
From: Mental <Me...@Ne...> - 2002-03-12 18:59:02

When I initially fixed that issue with updating include files not causing the page object to be rebuilt, I did so with a flatfile. We'd discussed offline ways to use an object to contain that data. I'd like to decide what object we're going to use. I'm thinking something like an Org::Bgw::Pas::Psp::Dependency object. I've been hard pressed to think of other uses for it yet though. Should we just stick with the flat files?

I should hopefully have time this week to work on it, depending on how long it takes me to help my fiance's son get his report ready for class. As always, reality conspires to delay me :) Anyways, I just would like a little feedback before I start flinging code around.

--
Mental (Me...@Ne...)

One time I went to a drive-in in a taxi cab. The movie cost me $95.  --Steven Wright

GPG public key: http://www.neverlight.com/Mental.asc
From: Mental <Me...@Ne...> - 2002-03-08 17:09:39

I committed a fix for some irksome path nonsense. If an extra / was in the path to a registered URI it wouldn't work. As far as I'm concerned, if /usr/local/htdocs//index.html is valid for apache, it should be valid for pas as well. I did a quick fix in the request handler to strip down double slashes. Fixed my itch, hope it doesn't cause anyone trouble.

-- J.
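[Editor's note: the double-slash cleanup described above amounts to a one-line substitution. This is a minimal sketch; the actual fix lives in the Pas request handler and may differ in detail.]

```perl
use strict;
use warnings;

# Collapse any run of slashes in a URI path down to a single slash.
sub normalize_path {
    my($path) = @_;
    $path =~ s{/{2,}}{/}g;   # '//...' becomes '/'
    return $path;
}

print normalize_path('/usr/local/htdocs//index.html'), "\n";
# /usr/local/htdocs/index.html
```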
From: Kyle R . B. <mo...@vo...> - 2002-03-05 18:19:50

> That doesnt happen. CGI's get served as text.

Ok, then it doesn't work ;^)

> > Another possibility is to use stacked handlers - specify something like:
> >
> >   <Location /pas/>
> >     SetHandler perl-script
> >     PerlHandler Org::Bgw::Pas::RequestHandler Apache::PerlRun
> >   </Location>
> >
> > I think if you do that, Apache will keep calling handlers until one
> > of them returns OK, or all of them have been called.
>
> I'm trying that tonight. I think that should work.
> What I want to be able to do is have psp's comingled with everything. I
> was planning on making /pas/ the uri start for all registered page objects.

That's probably a good approach and should work just fine. The only issue I can see cropping up is with forward_request().

> So I was gonna have a <Files *.psp> section in my docroot directory. And a
> <Location> PerlHandler section for /pas/. I think it should do what I want.

Yeah, I like that approach.

Kyle
From: Kyle R . B. <mo...@vo...> - 2002-03-05 18:02:02

One other point. I think the way the Pas mod_perl Request Handler is set up, it should return DECLINED unless it thinks it's supposed to handle the request (i.e. if it's either a registered URI or a psp page). What that means is that for any other file type, Apache should be applying its usual rules. So CGIs should get executed if +ExecCGI is set and .cgi is registered to be handled correctly.

Another possibility is to use stacked handlers - specify something like:

  <Location /pas/>
    SetHandler perl-script
    PerlHandler Org::Bgw::Pas::RequestHandler Apache::PerlRun
  </Location>

I think if you do that, Apache will keep calling handlers until one of them returns OK, or all of them have been called.

Kyle
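[Editor's note: the DECLINED control flow described above can be sketched as follows. Under real mod_perl 1.x the constants come from Apache::Constants (OK is 0, DECLINED is -1); here they are stubbed so the sketch is self-contained, and the URI registry is a hypothetical stand-in for the pas.conf page mappings.]

```perl
package Org::Bgw::Pas::RequestHandlerSketch;
use strict;
use warnings;

# Stubs for `use Apache::Constants qw(OK DECLINED);`
use constant OK       => 0;
use constant DECLINED => -1;

# Hypothetical registry of bare URIs mapped to Page objects.
my %registered = ( '/pas/examples/therm/Conv' => 'Therm::ThermPage' );

sub would_handle {
    my($uri) = @_;
    # Claim only .psp pages and registered URIs; anything else is
    # DECLINED so Apache applies its usual rules (CGI, static files...).
    return DECLINED unless $uri =~ /\.psp$/ || exists $registered{$uri};
    # ... compile/dispatch the page object, send the response ...
    return OK;
}

print would_handle('/pas/examples/test.psp'), "\n";   # 0  (OK)
print would_handle('/cgi-bin/foo.cgi'), "\n";         # -1 (DECLINED)
```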
From: Kyle R . B. <mo...@vo...> - 2002-03-05 03:27:38

> For some reason the sourceforge list (which comes first) shows only 1
> message (the geocrawler archive does show the right number).

I'm not sure why it's there. The fact that it's listed as:

  pas-dev Archives 1 messages (SF Mail Archive beta)

makes me think that they're looking at servicing their own mailing lists but haven't yet gotten all the way there.

> It installed and ran quite easily. The only small glitch was your use of
> 00INDEX.HTML files, rather than index.html. I'm not sure where the form
> you use comes from, but index.html seems fairly standard on most apache
> servers, I think.

00INDEX.HTML was used because with Apache's default sorting order it will nearly always show up as the first file in the list. We didn't want to use an index.html because Apache would have shown that page instead of the directory listing. We purposely wanted to show the directory listing so users could see all of the examples and files from a web browser.

> One thing I was puzzled about was that in the pas.conf file, there are
> these lines:
>
>   pas.pages.test=TestPage
>   pas.pages.foobar.quzbaz=TestPage
>
>   pas.pages.examples.therm.Conv=Therm::ThermPage
>   pas.pages.examples.Redirect=Redirect
>   pas.pages.examples.dbExplorer.Explorer=DbExplorer::Explorer
>
> I don't understand how these relate to the following URL, which brings
> up the test example #1:
>
>   http://localhost/pas/examples/test.psp

They are 'registered' urls. What they do is map a url to a page object which will handle the request for the url, similarly to Java Servlets. If you have Pas configured as per the INSTALL instructions, then the line:

  pas.pages.examples.therm.Conv=Therm::ThermPage

registers the url:

  http://your.host.tld/pas/examples/therm/Conv

to be handled by a Therm::ThermPage object (which is derived from Org::Bgw::Pas::SessionPage). When an HTTP request is made to that url, a Therm::ThermPage object is constructed, and its execute() method is invoked to handle the request. Therm::ThermPage's execute looks like this:

  sub execute {
    my $self = shift;

    $self->set_default_temperature();
    $self->convert_temperatures();
    $self->save_values()  if($self->temperatures_are_valid());
    $self->clear_values() if($self->query()->param('clear'));

    return $self->forward_request('/examples/therm/therm.psp');
  }

It sets up a few defaults, performs any requested computations, saves valid computations to a history [a stack] in the session (and clears the stack if requested), and then forwards the request (which contains the results of the computation) to the psp page '/examples/therm/therm.psp', which renders the results in HTML.

> I've so far tried just the latest release. It wasn't clear what was in
> the CVS.

The current source tree in CVS should be stable enough for use. In fact right now I'm running an intranet website on the current HEAD.

> > Thanks for taking a look. Any feedback at all is welcome.
>
> There's a lot about it that I like. I like the simplicity of your
> templating language, and that it maps directly into a Perl class.

The model is directly analogous to JSP and Java Servlets. The templating language isn't actually a language; that's one of the ways in which PSPs are different from the templating languages. The PSP tags aren't really Perl embedded into HTML -- it's easier to understand if you think of it the other way around: it's markup [HTML] embedded in a Perl object. One of the easiest things you can do to see what I mean is to go into your psp-cache directory and look at the compiled page object that results from a PSP page. It's straight Perl -- a bit noisy, but nonetheless just standard Perl syntax. So there is no interpretation on the fly; in fact with Apache/mod_perl, once a PSP is compiled, Apache will keep the compiled page object in memory.

> I'm not sure I'm totally comfortable with your page object model. I
> would like to be able to abstract out pieces of the page, and then fit
> them back together, in the template. Really each piece (component?)
> needs to have its own implementation object, and a separate rendering
> object. I'd like to be able to fit the results back together dynamically.

Actually that's one of the main points of the page object model. As with the temperature conversion example, the idea is to implement all of the application's core functionality in normal Perl objects, use them within a page object (think servlet) to perform the business logic for the application, and then make the data available to the presentation layer (by placing data into the request and forwarding to a PSP page). Between using inheritance, using objects [that you write] outside of the Pas inheritance hierarchy, and choosing a strategy for the presentation layer, you can work completely within, or nearly completely outside of, the page object model. The page object model is largely there to provide standard support services, including session support, request processing, response buffering, and basic exception handling. If you work within the architecture, you don't need to worry about any of those issues; they're all handled for you implicitly.

> Your work on database connectivity looks interesting, but I haven't
> checked it out yet. I am definitely interested in reducing the amount of
> work I have to do when I implement yet-another-table in the database (I
> want to factor out much of the fetch/store/search/list/modify
> functionality).

It's a good thing you brought that up -- there have been some changes to that that need to be checked in to CVS. It's not connectivity so much as object generation. I personally prefer using methods on objects to accessing hash or array elements. It is easier to remember that '$userProfile->firstName()' is the first_name field from the user_profile table if it's a method, than it is to try to remember that '$ar->[3]' is the first_name. So instead of having to write all of these objects to represent records from the database, we generate them. Saves time and makes for cleaner code (in my opinion).

> I'm thinking that in order to understand what I *really* need to
> re-implement my website so it's maintainable, is to build the framework
> myself. So my current plan is to see what the most minimalist system I
> can possibly build is. I'm thinking of doing it in Ruby.

Well, the JSP/Servlet design is a good one to follow regardless of the language.

Thank you again for your feedback.

Best Regards,

Kyle R. Burton
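[Editor's note: the accessor-generation idea described above can be sketched as below. The real generator in Pas builds these from the database schema; here the package name comes from the example in the message and the field list is hard-coded for illustration.]

```perl
package UserProfile;
use strict;
use warnings;

sub new {
    my($pkg, %fields) = @_;
    return bless { %fields }, $pkg;
}

# Install one read-only accessor per field into the symbol table,
# so callers can write $profile->firstName instead of $ar->[3].
for my $field (qw(firstName lastName email)) {
    no strict 'refs';
    *{__PACKAGE__ . "::$field"} = sub { return $_[0]->{$field} };
}

package main;
my $profile = UserProfile->new(firstName => 'Ada', lastName => 'Lovelace');
print $profile->firstName, "\n";   # Ada
```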
From: Bob S. <bo...@or...> - 2002-03-05 01:19:10

> I've CC'd this on the pas-dev mailing list (the only list with any
> activity at this time). The SourceForge link to the mailing lists is:
>
>   http://sourceforge.net/mail/?group_id=19226
>
> The Geocrawler link for the dev list archive is here:
>
>   http://www.geocrawler.com/lists/3/SourceForge/9231/0/
>
> There should be about 9 messages from this month.

For some reason the sourceforge list (which comes first) shows only 1 message (the geocrawler archive does show the right number).

> How did the installation and configuration go? Were there any issues
> with the software?

It installed and ran quite easily. The only small glitch was your use of 00INDEX.HTML files, rather than index.html. I'm not sure where the form you use comes from, but index.html seems fairly standard on most apache servers, I think.

One thing I was puzzled about was that in the pas.conf file, there are these lines:

  pas.pages.test=TestPage
  pas.pages.foobar.quzbaz=TestPage

  pas.pages.examples.therm.Conv=Therm::ThermPage
  pas.pages.examples.Redirect=Redirect
  pas.pages.examples.dbExplorer.Explorer=DbExplorer::Explorer

I don't understand how these relate to the following URL, which brings up the test example #1:

  http://localhost/pas/examples/test.psp

> Also, are you using a 'release', or the current CVS tree? There has been
> some significant development since the last release.

I've so far tried just the latest release. It wasn't clear what was in the CVS.

> Thanks for taking a look. Any feedback at all is welcome.

There's a lot about it that I like. I like the simplicity of your templating language, and that it maps directly into a Perl class.

I'm not sure I'm totally comfortable with your page object model. I would like to be able to abstract out pieces of the page, and then fit them back together, in the template. Really each piece (component?) needs to have its own implementation object, and a separate rendering object. I'd like to be able to fit the results back together dynamically.

Your work on database connectivity looks interesting, but I haven't checked it out yet. I am definitely interested in reducing the amount of work I have to do when I implement yet-another-table in the database (I want to factor out much of the fetch/store/search/list/modify functionality).

I'm thinking that in order to understand what I *really* need to re-implement my website so it's maintainable, is to build the framework myself. So my current plan is to see what the most minimalist system I can possibly build is. I'm thinking of doing it in Ruby.

Thanks,
Bob
From: Kyle R . B. <mo...@vo...> - 2002-03-04 23:48:54

> I looked again, somehow I overlooked that there *is* a mailing list
> (with only one item in it). I did manage to get it working, and so far
> I'm impressed. I also see that you have started doing a small amount of
> work on it again, so that looks promising... I'll send comments from now
> on to the list.

I've CC'd this on the pas-dev mailing list (the only list with any activity at this time). The SourceForge link to the mailing lists is:

  http://sourceforge.net/mail/?group_id=19226

The Geocrawler link for the dev list archive is here:

  http://www.geocrawler.com/lists/3/SourceForge/9231/0/

There should be about 9 messages from this month.

How did the installation and configuration go? Were there any issues with the software? Also, are you using a 'release', or the current CVS tree? There has been some significant development since the last release.

Thanks for taking a look. Any feedback at all is welcome.

Best Regards,

Kyle R. Burton
From: Kyle R . B. <mo...@vo...> - 2002-03-04 22:05:25

I'd also like to point out that you're setting up the Pas Request Handler to handle your document root. Of course even if you choose a path to be handled by Pas, you could still use your type of configuration to allow .cgi's and .psp's to comingle in the same directory structure.

I'm curious - how does the configuration you've detailed affect registered URIs for Page objects? If I understand the configuration correctly, then the Pas Request Handler is only invoked for *.psp requests, and not for bare URIs (which is how the registered Page objects work). Is there a way to configure the pas request handler for everything, and then set up *.cgi to be handled by the normal cgi handler?

Kyle

> Hey. This is just an FYI, but when you setup pas with the following
> method:
>
>   PerlInitHandler Apache::StatINC
>   PerlSetEnv PAS_BASE "/usr/local/httpd/"
>   PerlRequire /usr/local/httpd/src/startup.pl
>   <Location />
>     SetHandler perl-script
>     PerlHandler Org::Bgw::Pas::RequestHandler
>   </Location>
>
> It kind of breaks other handlers you have for cgi's/whatnot.
>
> What I've done, and what may save others a few minutes on apache.org's
> site, is setup my environment to allow psp's and cgi's to coexist.
>
> In the DocumentRoot's Directory section of your httpd.conf do this:
>
>   <Directory "/usr/local/httpd/htdocs">
>     Options Indexes -FollowSymLinks +Includes MultiViews +ExecCGI
>     AllowOverride None
>     Order allow,deny
>     Allow from all
>     <Files *.psp>
>       SetHandler perl-script
>       PerlHandler Org::Bgw::Pas::RequestHandler
>     </Files>
>   </Directory>
>
> This way your aliases like /server-info and /perl-status will continue to
> be handled by their appropriate Handlers.
>
> Just a quick tip. I'll update the docs in cvs later tonight.
>
> -- Mental (Me...@Ne...)
From: Mental <Me...@Ne...> - 2002-03-04 21:38:33

Hey. This is just an FYI, but when you setup pas with the following method:

  PerlInitHandler Apache::StatINC
  PerlSetEnv PAS_BASE "/usr/local/httpd/"
  PerlRequire /usr/local/httpd/src/startup.pl
  <Location />
    SetHandler perl-script
    PerlHandler Org::Bgw::Pas::RequestHandler
  </Location>

It kind of breaks other handlers you have for cgi's/whatnot.

What I've done, and what may save others a few minutes on apache.org's site, is setup my environment to allow psp's and cgi's to coexist. In the DocumentRoot's Directory section of your httpd.conf do this:

  <Directory "/usr/local/httpd/htdocs">
    Options Indexes -FollowSymLinks +Includes MultiViews +ExecCGI
    AllowOverride None
    Order allow,deny
    Allow from all
    <Files *.psp>
      SetHandler perl-script
      PerlHandler Org::Bgw::Pas::RequestHandler
    </Files>
  </Directory>

This way your aliases like /server-info and /perl-status will continue to be handled by their appropriate Handlers.

Just a quick tip. I'll update the docs in cvs later tonight.

--
Mental (Me...@Ne...)

There was a power outage at a department store yesterday. Twenty people were trapped on the escalators.  --Steven Wright

GPG public key: http://www.neverlight.com/Mental.asc
From: Mental <Me...@Ne...> - 2002-03-01 22:29:45

On Fri, Mar 01, 2002 at 03:01:19PM -0500, Kyle R . Burton wrote:
> Thats right. I think we're kind of saying the same thing. Just from
> different approaches. If we create the build tool as an object so either
> the request handler, _or_ some command line tool can invoke it to
> both decide if a file should be recompiled, and recompile it, then great.
>
> I just don't want duplicated code/behavior -- since both the build tool
> and the request handler have to perform the 'is out of date' check based
> on dep information, the compiler feels like the right place to put it for
> now. Though a build tool as an object would be better.

I was thinking of having the build tool invoke the RequestHandler. But now that I think about it we could do what you're talking about in the compiler. But I still think the compiler should just compile.

Maybe the request handler should talk to an object broker that knows how to talk to the compiler (if it thinks it needs to). This way both the build tool and the request handler share the same code (the object broker) and the compiler stays a compiler. My fear is that we'll wind up over engineering the compiler and wind up with emacs. :) I just want something that compiles stuff.

I have no problems at all with asking Org::Bgw::Pas::ObjectBroker for an object. In fact, I could care less what it does to get me the object, so long as it tells me where it is. Besides, then we'll be all buzz word compliant :) At least I think so. Seems to fit the definition:

In brokering a client request, an ORB may provide all of these services:

  * Life cycle services, which define how to create, copy, move, and delete a component
  * Persistence service, which provides the ability to store data in object databases and plain files
  * Naming service, which allows a component to find another component by name and also supports existing naming systems or directories, including DCE and Sun's NIS (Network Information System)
  * Event service, which lets components specify events that they want to be notified of
  * Concurrency control service, which allows an ORB to manage locks to data that transactions or threads may compete for
  * Transaction service, which ensures that when a transaction is completed, changes are committed, or that, if not, database changes are restored to their pre-transaction state
  * Relationship service, which creates dynamic associations between components that haven't "met" before and keeps track of these associations
  * Externalization service, which provides a way to get data to and from a component in a "stream"
  * Query service, which allows a component to query a database. This service is based on the SQL3 specification and the Object Database Management Group's (ODMG) Object Query Language (OQL)
  * Licensing service, which allows the use of a component to be measured for purposes of compensation for use. Charging can be done by session, by node, by instance creation, and by site
  * Properties service, which lets a component contain a self-description that other components can use

Maybe my definition is a bit off, but you get the idea. I hope.

--
Mental (Me...@Ne...)

I watched the Indy 500, and I was thinking that if they left earlier they wouldn't have to go so fast.  --Steven Wright

GPG public key: http://www.neverlight.com/Mental.asc
From: Kyle R . B. <mo...@vo...> - 2002-03-01 20:01:36
|
> I was thinking just using perl code. I'm on the fence about using objects. > I suppose it wouldnt be a big deal either way. But if the dep file > contained an array called @includes you could find the includes. If I had > a hash called %metatags you could do something with that. I'm not sure > what else we'd put in it, but this leaves the door open. I preferr objects to naked data structures. An object's methods are in at least a small part documentation about what they are -- if we keep following the convention of POD documenting all of the methods, then any data element that shows up in the .meta file can be tracked directly back to some human readable documentation. If it's a naked hash the elements have absolutely no meaning. Objects are much more opaque, which actualy aids in understanding of what's going on. Object methods also represent actions, behavior. Data structures don't represent anyting other than a grouping of dta elements. That might beon one of the core reasons OO is [generaly] considered easier to understand than procedural programming. Especially as things scale up. > In my mind, needing ot flush the cache when you upgrade pas isnt a _huge_ > deal. In fact, the app could tell if the page object needed rebuilt by > doing an eval on the .dep(meta) file. I refuse to let a little thing like > backwards compatibility deter improoved functionality. Safve that mindset > for windows :) Yeah, but if we include a data member VERSION from the start, we can have the app self check itself. We like self checking/verifying code right? It's like a unit test. It might not need to be part of the request handler, but we could make it part of the build tool, or part of a system check program that users are supposed to run between upgrades. If we do it we have the opportunity to win, even if we decide not to do it. If we don't do it, we have no opportunity to win. Is it worth the extra effort? I think it is. 
> > hrm, but that prevents us from just using "do $file;" to re-gen the
> > file itself...
>
> Overkill. Just flush it if you can't eval it. It's generated crud. No user
> data is in there, so nothing is lost. Brute force good. Thinking bad.

> It stands for a word that starts with s. And sounds funny. :)
> What's awk stand for? If you answer that I'll shoot you :)

Aho, Weinberger and Kernighan (see man awk). It's the initials of the authors of the tool. X-)

> I figured the opposite. The RequestHandler knows and cares about where
> stuff is and how it should be used and if it should be served up. The
> Compiler is stupid.

Uhm, it's the compiler that decides how to turn a psp file path into a package name, and then turn a package name into a psp-cache file name. I'm not talking about coding the decision to recompile into the compiler -- just the functionality your patch added: functions that check for file existence, for the ctime criterion, for the dependency ctime criterion. Let the external caller decide whether compilation should be done, but the compiler should be the thing that answers the following questions:

    "does the compiled page exist"
    "is the psp newer than the compiled page"
    "are any of the includes newer than the compiled page"

and ultimately:

    "is the compiled page out of date"

Regardless of these answers, the request handler, or the build tool [pants], will be the thing that decides whether the page should be recompiled. I just think we should hide all of this behavior behind an interface -- the compiler feels like the right place to do that. The request handler should be just that -- a request handler. It might check for recompilation, but it shouldn't have to figure out dependency relationships (that's the compiler's job). At least in my opinion.

> RequestHandler says "Do this", compiler does as it's told. If it's asked for
> dependency data, it'll provide it. Otherwise why should it care? What if
> dynamic recompile is off? What if... I dunno.
> I just don't see the harm of letting RequestHandler decide on what The
> Right Thing is and telling the compiler how (and when) to do it.
>
> But, if you want to move it that's fine. I just figured the RequestHandler
> was already doing the time checks (and so will our precompiler). The time
> doesn't affect _how_ a psp is compiled, so why should it matter to the
> compiler? gcc doesn't look at mtime, make does.

That's right. I think we're kind of saying the same thing, just from different approaches. If we create the build tool as an object, so that either the request handler _or_ some command line tool can invoke it both to decide whether a file should be recompiled and to recompile it, then great. I just don't want duplicated code/behavior -- since both the build tool and the request handler have to perform the 'is out of date' check based on dep information, the compiler feels like the right place to put it for now. Though a build tool as an object would be better.

k

--
------------------------------------------------------------------------------
Wisdom and Compassion are inseparable.
        -- Christmas Humphreys
mo...@vo...  http://www.voicenet.com/~mortis
------------------------------------------------------------------------------
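[Editor's note: a minimal Perl sketch of the staleness interface the thread describes. The sub names and the idea of passing the dependency list in explicitly are assumptions for illustration -- the real Compiler.pm may organize this differently.]

```perl
#!/usr/bin/perl
# Sketch of the four questions the compiler should answer, per the mail
# above. Names are hypothetical, not the actual Pas API.
use strict;
use warnings;

# "does the compiled page exist"
sub compiledPageExists { my ($compiled) = @_; return -e $compiled; }

# "is the psp newer than the compiled page" (mtime comparison)
sub pspIsNewer {
    my ( $psp, $compiled ) = @_;
    return 1 unless -e $compiled;
    return ( stat($psp) )[9] > ( stat($compiled) )[9];
}

# "are any of the includes newer than the compiled page"
sub depsAreNewer {
    my ( $compiled, @deps ) = @_;
    return 1 unless -e $compiled;
    my $built = ( stat($compiled) )[9];
    for my $dep (@deps) {
        return 1 if !-e $dep || ( stat($dep) )[9] > $built;
    }
    return 0;
}

# and ultimately: "is the compiled page out of date" -- the caller
# (request handler or build tool) decides what to do with the answer.
sub isOutOfDate {
    my ( $psp, $compiled, @deps ) = @_;
    return !compiledPageExists($compiled)
        || pspIsNewer( $psp, $compiled )
        || depsAreNewer( $compiled, @deps );
}
```

This keeps the policy (whether to recompile) in the caller while the mechanism (dependency/mtime checks) lives behind the compiler's interface, as the mail argues.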
From: Mental <Me...@Ne...> - 2002-03-01 19:09:47
On Fri, Mar 01, 2002 at 01:49:55PM -0500, Kyle R . Burton wrote:
> Why have the meta data contain perlcode? Oh, wait, I think I get it. We
> make the meta data an object, so it can store _more_ information than just
> includes (in the future of course). So the pageObject.pm.deps becomes
> pageObject.pm.meta, which is just a serialized Org::Bgw::Pas::Psp::PageMeta
> object? That way in the code we can:
>
>     use Data::Dumper;
>     $self->writeFile( $metaData, Dumper( $metaData ) );
>
> and:

I was thinking just using perl code. I'm on the fence about using objects. I suppose it wouldn't be a big deal either way. But if the dep file contained an array called @includes you could find the includes. If I had a hash called %metatags you could do something with that. I'm not sure what else we'd put in it, but this leaves the door open.

>     my $metaData = do $metaData;
>
> Then as we update the PageMeta object, everything continues to work?
>
> What about version differences? We have issues whether we use a
> syntax for the meta data file or serialized perl -- we
> might add/remove/rename properties of the PageMeta object, which would
> break old versions of the .meta files.

In my mind, needing to flush the cache when you upgrade pas isn't a _huge_ deal. In fact, the app could tell if the page object needed rebuilding by doing an eval on the .dep(meta) file. I refuse to let a little thing like backwards compatibility deter improved functionality. Save that mindset for windows :)

> Maybe we should store a version as the first thing in the meta file:
>
>     use Data::Dumper;
>     $self->writeFile( $metaData, "V" . $Org::Bgw::Psp::PageMeta::VERSION . "\n" . Dumper( $metaData ) );
>
> hrm, but that prevents us from just using "do $file;" to re-gen the
> file itself...

Overkill. Just flush it if you can't eval it. It's generated crud. No user data is in there, so nothing is lost. Brute force good. Thinking bad.
> Oh, wait, we can store the version as a data element of the PageMeta object.
> Then after we re-load it:
>
>     my $metaData = do $metaData;
>     if( $metaData->versionMisMatch() ) {
>         # throw out this meta data, and try to recompile the page to
>         # bring the meta data up to date because versions changed...
>         $self->warning("Meta version mismatch, attempting to recompile.\n");
>     }

> > Actually, I should write an antlike tool for the site and call it pants
> > :)
>
> I get "Perl Ant", but what's the 's' stand for?

It stands for a word that starts with s. And sounds funny. :) What's awk stand for? If you answer that I'll shoot you :)

> There is already an attempt at a build tool in the bin directory. Though
> compared to the new features you're adding, it's lacking.

So? It's a start and can be smothered in fixes.

> Oh, another thing -- we should look at refactoring your changes into the
> compiler itself. I think it'd be more widely usable if the dependency
> checking were part of the compiler -- it doesn't feel like it should be
> part of the request handler.

I figured the opposite. The RequestHandler knows and cares about where stuff is and how it should be used and if it should be served up. The Compiler is stupid. RequestHandler says "Do this", compiler does as it's told. If it's asked for dependency data, it'll provide it. Otherwise why should it care? What if dynamic recompile is off? What if... I dunno. I just don't see the harm of letting RequestHandler decide on what The Right Thing is and telling the compiler how (and when) to do it.

But, if you want to move it that's fine. I just figured the RequestHandler was already doing the time checks (and so will our precompiler). The time doesn't affect _how_ a psp is compiled, so why should it matter to the compiler? gcc doesn't look at mtime, make does.

--
Mental (Me...@Ne...)
It's a small world, but I wouldn't want to have to paint it.
        --Steven Wright

GPG public key: http://www.neverlight.com/Mental.asc
From: Kyle R . B. <mo...@vo...> - 2002-03-01 18:50:14
> After I make the metadata contain perlcode you can eval instead of being a
> flat file list, I'm going to write a tool to compile psp's. Sort of like
> jspc or whatever.

Why have the meta data contain perlcode? Oh, wait, I think I get it. We make the meta data an object, so it can store _more_ information than just includes (in the future of course). So the pageObject.pm.deps becomes pageObject.pm.meta, which is just a serialized Org::Bgw::Pas::Psp::PageMeta object? That way in the code we can:

    use Data::Dumper;
    $self->writeFile( $metaData, Dumper( $metaData ) );

and:

    my $metaData = do $metaData;

Then as we update the PageMeta object, everything continues to work?

What about version differences? We have issues whether we use a syntax for the meta data file or serialized perl -- we might add/remove/rename properties of the PageMeta object, which would break old versions of the .meta files.

Maybe we should store a version as the first thing in the meta file:

    use Data::Dumper;
    $self->writeFile( $metaData, "V" . $Org::Bgw::Psp::PageMeta::VERSION . "\n" . Dumper( $metaData ) );

hrm, but that prevents us from just using "do $file;" to re-gen the file itself...

Oh, wait, we can store the version as a data element of the PageMeta object. Then after we re-load it:

    my $metaData = do $metaData;
    if( $metaData->versionMisMatch() ) {
        # throw out this meta data, and try to recompile the page to
        # bring the meta data up to date because versions changed...
        $self->warning("Meta version mismatch, attempting to recompile.\n");
    }

> Actually, I should write an antlike tool for the site and call it pants
> :)

I get "Perl Ant", but what's the 's' stand for?

There is already an attempt at a build tool in the bin directory. Though compared to the new features you're adding, it's lacking.

Oh, another thing -- we should look at refactoring your changes into the compiler itself.
I think it'd be more widely usable if the dependency checking were part of the compiler -- it doesn't feel like it should be part of the request handler.

k

--
------------------------------------------------------------------------------
Wisdom and Compassion are inseparable.
        -- Christmas Humphreys
mo...@vo...  http://www.voicenet.com/~mortis
------------------------------------------------------------------------------
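[Editor's note: the Dumper/do round trip discussed in the thread can be sketched as a small self-contained script. The file name, the version field, and the flush-on-failure policy follow the mails above; everything else (the `$META_VERSION` variable, the key names) is an illustrative assumption, not the actual Pas code.]

```perl
#!/usr/bin/perl
# Minimal sketch of the .meta round trip: serialize meta data with
# Data::Dumper, reload it with "do", and flush it if it can't be
# eval'd or its version doesn't match ("brute force good").
use strict;
use warnings;
use Data::Dumper;

our $META_VERSION = '1.0';    # hypothetical stand-in for PageMeta::VERSION

my $file = '/tmp/pageObject.pm.meta';

# Write: Dumper emits "$VAR1 = { ... };", which "do" can re-evaluate.
my $meta = {
    version  => $META_VERSION,
    includes => [ 'header.psp', 'footer.psp' ],
};
open my $out, '>', $file or die "can't write $file: $!";
print {$out} Dumper($meta);
close $out;

# Read: "do FILE" returns the value of the last expression in the file,
# or undef if the file can't be parsed -- in which case we just unlink
# it and let the page be recompiled. No user data is lost.
my $loaded = do $file;
if ( !defined $loaded || $loaded->{version} ne $META_VERSION ) {
    unlink $file;
    warn "Meta stale or unreadable, recompiling.\n";
}
```

Storing the version as a data element (rather than a "V..." prefix line) keeps the file a single evaluable Perl expression, which is what makes the plain `do $file` reload work.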
From: Mental <Me...@Ne...> - 2002-03-01 18:37:12
After I make the metadata contain perl code you can eval, instead of a flat file list, I'm going to write a tool to compile psp's. Sort of like jspc or whatever.

Actually, I should write an ant-like tool for the site and call it pants :)

--
Mental (Me...@Ne...)
If you can't hear me, it's because I'm in parentheses.
        --Steven Wright

GPG public key: http://www.neverlight.com/Mental.asc
From: Mental <Me...@Ne...> - 2002-03-01 17:03:11
On Fri, Mar 01, 2002 at 11:26:27AM -0500, Kyle R . Burton wrote:
> v3.0.14 Fri Mar  1 11:03:37 EST 2002
>     * Applied patch from Jason Stelzer <me...@ne...> that adds
>       dependency (include file) checking for dynamic recompilation of the
>       PSP files from within the mod_perl request handler. After applying
>       the patch, minor refactoring of the code was done.
>
>       The following files were affected:
>
>       AUTHORS
>       ChangeLog
>       FAQ
>       INSTALL
>       TODO
>       conf/pas.conf
>           - applied patch
>       src/Org/Bgw/Base/Common.pm
>           - added new methods: openFile(), readFile(), writeFile(),
>             getFileAsLines(), and getFileAsScalar()
>       src/Org/Bgw/Pas/RequestHandler.pm
>           - applied patch
>           - refactored some of the code - Jason, since it was your patch,
>             please review the changes
>       src/Org/Bgw/Pas/Psp/Compiler.pm
>           - applied patch
>
> This is marked as 3.0.14 in the ChangeLog, but I'm not going to put
> together a release at this time.
>
> Jason, thanks for the patch!

Thanks for applying it! I wanted to get our two trees in sync. I kind of spaced on a few things; there are a few more changes I'll need to make. I'll be syncing up my code with the trunk tonight. Still, for only spending a couple hours doing a quick hack, it's a nice improvement.

--
Mental (Me...@Ne...)
I just got out of the hospital. I was in a speed reading accident. I hit a book mark and flew across the room.
        --Steven Wright

GPG public key: http://www.neverlight.com/Mental.asc
From: Kyle R . B. <mo...@vo...> - 2002-03-01 16:26:43
v3.0.14 Fri Mar  1 11:03:37 EST 2002
    * Applied patch from Jason Stelzer <me...@ne...> that adds
      dependency (include file) checking for dynamic recompilation of the
      PSP files from within the mod_perl request handler. After applying
      the patch, minor refactoring of the code was done.

      The following files were affected:

      AUTHORS
      ChangeLog
      FAQ
      INSTALL
      TODO
      conf/pas.conf
          - applied patch
      src/Org/Bgw/Base/Common.pm
          - added new methods: openFile(), readFile(), writeFile(),
            getFileAsLines(), and getFileAsScalar()
      src/Org/Bgw/Pas/RequestHandler.pm
          - applied patch
          - refactored some of the code - Jason, since it was your patch,
            please review the changes
      src/Org/Bgw/Pas/Psp/Compiler.pm
          - applied patch

This is marked as 3.0.14 in the ChangeLog, but I'm not going to put together a release at this time.

Jason, thanks for the patch!

Kyle

--
------------------------------------------------------------------------------
Wisdom and Compassion are inseparable.
        -- Christmas Humphreys
mo...@vo...  http://www.voicenet.com/~mortis
------------------------------------------------------------------------------
From: Mental <Me...@Ne...> - 2002-02-28 15:50:46
On Thu, Feb 28, 2002 at 10:33:31AM -0500, Kyle R . Burton wrote:
> Then the compiler can "eval { use $package; };" and then get at the dependency
> array. I think that is a better solution than making it a method on the
> page object itself - the reason is so the compiler doesn't have to instantiate
> an instance of the page object before it can compile it. Hrm... that raises
> another issue - what if the psp has compile time errors in it (typos)?
>
> What about storing dependency data separate from the generated PageObject.pm
> file? Maybe in a mirror file called PageObject.deps? It could just be a
> list of the psp pages that went into it... That way the compiler wouldn't have
> to look at any of the compiled page object's code, so it wouldn't matter
> how broken up it was; the dependency data would all be external - and maybe
> faster for the compiler to access too.

Having it in an external .dep file will make writing an ant clone easier. This way we can do that idea about precompiling all the pages on a site pre-release, and the webserver NEVER has write access ANYWHERE important. Like I said before, compiling on the fly for dev is fine, but in a production environment, why would you want that? You'd want to release a prebuilt production release. Anyways... yeah. I think a metadata file for includes (and perhaps other stuff as it comes up) is a good idea.

> Actually, after saying that, I'm now settled on the decision - we're going
> to remove prev() from Pas. It encourages coupling between the page objects
> and the psp pages, which is bad OO.

Fair enough.

> > In my mind it might be a good idea to keep our stuff in its own sandbox.
> > That way we can differentiate between stuff we inserted and stuff that the
> > browser passed. I'm not sure the best way to do this, but it could be
> > something like $self->query() is for cgi.pm stuff (the browser post/get
> > crud).
> > Something like $self->pasdata() could be used for stuff the
> > application generates and uses internally. There would be no way for the
> > browser to get its evil into it and it could therefore be more trusted to
> > contain 'good' data. Segregate user input from application generated
> > stuff. Or maybe I should stop huffing paint.
>
> That's a great point. Keeping application data away from get/post data
> is a good separation paradigm. As to where we store it, we should try
> to come up with a data object (just a get/set storage container, with
> a design similar to the session - dumb key/value pair storage) that is
> used for passing that kind of data. One nice thing about that, mod_perl,
> and the request handler is that we can make that storage object global
> for the http request itself.
>
> But what do we call it that will be clear and make sense?

$self->__ ? :)

> Well, that's one option at least.
>
>     <!-- page1.psp -->
>     <% return $self->forward_request('page2.psp'); %>
>     ...
<SNIP>

Personally, I will always have $self->forward_request('/path/to/psp'); Yes, it's double edged, but no matter how you do it, if you move stuff around it's a lot of maintenance.

> I think all those examples sum up all of the ways a forward can
> be invoked. I am of the opinion that all of those should work.
> The last one raises the issue of the Pas document root vs the
> web server's doc root. Which one is at '/' for the purposes
> of forwarding? Currently '/' is the Pas doc root for forwarding,
> and '/' is the webserver's doc root for the purposes of http redirection
> (http 302 responses).

The docroot and pasroot are one and the same for me. :)

--
Mental (Me...@Ne...)
Ever notice how irons have a setting for *permanent* press? I don't get it...
        --Steven Wright

GPG public key: http://www.neverlight.com/Mental.asc
From: Kyle R . B. <mo...@vo...> - 2002-02-28 15:33:49
> At this point I'm actually doing stuff. So I plan on patching up the
> compiler first. We can do the rewrite as a phase 2. Actually, we could
> start the rewrite now as I've forked off my own version of pas for my use.
> Still, I don't think it'd take me long to fix up the includes stuff so I might
> as well do it as an interim fix.

Yeah, the best thing I think we can do to solve this is to have the compiler track all the dependent files and put them into the compiled page object:

    ...
    use strict;
    use warnings;
    use Org::Bgw::Pas::Page;
    our @ISA  = qw( Org::Bgw::Pas::Page );
    our @DEPS = qw( file1.psp file2.psp file3.psp );
    ...

Then the compiler can "eval { use $package; };" and then get at the dependency array. I think that is a better solution than making it a method on the page object itself - the reason is so the compiler doesn't have to instantiate an instance of the page object before it can compile it. Hrm... that raises another issue - what if the psp has compile time errors in it (typos)?

What about storing dependency data separate from the generated PageObject.pm file? Maybe in a mirror file called PageObject.deps? It could just be a list of the psp pages that went into it... That way the compiler wouldn't have to look at any of the compiled page object's code, so it wouldn't matter how broken up it was; the dependency data would all be external - and maybe faster for the compiler to access too.

> Just because it isn't part of the servlet model doesn't mean it's not a good
> idea. The servlet model has its own shortcomings. I just got done helping
> Kevin sort out some stuff that I feel was a shortcoming of the whole
> servlet thing. I'd go into it, but it's long, I forget most of the details
> and I'm busy. :)

Yeah, for the MEL project we spent the bulk of the design time getting around the anti-OO nature of the servlet model. It was annoying, but by the time we finished coming up with the page model, it felt just about the same as Pas.
> Wouldn't the stuff you wanted to see be in $self->prev()?
> Shouldn't it be? Am I not getting it?

Sort of. You could access a get/set from the prev() page object. The idea is to pass data to the psp page for it to display. If it has access to methods on the prev() page object, it gets coupled (tied) to the interface of that page object -- which keeps you from forwarding to that particular psp from some other page object. Does that make sense? So it's better to have it pull its data from some stash instead of from methods directly on the prev() page.

Actually, after saying that, I'm now settled on the decision - we're going to remove prev() from Pas. It encourages coupling between the page objects and the psp pages, which is bad OO.

> In my mind it might be a good idea to keep our stuff in its own sandbox.
> That way we can differentiate between stuff we inserted and stuff that the
> browser passed. I'm not sure the best way to do this, but it could be
> something like $self->query() is for cgi.pm stuff (the browser post/get
> crud). Something like $self->pasdata() could be used for stuff the
> application generates and uses internally. There would be no way for the
> browser to get its evil into it and it could therefore be more trusted to
> contain 'good' data. Segregate user input from application generated
> stuff. Or maybe I should stop huffing paint.

That's a great point. Keeping application data away from get/post data is a good separation paradigm. As to where we store it, we should try to come up with a data object (just a get/set storage container, with a design similar to the session - dumb key/value pair storage) that is used for passing that kind of data. One nice thing about that, mod_perl, and the request handler is that we can make that storage object global for the http request itself.

But what do we call it that will be clear and make sense?

> I'll have to play with forward_request a little to see what you mean.
> It might not be a bad idea to forward to fully pathed uri's anyways. At least
> from a paranoid insecure standpoint. But that could just be me
> misunderstanding something I'm unfamiliar with.

Well, that's one option at least.

    <!-- page1.psp -->
    <% return $self->forward_request('page2.psp'); %>
    ...
    <!-- page2.psp in same directory as page1.psp -->
    <% return $self->forward_request('../page3.psp'); %>
    ...
    <!-- page3.psp one directory up from page[12].psp -->
    <% return $self->forward_request('foo/page4.psp'); %>
    ...
    <!-- page4.psp one directory down from page[3].psp -->
    <% return $self->forward_request('/page5.psp'); %>
    ...
    <!-- page5.psp in the application's or document root -->

I think those examples sum up all of the ways a forward can be invoked. I am of the opinion that all of them should work. The last one raises the issue of the Pas document root vs the web server's doc root. Which one is at '/' for the purposes of forwarding? Currently '/' is the Pas doc root for forwarding, and '/' is the webserver's doc root for the purposes of http redirection (http 302 responses).

k

--
------------------------------------------------------------------------------
Wisdom and Compassion are inseparable.
        -- Christmas Humphreys
mo...@vo...  http://www.voicenet.com/~mortis
------------------------------------------------------------------------------
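[Editor's note: the forward-path cases listed in the mail above can be sketched in a few lines of Perl. `resolve_forward()` is a hypothetical helper, not an actual Pas method; it resolves '/'-rooted targets against the Pas doc root and everything else against the forwarding psp's directory.]

```perl
#!/usr/bin/perl
# Sketch of forward_request path resolution as described in the thread.
use strict;
use warnings;
use File::Spec;

sub resolve_forward {
    my ( $doc_root, $current_psp, $target ) = @_;

    # '/page5.psp' -> anchored at the Pas doc root
    if ( $target =~ m{^/} ) {
        return File::Spec->canonpath("$doc_root$target");
    }

    # 'page2.psp', '../page3.psp', 'foo/page4.psp' -> relative to the
    # directory of the psp doing the forwarding
    my ( undef, $dir, undef ) = File::Spec->splitpath($current_psp);
    my $joined = File::Spec->catfile( $dir, $target );

    # collapse '.' and '..' segments textually (no filesystem access)
    my @parts;
    for my $seg ( split m{/}, $joined ) {
        next if $seg eq '.' || $seg eq '';
        if ( $seg eq '..' ) { pop @parts; next; }
        push @parts, $seg;
    }
    return '/' . join '/', @parts;
}

# '../page3.psp' from /var/pas/app/page1.psp resolves one directory up
print resolve_forward( '/var/pas', '/var/pas/app/page1.psp', '../page3.psp' ), "\n";
```

Keeping this logic in one place (the compiler or a shared helper) avoids the request handler and the build tool each growing their own path rules.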
From: Mental <Me...@Ne...> - 2002-02-28 15:17:25
On Thu, Feb 28, 2002 at 09:57:16AM -0500, Kyle R . Burton wrote:
> At this point, the compiler could probably use a re-write (the code isn't
> the prettiest). Perhaps we should back down on the policy of few to no
> external dependencies and base the compiler on HTML::Parser? I'm sure
> it would end up being a whole lot cleaner and more extensible. In fact
> using HTML::Parser would probably allow us to start the process of moving
> towards JSP style taglibs.

At this point I'm actually doing stuff. So I plan on patching up the compiler first. We can do the rewrite as a phase 2. Actually, we could start the rewrite now as I've forked off my own version of pas for my use. Still, I don't think it'd take me long to fix up the includes stuff, so I might as well do it as an interim fix.

> Some other things I've noticed working with Pas lately are that $self->prev()
> doesn't seem to be working correctly. Since it's not part of the servlet
> model, maybe we just get rid of it?

Just because it isn't part of the servlet model doesn't mean it's not a good idea. The servlet model has its own shortcomings. I just got done helping Kevin sort out some stuff that I feel was a shortcoming of the whole servlet thing. I'd go into it, but it's long, I forget most of the details and I'm busy. :)

> Passing data to the psp from a Page object in the query object is starting
> to feel wrong. The issue that caused this to itch for me was a diagnostics
> page where I was trying to dump all of the parameters from the query. I
> had a page object that added data to the query, then forwarded the request
> to the psp diagnostic page. In the psp page, it had code like this:
>
>     <TABLE>
>     <TR><TD>key</TD><TD>value</TD></TR>
>     <% foreach my $k ($self->query()->param()) { %>
>         <TR><TD><%= $k %></TD><TD><%= $self->query()->param($k) %></TD></TR>
>     <% } %>
>     </TABLE>

Wouldn't the stuff you wanted to see be in $self->prev()? Shouldn't it be? Am I not getting it?
> But the added data didn't show up in the key names returned from
> $self->query()->param(). CGI.pm must be caching the originally parsed
> list of parameters - anything that gets added through a call to param(k,v)
> isn't included in the list returned by param().
>
> Perhaps we can use the request object directly? A la the servlet model?
> They have a getParameter() that gives access to the GET/POST parameters,
> and a getAttribute()/setAttribute() that is a storage area that the
> servlet/jsp (page object/psp in our case) are supposed to use to pass
> data back and forth.

In my mind it might be a good idea to keep our stuff in its own sandbox. That way we can differentiate between stuff we inserted and stuff that the browser passed. I'm not sure the best way to do this, but it could be something like $self->query() is for cgi.pm stuff (the browser post/get crud). Something like $self->pasdata() could be used for stuff the application generates and uses internally. There would be no way for the browser to get its evil into it and it could therefore be more trusted to contain 'good' data. Segregate user input from application generated stuff. Or maybe I should stop huffing paint.

> The last big itch is in the forward_request() method of the page object. It
> doesn't seem to work with relatively pathed requests correctly. I've had
> to tweak the path to the psp page to get it to work under all circumstances.
> Oh, and one last thing is PATH_INFO for registered page objects. It would
> be nice if the Pas mod_perl request handler could figure out that a path
> led past a registered page object, and set the PATH_INFO accordingly.
>
> I'll add this stuff to the TODO file.

I'll have to play with forward_request a little to see what you mean. It might not be a bad idea to forward to fully pathed uri's anyways. At least from a paranoid insecure standpoint. But that could just be me misunderstanding something I'm unfamiliar with.

--
Mental (Me...@Ne...)
There was a power outage at a department store yesterday. Twenty people were trapped on the escalators.
        --Steven Wright

GPG public key: http://www.neverlight.com/Mental.asc
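[Editor's note: the $self->pasdata() sandbox proposed in the mail above -- a dumb key/value stash kept separate from the CGI query so browser input can't contaminate application-generated data -- can be sketched in a few lines. The class name `PasData` and its accessors are assumptions for illustration, not actual Pas API.]

```perl
#!/usr/bin/perl
# Sketch of a get/set storage container, session-style, for data the
# application generates internally. Nothing from GET/POST ever lands
# here, so values can be treated as trusted.
package PasData;
use strict;
use warnings;

sub new { my ($class) = @_; return bless { data => {} }, $class; }

# get/set in one method, mirroring CGI::param's calling convention,
# but backed by a hash the browser has no way to write into
sub param {
    my ( $self, $key, @value ) = @_;
    $self->{data}{$key} = $value[0] if @value;
    return $self->{data}{$key};
}

# list all keys, so a diagnostics page can dump everything it was given
sub params { my ($self) = @_; return keys %{ $self->{data} }; }

package main;
my $stash = PasData->new();
$stash->param( user_id => 42 );          # trusted, app-generated
print $stash->param('user_id'), "\n";
```

Under mod_perl this object could be created per request and handed to whatever psp the page object forwards to, which also sidesteps the CGI.pm issue above where param(k,v) additions don't show up in the param() key list.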