Re: [Filterproxy-devel] Re: New module for FilterProxy
From: John F W. <way...@WP...> - 2001-08-23 01:57:46
On Wed, 22 Aug 2001, Bob McElrath wrote:
>

Me too. Just got back from California. And I'm leaving Friday early to go
to school, and who KNOWS when I'll be done moving in and have my email set
up and all that...

> Yes, I think I fixed that by explicitly calling
> FilterProxy::handle_filtering(-10,1,2,3), which then calls Header (and
> other modules, if necessary). Note it does not call handle_filtering
> for Orders that modify the content. (See comments at the beginning of
> Skeleton.pm)

That's something on the order of what I was thinking. Except why not
modify the content, if you're going to hilite Rewrite.pm changes?

> CGI is already a dependency...and is included in LWP, which is a
> dependency

Righto, so Url::Escape isn't needed.

> Well, what I've done is set a flag ($markupinstead) which tells Rewrite
> to build a @markup data structure instead of modifying the source. Then
> I call FilterProxy::handle_filtering for Rewrite's Order. I then parse
> this data structure, marking up with the name of the rule as I parse it.
> Both the flag and the data structure are variables in the
> FilterProxy::Rewrite namespace. This isn't a race condition since it is
> all executed by a single FilterProxy child process, which resets the
> flag when it's done. Ugly, but it works.
>
> It turns out that the really hard part is when there are overlapping
> modifications. (which is pretty common, actually) Marking up
> nonoverlapping ones was easy, and could be done in one pass. The
> two-pass method described above is necessary in case two matches
> overlap. (Matches can grow backwards, and would grow over a previously
> marked up section!) It's complicated by the fact that Rewrite also has
> to parse the data structure to make sure the piece it's examining
> hasn't already been "stripped".

Ugh! Yeah, @markup was something on the order of what I was thinking.
Good luck with the overlapping changes thing.
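For the archives, the overlapping-markup problem above boils down to classic
interval merging. This is only a sketch of the general idea in Python, not
FilterProxy's actual Perl code: merge overlapping (start, end) match spans
first, so opening and closing markers can never interleave, then splice the
markers in back-to-front so earlier offsets stay valid. The span format and
the bracket-style markers are this sketch's assumptions.

```python
def merge_spans(spans):
    """Merge overlapping (start, end, rule_name) spans into disjoint ones,
    joining rule names when two spans collapse into one."""
    merged = []
    for start, end, name in sorted(spans):
        if merged and start <= merged[-1][1]:  # overlaps the previous span
            pstart, pend, pname = merged[-1]
            merged[-1] = (pstart, max(pend, end), pname + "+" + name)
        else:
            merged.append((start, end, name))
    return merged

def markup(text, spans):
    """Wrap each merged span in [name]...[/name] markers, splicing from
    the back of the string so unprocessed offsets are untouched."""
    out = text
    for start, end, name in reversed(merge_spans(spans)):
        out = (out[:start] + "[" + name + "]"
               + out[start:end] + "[/" + name + "]"
               + out[end:])
    return out

# Two rules whose matches overlap, as described in the thread:
print(markup("hello cruel world", [(6, 11, "ads"), (9, 16, "banners")]))
```

With disjoint spans this degenerates to the easy one-pass case; the merge
step is what handles matches that "grow" into each other.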
That algorithm/concept rings familiar; I think something like that's been
written before, or at least used as an example to torture CS students...

> Well, for the time being I'll keep both methods. So it will still be
> possible to do http://source/... Since the long URL is browser->proxy,
> only a browser limitation would cause a problem. HTTP::Daemon, which
> parses the headers for FilterProxy, has a limitation of 16k for the URI,
> so we should be ok. Neither RFC 2068 nor 2616 specifies how long URIs
> or headers can be, but both specify the 413 and 414 error codes for
> headers/URIs that are too long.

I think it's safe to test it with a few long URIs in common browsers, and
leave it at that.

> The fix has also gone into the mozilla trunk. Maybe easier to get a new
> nightly since I'm deathly slow... ;)

Ok, I'll grab the new mozilla when I have bandwidth :)
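As a footnote on the RFC point above: since neither spec fixes a maximum URI
length, a server enforcing its own cap should answer 414 rather than silently
truncate. A minimal Python sketch, assuming a 16k cap to mirror the
HTTP::Daemon number quoted in the thread (the cap value and the
`check_uri_length` helper are this sketch's inventions, not spec values):

```python
# HTTP/1.1 defines no maximum URI length; servers that impose one are
# expected to reply 414 Request-URI Too Long (RFC 2616 sec. 10.4.15).
MAX_URI = 16 * 1024  # assumed cap, matching HTTP::Daemon's quoted 16k limit

def check_uri_length(uri):
    """Return the HTTP status for a request URI: 414 if it exceeds our
    local cap, otherwise 200 to mean 'carry on processing'."""
    if len(uri) > MAX_URI:
        return 414  # Request-URI Too Long
    return 200

print(check_uri_length("/ok"))        # 200
print(check_uri_length("x" * 20000))  # 414
```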