From: Steven N. <ste...@th...> - 2003-05-01 17:02:47
Put this in my drafts folder last week while out of the office, and forgot
to send. In case anyone's interested in the end of this thread...

------

> > I briefly played with setting a fixedNumberOfProcesses earlier today. I
> > haven't had time to poke in the code and confirm, but it looked like the
> > different children weren't sharing a cache.
> >
> > It will likely be a few days before I can look at this more closely, so
> > I thought I'd just ask:
> >
> > 1. Do forked processes share a cache?
>
> Well, since they're separate processes and all cached documents are in RAM,
> the answer is no, they don't share a cache. Each of them has its own cache.
> So, if a client asks for document D1 for the first time and the request goes
> to process P1, then P1 will build the document and cache it. Then, if a
> client asks for document D1 and the request goes to process P1 as well, then
> P1 will use the cached document. But if the request goes to process P2, then
> P2 will build the document and cache it. All subsequent requests for
> document D1 to either process P1 or P2 will be served from cache.

That makes sense, and is what I suspected. I didn't have a chance to
verify... oh, and I'm lazy.

> So this means that each process will have to build its own version of the
> document once. Given the fact that caching is usually used when documents
> need to be served thousands of times, having to build them a few times
> instead of just once is not that big of a deal in my opinion. If it is for
> you, I'd be curious to hear why :-)

We do have some pages that will have thousands of hits per cache-life. We're
also exploring the utility of caching for our query results, which can take
2-30 seconds to generate.
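Remi's point about per-process caches can be shown in a few lines. This is
a minimal sketch, not CherryPy code: it just demonstrates that fork() copies
the parent's memory, so a plain in-RAM dict diverges between processes
(Unix-only, since it uses os.fork()).

```python
import os

cache = {}  # this process's in-RAM document cache

def build_document(doc_id):
    # Stand-in for an expensive page build.
    return f"rendered:{doc_id}"

def serve(doc_id):
    # Build on a miss, then answer from this process's own cache.
    if doc_id not in cache:
        cache[doc_id] = build_document(doc_id)
    return cache[doc_id]

def demo():
    serve("D1")                  # parent caches D1, then forks
    pid = os.fork()
    if pid == 0:
        serve("D2")              # child caches D2 in its *copy*
        os._exit(len(cache))     # child's cache holds D1 and D2
    _, status = os.waitpid(pid, 0)
    # Child reports 2 cached documents; the parent still has only 1.
    return os.WEXITSTATUS(status), len(cache)
```

Calling demo() returns (2, 1): the child's cache grew to two entries
while the parent's copy was untouched, which is exactly why each CherryPy
child ends up building D1 for itself.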
Our results pages (or direct links to them) are embedded in weblogs around
the Web, so there's a good chance that a given dynamically generated results
page will have multiple hits in a cache-life--not thousands, but the
thinking is that with such long build times, even saving three or four page
constructions per results page might improve the user experience and keep us
from requiring too many concurrent CherryPy processes. Our web server has
lots of extra RAM (by design), so a large, shallowly accessed cache isn't
necessarily a bad thing for us, though we're not yet sure it's helpful.

> > (Do they cache at all?)
>
> Yes :-)
>
> > 2. If not, can they be made to? (I figure this question could save me a
> > week of frustration.)
>
> Well, if we want processes to share a cache, we'd have to store the cached
> documents either in files, in a database or in shared memory ... That means
> tweaking the CherryPy code ..
>
> Remi
>
> PS: By the way, how is the WayPath project going ?

Well. Thanks for asking. Earlier this week, we started serving a portion of
our site (www.waypath.com) with the CherryPy front end. Part is still on the
old architecture, proxied through CherryPy. I was planning on making an
announcement to the list once I was sure it was up and stable, but there you
go.

We're doing a lot of different things with the implementation:

- parsing xml-rpc requests, reformulating them to send to the Nav4 engine
  (on another box) and returning them to the user without
  unmarshalling/remarshalling--using CherryPy as a router, more or less;
- taking HTML requests, turning them into xml-rpc requests for the Nav4
  engine, and then translating the results to HTML for browser display;
- serving pages that pull data out of a mysql db
- embedding HTML pulled from other pages
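For what it's worth, the first of Remi's options (storing cached documents
in files) would give forked processes a shared cache without any shared
memory, since they all see the same filesystem. This is a hypothetical
sketch, not a CherryPy API: FileCache is an illustrative name, and a real
version would also need expiry and a locking policy.

```python
import os

class FileCache:
    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def _path(self, key):
        return os.path.join(self.directory, key)

    def get(self, key, build):
        path = self._path(key)
        try:
            # Hit: any process, not just the builder, can read it.
            with open(path) as f:
                return f.read()
        except FileNotFoundError:
            document = build(key)   # miss: build the document once
            # Write to a temp file and rename into place, so other
            # processes never see a half-written document.
            tmp = f"{path}.tmp.{os.getpid()}"
            with open(tmp, "w") as f:
                f.write(document)
            os.replace(tmp, path)
            return document
```

The rename trick matters because two children can miss at the same time:
both build, but each publishes atomically, so readers always get a complete
document even if one build's result overwrites the other's.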