Tripp Lilley wrote:
> On Mon, 21 Jul 2003, Geoffrey Talvola wrote:
>> I'm not convinced that a dynamic algorithm is better than just using
>> a fixed pool (assuming we're talking about threads, or servlet
>> instances). Your box has to have enough memory and horsepower to
>> handle the worst case of a maxed-out pool, right? If that's the
>> case, then why not just allocate the maximum right up front (or let
>> the pools grow as needed like the servlet pools)? Surely having
>> some extra threads or extra instances lying around doesn't
>> noticeably hurt performance. Also, having to allocate extra
>> instances or threads to handle a surge of activity is bound to be
>> somewhat costly just at the time when you need the CPU to actually
>> _handle_ the requests.
>> When I deploy WebKit I set the minimum, maximum, and initial thread
>> pool sizes to the same value. Can someone convince me why this is a
>> bad idea?
> Well, in my case, I run multiple Webware instances and various other
> services on the same box. The box is spec'd to handle my best guess
> at the maximum simultaneous load on all of the services, but I'd
> really like it if the services could beg, borrow, and steal from one
> another when their peak demands are out of sync (which they often
> are, since they're different customers having different production
> schedules, etc.).
> So that's -my- motivation for a dynamic algorithm and not setting
> min/max/initial to the same value.
OK, fair enough -- although the operating system generally does a pretty
good job of this all by itself. And for all I know, WebKit's simple
thread adjustment algorithm may do more harm than good.
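The fixed-pool approach being argued for (min = max = initial, sized for the worst case) can be sketched in Python. This is just an illustration, not WebKit's actual pool code; the pool size and handler are made up:

```python
from concurrent.futures import ThreadPoolExecutor

# Fixed pool: min = max = initial, sized up front for worst-case load,
# so no threads are created or destroyed under a surge of requests.
POOL_SIZE = 10  # illustrative value

def handle_request(n):
    # Stand-in for real request handling.
    return n * 2

# Note: ThreadPoolExecutor caps the pool at max_workers but spawns
# threads lazily; a truly preallocated pool would pre-warm them.
with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    results = list(pool.map(handle_request, range(20)))
```

The cost of the idle threads when load is low is mostly memory, which, as noted above, the OS can page out on its own.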