First, let's keep this on jetty-discuss, as others may be interested.
The Java IO model is one thread per connection, with each thread
blocking in read() while waiting for work to do. Given that model,
there is no better place for a thread to wait than on a client
sending an HTTP request.
With Java NIO, there is scope for connections to be handled without
a thread allocated to each of them - but the servlet API does not lend
itself to NIO, as it assumes blocking IO.
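To make the thread-per-connection model concrete, here is a minimal
sketch (not Jetty's actual code; class and method names are made up for
illustration): each accepted connection gets its own thread, which parks
in a blocking read() until the client sends its request.

```java
import java.io.*;
import java.net.*;

// Hypothetical sketch of the thread-per-connection blocking IO model:
// one thread per connection, blocked in read() until data arrives.
public class BlockingModelSketch {
    // Blocks until the client sends a request line; an idle connection
    // simply parks its thread here, costing no CPU.
    static String readRequestLine(InputStream in) throws IOException {
        BufferedReader r = new BufferedReader(new InputStreamReader(in));
        String line = r.readLine();          // blocks on a slow client
        if (line == null) throw new EOFException();
        return line;
    }

    // The accept loop: every accepted connection gets its own thread.
    static void serve(ServerSocket server) throws IOException {
        while (true) {
            final Socket s = server.accept();
            new Thread(() -> {
                try (Socket sock = s) {
                    String req = readRequestLine(sock.getInputStream());
                    sock.getOutputStream().write(
                        ("HTTP/1.0 200 OK\r\n\r\n" + req).getBytes());
                } catch (IOException ignored) { }
            }).start();
        }
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate the blocking read on an in-memory stream.
        InputStream in = new ByteArrayInputStream(
            "GET / HTTP/1.0\r\n".getBytes());
        System.out.println(readRequestLine(in)); // prints GET / HTTP/1.0
    }
}
```

The key point is that the blocked read() consumes no CPU; only the
per-thread memory and scheduling overhead is at stake.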
I did write an NIO listener for Jetty that took threads away from
idle connections and put those connections into select sets. But I
discovered two key things:
+ Many operating systems handle thousands of Java threads better than
they handle thousands of TCP/IP connections, i.e. reducing the number of
Java threads did not solve the problems of operating systems that could
not cope with thousands of persistent connections. Jetty's low-resource
idle time is a much better way to reduce both the thread and connection
load on a busy machine.
+ For servlets, you waste a lot more CPU and resources constantly
rebuilding select sets than you save with this model.
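The select-set idea above can be sketched as follows (a hedged sketch,
not the actual Jetty NIO listener; the parkIdle helper is invented for
illustration): idle connections are registered with a Selector, so a
single thread can watch all of them with no thread allocated per
connection until data arrives.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.*;

// Hypothetical sketch of the NIO select-set approach: idle connections
// are parked in a Selector instead of each holding a blocked thread.
public class SelectSetSketch {
    // Park an idle connection in the select set; no thread is tied to it
    // until the selector reports it readable.
    static SelectionKey parkIdle(Selector selector, SelectableChannel ch)
            throws IOException {
        ch.configureBlocking(false);
        return ch.register(selector, SelectionKey.OP_READ);
    }

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A single thread would loop here:
        //   selector.select();
        //   ...accept new channels, parkIdle() them, and hand readable
        //   ones to worker threads. Rebuilding these select sets as
        //   connections move between idle and busy is the cost Greg
        //   describes above.
        System.out.println("registered keys: " + selector.keys().size());
        selector.close();
        server.close();
    }
}
```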
Finally, if you think more CPU is being used than should be, it should
not be coming from threads in read() (unless you have a VERY short idle
timeout). Tools like OptimizeIt are very good at identifying which
threads are actually consuming CPU.
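For a rough check without a profiler, per-thread CPU time can be read
via ThreadMXBean (a hedged sketch; java.lang.management arrived in
Java 5, so this assumes a newer JVM than the 1.4.2 discussed in this
thread, and the class name here is invented). Threads blocked in a
native read() report state RUNNABLE, so accumulated CPU time, not
thread state, is the honest signal:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: find threads that have actually burned CPU,
// as opposed to threads parked in a blocking read(), which burn none.
public class ThreadCpuSketch {
    // Names of threads that have accumulated at least minNanos of CPU time.
    static List<String> cpuUsers(long minNanos) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        List<String> users = new ArrayList<>();
        if (!mx.isThreadCpuTimeSupported()) return users; // platform-dependent
        for (long id : mx.getAllThreadIds()) {
            ThreadInfo ti = mx.getThreadInfo(id);
            if (ti != null && mx.getThreadCpuTime(id) >= minNanos)
                users.add(ti.getThreadName());
        }
        return users;
    }

    public static void main(String[] args) {
        // The calling thread has certainly used some CPU to get here.
        for (String name : cpuUsers(1_000_000L)) // >= 1 ms of CPU time
            System.out.println(name);
    }
}
```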
Loren Siebert wrote:
> Hey Greg-
> Thanks for the reply. I'm not sure I agree with you on this one, but I
> would be happy to be proven wrong. I see the CPU (as reported by vmstat)
> being busier than I would expect for the traffic load on each machine. I
> profiled it to see where the time was being spent, and was somewhat
> surprised to see it spent waiting for the client to send the HTTP request. I
> was hoping to increase the traffic by 5x on these machines, but I'm nervous
> because I think all these client requests will in effect be a denial of
> service to Jetty, with all my allocated HTTP acceptor threads trying to read
> HTTP headers from those slow client socket connections.
> If they are truly just in a wait state and not doing that tight
> do/while loop, then I think I'm still in trouble because I will be spending
> all my time context switching between thousands of acceptor threads.
> Well, RH9.0/NPTL/1.4.2 is supposed to sort out the Linux thread limit
> and allow me to handle thousands of threads at once, so I guess this is a
> good opportunity to see if it really works. I'll let you know if I learn
> anything more when I push more traffic through it.
> ----- Original Message -----
> From: "Greg Wilkins" <gregw@...>
> Newsgroups: gmane.comp.java.jetty.general
> To: <loren@...>
> Sent: Thursday, September 18, 2003 7:14 PM
> Subject: Re: [jetty-discuss] profiling of handleConnection <repost>
>>Spending time within a method like read() is OK, as you are blocked
>>and not using CPU. It is only bad to spend a long time in such a
>>method if you are burning CPU.
>>The fact that you are spending more time blocked in read than anything
>>else just means that your handling of requests is much faster than
>>your network latency. It should also mean that you have spare threads
>>and CPU available to handle requests from other connections.
>>For my own profiling, I exclude blocking methods from the calculations.
>>Loren Siebert wrote:
>>>Apologies if you get this twice... I originally posted to the Yahoo group.
>>>I have a jetty 4.2.12 app running on rh9.0/sun1.4.2_01. The app is a
>>>web service, so it gets HTTP requests and spits back about 10K of XML
>>>data in response. In profiling my app using -Xrunhprof, I see that
>>>about 80% of the time is apparently spent reading the HTTP request
>>>from the client. This is the stack trace where 80% of the time is
>>>spent. I'm wondering if it's the loop ending in
>>>    throw new EOFException();
>>>where it spends all the time.
>>>Taking a wild guess, I suspect that some of the clients have a slow
>>>connection, and are taking a bit of time to send the actual HTTP
>>>request once the socket is accepted. Does this sound correct?
>>>If so, is there anything I can do about it? Is there some way I can
>>>drop these clients if they are slow to send in their HTTP request?
>>>Perhaps I can modify the above loop so that it only tries N times
>>>to get line_buffer.size > 0?
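One way to bound the wait the question above describes, as a hedged
sketch using plain java.net rather than Jetty's actual code (the class
and helper names here are invented): Socket.setSoTimeout() makes a
blocking read() throw SocketTimeoutException after a fixed interval, so
a slow client can be dropped rather than retried N times.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Hypothetical sketch: SO_TIMEOUT bounds how long a blocking read()
// waits for a slow client before the server gives up on it.
public class SlowClientDropSketch {
    // Returns true if the client sent a byte within timeoutMs,
    // false if it was too slow and should be dropped.
    static boolean clientSentData(Socket client, int timeoutMs)
            throws IOException {
        client.setSoTimeout(timeoutMs);      // bound the blocking read
        try {
            return client.getInputStream().read() != -1;
        } catch (SocketTimeoutException e) {
            return false;                    // slow client: drop it
        }
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             Socket silent = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            // The "client" never writes, so the read times out.
            System.out.println(clientSentData(accepted, 200)); // prints false
        }
    }
}
```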
Greg Wilkins<gregw@...> Phone/fax: +44 7092063462
Mort Bay Consulting Australia and UK. http://www.mortbay.com