From: Dan C. <da...@da...> - 2002-03-08 18:24:08
This is maybe a bit off-topic, and a bit long, but this is one of those serendipitous occasions where a conversation strikes on something somebody else is already working on...

Greg Wilkins wrote:
> + Use java.nio as:
>   * if you care about performance you should be using jdk1.4 anyway
>   * if you want one machine to handle more connections than the sum of
>     all your servers, then java.net.ServerSocket ain't going to do the job!
>   * I want to play with nio before I write the NioListener for Jetty!

Hmm. I've been writing a sort of general-purpose TCP server using the NIO stuff that I'd actually rather like to try grafting onto the front end of Jetty, if you don't mind. Bear in mind that this is fairly early work yet, and there are probably lots of holes.

Basically, the way it's structured is that there is a Server (MBean) which contains listeners (one for each declared hostname:port). The Server has a single thread that uses an NIO Selector to find out when it needs to accept on any of them. The resulting SocketChannels are handed off to a ReaderManager, which chooses a ReaderThread to handle each channel. Each reader thread selects on all of its channels, reads data, and delegates to a Protocol implementation to build a Request and put the appropriate data into it.

Once the request reaches the state HEADERS_READ (the headers have all been read and there is enough there to be dispatched), the ReaderThread gets a worker thread from a pool and hands the Request off to a ContextManager, which uses information from the request to dispatch to the appropriate RequestHandler stack (which is where I _think_ Jetty would come in).

When the request needs to be written to its output (because its buffer is full, or because the request has been completely handled), it (carrying its SocketChannel) gets handed off/hands itself to a WriterManager, which chooses a WriterThread to handle it in a similar non-blocking fashion to the ReaderThread.
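The single accept thread described above could be sketched roughly as follows. This is a minimal illustration, not danch's actual code: the class name `AcceptLoop`, the `handOff` hook, and the `accepted` counter are all hypothetical stand-ins for the Server/ReaderManager hand-off (written with generics for clarity, which JDK 1.4 did not yet have).

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Hypothetical sketch: one thread, one Selector, all listeners registered
// for OP_ACCEPT; accepted SocketChannels are handed off to a reader.
public class AcceptLoop {
    private final Selector selector;
    public int accepted = 0;   // for illustration/testing only

    public AcceptLoop() throws IOException {
        selector = Selector.open();
    }

    // Register one hostname:port listener with the shared selector.
    // Returns the actual bound port (useful when binding port 0).
    public int addListener(String host, int port) throws IOException {
        ServerSocketChannel ssc = ServerSocketChannel.open();
        ssc.configureBlocking(false);   // selectable channels must be non-blocking
        ssc.socket().bind(new InetSocketAddress(host, port));
        ssc.register(selector, SelectionKey.OP_ACCEPT);
        return ssc.socket().getLocalPort();
    }

    // One pass of the accept loop: block until some listener is ready,
    // accept the pending connections, and hand each channel off.
    public void runOnce() throws IOException {
        selector.select();
        for (Iterator<SelectionKey> i = selector.selectedKeys().iterator(); i.hasNext();) {
            SelectionKey key = i.next();
            i.remove();
            if (key.isAcceptable()) {
                ServerSocketChannel ssc = (ServerSocketChannel) key.channel();
                SocketChannel sc = ssc.accept();
                if (sc != null) {
                    sc.configureBlocking(false);
                    accepted++;
                    handOff(sc);   // a ReaderManager would pick a ReaderThread here
                }
            }
        }
    }

    // Placeholder for the ReaderManager hand-off; real code would not close.
    protected void handOff(SocketChannel sc) throws IOException {
        sc.close();
    }
}
```

The point of the single accept thread is that listener count no longer dictates thread count: ten listeners still cost one selecting thread.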
Note that a POST request (or an SMTP command, or a JBossMQ message on a new protocol, or anything that can distinguish header/envelope from body/payload) can be dispatched, and processing can begin, while the request body is still being read. Likewise on the output side: if the buffer is full, it will start to be written while processing continues. ReaderThreads hold onto SocketChannels until they're closed (to enable keep-alive); WriterThreads do not. Direct byte buffers are used and pooled wherever possible, allowing the underlying implementation to optimize transfers.

I think the biggest advantage here is the level of control it gives administrators over thread usage: you can configure the read, write, and work pools separately. One of the things people notice in JBoss under Linux is the number of threads used - they'd notice it under Windows or any commercial Unix as well if they looked; it's just that Linux puts its threads right out there for you to see and freak out over in ps, top, et al.

Another thought I've had is to use this same framework as the basis for a new invoker for JBoss, so that remote client invocations don't put the server at the mercy of the whims of RMI's thread usage.

Thoughts, chortles, flames?

thanks for your time,
danch
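The direct-buffer pooling mentioned above could look something like this. The post doesn't show the pooling code, so this is a hedged sketch with a hypothetical `BufferPool` class: direct buffers are comparatively expensive to allocate, so released buffers are cleared and recycled instead of being left for the garbage collector.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a pool of fixed-size direct ByteBuffers.
// acquire() reuses a previously released buffer when one is available;
// release() resets the buffer and returns it to the free list.
public class BufferPool {
    private final Deque<ByteBuffer> free = new ArrayDeque<ByteBuffer>();
    private final int bufferSize;

    public BufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    public synchronized ByteBuffer acquire() {
        ByteBuffer b = free.pollFirst();
        // Allocate a direct buffer only when the pool is empty.
        return (b != null) ? b : ByteBuffer.allocateDirect(bufferSize);
    }

    public synchronized void release(ByteBuffer b) {
        b.clear();            // reset position/limit so the next user sees a fresh buffer
        free.addFirst(b);
    }
}
```

Because the buffers are direct, reads and writes on a SocketChannel can move data without an extra copy through a Java heap array, which is the "optimize transfers" benefit the post alludes to.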