[JSch-users] Deadlock on creating new channel + fix.
From: Alexander K. <al...@tm...> - 2005-07-28 20:57:26
Hello All,

There is a possible deadlock that may occur when creating a new channel, in case the creation request is sent while data is being read from an existing channel. The cause of the deadlock is the fixed pipe buffer size (1024 bytes in previous versions and 32K in the latest one). While enlarging the buffer may reduce the likelihood of the lock, it is obviously not a proper way to fix the problem.

The lock occurs in the following situation:

1. startup: exec channel #1 is created and receives data from the server.
2. session thread: say 64K of data has been received. 32K of it is written to the channel's pipe and the session thread blocks on pipe.write().
3. user's thread: the channel's user reads 1K of data and asks the session to create another channel (channel #2).
4. user's thread: the session sends the channel creation request and waits for the server's response, which must be received by the session thread.
5. session thread: the server's response is never read, because the session thread is blocked on the pipe (see point 2).

To fix this problem (it prevents the library I'm working on from working properly) I made the following modifications in the Channel code (added a SmartInputStream class and modified the getInputStream() method):

    public InputStream getInputStream() throws IOException {
      PipedInputStream in = new SmartInputStream();
      io.setOutputStream(new PassiveOutputStream(in));
      return in;
    }

    class PassiveOutputStream extends PipedOutputStream {
      private PipedInputStream mySinc;

      PassiveOutputStream(PipedInputStream in) throws IOException {
        super(in);
        mySinc = in;
      }

      public void write(byte[] b, int off, int len) throws IOException {
        if (mySinc instanceof SmartInputStream) {
          // Deliver the data byte by byte through receive(int), so that
          // SmartInputStream can grow its buffer before each byte arrives.
          for (int i = 0; i < len; i++) {
            ((SmartInputStream) mySinc).receive(b[off + i]);
          }
        } else {
          super.write(b, off, len);
        }
      }
    }

    class SmartInputStream extends PipedInputStream {
      protected synchronized void receive(int b) throws IOException {
        // Extend the buffer if needed, so the writer never blocks.
        if (in + 1 >= buffer.length) {
          byte[] newBuffer = new byte[(buffer.length * 3) / 2];
          System.arraycopy(buffer, 0, newBuffer, 0, buffer.length);
          buffer = newBuffer;
        }
        super.receive(b);
      }
    }

The new implementation of the piped input stream checks whether the pipe buffer is about to fill up _before_ receiving new data and enlarges the buffer to avoid blocking. This is done automatically, so there is no need to set the initial buffer size to 32K - it will be enlarged on demand. Thus all the data received within the ssh "window" is immediately written to the pipe, and the session thread will never block on the pipe.write() call.

AFAIU there is no risk of running out of memory, because the amount of received data will never exceed the size of the ssh data "window", which is fixed and should be more or less reasonable; besides, all the data is buffered in the session's thread anyway before being sent to the pipe.

Alexander Kitaev,
TMate Software,
http://tmatesoft.com/
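P.S. The blocking behavior at the heart of the scenario above (step 2) is easy to reproduce outside of JSch. The following standalone sketch (my own illustration, not JSch code) writes more data into a pipe than its buffer holds and shows that the writer thread stays stuck until a reader drains the pipe:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class PipeBlockDemo {
    public static void main(String[] args) throws Exception {
        // A pipe with a 1 KB buffer, like the one in older JSch versions.
        PipedInputStream in = new PipedInputStream(1024);
        PipedOutputStream out = new PipedOutputStream(in);

        CountDownLatch done = new CountDownLatch(1);
        Thread writer = new Thread(() -> {
            try {
                out.write(new byte[4096]); // more than the buffer holds
                done.countDown();
            } catch (IOException ignored) {
            }
        });
        writer.start();

        // With nobody reading, write() blocks once the 1 KB buffer is full.
        System.out.println("writer finished early? "
                + done.await(500, TimeUnit.MILLISECONDS));

        // Drain the pipe; only now can the writer complete its write().
        byte[] buf = new byte[4096];
        int total = 0;
        while (total < 4096) {
            total += in.read(buf, total, 4096 - total);
        }
        System.out.println("writer finished after drain? "
                + done.await(5, TimeUnit.SECONDS));
        writer.join();
    }
}
```

In JSch the "writer" is the session thread and the "reader" is the user's thread, so when the user's thread stops reading to wait for a channel-open reply, nobody ever drains the pipe.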
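P.P.S. Here is a standalone sketch of the fix, so it can be tried without the rest of the Channel class (the class names mirror the patch, but the JSch-specific plumbing is omitted). Note why PassiveOutputStream pushes data byte by byte: the JDK's bulk receive(byte[], int, int) in PipedInputStream is package-private and bypasses receive(int), so overriding receive(int) alone would not see bulk writes. With the growing buffer, a 64K write from a single thread completes without blocking, whereas a plain pipe would hang forever:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

class SmartInputStream extends PipedInputStream {
    protected synchronized void receive(int b) throws IOException {
        // Buffer about to fill: grow it by 1.5x before accepting the byte,
        // so the writer never has to wait for a reader.
        if (in + 1 >= buffer.length) {
            byte[] newBuffer = new byte[(buffer.length * 3) / 2];
            System.arraycopy(buffer, 0, newBuffer, 0, buffer.length);
            buffer = newBuffer;
        }
        super.receive(b);
    }
}

class PassiveOutputStream extends PipedOutputStream {
    private final SmartInputStream sink;

    PassiveOutputStream(SmartInputStream in) throws IOException {
        super(in);
        sink = in;
    }

    // Route bulk writes through receive(int), which SmartInputStream
    // overrides; the JDK's bulk receive is not overridable from here.
    public void write(byte[] b, int off, int len) throws IOException {
        for (int i = 0; i < len; i++) {
            sink.receive(b[off + i]);
        }
    }
}

public class NoBlockDemo {
    public static void main(String[] args) throws Exception {
        SmartInputStream in = new SmartInputStream();
        PassiveOutputStream out = new PassiveOutputStream(in);
        // With a plain PipedInputStream this single-threaded 64K write
        // would block forever once the buffer filled up.
        out.write(new byte[64 * 1024], 0, 64 * 1024);
        System.out.println("available=" + in.available());
    }
}
```

The growth check works because the buffer is enlarged before the write index can wrap around, so the pipe's circular buffer degenerates into a simple append-only buffer that is never "full".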