Connection errors with 0.8-rc1

2004-04-01
  • Luis Filipe Neves

    Hi all,

    This benchmark[1] works fine using 0.7.1 but barfs with lots of connection errors under version 0.8-rc1.
    Can anyone confirm this?
    Does the new Connection implementation require changes in code in order to work correctly?

    Regards,
    Luis Neves

    [1] - http://www.inetsoftware.de/English/Produkte/JDBC2/BenchTest2_1.zip

     
    • Alin Sinpalean

      Alin Sinpalean - 2004-04-01

      Luis,

      Thanks for submitting this issue. It took me quite a while to figure out what was happening, but it was worth it. ;o)

      The main cause for all the mayhem is the fact that we are trying to move the internal implementation from byte[] to Blob for IMAGE fields. Unfortunately this move is not complete, and was causing the benchmark to crash. To be more exact, jTDS is returning Blob instances when reading an IMAGE column but is not able to handle Blobs as input for ResultSet.updateObject(). For the moment, the fix is to make jTDS return byte[] again for IMAGE columns, not Blob. For this, just change line 3327 of Tds.java from "return java.sql.Types.BLOB" to "return java.sql.Types.LONGVARBINARY". I will, of course, apply this fix to the CVS version, too. This will make all the tests work.
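A minimal sketch of the interim mapping described above (the helper name is hypothetical; the actual change is the one-line edit in Tds.java):

```java
import java.sql.Types;

// Hypothetical helper illustrating the interim fix: report IMAGE columns
// as LONGVARBINARY (materialised as byte[]) instead of BLOB, until Blob
// values are accepted everywhere, e.g. by ResultSet.updateObject().
class ImageTypeMapping {
    static int jdbcTypeForImage(boolean blobFullySupported) {
        return blobFullySupported ? Types.BLOB : Types.LONGVARBINARY;
    }
}
```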

      However, this was not the most serious issue. As you noted the connection was also closed, but this behavior was not consistent. After applying the fix described above there were no more problems with the connection getting closed, but I decided to search for the cause of it anyway. And after 3 or 4 hours of sheer bliss, just when I thought I could not figure it out, I finally found it. It was caused by the new code that serializes all requests to a single connection. In some (not so uncommon) cases, if part of a request was already buffered and then that request got the hold on the connection, the packets already buffered were not sent.

      To be more clear: suppose Statement A has the hold on the connection; Statement B generates a request broken into 3 packets; Statement B submits packets 1 and 2, which are buffered until Statement A is done with the connection; Statement A releases the connection; when Statement B submits packet 3, packet 3 is sent to the server (without checking for cached packets 1 and 2); this causes the server to not understand the packet and close the connection.
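The scenario above can be modelled in a few lines (a simplified, hypothetical model of the send path, not the actual jTDS classes), contrasting the buggy behaviour with the flush-before-send fix:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Simplified model: packets queued while another statement holds the
// connection must be flushed before a newly arriving packet hits the wire.
class SendQueueModel {
    private final Deque<String> cached = new ArrayDeque<>(); // buffered while busy
    final List<String> wire = new ArrayList<>();             // what the server sees
    private boolean connectionBusy;

    void setBusy(boolean busy) { connectionBusy = busy; }

    // Buggy behaviour: once the connection is free, send the new packet
    // directly, ignoring anything still sitting in the cache.
    void sendBuggy(String packet) {
        if (connectionBusy) { cached.add(packet); return; }
        wire.add(packet);
    }

    // Fixed behaviour: drain the cache first, then send the new packet.
    void sendFixed(String packet) {
        if (connectionBusy) { cached.add(packet); return; }
        String tmp;
        while ((tmp = cached.poll()) != null) {
            wire.add(tmp);
        }
        wire.add(packet);
    }
}
```

With the buggy path, the server receives only packet 3 out of a three-packet request and drops the connection; with the fixed path it receives packets 1, 2 and 3 in order.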

      I don't know if you were interested in this much detail; I posted all this so that the other developers can take a look at my suppositions and let me know whether I'm mistaken. The fix should be in CVS in a day or two, or you could make the change described for the first issue yourself and you should be able to run the test.

      Thanks,
      Alin.

       
      • Brian Heineman

        Brian Heineman - 2004-04-01

        Alin,

        I believe I have corrected the issues with Blob and Clob support (from an API perspective).  I am not sure what your plans are for releasing the next version, but I was planning on looking at using streams for Blobs and Clobs this weekend (as well as fixing the compile issues with Savepoints).

        -Brian

         
        • Alin Sinpalean

          Alin Sinpalean - 2004-04-01

          Brian,

          Thanks a lot. I just reverted to byte[] and String, but then when I tried to commit I bumped into your changes and so gave up mine.

          In the end it seems to work just fine.

          As for plans, I think we will release a 0.8-rc2 (no idea when) because we seem to have a number of outstanding issues and I'm not sure we will be able to fix them completely (at least not in a short while). So you can go ahead with the Blob/Clob support, just take care. ;o)

          Alin.

           
      • Brian Heineman

        Brian Heineman - 2004-04-01

        Alin,

        I just noticed that you already made the Savepoint changes so I won't be making those this weekend.  =)

        -Brian

         
    • Brian Heineman

      Brian Heineman - 2004-04-01

      Luis,

      I have corrected these issues by adding Blob and Clob support to ResultSet.updateObject() and to cursor based result sets.  This was an oversight on my part and I apologize for any inconvenience this may have caused you.

      -Brian

       
    • Mike Hutchinson

      Mike Hutchinson - 2004-04-01

      Alin,

      Hmm, interesting.

      There is a definite problem here, although it is one that I think should only occur when more than one thread is active on a connection. My reasoning is that the SQL requests are pretty much atomic, in the sense that even if a request spans multiple packets it should complete before (in the single-threaded case) any existing response data can be cleared by another statement. This means that a sequence of sends should either all be sent directly, or all be cached and sent when the client attempts to read the first response packet. As far as I can see the benchmark is not multithreaded, which makes the failure a bit hard to explain. The code could certainly fail in the way you suggest in a multithreaded environment.

      The following code added to the sendNetPacket() method immediately before the existing write statement should flush any pending send packets.

      //
      // At this point we know that we are able to send the first
      // or subsequent packet of a new request.
      //
      // OK send any cached request packets
      byte[] tmpBuf = dequeueOutput(vsock);
      while (tmpBuf != null) {
          out.write(tmpBuf, 0, TdsComm.ntohs(tmpBuf, 2));
          tmpBuf = dequeueOutput(vsock);
      }
      // Now send current request packet

      In the mark one version of the code I had the sender cache any pending results for another TDSComm object and then send the packet immediately. I changed to the current design precisely to try and prevent different threads from trying to send their packets in the middle of another request and screwing things up. The current approach means that either request packets or response packets for a virtual socket are cached and never both which also helps to simplify the logic.

      The first thread to find the socket available sets the currentSender variable, which forces any other concurrent senders to cache their packets. When the thread that is able to send sends the last packet, the responseOwner variable is set, which means that if this thread moves to the read state first it is able to read directly from the socket. If another thread reads first, the first sender's response will be cached in the normal way, then the second sender's request will be sent and that thread will move straight to the read state. The first sender's read request will be locked out until the second sender's first packet is received from the server, whereupon the first sender will enter the method and return the first cached packet. Sorry if this is a bit confusing, but maybe in conjunction with the code it makes sense.
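The single-owner send discipline described above can be sketched roughly as follows (a simplified, hypothetical model; the real code works per packet inside sendNetPacket() and also caches responses, which this omits):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Simplified model: the first virtual socket to find the shared socket
// free becomes the current sender; packets from any other virtual socket
// are cached and flushed once that socket gets its turn to send.
class SharedSocketModel {
    private Object currentSender;                                // owner of the send side
    private final Map<Object, Queue<String>> cache = new HashMap<>();
    final List<String> wire = new ArrayList<>();                 // packets actually sent

    synchronized void send(Object vsock, String packet, boolean last) {
        if (currentSender == null) {
            currentSender = vsock;
            Queue<String> q = cache.remove(vsock);               // flush packets cached
            if (q != null) wire.addAll(q);                       // while we were locked out
        }
        if (currentSender != vsock) {
            cache.computeIfAbsent(vsock, k -> new ArrayDeque<>()).add(packet);
            return;
        }
        wire.add(packet);
        if (last) currentSender = null;                          // hand over the socket
    }
}
```

Here Statement B's first packets are cached while Statement A owns the socket, and are flushed ahead of B's final packet once A releases it, so the server always sees each request whole and in order.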

      Mike.

       
      • Alin Sinpalean

        Alin Sinpalean - 2004-04-01

        Mike,

        Incidentally, my fix is exactly the same as yours. :o)

        Indeed, the test is not multithreaded, but there are multiple threads in the JVM at any time even if the application is single threaded. In this case it was the garbage collector closing a cursor-based ResultSet while the main thread was trying to insert a Blob large enough to require two packets.

        After searching so long for this bug one thing came to my mind: it could be possible that an outgoing LOB is cached into the output queue causing an OutOfMemoryError (very unlikely, but possible). We should think about this issue too in the future (not necessarily cache it, we could just block the sender until the connection becomes available).
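The blocking alternative mentioned above could look something like this (a hypothetical sketch using a bounded queue, not the actual jTDS buffering code): instead of caching an unbounded number of outgoing LOB packets in memory, a bounded buffer makes the sender block once it is full rather than exhausting the heap.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical bounded send buffer: put() blocks when maxPackets are
// already queued, so a large outgoing LOB cannot cause an OutOfMemoryError.
class BoundedSendBuffer {
    private final BlockingQueue<byte[]> pending;

    BoundedSendBuffer(int maxPackets) {
        pending = new ArrayBlockingQueue<>(maxPackets);
    }

    // Blocks while the buffer is full, throttling the sender.
    void buffer(byte[] packet) {
        try {
            pending.put(packet);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Returns the next buffered packet, or null if none is pending.
    byte[] next() {
        return pending.poll();
    }
}
```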

        Alin.

         
    • Mike Hutchinson

      Mike Hutchinson - 2004-04-01

      Alin,

      Ha! I had overlooked the garbage collection aspect. I didn't think we had any finalizers any more, but of course there is one in the cursor-based ResultSet.
      I think we are OK with sending large LOBs because request packets are cached to disk as well as response ones. Not very efficient, but at least the driver should not run out of memory. :-)

      Not sure about the idea of blocking the sender with the current design, as things might deadlock while waiting for the current response reader to empty the read queue.

      Mike.

       
    • Alin Sinpalean

      Alin Sinpalean - 2004-04-01

      Luis,

      Sorry for the whole mess, it's working now. You really helped a lot by submitting this issue.

      All I hope now is that the benchmark results are convincing enough. ;o)

      Alin.

       
    • Luis Filipe Neves

      Hi,

      You guys rock!! I pulled the code from CVS and it's working perfectly ... and it's fast too :-)

      I thank you all for the fast reply.

      Regards,
      Luis Neves

       
