"Could not deserialize" exception seen when running HA setup with HSQLDB

Ganesh
2013-02-11
2013-03-08
  • Ganesh

    Ganesh - 2013-02-11

    Hi,

    My setup consists of two or more HSQLDB (version 2.2.9) nodes running in an HA setup using HA-JDBC (2.0.16, latest code). Initially the nodes are started with no data in them, so whatever is inserted should be replicated uniformly across the HSQLDB nodes.

    Now, one of the tables has a blob column in it. We insert 3 or 4 rows into that table and everything works fine at that point. But after leaving the setup untouched for some time (say 1 or 2 days), we observe that reading the blob content from some of the rows (in some particular DB) raises a "could not deserialize" exception (stack trace at the end). We also observed that some of the blobs had different sizes across the HSQLDB nodes.

    We have not been able to determine what may be causing this issue, nor are we clear about the exact steps (the exact sequence of operations) for reproducing it.

    How can we debug such an issue, and what could cause it to appear only after a while (it never appears immediately after the records are inserted)?

    Note: this occurs only in HA setups; when we run a single node without HA-JDBC we have never faced this issue.

    Stack trace below:
    org.hibernate.type.SerializationException: could not deserialize
    at org.hibernate.internal.util.SerializationHelper.doDeserialize(SerializationHelper.java:244) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.internal.util.SerializationHelper.deserialize(SerializationHelper.java:300) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.type.SerializableToBlobType.fromBytes(SerializableToBlobType.java:91) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.type.SerializableToBlobType.get(SerializableToBlobType.java:83) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.type.AbstractLobType.nullSafeGet(AbstractLobType.java:79) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.type.AbstractType.hydrate(AbstractType.java:106) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.persister.entity.AbstractEntityPersister.hydrate(AbstractEntityPersister.java:2688) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.loadFromResultSet(Loader.java:1554) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.instanceNotYetLoaded(Loader.java:1486) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.getRow(Loader.java:1386) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.getRowFromResultSet(Loader.java:640) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.doQuery(Loader.java:860) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:289) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.doList(Loader.java:2449) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.doList(Loader.java:2435) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2276) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.Loader.list(Loader.java:2271) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:121) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1473) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at org.hibernate.internal.CriteriaImpl.list(CriteriaImpl.java:373) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]
    at com.guavus.rubix.filter.FilterService.getFilterByUser_aroundBody6(FilterService.java:197) [rubix-3.13.jar:na]
    at com.guavus.rubix.filter.FilterService$AjcClosure7.run(FilterService.java:1) [rubix-3.13.jar:na]
    Caused by: java.io.EOFException: null
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(Unknown Source) ~[na:1.6.0_25]
    at java.io.ObjectInputStream.readObject0(Unknown Source) ~[na:1.6.0_25]
    at java.io.ObjectInputStream.readArray(Unknown Source) ~[na:1.6.0_25]
    at java.io.ObjectInputStream.readObject0(Unknown Source) ~[na:1.6.0_25]
    at java.io.ObjectInputStream.readArray(Unknown Source) ~[na:1.6.0_25]
    at java.io.ObjectInputStream.readObject0(Unknown Source) ~[na:1.6.0_25]
    at java.io.ObjectInputStream.readObject(Unknown Source) ~[na:1.6.0_25]
    at org.hibernate.internal.util.SerializationHelper.doDeserialize(SerializationHelper.java:238) ~[hibernate-core-4.0.0.CR1.jar:4.0.0.CR1]

    Regards,
    R Ganesh

     
  • Paul Ferraro

    Paul Ferraro - 2013-02-11

    What value does HSQLDB return for connection.getMetaData().locatorsUpdateCopy()?
    HA-JDBC handles LOBs differently depending on the value returned by this method.
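
    For reference, a quick way to check this value (the in-memory URL and credentials below are just an example; substitute your own connection settings):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CheckLocators {

        public static void main(String[] args) throws Exception {
            // Connect directly to HSQLDB (not through HA-JDBC) to see the driver's own value.
            Connection connection = DriverManager.getConnection("jdbc:hsqldb:mem:test", "SA", "");
            try {
                System.out.println("locatorsUpdateCopy: " + connection.getMetaData().locatorsUpdateCopy());
            } finally {
                connection.close();
            }
        }
    }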

     
  • Ganesh

    Ganesh - 2013-02-12

    It is returning false.

    Also, we are using Hibernate for all database interaction. The POJO has a composite object (which is to be saved as a blob) with an @Lob annotation on it.
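
    For context, the mapping is roughly along these lines (class and field names are placeholders, not our actual code):

    import java.io.Serializable;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Lob;

    @Entity
    public class FilterEntity {

        @Id
        private Long id;

        // Serializable composite object persisted as a blob; Hibernate maps it
        // with SerializableToBlobType (the type seen in the stack trace above).
        @Lob
        private Settings settings;

        public static class Settings implements Serializable {
            private String name;
            private byte[] payload;
        }
    }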

     
  • Paul Ferraro

    Paul Ferraro - 2013-02-12

    OK - I was able to reproduce the issue locally. I'll have a fix ready soon.

     
  • Ganesh

    Ganesh - 2013-02-13

    That's great. I would also like to understand where the issue was.
    Waiting for your update. Thanks for the prompt response.

     
  • Paul Ferraro

    Paul Ferraro - 2013-02-13

    If DatabaseMetaData.locatorsUpdateCopy() returns false, that means the JDBC driver makes updates directly to the blob rather than to a copy of it. Consequently, the Blob returned by ResultSet.getBlob(...) will proxy the blobs returned by each database, and calls to Blob.setBytes(...) get properly applied to each database. However, if your JDBC client uses Blob.setBinaryStream(long) - which is the case with Hibernate @Lob types - this isn't currently handled correctly. That method returns an OutputStream whose bytes need to be written to each database; currently, the stream returned by this method only points to a single database, so blob updates aren't replicated correctly to all databases.
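
    To make the distinction concrete, here is a simplified sketch of the two write paths (the data and positions are illustrative, not HA-JDBC internals):

    import java.io.OutputStream;
    import java.sql.Blob;
    import java.sql.Connection;

    public class BlobWritePaths {

        static void write(Connection connection, byte[] data) throws Exception {
            // With HA-JDBC, this Blob is a proxy over one Blob per database.
            Blob blob = connection.createBlob();

            // Path 1: setBytes(...) is invoked on the proxy itself, so HA-JDBC
            // replays it against every database and the write replicates correctly.
            blob.setBytes(1, data);

            // Path 2: setBinaryStream(long) returns a concrete OutputStream tied
            // to a single underlying Blob, so these writes reach only one database.
            OutputStream out = blob.setBinaryStream(1);
            try {
                out.write(data);
            } finally {
                out.close();
            }
        }
    }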

    To fix this, HA-JDBC needs to proxy the output stream as well. The solution isn't trivial, since OutputStream isn't an interface and can't be proxied in the same way as our other JDBC objects.
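
    The likely shape of the fix is a stream that fans each write out to one underlying stream per database - roughly the following sketch, not the final implementation:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.List;

    // Subclassing works where java.lang.reflect.Proxy cannot, since
    // OutputStream is a class rather than an interface.
    public class TeeOutputStream extends OutputStream {

        private final List<OutputStream> streams;

        public TeeOutputStream(List<OutputStream> streams) {
            this.streams = streams;
        }

        @Override
        public void write(int b) throws IOException {
            for (OutputStream stream : streams) {
                stream.write(b);
            }
        }

        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            for (OutputStream stream : streams) {
                stream.write(b, off, len);
            }
        }

        @Override
        public void flush() throws IOException {
            for (OutputStream stream : streams) {
                stream.flush();
            }
        }

        @Override
        public void close() throws IOException {
            for (OutputStream stream : streams) {
                stream.close();
            }
        }
    }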

    The same bug applies to Clob.setAsciiStream(long)/setCharacterStream(long), and SQLXML.setBinaryStream()/setCharacterStream()/setResult(Class).

     
  • Ganesh

    Ganesh - 2013-02-14

    Hi Paul, thanks for the prompt response once again.
    However, I am not able to relate a few aspects of my issue to the problem description you have provided.
    1) Our issue is intermittent in nature: it occurs only in some rows (blobs), not all.
    2) The issue on reading the blob never appears immediately after the row (blob) is inserted; it occurs after a day or two. What could be the reason for the delay? Could it be that Hibernate initially serves the blob content from its L2 cache (which we have enabled)?
    3) Also, when you say "updates", do you mean updates to the row (blob content)? I am asking because the problematic rows (blobs) in our case have never been updated/modified. The version column we use for optimistic locking in Hibernate is 0 in all the cases seen so far.

    Now, if I keep all the above points aside and override the useInputStreamToInsertBlob() method in HSQLDialect (shipped with Hibernate) to return false, that should make Hibernate set Blobs directly rather than through a binary stream - see the sketch below. Would this approach work and solve our issue? And why does the default implementation of this method in Hibernate's Dialect class return true?
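
    Concretely, the workaround I have in mind is a custom dialect along these lines (the class name is our own placeholder), registered via the hibernate.dialect property:

    import org.hibernate.dialect.HSQLDialect;

    public class NonStreamingHSQLDialect extends HSQLDialect {

        // Ask Hibernate to bind blob values directly (e.g. via setBytes(...))
        // rather than writing them through Blob.setBinaryStream(long).
        @Override
        public boolean useInputStreamToInsertBlob() {
            return false;
        }
    }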

     
  • Paul Ferraro

    Paul Ferraro - 2013-02-18

    If you're using a 2nd-level cache, that could easily be responsible for the intermittent nature of the issue.

    You're right that this will affect LOB creation as well as updates, depending on the access syntax. If the HSQL dialect in Hibernate creates a blob using Connection.createBlob(), then the Blob returned by HA-JDBC is a proxy to the Blob objects for each database in your cluster. If Hibernate writes data to this blob using the Blob.setBinaryStream(long) method (as opposed to any of the setBytes(...) methods), data written to the output stream returned by that method will only go to your primary database.

    Overriding "useInputStreamToInsertBlob" should be an adequate workaround, since Hibernate will then avoid the use of LOB objects at altogether.

     
