$ cvs -z3 diff -u JDBCUtil.java
Index: JDBCUtil.java
===================================================================
RCS file: /cvsroot/jboss/jboss/src/main/org/jboss/ejb/plugins/cmp/jdbc/JDBCUtil.java,v
retrieving revision 1.9.2.2
diff -u -r1.9.2.2 JDBCUtil.java
--- JDBCUtil.java	6 Aug 2002 04:15:29 -0000	1.9.2.2
+++ JDBCUtil.java 27 Sep 2002 19:59:57 -0000
@@ -208,6 +208,20 @@
return;
}
+      if(jdbcType == Types.LONGVARCHAR)
+      {
+         String string = value.toString();
+         InputStream is = null;
+         try {
+            is = new ByteArrayInputStream(string.getBytes());
+            ps.setAsciiStream(index, is, string.length());
+         } finally
+         {
+            safeClose(is);
+         }
+         return;
+      }
+
      //
      // Binary types need to be converted to a byte array and set
      //
Logged In: YES
user_id=463096
setCharacterStream is the JDBC 2.0-friendly way to do this, because it
handles Unicode properly when the DBMS supports it.
I've tested it, and it seems to work just as well as setAsciiStream.
Therefore, I think the CLOB code can also be used for LONGVARCHAR.
This also seems to be one of the rare situations in which the Oracle thin
drivers support data columns bigger than around 4-5 KB.
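For illustration, the setCharacterStream variant the comment suggests could look like the sketch below. This is my own hedged reconstruction, not code from the patch: the helper name setLongVarchar is hypothetical, and the real CLOB branch in JDBCUtil.java may differ in detail. The key difference from the patch is that streaming a Reader avoids the lossy String.getBytes() conversion that setAsciiStream requires.

```java
import java.io.StringReader;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class LongVarcharSketch
{
   // Hypothetical helper (not part of the patch): binds a LONGVARCHAR
   // parameter via the JDBC 2.0 setCharacterStream call, which streams
   // characters directly instead of converting to platform-default bytes.
   public static void setLongVarchar(PreparedStatement ps, int index, Object value)
      throws SQLException
   {
      String string = value.toString();
      // The length argument is measured in characters, not bytes.
      ps.setCharacterStream(index, new StringReader(string), string.length());
   }
}
```

Unlike the InputStream in the patch, a StringReader holds no OS resources, so there is nothing that strictly needs a safeClose() in a finally block.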