From: SourceForge.net <no...@so...> - 2010-06-28 10:12:33

Bugs item #3022281, was opened at 2010-06-28 11:31
Message generated for change (Tracker Item Submitted) made by wujeksrujek
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3022281&group_id=47439

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Bug
Group: v2.4.*
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: wujek (wujeksrujek)
Assigned to: matthias g (gommma)
Summary: NPE when setting properties

Initial Comment:
The method DatabaseConfig.convertIfNeeded is not NPE safe:

    private Object convertIfNeeded(String property, Object value)
    {
        logger.trace("convertIfNeeded(property={}, value={}) - start", property, value);

        ConfigProperty prop = findByName(property);
        Class allowedPropType = prop.getPropertyType(); // NPE here when prop is null
        if (allowedPropType == Boolean.class || allowedPropType == boolean.class)
        {
            // String -> Boolean is a special mapping which is allowed
            if (value instanceof String)
            {
                return Boolean.valueOf((String) value);
            }
        }
        return value;
    }

When a property that is not supported is set, an NPE happens. It would be much better to simply ignore such a property and maybe issue a warning.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3022281&group_id=47439
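The fix the reporter asks for, skipping unknown properties instead of dereferencing a null lookup result, can be sketched in plain Java. This is an illustration only: `ConfigProperty`, `findByName`, and the property table below are simplified stand-ins for dbUnit's internals, not the real API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a null-safe convertIfNeeded: an unsupported property name is
// ignored (a real fix would log a warning) instead of causing an NPE.
// ConfigProperty and the PROPERTIES table are illustrative stand-ins.
public class SafeConfig {
    static class ConfigProperty {
        final Class<?> propertyType;
        ConfigProperty(Class<?> propertyType) { this.propertyType = propertyType; }
    }

    private static final Map<String, ConfigProperty> PROPERTIES = new HashMap<>();
    static {
        // Hypothetical example entry for a Boolean-typed property.
        PROPERTIES.put("example.boolean.property", new ConfigProperty(Boolean.class));
    }

    static ConfigProperty findByName(String property) {
        return PROPERTIES.get(property);
    }

    public static Object convertIfNeeded(String property, Object value) {
        ConfigProperty prop = findByName(property);
        if (prop == null) {
            // Unknown property: return the value unchanged instead of throwing.
            return value;
        }
        Class<?> allowedPropType = prop.propertyType;
        if (allowedPropType == Boolean.class || allowedPropType == boolean.class) {
            // String -> Boolean is a special mapping which is allowed
            if (value instanceof String) {
                return Boolean.valueOf((String) value);
            }
        }
        return value;
    }
}
```

The only behavioral change from the snippet in the report is the null check before `getPropertyType()`.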
From: SourceForge.net <no...@so...> - 2010-06-23 15:54:31

Bugs item #3006008, was opened at 2010-05-23 16:25
Message generated for change (Comment added) made by arronax50
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3006008&group_id=47439

Category: Bug
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Igor Turkin ()
Assigned to: matthias g (gommma)
Summary: postgre export error on the column name that is keyword

Initial Comment:
e.g. if a table has a column 'limit', then export throws an exception

----------------------------------------------------------------------

Comment By: Pierre Gardin (arronax50)
Date: 2010-06-23 17:54

Message:
OK, I hadn't seen it could be configured. It's invalid then.
http://www.dbunit.org/properties.html
I thought the `` were standardized.

----------------------------------------------------------------------

Comment By: Pierre Gardin (arronax50)
Date: 2010-06-23 17:42

Message:
Same thing for me. The fix would be to protect the table names, the column names and the values.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3006008&group_id=47439
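The resolution above is that dbUnit already supports escaping identifiers through its escape-pattern property (the linked properties page). As a plain-Java illustration of why quoting protects reserved words like `limit`, a small helper might look like this; `IdentifierQuoter` is a hypothetical name, not dbUnit code:

```java
// Sketch: quote an SQL identifier so a reserved word such as "limit" can
// be used as a column name (PostgreSQL-style double quoting, with any
// embedded quote doubled). Illustrative only; dbUnit achieves this via
// its escapePattern property.
public class IdentifierQuoter {
    public static String quote(String identifier) {
        return "\"" + identifier.replace("\"", "\"\"") + "\"";
    }
}
```

With quoting applied, a generated statement would read `SELECT "limit" FROM mytable` instead of the failing unquoted form.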
From: SourceForge.net <no...@so...> - 2010-06-23 15:42:49

Bugs item #3006008, was opened at 2010-05-23 16:25
Message generated for change (Comment added) made by arronax50
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3006008&group_id=47439

Category: Bug
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Igor Turkin ()
Assigned to: matthias g (gommma)
Summary: postgre export error on the column name that is keyword

Initial Comment:
e.g. if a table has a column 'limit', then export throws an exception

----------------------------------------------------------------------

Comment By: Pierre Gardin (arronax50)
Date: 2010-06-23 17:42

Message:
Same thing for me. The fix would be to protect the table names, the column names and the values.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3006008&group_id=47439
From: SourceForge.net <no...@so...> - 2010-06-08 04:38:20

Bugs item #3013019, was opened at 2010-06-08 15:37
Message generated for change (Settings changed) made by bigmikew
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3013019&group_id=47439

Category: Bug
Group: v2.4.*
>Status: Closed
Resolution: None
Priority: 5
Private: No
Submitted By: Mike Watson (bigmikew)
Assigned to: matthias g (gommma)
Summary: Updatable statement used in ant export.

Initial Comment:
Attempting to use the Ant task to export a database, I get the following exception:

    org.dbunit.dataset.DataSetException: com.sybase.jdbc2.jdbc.SybSQLException:
    The optimizer could not find a unique index which it could use to scan
    table 'myTable' for cursor 'jconnect_implicit_1'.

It is correct that the table has no unique index, and adding one isn't an option (we have several tables that lack a unique index). Attempting to reproduce this error with a standalone class using the same Sybase JDBC driver (jconn2.jar) shows that this error occurs when createStatement is called with CONCUR_UPDATABLE as a parameter (see code sample below); if CONCUR_READ_ONLY is used, no error occurs.

    Statement s = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_UPDATABLE);
    ResultSet rs = s.executeQuery("select * from myTable");
    int rowCount = 0;
    while (rs.next()) {
        rowCount++;
    }
    System.out.println("Returned rows: " + String.valueOf(rowCount));

I have had a quick look but can't see where (or imagine why) you would want to use CONCUR_UPDATABLE when doing an export. This seems to be very reproducible: connecting to a Sybase 15 instance using the jconn2.jar JDBC driver and running the Ant task to do an export from a schema that includes a table with no unique index.

----------------------------------------------------------------------

>Comment By: Mike Watson (bigmikew)
Date: 2010-06-08 16:38

Message:
Looks like this might be an error with the jconn2 driver. Using jTDS, I don't encounter this issue.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3013019&group_id=47439
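The report above pins the Sybase cursor failure on the `CONCUR_UPDATABLE` concurrency mode; for a pure read such as an export, a read-only, forward-only statement avoids it. A minimal sketch of that choice follows; `ExportStatements` and `createExportStatement` are hypothetical names for illustration, not dbUnit's actual statement-creation code:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: request a read-only result set for export-style reads, since
// CONCUR_UPDATABLE can require a unique index (as the Sybase jconn2
// driver did in the report above) even though nothing is updated.
public class ExportStatements {
    public static int exportConcurrency() {
        return ResultSet.CONCUR_READ_ONLY;
    }

    public static Statement createExportStatement(Connection conn) throws SQLException {
        return conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, exportConcurrency());
    }
}
```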
From: SourceForge.net <no...@so...> - 2010-06-08 03:37:34

Bugs item #3013019, was opened at 2010-06-08 15:37
Message generated for change (Tracker Item Submitted) made by bigmikew
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3013019&group_id=47439

Category: Bug
Group: v2.4.*
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Mike Watson (bigmikew)
Assigned to: matthias g (gommma)
Summary: Updatable statement used in ant export.

Initial Comment:
Attempting to use the Ant task to export a database, I get the following exception:

    org.dbunit.dataset.DataSetException: com.sybase.jdbc2.jdbc.SybSQLException:
    The optimizer could not find a unique index which it could use to scan
    table 'myTable' for cursor 'jconnect_implicit_1'.

It is correct that the table has no unique index, and adding one isn't an option (we have several tables that lack a unique index). Attempting to reproduce this error with a standalone class using the same Sybase JDBC driver (jconn2.jar) shows that this error occurs when createStatement is called with CONCUR_UPDATABLE as a parameter (see code sample below); if CONCUR_READ_ONLY is used, no error occurs.

    Statement s = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_UPDATABLE);
    ResultSet rs = s.executeQuery("select * from myTable");
    int rowCount = 0;
    while (rs.next()) {
        rowCount++;
    }
    System.out.println("Returned rows: " + String.valueOf(rowCount));

I have had a quick look but can't see where (or imagine why) you would want to use CONCUR_UPDATABLE when doing an export. This seems to be very reproducible: connecting to a Sybase 15 instance using the jconn2.jar JDBC driver and running the Ant task to do an export from a schema that includes a table with no unique index.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3013019&group_id=47439
From: SourceForge.net <no...@so...> - 2010-06-05 02:20:20

Bugs item #3005153, was opened at 2010-05-21 08:25
Message generated for change (Comment added) made by sf-robot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005153&group_id=47439

Category: None
Group: None
>Status: Closed
Resolution: None
Priority: 5
Private: No
Submitted By: ggsoft (ggsoft)
Assigned to: Nobody/Anonymous (nobody)
Summary: DBUnitAssert comparing Datasets

Initial Comment:
DBUnitAssert, l. 364:

    // Verify row count
    int expectedRowsCount = expectedTable.getRowCount();
    int actualRowsCount = actualTable.getRowCount();
    if (expectedRowsCount != actualRowsCount) {
        String msg = "row count (table=" + expectedTableName + ")";
        throw failureHandler.createFailure(msg,
            String.valueOf(expectedRowsCount), String.valueOf(actualRowsCount));
    }

An expected flat XML dataset fails to test against the database dataset because NULL columns are not present in the expected dataset and cannot be defined.

Version: 2.4.7

----------------------------------------------------------------------

>Comment By: SourceForge Robot (sf-robot)
Date: 2010-06-05 02:20

Message:
This Tracker item was closed automatically by the system. It was previously set to a Pending status, and the original submitter did not respond within 14 days (the time period specified by the administrator of this Tracker).

----------------------------------------------------------------------

Comment By: Jeff Jensen (jeffjensen)
Date: 2010-05-21 12:33

Message:
If I understand your use case correctly (some more info would help a lot, such as a row each from expected and actual), you need to use a ReplacementDataSet.

Read about datasets here:
http://dbunit.sourceforge.net/components.html
http://dbunit.sourceforge.net/apidocs/org/dbunit/dataset/ReplacementDataSet.html

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005153&group_id=47439
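The ReplacementDataSet advice above works by substituting a placeholder token in the flat XML (commonly "[NULL]") with a real value before comparison. The substitution mechanism itself can be sketched in plain Java; `ValueReplacer` is an illustrative stand-in, not dbUnit's implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the substitution a ReplacementDataSet performs: any cell
// value matching a registered placeholder is swapped for its
// replacement (here "[NULL]" -> null) before the datasets are compared.
public class ValueReplacer {
    private final Map<Object, Object> replacements = new HashMap<>();

    public void addReplacementObject(Object original, Object replacement) {
        replacements.put(original, replacement);
    }

    public Object replace(Object value) {
        return replacements.containsKey(value) ? replacements.get(value) : value;
    }
}
```

In dbUnit itself, the equivalent registration is done on the ReplacementDataSet that wraps the expected dataset.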
From: SourceForge.net <no...@so...> - 2010-06-04 12:21:40

Bugs item #3011519, was opened at 2010-06-04 14:21
Message generated for change (Tracker Item Submitted) made by gillesgosuin
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3011519&group_id=47439

Category: Bug
Group: v2.4.*
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: GillesG (gillesgosuin)
Assigned to: matthias g (gommma)
Summary: DatabaseSequenceFilter, qualified table names and MySQL

Initial Comment:
Mixing qualified table names, MySQL and DatabaseSequenceFilter leads to wrong table ordering and constraint violations when inserting data. I'm not sure why exactly; the code involved in DatabaseSequenceFilter is way too complicated for what it has to do. I guess it's related to the fact that MySQL has no "schema" per se, and uses catalogs instead.

I don't quite understand why the DatabaseSequenceFilter class is so complicated; basically, what it needs to do is: create a graph of tables where edges are table dependencies, check for cycles, and perform a topological sort on it. I rewrote it using JGraphT (no need to reinvent the wheel) and it's 100 lines long. Here is how it looks; unfortunately, it's MySQL-dependent, because the current DbUnit API doesn't allow for a database-agnostic implementation without many contortions (IMHO, IMetaDataHandler lacks a few functions).

    package org.dbunit.util;

    import org.apache.commons.lang.builder.EqualsBuilder;
    import org.apache.commons.lang.builder.HashCodeBuilder;

    public class CollectionFriendlyQualifiedTableName extends QualifiedTableName
    {
        public CollectionFriendlyQualifiedTableName(String tableName,
                String defaultSchema, String escapePattern) {
            super(tableName, defaultSchema, escapePattern);
        }

        public CollectionFriendlyQualifiedTableName(String tableName,
                String defaultSchema) {
            super(tableName, defaultSchema);
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) {
                return true;
            }
            if (!(o instanceof QualifiedTableName)) {
                return false;
            }
            QualifiedTableName n = (QualifiedTableName) o;
            return new EqualsBuilder()
                .append(getQualifiedName().toUpperCase(),
                        n.getQualifiedName().toUpperCase())
                .isEquals();
        }

        @Override
        public int hashCode() {
            return new HashCodeBuilder()
                .append(getQualifiedName().toUpperCase())
                .toHashCode();
        }
    }

    //------------------------------------------------

    package org.dbunit.database;

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    import org.dbunit.dataset.DataSetException;
    import org.dbunit.dataset.filter.SequenceTableFilter;
    import org.dbunit.util.CollectionFriendlyQualifiedTableName;
    import org.dbunit.util.QualifiedTableName;
    import org.jgrapht.DirectedGraph;
    import org.jgrapht.alg.CycleDetector;
    import org.jgrapht.graph.DefaultDirectedGraph;
    import org.jgrapht.traverse.TopologicalOrderIterator;

    public class MySqlDatabaseSequenceFilter extends SequenceTableFilter
    {
        public MySqlDatabaseSequenceFilter(IDatabaseConnection databaseConnection)
                throws DataSetException, SQLException {
            this(databaseConnection, databaseConnection.createDataSet().getTableNames());
        }

        public MySqlDatabaseSequenceFilter(IDatabaseConnection databaseConnection,
                String[] tableNames) throws DataSetException, SQLException {
            super(sortTableNames(databaseConnection, tableNames));
        }

        private static String[] sortTableNames(IDatabaseConnection databaseConnection,
                String[] tableNames) throws DataSetException, SQLException {
            Connection connection = databaseConnection.getConnection();
            DatabaseMetaData databaseMetadata = connection.getMetaData();
            String defaultSchema = databaseConnection.getSchema();
            DirectedGraph<QualifiedTableName, Object> graph =
                new DefaultDirectedGraph<QualifiedTableName, Object>(Object.class);
            DatabaseConfig databaseConfiguration = databaseConnection.getConfig();
            String[] tableTypes = (String[]) databaseConfiguration
                .getProperty(DatabaseConfig.PROPERTY_TABLE_TYPE);

            for (String tableName : tableNames) {
                QualifiedTableName qualifiedTableName =
                    new CollectionFriendlyQualifiedTableName(tableName, defaultSchema);
                if (qualifiedTableName.getSchema() != null
                        && qualifiedTableName.getSchema().equals(defaultSchema)) {
                    ResultSet tableResultSet = databaseMetadata.getTables(
                        defaultSchema, null, qualifiedTableName.getTable(), tableTypes);
                    try {
                        if (tableResultSet.next()) {
                            graph.addVertex(qualifiedTableName);
                            if (tableResultSet.next()) {
                                throw new AmbiguousTableNameException(tableName);
                            }
                        }
                    } finally {
                        tableResultSet.close();
                    }
                }
            }

            for (QualifiedTableName qualifiedTableName : graph.vertexSet()) {
                ResultSet importedKeysResultSet = databaseMetadata.getImportedKeys(
                    qualifiedTableName.getSchema(), null, qualifiedTableName.getTable());
                try {
                    while (importedKeysResultSet.next()) {
                        String referencedTableSchema = importedKeysResultSet.getString(1);
                        String referencedTable = importedKeysResultSet.getString(3);
                        QualifiedTableName qualifiedReferencedTableName =
                            new CollectionFriendlyQualifiedTableName(
                                referencedTable, referencedTableSchema);
                        if (graph.containsVertex(qualifiedReferencedTableName)) {
                            graph.addEdge(qualifiedTableName, qualifiedReferencedTableName);
                        }
                    }
                } finally {
                    importedKeysResultSet.close();
                }
            }

            CycleDetector<QualifiedTableName, Object> cycleDetector =
                new CycleDetector<QualifiedTableName, Object>(graph);
            if (cycleDetector.detectCycles()) {
                throw new CyclicTablesDependencyException("Table dependency cycle found");
            }

            TopologicalOrderIterator<QualifiedTableName, Object> iterator =
                new TopologicalOrderIterator<QualifiedTableName, Object>(graph);
            List<String> result = new ArrayList<String>();
            while (iterator.hasNext()) {
                result.add(iterator.next().getQualifiedName());
            }
            Collections.reverse(result);
            return result.toArray(new String[0]);
        }
    }

Hope this helps... somehow. Unfortunately, I currently don't have enough time to get more involved in this :-(

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3011519&group_id=47439
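The core of the patch above, build a dependency graph, detect cycles, and topologically sort into a safe insert order, can be shown without JGraphT or a database. The sketch below uses Kahn's algorithm on plain strings; the table names and the `TableSorter` class are made up for illustration. Every table, including leaf tables, must appear as a key of the map (with an empty list if it depends on nothing):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Kahn's algorithm: emit each table only after all tables it references
// have been emitted (safe insert order), and fail on a dependency cycle.
// Standalone sketch of what the JGraphT-based patch above computes.
public class TableSorter {
    /** deps maps each table to the tables it has foreign keys into. */
    public static List<String> insertOrder(Map<String, List<String>> deps) {
        Map<String, Integer> remaining = new HashMap<>();    // unmet deps per table
        Map<String, List<String>> dependents = new HashMap<>(); // reverse edges
        for (Map.Entry<String, List<String>> e : deps.entrySet()) {
            remaining.put(e.getKey(), e.getValue().size());
            for (String target : e.getValue()) {
                dependents.computeIfAbsent(target, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : remaining.entrySet()) {
            if (e.getValue() == 0) ready.add(e.getKey());
        }
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String table = ready.remove();
            order.add(table);
            for (String dependent : dependents.getOrDefault(table, List.of())) {
                if (remaining.merge(dependent, -1, Integer::sum) == 0) {
                    ready.add(dependent);
                }
            }
        }
        if (order.size() < remaining.size()) {
            throw new IllegalStateException("Table dependency cycle found");
        }
        return order;
    }
}
```

This mirrors the reversed topological order the patch produces, without the JDBC metadata plumbing.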
From: SourceForge.net <no...@so...> - 2010-05-23 14:25:50

Bugs item #3006008, was opened at 2010-05-23 18:25
Message generated for change (Tracker Item Submitted) made by
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3006008&group_id=47439

Category: Bug
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Igor Turkin ()
Assigned to: matthias g (gommma)
Summary: postgre export error on the column name that is keyword

Initial Comment:
e.g. if a table has a column 'limit', then export throws an exception

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3006008&group_id=47439
From: SourceForge.net <no...@so...> - 2010-05-21 12:33:25

Bugs item #3005153, was opened at 2010-05-21 03:25
Message generated for change (Comment added) made by jeffjensen
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005153&group_id=47439

Category: None
Group: None
>Status: Pending
Resolution: None
Priority: 5
Private: No
Submitted By: ggsoft (ggsoft)
Assigned to: Nobody/Anonymous (nobody)
Summary: DBUnitAssert comparing Datasets

Initial Comment:
DBUnitAssert, l. 364:

    // Verify row count
    int expectedRowsCount = expectedTable.getRowCount();
    int actualRowsCount = actualTable.getRowCount();
    if (expectedRowsCount != actualRowsCount) {
        String msg = "row count (table=" + expectedTableName + ")";
        throw failureHandler.createFailure(msg,
            String.valueOf(expectedRowsCount), String.valueOf(actualRowsCount));
    }

An expected flat XML dataset fails to test against the database dataset because NULL columns are not present in the expected dataset and cannot be defined.

Version: 2.4.7

----------------------------------------------------------------------

>Comment By: Jeff Jensen (jeffjensen)
Date: 2010-05-21 07:33

Message:
If I understand your use case correctly (some more info would help a lot, such as a row each from expected and actual), you need to use a ReplacementDataSet.

Read about datasets here:
http://dbunit.sourceforge.net/components.html
http://dbunit.sourceforge.net/apidocs/org/dbunit/dataset/ReplacementDataSet.html

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005153&group_id=47439
From: SourceForge.net <no...@so...> - 2010-05-21 08:25:49

Bugs item #3005153, was opened at 2010-05-21 10:25
Message generated for change (Tracker Item Submitted) made by ggsoft
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005153&group_id=47439

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: ggsoft (ggsoft)
Assigned to: Nobody/Anonymous (nobody)
Summary: DBUnitAssert comparing Datasets

Initial Comment:
DBUnitAssert, l. 364:

    // Verify row count
    int expectedRowsCount = expectedTable.getRowCount();
    int actualRowsCount = actualTable.getRowCount();
    if (expectedRowsCount != actualRowsCount) {
        String msg = "row count (table=" + expectedTableName + ")";
        throw failureHandler.createFailure(msg,
            String.valueOf(expectedRowsCount), String.valueOf(actualRowsCount));
    }

An expected flat XML dataset fails to test against the database dataset because NULL columns are not present in the expected dataset and cannot be defined.

Version: 2.4.7

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005153&group_id=47439
From: SourceForge.net <no...@so...> - 2010-05-21 08:22:15

Bugs item #3005151, was opened at 2010-05-21 10:11
Message generated for change (Comment added) made by ggsoft
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005151&group_id=47439

Category: Bug
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: ggsoft (ggsoft)
Assigned to: matthias g (gommma)
Summary: Case sensitivity in DBUnitAssert

Initial Comment:
When testing on a Linux machine with MySQL, DbUnitAssert, l. 253:

    public void assertEquals(IDataSet expectedDataSet, IDataSet actualDataSet,
        FailureHandler failureHandler) throws DatabaseUnitException

checks table names with:

    // tables
    for (int i = 0; i < expectedNames.length; i++) {
        String name = expectedNames[i];
        assertEquals(expectedDataSet.getTable(name),
            actualDataSet.getTable(name), failureHandler);
    }

These names are obtained at l. 232:

    String[] expectedNames = getSortedUpperTableNames(expectedDataSet);
    String[] actualNames = getSortedUpperTableNames(actualDataSet);

and are upper case. So the upper-cased table names of the expected dataset (which is flat XML) are tested against MySQL (which is case sensitive), and therefore the table is not found in:

    assertEquals(expectedDataSet.getTable(name),
        actualDataSet.getTable(name), failureHandler);

Maybe this is related to: problem because tableName is case insensitive - ID: 1214252

----------------------------------------------------------------------

>Comment By: ggsoft (ggsoft)
Date: 2010-05-21 10:22

Message:
version: 2.4.7

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005151&group_id=47439
From: SourceForge.net <no...@so...> - 2010-05-21 08:11:27

Bugs item #3005151, was opened at 2010-05-21 10:11
Message generated for change (Tracker Item Submitted) made by ggsoft
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005151&group_id=47439

Category: Bug
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: ggsoft (ggsoft)
Assigned to: matthias g (gommma)
Summary: Case sensitivity in DBUnitAssert

Initial Comment:
When testing on a Linux machine with MySQL, DbUnitAssert, l. 253:

    public void assertEquals(IDataSet expectedDataSet, IDataSet actualDataSet,
        FailureHandler failureHandler) throws DatabaseUnitException

checks table names with:

    // tables
    for (int i = 0; i < expectedNames.length; i++) {
        String name = expectedNames[i];
        assertEquals(expectedDataSet.getTable(name),
            actualDataSet.getTable(name), failureHandler);
    }

These names are obtained at l. 232:

    String[] expectedNames = getSortedUpperTableNames(expectedDataSet);
    String[] actualNames = getSortedUpperTableNames(actualDataSet);

and are upper case. So the upper-cased table names of the expected dataset (which is flat XML) are tested against MySQL (which is case sensitive), and therefore the table is not found in:

    assertEquals(expectedDataSet.getTable(name),
        actualDataSet.getTable(name), failureHandler);

Maybe this is related to: problem because tableName is case insensitive - ID: 1214252

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3005151&group_id=47439
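The mismatch described above comes from comparing upper-cased expected names against MySQL's case-sensitive actual names. One direction for a fix is a case-insensitive match that preserves the actual table's original spelling, so the database lookup uses the real name. A sketch of that idea (the `TableNameMatcher` class is hypothetical, not dbUnit code):

```java
import java.util.Locale;

// Sketch: find the actual table whose upper-cased name matches the
// expected name, and return it in its original (case-sensitive)
// spelling, so a lookup against a case-sensitive MySQL instance works.
public class TableNameMatcher {
    public static String resolve(String expectedName, String[] actualNames) {
        String wanted = expectedName.toUpperCase(Locale.ENGLISH);
        for (String actual : actualNames) {
            if (actual.toUpperCase(Locale.ENGLISH).equals(wanted)) {
                return actual; // original spelling, suitable for getTable()
            }
        }
        return null; // no match
    }
}
```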
From: Jeff J. <jj...@ap...> - 2010-05-19 01:55:11
For the curious, the final answer was to call flush(). It is not needed with local transactions, but it is needed with the global ones, so the changes make it to the driver and the dbUnit query can see them.
-----Original Message-----
From: Zdeněk Vráblík [mailto:zd...@vr...]
Sent: Tuesday, May 18, 2010 6:59 AM
To: dbu...@li...
Subject: Re: [dbunit-developer] dbUnit & JTA
Could you send me the code? I could try it with Bitronix.
Regards,
Zdenek
On Tue, May 18, 2010 at 12:40 PM, Jeff Jensen <jj...@ap...> wrote:
> Same result.
>
> dbDataSourceEmbedded is the database datasource (no affiliation to the transaction manager). From the "dataSource" class' Javadoc:
>
> "The preferred class for using Atomikos connection pooling. Use an instance of this class if you want to use Atomikos JTA-
> enabled connection pooling. All you need to do is construct an instance and set the required properties as outlined
> below. The resulting bean will automatically register with the transaction service (for recovery) and take part in active
> transactions. All SQL done over connections (gotten from this class) will participate in JTA transactions."
>
>
> -----Original Message-----
> From: Zdeněk Vráblík [mailto:zd...@vr...]
> Sent: Tuesday, May 18, 2010 2:02 AM
> To: dbu...@li...
> Subject: Re: [dbunit-developer] dbUnit & JTA
>
> Hi,
> Could you try using dbDataSourceEmbedded instead of dataSource in
> databaseTester?
>
> The transaction manager must be aware of the transaction, otherwise the
> connection is in autocommit mode.
>
> Regards,
> Zdenek
> On Tue, May 18, 2010 at 6:23 AM, Jeff Jensen <jj...@ap...> wrote:
>> Hi Zdeněk, thanks for the reply. I'll add some more info for clarity and see what you think -
>>
>> Right on the commit - I only added that JTA commit to the test to prove the point that if it was committed, then dbUnit would see the inserted rows for the dataset verification.
>>
>> Thanks for the tip to Bitronix - I was not aware of it. I wonder if a different one than Atomikos would have same/similar issues, or different!
>>
>> Regarding DataSource, I've configured DataSourceDatabaseTester, which takes the DataSource and calls getConnection() on it when called for.
>> I've configured the creation of the JTA related beans like this:
>>
>> <tx:annotation-driven />
>>
>> <bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
>> <property name="transactionManager" ref="atomikosTransactionManager" />
>> <property name="userTransaction" ref="atomikosUserTransaction" />
>> </bean>
>>
>> <bean id="atomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager"
>> init-method="init" destroy-method="close">
>> <property name="forceShutdown" value="false" />
>> </bean>
>>
>> <bean id="atomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp">
>> <property name="transactionTimeout" value="300" />
>> </bean>
>>
>> <bean id="dbDataSourceEmbedded" class="org.apache.derby.jdbc.EmbeddedXADataSource">
>> <property name="databaseName" value="${cmr.databaseName.cmr3}" />
>> <property name="createDatabase" value="${cmr.createDatabase.cmr3}"/>
>> <property name="connectionAttributes" value="${cmr.connectionAttributes.cmr3}" />
>> </bean>
>>
>> (2 more EmbeddedXADataSource configured the same)
>>
>> <bean id="dataSource" class="com.atomikos.jdbc.AtomikosDataSourceBean" init-method="init" destroy-method="close">
>> <property name="uniqueResourceName" value="cmr3DataSource" />
>> <property name="xaDataSource" ref="${cmr.dbDataSourceBeanId.cmr3}" />
>> <property name="minPoolSize" value="${cmr.minPoolSize.cmr3}" />
>> <property name="maxPoolSize" value="${cmr.maxPoolSize.cmr3}" />
>> <property name="defaultIsolationLevel" value="${cmr.defaultIsolationLevel.cmr3}" />
>> </bean>
>>
>> (2 more AtomikosDataSourceBean configured the same)
>>
>> Then 3 LocalContainerEntityManagerFactoryBean configurations each using one of the 3 dataSource beans.
>>
>> As part of dbUnit configuration, I configure a DataSourceDatabaseTester with the dataSource:
>>
>> <bean id="databaseTester" class="org.dbunit.DataSourceDatabaseTester">
>> <constructor-arg ref="dataSource" />
>> <property name="tearDownOperation">
>> <util:constant static-field="org.dbunit.operation.DatabaseOperation.DELETE_ALL" />
>> </property>
>> </bean>
>>
>> So Atomikos is serving up the connection, but it is not in the same transaction (?). I set transaction isolation to 1 (read uncommitted) in case that would help, but it does not.
>>
>>
>> Can you think of anything else needed so that dbUnit will join the JTA transaction in progress?
>>
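A possible direction (a sketch under assumptions, not something tried in this thread; the dbUnitDataSource bean id is invented for illustration): wrap the Atomikos pool in Spring's TransactionAwareDataSourceProxy and hand the proxy to DataSourceDatabaseTester, so that getConnection() goes through Spring's transaction synchronization and dbUnit reuses the connection bound to the ongoing Spring-managed JTA transaction.

```xml
<!-- Hypothetical wiring: proxy the JTA-enabled pool so callers such as
     dbUnit reuse the transaction-bound connection. -->
<bean id="dbUnitDataSource"
      class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
    <constructor-arg ref="dataSource" />
</bean>

<bean id="databaseTester" class="org.dbunit.DataSourceDatabaseTester">
    <constructor-arg ref="dbUnitDataSource" />
    <property name="tearDownOperation">
        <util:constant static-field="org.dbunit.operation.DatabaseOperation.DELETE_ALL" />
    </property>
</bean>
```

This only helps if the test itself runs inside a Spring-managed transaction (e.g. via @Transactional); outside one, the proxy falls back to a plain pooled connection.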
>>
>> -----Original Message-----
>> From: Zdeněk Vráblík [mailto:zd...@vr...]
>> Sent: Monday, May 17, 2010 11:31 AM
>> To: dbu...@li...
>> Subject: Re: [dbunit-developer] dbUnit & JTA
>>
>> Hi, I was using JTA, but not with DbUnit.
>>
>> The code between the begin and the commit/rollback of a distributed
>> transaction shouldn't contain any local transaction code (commit,
>> rollback, or savepoints on the connection). Some DB drivers throw
>> exceptions if commit is called during a distributed transaction.
>>
>> I am using Bitronix in Tomcat as the distributed transaction manager.
>> http://docs.codehaus.org/display/BTM/
>>
>> The only difference when you are using distributed transactions is how
>> the datasource creates the connection. Getting a connection is the
>> same as getting one without JTA.
>>
>> I think you could instantiate the transaction manager, configure the
>> datasource to be transactional, and get a connection. It is a
>> different class than without distributed transactions, but it
>> implements the Connection interface. Then you could start the global
>> transaction.
>>
>> Instantiate the DbUnit connection:
>>
>> public static DatabaseConnection getDbUnitConnection( Connection jdbcConnection )
>>         throws DatabaseUnitException
>> {
>>     DatabaseConnection connection = new DatabaseConnection( jdbcConnection );
>>     DatabaseConfig config = connection.getConfig();
>>     config.setProperty( DatabaseConfig.PROPERTY_DATATYPE_FACTORY,
>>             new Oracle10DataTypeFactory( jdbcConnection ) );
>>     return connection;
>> }
>>
>> Perform any tests, and then you can just roll back or commit the
>> global transaction during or after your test.
>>
>> If the transaction manager is not instantiated, or the connection is
>> not configured to be part of the distributed transaction, then the
>> connection is in autocommit mode.
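The flow described above could be sketched roughly as follows (a sketch only: the Atomikos and Derby classes match the configuration quoted earlier in the thread, but the standalone wiring, database name, and pool sizes are assumptions, and the snippet needs the Atomikos, Derby, and dbUnit jars to compile):

```java
import java.sql.Connection;

import javax.transaction.UserTransaction;

import org.apache.derby.jdbc.EmbeddedXADataSource;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;

import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.jdbc.AtomikosDataSourceBean;

/** Sketch: DbUnit over a connection enlisted in a JTA transaction. */
public class JtaDbUnitSketch {

    public static void main(String[] args) throws Exception {
        // JTA-enabled pool, mirroring the Spring XML from the thread.
        EmbeddedXADataSource xaDataSource = new EmbeddedXADataSource();
        xaDataSource.setDatabaseName("cmr3");      // assumed name
        xaDataSource.setCreateDatabase("create");

        AtomikosDataSourceBean dataSource = new AtomikosDataSourceBean();
        dataSource.setUniqueResourceName("cmr3DataSource");
        dataSource.setXaDataSource(xaDataSource);
        dataSource.setMinPoolSize(1);
        dataSource.setMaxPoolSize(5);
        dataSource.init();

        UserTransaction utx = new UserTransactionImp();
        utx.begin();                               // start the global transaction
        try {
            // A connection obtained *inside* the transaction is enlisted in it.
            Connection jdbcConnection = dataSource.getConnection();
            IDatabaseConnection dbUnitConnection =
                    new DatabaseConnection(jdbcConnection);

            // ... exercise the application code, then verify with DbUnit ...

        } finally {
            utx.rollback();                        // or utx.commit()
            dataSource.close();
        }
    }
}
```

The key ordering point is that utx.begin() comes before dataSource.getConnection(); a connection obtained with no transaction active on the thread stays in autocommit mode.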
>>
>> Regards,
>> Zdenek
>>
>> On Mon, May 17, 2010 at 1:48 PM, Jeff Jensen <jj...@ap...> wrote:
>>> Thanks for the reply and ideas, John.
>>>
>>>
>>>
>>> If I add a UserTransaction.commit() in the test, just before calling the
>>> dbUnit verification, dbUnit sees the data (as one would expect!). I think
>>> the issue is that each connection has its own transaction vs. a
>>> shared/global one. I'm wondering if dbUnit needs some tweaks to support
>>> this… I will also revisit the read_uncommitted setting today.
>>>
>>>
>>>
>>> dbUnit needs to join the JTA transaction in progress and I thought this was
>>> automatic per the spec when getting a connection from the same
>>> datasource/any datasource managed by JTA. I configured dbUnit with the same
>>> datasource, but something is not correct yet!
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> From: John Hurst [mailto:joh...@gm...]
>>> Sent: Monday, May 17, 2010 4:34 AM
>>> To: dbu...@li...
>>> Subject: Re: [dbunit-developer] dbUnit & JTA
>>>
>>>
>>>
>>> Jeff,
>>>
>>> No experience here using DbUnit with JTA, sorry.
>>>
>>> I believe it's possible for JPA findById() to return a result from the
>>> identity map without hitting the database -- I am sure you have considered
>>> this.
>>>
>>> Have you considered adding a JDBC logger to see what's going through your
>>> JDBC driver?
>>>
>>> I've used p6spy several times in the past.
>>>
>>> I noticed a new one recently that I've been meaning to try:
>>>
>>> http://code.google.com/p/jdbcdslog/
>>>
>>> JH
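For the p6spy route mentioned above, the era-appropriate setup is roughly the following (a sketch; the 1.x-style property names and log file location are assumptions to be checked against the p6spy docs):

```properties
# spy.properties (p6spy 1.x-style sketch -- names are assumptions)
module.log=com.p6spy.engine.logging.P6LogFactory
realdriver=org.apache.derby.jdbc.EmbeddedDriver
logfile=spy.log
```

The application is then pointed at com.p6spy.engine.spy.P6SpyDriver instead of the real driver. Note that p6spy wraps JDBC drivers, so in an XADataSource-based setup like the one in this thread it may not slot in directly.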
>>>
>>> On Mon, May 17, 2010 at 4:19 AM, Jeff Jensen <jj...@ap...> wrote:
>>>
>>> You guys have examples to have dbUnit use a datasource with JTA?
>>>
>>> We have a bunch of tests that have worked great with non-datasource (JDBC
>>> connection/transaction), but recently needed to add 2 more connections to
>>> the app/configuration, so I reconfigured for them and added JTA.
>>>
>>> The problem is, after converting to JTA, the dbUnit connection doesn't see
>>> the changes in progress. I am using the same datasource Spring bean for the
>>> app and dbUnit config. For example, if a test does a JPA persist then uses
>>> dbUnit to compare with expected XML dataset, it fails because no row is
>>> found in the database. If in the test I instead just do a JPA findById, it
>>> of course finds it.
>>>
>>> I also experimented with setting the isolation level to 1 and created a
>>> Spring pointcut to apply Spring transactions to all dbUnit methods (!),
>>> but got the same result in both cases.
>>>
>>> I've spent the weekend working on this conversion and have a lot working
>>> now, and I'm hoping one of you has an example or advice!
>>>
>>>
>>>
>>> ------------------------------------------------------------------------------
>>>
>>> _______________________________________________
>>> dbunit-developer mailing list
>>> dbu...@li...
>>> https://lists.sourceforge.net/lists/listinfo/dbunit-developer
>>>
>>>
>>> --
>>> Life is interfering with my game
>>>
>>
>
------------------------------------------------------------------------------
_______________________________________________
dbunit-developer mailing list
dbu...@li...
https://lists.sourceforge.net/lists/listinfo/dbunit-developer
|
|
From: Zdeněk V. <zd...@vr...> - 2010-05-18 11:58:45
|
Could you send me the code? I could try it with Bitronix.
Regards,
Zdenek
On Tue, May 18, 2010 at 12:40 PM, Jeff Jensen <jj...@ap...> wrote:
> Same result.
>
> dbDataSourceEmbedded is the database datasource (no affiliation to the transaction manager). From the "dataSource" class' Javadoc:
>
> "The preferred class for using Atomikos connection pooling. Use an instance of this class if you want to use Atomikos JTA-
> enabled connection pooling. All you need to do is construct an instance and set the required properties as outlined
> below. The resulting bean will automatically register with the transaction service (for recovery) and take part in active
> transactions. All SQL done over connections (gotten from this class) will participate in JTA transactions."
>
>
|
|
From: Jeff J. <jj...@ap...> - 2010-05-18 11:40:27
|
Same result.
dbDataSourceEmbedded is the database datasource (no affiliation to the transaction manager). From the "dataSource" class' Javadoc:
"The preferred class for using Atomikos connection pooling. Use an instance of this class if you want to use Atomikos JTA-
enabled connection pooling. All you need to do is construct an instance and set the required properties as outlined
below. The resulting bean will automatically register with the transaction service (for recovery) and take part in active
transactions. All SQL done over connections (gotten from this class) will participate in JTA transactions."
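One way to check the "not in the same transaction (?)" theory (a diagnostic sketch, not something from this thread; it assumes Atomikos on the classpath) is to ask the JTA TransactionManager for its status on the current thread, both in the test and at the point where DataSourceDatabaseTester obtains its connection:

```java
import javax.transaction.Status;
import javax.transaction.TransactionManager;

/** Diagnostic sketch: is a JTA transaction active on this thread? */
public class TxStatusProbe {

    public static void dump(TransactionManager tm) throws Exception {
        int status = tm.getStatus();
        String label = (status == Status.STATUS_ACTIVE) ? "ACTIVE"
                : (status == Status.STATUS_NO_TRANSACTION) ? "NO_TRANSACTION"
                : String.valueOf(status);
        // If this prints NO_TRANSACTION just before the DbUnit comparison,
        // the tester's connection was never enlisted in the test's transaction.
        System.out.println("JTA status on "
                + Thread.currentThread().getName() + ": " + label);
    }
}
```

Calling dump(new com.atomikos.icatch.jta.UserTransactionManager()) from the test body and from the dbUnit setup should show whether both run under the same active transaction or whether the tester's connection sits outside it.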
-----Original Message-----
From: Zdeněk Vráblík [mailto:zd...@vr...]
Sent: Tuesday, May 18, 2010 2:02 AM
To: dbu...@li...
Subject: Re: [dbunit-developer] dbUnit & JTA
Hi,
Could you try using dbDataSourceEmbedded instead of dataSource in
databaseTester?
The transaction manager must be aware of the transaction; otherwise the
connection is in autocommit mode.
Regards,
Zdenek
------------------------------------------------------------------------------
_______________________________________________
dbunit-developer mailing list
dbu...@li...
https://lists.sourceforge.net/lists/listinfo/dbunit-developer
|
|
From: Zdeněk V. <zd...@vr...> - 2010-05-18 07:02:28
|
Hi,
Could you try using dbDataSourceEmbedded instead of dataSource in
databaseTester?
The transaction manager must be aware of the transaction; otherwise the
connection is in autocommit mode.
Regards,
Zdenek
>> https://lists.sourceforge.net/lists/listinfo/dbunit-developer
>>
>>
>> --
>> Life is interfering with my game
>>
>> ------------------------------------------------------------------------------
>>
>>
>> _______________________________________________
>> dbunit-developer mailing list
>> dbu...@li...
>> https://lists.sourceforge.net/lists/listinfo/dbunit-developer
>>
>>
>
> ------------------------------------------------------------------------------
>
> _______________________________________________
> dbunit-developer mailing list
> dbu...@li...
> https://lists.sourceforge.net/lists/listinfo/dbunit-developer
>
>
> ------------------------------------------------------------------------------
>
> _______________________________________________
> dbunit-developer mailing list
> dbu...@li...
> https://lists.sourceforge.net/lists/listinfo/dbunit-developer
>
|
|
From: Jeff J. <jj...@ap...> - 2010-05-18 05:24:02
|
Hi Zdeněk, thanks for the reply. I'll add some more info for clarity and see what you think -
Right on the commit - I only added that JTA commit to the test to prove the point that if it was committed, then dbUnit would see the inserted rows for the dataset verify.
Thanks for the tip about Bitronix - I was not aware of it. I wonder whether a different transaction manager than Atomikos would have the same/similar issues, or different ones!
Regarding DataSource, I've configured DataSourceDatabaseTester, which takes the DataSource and calls getConnection() on it when called for.
I've configured the creation of the JTA-related beans like this:

<tx:annotation-driven />

<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
    <property name="transactionManager" ref="atomikosTransactionManager" />
    <property name="userTransaction" ref="atomikosUserTransaction" />
</bean>

<bean id="atomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager"
        init-method="init" destroy-method="close">
    <property name="forceShutdown" value="false" />
</bean>

<bean id="atomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp">
    <property name="transactionTimeout" value="300" />
</bean>

<bean id="dbDataSourceEmbedded" class="org.apache.derby.jdbc.EmbeddedXADataSource">
    <property name="databaseName" value="${cmr.databaseName.cmr3}" />
    <property name="createDatabase" value="${cmr.createDatabase.cmr3}" />
    <property name="connectionAttributes" value="${cmr.connectionAttributes.cmr3}" />
</bean>

(2 more EmbeddedXADataSource configured the same)

<bean id="dataSource" class="com.atomikos.jdbc.AtomikosDataSourceBean" init-method="init" destroy-method="close">
    <property name="uniqueResourceName" value="cmr3DataSource" />
    <property name="xaDataSource" ref="${cmr.dbDataSourceBeanId.cmr3}" />
    <property name="minPoolSize" value="${cmr.minPoolSize.cmr3}" />
    <property name="maxPoolSize" value="${cmr.maxPoolSize.cmr3}" />
    <property name="defaultIsolationLevel" value="${cmr.defaultIsolationLevel.cmr3}" />
</bean>

(2 more AtomikosDataSourceBean configured the same)

Then 3 LocalContainerEntityManagerFactoryBean configurations, each using one of the 3 dataSource beans.

As part of the dbUnit configuration, I configure a DataSourceDatabaseTester with the dataSource:

<bean id="databaseTester" class="org.dbunit.DataSourceDatabaseTester">
    <constructor-arg ref="dataSource" />
    <property name="tearDownOperation">
        <util:constant static-field="org.dbunit.operation.DatabaseOperation.DELETE_ALL" />
    </property>
</bean>
So Atomikos is serving up the connection, but it is not in the same transaction(?). I set the transaction isolation level to 1 (read uncommitted) in case that would help, but it does not.
Can you think of anything else needed so that dbUnit will join the JTA transaction in progress?
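[One avenue that might be worth trying - an assumption on my part, not something confirmed in this thread: wrap the pooled DataSource in Spring's TransactionAwareDataSourceProxy, so that getConnection() hands back the connection bound to the current Spring-managed transaction, and give that proxy to the DataSourceDatabaseTester. The bean names are illustrative:]

```xml
<!-- Hypothetical wiring: dbUnit gets the connection enlisted in the active transaction. -->
<bean id="txAwareDataSource"
      class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
    <constructor-arg ref="dataSource" />
</bean>

<bean id="databaseTester" class="org.dbunit.DataSourceDatabaseTester">
    <constructor-arg ref="txAwareDataSource" />
</bean>
```

[If DbUnit's reads then travel over the same transactional connection as the JPA writes, the uncommitted rows should be visible to the dataset comparison.]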
-----Original Message-----
From: Zdeněk Vráblík [mailto:zd...@vr...]
Sent: Monday, May 17, 2010 11:31 AM
To: dbu...@li...
Subject: Re: [dbunit-developer] dbUnit & JTA
Hi, I was using JTA but not with DbUnit.
The code between begin and commit/rollback of a distributed transaction
shouldn't contain any local transaction code (commit, rollback, savepoints
on the connection). Some DB drivers throw exceptions if commit is called
during a distributed transaction.
I am using Bitronix in Tomcat as the distributed transaction manager.
http://docs.codehaus.org/display/BTM/
The only difference when you are using a distributed transaction is how
the datasource creates the connection.
Getting a connection is the same as getting a connection without JTA.
I think you could instantiate the transaction manager, configure the
datasource to be transactional, and get a connection.
It is a different class than without distributed transactions, but it
implements the Connection interface.
Then you can start the global transaction.
Instantiate the DbUnit connection:

public static DatabaseConnection getDbUnitConnection(Connection jdbcConnection)
        throws DatabaseUnitException {
    DatabaseConnection connection = new DatabaseConnection(jdbcConnection);
    DatabaseConfig config = connection.getConfig();
    config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY,
            new Oracle10DataTypeFactory(jdbcConnection));
    return connection;
}

Perform any tests, and then you can just roll back or commit the global
transaction during or after your test.
If the transaction manager is not instantiated and/or the connection
is not configured to be part of the distributed transaction, then
the connection is in autocommit mode.
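[Putting those steps together, the flow could be sketched like this - illustrative only; it assumes a JTA UserTransaction and an XA-enabled DataSource, and omits the actual DbUnit assertions:]

```java
import java.sql.Connection;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;

// Sketch: run the whole test inside one global transaction and roll it back
// at the end, so the code under test and DbUnit see the same uncommitted data.
public class GlobalTransactionSketch {
    public static void runTest(UserTransaction utx, DataSource ds) throws Exception {
        utx.begin();
        Connection jdbc = ds.getConnection(); // enlisted in the global transaction
        try {
            // ... exercise the code under test here (e.g. a JPA persist) ...
            IDatabaseConnection dbUnitConnection = new DatabaseConnection(jdbc);
            // ... DbUnit comparisons against the expected dataset ...
        } finally {
            utx.rollback(); // leave the database unchanged after the test
            jdbc.close();
        }
    }
}
```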
Regards,
Zdenek
|
|
From: Zdeněk V. <zd...@vr...> - 2010-05-17 16:30:39
|
Hi, I was using JTA but not with DbUnit.

The code between begin and commit/rollback of a distributed transaction shouldn't contain any local transaction code (commit, rollback, savepoints on the connection). Some DB drivers throw exceptions if commit is called during a distributed transaction.

I am using Bitronix in Tomcat as the distributed transaction manager. http://docs.codehaus.org/display/BTM/

The only difference when you are using a distributed transaction is how the datasource creates the connection. Getting a connection is the same as getting a connection without JTA.

I think you could instantiate the transaction manager, configure the datasource to be transactional, and get a connection. It is a different class than without distributed transactions, but it implements the Connection interface. Then you can start the global transaction.

Instantiate the DbUnit connection:

public static DatabaseConnection getDbUnitConnection(Connection jdbcConnection)
        throws DatabaseUnitException {
    DatabaseConnection connection = new DatabaseConnection(jdbcConnection);
    DatabaseConfig config = connection.getConfig();
    config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY,
            new Oracle10DataTypeFactory(jdbcConnection));
    return connection;
}

Perform any tests, and then you can just roll back or commit the global transaction during or after your test.

If the transaction manager is not instantiated and/or the connection is not configured to be part of the distributed transaction, then the connection is in autocommit mode.

Regards,
Zdenek
|
|
From: Jeff J. <jj...@ap...> - 2010-05-17 12:49:11
|
Thanks for the reply and ideas, John.

If I add a UserTransaction.commit() in the test, just before calling the dbUnit verification, dbUnit sees the data (as one would expect!). I think the issue is that each connection has its own transaction vs. a shared/global one. I'm wondering if dbUnit needs some tweaks to support this. I will also revisit the read_uncommitted setting today.

dbUnit needs to join the JTA transaction in progress, and I thought this was automatic per the spec when getting a connection from the same datasource/any datasource managed by JTA. I configured dbUnit with the same datasource, but something is not correct yet!
|
|
From: John H. <joh...@gm...> - 2010-05-17 09:34:21
|
Jeff,

No experience here using DbUnit with JTA, sorry.

I believe it's possible for JPA findById() to return a result from the identity map without hitting the database -- I am sure you have considered this.

Have you considered adding a JDBC logger to see what's going through your JDBC driver? I've used p6spy several times in the past. I noticed a new one recently that I've been meaning to try: http://code.google.com/p/jdbcdslog/

JH

--
Life is interfering with my game
|
|
From: Jeff J. <jj...@ap...> - 2010-05-16 16:19:58
|
You guys have examples to have dbUnit use a datasource with JTA?

We have a bunch of tests that have worked great with non-datasource (JDBC connection/transaction), but recently needed to add 2 more connections to the app/configuration, so I reconfigured for them and added JTA.

The problem is, after converting to JTA, the dbUnit connection doesn't see the changes in progress. I am using the same datasource Spring bean for the app and dbUnit config. For example, if a test does a JPA persist then uses dbUnit to compare with an expected XML dataset, it fails because no row is found in the database. If in the test I instead just do a JPA findById, it of course finds it.

I also experimented with setting the isolation level to 1 and created a Spring pointcut to apply Spring transactions to all dbUnit methods (!), but got the same result in both cases.

I've spent the weekend working on this conversion and have a lot working now, and am hoping one of you has an example/advice!
|
|
From: SourceForge.net <no...@so...> - 2010-05-13 02:20:33
|
Bugs item #2993743, was opened at 2010-04-28 18:18
Message generated for change (Comment added) made by sf-robot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=2993743&group_id=47439

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Bug
Group: None
>Status: Closed
Resolution: None
Priority: 5
Private: No
Submitted By: https://www.google.com/accounts ()
Assigned to: matthias g (gommma)
Summary: cannot configure DatasourceDatabaseTester

Initial Comment:
The way DatasourceDatabaseTester implements getConnection() is by creating a new object each time:

public IDatabaseConnection getConnection() throws Exception {
    logger.debug("getConnection() - start");
    assertTrue("DataSource is not set", dataSource != null);
    return new DatabaseConnection(dataSource.getConnection(), getSchema());
}

Calling getConfig() on the returned object will then have no effect, since the next call to getConnection() (by the framework) will get a completely new object, which will obviously not have your customizations. Is this as simple as caching the IDatabaseConnection in that tester?

----------------------------------------------------------------------

>Comment By: SourceForge Robot (sf-robot)
Date: 2010-05-13 02:20

Message:
This Tracker item was closed automatically by the system. It was previously set to a Pending status, and the original submitter did not respond within 14 days (the time period specified by the administrator of this Tracker).

----------------------------------------------------------------------

Comment By: matthias g (gommma)
Date: 2010-04-28 22:18

Message:
Hi, I recommend to overwrite the "getConnection" method and set the config there. This should work for you, even though it's not very beautiful.
Rgds,
matthias

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449491&aid=2993743&group_id=47439
|
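[The workaround matthias recommends might look like this - a sketch against the DbUnit 2.x API; the choice of data type factory is illustrative:]

```java
import javax.sql.DataSource;
import org.dbunit.DataSourceDatabaseTester;
import org.dbunit.database.DatabaseConfig;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.ext.oracle.OracleDataTypeFactory;

// Sketch: subclass DataSourceDatabaseTester and apply the config to every
// connection it hands out, since each getConnection() creates a new object.
public class ConfiguredDataSourceDatabaseTester extends DataSourceDatabaseTester {

    public ConfiguredDataSourceDatabaseTester(DataSource dataSource) {
        super(dataSource);
    }

    @Override
    public IDatabaseConnection getConnection() throws Exception {
        IDatabaseConnection connection = super.getConnection();
        connection.getConfig().setProperty(
                DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new OracleDataTypeFactory());
        return connection;
    }
}
```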
|
From: SourceForge.net <no...@so...> - 2010-05-12 10:59:48
|
Bugs item #1984596, was opened at 2008-06-04 16:16 Message generated for change (Comment added) made by johnmacenri You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=1984596&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: v2.2.2 Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Bustuila (bustuila) Assigned to: matthias g (gommma) Summary: Inserting of clobs fails Initial Comment: Since version 2.2.2, the insert operation fails with clobs and Oracle: java.lang.ClassCastException: java.lang.String cannot be cast to oracle.sql.CLOB at oracle.jdbc.driver.OraclePreparedStatement.setObjectCritical(OraclePreparedStatement.java:9229) at oracle.jdbc.driver.OraclePreparedStatement.setObjectInternal(OraclePreparedStatement.java:8843) at oracle.jdbc.driver.OraclePreparedStatement.setObject(OraclePreparedStatement.java:9316) at org.dbunit.dataset.datatype.ClobDataType.setSqlValue(ClobDataType.java:67) at org.dbunit.database.statement.SimplePreparedStatement.addValue(SimplePreparedStatement.java:73) at org.dbunit.database.statement.AutomaticPreparedBatchStatement.addValue(AutomaticPreparedBatchStatement.java:63) at org.dbunit.operation.AbstractBatchOperation.execute(AbstractBatchOperation.java:201) at org.dbunit.ant.Operation.execute(Operation.java:212) The change that triggers this problem seems to come from this: ClobDataType.getSqlValue(int column, ResultSet resultSet) contains statement.setObject(column, typeCast(value), getSqlType()); where getSqlType() returns Types.CLOB -> 2005 In version 2.2.1, where it works, it was: statement.setObject(column, typeCast(value), DataType.LONGVARCHAR.getSqlType()); where DataType.LONGVARCHAR.getSqlType() -> -1 the same as Types.LONGVARCHAR Oracle driver: 10.2.0.2.0 and 10.2.0.4.0 Oracle Database 10g Enterprise Edition Release 
10.2.0.1.0 - 64bit Production ---------------------------------------------------------------------- Comment By: John MacEnri (johnmacenri) Date: 2010-05-12 11:59 Message: Hi, I also was hitting the same problem with v2.4.7 and latest Oracle JDBC driver. I took the easy way out and built 2.4.7 myself with the one line change shown in this bug, in the org.dbunit.dataset.datatype.ClobDataType.setSqlValue(...) method. Works fine now for Oracle CLOBs, but not sure it would work across other databases. In the comments there is a mention of the fix being applied to the v3 branch, but that does not seem to have appeared. John ---------------------------------------------------------------------- Comment By: Paul P (paulp3210) Date: 2010-03-11 15:59 Message: I am using the latest version of DBUnit (2.4.7), on Oracle 11GR2. I'm using Java 6 (1.6.0_15) and the latest version of Oracle's client jar (jdbc6.jar) I've been unable to successfully load any data referenced by a CLOB Oracle field from an XML file into the database. I've used all sorts of versions of the Oracle JDBC library / Hibernate library etc...I think the problem lies in DBUnit. I've tried: both FlatXmlDataSet and XmlDataset, I've tried both OracleDataTypeFactory , Oracle10DataTypeFactory and Oracle11DataTypeFactory found here: https://sourceforge.net/tracker/index.php?func=detail&aid=2010567&group_id=47439&atid=449494 I'm fairly convinced that its a DBunit problem as reverting all the way down to DBunit 2.2.1 seems to fix the problem. Please let me know if you want any additional info. 
The exception being reported is: * Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to oracle.sql.CLOB ---------------------------------------------------------------------- Comment By: shai o (go19) Date: 2008-08-06 14:52 Message: Logged In: YES user_id=2169370 Originator: NO i tried the latest build from today the clob problem still occures ---------------------------------------------------------------------- Comment By: matthias g (gommma) Date: 2008-08-05 20:13 Message: Logged In: YES user_id=1803108 Originator: NO Hi there, I committed the change on rev. 773/trunk for the upcoming 2.3.0 release. The change affects OracleNClobDataType/OracleClobDataType/OracleBlobDataType which now load the classes via the JDBC connection's classloader. Thanks again and regards, mat ---------------------------------------------------------------------- Comment By: Bustuila (bustuila) Date: 2008-06-24 09:09 Message: Logged In: YES user_id=711602 Originator: YES Hello, sorry for the long pause. It works with this change :-) I also had to modify org.dbunit.ant.Operation, to fix a NPE, in toString(): replace result.append(", src=" + _src == null ? null : _src.getAbsolutePath()); with result.append(", src=" + _src == null ? "null" : _src.getAbsolutePath()); Thank you ---------------------------------------------------------------------- Comment By: matthias g (gommma) Date: 2008-06-08 18:58 Message: Logged In: YES user_id=1803108 Originator: NO Hi Bustuila, Thanks a lot for the hints. I think chances are good that your second exception is a classloader issue since the dbunit ant task uses the AntClassLoader to load the db driver class. 
If you have a couple of minutes you could test this as follows: - Get dbunit trunk sources from repository - Replace one line in the file org.dbunit.ext.oracle.OracleClobDataType#getClob(): <<<OLD line: Class aClobClass = Class.forName("oracle.sql.CLOB"); >>>NEW line: Class aClobClass = connection.getClass().getClassLoader().loadClass("oracle.sql.CLOB"); - Invoke "mvn install" in the dbunit trunk directory Then try using the newly created 2.3.0-SNAPSHOT version of dbunit. I think you are right in terms of checking and updating the oracle support in dbunit. It would be nice to start a discussion on how to do that (perhaps dbunit could introduce a new Oracle10DataTypeFactory that uses the newer JDBC API with java.sql.CLOB) Hope this helps. Regards, mat ---------------------------------------------------------------------- Comment By: Bustuila (bustuila) Date: 2008-06-08 13:03 Message: Logged In: YES user_id=711602 Originator: YES I'm glad the test project was useful, especially since I forgot to put the schema.sql in it... I don't run dbunit with the maven plugin in my work project, we're still using ant. I just used maven for this example, which indeed works after using the oracle datatype factory... I think the problem with the "oracle.jdbc.driver.T4CConnection cannot be cast to oracle.jdbc.OracleConnection" comes from the ant classloader, because I see before that another error: [taskdef] log4j:ERROR A "org.apache.log4j.xml.DOMConfigurator" object is not assignable to a "org.apache.log4j.spi.Configurator" variable. [taskdef] log4j:ERROR The class "org.apache.log4j.spi.Configurator" was loaded by [taskdef] log4j:ERROR [AntClassLoader[...]]. [taskdef] log4j:ERROR Could not instantiate configurator [org.apache.log4j.xml.DOMConfigurator]. Another thing... 
it seems that these newer Oracle drivers are using the standard way of working with lobs: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14249/adlob_api_overview.htm "You can make changes to an entire persistent LOB, or to pieces of the beginning, middle, or end of a persistent LOB in Java by means of the JDBC API using the classes: oracle.sql.BLOB oracle.sql.CLOB These classes implement java.sql.Blob and java.sql.Clob interfaces according to the JDBC 3.0 specification, which has methods for LOB modification. They also include legacy Oracle proprietary methods for LOB modification. These legacy methods are marked as deprecated and may be removed in a future release. If you use JDK 1.4 or higher, then you can use variables typed java.sql.Blob and java.sql.Clob. The JDBC 3.0 methods are included in classes12.jar, so that they can be used in JDK 1.2 or 1.3, but since they are not part of the java.sql.Blob and java.sql.Clob interfaces in those JDK versions, you must use variables typed or cast to oracle.sql.BLOB or oracle.sql.CLOB." When using the application with the spring framework and hibernate, it has no problem in inserting and updating lobs with org.springframework.jdbc.support.lob.DefaultLobHandler, so I guess this is another hint that the new drivers are much improved and they can be used with the standard jdbc interfaces. I've tried using org.springframework.jdbc.support.lob.OracleLobHandler, but it needs another dependency(NativeJdbcExtractor), so, I was very hapy the default handler worked. So, I guess something could be improved in dbunit in detecting in some way that the standard interfaces can be used... spring framework seems to use java.sql.PreparedStatement.setClob(int i, Clob x) and maybe dbunit could do it too... And then, I will try to see what I workaround I can find to make ant not screw up things.. Thanks for your help. 
---------------------------------------------------------------------- Comment By: Sébastien Le Callonnec (slecallonnec) Date: 2008-06-07 21:15 Message: Logged In: YES user_id=1232035 Originator: NO Hi Bustuila, Thanks a mil for that project, that really helps! If only all the bug reports were coming with these packages test cases, life would be so much easier! Also, apologies, I hadn't realised you were using dbunit through the maven plugin. As mat mentioned below, adding: <dataTypeFactoryName>org.dbunit.ext.oracle.OracleDataTypeFactory</dataTypeFactoryName> solves the problem for me too. Is that the case for you too, or is it what caused your second problem?: java.lang.ClassCastException: oracle.jdbc.driver.T4CConnection cannot be cast to oracle.jdbc.OracleConnection I haven't been able to reproduce that second issue, though (on Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product). Did you get this my tweaking your maven config? Regards, Sébastien. ---------------------------------------------------------------------- Comment By: matthias g (gommma) Date: 2008-06-07 18:44 Message: Logged In: YES user_id=1803108 Originator: NO Hi all, I tried to reproduce the problem using your testcase and I got the same exception as you mentioned initially. I added the dataTypeFactory to the pom.xml and the test was green afterwards. ... <configuration> <driver>${driver}</driver> <url>${url}</url> <username>${username}</username> <password>${password}</password> <skip>${maven.test.skip}</skip> <dataTypeFactoryName>org.dbunit.ext.oracle.OracleDataTypeFactory</dataTypeFactoryName> </configuration> ... Note that if you want to set it in the java code you can do it as follows: ... DatabaseConnection dbConnection = new DatabaseConnection(jdbcConnection); dbConnection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new OracleDataTypeFactory()); ... 
I also found that you might run into this problem when you work inside a container (Tomcat or J2EE) while having the ojdbc.jar in your deployment artifact AND in the server lib dir. The cause is the ClassLoader strategy used by the container. See http://forums.oracle.com/forums/thread.jspa?threadID=554480&tstart=0 for further information. If this is true (just speculating a bit), the cause could be that dbunit's OracleClobDataType uses "Class.forName("oracle.sql.CLOB")" instead of "connection.getClass().getClassLoader().loadClass("oracle.sql.CLOB")" Regards, mat ---------------------------------------------------------------------- Comment By: Bustuila (bustuila) Date: 2008-06-06 13:53 Message: Logged In: YES user_id=711602 Originator: YES Maybe you can get pointers from here: http://springframework.cvs.sourceforge.net/springframework/spring/src/org/springframework/jdbc/support/lob/OracleLobHandler.java?revision=1.25&view=markup ---------------------------------------------------------------------- Comment By: Bustuila (bustuila) Date: 2008-06-06 11:28 Message: Logged In: YES user_id=711602 Originator: YES Btw, you would have to use the instructions at http://maven.apache.org/guides/development/guide-plugin-snapshot-repositories.html to be able to use the required plugin ---------------------------------------------------------------------- Comment By: Bustuila (bustuila) Date: 2008-06-06 08:34 Message: Logged In: YES user_id=711602 Originator: YES I've attached a project that demonstrates the problem. File Added: 1984596.zip ---------------------------------------------------------------------- Comment By: Sébastien Le Callonnec (slecallonnec) Date: 2008-06-05 18:10 Message: Logged In: YES user_id=1232035 Originator: NO If you could still provide us with a test case reproducing the issue, that would help us investigate the problem, as I haven't been able to reproduce it so far. Thanks, Sébastien. 
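mat's class-loading speculation can be made concrete with a small, hypothetical sketch (`LoaderSketch` and `pseudoConnection` are invented names, and a JDK interface stands in for oracle.sql.CLOB since no Oracle driver is assumed). In a single-loader JVM both lookup strategies resolve to the same Class object; in a container that loads ojdbc.jar twice, each loader defines its own copy of the class, and mixing them is exactly what produces a ClassCastException.

```java
public class LoaderSketch {

    // Returns true when both lookup strategies resolve to the same Class.
    static boolean sameClass() throws Exception {
        // Stand-in for the JDBC connection object; in the real scenario this
        // would be the connection handed to dbunit by the driver.
        Object pseudoConnection = new LoaderSketch();

        // Strategy 1: resolve against the caller's class loader (what
        // OracleClobDataType reportedly does via Class.forName).
        Class<?> viaForName = Class.forName("java.sql.Connection");

        // Strategy 2: resolve against the loader that defined the connection
        // object itself (the fix suggested above).
        Class<?> viaConnectionLoader = pseudoConnection.getClass()
                .getClassLoader().loadClass("java.sql.Connection");

        // Here both resolve identically because there is only one loader
        // chain; with duplicated jars in a container they can differ.
        return viaForName == viaConnectionLoader;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sameClass());
    }
}
```

Strategy 2 guarantees the reflective calls operate on classes from the same loader as the live connection, which is why Spring's LobHandler code takes that route.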
---------------------------------------------------------------------- Comment By: Bustuila (bustuila) Date: 2008-06-05 16:33 Message: Logged In: YES user_id=711602 Originator: YES No problem... If I find the time, I'll try to fix it using your suggestions and provide a patch. ---------------------------------------------------------------------- Comment By: Roberto Lo Giacco (rlogiacco) Date: 2008-06-05 16:30 Message: Logged In: YES user_id=57511 Originator: NO I suggest rolling back your dbunit version until we find a solution or, in case you need some of the patches applied after the 2.2.1 release, you can roll back the commit related to the #1806363 issue. Sorry for the inconvenience, we already know we should set up a multi-RDBMS integration test platform, but time and money are needed for such activities and we actually have none of them :\ ---------------------------------------------------------------------- Comment By: Sébastien Le Callonnec (slecallonnec) Date: 2008-06-05 13:30 Message: Logged In: YES user_id=1232035 Originator: NO This fix was introduced because LONGVARCHAR was causing issues with other RDBMSes. Cf. https://sourceforge.net/tracker/?func=detail&atid=449491&aid=1806363&group_id=47439 Given that we have an Oracle-specific data type, I think that fix is legitimate since the target is everything but Oracle. Now, as Roberto indicated, the problem seems to be that OracleDataTypeFactory should probably support T4CConnection along with OracleConnection. Regards, Sébastien. 
---------------------------------------------------------------------- Comment By: Bustuila (bustuila) Date: 2008-06-05 13:21 Message: Logged In: YES user_id=711602 Originator: YES It seems it's because of the fix for 1806363 ---------------------------------------------------------------------- Comment By: Bustuila (bustuila) Date: 2008-06-05 12:30 Message: Logged In: YES user_id=711602 Originator: YES The thing I don't understand is why the change that broke 2.2 was required... It worked perfectly fine. I've searched the Hibernate sources for oracle.sql.CLOB and they don't seem to use it, so maybe it's not required at all. ---------------------------------------------------------------------- Comment By: Roberto Lo Giacco (rlogiacco) Date: 2008-06-05 12:07 Message: Logged In: YES user_id=57511 Originator: NO Obviously the problem is that OracleDataTypeFactory is not aware of the pure JDBC oracle driver and recognizes only the previous classes12.jar driver. I suggest introducing a new factory or patching the current one to work with both (the latter seems a little bit harder to me) ---------------------------------------------------------------------- Comment By: Bustuila (bustuila) Date: 2008-06-05 09:03 Message: Logged In: YES user_id=711602 Originator: YES I've tried using OracleDataTypeFactory (with driver classes oracle.jdbc.OracleDriver and oracle.jdbc.driver.OracleDriver), but it doesn't work: java.lang.ClassCastException: oracle.jdbc.driver.T4CConnection cannot be cast to oracle.jdbc.OracleConnection at oracle.sql.CLOB.createTemporary(CLOB.java:754) at oracle.sql.CLOB.createTemporary(CLOB.java:716) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.dbunit.ext.oracle.OracleClobDataType.getClob(OracleClobDataType.java:70) at 
org.dbunit.ext.oracle.OracleClobDataType.setSqlValue(OracleClobDataType.java:56) at org.dbunit.database.statement.SimplePreparedStatement.addValue(SimplePreparedStatement.java:62) at org.dbunit.database.statement.AutomaticPreparedBatchStatement.addValue(AutomaticPreparedBatchStatement.java:52) at org.dbunit.operation.AbstractBatchOperation.execute(AbstractBatchOperation.java:177) at org.dbunit.ant.Operation.execute(Operation.java:183) I'll try and create a simple test case. ---------------------------------------------------------------------- Comment By: Sébastien Le Callonnec (slecallonnec) Date: 2008-06-04 17:48 Message: Logged In: YES user_id=1232035 Originator: NO Hi there, Would you have a simple test case reproducing the issue? If you use an OracleConnection, OracleClobDataType should be used instead of the "normal" ClobDataType. Regards, Sébastien ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=1984596&group_id=47439 |
From: SourceForge.net <no...@so...> - 2010-05-09 23:46:35
Bugs item #2999084, was opened at 2010-05-10 01:46 Message generated for change (Tracker Item Submitted) made by dfghi You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=2999084&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Feature Request Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: dfghi (dfghi) Assigned to: Roberto Lo Giacco (rlogiacco) Summary: Add support for Postgresql 8 Blobs. Initial Comment: Recent versions of PostgreSQL prefer to handle blobs (large objects or "lo", as the PostgreSQL documentation calls them) in a separate system table, leaving only a reference (oid) in the original table. This means that it's not possible to use DbUnit to insert Blobs directly into tables. The attached file provides a patch to the existing DataTypeFactory and a new DataType for PostgreSQL Blobs. It's a draft because I'm not totally sure of a couple of things, but it's better than nothing. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=2999084&group_id=47439
From: SourceForge.net <no...@so...> - 2010-05-02 02:20:20
Feature Requests item #2986323, was opened at 2010-04-13 07:40 Message generated for change (Comment added) made by sf-robot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449494&aid=2986323&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: Next Release >Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Luigi Talamona (luigitalamona) Assigned to: matthias g (gommma) Summary: Added new db driver Initial Comment: Added driver for Mckoi db. ---------------------------------------------------------------------- >Comment By: SourceForge Robot (sf-robot) Date: 2010-05-02 02:20 Message: This Tracker item was closed automatically by the system. It was previously set to a Pending status, and the original submitter did not respond within 14 days (the time period specified by the administrator of this Tracker). ---------------------------------------------------------------------- Comment By: Luigi Talamona (luigitalamona) Date: 2010-04-19 11:47 Message: Hi Matthias, I got the trunk from svn (rev 1181) and built the package. Everything runs fine!! I attached the test log. Regards Luigi ---------------------------------------------------------------------- Comment By: matthias g (gommma) Date: 2010-04-17 15:27 Message: Hi Luigi, I added your sources to the SVN repository (rev 1181). It will be available with the next dbunit release. Could you be so kind as to check out the current trunk from SVN, build the package, and check whether everything functions as you expect? thx, matthias ---------------------------------------------------------------------- Comment By: matthias g (gommma) Date: 2010-04-17 15:27 Message: This feature is currently available from the SVN repository. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449494&aid=2986323&group_id=47439 |