From: Jeff J. <jj...@ap...> - 2013-11-11 13:39:30
|
On the dbUnit SF site, it is the "Git" tab/link, next to the "Svn" one: https://sourceforge.net/p/dbunit/code.git/ci/master/tree/ You need to select your protocol to clone with. git.code.sf.net/p/dbunit/code.git git branch branch-1-5 branch-exml2sax branch-iterator dbunit-3.x * master vendor I have only pushed master so far though. If master looks good to you, I will push them all. Thank you for reviewing! On Sun, Nov 10, 2013 at 10:16 PM, John Hurst <joh...@gm...> wrote: > Jeff, > > Can you give a URL to clone the repo from? > > Did you include all the SVN branches in the Git repo? > > Cheers > > John Hurst > > > > On Mon, Nov 11, 2013 at 5:53 AM, Jeff Jensen <jj...@ap...> wrote: > >> Hi John, >> >> The Git repo is on SF. Hopefully it soon becomes the new master and we >> remove the SF svn one. >> I have only been using SF svn for 2.x work, which also means the Git >> conversion is from it. >> >> IIRC, Codehaus svn repo was 3.x. I'm not aware of anything for 2.x >> there. I have not seen anything for the Codehaus work for long time. >> >> >> >> >> On Sun, Nov 10, 2013 at 10:34 AM, John Hurst <joh...@gm...>wrote: >> >>> Jeff, >>> >>> Where is the Git repo? Is this the master now? >>> >>> I have Git clones from the svn repo and was hoping to move the project >>> to Git too. I got a bit confused about where the master SVN repo was, last >>> time I looked. Almost all changes where done in the SourceForge repo, but I >>> believe the master actually moved to Codehaus, and that copy received a few >>> more changes on trunk that are not reflected in the SourceForge copy. I >>> presume you used the Codehaus SVN repo as the base for your Git clone? >>> >>> Regards >>> >>> John Hurst >>> >>> >>> >>> On Sun, Nov 3, 2013 at 7:58 AM, Jeff Jensen <jj...@ap...> wrote: >>> >>>> I converted the entire svn repo to git using svn2git and pushed it this >>>> morning. I found no problems with the conversion. Please also review the >>>> new Git repo for any issues and let me know. >>>> >>>> Hopefully dbUnit will experience similar contribution benefits as other >>>> products have when using Git. >>>> >>>> My intention is to soon disable and remove the svn repo. I also intend >>>> "soon" to be days or weeks - so hopefully not too long. >>>> >>>> If necessary, svn2git has a nice feature to update the converted git >>>> repo with svn changes. However, it is unlikely we'll have svn changes in >>>> the near future. >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Android is increasing in popularity, but the open development platform >>>> that >>>> developers love is also attractive to malware creators. Download this >>>> white >>>> paper to learn more about secure code signing practices that can help >>>> keep >>>> Android apps secure. >>>> >>>> http://pubads.g.doubleclick.net/gampad/clk?id=65839951&iu=/4140/ostg.clktrk >>>> _______________________________________________ >>>> dbunit-developer mailing list >>>> dbu...@li... >>>> https://lists.sourceforge.net/lists/listinfo/dbunit-developer >>>> >>>> >>> >>> >>> -- >>> Life is interfering with my game >>> >>> >>> ------------------------------------------------------------------------------ >>> November Webinars for C, C++, Fortran Developers >>> Accelerate application performance with scalable programming models. >>> Explore >>> techniques for threading, error checking, porting, and tuning. Get the >>> most >>> from the latest Intel processors and coprocessors. 
See abstracts and >>> register >>> >>> http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk >>> >>> _______________________________________________ >>> dbunit-developer mailing list >>> dbu...@li... >>> https://lists.sourceforge.net/lists/listinfo/dbunit-developer >>> >>> >> >> >> ------------------------------------------------------------------------------ >> November Webinars for C, C++, Fortran Developers >> Accelerate application performance with scalable programming models. >> Explore >> techniques for threading, error checking, porting, and tuning. Get the >> most >> from the latest Intel processors and coprocessors. See abstracts and >> register >> >> http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk >> _______________________________________________ >> dbunit-developer mailing list >> dbu...@li... >> https://lists.sourceforge.net/lists/listinfo/dbunit-developer >> >> > > > -- > Life is interfering with my game > > > ------------------------------------------------------------------------------ > November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. > Explore > techniques for threading, error checking, porting, and tuning. Get the most > from the latest Intel processors and coprocessors. See abstracts and > register > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk > _______________________________________________ > dbunit-developer mailing list > dbu...@li... > https://lists.sourceforge.net/lists/listinfo/dbunit-developer > > |
|
From: John H. <joh...@gm...> - 2013-11-11 04:16:07
|
Jeff, Can you give a URL to clone the repo from? Did you include all the SVN branches in the Git repo? Cheers John Hurst On Mon, Nov 11, 2013 at 5:53 AM, Jeff Jensen <jj...@ap...> wrote: > Hi John, > > The Git repo is on SF. Hopefully it soon becomes the new master and we > remove the SF svn one. > I have only been using SF svn for 2.x work, which also means the Git > conversion is from it. > > IIRC, Codehaus svn repo was 3.x. I'm not aware of anything for 2.x there. > I have not seen anything for the Codehaus work for long time. > > > > > On Sun, Nov 10, 2013 at 10:34 AM, John Hurst <joh...@gm...>wrote: > >> Jeff, >> >> Where is the Git repo? Is this the master now? >> >> I have Git clones from the svn repo and was hoping to move the project to >> Git too. I got a bit confused about where the master SVN repo was, last >> time I looked. Almost all changes where done in the SourceForge repo, but I >> believe the master actually moved to Codehaus, and that copy received a few >> more changes on trunk that are not reflected in the SourceForge copy. I >> presume you used the Codehaus SVN repo as the base for your Git clone? >> >> Regards >> >> John Hurst >> >> >> >> On Sun, Nov 3, 2013 at 7:58 AM, Jeff Jensen <jj...@ap...> wrote: >> >>> I converted the entire svn repo to git using svn2git and pushed it this >>> morning. I found no problems with the conversion. Please also review the >>> new Git repo for any issues and let me know. >>> >>> Hopefully dbUnit will experience similar contribution benefits as other >>> products have when using Git. >>> >>> My intention is to soon disable and remove the svn repo. I also intend >>> "soon" to be days or weeks - so hopefully not too long. >>> >>> If necessary, svn2git has a nice feature to update the converted git >>> repo with svn changes. However, it is unlikely we'll have svn changes in >>> the near future. >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Android is increasing in popularity, but the open development platform >>> that >>> developers love is also attractive to malware creators. Download this >>> white >>> paper to learn more about secure code signing practices that can help >>> keep >>> Android apps secure. >>> >>> http://pubads.g.doubleclick.net/gampad/clk?id=65839951&iu=/4140/ostg.clktrk >>> _______________________________________________ >>> dbunit-developer mailing list >>> dbu...@li... >>> https://lists.sourceforge.net/lists/listinfo/dbunit-developer >>> >>> >> >> >> -- >> Life is interfering with my game >> >> >> ------------------------------------------------------------------------------ >> November Webinars for C, C++, Fortran Developers >> Accelerate application performance with scalable programming models. >> Explore >> techniques for threading, error checking, porting, and tuning. Get the >> most >> from the latest Intel processors and coprocessors. See abstracts and >> register >> >> http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk >> >> _______________________________________________ >> dbunit-developer mailing list >> dbu...@li... >> https://lists.sourceforge.net/lists/listinfo/dbunit-developer >> >> > > > ------------------------------------------------------------------------------ > November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. > Explore > techniques for threading, error checking, porting, and tuning. Get the most > from the latest Intel processors and coprocessors. 
See abstracts and > register > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk > _______________________________________________ > dbunit-developer mailing list > dbu...@li... > https://lists.sourceforge.net/lists/listinfo/dbunit-developer > > -- Life is interfering with my game |
|
From: Jeff J. <jj...@ap...> - 2013-11-10 16:54:23
|
Hi John, The Git repo is on SF. Hopefully it soon becomes the new master and we remove the SF svn one. I have only been using SF svn for 2.x work, which also means the Git conversion is from it. IIRC, Codehaus svn repo was 3.x. I'm not aware of anything for 2.x there. I have not seen anything for the Codehaus work for long time. On Sun, Nov 10, 2013 at 10:34 AM, John Hurst <joh...@gm...> wrote: > Jeff, > > Where is the Git repo? Is this the master now? > > I have Git clones from the svn repo and was hoping to move the project to > Git too. I got a bit confused about where the master SVN repo was, last > time I looked. Almost all changes where done in the SourceForge repo, but I > believe the master actually moved to Codehaus, and that copy received a few > more changes on trunk that are not reflected in the SourceForge copy. I > presume you used the Codehaus SVN repo as the base for your Git clone? > > Regards > > John Hurst > > > > On Sun, Nov 3, 2013 at 7:58 AM, Jeff Jensen <jj...@ap...> wrote: > >> I converted the entire svn repo to git using svn2git and pushed it this >> morning. I found no problems with the conversion. Please also review the >> new Git repo for any issues and let me know. >> >> Hopefully dbUnit will experience similar contribution benefits as other >> products have when using Git. >> >> My intention is to soon disable and remove the svn repo. I also intend >> "soon" to be days or weeks - so hopefully not too long. >> >> If necessary, svn2git has a nice feature to update the converted git repo >> with svn changes. However, it is unlikely we'll have svn changes in the >> near future. >> >> >> >> ------------------------------------------------------------------------------ >> Android is increasing in popularity, but the open development platform >> that >> developers love is also attractive to malware creators. Download this >> white >> paper to learn more about secure code signing practices that can help keep >> Android apps secure. >> >> http://pubads.g.doubleclick.net/gampad/clk?id=65839951&iu=/4140/ostg.clktrk >> _______________________________________________ >> dbunit-developer mailing list >> dbu...@li... >> https://lists.sourceforge.net/lists/listinfo/dbunit-developer >> >> > > > -- > Life is interfering with my game > > > ------------------------------------------------------------------------------ > November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. > Explore > techniques for threading, error checking, porting, and tuning. Get the most > from the latest Intel processors and coprocessors. See abstracts and > register > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk > _______________________________________________ > dbunit-developer mailing list > dbu...@li... > https://lists.sourceforge.net/lists/listinfo/dbunit-developer > > |
|
From: John H. <joh...@gm...> - 2013-11-10 16:34:37
|
Jeff, Where is the Git repo? Is this the master now? I have Git clones from the svn repo and was hoping to move the project to Git too. I got a bit confused about where the master SVN repo was, last time I looked. Almost all changes where done in the SourceForge repo, but I believe the master actually moved to Codehaus, and that copy received a few more changes on trunk that are not reflected in the SourceForge copy. I presume you used the Codehaus SVN repo as the base for your Git clone? Regards John Hurst On Sun, Nov 3, 2013 at 7:58 AM, Jeff Jensen <jj...@ap...> wrote: > I converted the entire svn repo to git using svn2git and pushed it this > morning. I found no problems with the conversion. Please also review the > new Git repo for any issues and let me know. > > Hopefully dbUnit will experience similar contribution benefits as other > products have when using Git. > > My intention is to soon disable and remove the svn repo. I also intend > "soon" to be days or weeks - so hopefully not too long. > > If necessary, svn2git has a nice feature to update the converted git repo > with svn changes. However, it is unlikely we'll have svn changes in the > near future. > > > > ------------------------------------------------------------------------------ > Android is increasing in popularity, but the open development platform that > developers love is also attractive to malware creators. Download this white > paper to learn more about secure code signing practices that can help keep > Android apps secure. > http://pubads.g.doubleclick.net/gampad/clk?id=65839951&iu=/4140/ostg.clktrk > _______________________________________________ > dbunit-developer mailing list > dbu...@li... > https://lists.sourceforge.net/lists/listinfo/dbunit-developer > > -- Life is interfering with my game |
|
From: Jeff J. <jj...@ap...> - 2013-11-02 18:59:29
|
I converted the entire svn repo to git using svn2git and pushed it this morning. I found no problems with the conversion. Please also review the new Git repo for any issues and let me know. Hopefully dbUnit will experience similar contribution benefits as other products have when using Git. My intention is to soon disable and remove the svn repo. I also intend "soon" to be days or weeks - so hopefully not too long. If necessary, svn2git has a nice feature to update the converted git repo with svn changes. However, it is unlikely we'll have svn changes in the near future. |
|
From: Adolfo <ado...@ya...> - 2013-08-10 17:36:46
|
Hi all. First of all, I'd like to congratulate all members for the nice job on DBUnit. I'd like to report an issue and, if possible, contribute a patch for something I think will be better for end users of the dbunit lib. First I'll give some detail about the problem I was having while using the lib in a specific scenario. Let's consider the following dataset: <dataset> <Category id="10" name="Games"/> <Category id="1000" name="Sports"/> <Product id="1" description="blender" name="blender" price="50.00"/> <Product id="6" description="PS3 Battlefield 3" name="Battlefield" price="15.90" category_id="10"/> <Product id="7" description="Super Street Fighter IV" name="Street Fighter IV" price="16.00" category_id="10"/> </dataset> Looking at the dataset, note that the Product table has 5 columns: id, description, name, price and category_id (because of a relationship). As category_id isn't required, the first Product row does not set category_id. The problem is that in the AbstractBatchOperation.execute method, a statement is generated only when ignoreMapping is null or equalsIgnoreMapping returns false, as follows: // If current row have a different ignore value mapping than // previous one, we generate a new statement if (ignoreMapping == null || !equalsIgnoreMapping(ignoreMapping, table, row)) To get my unit tests running, I had to change the order of the rows inside the dataset as follows (so that the correct statements were generated): <dataset> <Category id="10" name="Games"/> <Category id="1000" name="Sports"/> <Product id="6" description="PS3 Battlefield 3" name="Battlefield" price="15.90" category_id="10"/> <Product id="7" description="Super Street Fighter IV" name="Street Fighter IV" price="16.00" category_id="10"/> <Product id="1" description="blender" name="blender" price="50.00"/> </dataset> What do you think about handling the case where the set of populated columns changes while iterating through the rows? Discovering what was happening was time consuming, and I think other end users of the lib may give up on it when they hit the same scenario. Best regards and thank you. Adolfo Eloy |
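A related mitigation on the dataset-loading side, for readers hitting the same symptom: since dbUnit 2.4.7 the flat XML builder can "sense" columns that only appear in later rows, so row order matters less. This is only a minimal sketch, assuming a flat XML dataset like the one above (the file name is a placeholder), and it does not change the AbstractBatchOperation logic itself:

```java
import java.io.File;

import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;

public class ColumnSensingExample {

    public static IDataSet loadDataSet() throws Exception {
        FlatXmlDataSetBuilder builder = new FlatXmlDataSetBuilder();
        // With column sensing enabled, a <Product .../> row that omits category_id
        // does not hide that column for the rows that follow it.
        builder.setColumnSensing(true);
        return builder.build(new File("products-dataset.xml"));
    }
}
```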
|
From: Eduardo de V. <edu...@zi...> - 2013-07-17 13:20:40
|
Hello DBUnit developers, Currently at my company we are working with DBUnit for database integration tests and we are quite happy with it. We have found a scenario under which we need to extend DBUnit in order to be able to use readable uuids instead of having to base64 encode the bytes. Please find the patch here: http://sourceforge.net/p/dbunit/feature-requests/178/ How can I know if this patch will ever make it to the final product? Best regards, -- Eduardo de Vera Toquero Cloud Software Architect Zimory GmbH Alexanderstr. 3 10178 Berlin +49 (0)30 6098507 24 Unternehmenssitz (corporate seat): Revaler Str. 100, D-10245 Berlin, Germany; Geschäftsführer (managing director): Rüdiger Baumann, Maximilian Ahrens; Amtsgericht (local court): Berlin-Charlottenburg; Handelsregister (commercial register): HRB 110640 ---------------------------------------------------------------------------- Disclaimer The information contained in this e-mail and its attachments (together, the "message") is intended solely for the addressee and may contain confidential and/or legally protected information. If you have received this message in error, please delete it and notify the sender, without copying or distributing the message or disclosing its contents to any other person. Except in cases of intent or gross negligence, we exclude any liability for loss or damage caused by virus-infected software or e-mails. ---------------------------------------------------------------------------- |
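For context on the request, the gap is between a UUID's readable text form and the 16 raw bytes a binary column stores. A small, dbUnit-independent illustration of that conversion (plain JDK, purely illustrative):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class UuidBytes {

    // Serialize a UUID into the 16 bytes a BINARY(16) column typically holds.
    public static byte[] toBytes(UUID uuid) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(uuid.getMostSignificantBits());
        buf.putLong(uuid.getLeastSignificantBits());
        return buf.array();
    }

    // Rebuild the readable UUID from those 16 bytes.
    public static UUID fromBytes(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        return new UUID(buf.getLong(), buf.getLong());
    }
}
```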
|
From: Roy B. <roy...@do...> - 2013-03-14 13:07:30
|
Thanks for the heads up!
Re interpretation of the "columns"-value: The current idea is that a
sequence defines column ordering and, optionally, default values, whereas a
mapping defines default values only. Maybe they should use different keys,
to avoid confusion. A column name order would be valid until a new sequence
replaces it. I think that named values ought to be allowed in the rows as
well. So there is no declaration of columns as such; the set of column
names in a table's metadata is the union of all column names occurring in
the dataset for that table.
An example using a named value inside a row:
- mytable:
columns: [ id, name, version:0, created_by:1 ]
rows: [ 1, "A name" ]
- mytable
columns: [ id, version, name, created_by: 1 ]
rows: [ 2, 1, "Another", description: "Description of a name" ]
Admittedly, putting a value that's inherently unordered into a sequence
smells a little. Defining a row as a mapping with two different tags for
ordered (positional) and unordered (named) values would be more precise,
but also more verbose, and perhaps even less readable.
Example translated to FlatXml:
<mytable id="1" name="A name" version="0" created_by="1"
description="[null]"/>
<mytable id="2" name="Another" version="1" created_by="1"
description="Description of a name"/>
Re tabular representation: I like your example as well - I just think the
choice of dataset language should depend mostly on the kind/nature of the
data.
Regards,
2013/3/14 John Hurst <joh...@gm...>
> Hello,
>
> I am very interested in alternative formats for DbUnit and would consider
> putting this implementation into the core.
>
> The second entry provides new default values for some columns. Does that
> mean that the full list of columns from the first entry is retained? What
> if the second entry mentions new columns? What happens to the ordering?
>
> I think the idea of default values is a very interesting one for DbUnit
> and probably should be supported somehow as a specific concept separate
> from specific dataset formats.
>
> Personally I think that YAML suffers from the same basic problem as XML --
> it is essentially hierarchical, not tabular.
>
> In my work lately we have been using the Creativyst Tabular Exchange
> format (http://www.creativyst.com/Doc/Std/ctx/ctx.htm). I have written
> simple DbUnit support for it and it is working pretty well.
>
> If I interpret your example correctly, it would look like this in CTX:
>
> \T mytable
> \L id | name | description | version | created_by |
> last_modified_by
> 1 | Just | The first row | 0 | 1 |
> 1
> 2 | another | The second row | 0 | 1 |
> 1
> 3 | example | (NULL) | 0 | 1 |
> 1
> 4 | created by | (NULL) | 0 | 4 |
> 2
> 5 | another user | (NULL) | 0 | 4 |
> 2
>
> (Meant to be read monospaced.)
>
> We've found this makes for nice clean inline datasets. (We tend to embed
> test data in test code.)
>
> I need to get this into DbUnit core at some point too. (The current impl
> is in Groovy, I need to rewrite it in Java.)
>
> More later. (Train reaching station.)
>
> Regards
>
> John Hurst
>
>
>
> On Thu, Mar 14, 2013 at 4:31 AM, Roy Brokvam <roy...@do...>wrote:
>
>> (Sorry about sending that previous incomplete snippet!)
>>
>> I know Arquillian Persistence Extension supports YAML datasets, but that
>> one has two major shortcomings:
>> * It's part of a bigger testing framework that not all DbUnit users use
>> * It doesn't exploit YAML's structured model - it's basically just
>> another simple dataset
>>
>> At my work, I have started specifying a YAML dataset format that uses
>> positional values (less need to name each value for every row) and default
>> values (less need to mention each column in each row). I'm pretty sure that
>> others have the same problem that we're trying to solve: Unnecessary long
>> FlatXml datasets that are really hard to read.
>>
>> An illustrative example along the lines I'm thinking:
>>
>> - mytable:
>> columns: [ id, name, description, version: 0, created_by: 1,
>> last_modified_by: 1 ] # defines order and default values
>> rows:
>> - [ 1, "Just", "The first row" ]
>> - [ 2, "another", "The second row" ]
>> - [ 3, "example" ] # No comment here, default is null
>>
>> - mytable:
>> columns: { created_by : 2, last_modified_by : 2 } # new default
>> values
>> rows:
>> - [ 4, "created by" ]
>> - [ 5, "another user" ]
>>
>> I am also thinking of including some sequence generating mechanism, as
>> well as supporting YAML anchors and aliases.
>>
>> I am encouraged by my employer to provide the specification and an
>> implementation of it to the open source community, and DbUnit would of
>> course be a natural home for it. The question is - is there any chance that
>> such an extension would be included, or should I just create a separate
>> open source project for it?
>>
>> Regards,
>> RoyB
>>
>>
>>
>>
>>
>
>
> --
> Life is interfering with my game
>
>
>
>
|
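To make the defaults-plus-positional-values idea from the thread above concrete, here is a small, hypothetical sketch using SnakeYAML. It is not part of dbUnit, it handles only a simplified single-table variant of Roy's proposal, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.yaml.snakeyaml.Yaml;

public class YamlRowExpansionSketch {

    public static void main(String[] args) {
        String doc =
                "columns: [ id, name, description, version: 0, created_by: 1 ]\n"
              + "rows:\n"
              + "  - [ 1, \"Just\", \"The first row\" ]\n"
              + "  - [ 3, \"example\" ]\n";

        @SuppressWarnings("unchecked")
        Map<String, Object> table = (Map<String, Object>) new Yaml().load(doc);

        // Column list: plain strings are columns without defaults; single-entry
        // maps (e.g. "version: 0") carry a default value.
        List<String> names = new ArrayList<>();
        Map<String, Object> defaults = new LinkedHashMap<>();
        for (Object col : (List<?>) table.get("columns")) {
            if (col instanceof Map) {
                Map.Entry<?, ?> e = ((Map<?, ?>) col).entrySet().iterator().next();
                names.add(e.getKey().toString());
                defaults.put(e.getKey().toString(), e.getValue());
            } else {
                names.add(col.toString());
            }
        }

        // Expand each positional row: missing trailing values fall back to the
        // column default, or null when no default was declared.
        for (Object rowObj : (List<?>) table.get("rows")) {
            List<?> row = (List<?>) rowObj;
            Map<String, Object> expanded = new LinkedHashMap<>();
            for (int i = 0; i < names.size(); i++) {
                String column = names.get(i);
                expanded.put(column, i < row.size() ? row.get(i) : defaults.get(column));
            }
            System.out.println(expanded);
        }
    }
}
```

Running it prints one fully expanded map per row; the short row comes out as `{id=3, name=example, description=null, version=0, created_by=1}`, which is exactly the FlatXml row shown earlier in the thread.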
|
From: John H. <joh...@gm...> - 2013-03-14 07:14:22
|
Hello, I am very interested in alternative formats for DbUnit and would consider putting this implementation into the core. The second entry provides new default values for some columns. Does that mean that the full list of columns from the first entry is retained? What if the second entry mentions new columns? What happens to the ordering? I think the idea of default values is a very interesting one for DbUnit and probably should be supported somehow as a specific concept separate from specific dataset formats. Personally I think that YAML suffers from the same basic problem as XML -- it is essentially hierarchical, not tabular. In my work lately we have been using the Creativyst Tabular Exchange format (http://www.creativyst.com/Doc/Std/ctx/ctx.htm). I have written simple DbUnit support for it and it is working pretty well. If I interpret your example correctly, it would look like this in CTX: \T mytable \L id | name | description | version | created_by | last_modified_by 1 | Just | The first row | 0 | 1 | 1 2 | another | The second row | 0 | 1 | 1 3 | example | (NULL) | 0 | 1 | 1 4 | created by | (NULL) | 0 | 4 | 2 5 | another user | (NULL) | 0 | 4 | 2 (Meant to be read monospaced.) We've found this makes for nice clean inline datasets. (We tend to embed test data in test code.) I need to get this into DbUnit core at some point too. (The current impl is in Groovy, I need to rewrite it in Java.) More later. (Train reaching station.) Regards John Hurst On Thu, Mar 14, 2013 at 4:31 AM, Roy Brokvam <roy...@do...> wrote: > (Sorry about sending that previous incomplete snippet!) > > I know Arquillian Persistence Extension supports YAML datasets, but that > one has two major shortcomings: > * It's part of a bigger testing framework that not all DbUnit users use > * It doesn't exploit YAML's structured model - it's basically just another > simple dataset > > At my work, I have started specifying a YAML dataset format that uses > positional values (less need to name each value for every row) and default > values (less need to mention each column in each row). I'm pretty sure that > others have the same problem that we're trying to solve: Unnecessary long > FlatXml datasets that are really hard to read. > > An illustrative example along the lines I'm thinking: > > - mytable: > columns: [ id, name, description, version: 0, created_by: 1, > last_modified_by: 1 ] # defines order and default values > rows: > - [ 1, "Just", "The first row" ] > - [ 2, "another", "The second row" ] > - [ 3, "example" ] # No comment here, default is null > > - mytable: > columns: { created_by : 2, last_modified_by : 2 } # new default values > rows: > - [ 4, "created by" ] > - [ 5, "another user" ] > > I am also thinking of including some sequence generating mechanism, as > well as supporting YAML anchors and aliases. > > I am encouraged by my employer to provide the specification and an > implementation of it to the open source community, and DbUnit would of > course be a natural home for it. The question is - is there any chance that > such an extension would be included, or should I just create a separate > open source project for it? > > Regards, > RoyB > > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > dbunit-developer mailing list > dbu...@li... 
> https://lists.sourceforge.net/lists/listinfo/dbunit-developer > > -- Life is interfering with my game |
|
From: Roy B. <roy...@do...> - 2013-03-13 18:12:41
|
I know Arquillian Persistence Extension supports YAML datasets, but that one has two major shortcomings: |
|
From: Roy B. <roy...@do...> - 2013-03-13 17:55:23
|
(Sorry about sending that previous incomplete snippet!)
I know Arquillian Persistence Extension supports YAML datasets, but that
one has two major shortcomings:
* It's part of a bigger testing framework that not all DbUnit users use
* It doesn't exploit YAML's structured model - it's basically just another
simple dataset
At my work, I have started specifying a YAML dataset format that uses
positional values (less need to name each value for every row) and default
values (less need to mention each column in each row). I'm pretty sure that
others have the same problem that we're trying to solve: Unnecessarily long
FlatXml datasets that are really hard to read.
An illustrative example along the lines I'm thinking:
- mytable:
columns: [ id, name, description, version: 0, created_by: 1,
last_modified_by: 1 ] # defines order and default values
rows:
- [ 1, "Just", "The first row" ]
- [ 2, "another", "The second row" ]
- [ 3, "example" ] # No comment here, default is null
- mytable:
columns: { created_by : 2, last_modified_by : 2 } # new default values
rows:
- [ 4, "created by" ]
- [ 5, "another user" ]
I am also thinking of including some sequence generating mechanism, as well
as supporting YAML anchors and aliases.
I am encouraged by my employer to provide the specification and an
implementation of it to the open source community, and DbUnit would of
course be a natural home for it. The question is - is there any chance that
such an extension would be included, or should I just create a separate
open source project for it?
Regards,
RoyB
|
|
From: SourceForge.net <no...@so...> - 2012-12-28 14:57:54
|
Bugs item #3598751, was opened at 2012-12-28 06:57 Message generated for change (Tracker Item Submitted) made by foguinho_peruca You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3598751&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Feature Request Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jefferson Campos (foguinho_peruca) Assigned to: Roberto Lo Giacco (rlogiacco) Summary: JUnit 4 Initial Comment: Hello! This is a nice project! It has helped me a lot ;) I have some trouble using DBUnit with all the features of JUnit 4.x. It would be nice to have JUnit 4.x as a dependency instead of JUnit 3.8.2. Is there any plan to add JUnit 4 to the dependency tree? If not, I would consider implementing this feature. How can I get started? Thanks, Jeff ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3598751&group_id=47439 |
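While that dependency change is discussed, dbUnit can already be driven from JUnit 4 without extending the JUnit 3 base classes, by performing the dataset setup in @Before. A minimal sketch; the driver class, JDBC URL, credentials and dataset path are placeholders:

```java
import java.io.FileInputStream;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ProductDaoTest {

    private IDatabaseTester tester;

    @Before
    public void setUp() throws Exception {
        // Seed the test database before each test, replacing the JUnit 3 DBTestCase hooks.
        tester = new JdbcDatabaseTester("org.hsqldb.jdbcDriver",
                "jdbc:hsqldb:mem:testdb", "sa", "");
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(new FileInputStream("src/test/resources/dataset.xml"));
        tester.setDataSet(dataSet);
        tester.onSetup();
    }

    @After
    public void tearDown() throws Exception {
        tester.onTearDown();
    }

    @Test
    public void somethingUsesTheSeededData() {
        // ... exercise the code under test against the seeded database ...
    }
}
```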
|
From: SourceForge.net <no...@so...> - 2012-12-22 16:20:42
|
Bugs item #3587542, was opened at 2012-11-15 08:22 Message generated for change (Comment added) made by ar-morozov You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3587542&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Bug Group: v2.4.* Status: Open Resolution: None Priority: 5 Private: No Submitted By: Michael Hill (mrhcon01) Assigned to: matthias g (gommma) Summary: SQLHelper Oracle Issue Initial Comment: When using and Oracle database, the SQLHelper class throws an exception on line 404 because isAutoIncrement = resultSet.getString(23); fails. I have tried this with ojdbc v4, 5 and 6. While the code catches an exception and continues, it only catches a SQLException and a JDBCException is actually being thrown. try { isAutoIncrement = resultSet.getString(23); } catch(SQLException e){ if(logger.isDebugEnabled()) logger.debug("Could not retrieve the 'isAutoIncrement' property because not yet running on Java 1.5 - defaulting to NO. " + "Table=" + tableName + ", Column=" +columnName, e); // Ignore this one here } This should either be fixed for Oracle or change the Catch to be more generic. Thanks ---------------------------------------------------------------------- Comment By: Artem (ar-morozov) Date: 2012-12-22 08:20 Message: I have same issue with hibernate 4.1.9.Final and SqlServer. Hibernates org.hibernate.exception.internal.SQLStateConversionDelegate throws SQLGrammarException, witch is not catched by code, listed in bug details. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3587542&group_id=47439 |
|
From: SourceForge.net <no...@so...> - 2012-12-19 22:30:36
|
Bugs item #3597583, was opened at 2012-12-19 14:30 Message generated for change (Tracker Item Submitted) made by You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3597583&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Bug Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: https://www.google.com/accounts () Assigned to: matthias g (gommma) Summary: PHPUnit 3.7 does not parse embedded PHP contained within YML Initial Comment: I posted an issue over at PHPUnit (https://github.com/sebastianbergmann/phpunit/issues/756), and the dev indicated that it wasn't a PHPUnit issue but a DBUnit issue, so I am posting over here. If this not the correct process, I apologize. I'm not sure which version of dbunit PHPUnit is built upon. Here is the contents of the post: We recently came across an issue where unit tests on one server were running fine, but on another server they kept failing. After some investigation it appears the ability for PHPUnit to parse embedded within Yaml files was broke between v3.6 and v3.7. // Debug code within one of our tests var_dump((string) $this->getDataset()); exit; Results of running test on v3.6: +----------------------+----------------------+----------------------+----------------------+ | sitemanager.clients | +----------------------+----------------------+----------------------+----------------------+ | id | clientid | parentid | clientname | +----------------------+----------------------+----------------------+----------------------+ | 1 | unittest | unittest | Unit Test Client | +----------------------+----------------------+----------------------+----------------------+ Results of running test on v3.7 +----------------------+----------------------+----------------------+----------------------+ | sitemanager.clients | +----------------------+----------------------+----------------------+----------------------+ | id | clientid | parentid | clientname | +----------------------+----------------------+----------------------+----------------------+ | 1 | <?php echo Test_Envi | <?php echo Test_Envi | <?php echo Test_Envi | +----------------------+----------------------+----------------------+----------------------+ Notice in the v3.7 output, the php code is treated as a literal string, which obviously breaks unit tests that assert a specific value. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3597583&group_id=47439 |
|
From: SourceForge.net <no...@so...> - 2012-12-04 06:32:18
|
Feature Requests item #3592360, was opened at 2012-12-03 22:32 Message generated for change (Tracker Item Submitted) made by You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449494&aid=3592360&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Yannick () Assigned to: Nobody/Anonymous (nobody) Summary: Accept column names including the character # Initial Comment: Hello all, We've come across an old legacy database where some of the columns are named for instance EDL#DTTRS or ESO#NMDBT. We are using Hibernate as an ORM. We have created DAOs for our entities and created integration tests connecting directly to the "real" test database which worked fine. When I tried to create a unit test for my DAO with a local in memory (HSQLDB) database, using the @DataSet of Unitils (and my custom DataSetFactory using the XmlDataSet instead of the FlatXmlDataSet), I got the "table not found" error for my entities that had columns with the # sign in them. After some debugging, I found out that the initialize method of the DatabaseDataSet class find all the tables defined in my entities, but the only 2 that have those kinds of column names. Could it be possible for you guys to do something about this? Using latest version of everything (Hibernate 4.1.7.Final, DBUnit 2.4.9, JUnit 4.10). Thanks a bunch in advance ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449494&aid=3592360&group_id=47439 |
|
From: SourceForge.net <no...@so...> - 2012-12-03 16:38:17
|
Bugs item #3592128, was opened at 2012-12-03 08:38 Message generated for change (Tracker Item Submitted) made by You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3592128&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Bug Group: v2.4.* Status: Open Resolution: None Priority: 5 Private: No Submitted By: https://www.google.com/accounts () Assigned to: matthias g (gommma) Summary: CsvDataFileLoader load from filename doesn't work Initial Comment: DBUnit version = 2.4.9 Call to org.dbunit.util.fileloader.CsvDataFileLoader.load(String filename), triggers org.dbunit.util.fileloader.AbstractDataFileLoader#load() implementation, where the file is loaded via the following line: URL url = this.getClass().getResource(filename); which causes a resource search local to AbstractDataFileLoader class. In practice, no CSV file can be put at this place (or it would be dirty) Instead, the following line this.getClass().getClassLoader().getResource(filename) would have allow a resource search from root folder. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3592128&group_id=47439 |
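The distinction this report relies on is standard Java resource lookup, independent of dbUnit. A small stand-alone illustration (resource names are examples only):

```java
import java.net.URL;

public class ResourceLookupDemo {

    public static void main(String[] args) {
        // Class.getResource with no leading '/' resolves relative to this class's
        // package, so "dataset.csv" is only found next to the class itself.
        URL relativeToClass = ResourceLookupDemo.class.getResource("dataset.csv");

        // ClassLoader.getResource always resolves from the classpath root, so the
        // file may live anywhere on the classpath, e.g. under a "data" folder.
        URL fromClasspathRoot = ResourceLookupDemo.class.getClassLoader()
                .getResource("data/dataset.csv");

        // A leading '/' makes Class.getResource behave like the class-loader lookup.
        URL absoluteName = ResourceLookupDemo.class.getResource("/data/dataset.csv");

        System.out.println(relativeToClass);
        System.out.println(fromClasspathRoot);
        System.out.println(absoluteName);
    }
}
```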
|
From: SourceForge.net <no...@so...> - 2012-11-29 22:22:05
|
Bugs item #3591169, was opened at 2012-11-29 14:22 Message generated for change (Tracker Item Submitted) made by You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3591169&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Bug Group: v2.4.* Status: Open Resolution: None Priority: 5 Private: No Submitted By: https://www.google.com/accounts () Assigned to: matthias g (gommma) Summary: Improper Exception handling in SQLHelper Initial Comment: This is related to bug id 3587542. On line 406 of SQLHelper.java, the catch block attempts to catch a SQLException in the event that the column does not exist. Unfortunately this is not the behavior, instead you get an ArrayIndexOutOfBoundsException, and since this is not handled execution terminates. The simple fix is to add an additional catch block to catch an ArrayIndexOutOfBoundsException for this scenario. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3591169&group_id=47439 |
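A hypothetical sketch of the defensive catch this report proposes; the real SQLHelper code differs, and column 23 of DatabaseMetaData.getColumns() is the IS_AUTOINCREMENT field introduced with JDBC 4:

```java
import java.sql.ResultSet;
import java.sql.SQLException;

public final class AutoIncrementLookup {

    private AutoIncrementLookup() {
    }

    /** Returns IS_AUTOINCREMENT for the current row, or "NO" if the driver cannot provide it. */
    static String isAutoIncrement(ResultSet columnsResultSet) {
        try {
            return columnsResultSet.getString(23);
        } catch (SQLException e) {
            // Older drivers simply do not expose the column.
            return "NO";
        } catch (RuntimeException e) {
            // Some drivers signal the missing column with an unchecked exception
            // instead (e.g. ArrayIndexOutOfBoundsException), which a catch limited
            // to SQLException lets escape.
            return "NO";
        }
    }
}
```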
|
From: SourceForge.net <no...@so...> - 2012-11-15 16:22:19
|
Bugs item #3587542, was opened at 2012-11-15 08:22 Message generated for change (Tracker Item Submitted) made by mrhcon01 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3587542&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Bug Group: v2.4.* Status: Open Resolution: None Priority: 5 Private: No Submitted By: Michael Hill (mrhcon01) Assigned to: matthias g (gommma) Summary: SQLHelper Oracle Issue Initial Comment: When using and Oracle database, the SQLHelper class throws an exception on line 404 because isAutoIncrement = resultSet.getString(23); fails. I have tried this with ojdbc v4, 5 and 6. While the code catches an exception and continues, it only catches a SQLException and a JDBCException is actually being thrown. try { isAutoIncrement = resultSet.getString(23); } catch(SQLException e){ if(logger.isDebugEnabled()) logger.debug("Could not retrieve the 'isAutoIncrement' property because not yet running on Java 1.5 - defaulting to NO. " + "Table=" + tableName + ", Column=" +columnName, e); // Ignore this one here } This should either be fixed for Oracle or change the Catch to be more generic. Thanks ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3587542&group_id=47439 |
|
From: SourceForge.net <no...@so...> - 2012-10-31 09:52:03
|
Bugs item #3582178, was opened at 2012-10-31 02:52 Message generated for change (Tracker Item Submitted) made by You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3582178&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Bug Group: v2.4.* Status: Open Resolution: None Priority: 5 Private: No Submitted By: Torbjørn Knutsen () Assigned to: matthias g (gommma) Summary: Self-referencing tables using excel does not work Initial Comment: I read in your changelog that self-referencing tables was supposed to be fixed (2439408) in v.2.4.1, but I can't really see that it is. I am using v.2.4.8 in combination with excel spreadsheets for holding the data. I get errors when I add a self-reference to tables. I have a table Application, which contains the following fields: (id, name, nextVersion_id). If I have data set up like this: (1, SomeApplication, 2) (2, SomeApplication, ) I get the following error on all tests that uses dbUnit: java.sql.SQLException: Integrity constraint violation - no parent FK93F0B94270757822 table: SOKNAD in statement [insert into APPLICATION(ID, NAME, NEXTVERSION_ID values (?, ?, ? )] at org.hsqldb.jdbc.Util.throwError(Unknown Source) at org.hsqldb.jdbc.jdbcPreparedStatement.execute(Unknown Source) at org.dbunit.database.statement.SimplePreparedStatement.addBatch(SimplePreparedStatement.java:80) at org.dbunit.database.statement.AutomaticPreparedBatchStatement.addBatch(AutomaticPreparedBatchStatement.java:70) at org.dbunit.operation.AbstractBatchOperation.execute(AbstractBatchOperation.java:195) at org.dbunit.operation.CompositeOperation.execute(CompositeOperation.java:79) Thrown by a call to DatabaseOperation.CLEAN_INSERT.execute(con, dataSet); I'm guessing that this is due to the referencing entry being created before the referenced entry. But, when I flip the two: (2, SomeApplication, ) (1, SomeApplication, 2) The first test run will execute, but the following will throw: java.sql.SQLException: Integrity constraint violation FK93F0B94270757822 table: APPLICATION at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.jdbcStatement.fetchResult(Unknown Source) at org.hsqldb.jdbc.jdbcStatement.execute(Unknown Source) at org.dbunit.database.statement.SimpleStatement.executeBatch(SimpleStatement.java:69) at org.dbunit.operation.DeleteAllOperation.execute(DeleteAllOperation.java:126) at org.dbunit.operation.CompositeOperation.execute(CompositeOperation.java:79) I'm guessing that this has something to do with the CLEAN part of DatabaseOperation.CLEAN_INSERT.execute(con, dataSet);, that it removes stuff in reverse order, resulting in an attempt to remove a referenced entry before the referencing entry is removed, and thus a constraint violation. Is it possible to have the clean part go through the entries in the same order as the create part? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3582178&group_id=47439 |
|
From: SourceForge.net <no...@so...> - 2012-10-30 12:03:07
|
Bugs item #3581883, was opened at 2012-10-30 05:03 Message generated for change (Tracker Item Submitted) made by anewsome1 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3581883&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Bug Group: v2.4.* Status: Open Resolution: None Priority: 5 Private: No Submitted By: Andrew Newsome (anewsome1) Assigned to: matthias g (gommma) Summary: NoSuchTableException - Multiple Schema Initial Comment: Apologies if this is a duplicate - I have read through the older bugs and the issue is still affecting me, so I thought I'd raise a new bug. Environment ----------------- DBUnit 2.4.9, Mysql 5.5, MySQL connector 5.1.16. Issue ------- The issue is that on a DELETE_ALL execution for a dataset which has fully qualified table names, I get a NoSuchTableException for 'SchemaName.TableName', when I have explicitly set the fully qualified name feature to true: dbUnitConnection = new DatabaseConnection(connection); dbUnitConnection.getConfig().setProperty(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true); I am connecting to MySQL with root privileges and the connection URL is 'jdbc:mysql://localhost:3306/'. I have also tried 'jdbc:mysql://localhost:3306/information_schema', and 'jdbc:mysql://localhost:3306/<SchemaName>' , and none of them appear to work. I am connecting to the root of Mysql, rather than specifying a schema because I am deleting/inserting data into multiple schemas. As you can see in the code snippet above, I am not explicitly setting a schema name within the DatabaseConnection constructor, again, because there are multiple schemas. I have debugged through the DBUnit code, and can see that in the org.dbunit.database.DatabaseDataSet class (line 210) no table meta data rows are returned from the Mysql driver. I understand that there may be a problem with the way in which I am using DBUnit, in the way in which I am not specifying a schema anywhere other than in the Dataset XML files, however I haven't come across any documentation which directs me in to using a different approach for the setup I am wishing to perform. If a schema name is required within the DatabaseConnection constructor, then can it be any of them? I would have thought that the meta data returned would only be for one of the multiple schemas that I am using then, in that case. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449491&aid=3581883&group_id=47439 |
|
From: SourceForge.net <no...@so...> - 2012-10-20 21:00:59
|
Feature Requests item #3578765, was opened at 2012-10-20 13:56 Message generated for change (Comment added) made by jeffjensen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449494&aid=3578765&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None >Status: Closed >Resolution: Fixed Priority: 8 Private: No Submitted By: Jeff Jensen (jeffjensen) Assigned to: Jeff Jensen (jeffjensen) Summary: Change release process to use Sonatype OSSRH Initial Comment: SourceForge sync to Maven Central was stopped last July. dbUnit release 2.4.9 was synced to Central the old way as a one-time favor to allow for time to setup dbUnit to deploy via Sonatype's OSSRH. ---------------------------------------------------------------------- >Comment By: Jeff Jensen (jeffjensen) Date: 2012-10-20 14:00 Message: Commit 1263. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449494&aid=3578765&group_id=47439 |
|
From: SourceForge.net <no...@so...> - 2012-10-20 20:56:58
|
Feature Requests item #3578765, was opened at 2012-10-20 13:56 Message generated for change (Tracker Item Submitted) made by jeffjensen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449494&aid=3578765&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 8 Private: No Submitted By: Jeff Jensen (jeffjensen) Assigned to: Jeff Jensen (jeffjensen) Summary: Change release process to use Sonatype OSSRH Initial Comment: SourceForge sync to Maven Central was stopped last July. dbUnit release 2.4.9 was synced to Central the old way as a one-time favor to allow for time to setup dbUnit to deploy via Sonatype's OSSRH. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449494&aid=3578765&group_id=47439 |
|
From: SourceForge.net <no...@so...> - 2012-10-20 18:58:08
|
Feature Requests item #564041, was opened at 2002-06-03 13:06 Message generated for change (Settings changed) made by jeffjensen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=449494&aid=564041&group_id=47439 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None >Status: Pending >Resolution: None Priority: 5 Private: No Submitted By: Jeremy Stein (jeremystein) >Assigned to: Jeff Jensen (jeffjensen) Summary: Share transaction with InsertIdentityOp. Initial Comment: Please change InsertIdentityOperation to allow it to share an existing transaction rather than assume that autocommit is on. One database testing technique is to do the following: 1) Begin a transaction 2) Set up test data (with DbUnit!) 3) Call the tested method 4) Verify the results (with DbUnit!) 5) Rollback the transaction A rollback ensures that all data changes are correctly removed even if the tested method made unexpected data changes. And it can be faster than deleting the records. (For example, when a table has many unindexed foreign keys pointing to it.) This technique cannot currently be used with InsertIdentityOperation because this operation expects to control the transaction itself. It would be nice if there was a switch of some sort that would cause it to use the existing transaction without comitting or changing the autocommit flag. ---------------------------------------------------------------------- >Comment By: Jeff Jensen (jeffjensen) Date: 2012-10-20 11:58 Message: Reopened. Please attach the patch. ---------------------------------------------------------------------- Comment By: Oliver (oman002) Date: 2012-09-24 20:50 Message: NB: The fix by gomma in rev 860 doesn't work - but I have a patch that does. ---------------------------------------------------------------------- Comment By: Oliver (oman002) Date: 2012-09-24 20:44 Message: Hi - I have the problem in current trunk (v2.4.9-SNAPSHOT). I have a patch that fixes this as described by james_a_woods. How can I get it into trunk? ---------------------------------------------------------------------- Comment By: SourceForge Robot (sf-robot) Date: 2008-11-21 18:20 Message: This Tracker item was closed automatically by the system. It was previously set to a Pending status, and the original submitter did not respond within 14 days (the time period specified by the administrator of this Tracker). ---------------------------------------------------------------------- Comment By: matthias g (gommma) Date: 2008-11-02 11:46 Message: Hi there, I committed the change as proposed by james in rev. 860/trunk for the next 2.4 release. Could please somebody having a running MSSQL server test this feature? You can get the latest dbunit build from the parabuild server in the section "Results" (see http://parabuild.viewtier.com:8080/parabuild/index.htm?view=detailed&buildid=30 ). Thanks and regards, mat ---------------------------------------------------------------------- Comment By: James Woods (james_a_woods) Date: 2005-06-01 01:25 Message: Logged In: YES user_id=1200768 Sorry, cut and paste problem. I copied the original code. Here is the real version. I appologise for the formatting, but this input box wraps lines a bit short. 
public void execute(IDatabaseConnection connection, IDataSet dataSet) throws DatabaseUnitException, SQLException { Connection jdbcConnection = connection.getConnection(); Statement statement = jdbcConnection.createStatement(); boolean wasAutoCommit = false; try { IDataSet databaseDataSet = connection.createDataSet(); // INSERT_IDENTITY need to be enabled/disabled inside the // same transaction if (jdbcConnection.getAutoCommit() == true) { wasAutoCommit = true; jdbcConnection.setAutoCommit(false); } // Execute decorated operation one table at a time ITableIterator iterator = dataSet.iterator(); while (iterator.next()) { ITable table = iterator.getTable(); String tableName = table.getTableMetaData().getTableName(); ITableMetaData metaData = databaseDataSet.getTableMetaData(tableName); // enable identity insert boolean hasIdentityColumn = hasIdentityColumn(metaData, connection); if (hasIdentityColumn) { StringBuffer sqlBuffer = new StringBuffer(128); sqlBuffer.append("SET IDENTITY_INSERT "); sqlBuffer.append(getQualifiedName(connection.getSchema(), metaData .getTableName(), connection)); sqlBuffer.append(" ON"); statement.execute(sqlBuffer.toString()); } try { _operation.execute(connection, new DefaultDataSet(table)); } finally { // disable identity insert if (hasIdentityColumn) { StringBuffer sqlBuffer = new StringBuffer(128); sqlBuffer.append("SET IDENTITY_INSERT "); sqlBuffer.append(getQualifiedName(connection.getSchema(), metaData .getTableName(), connection)); sqlBuffer.append(" OFF"); statement.execute(sqlBuffer.toString()); } if (wasAutoCommit) jdbcConnection.commit(); } } } finally { if (wasAutoCommit) jdbcConnection.setAutoCommit(true); statement.close(); } } ---------------------------------------------------------------------- Comment By: James Woods (james_a_woods) Date: 2005-06-01 01:23 Message: Logged In: YES user_id=1200768 I have run into the same problem using version 2.1, so I am assuming that you havn't had a time to look at this yet. I have had some success by copying the InsertIdentityOperation class into my own source code and changing the execute method as shown below. Feel free to do with this as you like if you find it useful. 
public void execute(IDatabaseConnection connection, IDataSet dataSet)
        throws DatabaseUnitException, SQLException
{
    Connection jdbcConnection = connection.getConnection();
    Statement statement = jdbcConnection.createStatement();
    try
    {
        IDataSet databaseDataSet = connection.createDataSet();

        // INSERT_IDENTITY needs to be enabled/disabled inside the
        // same transaction
        if (jdbcConnection.getAutoCommit() == false)
        {
            throw new ExclusiveTransactionException();
        }
        jdbcConnection.setAutoCommit(false);

        // Execute decorated operation one table at a time
        ITableIterator iterator = dataSet.iterator();
        while (iterator.next())
        {
            ITable table = iterator.getTable();
            String tableName = table.getTableMetaData().getTableName();
            ITableMetaData metaData = databaseDataSet.getTableMetaData(tableName);

            // enable identity insert
            boolean hasIdentityColumn = hasIdentityColumn(metaData, connection);
            if (hasIdentityColumn)
            {
                StringBuffer sqlBuffer = new StringBuffer(128);
                sqlBuffer.append("SET IDENTITY_INSERT ");
                sqlBuffer.append(getQualifiedName(connection.getSchema(),
                        metaData.getTableName(), connection));
                sqlBuffer.append(" ON");
                statement.execute(sqlBuffer.toString());
            }

            try
            {
                _operation.execute(connection, new DefaultDataSet(table));
            }
            finally
            {
                // disable identity insert
                if (hasIdentityColumn)
                {
                    StringBuffer sqlBuffer = new StringBuffer(128);
                    sqlBuffer.append("SET IDENTITY_INSERT ");
                    sqlBuffer.append(getQualifiedName(connection.getSchema(),
                            metaData.getTableName(), connection));
                    sqlBuffer.append(" OFF");
                    statement.execute(sqlBuffer.toString());
                }
                jdbcConnection.commit();
            }
        }
    }
    finally
    {
        jdbcConnection.setAutoCommit(true);
        statement.close();
    }
}

----------------------------------------------------------------------

Comment By: Manuel Laflamme (mlaflamm)
Date: 2002-06-13 05:57

Message:
Logged In: YES user_id=466344

This fix/functionality should be available for version 1.5

----------------------------------------------------------------------

Comment By: Jeremy Stein (jeremystein)
Date: 2002-06-03 13:12

Message:
Logged In: YES user_id=492988

Or, rather than using a switch, the operation could determine its behavior based on the value of autocommit when it starts. If autocommit is already off, then it doesn't commit or set autocommit.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449494&aid=564041&group_id=47439
|
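For context, the transaction-rollback technique described in the initial comment looks roughly like the following when written as a test. This is only a sketch under assumptions not made in the thread: the JDBC URL, the fixture.xml classpath resource, and the commented-out service call are placeholders, and it uses the plain INSERT operation rather than InsertIdentityOperation (which is exactly the case the request wants to support).

import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class RollbackStyleTest
{
    public void testSomething() throws Exception
    {
        // Obtain a plain JDBC connection; URL and credentials are placeholders.
        Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        jdbc.setAutoCommit(false);                         // 1) begin a transaction
        IDatabaseConnection connection = new DatabaseConnection(jdbc);
        try
        {
            // 2) set up test data with DbUnit (fixture.xml is a hypothetical flat XML dataset)
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(getClass().getResourceAsStream("/fixture.xml"));
            DatabaseOperation.INSERT.execute(connection, dataSet);

            // 3) call the tested method, e.g. myService.doWork(jdbc);

            // 4) verify the results with DbUnit assertions
        }
        finally
        {
            jdbc.rollback();                               // 5) roll back: all changes disappear
            connection.close();
        }
    }
}

Because INSERT neither commits nor touches the autocommit flag, the rollback removes everything the fixture and the tested method wrote; the request is for InsertIdentityOperation to behave the same way when a transaction is already open.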
|
From: SourceForge.net <no...@so...> - 2012-10-20 02:35:43
|
Feature Requests item #2905970, was opened at 2009-11-30 05:47
Message generated for change (Comment added) made by jeffjensen
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449494&aid=2905970&group_id=47439

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: Accepted
Priority: 5
Private: No
Submitted By: pkamm (pkamm)
Assigned to: John Hurst (jbhurst)
Summary: Upgrade to POI Version 3.5-FINAL

Initial Comment:
The current distribution of DbUnit is incompatible with POI version 3.5-FINAL, causing the following error:

java.lang.NoSuchMethodError: org.apache.poi.hssf.usermodel.HSSFDateUtil.isCellDateFormatted(Lorg/apache/poi/hssf/usermodel/HSSFCell;)Z
    at org.dbunit.dataset.excel.XlsTable.getValue(XlsTable.java:153)
    at org.dbunit.operation.AbstractBatchOperation.isEmpty(AbstractBatchOperation.java:77)
    at org.dbunit.operation.AbstractBatchOperation.execute(AbstractBatchOperation.java:135)
    at com.carlson.cwt.test.db.HsqlDbSqlFileLoader.importXlsDataSet(HsqlDbSqlFileLoader.java:166)

----------------------------------------------------------------------

>Comment By: Jeff Jensen (jeffjensen)
Date: 2012-10-19 19:35

Message:
John, if we continue to build with Java 6 now, are you happy to move forward with this? See also 3147832.

----------------------------------------------------------------------

Comment By: John Hurst (jbhurst)
Date: 2010-01-23 14:18

Message:
Perhaps we should put reflection code in here to get this working in the meantime, until we get to DbUnit 3.0 with Java 5 support.

----------------------------------------------------------------------

Comment By: John Hurst (jbhurst)
Date: 2010-01-23 14:17

Message:
I've reopened this ticket and reverted the previous changes, because POI-3.5 is not compatible with Java 1.4, required for the DbUnit "official" profile build. This needs to be resolved before we can include a POI-3.5 dependency.

----------------------------------------------------------------------

Comment By: John Hurst (jbhurst)
Date: 2009-12-27 13:36

Message:
Fixed in svn:1132.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449494&aid=2905970&group_id=47439
|
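The reflection workaround John mentions could look roughly like the sketch below. DateUtilCompat is a hypothetical helper, not part of DbUnit, and it is written in pre-generics syntax because the "official" build still targeted Java 1.4 at the time; it locates isCellDateFormatted at runtime so the same class works whether the deployed POI declares the parameter as HSSFCell (pre-3.5) or as the Cell interface (3.5+), which is what caused the NoSuchMethodError above.

import java.lang.reflect.Method;

import org.apache.poi.hssf.usermodel.HSSFCell;

// Hypothetical compatibility helper: XlsTable would call this instead of
// invoking HSSFDateUtil.isCellDateFormatted directly.
final class DateUtilCompat
{
    private DateUtilCompat()
    {
    }

    static boolean isCellDateFormatted(HSSFCell cell)
    {
        try
        {
            Class dateUtil = Class.forName("org.apache.poi.hssf.usermodel.HSSFDateUtil");
            Method[] methods = dateUtil.getMethods();
            for (int i = 0; i < methods.length; i++)
            {
                Method m = methods[i];
                // Accept either signature: (HSSFCell) in old POI, (Cell) in POI 3.5+.
                if ("isCellDateFormatted".equals(m.getName())
                        && m.getParameterTypes().length == 1
                        && m.getParameterTypes()[0].isInstance(cell))
                {
                    Object result = m.invoke(null, new Object[] { cell });
                    return ((Boolean) result).booleanValue();
                }
            }
            throw new IllegalStateException("No compatible isCellDateFormatted method found");
        }
        catch (Exception e)
        {
            throw new IllegalStateException("Reflective call to HSSFDateUtil failed: " + e);
        }
    }
}

Per the thread, the workaround was only an interim idea; the accepted resolution was to upgrade the POI dependency once the build no longer had to support Java 1.4.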
|
From: SourceForge.net <no...@so...> - 2012-10-20 02:27:21
|
Feature Requests item #3405335, was opened at 2011-09-06 23:12
Message generated for change (Comment added) made by jeffjensen
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449494&aid=3405335&group_id=47439

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
>Status: Pending
Resolution: None
Priority: 5
Private: No
Submitted By: Stéphane Landelle (slandelle)
Assigned to: Nobody/Anonymous (nobody)
Summary: Reduce FlatXmlProducer memory footprint

Initial Comment:
I use DbUnit for loading large flat datasets. The memory footprint is very high because FlatXmlDataSet is a CachedDataSet and keeps all the data in memory. The problem is that FlatXmlProducer instantiates Columns directly from SAX output, so attribute names and values are always new Strings, even though they might be equal to previously seen data (column names, foreign keys). Why not use a cache (this behavior could be optional) that would reuse String instances? I can provide a patch if you're interested.

Sincerely,
Stéphane Landelle

----------------------------------------------------------------------

>Comment By: Jeff Jensen (jeffjensen)
Date: 2012-10-19 19:27

Message:
If you have improvements, yes please attach a patch.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=449494&aid=3405335&group_id=47439
|
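The kind of string cache the submitter describes might look like the sketch below. It is an illustration only, not the proposed patch: the StringCache class and the idea of applying it inside FlatXmlProducer before rows reach the CachedDataSet are assumptions based on the request.

import java.util.HashMap;
import java.util.Map;

// Sketch of an optional string cache: repeated attribute names and values
// coming out of the SAX parser are replaced by one shared instance instead
// of a fresh String per row.
final class StringCache
{
    private final Map<String, String> cache = new HashMap<String, String>();

    String intern(String value)
    {
        if (value == null)
        {
            return null;
        }
        String cached = cache.get(value);
        if (cached != null)
        {
            return cached;
        }
        cache.put(value, value);
        return value;
    }
}

Applied to each column name and cell value as FlatXmlProducer parses the document, a cache like this would let large datasets with many repeated values (foreign keys, enumerated columns) share a single String instance per distinct value, which is where the memory saving would come from.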