From: William P. <wil...@ya...> - 2010-01-12 20:14:47
On Jan 12, 2010, at 2:57 PM, Hilmar Lapp wrote: > Wouldn't that complicate the data import in some way? I'm OK if it doesn't, but if it does I think we may need to defer until after the release. If the citation-update-script can take the Endnote output and update 284 records, I don't see why it can't take a bigger Endnote output and update 2,350 records (which is the total number of citations now in TreeBASE1). I don't recall that this is a slow process, and it should have been designed for multi- and incremental updates. If Mark didn't design it for multiple & incremental updating (e.g. if each time you run it it creates more and more duplicate author records), then yes, we should only do the 284-record update. Perhaps Vladimir can examine the code and confirm that it is designed for multiple runs of the same script on the same set of records. bp
From: Vladimir G. <vla...@du...> - 2010-01-12 20:10:18
Youjun has noted that the JNDI update will break the tests: they are executed on the developer side, where JNDI data sources are not available. (Technical details: my update will re-define the dataSource bean in treebase-core/src/main/resources/applicationContext-dao.xml to look itself up in JNDI, instead of creating itself as an object of class com.mchange.v2.c3p0.ComboPooledDataSource.) A correct resolution, I believe, should provide a Spring configuration file that is an alternative to the (new) applicationContext-dao.xml and configure the tests to use this configuration. Does anyone have ideas for how to make this happen? I, unfortunately, do not yet know enough about Spring and Maven to figure this out. --Vladimir On Jan 11, 2010, at 3:45 PM, Vladimir Gapeyev wrote: > It looks like I have figured out how to set up Treebase to use JNDI > data sources. Surgery on code and on the build procedures is > surprisingly minor, but if anyone is concerned about effects on their > not-yet-committed code, react soon. I'll commit my changes and post > switch instructions Tuesday morning. > > A bonus question: TB currently does its Connection pooling via the > package com.mchange.v2.c3p0.ComboPooledDataSource see treebase-core/src/main/resources/applicationContext-dao.xml. Is anyone aware why > this (obscure?) choice? Tomcat, by default, uses another pooling > library (Apache commons-dbcp) behind DataSources that it serves via > JNDI. I'd rather stick with the Tomcat's default. > > --Vladimir > > ------------------------------------------------------------------------------ > This SF.Net email is sponsored by the Verizon Developer Community > Take advantage of Verizon's best-in-class app development support > A streamlined, 14 day to market process makes app distribution fast > and easy > Join now and get one step closer to millions of Verizon customers > http://p.sf.net/sfu/verizon-dev2dev > _______________________________________________ > Treebase-devel mailing list > Tre...@li... > https://lists.sourceforge.net/lists/listinfo/treebase-devel
From: Hilmar L. <hl...@ne...> - 2010-01-12 19:57:21
Wouldn't that complicate the data import in some way? I'm OK if it doesn't, but if it does I think we may need to defer until after the release. -hilmar On Jan 12, 2010, at 2:52 PM, William Piel wrote: > > I don't think so because I believe she was working with the citation > data up until Jan 2009. This Endnote file is for Jan 09 through Dec > 09. But indeed, let's fuse Rosie's pre 09 data with this 2009 data > and used the fused file to update all data in TreeBASE2. > > bp > > > On Jan 12, 2010, at 2:06 PM, Ryan Scherle wrote: > >> Does this include the updates that Rosie did last Fall? She filled >> out the majority of (volume, issue, pages, DOIs) for content that >> TreeBASE and Dryad have in common. The Endnote file she created is >> in the subversion, in trunk/treebase-curation/studyCitations.enl >> >> -- Ryan >> >> On Jan 11, 2010, at 4:16 PM, William Piel wrote: >> >>> >>> The citation metadata are now available here: >>> >>> http://www.treebase.org/treebase/migration/Dec-09/citations_utf8.zip >>> >>> Much of the metadata still need looking-up (e.g. volume, issue, >>> pages, DOIs) but these can be fixed by our student help directly >>> into the database after the migration is done. >>> >>> This completes the data files required for the Jan09 thru Dec09 >>> migration. Once the record-id-sequence problem if fixed, the >>> migration scripts can be run. >>> >>> Naturally, I continue to edit and accession new submissions to >>> TreeBASE1, which means that we will need to do a small additional >>> Jan2010 migration after the Dec09 migration is complete. But this >>> should be easy since, finger's crossed, seeing as all teething >>> issues re. migration will have been resolved. 
>>> >>> bp -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : ===========================================================
From: William P. <wil...@ya...> - 2010-01-12 19:52:15
I don't think so because I believe she was working with the citation data up until Jan 2009. This Endnote file is for Jan 09 through Dec 09. But indeed, let's fuse Rosie's pre-09 data with this 2009 data and use the fused file to update all data in TreeBASE2. bp On Jan 12, 2010, at 2:06 PM, Ryan Scherle wrote: > Does this include the updates that Rosie did last Fall? She filled out the majority of (volume, issue, pages, DOIs) for content that TreeBASE and Dryad have in common. The Endnote file she created is in the subversion, in trunk/treebase-curation/studyCitations.enl > > -- Ryan > > On Jan 11, 2010, at 4:16 PM, William Piel wrote: > >> >> The citation metadata are now available here: >> >> http://www.treebase.org/treebase/migration/Dec-09/citations_utf8.zip >> >> Much of the metadata still need looking-up (e.g. volume, issue, pages, DOIs) but these can be fixed by our student help directly into the database after the migration is done. >> >> This completes the data files required for the Jan09 thru Dec09 migration. Once the record-id-sequence problem is fixed, the migration scripts can be run. >> >> Naturally, I continue to edit and accession new submissions to TreeBASE1, which means that we will need to do a small additional Jan2010 migration after the Dec09 migration is complete. But this should be easy since, fingers crossed, all teething issues re. migration will have been resolved. >> >> bp
From: Ryan S. <rsc...@ne...> - 2010-01-12 19:07:04
Does this include the updates that Rosie did last Fall? She filled out the majority of (volume, issue, pages, DOIs) for content that TreeBASE and Dryad have in common. The Endnote file she created is in the subversion, in trunk/treebase-curation/studyCitations.enl -- Ryan On Jan 11, 2010, at 4:16 PM, William Piel wrote: > > The citation metadata are now available here: > > http://www.treebase.org/treebase/migration/Dec-09/citations_utf8.zip > > Much of the metadata still need looking-up (e.g. volume, issue, pages, DOIs) but these can be fixed by our student help directly into the database after the migration is done. > > This completes the data files required for the Jan09 thru Dec09 migration. Once the record-id-sequence problem is fixed, the migration scripts can be run. > > Naturally, I continue to edit and accession new submissions to TreeBASE1, which means that we will need to do a small additional Jan2010 migration after the Dec09 migration is complete. But this should be easy since, fingers crossed, all teething issues re. migration will have been resolved. > > bp
From: Hilmar L. <hl...@ne...> - 2010-01-12 16:11:51
Taken together with Bill's email, this would mean that we push our release target date to Feb 5. That's right within our current target date estimate, so I think we will need to go in during our next call and carefully revisit the whole plan to see what can be cut off and still make the release. -hilmar On Jan 12, 2010, at 9:51 AM, Rutger Vos wrote: > I will be away the last week of January (family visit) and the second > week of February (BioHackathon). > > On Mon, Jan 11, 2010 at 10:17 PM, Hilmar Lapp <hl...@ne...> > wrote: >> We would like to firm up a release date for February. More >> specifically, it'd be highly desirable to release by Wed, Feb 17, at >> the latest. >> >> Because I will be away that day and earlier that week at a workshop, >> I'd strongly prefer that we be able to release by Friday, Feb 12. >> >> I would like us during the next teleconference to focus on narrowing >> down what needs to happen by what time to accomplish this. If you >> have >> concerns about meeting that date, please let me know in time so we >> can >> address these things at the next conference call. >> >> -hilmar >> -- >> =========================================================== >> : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : >> =========================================================== > -- > Dr. Rutger A. Vos > School of Biological Sciences > Philip Lyle Building, Level 4 > University of Reading > Reading > RG6 6BX > United Kingdom > Tel: +44 (0) 118 378 7535 > http://www.nexml.org > http://rutgervos.blogspot.com -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : ===========================================================
From: Rutger V. <rut...@gm...> - 2010-01-12 14:51:27
I will be away the last week of January (family visit) and the second week of February (BioHackathon). On Mon, Jan 11, 2010 at 10:17 PM, Hilmar Lapp <hl...@ne...> wrote: > We would like to firm up a release date for February. More > specifically, it'd be highly desirable to release by Wed, Feb 17, at > the latest. > > Because I will be away that day and earlier that week at a workshop, > I'd strongly prefer that we be able to release by Friday, Feb 12. > > I would like us during the next teleconference to focus on narrowing > down what needs to happen by what time to accomplish this. If you have > concerns about meeting that date, please let me know in time so we can > address these things at the next conference call. > > -hilmar > -- > =========================================================== > : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : > =========================================================== -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com
From: William P. <wil...@ya...> - 2010-01-12 14:32:04
On Jan 11, 2010, at 5:17 PM, Hilmar Lapp wrote: > We would like to firm up a release date for February. More > specifically, it'd be highly desirable to release by Wed, Feb 17, at > the latest. > > Because I will be away that day and earlier that week at a workshop, > I'd strongly prefer that we be able to release by Friday, Feb 12. I will be away (on mountaintops in the state of Michoacán -- i.e. largely *without* internet) between Feb 14 and Feb 22. This is an unfortunate coincidence because we can anticipate a flurry of problems/questions/issues from users due to the transition. Ideally, then, the release would be either after Feb 22 (e.g. Feb 25), or significantly before Feb 14 (e.g. Feb 1); or we need someone other than me to be available for questions and editorial issues during this period. bp |
From: Hilmar L. <hl...@ne...> - 2010-01-12 14:13:56
We would like to firm up a release date for February. More specifically, it'd be highly desirable to release by Wed, Feb 17, at the latest. Because I will be away that day and earlier that week at a workshop, I'd strongly prefer that we be able to release by Friday, Feb 12. I would like us during the next teleconference to focus on narrowing down what needs to happen by what time to accomplish this. If you have concerns about meeting that date, please let me know in time so we can address these things at the next conference call. -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |
From: Hilmar L. <hl...@ne...> - 2010-01-11 21:44:33
I'd agree with that, commons-dbcp has worked OK in my hands. -hilmar Sent from away On Jan 11, 2010, at 3:45 PM, Vladimir Gapeyev <vga...@ne...> wrote: > Tomcat, by default, uses another pooling > library (Apache commons-dbcp) behind DataSources that it serves via > JNDI. I'd rather stick with the Tomcat's default. |
From: William P. <wil...@ya...> - 2010-01-11 21:16:50
The citation metadata are now available here: http://www.treebase.org/treebase/migration/Dec-09/citations_utf8.zip Much of the metadata still needs looking up (e.g. volume, issue, pages, DOIs) but these can be fixed by our student help directly in the database after the migration is done. This completes the data files required for the Jan09 thru Dec09 migration. Once the record-id-sequence problem is fixed, the migration scripts can be run. Naturally, I continue to edit and accession new submissions to TreeBASE1, which means that we will need to do a small additional Jan2010 migration after the Dec09 migration is complete. But this should be easy since, fingers crossed, all teething issues re. migration will have been resolved. bp
From: Vladimir G. <vga...@ne...> - 2010-01-11 20:45:39
It looks like I have figured out how to set up Treebase to use JNDI data sources. Surgery on the code and on the build procedures is surprisingly minor, but if anyone is concerned about effects on their not-yet-committed code, react soon. I'll commit my changes and post switch instructions Tuesday morning. A bonus question: TB currently does its Connection pooling via the package com.mchange.v2.c3p0.ComboPooledDataSource (see treebase-core/src/main/resources/applicationContext-dao.xml). Is anyone aware of why this (obscure?) choice was made? Tomcat, by default, uses another pooling library (Apache commons-dbcp) behind the DataSources that it serves via JNDI. I'd rather stick with Tomcat's default. --Vladimir
From: youjun g. <you...@gm...> - 2010-01-11 15:51:58
No. 7 was solved. Youjun On Mon, Jan 11, 2010 at 9:35 AM, youjun guo <you...@gm...> wrote: > Thanks Rutger, > > While digging into the No. 1, It most likely due to the MeseuiteConverter > failed on create a matrix upon our test nexus file. Please also look at this > one. > > Talking about the No. 7, It looks to me that this test never got passed > since bean nexmlService and rdfaService were added to the config file. As > you sald the test class need a nexusService bean for mesquite test, and the > base class try to autowire a NexusService interface type for it. but there > are two other beans also implement the same interface. > > Youjun > > > > > On Mon, Jan 11, 2010 at 5:02 AM, Rutger Vos <rut...@gm...> wrote: > >> > No. 7 Spring complained: "There are 3 beans of type >> > [org.cipres.treebase.domain.nexus.NexusService] available for autowiring >> by >> > type: [nexusService, nexmlService, rdfaService]." Spring don't know >> which >> > one to use. Do we need all of them in the config file? >> >> Only the nexusService uses mesquite. >> >> -- >> Dr. Rutger A. Vos >> School of Biological Sciences >> Philip Lyle Building, Level 4 >> University of Reading >> Reading >> RG6 6BX >> United Kingdom >> Tel: +44 (0) 118 378 7535 >> http://www.nexml.org >> http://rutgervos.blogspot.com >> > >
From: youjun g. <you...@gm...> - 2010-01-11 14:35:23
Thanks Rutger, While digging into No. 1, it looks most likely due to the MesquiteConverter failing to create a matrix from our test nexus file. Please also look at this one. Talking about No. 7, it looks to me that this test has never passed since the beans nexmlService and rdfaService were added to the config file. As you said, the test class needs a nexusService bean for the Mesquite test, and the base class tries to autowire a NexusService interface type for it, but there are two other beans that also implement the same interface. Youjun On Mon, Jan 11, 2010 at 5:02 AM, Rutger Vos <rut...@gm...> wrote: > > No. 7 Spring complained: "There are 3 beans of type > > [org.cipres.treebase.domain.nexus.NexusService] available for autowiring > by > > type: [nexusService, nexmlService, rdfaService]." Spring don't know which > > one to use. Do we need all of them in the config file? > > Only the nexusService uses mesquite. > > -- > Dr. Rutger A. Vos > School of Biological Sciences > Philip Lyle Building, Level 4 > University of Reading > Reading > RG6 6BX > United Kingdom > Tel: +44 (0) 118 378 7535 > http://www.nexml.org > http://rutgervos.blogspot.com >
From: Rutger V. <rut...@gm...> - 2010-01-11 10:53:54
Just for context: study 22 is a dummy study that Mark Jason Dominus created when running the batch import java programs last time he ran them. The logic of the programs should have been that all dangling references to this study are updated, but apparently some of them were overlooked. In the same process, many dummy taxonlabels and taxonlabelsets with pointers to study 22 have been created - an issue which we have discussed elsewhere on the list. On Fri, Jan 8, 2010 at 6:28 PM, William Piel <wil...@ya...> wrote: > > On Jan 8, 2010, at 12:30 PM, Hilmar Lapp wrote: > >> two questions first: 1) Do you still have the original data file that would clearly prove that these treeblocks must be spurious? > > The 4,000+ orphaned treeblocks belong to study_id 22 which is not published, lacks a citation, and is owned by user "tb1" (which is for testing). Don't know how it acquired all these records. So no, I don't have any "original" data file, but then there is nothing original about this artifact. > >> and 2) why would we not delete only the spurious treeblocks and sub_treeblocks, rather than the entire study? > > It seemed easier to me to click once (delete study_id 22) rather than making 4,000+ mouse clicks to delete the treeblocks. > >> Note that your join below won't work the way I think you intend it to work > > thanks -- I'll check it over. It seemed to work (in that doing a count(treeblock_id) produced the correct number of records). > > bp -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com
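A cheap way to gauge how much still dangles off the dummy study is to count referencing rows per table. This is a sketch only: the sub_treeblock path is taken from Bill's query elsewhere in this thread, while the assumption that taxonlabel carries a study_id column is inferred from the discussion of dummy taxonlabels and would need checking against the actual schema:

```sql
-- Count rows still pointing (directly or via submission) at dummy study 22.
SELECT 'taxonlabel' AS source_table, count(*) AS rows_for_study_22
FROM taxonlabel
WHERE study_id = 22        -- assumed column name; verify against the schema
UNION ALL
SELECT 'sub_treeblock', count(*)
FROM sub_treeblock stb
JOIN submission sub ON (stb.submission_id = sub.submission_id)
WHERE sub.study_id = 22;
```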
From: Rutger V. <rut...@gm...> - 2010-01-11 10:02:16
> No. 7 Spring complained: "There are 3 beans of type > [org.cipres.treebase.domain.nexus.NexusService] available for autowiring by > type: [nexusService, nexmlService, rdfaService]." Spring don't know which > one to use. Do we need all of them in the config file? Only the nexusService uses mesquite. -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com |
From: William P. <wil...@ya...> - 2010-01-09 15:15:18
On Jan 8, 2010, at 6:12 PM, Hilmar Lapp wrote: > which are public whoops! good to know. bp |
From: Hilmar L. <hl...@ne...> - 2010-01-08 23:12:20
On Jan 8, 2010, at 2:31 PM, William Piel wrote: > > On Jan 8, 2010, at 2:01 PM, Hilmar Lapp wrote: > >> I agree, deleting the study should work. Did you submit it as a bug >> report? > > I'll do that. Give it a priority of "8" ? > > For those who want to test it: the user "tb1" (where study_id 22 > sits) has the password "tb1". -- I won't put the password in the bug > report, for obvious reasons. Well, it's now in the mailing list archives which are public, so you might as well put it into the bug report. Or change it ... -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |
From: William P. <wil...@ya...> - 2010-01-08 19:32:04
On Jan 8, 2010, at 2:01 PM, Hilmar Lapp wrote: > I agree, deleting the study should work. Did you submit it as a bug report? I'll do that. Give it a priority of "8" ? For those who want to test it: the user "tb1" (where study_id 22 sits) has the password "tb1". -- I won't put the password in the bug report, for obvious reasons. bp |
From: William P. <wil...@ya...> - 2010-01-08 19:27:56
On Jan 8, 2010, at 12:30 PM, Hilmar Lapp wrote:

> Or are there other tables that have a foreign key to treeblock

Yes -- the error is:

ERROR: update or delete on table "treeblock" violates foreign key constraint "fk94d50830bfd107c3" on table "sub_treeblock"
DETAIL: Key (treeblock_id)=(2848) is still referenced from table "sub_treeblock".

so... another approach is to first delete the sub_treeblock record and then delete the treeblock record. Can I express that in a single delete query? I'm guessing not. But if I do it in two delete queries, deleting sub_treeblock first will cause treeblock to lose the connection with the study table (all this hangs off of study_id = 22). Which is why I'm thinking of building a big list of treeblock_ids, and then running two delete queries:

1. Get [big list] like so:

SELECT tb.treeblock_id
FROM study st
JOIN submission sub ON (st.study_id = sub.study_id)
JOIN sub_treeblock stb ON (sub.submission_id = stb.submission_id)
JOIN treeblock tb ON (stb.treeblock_id = tb.treeblock_id)
LEFT JOIN phylotree pt ON (pt.treeblock_id = tb.treeblock_id)
WHERE pt.phylotree_id IS NULL
AND st.study_id = 22

The result is a list of 4492 distinct treeblock_ids -- reformat the result as comma-separated numbers.

2. First delete from the table that references treeblock.treeblock_id:

DELETE FROM sub_treeblock WHERE treeblock_id IN ( [big list] );

3. Then delete from treeblock:

DELETE FROM treeblock WHERE treeblock_id IN ( [big list] );

How does that sound?

bp
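Bill's two-step plan can be carried out without pasting a comma-separated list by hand: materialize the id list once inside a transaction and let both DELETEs read it. A sketch only, reusing the table and column names from his query; it has not been run against the TreeBASE schema:

```sql
BEGIN;

-- Materialize the orphan list once, so both DELETEs act on exactly the same ids.
CREATE TEMP TABLE orphan_treeblock AS
SELECT DISTINCT tb.treeblock_id
FROM study st
JOIN submission sub ON (st.study_id = sub.study_id)
JOIN sub_treeblock stb ON (sub.submission_id = stb.submission_id)
JOIN treeblock tb ON (stb.treeblock_id = tb.treeblock_id)
LEFT JOIN phylotree pt ON (pt.treeblock_id = tb.treeblock_id)
WHERE pt.phylotree_id IS NULL
  AND st.study_id = 22;

-- Delete the referencing rows first, then the rows they reference,
-- so the foreign key from sub_treeblock to treeblock is never violated.
DELETE FROM sub_treeblock WHERE treeblock_id IN (SELECT treeblock_id FROM orphan_treeblock);
DELETE FROM treeblock WHERE treeblock_id IN (SELECT treeblock_id FROM orphan_treeblock);

DROP TABLE orphan_treeblock;
COMMIT;
```

Doing both deletes inside one transaction also means the whole operation can be rolled back if the reported row counts look wrong before COMMIT.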
From: youjun g. <you...@gm...> - 2010-01-08 19:16:22
Dear TreeBASEer, Here is the current situation with the TreeBASE tests; there are still 7 failures left:

1. testCreateRowSegments(org.cipres.treebase.service.matrix.RowSegmentServiceImplTest)
2. testAddDelete(org.cipres.treebase.service.matrix.MatrixServiceImplTest)
3. testCreateDelete(org.cipres.treebase.dao.tree.PhyloTreeDAOTest)
4. testFindTreeBlocksByNexusFileName(org.cipres.treebase.dao.tree.PhyloTreeDAOTest)
5. testLoadPhyloDataSet(org.cipres.treebase.service.nexus.NexusParserTest)
6. testAddDelete(org.cipres.treebase.service.study.StudyServiceImplTest)
7. testMesqutieFolderDir(org.cipres.treebase.domain.nexus.NexusMesquiteDataSetTest)

Right now I am working on No. 1. Nos. 2, 3, and 6 are related to the hibernate_sequence issue; I expect they will disappear once we solve that problem. No. 4 is related to the broken foreign key that Bill mentioned in his email and should be solved by cleaning the table. Rutger, please check Nos. 5 and 7; they are both related to Mesquite. No. 5: the NexusParserTest calls into org.cipres.datatypes.PhyloDataset.initializeFromStringOrFile and it throws back a null pointer exception; this class is out of the scope of TreeBASE. No. 7: Spring complained: "There are 3 beans of type [org.cipres.treebase.domain.nexus.NexusService] available for autowiring by type: [nexusService, nexmlService, rdfaService]." Spring doesn't know which one to use. Do we need all of them in the config file? Youjun
From: Hilmar L. <hl...@ne...> - 2010-01-08 19:01:31
I agree, deleting the study should work. Did you submit it as a bug report? -hilmar Sent from away On Jan 8, 2010, at 1:28 PM, William Piel <wil...@ya...> wrote: > > On Jan 8, 2010, at 12:30 PM, Hilmar Lapp wrote: > >> two questions first: 1) Do you still have the original data file >> that would clearly prove that these treeblocks must be spurious? > > The 4,000+ orphaned treeblocks belong to study_id 22 which is not > published, lacks a citation, and is owned by user "tb1" (which is > for testing). Don't know how it acquired all these records. So no, I > don't have any "original" data file, but then there is nothing > original about this artifact. > >> and 2) why would we not delete only the spurious treeblocks and >> sub_treeblocks, rather than the entire study? > > It seemed easier to me to click once (delete study_id 22) rather > than making 4,000+ mouse clicks to delete the treeblocks. > >> Note that your join below won't work the way I think you intend it >> to work > > thanks -- I'll check it over. It seemed to work (in that doing a > count(treeblock_id) produced the correct number of records). > > bp
From: William P. <wil...@ya...> - 2010-01-08 18:28:51
On Jan 8, 2010, at 12:30 PM, Hilmar Lapp wrote: > two questions first: 1) Do you still have the original data file that would clearly prove that these treeblocks must be spurious? The 4,000+ orphaned treeblocks belong to study_id 22 which is not published, lacks a citation, and is owned by user "tb1" (which is for testing). Don't know how it acquired all these records. So no, I don't have any "original" data file, but then there is nothing original about this artifact. > and 2) why would we not delete only the spurious treeblocks and sub_treeblocks, rather than the entire study? It seemed easier to me to click once (delete study_id 22) rather than making 4,000+ mouse clicks to delete the treeblocks. > Note that your join below won't work the way I think you intend it to work thanks -- I'll check it over. It seemed to work (in that doing a count(treeblock_id) produced the correct number of records). bp
From: Hilmar L. <hl...@ne...> - 2010-01-08 17:30:47
Bill - two questions first: 1) Do you still have the original data file that would clearly prove that these treeblocks must be spurious? and 2) why would we not delete only the spurious treeblocks and sub_treeblocks, rather than the entire study? Are you suspicious that the entire study is corrupt in the way it is represented in the database? Note that your join below won't work the way I think you intend it to work. Either put the outer join last, or parenthesize the series of inner joins that you join with. If the WHERE clause below were really correct (i.e., only hit treeblock records that do not have any phylotree dependent on them) then you should not get any foreign key violation, should you? Or are there other tables that have a foreign key to treeblock, and which have rows pointing to the so-called orphan treeblocks (which might not be so orphan after all then?). -hilmar On Jan 8, 2010, at 11:15 AM, William Piel wrote: > > One of the unit tests that is giving Youjun trouble is one that checks for orphans (or in this case "childless" records) in the following chain of tables: > > study --- submission --- sub_treeblock --- treeblock >-- phylotree > > Frankly, I'm not clear why we have sub_* tables to begin with, but it turns out that there are about 4,500 orphaned treeblocks (i.e. treeblock records that lack any related phylotree records) that each have a sub_treeblock id, and almost all of them belong to a submission that belongs to a study with study_id 22. So for some strange reason, study_id 22 has thousands of sub_treeblock records that each have a treeblock record, yet they lack phylotrees. Looks to me like these were created as part of Mark Jason's last migration effort -- probably purely spurious. > > I tried deleting these simply by deleting study_id 22, but after many hours that resulted in the "data access failure" (below). Don't know if anyone has some ideas re. why this doesn't work.
> The other approach is to run a SQL query like so:
>
> DELETE FROM treeblock WHERE treeblock_id IN (
>   SELECT tb.treeblock_id
>   FROM phylotree pt RIGHT JOIN treeblock tb ON (pt.treeblock_id = tb.treeblock_id)
>   JOIN sub_treeblock stb ON (tb.treeblock_id = stb.treeblock_id)
>   JOIN submission sub ON (stb.submission_id = sub.submission_id)
>   JOIN study st ON (sub.study_id = st.study_id)
>   WHERE pt.phylotree_id IS NULL
>   AND st.study_id = 22
> )
>
> ... but the trouble is that foreign key constraints don't allow these to be deleted (and postgres does not support CASCADE DELETE unless the tables are altered to do this automatically). One way around this would be to first build a list of orphaned sub_treeblock, treeblock, etc, records, and then delete them from the list of IDs flowing with the cascade.
>
> Anyway... I wanted to touch base with everyone in case there's an easy way to do this the proper way -- i.e. to have our java code & hibernate delete study_id 22 without running into the "DataIntegrityViolationException" below.
>
> bp
>
> [... quoted "Data Access Failure" stack trace elided; the full trace is in the original message below ...]

-- 
===========================================================
: Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org :
===========================================================
|
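[Editor's sketch: Hilmar's "put the outer join last" advice would turn the inner SELECT of Bill's query into something like the following. This is an untested, equivalent rewrite using only the table and column names that appear in the thread; with the outer join moved to the end, the inner joins bind first and the LEFT JOIN to phylotree can no longer be swallowed by a later inner join.]

```sql
-- Same orphan check, outer join last: treeblocks under study 22
-- that have no phylotree row survive with pt.phylotree_id = NULL.
SELECT tb.treeblock_id
FROM treeblock tb
JOIN sub_treeblock stb ON (tb.treeblock_id = stb.treeblock_id)
JOIN submission sub ON (stb.submission_id = sub.submission_id)
JOIN study st ON (sub.study_id = st.study_id)
LEFT JOIN phylotree pt ON (pt.treeblock_id = tb.treeblock_id)
WHERE pt.phylotree_id IS NULL
  AND st.study_id = 22;
```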
From: William P. <wil...@ya...> - 2010-01-08 16:15:41
|
One of the unit tests that is giving Youjun trouble is one that checks for orphans (or in this case "childless" records) in the following chain of tables: study --- submission --- sub_treeblock --- treeblock >-- phylotree Frankly, I'm not clear why we have sub_* tables to begin with, but it turns out that there are about 4,500 orphaned treeblocks (i.e. treeblock records that lack any related phylotree records) that each have a sub_treeblock id, and almost all of them belong to a submission that belongs to a study with study_id 22. So for some strange reason, study_id 22 has thousands of sub_treeblock records that each have a treeblock record, yet they lack phylotrees. Looks to me like these were created as part of Mark Jason's last migration effort -- probably purely spurious. I tried deleting these simply by deleting study_id 22, but after many hours that resulted in the "data access failure" (below). Don't know if anyone has some ideas re. why they doesn't work. The other approach is to run a SQL query like so: DELETE FROM treeblock WHERE treeblock_id IN ( SELECT tb.treeblock_id FROM phylotree pt RIGHT JOIN treeblock tb ON (pt.treeblock_id = tb.treeblock_id) JOIN sub_treeblock stb ON (tb.treeblock_id = stb.treeblock_id) JOIN submission sub ON (stb.submission_id = sub.submission_id) JOIN study st ON (sub.study_id = st.study_id) WHERE pt.phylotree_id IS NULL AND st.study_id = 22 ) ... but the trouble is that foreign key constraints don't allow these to be deleted (and postgres does not support CASCADE DELETE unless the tables are altered to do this automatically). One way around this would be to first build a list of orphaned sub_treeblock, treeblock, etc, records, and then delete them from the list of IDs flowing with the cascade. Anyway... I wanted to touch base with everyone in case there's an easy way to do this the proper way -- i.e. to have our java code & hibernate delete study_id 22 without running into the "DataIntegrityViolationException" below. 
bp Data Access Failure could not delete: [org.cipres.treebase.domain.taxon.TaxonLabel#67521]; nested exception is org.hibernate.exception.ConstraintViolationException: could not delete: [org.cipres.treebase.domain.taxon.TaxonLabel#67521] org.springframework.dao.DataIntegrityViolationException: could not delete: [org.cipres.treebase.domain.taxon.TaxonLabel#67521]; nested exception is org.hibernate.exception.ConstraintViolationException: could not delete: [org.cipres.treebase.domain.taxon.TaxonLabel#67521] Caused by: org.hibernate.exception.ConstraintViolationException: could not delete: [org.cipres.treebase.domain.taxon.TaxonLabel#67521] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:71) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43) at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2546) at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2702) at org.hibernate.action.EntityDeleteAction.execute(EntityDeleteAction.java:77) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:172) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000) at org.springframework.orm.hibernate3.HibernateTemplate$27.doInHibernate(HibernateTemplate.java:811) at org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:372) at org.springframework.orm.hibernate3.HibernateTemplate.flush(HibernateTemplate.java:809) at org.cipres.treebase.dao.AbstractDAO.flush(AbstractDAO.java:158) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:304) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:198) at $Proxy50.flush(Unknown Source) at org.cipres.treebase.service.study.SubmissionServiceImpl.deleteSubmission(SubmissionServiceImpl.java:428) at org.cipres.treebase.service.study.SubmissionServiceImpl.deleteSubmission(SubmissionServiceImpl.java:900) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:304) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy80.deleteSubmission(Unknown Source) at org.cipres.treebase.web.controllers.DeleteStudyController.onSubmit(DeleteStudyController.java:71) at org.springframework.web.servlet.mvc.SimpleFormController.processFormSubmission(SimpleFormController.java:267) at org.springframework.web.servlet.mvc.CancellableFormController.processFormSubmission(CancellableFormController.java:140) at 
org.springframework.web.servlet.mvc.AbstractFormController.handleRequestInternal(AbstractFormController.java:265) at org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:153) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:858) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:792) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:476) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:441) at javax.servlet.http.HttpServlet.service(HttpServlet.java:710) at javax.servlet.http.HttpServlet.service(HttpServlet.java:803) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.displaytag.filter.ResponseOverrideFilter.doFilter(ResponseOverrideFilter.java:125) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.springframework.orm.hibernate3.support.OpenSessionInViewFilter.doFilterInternal(OpenSessionInViewFilter.java:198) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:75) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at com.opensymphony.module.sitemesh.filter.PageFilter.parsePage(PageFilter.java:119) at com.opensymphony.module.sitemesh.filter.PageFilter.doFilter(PageFilter.java:55) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215) at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:264) at org.acegisecurity.intercept.web.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:107) at org.acegisecurity.intercept.web.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:72) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274) at org.acegisecurity.ui.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:110) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274) at org.acegisecurity.wrapper.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:81) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274) at org.acegisecurity.ui.AbstractProcessingFilter.doFilter(AbstractProcessingFilter.java:217) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274) at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:191) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274) at org.acegisecurity.util.FilterChainProxy.doFilter(FilterChainProxy.java:148) at org.acegisecurity.util.FilterToBeanProxy.doFilter(FilterToBeanProxy.java:90) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:210) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:151) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:870) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685) at java.lang.Thread.run(Thread.java:636)
Caused by: java.sql.BatchUpdateException: Batch entry 13 delete from TAXONLABEL where TAXONLABEL_ID=71834 and VERSION=2 was aborted. Call getNextException to see the cause. at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2537) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1328) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:351) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2674) at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeBatch(NewProxyPreparedStatement.java:1723) at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:48) at org.hibernate.jdbc.BatchingBatcher.addToBatch(BatchingBatcher.java:34) at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2525) ... 85 more |
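[Editor's sketch: two Postgres queries related to the constraint problem above. The first answers Hilmar's question about which other tables hold foreign keys into treeblock; the second shows how a constraint could be re-declared with ON DELETE CASCADE, which is what Bill means by "unless the tables are altered to do this automatically". The constraint name `sub_treeblock_treeblock_fkey` is hypothetical; use the real name reported by the first query (or by `\d sub_treeblock` in psql). Both are untested against the live TreeBASE schema.]

```sql
-- 1) List every table with a foreign key referencing treeblock
--    (contype 'f' marks foreign-key constraints in pg_constraint):
SELECT conrelid::regclass AS referencing_table, conname
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'treeblock'::regclass;

-- 2) Re-declare one such constraint so that deleting a treeblock
--    also deletes the dependent sub_treeblock rows.
--    Constraint name below is hypothetical; substitute the real one.
ALTER TABLE sub_treeblock DROP CONSTRAINT sub_treeblock_treeblock_fkey;
ALTER TABLE sub_treeblock
  ADD CONSTRAINT sub_treeblock_treeblock_fkey
  FOREIGN KEY (treeblock_id) REFERENCES treeblock (treeblock_id)
  ON DELETE CASCADE;
```

Note also that the BatchUpdateException at the bottom of the trace says "Call getNextException to see the cause": the chained SQLException from the Postgres driver would name the exact constraint that blocked the TAXONLABEL delete.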