From: William P. <wil...@ya...> - 2009-12-09 06:52:48
|
On Dec 8, 2009, at 8:01 PM, Hilmar Lapp wrote: > Bill - do you have a ballpark estimate? That's hard to do -- but I don't think I'll be holding anything up if Vladimir is going to tackle these one or two other issues first. > Is there room for further automation with additional scripting? I don't think so. > Should we look into possibilities for hiring some help for this curation activity? I have workers I can solicit if needed -- but seeing as I'm in Santa Barbara this week, that's not realistic. Rather, this will just be my evening entertainment... bp |
From: Hilmar L. <hl...@ne...> - 2009-12-09 04:02:12
|
On Dec 8, 2009, at 6:28 PM, William Piel wrote: > The bad news is that I have to map 30,000 names against uBio -- a > procedure that has scripts to assist me, but that still requires > inspection of every mapping. So, it will take some time Bill - do you have a ballpark estimate? It sounds like this has the potential to be the rate-limiting factor for pinning down a release target date. Is there room for further automation with additional scripting? Should we look into possibilities for hiring some help for this curation activity? Is there a way we can harness the TB2 machinery for taxon name resolution? -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |
From: William P. <wil...@ya...> - 2009-12-08 23:29:02
|
On Dec 8, 2009, at 4:44 AM, Rutger Vos wrote: > Either Password hashing or referee access. Either of these would be great. If the latter, let's chat because I'd like to develop a clearer sense of how that will be implemented. > Besides, doing the last import requires some amount of pre-processing on the part of Bill Piel. I've started working on the pre-processing. Since Jan 09, TreeBASE has acquired about 30,000 distinct taxon labels that it hasn't seen before. So the good news is that submissions to TreeBASE have proceeded at a healthy pace. The bad news is that I have to map 30,000 names against uBio -- a procedure that has scripts to assist me, but that still requires inspection of every mapping. So, it will take some time -- hence better for you to start with the password hashing or referee access. Bill |
From: Hilmar L. <hl...@ne...> - 2009-12-08 22:19:57
|
On Dec 8, 2009, at 4:44 AM, Rutger Vos wrote: >> The high-priority items marked with my name in the release plan >> (google doc) are: >> - TreeBase: TB1 --> TB2 import >> - TreeBase: Password hashing [TB 2797430] >> - TreeBase: Reviewer/Referee Access [TB 2826165] >> My feeling is that the 1st of them (TB1 --> TB2 import) is the most >> pressing, but that the other two could be easier to understand. >> Which one would make the most sense to start with? > > Either Password hashing or referee access. I know that TB1->TB2 import > might seem the most pressing on the surface It sounds to me like password hashing and reviewer access are best for getting your feet (hands?) wet with the code. We do need a dated plan for the TB1->TB2 data import though, as that's I think what will determine when we can go public. Also, my sense is that there are unknown factors in that (untested scripts and Java code that used to need a lot of post-migration massaging), so the sooner we can shed light on that the more confident we can be. Does that sound right? -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |
From: Hilmar L. <hl...@ne...> - 2009-12-08 22:16:01
|
On Dec 8, 2009, at 11:33 AM, Rutger Vos wrote: > it'd be easiest for us if we come up with a way for us to share that > setup. A treebase-env.sh shell script maybe that's somewhere under /etc? Or a /home/treebase/treebase-env.sh script? There's surely ways to do this according to best practices. Jon, do you want to advise from sysadmin point here? -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |
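A minimal sketch of what such a shared environment script might contain, assuming it lives at /home/treebase/treebase-env.sh as suggested above — every path and variable below is a placeholder for illustration, not the actual treebase-dev configuration:

    # /home/treebase/treebase-env.sh -- hypothetical shared developer environment
    # all install locations below are assumed and would need adjusting
    export JAVA_HOME=/usr/lib/jvm/java-6-sun
    export CATALINA_HOME=/usr/local/tomcat
    export TREEBASE_HOME=/home/treebase/treebase
    export PATH="$TREEBASE_HOME/bin:$PATH"

Each developer account could then source /home/treebase/treebase-env.sh from its own ~/.bashrc, so everyone picks up the same paths without duplicating the setup per account.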
From: Hilmar L. <hl...@ne...> - 2009-12-08 17:52:36
|
On Dec 8, 2009, at 10:08 AM, Rutger Vos wrote: > all you have to do is issue 'publish' on the command line Can we name that publish-treebase instead please? :-) -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |
From: Rutger V. <rut...@gm...> - 2009-12-08 16:33:34
|
> Rutger, I don't get this yet. The treebase@treebase-dev account (for > which I've got the credentials from Jon) does not seem to have any > setup like this. Setting up each of our treebase-dev accounts like > this would be duplication of effort (I am doing that on my Mac > already). I suspect you had in mind ssh'ing to rvosa@treebase-dev.... Yeah, I had that in mind. Not ideal because it's tied to my name, but it'd be easiest for us if we come up with a way for us to share that setup. -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com |
From: Vladimir G. <vga...@ne...> - 2009-12-08 16:16:26
|
On Dec 8, 2009, at 10:08 AM, Rutger Vos wrote: >> Rutger, I am going through the source code, and will have time >> working on >> the debugging of the TreeBASE code. The question is, do we need to >> build and >> re-deploy the war file to the production server every time we fix a >> bug? > > In principle that would be handy, so that others can see the > improvement right away when a ticket is closed. It's not that hard to > do though: when you ssh into treebasedb-dev.nescent.org all you have > to do is issue 'publish' on the command line (it's in the $PATH) and > it'll update the source, build the war, copy it into the tomcat > servlet folder and restart tomcat. Rutger, I don't get this yet. The treebase@treebase-dev account (for which I've got the credentials from Jon) does not seem to have any setup like this. Setting up each of our treebase-dev accounts like this would be duplication of effort (I am doing that on my Mac already). I suspect you had in mind ssh'ing to rvosa@treebase-dev.... --VG |
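For orientation, a hedged outline of the steps the 'publish' command is described as performing (update the source, build the war, copy it into the Tomcat servlet folder, restart Tomcat). The checkout path, build tool, and war filename are assumptions, not the contents of the actual script:

    # hypothetical outline of 'publish' on treebasedb-dev.nescent.org
    set -e
    cd /home/treebase/treebase-checkout       # assumed working-copy location
    svn update                                # pull latest source from SourceForge (svn assumed)
    mvn clean package                         # build system assumed to be Maven
    cp target/treebase-web.war "$CATALINA_HOME"/webapps/   # war name assumed
    "$CATALINA_HOME"/bin/shutdown.sh
    "$CATALINA_HOME"/bin/startup.sh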
From: Rutger V. <rut...@gm...> - 2009-12-08 15:58:51
|
Yup, that is exactly what it does. On Tue, Dec 8, 2009 at 3:36 PM, youjun guo <you...@ya...> wrote: > That's great. > > Based my understand, correct me if I am wrong, the "publish" script will: > > 1. download the new version of the source code form sourceforge; > 2. build the war > 3. re-deploy > > Youjun > > On Tue, Dec 8, 2009 at 10:08 AM, Rutger Vos <rut...@gm...> wrote: >> >> > Rutger, I am going through the source code, and will have time working >> > on >> > the debugging of the TreeBASE code. The question is, do we need to build >> > and >> > re-deploy the war file to the production server every time we fix a bug? >> >> In principle that would be handy, so that others can see the >> improvement right away when a ticket is closed. It's not that hard to >> do though: when you ssh into treebasedb-dev.nescent.org all you have >> to do is issue 'publish' on the command line (it's in the $PATH) and >> it'll update the source, build the war, copy it into the tomcat >> servlet folder and restart tomcat. >> >> > On Tue, Dec 8, 2009 at 4:44 AM, Rutger Vos <rut...@gm...> wrote: >> >> >> >> Dear Vladimir, >> >> >> >> thanks for taking the initiative on this, welcome aboard! >> >> >> >> > The high-priority items marked with my name in the release plan >> >> > (google doc) are: >> >> > - TreeBase: TB1 --> TB2 import >> >> > - TreeBase: Password hashing [TB 2797430] >> >> > - TreeBase: Reviewer/Referee Access [TB 2826165] >> >> > My feeling is that the 1st of them (TB1 --> TB2 import) is the most >> >> > pressing, but that the other two could be easier to understand. >> >> > Which one would make the most sense to start with? >> >> >> >> Either Password hashing or referee access. I know that TB1->TB2 import >> >> might seem the most pressing on the surface, but that item is actually >> >> only about 2009 submissions, we have all the data from preceding >> >> years. Besides, doing the last import requires some amount of >> >> pre-processing on the part of Bill Piel. >> >> >> >> > In any case, I'd need 1-on-1 help (Skype?) with someone who can show >> >> > me the ropes as well as explain in some detail what needs to be done. >> >> > Rutger? >> >> >> >> I'd prefer we make initial contact through gmail chat. I always have a >> >> tab with gmail open in my browser, so you can always ping me that way >> >> (user: rutgeraldo). If you want to hear my voice we can just take it >> >> from there, either through google voice chat or skype or ichat. >> >> >> >> > So far, I have scanned through all the wiki pages accessible from the >> >> > root. My next step will be setting up a development environment as >> >> > outlined on >> >> > >> >> > https://sourceforge.net/apps/mediawiki/treebase/index.php?title=DeveloperEnvironment >> >> > Can someone clarify whether I will need to set up a local DB and a >> >> > local web server for development purposes or whether I should use >> >> > those at treebase-dev.nescent.org ? >> >> >> >> You don't need to set up a local DB (I can give you the JDBC >> >> connection credentials for the nescent instance off-list), but it's >> >> probably handy if you do set up a local tomcat, either integrated in >> >> eclipse or to launch using the publish script that we also use for >> >> building and deploying on treebasedb-dev.nescent.org >> >> >> >> > Do developers have application accounts, with varying privileges, for >> >> > playing and testing? I did auto-register for an account (vgapeyev), >> >> > which got a "User" role. >> >> >> >> Yes, it's handy to have one of those. 
You will probably want to be >> >> able to upgrade and downgrade that account once in a while to see the >> >> User view versus the Reviewer or Admin view. >> >> >> >> > Is there a pre-laid DB diagram somewhere, or any other documentation >> >> > on the DB and application design and architecture? >> >> >> >> Not at this point. There are old ER diagrams of the schema but they >> >> will probably be more confusing than helpful at this point, being out >> >> of date. What you can do is install Aqua Data Studio >> >> (http://www.aquafold.com/) and use that to create new diagrams, >> >> explore the tables in other ways and run queries from your desktop. >> >> >> >> Rutger >> >> >> >> -- >> >> Dr. Rutger A. Vos >> >> School of Biological Sciences >> >> Philip Lyle Building, Level 4 >> >> University of Reading >> >> Reading >> >> RG6 6BX >> >> United Kingdom >> >> Tel: +44 (0) 118 378 7535 >> >> http://www.nexml.org >> >> http://rutgervos.blogspot.com >> >> >> >> >> >> >> >> ------------------------------------------------------------------------------ >> >> Return on Information: >> >> Google Enterprise Search pays you back >> >> Get the facts. >> >> http://p.sf.net/sfu/google-dev2dev >> >> _______________________________________________ >> >> Treebase-devel mailing list >> >> Tre...@li... >> >> https://lists.sourceforge.net/lists/listinfo/treebase-devel >> > >> > >> >> >> >> -- >> Dr. Rutger A. Vos >> School of Biological Sciences >> Philip Lyle Building, Level 4 >> University of Reading >> Reading >> RG6 6BX >> United Kingdom >> Tel: +44 (0) 118 378 7535 >> http://www.nexml.org >> http://rutgervos.blogspot.com > > -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com |
From: youjun g. <you...@ya...> - 2009-12-08 15:36:18
|
That's great. Based my understand, correct me if I am wrong, the "publish" script will: 1. download the new version of the source code form sourceforge; 2. build the war 3. re-deploy Youjun On Tue, Dec 8, 2009 at 10:08 AM, Rutger Vos <rut...@gm...> wrote: > > Rutger, I am going through the source code, and will have time working on > > the debugging of the TreeBASE code. The question is, do we need to build > and > > re-deploy the war file to the production server every time we fix a bug? > > In principle that would be handy, so that others can see the > improvement right away when a ticket is closed. It's not that hard to > do though: when you ssh into treebasedb-dev.nescent.org all you have > to do is issue 'publish' on the command line (it's in the $PATH) and > it'll update the source, build the war, copy it into the tomcat > servlet folder and restart tomcat. > > > On Tue, Dec 8, 2009 at 4:44 AM, Rutger Vos <rut...@gm...> wrote: > >> > >> Dear Vladimir, > >> > >> thanks for taking the initiative on this, welcome aboard! > >> > >> > The high-priority items marked with my name in the release plan > >> > (google doc) are: > >> > - TreeBase: TB1 --> TB2 import > >> > - TreeBase: Password hashing [TB 2797430] > >> > - TreeBase: Reviewer/Referee Access [TB 2826165] > >> > My feeling is that the 1st of them (TB1 --> TB2 import) is the most > >> > pressing, but that the other two could be easier to understand. > >> > Which one would make the most sense to start with? > >> > >> Either Password hashing or referee access. I know that TB1->TB2 import > >> might seem the most pressing on the surface, but that item is actually > >> only about 2009 submissions, we have all the data from preceding > >> years. Besides, doing the last import requires some amount of > >> pre-processing on the part of Bill Piel. > >> > >> > In any case, I'd need 1-on-1 help (Skype?) with someone who can show > >> > me the ropes as well as explain in some detail what needs to be done. > >> > Rutger? > >> > >> I'd prefer we make initial contact through gmail chat. I always have a > >> tab with gmail open in my browser, so you can always ping me that way > >> (user: rutgeraldo). If you want to hear my voice we can just take it > >> from there, either through google voice chat or skype or ichat. > >> > >> > So far, I have scanned through all the wiki pages accessible from the > >> > root. My next step will be setting up a development environment as > >> > outlined on > >> > > https://sourceforge.net/apps/mediawiki/treebase/index.php?title=DeveloperEnvironment > >> > Can someone clarify whether I will need to set up a local DB and a > >> > local web server for development purposes or whether I should use > >> > those at treebase-dev.nescent.org ? > >> > >> You don't need to set up a local DB (I can give you the JDBC > >> connection credentials for the nescent instance off-list), but it's > >> probably handy if you do set up a local tomcat, either integrated in > >> eclipse or to launch using the publish script that we also use for > >> building and deploying on treebasedb-dev.nescent.org > >> > >> > Do developers have application accounts, with varying privileges, for > >> > playing and testing? I did auto-register for an account (vgapeyev), > >> > which got a "User" role. > >> > >> Yes, it's handy to have one of those. You will probably want to be > >> able to upgrade and downgrade that account once in a while to see the > >> User view versus the Reviewer or Admin view. 
> >> > >> > Is there a pre-laid DB diagram somewhere, or any other documentation > >> > on the DB and application design and architecture? > >> > >> Not at this point. There are old ER diagrams of the schema but they > >> will probably be more confusing than helpful at this point, being out > >> of date. What you can do is install Aqua Data Studio > >> (http://www.aquafold.com/) and use that to create new diagrams, > >> explore the tables in other ways and run queries from your desktop. > >> > >> Rutger > >> > >> -- > >> Dr. Rutger A. Vos > >> School of Biological Sciences > >> Philip Lyle Building, Level 4 > >> University of Reading > >> Reading > >> RG6 6BX > >> United Kingdom > >> Tel: +44 (0) 118 378 7535 > >> http://www.nexml.org > >> http://rutgervos.blogspot.com > >> > >> > >> > ------------------------------------------------------------------------------ > >> Return on Information: > >> Google Enterprise Search pays you back > >> Get the facts. > >> http://p.sf.net/sfu/google-dev2dev > >> _______________________________________________ > >> Treebase-devel mailing list > >> Tre...@li... > >> https://lists.sourceforge.net/lists/listinfo/treebase-devel > > > > > > > > -- > Dr. Rutger A. Vos > School of Biological Sciences > Philip Lyle Building, Level 4 > University of Reading > Reading > RG6 6BX > United Kingdom > Tel: +44 (0) 118 378 7535 > http://www.nexml.org > http://rutgervos.blogspot.com > |
From: Rutger V. <rut...@gm...> - 2009-12-08 15:08:17
|
> Rutger, I am going through the source code, and will have time working on > the debugging of the TreeBASE code. The question is, do we need to build and > re-deploy the war file to the production server every time we fix a bug? In principle that would be handy, so that others can see the improvement right away when a ticket is closed. It's not that hard to do though: when you ssh into treebasedb-dev.nescent.org all you have to do is issue 'publish' on the command line (it's in the $PATH) and it'll update the source, build the war, copy it into the tomcat servlet folder and restart tomcat. > On Tue, Dec 8, 2009 at 4:44 AM, Rutger Vos <rut...@gm...> wrote: >> >> Dear Vladimir, >> >> thanks for taking the initiative on this, welcome aboard! >> >> > The high-priority items marked with my name in the release plan >> > (google doc) are: >> > - TreeBase: TB1 --> TB2 import >> > - TreeBase: Password hashing [TB 2797430] >> > - TreeBase: Reviewer/Referee Access [TB 2826165] >> > My feeling is that the 1st of them (TB1 --> TB2 import) is the most >> > pressing, but that the other two could be easier to understand. >> > Which one would make the most sense to start with? >> >> Either Password hashing or referee access. I know that TB1->TB2 import >> might seem the most pressing on the surface, but that item is actually >> only about 2009 submissions, we have all the data from preceding >> years. Besides, doing the last import requires some amount of >> pre-processing on the part of Bill Piel. >> >> > In any case, I'd need 1-on-1 help (Skype?) with someone who can show >> > me the ropes as well as explain in some detail what needs to be done. >> > Rutger? >> >> I'd prefer we make initial contact through gmail chat. I always have a >> tab with gmail open in my browser, so you can always ping me that way >> (user: rutgeraldo). If you want to hear my voice we can just take it >> from there, either through google voice chat or skype or ichat. >> >> > So far, I have scanned through all the wiki pages accessible from the >> > root. My next step will be setting up a development environment as >> > outlined on >> > https://sourceforge.net/apps/mediawiki/treebase/index.php?title=DeveloperEnvironment >> > Can someone clarify whether I will need to set up a local DB and a >> > local web server for development purposes or whether I should use >> > those at treebase-dev.nescent.org ? >> >> You don't need to set up a local DB (I can give you the JDBC >> connection credentials for the nescent instance off-list), but it's >> probably handy if you do set up a local tomcat, either integrated in >> eclipse or to launch using the publish script that we also use for >> building and deploying on treebasedb-dev.nescent.org >> >> > Do developers have application accounts, with varying privileges, for >> > playing and testing? I did auto-register for an account (vgapeyev), >> > which got a "User" role. >> >> Yes, it's handy to have one of those. You will probably want to be >> able to upgrade and downgrade that account once in a while to see the >> User view versus the Reviewer or Admin view. >> >> > Is there a pre-laid DB diagram somewhere, or any other documentation >> > on the DB and application design and architecture? >> >> Not at this point. There are old ER diagrams of the schema but they >> will probably be more confusing than helpful at this point, being out >> of date. 
What you can do is install Aqua Data Studio >> (http://www.aquafold.com/) and use that to create new diagrams, >> explore the tables in other ways and run queries from your desktop. >> >> Rutger >> >> -- >> Dr. Rutger A. Vos >> School of Biological Sciences >> Philip Lyle Building, Level 4 >> University of Reading >> Reading >> RG6 6BX >> United Kingdom >> Tel: +44 (0) 118 378 7535 >> http://www.nexml.org >> http://rutgervos.blogspot.com >> >> >> ------------------------------------------------------------------------------ >> Return on Information: >> Google Enterprise Search pays you back >> Get the facts. >> http://p.sf.net/sfu/google-dev2dev >> _______________________________________________ >> Treebase-devel mailing list >> Tre...@li... >> https://lists.sourceforge.net/lists/listinfo/treebase-devel > > -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com |
From: youjun g. <you...@ya...> - 2009-12-08 14:55:30
|
Welcome on board Vladimir, Rutger, I am going through the source code, and will have time working on the debugging of the TreeBASE code. The question is, do we need to build and re-deploy the war file to the production server every time we fix a bug? Youjun On Tue, Dec 8, 2009 at 4:44 AM, Rutger Vos <rut...@gm...> wrote: > Dear Vladimir, > > thanks for taking the initiative on this, welcome aboard! > > > The high-priority items marked with my name in the release plan > > (google doc) are: > > - TreeBase: TB1 --> TB2 import > > - TreeBase: Password hashing [TB 2797430] > > - TreeBase: Reviewer/Referee Access [TB 2826165] > > My feeling is that the 1st of them (TB1 --> TB2 import) is the most > > pressing, but that the other two could be easier to understand. > > Which one would make the most sense to start with? > > Either Password hashing or referee access. I know that TB1->TB2 import > might seem the most pressing on the surface, but that item is actually > only about 2009 submissions, we have all the data from preceding > years. Besides, doing the last import requires some amount of > pre-processing on the part of Bill Piel. > > > In any case, I'd need 1-on-1 help (Skype?) with someone who can show > > me the ropes as well as explain in some detail what needs to be done. > > Rutger? > > I'd prefer we make initial contact through gmail chat. I always have a > tab with gmail open in my browser, so you can always ping me that way > (user: rutgeraldo). If you want to hear my voice we can just take it > from there, either through google voice chat or skype or ichat. > > > So far, I have scanned through all the wiki pages accessible from the > > root. My next step will be setting up a development environment as > > outlined on > https://sourceforge.net/apps/mediawiki/treebase/index.php?title=DeveloperEnvironment > > Can someone clarify whether I will need to set up a local DB and a > > local web server for development purposes or whether I should use > > those at treebase-dev.nescent.org ? > > You don't need to set up a local DB (I can give you the JDBC > connection credentials for the nescent instance off-list), but it's > probably handy if you do set up a local tomcat, either integrated in > eclipse or to launch using the publish script that we also use for > building and deploying on treebasedb-dev.nescent.org > > > Do developers have application accounts, with varying privileges, for > > playing and testing? I did auto-register for an account (vgapeyev), > > which got a "User" role. > > Yes, it's handy to have one of those. You will probably want to be > able to upgrade and downgrade that account once in a while to see the > User view versus the Reviewer or Admin view. > > > Is there a pre-laid DB diagram somewhere, or any other documentation > > on the DB and application design and architecture? > > Not at this point. There are old ER diagrams of the schema but they > will probably be more confusing than helpful at this point, being out > of date. What you can do is install Aqua Data Studio > (http://www.aquafold.com/) and use that to create new diagrams, > explore the tables in other ways and run queries from your desktop. > > Rutger > > -- > Dr. Rutger A. 
Vos > School of Biological Sciences > Philip Lyle Building, Level 4 > University of Reading > Reading > RG6 6BX > United Kingdom > Tel: +44 (0) 118 378 7535 > http://www.nexml.org > http://rutgervos.blogspot.com > > > ------------------------------------------------------------------------------ > Return on Information: > Google Enterprise Search pays you back > Get the facts. > http://p.sf.net/sfu/google-dev2dev > _______________________________________________ > Treebase-devel mailing list > Tre...@li... > https://lists.sourceforge.net/lists/listinfo/treebase-devel > |
From: Rutger V. <rut...@gm...> - 2009-12-08 09:44:39
|
Dear Vladimir, thanks for taking the initiative on this, welcome aboard! > The high-priority items marked with my name in the release plan > (google doc) are: > - TreeBase: TB1 --> TB2 import > - TreeBase: Password hashing [TB 2797430] > - TreeBase: Reviewer/Referee Access [TB 2826165] > My feeling is that the 1st of them (TB1 --> TB2 import) is the most > pressing, but that the other two could be easier to understand. > Which one would make the most sense to start with? Either Password hashing or referee access. I know that TB1->TB2 import might seem the most pressing on the surface, but that item is actually only about 2009 submissions, we have all the data from preceding years. Besides, doing the last import requires some amount of pre-processing on the part of Bill Piel. > In any case, I'd need 1-on-1 help (Skype?) with someone who can show > me the ropes as well as explain in some detail what needs to be done. > Rutger? I'd prefer we make initial contact through gmail chat. I always have a tab with gmail open in my browser, so you can always ping me that way (user: rutgeraldo). If you want to hear my voice we can just take it from there, either through google voice chat or skype or ichat. > So far, I have scanned through all the wiki pages accessible from the > root. My next step will be setting up a development environment as > outlined on https://sourceforge.net/apps/mediawiki/treebase/index.php?title=DeveloperEnvironment > Can someone clarify whether I will need to set up a local DB and a > local web server for development purposes or whether I should use > those at treebase-dev.nescent.org ? You don't need to set up a local DB (I can give you the JDBC connection credentials for the nescent instance off-list), but it's probably handy if you do set up a local tomcat, either integrated in eclipse or to launch using the publish script that we also use for building and deploying on treebasedb-dev.nescent.org > Do developers have application accounts, with varying privileges, for > playing and testing? I did auto-register for an account (vgapeyev), > which got a "User" role. Yes, it's handy to have one of those. You will probably want to be able to upgrade and downgrade that account once in a while to see the User view versus the Reviewer or Admin view. > Is there a pre-laid DB diagram somewhere, or any other documentation > on the DB and application design and architecture? Not at this point. There are old ER diagrams of the schema but they will probably be more confusing than helpful at this point, being out of date. What you can do is install Aqua Data Studio (http://www.aquafold.com/) and use that to create new diagrams, explore the tables in other ways and run queries from your desktop. Rutger -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com |
From: Vladimir G. <vga...@ne...> - 2009-12-07 22:40:12
|
Hi everyone, Just starting on the project, I am trying to figure out my way around, so here are a few disjoint questions. Thanks for any help! The high-priority items marked with my name in the release plan (google doc) are: - TreeBase: TB1 --> TB2 import - TreeBase: Password hashing [TB 2797430] - TreeBase: Reviewer/Referee Access [TB 2826165] My feeling is that the 1st of them (TB1 --> TB2 import) is the most pressing, but that the other two could be easier to understand. Which one would make the most sense to start with? In any case, I'd need 1-on-1 help (Skype?) with someone who can show me the ropes as well as explain in some detail what needs to be done. Rutger? So far, I have scanned through all the wiki pages accessible from the root. My next step will be setting up a development environment as outlined on https://sourceforge.net/apps/mediawiki/treebase/index.php?title=DeveloperEnvironment Can someone clarify whether I will need to set up a local DB and a local web server for development purposes or whether I should use those at treebase-dev.nescent.org ? Do developers have application accounts, with varying privileges, for playing and testing? I did auto-register for an account (vgapeyev), which got a "User" role. Is there a pre-laid DB diagram somewhere, or any other documentation on the DB and application design and architecture? Thanks, --Vladimir |
From: Hilmar L. <hl...@ne...> - 2009-12-07 17:44:36
|
On Dec 7, 2009, at 6:14 AM, Rutger Vos wrote: >> BTW once we've filled everything in I would like to move the table to >> the TreeBASE wiki (which is public). Since the motivation for not >> starting it there isn't to keep it private until then but rather ease >> of editing it, let me know if anyone would have a problem or concerns >> with the Google Doc being made public right away. > > Why the wiki? Can't we just stick them in the tracker with the > understanding that the highest priority (9) means pre-release > milestone? An issue tracker is not the same thing as a release plan, so they are not redundant at all to me. The tracker adds a lot of stuff that's important for tracking issues, but clutter for a release plan. There are things about a release plan that have little to do with tracking resolution of issues. I don't share your concern that these could go out-of-sync. We are not going to track resolution in the release plan (except to update when something is complete), and if tasks get added to or removed from critical for release-status we need to be more explicit about that than updating the priority in the tracker (which will remain obscure to most on the stakeholder team). -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |
From: Rutger V. <rut...@gm...> - 2009-12-07 11:14:15
|
> BTW once we've filled everything in I would like to move the table to > the TreeBASE wiki (which is public). Since the motivation for not > starting it there isn't to keep it private until then but rather ease > of editing it, let me know if anyone would have a problem or concerns > with the Google Doc being made public right away. Why the wiki? Can't we just stick them in the tracker with the understanding that the highest priority (9) means pre-release milestone? I'm worried about redundancy in where we put the TODO items and I feel that the wiki ought to consist of (develop/build/deploy/architecture) documentation, not time-sensitive, aspirational shopping lists. > On Dec 3, 2009, at 10:55 AM, hl...@ne... wrote: > >> Hi all, >> >> here's an initial draft of a release planning and prioritization >> document that we can use as a basis for our conference call. I >> seeded it with the tasks that Val sent and added a few more. I also >> put them into different categories to give us a sense of perspective >> in which areas action is needed. >> >> -hilmar > > -- > =========================================================== > : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : > =========================================================== > > > > > ------------------------------------------------------------------------------ > Join us December 9, 2009 for the Red Hat Virtual Experience, > a free event focused on virtualization and cloud computing. > Attend in-depth sessions from your desk. Your couch. Anywhere. > http://p.sf.net/sfu/redhat-sfdev2dev > _______________________________________________ > Treebase-devel mailing list > Tre...@li... > https://lists.sourceforge.net/lists/listinfo/treebase-devel > -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com |
From: Hilmar L. <hl...@ne...> - 2009-12-04 21:57:12
|
Rutger - in response to your (I assume?) suggestion: "Rather have this contingent on the [performance optimization], i.e. produce final performance numbers after optimization" I disagree with this somewhat. There are three main reasons to do the benchmarking: 1) is the NESCent-hosted instance so severely slower (overall or in specific areas) than the SDSC-hosted one so as to amount to a release show-stopper or serious user experience issue; 2) do any of the significant performance differences hint at schema bugs (such as missing indexes, or failure to collect row and column statistics); and 3) indicate scalability issues. None of these depend on optimization having been completed. That being said, we will be running another set of performance tests after optimization to have an assessment as to how much optimization helped with any of the above. So, if the tests you have run so far in your opinion suffice to answer the above 3 questions, we can table all remaining ones until we rerun the suite after optimization. Did you already have a plan for where to document the results? Does the TB2 wiki sound like a good place? -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |
From: Hilmar L. <hl...@ne...> - 2009-12-04 21:42:11
|
As a follow-up to our teleconference, here are our decisions and action items for the next conference call on Thursday next week. 1) We are going to table priority 2 and 3 items for now, unless they take no more than 2-3 hours to address. 2) Everyone please fill in effort estimates for the tasks in the tables that you are assigned to. If the ability to do so depends on better understanding the requirements first, please write that in. 3) Based on estimates, priority order, and your fractional allocation to work on TreeBASE, please propose feasible target dates for those tasks you are assigned to. 4) Tasks in the table that do not have a tracker item# yet should receive one (or more, if they decompose into multiple tracker items). This should be done by the assignee(s), and the item# recorded in the table. As a reminder, at next week's Thu conference call we will want to pin down a target date for the first TB2 public release. This will also help organize and time the logistics tasks necessary along the way (such as announcements, server deployment, language on the TB1 website, etc). BTW once we've filled everything in I would like to move the table to the TreeBASE wiki (which is public). Since the motivation for not starting it there isn't to keep it private until then but rather ease of editing it, let me know if anyone would have a problem or concerns with the Google Doc being made public right away. -hilmar On Dec 3, 2009, at 10:55 AM, hl...@ne... wrote: > Hi all, > > here's an initial draft of a release planning and prioritization > document that we can use as a basis for our conference call. I > seeded it with the tasks that Val sent and added a few more. I also > put them into different categories to give us a sense of perspective > in which areas action is needed. > > -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |
From: Rutger V. <rut...@gm...> - 2009-12-03 10:21:33
|
Hi, I've been trying to create some indices: 1. an index on the discretecharstate_id column in matrixelement 2. an index on the matrixrow_id column in matrixelement matrixelement is the largest table by far in the database. Each tuple is a cell in a character state matrix. When we construct a character state matrix - to serialize to nexus/nexml/rdf or to display in a browser - we need to fetch all the elements for a given matrix, organize them in rows (matrixrow) and columns (matrixcolumn) and look up their character state symbols (usually discretecharstate). All of the mentioned tables that matrixelement references - by foreign keys pointing to their respective primary keys - are also quite large, so looking all this stuff up takes a long time, as we know, empirically. It seemed sensible to try to speed this up by adding some indices: CREATE INDEX matrixelement_matrixcolumn_id_idx ON matrixelement(matrixcolumn_id); CREATE INDEX matrixelement_matrixrow_id_idx ON matrixelement(matrixrow_id); CREATE INDEX matrixelement_discretecharstate_id_idx ON matrixelement(discretecharstate_id); ...but some way into this we ran out of disk space. Can we get some more? Rutger -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com |
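Before adding more indices it may be worth checking how large matrixelement already is (including its existing indices) and how much disk remains. A rough sketch — the treebasedev database name matches the backup script quoted further down; the data-directory path is an assumption:

    # total on-disk size of matrixelement, indices included
    psql -U postgres -d treebasedev -c \
      "SELECT pg_size_pretty(pg_total_relation_size('matrixelement'));"
    # free space on the partition holding the Postgres data directory (path assumed)
    df -h /var/lib/pgsql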
From: Hilmar L. <hl...@du...> - 2009-12-02 14:48:48
|
Rutger - I would suggest that you post any backup requests that we want to rely on being able to restore from to the helpdesk at NESCent. This is one of our responsibilities as TreeBASE hosts, including being accountable if something goes wrong. -hilmar On Dec 2, 2009, at 6:36 AM, Rutger Vos wrote: > Hi, > > I am about to run a script that deletes orphaned taxonlabels from the > database. These labels were created during an intermediate step of > data bulk loading. They don't interfere with the integrity of genuine > records, but may have some effect on performance (though probably not > spectacular) so I want to get rid of them. Just to be on the safe > side, I want to dump the database first. Also, this might be a good > practice run if we want to release periodical database dumps to the > public (the way itis and ncbi do). So what would be the best way to do > this? Anyone have any experience running pg_dump? > > Rutger > > -- > Dr. Rutger A. Vos > School of Biological Sciences > Philip Lyle Building, Level 4 > University of Reading > Reading > RG6 6BX > United Kingdom > Tel: +44 (0) 118 378 7535 > http://www.nexml.org > http://rutgervos.blogspot.com > > ------------------------------------------------------------------------------ > Join us December 9, 2009 for the Red Hat Virtual Experience, > a free event focused on virtualization and cloud computing. > Attend in-depth sessions from your desk. Your couch. Anywhere. > http://p.sf.net/sfu/redhat-sfdev2dev > _______________________________________________ > Treebase-devel mailing list > Tre...@li... > https://lists.sourceforge.net/lists/listinfo/treebase-devel -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- hlapp at duke dot edu : =========================================================== |
From: Jon A. <jon...@du...> - 2009-12-02 12:05:10
|
Rutger, I've got a cron job that dumps the database every night. I keep a weeks worth of backups on the server, and a months worth on our backup server. You can find the backups in the '/backups' directory. The backup script is: # backup all the roles with password hashes pg_dumpall -U postgres -w -r -f /backups/roles.`date +%a`.sql # backup the treebase database in custom (binary) format pg_dump -U postgres -F c -f /backups/treebasedev`date +%a`.custom.dump treebasedev I could have used pg_dumpall just as easily, but I prefer to have granularity in my restores. You've got three choices for dump format: custom, tar, or plain text The reason I no longer do plain txt is that I've had issues restoring some of the plain text dumps. Sometimes postgres will try to restore the sequences before the corresponding tables and the whole restore fails. With that being said, we should run a restore before you run your script, just to make sure it works. When I get in today, I'll try restoring it on another server, since the treebasedb-dev server is rather bogged down at the moment. -Jon On Dec 2, 2009, at 6:36 AM, Rutger Vos wrote: > Hi, > > I am about to run a script that deletes orphaned taxonlabels from the > database. These labels were created during an intermediate step of > data bulk loading. They don't interfere with the integrity of genuine > records, but may have some effect on performance (though probably not > spectacular) so I want to get rid of them. Just to be on the safe > side, I want to dump the database first. Also, this might be a good > practice run if we want to release periodical database dumps to the > public (the way itis and ncbi do). So what would be the best way to do > this? Anyone have any experience running pg_dump? > > Rutger > > -- > Dr. Rutger A. Vos > School of Biological Sciences > Philip Lyle Building, Level 4 > University of Reading > Reading > RG6 6BX > United Kingdom > Tel: +44 (0) 118 378 7535 > http://www.nexml.org > http://rutgervos.blogspot.com > > ------------------------------------------------------------------------------ > Join us December 9, 2009 for the Red Hat Virtual Experience, > a free event focused on virtualization and cloud computing. > Attend in-depth sessions from your desk. Your couch. Anywhere. > http://p.sf.net/sfu/redhat-sfdev2dev > _______________________________________________ > Treebase-devel mailing list > Tre...@li... > https://lists.sourceforge.net/lists/listinfo/treebase-devel ------------------------------------------------------- Jon Auman Systems Administrator National Evolutionary Synthesis Center Duke University http:www.nescent.org jon...@ne... ------------------------------------------------------ |
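A test restore of the custom-format dump into a scratch database could look roughly like this — a sketch only, with the dump filename following the date-suffixed naming in the script above and the table name assumed:

    # create a throwaway database and restore the latest custom-format dump into it
    createdb -U postgres treebasedev_restore_test
    pg_restore -U postgres -d treebasedev_restore_test /backups/treebasedevMon.custom.dump
    # spot-check a table (name assumed), then drop the scratch database
    psql -U postgres -d treebasedev_restore_test -c "SELECT count(*) FROM taxonlabel;"
    dropdb -U postgres treebasedev_restore_test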
From: Rutger V. <rut...@gm...> - 2009-12-02 11:36:29
|
Hi, I am about to run a script that deletes orphaned taxonlabels from the database. These labels were created during an intermediate step of data bulk loading. They don't interfere with the integrity of genuine records, but may have some effect on performance (though probably not spectacular) so I want to get rid of them. Just to be on the safe side, I want to dump the database first. Also, this might be a good practice run if we want to release periodical database dumps to the public (the way itis and ncbi do). So what would be the best way to do this? Anyone have any experience running pg_dump? Rutger -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com |
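A one-off safety dump before running the cleanup could mirror the nightly backup command shown in Jon's reply above; the filename here is only an illustrative choice:

    # hypothetical pre-cleanup dump in Postgres custom format
    pg_dump -U postgres -F c -f /backups/treebasedev.pre-taxonlabel-cleanup.custom.dump treebasedev
    # quick sanity check: list the archive's table of contents
    pg_restore -l /backups/treebasedev.pre-taxonlabel-cleanup.custom.dump | head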
From: Rutger V. <rut...@gm...> - 2009-12-02 10:02:50
|
Still works for me. Will this be a call hosted by NESCent? Will you provide call-in details? (Note that I forwarded this message to Tre...@li... - I think we should be using the designated communication channels more than we have been.) On Tue, Dec 1, 2009 at 5:52 PM, Hilmar Lapp <hl...@ne...> wrote: > Hi all, > > it looks like we all have time on Thursday, Dec 3, 11am EST, so lets set the > conference call for that time. (Actually, that's the *only* time that will > work this week, so I hope everyone is still open on that slot.) > > Talk to you all on Thu. > > Cheers, > > -hilmar > > > -- > =========================================================== > : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : > =========================================================== > > > > -- Dr. Rutger A. Vos School of Biological Sciences Philip Lyle Building, Level 4 University of Reading Reading RG6 6BX United Kingdom Tel: +44 (0) 118 378 7535 http://www.nexml.org http://rutgervos.blogspot.com |
From: Val T. <va...@ci...> - 2009-11-24 12:55:18
|
Yes, submission was part of the beta testing. However, the number of volunteers that helped was disappointingly low in spite of repeated efforts by Bill. Val On Nov 24, 2009, at 12:16 AM, Hilmar Lapp wrote: > Bill et al, > > I'm wondering whether the TB2 submission interface has been part of > the beta testing yet. I believe there was a dummy data file that > testers were asked to submit, but maybe I'm confusing this with > something else? > > -hilmar > > > -- > =========================================================== > : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : > =========================================================== > > > > > ------------------------------------------------------------------------------ > Let Crystal Reports handle the reporting - Free Crystal Reports 2008 > 30-Day > trial. Simplify your report design, integration and deployment - and > focus on > what you do best, core application coding. Discover what's new with > Crystal Reports now. http://p.sf.net/sfu/bobj-july > _______________________________________________ > Treebase-devel mailing list > Tre...@li... > https://lists.sourceforge.net/lists/listinfo/treebase-devel |
From: Hilmar L. <hl...@ne...> - 2009-11-24 05:17:02
|
Bill et al, I'm wondering whether the TB2 submission interface has been part of the beta testing yet. I believe there was a dummy data file that testers were asked to submit, but maybe I'm confusing this with something else? -hilmar -- =========================================================== : Hilmar Lapp -:- Durham, NC -:- informatics.nescent.org : =========================================================== |