From: Dave B. <db...@pc...> - 2004-07-13 17:08:39
Hi Juan,

The error is definitely from Java code not compiling correctly. My first suggestion is below, but first, let me just say that if you ever get a build error that looks like the result of Java code not compiling, there is an immediate workaround: add -skipJavaCompiling to your build command (so you would run "build GUS install -append -skipJavaCompiling"). This does exactly what it says, and no Java code will compile. It is intended as a temporary fix; often when you build GUS you will be using it without needing any of the Java code/software that comes with it, but when the Java doesn't compile for whatever reason, it prevents you from doing a complete build. Add the -skipJavaCompiling option to get around this, but also send mail to the list when you encounter the error so a developer can figure out what is causing it in the first place.

On to trying to fix the problem. My first question is: do you have the GUS Java object layer generated? You can check this by moving to the directory {PROJECT_HOME}/GUS/Model/src/java/org/gusdb/model/. Within this directory should be more directories representing the various schemas of GUS (Core, DoTS, TESS, SRES, RAD3), and within those should be the GUS Java objects (hundreds of Java objects representing tables in GUS). If those objects aren't there, then your instance of the GUS database may not be set up correctly. The GUS objects are created according to the entries in the Core.TableInfo table in the GUS database; check what is in that table currently and let me know.

If you can answer those leading questions for me, then hopefully I'll have a better idea of what's going on.

Dave

On Tue, 13 Jul 2004, Juan Perin wrote:
> Hello,
>
> I am in the process of building GUS to implement RAD on a Dell server
> running Oracle 9i and Red Hat AS 2.1. Everything has installed as
> suggested by the VBI document from Virginia Tech. Our error appears to
> come from Java when trying to run 'build GUS install -append'.
> [...]
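Since the generated objects mirror the rows of Core.TableInfo, a quick way to see what the generator has to work with is to query that table directly. A minimal sketch, assuming the standard Core columns (table_id and name); verify against your own instance:

  SELECT table_id, name
  FROM core.TableInfo
  ORDER BY name;

If the tables behind the failing classes (BLATAlignment, NASequence, Taxon) have no entries here, the *_Row superclasses would never be generated, which would match the "cannot resolve symbol" errors above.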
From: Juan P. <BI...@ge...> - 2004-07-13 16:15:29
Hello,

I am in the process of building GUS to implement RAD on a Dell server running Oracle 9i and Red Hat AS 2.1. Everything has installed as suggested by the VBI document from Virginia Tech. Our error appears to come from Java when trying to run 'build GUS install -append'. Our first thought was that either Ant is having trouble finding class dependencies or that there may be errors in the source code (which is probably very unlikely). The output, along with the errors, appears as follows. (Thanks for any insight!)

[gus@rad gususer]$ build GUS install -append

ant -f /checkout/install/build.xml install -Dproj=GUS -DtargetDir=/checkout/GUS -Dcomp= -DprojectsDir=/checkout -Dappend=true -logger org.apache.tools.ant.NoBannerLogger | grep ']'

[echo] .
[echo] Installing CBIL/Bio
[echo] .
[echo] Installing CBIL/CSP
[echo] .
[echo] Installing CBIL/Util
[echo] .
[echo] Installing CBIL/HQ
[echo] .
[echo] Installing CBIL/ObjectMapper
[concat] Warning: Could not find any of the files specified in concat task.
[echo] .
[echo] Installing GUS/Common
[echo] .
[echo] Installing GUS/DBAdmin
[echo] .
[echo] Installing GUS/GOPredict
[echo] .
[echo] Installing GUS/ObjRelP
[delete] Deleting 20 files from /checkout/GUS/Model/lib/perl/DoTS
[delete] Deleting 20 files from /checkout/GUS/lib/perl/GUS/Model/DoTS
[echo] generating Perl Objects
[copy] Copying 20 files to /checkout/GUS/Model/lib/perl/DoTS
[rmic] RMI Compiling 1 class to /checkout/GUS/ObjRelJ/src/java
[rmic] RMI Compiling 1 class to /checkout/GUS/ObjRelJ/src/java
[move] Moving 4 files to /checkout/GUS/ObjRelJ/classes/org/gusdb/objrelj
[echo] .
[echo] Installing GUS/ObjRelJ
[jar] Building jar: /checkout/GUS/lib/java/GUS-ObjRelJ.jar
[echo] starting target: javaGeneratedModel
[delete] Deleting 2 files from /checkout/GUS/Model/src/java/org/gusdb/model/DoTS
[delete] Deleting 1 files from /checkout/GUS/Model/src/java/org/gusdb/model/SRes
[echo] generating java objects
[copy] Copying 2 files to /checkout/GUS/Model/src/java/org/gusdb/model/DoTS
[copy] Copying 1 file to /checkout/GUS/Model/src/java/org/gusdb/model/SRes
[echo] Starting target: JavaModel
[echo] .
[echo] Installing GUS/Model
[copy] Copying 20 files to /checkout/GUS/lib/perl/GUS/Model
[javac] Compiling 3 source files to /checkout/GUS/Model/classes
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/DoTS/BLATAlignment.java:26: cannot resolve symbol
[javac] symbol  : class BLATAlignment_Row
[javac] location: class org.gusdb.model.DoTS.BLATAlignment
[javac] public class BLATAlignment extends BLATAlignment_Row {
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/DoTS/NASequence.java:26: cannot resolve symbol
[javac] symbol  : class NASequence_Row
[javac] location: class org.gusdb.model.DoTS.NASequence
[javac] public class NASequence extends NASequence_Row {
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/SRes/Taxon.java:16: cannot resolve symbol
[javac] symbol  : class Taxon_Row
[javac] location: class org.gusdb.model.SRes.Taxon
[javac] public class Taxon extends Taxon_Row {
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/DoTS/BLATAlignment.java:34: cannot resolve symbol
[javac] symbol  : method getQstarts ()
[javac] location: class org.gusdb.model.DoTS.BLATAlignment
[javac] String starts = getQstarts();
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/DoTS/BLATAlignment.java:39: cannot resolve symbol
[javac] symbol  : method getBlocksizes ()
[javac] location: class org.gusdb.model.DoTS.BLATAlignment
[javac] String lengths = getBlocksizes();
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/DoTS/BLATAlignment.java:44: cannot resolve symbol
[javac] symbol  : method getTstarts ()
[javac] location: class org.gusdb.model.DoTS.BLATAlignment
[javac] String tStarts = getTstarts();
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/DoTS/NASequence.java:42: cannot resolve symbol
[javac] symbol  : method getSequenceLobLength ()
[javac] location: class org.gusdb.model.DoTS.NASequence
[javac] return this.getSequenceAsSymbolList(1, this.getSequenceLobLength().longValue());
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/DoTS/NASequence.java:53: cannot resolve symbol
[javac] symbol  : method getSequence (long,long)
[javac] location: class org.gusdb.model.DoTS.NASequence
[javac] char seqData[] = this.getSequence(start, end);
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/SRes/Taxon.java:34: cannot resolve symbol
[javac] symbol  : method getTaxonNameList (boolean)
[javac] location: class org.gusdb.model.SRes.Taxon
[javac] Vector allTaxonNames = getTaxonNameList(false);
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/SRes/Taxon.java:57: cannot resolve symbol
[javac] symbol  : method getTaxonNameList (boolean)
[javac] location: class org.gusdb.model.SRes.Taxon
[javac] kids = this.getTaxonNameList(true);
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/SRes/Taxon.java:66: cannot resolve symbol
[javac] symbol  : class TaxonName
[javac] location: class org.gusdb.model.SRes.Taxon
[javac] TaxonName tn = (TaxonName)(kids.elementAt(i));
[javac] ^
[javac] /checkout/GUS/Model/src/java/org/gusdb/model/SRes/Taxon.java:66: cannot resolve symbol
[javac] symbol  : class TaxonName
[javac] location: class org.gusdb.model.SRes.Taxon
[javac] TaxonName tn = (TaxonName)(kids.elementAt(i));
[javac] ^
[javac] 12 errors

BUILD FAILED
file:/checkout/install/build.xml:241: Compile failed; see the compiler error output for details.

Total time: 57 seconds
From: Deborah F. P. <pi...@pc...> - 2004-07-12 17:41:01
Hi,

I'll try to answer your questions. The plugin is designed to load from fasta files downloaded directly from NCBI. We download the full db file and the plugin updates our database, including deleting obsolete entries when the --delete option is used. Therefore, I think it would do well with update/change files, but in this case the --delete option would NOT be used. The plugin loads entries into dots.NRDBEntry and dots.ExternalNASequence, in each case inserting if new, updating existing data, and deleting only if that option is used.

The NRDB data would change with each update, although existing ids should be more or less stable. I think that means analyses should be rerun after updates, but there should be only incremental differences. Specific queries can be done to test whether an analysis is "stale".

You may have had a problem because a complicated command line is required, as well as a fairly large set of sres.externaldatabaserelease.external_database_release_id's. Below is an example command line:

(ga GUS::Common::Plugin::LoadNRDB --temp_login "$temp_login" --sourceDB $sourceDB --temp_password "$temp_password" --dbi_str "$dbi_str" $restart --gitax $gitax --nrdb $nrdb --extDbRelId $nrdbReleaseId --maketemp --plugin --delete > nrdb.out) >& nrdb.err &

$nrdbReleaseId = 4194
$nrdb = /files/cbil/data/thirdpart/nrdb/2004-01-07/nr
$gitax = /files/cbil/data/thirdpart/taxonomy/2004-01-07/gi_taxid_prot.dmp
$restart = for restarting the interrupted plugin (use with the plugin option); use the number from the last set number in the log
$dbi_str = fill these in with what you want
$temp_password = I kept these because I have to grant
$temp_login = permissions and truncate the table, but they should be eliminated at some point
$sourceDB = hum:6501:gb,6502:emb,6503:dbj,6504:pir,6505:prf,6506:sp,6507:pdb,6508:pat,6509:bbs,6510:gnl,168:ref,6633:lcl,6511:genpept,7333:tpe,8035:tpd

You will need to supply your own external_database_release_id's. The gi_taxid_prot.dmp is another file downloaded from NCBI (ftp://ftp.ncbi.nih.gov/pub/taxonomy). Another requirement is that the taxonomy tables (sres.taxon, sres.taxonname, and sres.geneticcode) must be filled with NCBI taxonomy data (ftp://ftp.ncbi.nih.gov/pub/taxonomy). That can be done using the LoadTaxonomy.pm plugin. If you need a sample command line for LoadTaxonomy.pm, let me know.

Debbie

On Mon, 12 Jul 2004, Pablo Nascimento Mendes wrote:
> Hello all,
>
> we have some questions on loading a new release of NR into a live
> database. Maybe some of you have already addressed these issues.
> [...]
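As a concrete example of the kind of staleness check Debbie mentions: a minimal sketch, assuming the standard external_database_release_id column on the loaded entries; the release id is only an illustration:

  SELECT external_database_release_id, COUNT(*)
  FROM dots.NRDBEntry
  GROUP BY external_database_release_id;

Rows still carrying a release id older than the one just loaded (4194 in Debbie's example) would flag analyses that may need to be rerun.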
From: Pablo N. M. <pa...@ug...> - 2004-07-12 16:44:59
Hello all,

we have some questions on loading a new release of NR into a live database. Maybe some of you have already addressed these issues. We would appreciate any help.

1) How does GenBank release new versions of the NR database?
   - Whole DB downloads?
   - Updates/changes only?

2) Has anybody used the LoadNRDB GUS plug-in successfully with a FASTA version of the NR DB obtained directly from NCBI? We have version 1.22 of the LoadNRDB GUS plug-in and we're having difficulty using it to load the NR database. It crashes when it finds, for instance, a header like this:

>gi|25506335|pir||B90058 conserve (...)

So, do we need to fix all the supposedly incorrect headers?

3) How are sequences updated in GUS? Is the whole DB version loaded each time? Or only sequences whose accession/ids have changed?

4) How are analyses that were run from a previous db release version affected? What about analyses performed on sequences that didn't change?

5) How can we determine what analyses stored in GUS are "stale" when a new version of NRDB is released?

Thanks in advance
--
-----------------------------
Pablo N. Mendes
Research Scholar
Kissinger Lab
Department of Genetics
University of Georgia
C210 Life Sciences Bldg.
Athens, Georgia 30602
Phone: 706 542-1447
E-mail: pa...@ug...
From: Jinal J. <jjh...@vb...> - 2004-07-12 16:39:22
Thanks Jonathan.

What is the name of the plugin that loads the EnzymeClass-related tables?

--Jinal

On Friday 09 July 2004 11:46 pm, Jonathan Schug wrote:
> SRes::EnzymeClass.
From: Jonathan S. <js...@pc...> - 2004-07-10 03:46:36
Jinal:

The intended method is this way, if you don't mind linking to the sequence: DoTS::AASequenceEnzymeClass links DoTS::AASequence with SRes::EnzymeClass. You'll want to load the EC db first. There is a plugin that loads the EnzymeClass-related tables. This can be a pain due to the lack of an easily available source for a current EC database. You may have to roll your own plugin to make the associations.

Jonathan Schug

---------------------------------------------------------------------------
Jonathan Schug           Center for Bioinformatics
js...@pc...              Computational Biology and Informatics Lab
(215) 573-3113 voice     University of Pennsylvania
(215) 573-3111 fax       1413 Blockley Hall, Philadelphia, PA 19104-6021
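As a sketch of the association path Jonathan describes, the link table can be joined through in SQL. The join columns and the ec_number attribute are assumed from the standard schema, and the id is only an example:

  SELECT ec.ec_number
  FROM dots.AASequenceEnzymeClass asec, sres.EnzymeClass ec
  WHERE asec.enzyme_class_id = ec.enzyme_class_id
    AND asec.aa_sequence_id = 12345;   -- example aa_sequence_id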
From: Jinal J. <jjh...@vb...> - 2004-07-09 20:43:54
Has anyone tried to load the EC number for a gene? Any idea about which table this information will be loaded into? I have already successfully loaded GenBank entries, but because GenBank doesn't support EC numbers I need to load them separately. My GenBank gene entries are in the NAGene etc. tables. I looked into the sres.enzymeclass and sres.enzymeclassattribute tables, but I don't know how I can connect the dots.nagene entries with these tables.

thanks
--Jinal
From: Jinal J. <jjh...@vb...> - 2004-07-08 18:17:52
I am sorry, guys. It works now; it was just that I was not running a proper boolean query.

thanks
--Jinal

On Thursday 08 July 2004 01:32 pm, Jinal Jhaveri wrote:
> Hi,
>
> I am trying to configure the history page and it seems like a few things
> have changed in the schema of the Queries table (changed with reference
> to the .sql queries given on the CBIL site).
> [...]
From: Jinal J. <jjh...@vb...> - 2004-07-08 17:33:14
Hi,

I am trying to configure the history page and it seems like a few things have changed in the schema of the Queries table (changed with reference to the .sql queries given on the CBIL site). Two fields have been added, named dataset_name and session_id. After adding them to the Queries table, my servlet does add the query history information to the Queries table, but I still can't view it through the history page servlet. Has anyone tried enabling this history page? Am I missing something (e.g., a change in the servlet-config)?

thanks
--Jinal
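For anyone hitting the same mismatch, the schema change described above amounts to something like the following; the column names are from this message, but the datatypes are assumptions, so check the current .sql files before running anything like this:

  ALTER TABLE Queries ADD (
    dataset_name VARCHAR2(255),  -- type assumed
    session_id   VARCHAR2(255)   -- type assumed
  );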
From: Sucheta T. <su...@vb...> - 2004-07-08 16:26:49
Hi Steve, Michael,

I know there was a lot of discussion on this earlier, where Michael had created some new tables. However, I am not clear on what happens to the data if you build GUS on top of a previous installation. The reason I ask is that with the earlier distribution we don't have the dots.scaffoldgapfeature view, and I want to create it. I saw the detailed instructions on this in the wiki page; I just wanted to make sure the existing data in the tables can be restored.

Thanks
Sucheta
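Before rebuilding, one way to confirm whether the view already exists in a given instance is Oracle's data dictionary; a minimal sketch, assuming the view would live in the DOTS schema:

  SELECT view_name
  FROM all_views
  WHERE owner = 'DOTS'
    AND view_name = 'SCAFFOLDGAPFEATURE';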
From: Paul M. <pj...@sa...> - 2004-07-08 11:02:47
Hi,

I have found an issue with the table DoTS.DomainFeature. When setting its parent to a TranslatedAASequence I get an error, because the sequence can be the parent of two separate fields (you only get the SQL error when you submit, and the other field is not nullable). In DomainFeature_Table.pm, TranslatedAASequence is mapped to both fields:

  $self->setParentList(
    ['GUS::Model::DoTS::TranslatedAASequence','aa_sequence_id','aa_sequence_id'],
    ['GUS::Model::DoTS::TranslatedAASequence','motif_aa_sequence_id','aa_sequence_id'],
    ...

Is this a design error?

Paul.
From: Steve F. <st...@pc...> - 2004-07-06 23:59:41
sucheta-

the way we use the similarity table is between sequences that are in the database. the query is, for example, an Allgenes or PlasmoDB gene. And, yes, we load NRDB, which is the target.

it all depends on what you plan on doing with your blast results. but our standard model is that the blast results are information most importantly associated with a gene that we care about. so you get to the blast results from the gene by way of the similarity table. if you have the target sequence in the db, then you can show the user the definition line of the target and the sequence.

the way the LoadBlastSimFast plugin works is that it assumes the query and target sequences have previously been loaded, and it is just establishing a similarity relationship between them.

steve

Sucheta Tripathy wrote:
> Steve,
>
> I have the outputs generated from blast results after blasting against
> several databases (like NR, several different organisms), so do I need
> to put the subject sequences in some of the GUS tables? If so, where?
> [...]
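To make "you get to the blast results from the gene by way of the similarity table" concrete, here is a hedged sketch of the lookup. The query/subject columns follow the plugin's --QueryTable/--SubjectTable arguments, but the exact column names and the ids below are assumptions to verify against your schema:

  SELECT s.subject_table_id, s.subject_id, s.score, s.pvalue_mant, s.pvalue_exp
  FROM dots.Similarity s
  WHERE s.query_table_id = 56    -- hypothetical Core.TableInfo id of the query table
    AND s.query_id = 12345;      -- hypothetical primary key of the query sequence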
From: Sucheta T. <su...@vb...> - 2004-07-06 23:07:28
Steve,

I have the outputs generated from blast results after blasting against several databases (like NR and several different organisms), so do I need to put the subject sequences in some of the GUS tables? If so, where?

Similarly, for the query table, do I have to put the query sequences into the NASequenceImp tables first?

I am not quite sure I have understood the concept correctly. When I upload blast results to some database, I parse them and store them in tables along with the HSP regions, which I don't see in the parsed output; is that why you want the query table and subject table to have the sequences stored?

Can you please send me some example?

Thanks
Sucheta

At 06:49 PM 7/6/2004 -0400, Steve Fischer wrote:
> Sucheta-
>
> At first glance, the output looks ok, so let's assume Michael's script
> works.
> [...]
From: Steve F. <st...@pc...> - 2004-07-06 22:47:38
Sucheta-

At first glance, the output looks ok, so let's assume Michael's script works.

similarities are between two things: the query (your sequence) and the subject (the sequences you are blasting against).

the querytable is the table that holds the sequences you are blasting.

the subjecttable is the table that holds the sequences you are blasting against.

--file is the arg that you pass the file to.

hope that helps,
steve

Sucheta Tripathy wrote:
> I am trying to load a bunch of blast output files using the
> LoadBlastSimFast plugin.
> [...]
From: Sucheta T. <su...@vb...> - 2004-07-06 21:39:05
I am trying to load a bunch of blast output files using the LoadBlastSimFast plugin. First I parsed the blast output with a script, ParseFileForSimilarity.pl, that I obtained from Michael. As I see from the plugin comments, the blast output needs to be parsed by a script called generateBlastSimilarity.pl; I am not sure if both parsing scripts do the same job.

I am not sure what values I should pass to the LoadBlastSimFast plugin for --QueryTable and --SubjectTable, or which parameter takes the parsed blast output file.

My blast output is currently formatted into the following fields by ParseFileForSimilarity:

Cutoff parameters:
  P value: 1e-05
  Length: 10
  Percent Identity: 20

>CL1Contig2 (35 subjects)
  Sum: AAA83776.1:99:6e-20:54:133:30:278:1:83:54:63:0:+3
  HSP1: AAA83776.1:54:63:83:99:6e-20:54:133:30:278:0:+3
  Sum: XP_342001.1:99:6e-20:54:133:30:278:1:83:54:63:0:+3
  HSP1: XP_342001.1:54:63:83:99:6e-20:54:133:30:278:0:+3
  Sum: AAB52915.1:99:6e-20:54:133:30:278:1:83:54:63:0:+3
  HSP1: AAB52915.1:54:63:83:99:6e-20:54:133:30:278:0:+3
  Sum: NP_032016.1:99:6e-20:54:133:30:278:1:83:54:63:0:+3
  HSP1: NP_032016.1:54:63:83:99:6e-20:54:133:30:278:0:+3
  Sum: NP_001988.1:99:6e-20:54:133:30:278:1:83:54:63:0:+3
  HSP1: NP_001988.1:54:63:83:99:6e-20:54:133:30:278:0:+3
  Sum: XP_345683.1:99:6e-20:72:151:30:278:1:83:54:63:0:+3
  HSP1: XP_345683.1:54:63:83:99:6e-20:72:151:30:278:0:+3
  Sum: BAB68608.1:99:6e-20:58:137:30:278:1:83:54:63:0:+3
  HSP1: BAB68608.1:54:63:83:99:6e-20:58:137:30:278:0:+3
  Sum: NP_777156.1:97:2e-19:54:133:30:278:1:83:53:63:0:+3
  HSP1: NP_777156.1:53:63:83:97:2e-19:54:133:30:278:0:+3
  Sum: NP_598374.1:97:2e-19:54:132:30:275:1:82:53:62:0:+3
  HSP1: NP_598374.1:53:62:82:97:2e-19:54:132:30:275:0:+3
  Sum: AAQ63318.1:96:3e-19:53:133:27:278:1:84:54:64:0:+3
  HSP1: AAQ63318.1:54:64:84:96:3e-19:53:133:27:278:0:+3
  Sum: BAB68610.1:96:4e-19:58:137:30:278:1:83:53:62:0:+3
  HSP1: BAB68610.1:53:62:83:96:4e-19:58:137:30:278:0:+3
  Sum: AAK95215.1:93:3e-18:54:133:30:278:1:83:54:60:0:+3
  HSP1: AAK95215.1:54:60:83:93:3e-18:54:133:30:278:0:+3
  Sum: AAN05597.1:90:2e-17:51:131:27:278:1:84:52:61:0:+3
  HSP1: AAN05597.1:52:61:84:90:2e-17:51:131:27:278:0:+3
  Sum: NP_505007.1:90:2e-17:51:130:30:278:1:83:51:62:0:+3
  HSP1: NP_505007.1:51:62:83:90:2e-17:51:130:30:278:0:+3
  Sum: AAL49305.2:90:3e-17:86:146:90:278:1:63:47:52:0:+3
  HSP1: AAL49305.2:47:52:63:90:3e-17:86:146:90:278:0:+3
  Sum: NP_650922.1:90:3e-17:71:131:90:278:1:63:47:52:0:+3
  HSP1: NP_650922.1:47:52:63:90:3e-17:71:131:90:278:0:+3
  Sum: AAK92197.1:89:6e-17:71:131:90:278:1:63:47:52:0:+3
  HSP1: AAK92197.1:47:52:63:89:6e-17:71:131:90:278:0:+3
  Sum: XP_343942.1:86:3e-16:68:144:30:278:1:83:50:58:0:+3
  HSP1: XP_343942.1:50:58:83:86:3e-16:68:144:30:278:0:+3
  Sum: Q05472:86:4e-16:1:59:96:278:1:61:46:50:0:+3
  HSP1: Q05472:46:50:61:86:4e-16:1:59:96:278:0:+3
  Sum: XP_311180.1:85:7e-16:61:139:42:281:1:81:48:60:0:+3
  HSP1: XP_311180.1:48:60:81:85:7e-16:61:139:42:281:0:+3
  Sum: CAC86461.1:84:2e-15:2:61:93:278:1:62:46:51:0:+3
  HSP1: CAC86461.1:46:51:62:84:2e-15:2:61:93:278:0:+3
  Sum: NP_473105.1:81:9e-15:2:58:93:278:1:62:45:47:0:+3
  HSP1: NP_473105.1:45:47:62:81:9e-15:2:58:93:278:0:+3
  Sum: NP_194668.1:80:2e-14:2:66:93:275:2:122:88:94:0:+3
  HSP1: NP_194668.1:44:47:61:80:2e-14:2:60:93:275:0:+3
  HSP2: NP_194668.1:44:47:61:80:2e-14:8:66:93:275:0:+3
  Sum: CAD98413.1:80:3e-14:3:58:96:272:1:59:44:47:0:+3
  HSP1: CAD98413.1:44:47:59:80:3e-14:3:58:96:272:0:+3
  Sum: O22424:80:3e-14:1:259:195:971:1:259:169:209:1:-1
  HSP1: O22424:169:209:259:80:3e-14:1:259:195:971:1:-1
  Sum: P47961:80:3e-14:1:253:213:971:1:253:164:204:1:-1

Thanks

Sucheta
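Putting Steve's answers together, an invocation would look roughly like the following. Only the --QueryTable, --SubjectTable, and --file argument names are confirmed in this thread; the values, and any other required arguments, are placeholders to fill in from the plugin's own documentation:

  # Values below are placeholders; only the argument names --QueryTable,
  # --SubjectTable, and --file are named in this thread.
  ga GUS::Common::Plugin::LoadBlastSimFast \
     --QueryTable ExternalNASequence \
     --SubjectTable ExternalAASequence \
     --file parsed_blast_output.txt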
From: <io...@pc...> - 2004-07-02 13:28:16
You can set MODIFICATION_DATE to SYSDATE on a sqlldr insert, just as you do with a plugin. I found this example at http://www.orafaq.com/faqloadr.htm#MODIFY

LOAD DATA
INFILE *
INTO TABLE modified_data
( rec_no      "my_db_sequence.nextval",
  region      CONSTANT '31',
  time_loaded "to_char(SYSDATE, 'HH24:MI')",
  data1       POSITION(1:5)   ":data1/100",
  data2       POSITION(6:15)  "upper(:data2)",
  data3       POSITION(16:22) "to_date(:data3, 'YYMMDD')"
)
BEGINDATA
11111AAAAAAAAAA991201
22222BBBBBBBBBB990112

Quoting Jonathan Schug <js...@pc...>:
> Sucheta:
>
> I am in the process of using sqlldr to load data into a new TESS table
> that has lots of data that we want to compress using Oracle's built-in
> compression ability.
> [...]
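Combining the two examples in this thread (Jonathan's control file below, with modification_date taken from SYSDATE instead of a date constant) gives something like this; a sketch based on those examples only, not a tested loader file:

LOAD DATA
INFILE *
APPEND
INTO TABLE TESS.PredictedBindingSite
FIELDS TERMINATED BY ','
( predicted_binding_site_id,
  model_id,
  na_sequence_id,
  score,
  begin,
  end,
  is_reversed,
  -- SYSDATE is evaluated at load time, avoiding the fixed-date workaround
  modification_date     "SYSDATE",
  user_read             CONSTANT 1,
  user_write            CONSTANT 1,
  group_read            CONSTANT 1,
  group_write           CONSTANT 1,
  other_read            CONSTANT 1,
  other_write           CONSTANT 0,
  row_user_id           CONSTANT 7,
  row_group_id          CONSTANT 0,
  row_project_id        CONSTANT 0,
  row_alg_invocation_id CONSTANT 247696
)
BEGINDATA
1,5017,100928782,6.277017,41202114,41202122,0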
From: Sucheta T. <su...@vb...> - 2004-07-02 12:36:14
Jonathan, Steve,

Thank you for your comments and suggestions. What I am planning to do is convert a few of the sequences into GenBank format and see how they go through GBParser; since our data will be uniform (unlike the generic GenBank format), I assume it will be possible to fix GBParser. I shall let you know by this evening how things are going.

Thanks
Sucheta

> Sucheta:
>
> I am in the process of using sqlldr to load data into a new TESS table
> that has lots of data that we want to compress using Oracle's built-in
> compression ability.
> [...]
From: Jonathan S. <js...@pc...> - 2004-07-02 04:30:29
Sucheta:

I am in the process of using sqlldr to load data into a new TESS table that has lots of data that we want to compress using Oracle's built-in compression ability. The table has all of the normal GUS overhead columns and will be queried using GUS objects and/or SQL as usual. If the overhead columns are constant for the load, they can be set in the header of the SQL*Loader file. (See the example below; note the limited date resolution.) Once in the db, these rows are as good as any other rows. I 'need' to use sqlldr because it is the simplest way to make sure the data gets compressed as it's loaded. I will probably be doing more of this in the future. Will let you know how it goes.

Since your data is not originally in GenBank format, there is certainly much merit in writing a loader specifically for your data format. We certainly do not convert everything to GB format to put it in GUS. As you note, this is especially true when there is extra data that is not well handled by GenBank format. In this scenario, the best thing about using the object layer (as opposed to sqlldr) is that the objects handle most of the details of getting primary keys, rollbacks, submits over multiple tables, GUS overhead rows, etc. You get none of this with the bulk loader, though you could (re)create some of it, I guess. We use the objects to load the PlasmoDB data read from XML files, and I think the performance is perfectly acceptable.

On the other hand, if you have to convert your data to GenBank format to submit it to GenBank anyway, then it wouldn't hurt to use GBParser and try to track down and fix (or at least report) any errors you encounter. GBParser should be kept in good working order! In any case, it is a good idea to compare the contents of the db after a run to make sure everything that you think should have gone in did in fact go in.

Jonathan Schug

-----

LOAD DATA
INFILE *
APPEND
INTO TABLE TESS.PredictedBindingSite
FIELDS TERMINATED BY ','
( predicted_binding_site_id,
  model_id,
  na_sequence_id,
  score,
  begin,
  end,
  is_reversed,
  modification_date     constant '19-JUN-04',
  user_read             constant 1,
  user_write            constant 1,
  group_read            constant 1,
  group_write           constant 1,
  other_read            constant 1,
  other_write           constant 0,
  row_user_id           constant 7,
  row_group_id          constant 0,
  row_project_id        constant 0,
  row_alg_invocation_id constant 247696
)
BEGINDATA
1,5017,100928782,6.277017,41202114,41202122,0
...

---------------------------------------------------------------------------
Jonathan Schug           Center for Bioinformatics
js...@pc...              Computational Biology and Informatics Lab
(215) 573-3113 voice     University of Pennsylvania
(215) 573-3111 fax       1413 Blockley Hall, Philadelphia, PA 19104-6021
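On the compression point: in Oracle 9i, block compression is declared on the table, and direct-path operations (such as a sqlldr direct load) then write compressed blocks. A minimal sketch using a made-up demo table, not a GUS table:

  CREATE TABLE compress_demo (
    id    NUMBER(12) NOT NULL,  -- hypothetical columns for illustration
    score FLOAT
  ) COMPRESS;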
From: Sucheta T. <su...@vb...> - 2004-07-01 19:53:30
Steve,

The major issues with using SQL*Loader for our data are:

1. Our data is sequenced and annotated by us; it has not yet been
submitted to GenBank and exists in no defined format.

2. As I understand the system, GBParser is the plugin that loads the
majority of annotated data into GUS from GenBank format. But with the
test data we have used (the Arabidopsis accession AP004850), it looks
like only 9 of 30 proteins get registered into NAProtein. Similarly,
several views such as GeneFeature, Transcript, etc. also get fewer
entries than the actual data. This basically makes us nervous about
using GBParser and then later having to track down what did not get in
where (or is there a tracking tool available for doing that?).

Also, the other functional annotation that I posted about earlier does
not have a definitive way of getting into the tables, so in those cases
too we may have to load in bulk.

Please let me know if anyone has a better suggestion for doing this.

Thanks
Sucheta

At 03:01 PM 7/1/2004 -0400, Steve Fischer wrote:
>sucheta-
>
>we only use bulk loading to transfer data from one GUS to another.
>
>we don't have any experience in loading somebody else's data into GUS
>that way.
>
>i don't know much about SQL*Loader, having never used it, but here are
>some issues that come to mind (if i understand it, which i might not):
>
>1. it is primarily designed to load simple data into a table or two.
>typically when we load data we have to do significant transformations,
>which is why we write plugins. what data do you plan on loading, and
>into what tables?
>
>2. if you don't use plugins to load your data then you will probably
>bypass the tracking GUS uses, e.g., who/what/when put the data in.
>
>steve
>
>Sucheta Tripathy wrote:
>
>>Hello All,
>>
>>Sorry for bombarding the forum with several questions.
>>
>>I am planning to use sqlldr for lots of our data to be uploaded to
>>GUS. I can't foresee now whether it may have some negative impact
>>later. Can someone clarify what impact it may have?
>>
>>Thanks
>>
>>Sucheta
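One rough way to do that tracking by hand is to count, per view, the
rows stamped with a given run's row_alg_invocation_id (every GUS row
carries one). A sketch, with the schema prefixes assumed to match your
instance and 247696 standing in for the invocation id your GBParser run
reports:

SELECT 'GeneFeature' view_name, COUNT(*) n
  FROM DoTS.GeneFeature WHERE row_alg_invocation_id = 247696
UNION ALL
SELECT 'Transcript', COUNT(*)
  FROM DoTS.Transcript WHERE row_alg_invocation_id = 247696
UNION ALL
SELECT 'NAProtein', COUNT(*)
  FROM DoTS.NAProtein WHERE row_alg_invocation_id = 247696;

Comparing those counts against what the flat file contains (e.g. the 30
proteins expected from AP004850) would at least show which views came up
short.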
From: Sucheta T. <su...@vb...> - 2004-07-01 19:31:26
Steve,

Following is the data snippet.

*******************
                     /translation="MEAADAADSVLHGDLLECVLLRVPHGELTASPALVSREWRRAAR
                     EAHQRHRRRRRHLPCLVAHVHGAAAGVGRSTHVYDPRAGAWASDGWRVAGALPVRRCA
                     CAGGDRVYALSLASMAVSEDAVGAAWRELPPPRVWRVDPVVAAVGPHVVVLGGGCGAT
                     AAAGVVEVLDEGAGWATCPPMPAPLASRWVSSAASERRVYVVERRTGWASWFDPAARQ
                     WGPARQLQLPEGNNTASVESWAACGVTTSGGGGASERLLVLAGGGGGKVSLWGVDGDT
                     LLLDAEANNTSMPPEMSERLGGAGSIAAAAAGAASGYVYNASEPSKGAVRYELVDAEV
                     GGGHGSYSDSDSKNGRHEKTWGKRSSGGSRWEWEWLPCPPAAAAAMSTSSSAVVVFAC
                     CGSSSAPNK"
     gene            join(22480..22704,23721..23786)
                     /gene="OJ1342_D02.4"
     misc_feature    join(22480..22704,23721..23786)
                     /gene="OJ1342_D02.4"
                     /note="gag-pol polyprotein-like"
     gene            complement(25925..26257)
                     /gene="OJ1342_D02.5"
     mRNA            complement(<25925..>26257)
                     /gene="OJ1342_D02.5"
                     /note="start and end point are not identified"
     CDS             complement(25925..26257)
                     /gene="OJ1342_D02.5"
                     /note="predicted by FGENESH etc."
                     /codon_start=1
                     /product="hypothetical protein"
                     /protein_id="BAD19559.1"
                     /db_xref="GI:47497506"
*************************************

In this case the prediction was by FGENESH.

Sucheta

At 02:43 PM 7/1/2004 -0400, you wrote:
>sucheta-
>
>can you clarify where the prediction_program_algorithm is used in your
>data. is it in a GenBank file? can you provide a snippet of that data?
>
>thanks,
>steve
>
>Sucheta Tripathy wrote:
>
>>Hi,
>>
>>My most immediate concern is locating where the prediction program
>>(gene prediction programs like GENSCAN, GLIMMER, etc.) is stored in
>>GUS.
>>
>>Some of the views that get their data from NAFeatureImp do have a
>>prediction_algorithm_id column, but it refers to Core.Algorithm, which
>>just stores information about the plugins used in the current project.
>>For example, our Core.Algorithm table currently contains:
>>
>>id   Name
>>===  =====
>>1    SQL*PLUS
>>2    GA-Plugin
>>3    GUS::Common::Plugin::SubmitRow
>>4    GUS::Common::Plugin::LoadTaxon
>>5    GUS::GOPredict::Plugin::LoadGoOntology
>>...
>>
>>So is prediction_algorithm_id wrongly pointing to Core.Algorithm?
>>
>>thanks
>>Sucheta
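Since the prediction program is buried in the /note qualifier, it could
be pulled out at parse time rather than after loading. A sketch,
assuming BioPerl and the "predicted by X" phrasing above; the input file
name is a placeholder:

#!/usr/bin/perl
# Sketch: report the prediction program named in each CDS /note.
use strict;
use Bio::SeqIO;

my $in = Bio::SeqIO->new(-file => 'OJ1342.gb', -format => 'genbank');
while (my $seq = $in->next_seq) {
    for my $feat (grep { $_->primary_tag eq 'CDS' } $seq->get_SeqFeatures) {
        next unless $feat->has_tag('note');
        for my $note ($feat->get_tag_values('note')) {
            if ($note =~ /predicted by (\S+)/) {
                my ($gene) = $feat->has_tag('gene')
                           ? $feat->get_tag_values('gene') : ('?');
                print "$gene\t$1\n";   # e.g. OJ1342_D02.5  FGENESH
            }
        }
    }
}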
From: Steve F. <sfi...@pc...> - 2004-07-01 19:01:38
sucheta-

we only use bulk loading to transfer data from one GUS to another.

we don't have any experience in loading somebody else's data into GUS
that way.

i don't know much about SQL*Loader, having never used it, but here are
some issues that come to mind (if i understand it, which i might not):

1. it is primarily designed to load simple data into a table or two.
typically when we load data we have to do significant transformations,
which is why we write plugins. what data do you plan on loading, and
into what tables?

2. if you don't use plugins to load your data then you will probably
bypass the tracking GUS uses, e.g., who/what/when put the data in.

steve

Sucheta Tripathy wrote:

> Hello All,
>
> Sorry for bombarding the forum with several questions.
>
> I am planning to use sqlldr for lots of our data to be uploaded to
> GUS. I can't foresee now whether it may have some negative impact
> later. Can someone clarify what impact it may have?
>
> Thanks
>
> Sucheta
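To make point 2 concrete: the who/what/when lives in the GUS overhead
columns (see the control file earlier in the thread) and the Core
tables, and it is only recoverable if something filled those columns in.
A sketch of reading it back; the join column names on the Core tables
here are assumptions to verify against your schema:

-- what created the rows in a table, and when
SELECT a.name, ai.start_time, COUNT(*) n
  FROM DoTS.GeneFeature gf,
       Core.AlgorithmInvocation ai,
       Core.AlgorithmImplementation aim,
       Core.Algorithm a
 WHERE gf.row_alg_invocation_id = ai.algorithm_invocation_id
   AND ai.algorithm_implementation_id = aim.algorithm_implementation_id
   AND aim.algorithm_id = a.algorithm_id
 GROUP BY a.name, ai.start_time;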
From: Steve F. <sfi...@pc...> - 2004-07-01 18:43:32
sucheta-

can you clarify where the prediction_program_algorithm is used in your
data. is it in a GenBank file? can you provide a snippet of that data?

thanks,
steve

Sucheta Tripathy wrote:

> Hi,
>
> My most immediate concern is locating where the prediction program
> (gene prediction programs like GENSCAN, GLIMMER, etc.) is stored in
> GUS.
>
> Some of the views that get their data from NAFeatureImp do have a
> prediction_algorithm_id column, but it refers to Core.Algorithm, which
> just stores information about the plugins used in the current project.
> For example, our Core.Algorithm table currently contains:
>
> id   Name
> ===  =====
> 1    SQL*PLUS
> 2    GA-Plugin
> 3    GUS::Common::Plugin::SubmitRow
> 4    GUS::Common::Plugin::LoadTaxon
> 5    GUS::GOPredict::Plugin::LoadGoOntology
> ...
>
> So is prediction_algorithm_id wrongly pointing to Core.Algorithm?
>
> thanks
> Sucheta
From: Jinal J. <jjh...@vb...> - 2004-07-01 16:26:11
Core.Algorithm is the table where we (or the plugin) make an entry for
the algorithm used. All the algorithm ids point to this table, so it is
not limited to plugin entries only.

--Jinal

P.S. Theoretically, I assume that whatever we do with GUS should
ideally be done with a plugin written by us, following the format
described on the wiki and the website.

On Thursday 01 July 2004 12:15 pm, Sucheta Tripathy wrote:
> Hi,
>
> My most immediate concern is locating where the prediction program
> (gene prediction programs like GENSCAN, GLIMMER, etc.) is stored in
> GUS.
>
> Some of the views that get their data from NAFeatureImp do have a
> prediction_algorithm_id column, but it refers to Core.Algorithm, which
> just stores information about the plugins used in the current project.
> For example, our Core.Algorithm table currently contains:
>
> id   Name
> ===  =====
> 1    SQL*PLUS
> 2    GA-Plugin
> 3    GUS::Common::Plugin::SubmitRow
> 4    GUS::Common::Plugin::LoadTaxon
> 5    GUS::GOPredict::Plugin::LoadGoOntology
> ...
>
> So is prediction_algorithm_id wrongly pointing to Core.Algorithm?
>
> thanks
> Sucheta
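Concretely, that means a prediction program such as FGENESH would get
its own Core.Algorithm row (created by a plugin, or via
GUS::Common::Plugin::SubmitRow), and feature rows would then point at it
like any other algorithm. Checking whether it is already registered is
just a lookup; the column names here are assumed to follow the usual GUS
naming:

SELECT algorithm_id, name
  FROM Core.Algorithm
 WHERE UPPER(name) LIKE '%FGENESH%';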
From: Sucheta T. <su...@vb...> - 2004-07-01 16:24:01
Hello All,

Sorry for bombarding the forum with several questions.

I am planning to use sqlldr for lots of our data to be uploaded to GUS.
I can't foresee now whether it may have some negative impact later. Can
someone clarify what impact it may have?

Thanks

Sucheta
From: Sucheta T. <su...@vb...> - 2004-07-01 16:15:23
Hi,

My most immediate concern is locating where the prediction program
(gene prediction programs like GENSCAN, GLIMMER, etc.) is stored in
GUS.

Some of the views that get their data from NAFeatureImp do have a
prediction_algorithm_id column, but it refers to Core.Algorithm, which
just stores information about the plugins used in the current project.
For example, our Core.Algorithm table currently contains:

id   Name
===  =====
1    SQL*PLUS
2    GA-Plugin
3    GUS::Common::Plugin::SubmitRow
4    GUS::Common::Plugin::LoadTaxon
5    GUS::GOPredict::Plugin::LoadGoOntology
...

So is prediction_algorithm_id wrongly pointing to Core.Algorithm?

thanks
Sucheta
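For what it's worth, a quick way to see what prediction_algorithm_id
currently resolves to in an instance; the DoTS prefix on NAFeatureImp is
an assumption to adjust for your setup:

SELECT a.name algorithm, COUNT(*) n
  FROM DoTS.NAFeatureImp f, Core.Algorithm a
 WHERE f.prediction_algorithm_id = a.algorithm_id
 GROUP BY a.name;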