From: Todd F. P. <tf...@nc...> - 2001-12-14 07:52:54
oh yeah, it's fast and easy. that was not the case with the curation
tool/xml mechanism. we figure, first get the data in the db and then
annotate as necessary. this will be good for small labs. hopefully, the
design will allow scalability to larger environments. also, after loading
one additional dataset, i saw a big decrease in performance from the db.
i'm attempting to solve that problem. the mechanism i am proposing allows
the flexibility of having all of the data in the db, or just what is
needed. bottom line, we're experimenting by taking small steps and
designing/implementing each one well while building on previous steps.

todd

On 13 Dec 2001, Jason E. Stewart wrote:

> "Todd F. Peterson" <tf...@nc...> writes:
>
> > I have completed a loader that uses a very simple schema. The
> > original datafile is referenced in the File_Properties table. When
> > data is needed, it is loaded into a temporary table on-the-fly. When
> > the database is near capacity, or performance gets bad, these tables
> > may be unloaded. It's kind of a caching mechanism. Attached is a
> > picture of the schema. Will try to get this on the sourceforge site
> > or the genex.ncgr.org site soon.
>
> Hey Todd,
>
> Sorry, I'm a bit confused. Why are you loading to a separate schema
> and not to GeneX proper? How does that help get data into GeneX?
>
> jas.
>
> _______________________________________________
> Genex-dev mailing list
> Gen...@li...
> https://lists.sourceforge.net/lists/listinfo/genex-dev
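For readers following the thread: the loader Todd quotes above amounts to a lazy-load cache over a database. A minimal sketch of that idea follows, using Python and sqlite3. Only the File_Properties table name comes from the email; every other table, column, function name, and the capacity limit are invented here for illustration, since the actual schema picture was an attachment not included in the archive.

```python
# Hypothetical sketch of the caching loader described above: File_Properties
# records where each original datafile lives; measurement data is only
# materialized into a per-file table on demand, and the oldest cached table
# is dropped ("unloaded") once a pretend capacity limit is reached.
# All names except File_Properties are assumptions, not the real GeneX schema.
import csv
import sqlite3

MAX_CACHED_TABLES = 2  # stand-in for "database is near capacity"

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE File_Properties (
    file_id INTEGER PRIMARY KEY,
    path    TEXT NOT NULL,
    loaded  INTEGER NOT NULL DEFAULT 0)""")

def register_file(path):
    """Record a datafile in File_Properties without loading its contents."""
    cur = con.execute("INSERT INTO File_Properties (path) VALUES (?)", (path,))
    return cur.lastrowid

def _evict_if_full():
    """Unload the oldest cached table when the cache is at capacity."""
    cached = con.execute(
        "SELECT file_id FROM File_Properties WHERE loaded = 1 ORDER BY file_id"
    ).fetchall()
    if len(cached) >= MAX_CACHED_TABLES:
        victim = cached[0][0]
        con.execute(f"DROP TABLE data_{victim}")
        con.execute("UPDATE File_Properties SET loaded = 0 WHERE file_id = ?",
                    (victim,))

def load_on_demand(file_id):
    """Materialize a datafile into its own table if not already cached."""
    path, loaded = con.execute(
        "SELECT path, loaded FROM File_Properties WHERE file_id = ?",
        (file_id,)).fetchone()
    table = f"data_{file_id}"
    if not loaded:
        _evict_if_full()
        con.execute(f"CREATE TABLE {table} (gene TEXT, value REAL)")
        with open(path, newline="") as fh:
            rows = [(r[0], float(r[1])) for r in csv.reader(fh)]
        con.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
        con.execute("UPDATE File_Properties SET loaded = 1 WHERE file_id = ?",
                    (file_id,))
    return table

# demo: register three CSV datafiles, load all three; with a capacity of
# two, the first file's table gets unloaded to make room for the third
import os, tempfile
paths = []
for i in range(3):
    f = tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False)
    f.write(f"geneA,{i}.0\ngeneB,{i}.5\n")
    f.close()
    paths.append(f.name)

ids = [register_file(p) for p in paths]
for fid in ids:
    load_on_demand(fid)

loaded = [r[0] for r in con.execute(
    "SELECT file_id FROM File_Properties WHERE loaded = 1 ORDER BY file_id")]
print(loaded)  # → [2, 3]: file 1 was unloaded, files 2 and 3 remain cached
for p in paths:
    os.unlink(p)
```

The design choice being debated in the thread shows up directly here: the data lives outside the main schema until something asks for it, which keeps load time fast but means GeneX proper never sees the data unless a later step promotes it.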