From: Val T. <va...@ci...> - 2009-04-29 20:27:46
MJD had the exact figures a few months ago. What are they, Mark?

Val

On Apr 29, 2009, at 4:04 PM, William Piel wrote:

>
> On Apr 29, 2009, at 3:01 PM, Rutger Vos wrote:
>
>>> How big do you think the total dump files will be? For Postgresql,
>>> our data dump sizes depend upon the type of data in the database.
>>> We've got a 400 MB database with a fair amount of binary data that
>>> dumps to a 200 MB file, and we've got a 1 GB database with no binary
>>> data that dumps to a 20 MB file. Do you have an idea of the size of
>>> the current DB2 database on disk, and what kind of data is in there
>>> (text or binary)?
>>
>> I don't know the exact size of the current database, but it's larger
>> than your cases.
>
> I think DB2's footprint is on the order of 100 GB -- is that right?
>
> Our largest table is probably the matrix element data -- let's say
> that there are 3,000 matrices with an average of 100 taxa and 1,000
> bases -- that by itself is 300,000,000 records. The next biggest is
> probably the edges table -- we have about 5,500 trees; supposing the
> average tree has 80 leaves, that comes to about 5500 * ((80 * 2) - 1)
> = 875,000 edge records. After that, the taxon_variant table has about
> 550,000 records, etc.
>
> bp
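A quick back-of-envelope check of those figures (a minimal sketch; the matrix count, taxon and base averages, tree count, and the per-tree (2 * leaves) - 1 formula are the estimates quoted above, not measurements taken from the database):

    # Rough record-count estimates for the largest TreeBASE tables,
    # using the figures quoted in the email above (assumptions, not measurements).

    matrices = 3_000      # approximate number of matrices
    avg_taxa = 100        # average taxa (rows) per matrix
    avg_bases = 1_000     # average characters (columns) per matrix

    # One record per matrix cell.
    matrix_elements = matrices * avg_taxa * avg_bases
    print(f"matrix element records: {matrix_elements:,}")  # 300,000,000

    trees = 5_500         # approximate number of trees
    avg_leaves = 80       # average leaves per tree

    # The email uses (2 * leaves) - 1 records per tree, i.e. roughly one
    # record per node of a fully resolved (binary) tree.
    edge_records = trees * ((2 * avg_leaves) - 1)
    print(f"edge records: {edge_records:,}")  # 874,500, i.e. ~875,000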