From: William P. <wil...@ya...> - 2009-04-29 20:04:42
On Apr 29, 2009, at 3:01 PM, Rutger Vos wrote:

>> How big do you think the total dump files will be? For Postgresql, our
>> data dump sizes depend upon the type of data in the database. We've got
>> a 400 MB database with a fair amount of binary data that dumps to a
>> 200 MB file, and we've got a 1 GB database with no binary data that
>> dumps to a 20 MB file. Do you have an idea of the size of the current
>> DB2 database on disk, and what kind of data is in there (text or binary)?
>
> I don't know the exact size of the current database, but it's larger
> than your cases.

I think DB2's footprint is on the order of 100 GB -- is that right? Our
largest table is probably the matrix element data -- let's say there are
3,000 matrices with an average of 100 taxa and 1,000 bases -- that by
itself is 300,000,000 records. The next biggest is probably the edges
table -- we have about 5,500 trees; supposing the average tree has 80
leaves, that comes to about 5500 * ((80 * 2) - 1) = 875,000 edge records.
After that, the taxon_variant table has about 550,000 records, etc.

bp
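
P.S. In case anyone wants to sanity-check the arithmetic above, here is a
quick Python sketch. The table names and averages are just the assumed
figures from this mail, not anything pulled from the actual DB2 schema:

    # Back-of-envelope estimates for the two largest tables; all inputs
    # are assumed averages from the discussion, not measured values.

    def matrix_cells(n_matrices=3000, avg_taxa=100, avg_bases=1000):
        # Each matrix contributes roughly taxa * bases character records.
        return n_matrices * avg_taxa * avg_bases

    def tree_edges(n_trees=5500, avg_leaves=80):
        # A rooted binary tree with L leaves has about 2L - 1 nodes,
        # hence roughly that many edge records (one per non-root node).
        return n_trees * ((avg_leaves * 2) - 1)

    print(matrix_cells())  # 300000000
    print(tree_edges())    # 874500, i.e. about 875,000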