From: Ryan S. <fi...@ha...> - 2006-02-01 19:20:11
All,

We have run our first full backup of our data server, and we are in the process of restoring the entire system to a temporary drive. Everything is going great, although when bconsole goes to grab all of the files (before we are able to mark the ones that we want), it takes 10-15 minutes before we can mark any files. A full backup consists of 803.4 GB and about 10 million files:

mysql> select count(*) from File;
+----------+
| count(*) |
+----------+
|  9978625 |
+----------+
1 row in set (0.00 sec)

Currently, with just this full backup (no incrementals yet), the size of the database is 1.7 GB. While the query is executing, bacula-dir consumes over 2 GB of memory, and then building the file tree takes another 600 MB. We are planning to keep a record of full backups for a year, and at the current growth rate the database will get quite large indeed.

The foreseeable problem is that as our full backups grow, so will the amount of memory the queries need to run. This will pose the biggest problem in situations where a user wants to restore just a few files. Also, the current query takes up almost all of the 1 GB of RAM plus 2 GB of swap. The system is a dual PIII-800 with 1 GB of RAM. Are there people out there restoring a similar size/number of files? If so, I would like to hear what hardware you are running.

On the other hand, once the file tree is built, marking files is relatively quick, and the restore begins almost as soon as the job is started, whereas our current backup manager (Networker) takes over 12 hours to select all of the files it is going to need, then dies because it has lost its connection to the client.

So my question to the community is: is there any way to speed up the building of the initial file tree? Or is there a way of limiting the query size?

Best Regards,
Ryan Sizemore
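[Editor's note: one thing worth checking for a slow tree build is whether the catalog's File table is indexed on the columns the restore query filters by. The sketch below assumes a MySQL catalog; the index name `file_jobid_idx` is hypothetical, and whether JobId is already indexed depends on the schema version in use.]

```sql
-- Sketch: list the existing indexes on the File table. The restore
-- query selects rows by JobId, so a missing JobId index would force
-- a full scan of ~10 million rows.
SHOW INDEX FROM File;

-- If JobId turns out not to be indexed (an assumption to verify,
-- not a known fact about this installation), add one:
CREATE INDEX file_jobid_idx ON File (JobId);
```

Adding an index can shorten the SELECT phase, though it does not reduce the memory the director needs to hold the tree itself.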
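[Editor's note: as a back-of-the-envelope check on the figures above, the tree memory scales linearly with the file count. The ~64 bytes per in-memory tree node below is an illustrative assumption, not a Bacula constant.]

```sql
-- Rough estimate of file-tree memory in MB: row count times an
-- assumed ~64 bytes of in-memory state per entry (name + pointers).
SELECT COUNT(*) * 64 / POW(1024, 2) AS tree_mb FROM File;
-- with ~10 million rows this comes out around 600 MB, in line with
-- the growth observed while the tree is built
```

Under that assumption, tree memory grows by roughly 64 MB per additional million files, which is why restores of a few files still pay the full-tree cost unless the query can be limited up front.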