From: Pedro M. S. O. <pms...@gm...> - 2011-12-20 04:31:45
Sorry to take so long to reply. Yes, it saves me a lot of time; let me explain. Although I have a fast SAN and servers, the time spent fetching lots of small files is high: the maximum bandwidth I could get was about 5 MB/s. By increasing concurrency I get about 20-40 MB/s, depending on what I'm backing up at the moment, and that way I get more out of the SAN and the backup server. If I increased concurrency even further I could reach higher throughput, but I don't want BackupPC to steal all the available I/O; to be honest, I don't need it anyway, as I get really good performance this way. This setup is running at a large financial group and it outperforms very expensive (and complex) proprietary solutions.

The BackupPC server should have a fair amount of RAM and CPU, and shouldn't be virtualized; in my case it's a 4-core server with 8 GB of RAM (although it swaps a bit). I'm also using ssh + rsync, which adds some overhead, but nothing critical.

cheers
pedro

Sent from my Galaxy Nexus.
www.linux-geex.com

On Dec 19, 2011 6:05 PM, "Jean Spirat" <jea...@sq...> wrote:
> On 18/12/2011 20:44, Pedro M. S. Oliveira wrote:
>>
>> You may try to use rsyncd directly on the server; that may speed
>> things up. Another option is to split the large backup into several
>> smaller ones. I have an email cluster with 8 TB and millions of small
>> files (I'm using dovecot); there's also a SAN involved. In order to
>> use all the available bandwidth, I configured the backups to run for
>> usernames starting with a to e, f to j, and so on, and they all run
>> at the same time. Incrementals take about 1 hour and fulls about 5.
>> cheers
>> pedro
>
> I mount the NFS share directly on the BackupPC server, so there's no
> need for rsyncd here; it's like a local backup, with the NFS overhead
> of course.
>
> Do you gain a lot from splitting instead of doing one big backup? At
> least you seem to have about the same number of files as I do.
>
> regards,
> Jean.
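In BackupPC terms, the split Pedro describes can be sketched as several pseudo-hosts that all alias the same real server, each restricted to one slice of the mailbox namespace. A minimal illustration assuming BackupPC 3.x with per-host config files; the hostnames, the /var/vmail path, and the include patterns are placeholder assumptions, not his actual config:

    # In the hosts file, define pseudo-hosts mailsrv-ae, mailsrv-fj,
    # mailsrv-ko, ... all pointing at the same machine.

    # pc/mailsrv-ae.pl -- per-host config for the "a to e" slice:
    $Conf{ClientNameAlias} = 'mailsrv.example.com'; # real server behind this pseudo-host
    $Conf{XferMethod}      = 'rsync';               # rsync over ssh, as in Pedro's setup
    $Conf{RsyncShareName}  = ['/var/vmail'];        # assumed mail-spool location

    # Limit this slice to mailboxes starting with a..e; the sibling
    # pseudo-hosts cover f..j, k..o, and so on.
    $Conf{BackupFilesOnly} = {
        '/var/vmail' => ['/a*', '/b*', '/c*', '/d*', '/e*'],
    };

    # In the global config.pl, allow enough simultaneous backups for
    # the slices to actually overlap:
    $Conf{MaxBackups} = 4;

With the slices defined this way, BackupPC schedules them as independent hosts, so the "a to e" and "f to j" runs overlap instead of queueing behind a single 8 TB job.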