From: Bryan K. (.net) <bk...@ke...> - 2012-07-05 15:40:38
Good thread here. Jonas - sorry for my reply about the initial backups - you clearly know what you're doing, and thus your issue is after your initial backups. That XFS might be the problem sounds plausible. Please report back what you find for those of us following this thread. Finally, what are "typical" SMB backup speeds I should be seeing?

On Thu, Jul 5, 2012 at 9:10 AM, Adam Goryachev <mai...@we...> wrote:
> On 05/07/12 23:53, jonas wrote:
> > Hello again,
> >
> > first, I forgot to mention an important detail: we changed from
> > ext3 to xfs on the 5TB RAID10 array which holds all BackupPC data
> > during the re-setup.
> >
> > On 05.07.2012 15:04, Adam Goryachev wrote: no, this doesn't seem
> > to be the problem. I didn't find any unexpected logs. The pc/$HOST
> > directory shrinks a lot during the link process, so I consider this
> > evidence that pooling actually works as expected. I did notice,
> > though, that the pool directory is empty on all BackupPC servers.
> > All pooled files seem to be stored in cpool instead. Is this
> > expected?
> Compressed files are stored in cpool, so this suggests you have
> compression enabled (for file storage, not in transit). This is normal.
> > 2) Filesystem or hard drive layout/configuration (i.e., RAID level,
> > layout, chunk size, ext3 compared to jfs, etc.) As written above: we
> > moved from ext3 to xfs as we thought that this might increase
> > performance.
> Obviously this is the big change, so you should probably start here. I
> would suggest reading up on how to optimise xfs for BackupPC or,
> generically, for a large number of small reads/writes.
>
> I recently subscribed to the linux-raid mailing list for a
> BackupPC-related issue, in fact, and saw a few posts (unrelated to my
> issue) regarding default raid stripe size and (I think it was) the xfs
> file system being problematic.
> I think there is some relationship with the FS storing some type of
> data every x bytes per disk; if this data all ends up on the same
> physical disk (or, in your case, pair of disks), then you end up with
> a disproportionate amount of load on a small subset of your disk
> array. You appear to be using hardware raid, which presents the array
> as "sdk"; however, the basic idea still applies (stripe size of the
> raid and block size of the FS).
>
> I'd suggest subscribing to the xfs mailing list, and/or looking into
> how you should optimise your filesystem to improve performance
> (defaults are not always ideal).
>
> Regards,
> Adam
>
> --
> Adam Goryachev
> Website Managers
> Ph: +61 2 8304 0000  ad...@we...
> Fax: +61 2 8304 0001  www.websitemanagers.com.au
>
> _______________________________________________
> BackupPC-users mailing list
> Bac...@li...
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki: http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
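[Editor's note: the stripe/block alignment Adam describes can be made concrete. The sketch below computes the `su`/`sw` values that `mkfs.xfs -d` accepts from the RAID geometry; the chunk size (256 KiB) and data-disk count (6, i.e. a 12-disk RAID10) are hypothetical placeholders — substitute the values your controller actually reports. The script only prints the commands rather than running them, since reformatting a live BackupPC pool is destructive.]

```shell
#!/bin/sh
# Hypothetical RAID geometry -- replace with your controller's real values.
CHUNK_KB=256    # per-disk chunk (stripe unit) size in KiB
DATA_DISKS=6    # data-bearing disks (for RAID10: total disks / 2)

# XFS stripe unit (su) = RAID chunk size; stripe width (sw) = data disks.
# Print, don't run: mkfs.xfs would wipe the array.
echo "mkfs.xfs -d su=${CHUNK_KB}k,sw=${DATA_DISKS} /dev/sdk"

# noatime avoids an inode write per file read, which matters during
# BackupPC's link pass over millions of small pool files.
echo "mount -o noatime,logbufs=8 /dev/sdk /var/lib/backuppc"
```

With these values the first command printed is `mkfs.xfs -d su=256k,sw=6 /dev/sdk`; if the filesystem is created without alignment hints, XFS places its allocation-group metadata at regular intervals that can land on the same disk pair, which is exactly the hot-spot effect described above.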