From: John G. <jgo...@co...> - 2011-02-23 14:45:04
BackupPC is being surprisingly slow even for incrementals. This appears to be tied to the rsync backend in some fashion.

Here's an example. It took 5 *HOURS* to run an incremental over a machine in which the total incremental size was 2383 files of 616MB. This is fairly typical.

Examining processes with strace and lsof, it appears that BackupPC is unnecessarily opening every single file from the existing backup sets on the server -- even the ones that hadn't changed a bit.

I am using cpool with rsync checksum caching -- the latter turned on in the hope it would help, but it didn't.

The most recent full backup of this system took 56 *HOURS*, though that may have been before checksum caching had a chance to kick in. The first full backup took only 17 hours, and I didn't do anything like copy tons of new stuff to the machine in between. That most recent full backup had 787941 files totaling 444GB.

From what I have seen to date, tar doesn't have this problem. However, due to the limitations of tar backups documented at http://bit.ly/fbiCyh I really wish to avoid using tar for backups wherever I can.

The system being backed up in this example is a Core 2 Duo running Debian squeeze with 8GB RAM. The BackupPC server is a dual-core Pentium E6500 at 2.93GHz. The backup disk is a 1TB USB drive, ext4, running through LUKS encryption since it goes offsite. The USB bit seems to contribute more to the slowness than the LUKS bit on this system. But the end effect is that if BackupPC is being non-smart about how it accesses the disk on the server side, this will be magnified by the USB interface.

What can be done to fix this? Combined with the problem of very slow performance for large files with rsync [1] (which did NOT refer to a problem with slow disks), I am starting to doubt whether the rsync backend is really usable yet.

Thanks,

-- John

[1] http://bit.ly/hCJ7Gj
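[Editorial aside: a minimal sketch, in Python rather than BackupPC's Perl, of what checksum caching is meant to buy. If a digest is cached and keyed by a file's size and mtime, unchanged files can be skipped without re-reading their contents. The cache layout and function names here are hypothetical illustrations, not BackupPC's actual on-disk format.]

```python
import hashlib
import os


def file_digest(path, chunk_size=65536):
    """Read the whole file and return its MD5 hex digest (the expensive step)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def cached_digest(path, cache):
    """Return the file's digest, re-reading the file only if it changed.

    `cache` maps path -> (size, mtime, digest); keying on size and mtime
    is an assumption for illustration only.
    """
    st = os.stat(path)
    key = (st.st_size, st.st_mtime)
    entry = cache.get(path)
    if entry is not None and entry[0:2] == key:
        return entry[2]          # cache hit: no file read needed
    digest = file_digest(path)   # cache miss: one full read
    cache[path] = (key[0], key[1], digest)
    return digest
```

The complaint above amounts to the cache-hit path never being taken: every file in the existing backup set is opened and read even when its size and mtime are unchanged.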
From: Jeffrey J. K. <bac...@ko...> - 2011-02-23 15:47:07
John Goerzen wrote at about 08:44:54 -0600 on Wednesday, February 23, 2011:

 > BackupPC is being surprisingly slow even for incrementals. This appears
 > to be tied to the rsync backend in some fashion.
 >
 > Here's an example. It took 5 *HOURS* to run an incremental over a
 > machine in which the total incremental size was 2383 files of 616MB.
 > This is fairly typical.
 [...]

Are you using lvm snapshots?

It seems that some of my terribly slow recent performance has been due to having lvm snapshots. I saw a site that claimed lvm snapshots in practice can degrade write performance up to 20-30x...

lvm snapshots also made deleting old file trees *incredibly* slow
From: Timothy J M. <tm...@ob...> - 2011-02-23 20:08:48
"Jeffrey J. Kosowsky" <bac...@ko...> wrote on 02/23/2011 10:46:54 AM:

 > Are you using lvm snapshots?
 > It seems that some of my terribly slow recent performance has been due
 > to having lvm snapshots. I saw a site that claimed lvm snapshots in
 > practice can degrade write performance up to 20-30x...

This is mostly true when your log file is on the same spindle as the LV you're snapshotting. That should be considered an absolute no-no: that is when you get the type of performance dropoff you're talking about. LVM snapshots pretty much require a dedicated drive for the log. Or drives: if the volume you're snapshotting is, say, a 6-disk RAID array, and you give only a single drive for the log, everything gets bottlenecked by the log drive.

Timothy J. Massey
Out of the Box Solutions, Inc.
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tm...@ob...

22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796
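[Editorial aside: the slowdown being described can be illustrated with a toy model in plain Python; the chunk layout and costs are hypothetical, not LVM's actual implementation. With a snapshot active, the first write to each chunk of the origin volume also incurs a read of the old data plus a write into the snapshot's store, so scattered small writes roughly triple in I/O operations -- and the real-world penalty is worse when those extra operations are seeks on the same spindle, which is the point above about dedicating a drive to it.]

```python
def write_cost(writes, snapshot_active):
    """Count device I/O operations for a sequence of chunk writes.

    Toy model of copy-on-write snapshots: with a snapshot active, the
    first write to any given chunk triggers one extra read (the old
    data) and one extra write (saving it to the snapshot store) before
    the requested write itself proceeds.
    """
    copied = set()   # chunks already preserved in the snapshot store
    ops = 0
    for chunk in writes:
        if snapshot_active and chunk not in copied:
            ops += 2          # read old chunk + write it to the store
            copied.add(chunk)
        ops += 1              # the requested write itself
    return ops


# 1000 writes, each touching a different chunk (the worst case for CoW):
scattered = list(range(1000))
plain = write_cost(scattered, snapshot_active=False)  # 1000 ops
snap = write_cost(scattered, snapshot_active=True)    # 3000 ops
```

The model gives only a 3x penalty in operation count; the 20-30x figures reported in practice come from those extra operations landing on the same spindle and turning sequential writes into seek-bound ones.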
From: John G. <jgo...@co...> - 2011-02-24 01:55:18
Jeffrey J. Kosowsky <backuppc <at> kosowsky.org> writes:

 > Are you using lvm snapshots?
 > It seems that some of my terribly slow recent performance has been due
 > to having lvm snapshots. I saw a site that claimed lvm snapshots in
 > practice can degrade write performance up to 20-30x...
 >
 > lvm snapshots also made deleting old file trees *incredibly* slow

You are right; they do make things slow. But I'm not using them.

-- John