From: Les M. <le...@fu...> - 2008-01-18 18:41:15
Timothy J. Massey wrote:

> > DRBD might help propagate the disaster in real-time.  I'm not
> > convinced that is desirable.
>
> You are correct; but that's what my second idea (replication of
> individual backups) is for: *automatic* and hands-off movement of
> backup data from one place to another.

Much of the beauty of BackupPC is that it doesn't know or care at the
application level what else is linked to any file, so unless you do this
movement at the filesystem level you can't maintain the de-duplication -
which is why I think zfs incremental send/receive looks promising.

> I want two servers with two pools, but I do not want the load on my
> *clients* of having to back up data to two different servers.  We have
> the same goals (redundant copies of the data), but different ways of
> doing it.

If you don't care about server CPU and server-to-server bandwidth, it
might be reasonable to use the tar transport on the redundant server but
modify the remote command to run BackupPC_tarCreate on the primary
instead of hitting the client again.  But since (low-performance) disk
space is fairly cheap, I'd probably park an intermediate uncompressed
rsync copy somewhere that both BackupPC servers could access instead.

> > But I'm not sure it is so trivial to get the NFS performance you
> > would need from such a file server.
>
> I am *so* *not* talking NFS.  Something more like GFS and iSCSI...  :)

iSCSI by itself isn't going to let two servers safely access the same
thing at once.  Does GFS give better performance than NFS?

--
  Les Mikesell
  les...@gm...
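The zfs incremental send/receive idea above can be sketched roughly as
follows; the dataset name `tank/backuppc`, the snapshot names, and the
second server's hostname `backup2` are all hypothetical placeholders,
not anything from BackupPC itself:

```shell
# On the primary server: snapshot the dataset holding the BackupPC pool
# and ship the full stream to the redundant server once.
zfs snapshot tank/backuppc@base
zfs send tank/backuppc@base | ssh backup2 zfs receive tank/backuppc

# Later, on a schedule: take a new snapshot and send only the delta
# between the two snapshots (-i = incremental send).
zfs snapshot tank/backuppc@day1
zfs send -i tank/backuppc@base tank/backuppc@day1 \
  | ssh backup2 zfs receive tank/backuppc
```

Because the replication happens below the filesystem namespace, all the
hardlinks in the pool arrive intact on the other side, which is exactly
the de-duplication property that per-file copying loses.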
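A rough sketch of the intermediate uncompressed-copy idea, assuming a
client named `clienthost`, a staging directory `/stage`, and a BackupPC
install under `/usr/local/BackupPC` (all hypothetical paths/names):

```shell
# On the primary BackupPC server: expand the latest backup (-n -1) of
# share / for host "clienthost" into an uncompressed tree on cheap disk
# that both servers can reach.
/usr/local/BackupPC/bin/BackupPC_tarCreate -h clienthost -n -1 -s / . \
  | tar -xpf - -C /stage/clienthost

# The redundant server then backs up (or rsyncs) from the staging tree
# instead of hitting the client a second time.
rsync -a /stage/clienthost/ /srv/backup2-import/clienthost/
```

This trades extra disk for client load: the client is touched once, and
the second server's work is shifted entirely onto the servers' own
bandwidth and CPU.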