Chuck Witt wrote:
> Hello Group:
> We have been using BackupPC for around a month now. It works great. We
> started with a 200 GB hard drive devoted to BackupPC data and the disk is
> now full. We are backing up 6 Linux boxes and 4 Windows boxes, and would like
> to eventually back up 60 user boxes (approx 500 MB each). The Linux boxes are
> a mix of FC2 and AS 3, Windows is all 2000, and the users are all on
> 2000. A compression ratio of 3 is set. I want to keep 6 months of fulls, 4
IF your server is dedicated to BackupPC (or runs just a few other
services, or only works at night), change the compression method to bzip2
and the level to 9.
Be aware that nightly jobs will take much more time, so set a schedule
(mine, for instance) to [2,6,7,8,9,...] so that nightly jobs
complete without colliding with backups.
(This means nightly jobs start at 02:00 and the first backup can't occur until 06:00.)
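For reference, the schedule above is BackupPC's $Conf{WakeupSchedule}. A sketch of the relevant config.pl lines (illustrative values for a 2.x-era setup; check your version's docs for the exact compression options):

```perl
# Sketch of the relevant config.pl settings (illustrative values).
# BackupPC runs its nightly cleanup at the FIRST hour listed, and
# backups only start at later wakeups, hence 2 followed by 6.
$Conf{WakeupSchedule} = [2, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
                         17, 18, 19, 20, 21, 22, 23];

# Pool compression level: 0 disables it, 9 is the strongest (and slowest).
$Conf{CompressLevel} = 9;
```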
> weeks of fulls, and 30 days of incremental. Rsync is used almost
> exclusively, except for 1 or 2 boxes that are using SMB because of very
> large data files.
> I have read about LVM and implemented it on the BackupPC box using a new 250
> GB hard drive. The BackupPC box now has a 40 GB drive for the OS, a 200 GB
> drive holding the BackupPC data, and a new empty 250 GB drive to move the
> original data to. I believe that LVM will allow me to add hard drives as
> needed to increase disk size. In order to keep my old data, I tried to
> copy the directories holding config data, pc data, and pool data to the new
> drive after it had been set up as an LVM logical volume. It failed miserably
> because it ran out of disk space (even though the data was sitting on a 200
> GB drive).
> Here's the question. How do I transfer the data to a new drive? (The command
> used was cp -rv /backup/* /backup-new and the data would not fit on the new
> drive.)
Check 'man cp' for the options that control symlink and hard-link handling.
(Your way, cp copies each hard-linked file as an entirely new file instead
of preserving the hard link, and that's what runs you out of space, I guess. :)
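A minimal sketch of the difference (using throwaway temp paths so it can be run safely; substitute /backup and /backup-new in practice). BackupPC's pool keeps one file per unique content, hard-linked from every pc/ tree, so 'cp -rv' expands each link into a full copy, which is why the data overflowed the new drive:

```shell
# Demo: hard links survive 'cp -a' (archive mode) but not plain 'cp -r'.
set -e
src=$(mktemp -d)
dst=$(mktemp -d)

echo "pool data" > "$src/pool-file"
ln "$src/pool-file" "$src/pc-file"    # hard link: same inode, no extra space

cp -a "$src/." "$dst/"                # -a preserves hard links among copied files
# rsync -aH "$src/" "$dst/"           # rsync needs -H to do the same

links=$(stat -c %h "$dst/pool-file")  # link count on the copied file
echo "link count after cp -a: $links" # prints 2 when the link survived
```

For a pool this size, copying the whole partition (e.g. with dd) or tar with hard-link support may be faster than cp; either way, the point is to copy each inode once.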
> Next question is: Are there any "rule of thumb" ideas as to how much
> hard drive space to have to hold data? (We have approx 500 GB now being held
> on a 200 GB drive). I would also like to implement a RAID solution in the
With bzip2 at level 9, the CPU is heavily loaded, BUT the overall
compression is near 60% (57.4% exactly).
My systems are all under Debian Sarge, except one which is Debian Woody.
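Following that ~57% figure, a back-of-the-envelope sizing sketch (the 5% daily churn rate is my assumption, not a number from this thread; pooling means the repeated fulls mostly don't add space):

```shell
# Rough pool sizing. All numbers are illustrative; real growth
# depends on dedup across hosts and on how much data changes.
raw_gb=500      # total data across all hosts (from the post)
comp_pct=60     # ~60% space saved by compression (claimed above)
churn_pct=5     # assumed daily change rate feeding incrementals (a guess)
incr_days=30    # incremental retention (from the post)

pool_gb=$(( raw_gb * (100 - comp_pct) / 100 ))
incr_gb=$(( pool_gb * churn_pct * incr_days / 100 ))
total_gb=$(( pool_gb + incr_gb ))
echo "pool ~${pool_gb} GB + incrementals ~${incr_gb} GB = ~${total_gb} GB"
```

So even with optimistic compression, a comfortable margin over the raw data size is a safer rule of thumb than sizing the drive to the compressed pool alone.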
> future so that I can take a drive or drives offsite for preservation. Any
> experience or ideas? Thanks in advance.
As Chris Mason told you, software RAID is largely enough for this.
I run an IDE controller based on the ITE8212 chipset with 2 RAID arrays
(149 & 244 GB, with 3 HDs each).
If you're using a 2.4.xx kernel I can send you the source of the driver,
or you can ask Anthony (from ITE) to supply you the right one.
(He's very friendly and reacts very quickly: he adapted the
Red Hat/SuSE/Mandrake sources for my Debian within 30 hours. :)
Of course, splitting the 2 arrays between the regular IDE controllers and ONE
additional external controller divides the transfer speed by 2, but I'm
not in that much of a hurry.
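For the offsite idea, one common pattern with Linux software RAID (a sketch only, not run here; the device names are hypothetical and need adjusting to your disks) is a RAID1 mirror whose member you fail out, carry offsite, and replace:

```shell
# Hypothetical devices: create a 2-way mirror and put a filesystem on it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0

# To rotate a disk offsite: mark it failed, remove it, add a fresh one.
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sdd1    # the new disk resyncs from the mirror
```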