From: Holger P. <wb...@pa...> - 2007-05-30 23:35:59
Hi,

Vyacheslav Klimov wrote on 28.05.2007 at 15:30:09 [[BackupPC-users] mistake when copying files to new location, incorrect size reported]:

> I had to copy all folders under /var/lib/backuppc to another partition,
> and then mount it under /var/lib/backuppc. I didn't read the manual (my
> fault :( ), and did copying using cp without -a option. However, backuppc
> is still working. But now it reports wrong pool size:
> Pool is 0.52GB...
> with its real size being around 13 Gb.
> How do I make it sort of recount pool size?

The reported pool size is most probably correct. Your problem is of a different nature. Depending on whether BackupPC_nightly has run since you copied the pool, you are in one of two situations:

1. BackupPC_nightly has run. This is probably the case, since you say that "BackupPC is still working". BackupPC_nightly removed everything in the initial pool, because the link counts indicated that no pool file was used by any backup (which was in fact true). In the meantime, new backups have put 0.52GB of data into the pool. This data may or may not have been among the previous pool contents. If a backup ran before BackupPC_nightly, it may in fact have created links to pool files, which were then not removed. At this point, though, there is no difference between a file that was deleted and re-created with identical contents and one that was preserved. Note that the copies of data files in the individual backups are not affected by removing the pool files; you just don't get the advantages of pooling.

2. BackupPC_nightly has not run yet. The pool contains 0.52GB of data. The backups contain 13GB worth of identical data which should be linked to the pool files but is not. The situation is really the same (only we're talking about much less data): you've got redundant copies of data files in the pool and in the individual backups. The pool copies will be removed once BackupPC_nightly runs, saving you at least the one copy you don't need.
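You can check which of the two situations you are in with generic shell tools (a sketch only; it assumes the usual compressed-pool layout under /var/lib/backuppc and GNU find — adjust the paths for your installation). A pool file with a link count of 1 is referenced by no backup and is exactly what BackupPC_nightly removes; a file under pc/ with a link count of 1 shares no inode with the pool, i.e. pooling is not working for it:

```shell
# Assumed layout: compressed pool in /var/lib/backuppc/cpool, per-host
# backups in /var/lib/backuppc/pc.

# Pool files referenced by no backup (link count 1) -- these are the
# candidates BackupPC_nightly deletes:
find /var/lib/backuppc/cpool -type f -links 1 | wc -l

# Backup files not linked to the pool (link count 1) -- sum their sizes
# to see how much data is currently stored without pooling:
find /var/lib/backuppc/pc -type f -links 1 -printf '%s\n' \
    | awk '{ s += $1 } END { printf "not pooled: %.2f GB\n", s / 1024^3 }'
```

If the second number is close to your 13GB, you are looking at the redundancy described above rather than a miscounted pool.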
New backups can either link to pool files before that happens or re-create them later. Re-creating is somewhat slower (because the data needs to be compressed again rather than simply linked), but other than that there is no difference. Pooling (meaning the space savings) will work for new backups only.

You can recover from this situation in several ways:

1. Simply wait for the affected backups to expire. If you've got the space, this is simple and not much can go wrong. Each expiring backup will immediately free all the space it takes up, as there is only one link to each file in it. That means this may be an option even if your pool file system usage is currently significantly above 50%, as long as for each new backup an old backup of the same type expires.

2. Get BackupPC_link to re-process all backups. BackupPC_link's job is to put new files into the pool, or to replace them with links to pool files if files with identical contents already exist there (this can normally happen if concurrent backups try to create several copies of an identical file which was not previously in the pool). That's exactly what we need here. It's not quite as simple as invoking 'BackupPC_link -with some -strange options' though, so I won't spend any time working out what you'd need to do unless you actually want to try it :-). It *will* take quite a lot of CPU time and disk I/O to complete, and there's always a slight chance something might go wrong (though BackupPC_link is a part of BackupPC you use after every backup, so it's fair enough to trust it to work :-). BackupPC_link should be run by the BackupPC daemon, because the daemon knows what may run concurrently and what may not! If you're planning to keep these backups for a long time, there's probably no way around doing this.

3. A combination of 1. and 2. Ignore backups due to expire soon and re-process those you're keeping for a long time (e.g. appropriate full backups with an exponential expiry schedule).
Processing each backup will take time; you can save that time for backups which are "about to expire" anyway.

4. cp -a. If you've still got the original data and the backups you've made in the meantime are not important, you could try to re-run cp with the -a option. Be prepared for that to take really long ... This is probably not a good idea, but I thought I'd mention it for completeness' sake.

Thinking about it: your 'cp' command *did* finish without error, didn't it? Your target file system *did not* run out of space before 'cp' finished, right? Because the copy undid the effects of pooling, your destination file system would have needed at least twice (and more typically something like ten times) as much space as your pool previously occupied on the old file system for the 'cp' to succeed ...

Regards,
Holger
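P.S.: For anyone finding this thread later, the cp behaviour that causes all of the above can be reproduced in a few lines (a generic demonstration with made-up file names, nothing BackupPC-specific). GNU cp -a implies --preserve=links, so hard links within the copied tree survive; a plain recursive copy duplicates the data instead:

```shell
cd "$(mktemp -d)"
mkdir src
echo "pooled data" > src/pool-file
ln src/pool-file src/backup-file     # two names, one inode -- like pooling

cp -r src copy-plain                 # plain recursive copy
stat -c %h copy-plain/pool-file      # link count 1: pooling lost, data duplicated

cp -a src copy-a                     # -a preserves hard links within the tree
stat -c %h copy-a/pool-file          # link count 2: pooling survives

du -sk copy-plain copy-a             # copy-plain stores the contents twice
```

This is also why the plain copy needs so much more space on the destination: every additional hard link in the source tree becomes a full extra copy of the file.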