From: Mark S. <ma...@al...> - 2007-06-07 03:37:15
Jason M. Kusar wrote:
> Mark Sopuch wrote:
>> Hi all,
>>
>> I'd like to group data (let's just say dept data) from certain hosts
>> together (or actually to separate some from others) onto a different
>> filesystem and still keep the deduping pool in a common filesystem.
>> The problem is that hard links would break in doing this, if I am not
>> mistaken. Is the only alternative to use separately installed
>> instances of BackupPC per group and share nothing across BackupPC
>> instances?
>
> Yes, hard links do not work across filesystems. A hard link is simply
> a second directory entry pointing to the same inode. If it's just an
> access thing, could you just create different users with access to the
> different data? The hosts are already kept separate in the interface.
> If you need data separated on a single host, just set up two different
> hosts in the interface and have them each back up a separate portion
> of the host.
>
> --Jason

Thanks Jason. Aliasing for separation of the file trees is a good idea in its own right, but my concern is mainly with certain types of hosts (data) encroaching wildly on the shared space under DATA/..., leaving less room for incoming data from other hosts. It's a space-budgeting and control thing.

Using the quota capabilities of our NetApps, I figured I might be able to get soft and hard quotas blocking wildly-higher-than-budgeted backups from hosts, based on their group, type, class, or similar, by separating them out onto separate filesystems. At the moment all my backups are in the same instance of BackupPC and in the same filesystem. I am not sure how any other quota scheme would provide a similar soft- and hard-quota capability if the backups all stay in the same filesystem, since usernames are not stamped around in DATA/... to let those quota systems tell the hosts apart.
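Jason's point about hard links being just a second directory entry for the same inode can be demonstrated with a short sketch (Python here, with hypothetical temporary file names; this is an illustration, not part of BackupPC):

```python
import os
import tempfile

# A hard link is a second directory entry pointing at the same inode,
# so both names must live on the same filesystem. File names below are
# hypothetical stand-ins for a pool file and its pc/ tree entry.
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "pool_file")
    link = os.path.join(d, "pc_file")
    with open(original, "w") as f:
        f.write("backed-up data")

    os.link(original, link)  # create a second name for the same inode

    st_a, st_b = os.stat(original), os.stat(link)
    same_inode = st_a.st_ino == st_b.st_ino  # both names, one inode
    nlink = st_a.st_nlink                    # link count covers both names

# Attempting os.link() across two different filesystems instead raises
# OSError with errno EXDEV ("Invalid cross-device link"), which is why
# a deduping pool split across filesystems breaks.
```

The cross-filesystem failure itself is not exercised above, since the sketch cannot assume two mount points are available.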
Sure, I want to back everything up, but I do not want the bulkiest, least important thing blocking a smaller, top-priority backup from getting space to write to when there's a mad run of new data. Hope I am being clear enough. Thanks again.

Mark
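The soft/hard budget idea could also be approximated in user space, outside any filer. A hedged sketch (the thresholds, the decision strings, and the pre-backup hook are all assumptions for illustration, not BackupPC or NetApp features):

```python
import shutil

# Before a host's backup runs, check how full its group's filesystem
# is and either warn (soft budget) or refuse (hard budget). The
# fractions below are hypothetical example budgets.
SOFT_FRACTION = 0.80  # warn above 80% used
HARD_FRACTION = 0.95  # block above 95% used

def budget_decision(path: str) -> str:
    """Return 'ok', 'warn', or 'block' for the filesystem holding path."""
    usage = shutil.disk_usage(path)
    used_fraction = (usage.total - usage.free) / usage.total
    if used_fraction >= HARD_FRACTION:
        return "block"  # hard quota exceeded: skip this host's backup
    if used_fraction >= SOFT_FRACTION:
        return "warn"   # soft quota exceeded: back up, but alert the admin
    return "ok"

decision = budget_decision("/")
```

This only polices free space at the moment the check runs, so real filesystem quotas (as on the NetApps) remain the stronger enforcement mechanism.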