From: Ross S. <rs...@rs...> - 2003-08-11 20:38:44
Hello,

I'm using BackupPC 2.0.0 to back up several different Linux hosts. One of these hosts is a CVS server, and people typically use the other hosts for local CVS work. In our CVS repository there is a file named "Root" that just contains the path to CVSROOT, and this file appears all over the tree. Since I'm backing up the CVS server plus everyone's local CVS tree, and doing this multiple times for the full backups, I have reached the hard-link limit for the file "Root". The BackupPC log file shows thousands of failed link attempts such as:

2003/8/11 05:10:32 BackupPC_link got error -3 when calling \
    MakeFileLink(/home/backuppc/pc/*snip*/fRoot, \
    1899ff24d1230e28b0499687a2057e12, 1)

A manual attempt to create a hard link to that pool file using "ln" fails with the error "too many links".

The file system I'm working with is reiserfs. I ran a simple script to test the hard-link limit: I was able to make over 64,000 hard links to one file, after which a manual "ln" failed in exactly the same way as the "ln" to that pool file. So I believe it's quite possible we've reached this limit with the "Root" file.

When BackupPC keeps track of the number of links to a file, does that not work properly with reiserfs? I've always had "$Conf{HardLinkMax} = 31999;" set in the configuration file. Is there something else I can check?

Thanks very much,
-Ross