From: Jeffrey J. K. <bac...@ko...> - 2008-10-31 00:26:46
John Rouillard wrote at about 20:13:15 +0000 on Thursday, October 30, 2008:
 > On Thu, Oct 30, 2008 at 10:04:26AM -0400, Jeffrey J. Kosowsky wrote:
 > > Holger Parplies wrote at about 11:29:49 +0100 on Thursday, October 30, 2008:
 > > > Hi,
 > > >
 > > > Jeffrey J. Kosowsky wrote on 2008-10-30 03:55:16 -0400 [[BackupPC-users] Duplicate files in pool with same CHECKSUM and same CONTENTS]:
 > > > > I have found a number of files in my pool that have the same checksum
 > > > > (other than a trailing _0 or _1) and also the SAME CONTENT. Each copy
 > > > > has a few links to it, by the way.
 > > > >
 > > > > Why is this happening?
 > > >
 > > > Presumably creating a link sometimes fails, so BackupPC copies the file,
 > > > assuming the hard link limit has been reached. I suspect problems with your
 > > > NFS server, though not a "stale NFS file handle" in this case,
 > > > since the file succeeds. Strange.
 > >
 > > Yes - I am beginning to think that may be true. However, as I mentioned
 > > in the other thread, the syslog on the NFS server is clean, and the one
 > > on the client shows only about a dozen or so NFS timeouts over the
 > > past 12 hours, which is the time period I am looking at now. Otherwise,
 > > I don't see any NFS errors.
 > > So if it is an NFS problem, something seems to be happening somewhat
 > > randomly and invisibly to the filesystem.
 >
 > IIRC you are using a soft NFS mount option, right? If you are writing
 > to an NFS share, that is not recommended. Try changing it to a hard
 > mount and see if the problem goes away. I only use soft mounts on
 > read-only filesystems.

True -- I changed it to 'hard' but am still encountering the problem...
FRUSTRATING...

It's really weird in that it seems to work the first time a directory is
read, but after a directory has been read a few times, it starts messing
up. It's almost like the results are being stored in a cache and then
the cache is corrupted.
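For anyone wanting to verify the duplicate-pool-file symptom described above, here is a rough sketch of a scan that groups pool entries by their hash base name (treating a trailing _0, _1, ... as a collision suffix, as in the BackupPC 3.x pool layout) and then byte-compares the candidates. The function name and the assumption that collision copies live under the same pool tree are mine, not from the thread:

```python
# Sketch: find pool entries that share a hash base name (e.g. f00d,
# f00d_0, f00d_1) yet have identical content -- these should have been
# a single hard-linked file. Assumes the BackupPC 3.x pool naming
# convention where a _N suffix marks a hash collision.
import filecmp
import os
import re
from collections import defaultdict

SUFFIX_RE = re.compile(r'_\d+$')

def find_duplicate_pool_files(pool_dir):
    """Return groups of same-content files sharing a hash base name."""
    groups = defaultdict(list)
    for root, _dirs, files in os.walk(pool_dir):
        for name in files:
            base = SUFFIX_RE.sub('', name)  # strip trailing _0, _1, ...
            groups[base].append(os.path.join(root, name))
    duplicates = []
    for paths in groups.values():
        if len(paths) < 2:
            continue
        first = paths[0]
        # shallow=False forces a byte-by-byte comparison, not just stat()
        same = [p for p in paths[1:]
                if filecmp.cmp(first, p, shallow=False)]
        if same:
            duplicates.append([first] + same)
    return duplicates
```

Any group this returns represents wasted pool space and a failed hard link; it does not tell you *why* the link failed, only where it happened.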
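For reference, the soft-vs-hard change John suggests is a one-word edit in the client's mount options. The export path, mount point, and the extra options below are placeholders, not from the thread:

```shell
# /etc/fstab on the BackupPC host (names are illustrative).
# 'hard' makes the client retry a timed-out NFS operation indefinitely
# instead of returning an I/O error, which 'soft' would do after
# timeo/retrans expire -- a silent failure mode for writes.
server:/export/backuppc  /var/lib/backuppc  nfs  hard,intr  0  0
```

With a soft mount, a write that times out can fail without any corresponding error on the server, which matches the "randomly and invisibly" behavior described above; a hard mount rules that path out.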