Apologies if this is a stupid question, but I've been using snapraid forever with almost no errors. Mostly I've seen the Excel-touches-file-contents-without-updating-the-timestamp problem, and occasionally the zero-file-size problem for things like mailboxes that get emptied. So, practically no genuine file errors.
I have an overnight script that runs a sync and then a scrub of 8% of the array. Last night it gave a "WARNING! Unexpected file errors!" message, but snapraid didn't report which specific files or blocks were affected.
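For reference, the overnight job is essentially just sync followed by a partial scrub. A minimal sketch of that kind of script (a reconstruction, not my actual script - the SNAPRAID variable and the scrub options are assumptions):

```shell
#!/bin/sh
# Sketch of a nightly snapraid job: sync first, then scrub part of the array.
# SNAPRAID is overridable so the control flow can be tested with a stub.
SNAPRAID="${SNAPRAID:-snapraid}"

run_nightly() {
    # Update parity before scrubbing, so scrub checks against current state.
    "$SNAPRAID" sync || return 1
    # Scrub 8% of the array, touching the least recently scrubbed blocks.
    "$SNAPRAID" -p 8 scrub
}
```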
I know that sometimes snapraid can error if the server is under load when it's scrubbing. In this case, I'm 99% sure that no files in the array were being altered during the scrub, and the load was probably no more than usual at 7am.
"status" gives "no error detected", and "diff" gives "no differences". I'm hesitant to run "fix" without knowing what files snapraid thinks it's going to fix. Though I suspect that "fix" will also report that there are no errors.
Is there any way of getting additional insight into the unexpected file error warning? Or should I just assume it was a transient read glitch during scrub, run "fix", and then forget about it if there's nothing to be fixed?
Additional data point: "smart" reports one error in each data disk. But that doesn't seem like a lot, given that these disks have been running for years. In fact, it seems like the kind of thing I'd expect if there was a power glitch during scrub, and there were no actual disk data errors.
Yep - as expected, running "snapraid fix -e" resulted in nothing to do. So while I'm still not happy about seeing "WARNING! Unexpected file errors!", I'm not sure there's any more I can do to figure out what happened.
I suppose I could run a full "snapraid check". But that'll likely take a couple of days of running flat-out, on my ancient file server. And my current overnight script will scrub the entire array in about two weeks on its own. So it's easier to just let it do its thing.
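Though if I did want a middle ground, "check" accepts filters, so it could be pointed at a single suspect file instead of the whole array. A sketch (the path is hypothetical, and I'm assuming the -f filter option here):

```shell
# Narrow a check to one suspect file rather than the full multi-day run.
# SNAPRAID is overridable so the wrapper can be exercised with a stub.
SNAPRAID="${SNAPRAID:-snapraid}"

check_one_file() {
    # -f limits the command to paths matching the given pattern.
    "$SNAPRAID" check -f "$1"
}
```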
Aha - so I found the original source of the problem: an Excel file had produced an "Unexpected data modification of a file without parity!" warning in a sync run from several days ago that I had missed. I'm assuming the file contents were changed without updating the mtime when Excel opened it for reading, so it no longer matched another copy of the file elsewhere in the array. The sync run says as much:
This file was detected as a copy of another file with the same name, size,
and timestamp, but the file data isn't matching the assumed copy. If this is
a false positive, and the files are expected to be different, you can 'sync'
anyway using 'snapraid --force-nocopy sync'
But this kind of error (a file error, as opposed to a block error?) is apparently not recorded anywhere - or at least it's not reported by "diff" or "status". So my overnight script was running "diff", seeing no differences, skipping "sync", and going straight to "scrub".
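To make the failure mode concrete, here's roughly the shape of that diff-gated logic (a sketch, not my actual script; I'm relying on "diff" exiting with status 2 when a sync is needed, which is how I understand it to behave):

```shell
# Sketch of a diff-gated nightly job, and why it missed the problem:
# the copy-detection mismatch is only reported by sync, never by diff.
SNAPRAID="${SNAPRAID:-snapraid}"

nightly_gated() {
    "$SNAPRAID" diff > /dev/null 2>&1
    if [ $? -eq 2 ]; then
        # diff saw changes, so parity needs updating.
        "$SNAPRAID" sync || return 1
    fi
    # With "no differences", sync is skipped and scrub runs regardless.
    "$SNAPRAID" -p 8 scrub
}
```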
Because "sync" is the only command that produces any real information about this type of error, and I'd missed it the first time it was reported, the overnight script had been scrubbing in this state for several days. I guess it was only when the scrub reached the file in question that it detected something was wrong. (It reported "2 file errors" - which I assume means the two mismatched copies of the one xls file.)
I ran "snapraid --force-nocopy sync" to fix the issue. Weirdly, it didn't report any added/changed files. I'd have thought that if it was essentially being told to treat two matched files as separate entities, it would consider one of them an addition?
Anyway, at this point, I'm assuming tonight's overnight scrub will complete without errors.
And the moral of the story is: always read your logs carefully. For certain types of errors, like this one, don't expect diff/status/scrub to tell you anything useful. (Well, "status" did report that the array was not fully synced - which could have been a hint.) "sync" is the only command that really knows what's going on.
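If I were hardening the script, I'd stop gating on "diff" and instead capture sync's own output, since that's where this class of warning shows up. A sketch of what I mean (assumptions: the WARNING text is grep-able, and SNAPRAID is just an overridable stub point for testing):

```shell
# Sketch: run sync unconditionally and surface its warnings, since sync is
# the only command that reports this class of error.
SNAPRAID="${SNAPRAID:-snapraid}"

nightly_sync_logged() {
    log=$("$SNAPRAID" sync 2>&1)
    status=$?
    # Loudly flag any WARNING lines instead of letting them scroll past.
    if echo "$log" | grep -i "WARNING" >&2; then
        return 1
    fi
    return $status
}
```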
Last edit: Mitchell Deoudes 2026-02-21