If I do change the sub-second timestamps, will the resulting files have their parity re-calculated, as if the file contents had changed? The snapraid touch command modifies the timestamps of the data files and updates the content file accordingly. In other words: snapraid diff should not consider the files to have been modified after the touch operation, and there should be no need to recalculate the parity. Considering the huge number of files involved, you may still want to consider a small scale...
I can't see anything wrong, and sync is making progress. So I guess you should simply run sync again to let it finish, and then pay extra attention to diff and status for the next few syncs.
Something definitely went wrong. I would recommend pressing CTRL + C to stop the current sync and paying close attention to what happens. Expected output: bla bla interrupted bla bla Saving state to content files bla bla bla verifying content files bla bla bla Everything OK. Verify that the above happened as expected. Then run snapraid status to confirm that the progress % is around what would be expected for the amount of time it has been running. Run snapraid diff to confirm that, from snapraid's perspective, nothing...
Using sudo snapraid --conf /etc/snapraidNA2.conf list does not show me what disk # the files are on. You can add this parameter to see the full path in list: --test-fmt path
Yes, fix is supposed to be slower than sync and scrub. I guess the good part about that is that my new server works fine :) during such operations you are more interested in reliability than in performance I can't imagine anyone disagreeing with that. pushing the system to its limit is likely not a good idea I think that is starting to become a bit less true, here in the future. More specifically, it turns out that my ~1.5 days of recovery time was only to recover about 6-7 TB. My largest disk...
Hi, long time, no see :) Background Yesterday I reached a major milestone, in a little project, to replace my old Windows file server from ~2012, with 18 data drives and 3 parities, with a fresh new machine running Linux (Kubuntu). Migrating from Windows to Linux was completely painless (mount all drives, install snapraid v13.0, copy the content file, adjust snapraid.conf and save it in /etc/, run sync, done). Both during the initial sync (with only a few new files), and during a small scrub afterwards,...
And you would run into a divide by zero in a tiny array with fewer than 100 blocks, so let's put that idea permanently in the trashcan :)
I guess this would also work, by simply avoiding the large number, in case there could be issues using uint64_t: out_perc = countpos / (countmax / 100);
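A minimal Python sketch of the tradeoff being discussed (hypothetical function names, and the progress counter is modeled as 32-bit purely for illustration; snapraid's actual variable types may differ):

```python
def perc_naive(countpos: int, countmax: int) -> int:
    # countpos * 100 wraps once it passes a 32-bit counter's range
    return (countpos * 100 % 2**32) // countmax

def perc_no_mul(countpos: int, countmax: int) -> int:
    # The suggestion above: no big intermediate product, but countmax // 100
    # is zero for arrays smaller than 100 blocks -> divide by zero
    return countpos // (countmax // 100)

print(perc_no_mul(45_000_000, 50_000_000))  # 90, correct
print(perc_naive(45_000_000, 50_000_000))   # wrapped: 4 instead of 90
```

The second form trades the overflow for a divide by zero on arrays smaller than 100 blocks, which is exactly the scenario raised above.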
There is no need to rebuild parity when adding an additional part to existing parity. But once the parity has been split, you can't undo the split... Instead you can later put both parts on the same device, like the example below from my current array: parity C:\Mnt\Duo1\p1a.par,C:\Mnt\Duo1\p1b.par,C:\Mnt\Duo2\p1c.par To expand a bit further on the above example: p1a.par and p1b.par were originally stored on 4 TB disks, protecting 8 TB data disks. When the disk containing p1a.par got full, the parity continued...
I believe it is this old bug: https://sourceforge.net/p/snapraid/discussion/1677233/thread/bab5f97c/#7465 TLDR: Percentage completed resets to 0% when reaching block ~42 million (~11.2 TB of data on an individual disk with block size 256 kiB). As far as I know, it has no practical effect on anything.
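As a sanity check of those figures, assuming the percentage counter wraps at 32 bits (which is what the numbers in the linked thread suggest):

```python
# countpos * 100 first exceeds a 32-bit counter at roughly this block index:
wrap_block = 2**32 // 100                  # 42_949_672 blocks, ~42 million
data_tb = wrap_block * 256 * 1024 / 1e12   # at 256 KiB per block
print(wrap_block, round(data_tb, 2))       # ~42.9M blocks, ~11.26 TB
```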
Assuming that you are not getting any error messages while running fix, then I would suspect that the files were bad even before the data loss. To confirm, you could run snapraid list and compare the size listed with the size of one of the small recovered files. Unfortunately it means that you won't be able to repair any of the bad files.
Are you offended? Are you deliberately not understanding the problem? Yes, I'm very sensitive, thanks for noticing. But I perfectly understand the issue. No. My test confirmed that "snapraid fix" recreates a folder with the old name (including files), but keeps the folder with the new name (including files). I repeat what I have already said. This means that we now have two folders with the same files! Is it difficult to understand? This was never unclear. No. This is not normal. All backup...
A renamed folder containing files is not a "file with modified / corrupted content"??? Correct. I don't want to test the restore by moving the files, I want to test the restore in the case of a renamed folder. I want candy but no one cares about that. So "snapraid fix" does not restore renamed folders!?! No, your test confirmed that it successfully restored the folder and all files within it. Wow! I cannot believe it! This is a major drawback. It's not marked anywhere. People need to be warned!...
I assume this doesn't matter since snapraid just sees everything as just a bunch of files Correct. Snapraid doesn't care if a file is physically stored on disk1:block1000 or disk1:block1234, it just wants disk1 to deliver the file.
As mentioned by David, files added during sync will simply be ignored until the next sync. The only thing you should not be doing is deleting or modifying existing files while sync is running.
Snapraid fix is only expected to add missing files and / or replace files with modified / corrupted content. If, from snapraid's perspective, there are files missing or corrupted, then fix will restore / replace those files, no more, no less. If you want to test the restore functionality, then you should move the files outside of the array instead of renaming them.
Frequently spinning up and down a dying drive is probably a bad choice regardless of what is causing the drive to die. The only scenario where reading data slower could make a positive difference is if the disk issue is heat related. In all other scenarios it would make more sense to read the data as fast as possible. And if there is any reason to suspect any kind of heat issue while attempting to recover data, then it is better to not attempt recovery until you have been able to improve cooling....
I think it is this scenario:
1. Add some files
2. Snapraid diff >Logfile.txt
3. Add additional files
4. Snapraid sync
5. Additional files not in Logfile.txt and not in any future snapraid diff, because they are already synced :(
Snapraid list would seem like a good alternative in any scenario where the above would be an actual problem.
Then it is relatively simple:
1. Copy all data from old disk1 to new disk1.
2. Update snapraid.conf to properly reflect the mountpoint for new disk1.
3. Run snapraid sync for snapraid to detect that disk1 has been replaced (instant).
4. Copy all data from disk2 to disk1.
5. Run snapraid sync for snapraid to parity protect all added files on disk1 (takes a long time).
6. Follow the FAQ "How can I remove a data disk from an existing array?" https://www.snapraid.it/faq to remove disk2.
I'm using 2 x 4TB + 2 x 8TB as data disks, 2 x 4TB disks are almost full What exactly does that mean? Do you have any parity disks? Are the 8 TB disks full or empty?
There is no such limit. Think of it like this: Any parity disk can replace any data disk. Regardless of whether the lost disks are parity or data disks, you can make a full recovery as long as the total number of lost disks is not greater than the total number of parity disks. You can have up to 6 parity disks and up to ~250 data disks. In any situation where you have lost a total number of disks larger than the number of parity disks, all surviving parity is useless. It can't be used to restore data and it can't...
The config in your example only contains a single parity, which just happens to be stored on 2 physical disks. If you lose disk5: Only the second half of the parity is lost, and you can use the first half stored on disk4 to recover up to 8 TB from a single lost data disk. If you lose disk4: 8 TB of data won't be possible to restore. If you have only lost disk5: Then yes, you can skip rebuilding the first 8 TB of the parity using the fix -S command, which allows you to specify a starting block (parity...
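As an illustration with made-up numbers of how a -S starting block could be estimated (in practice, derive it from the actual size of the surviving parity file and your configured block size; a 256 KiB block size is assumed here):

```python
block_size = 256 * 1024          # assumed 256 KiB snapraid block size
surviving_part = 8 * 10**12      # ~8 TB held in the intact first parity file
start_block = surviving_part // block_size
print(start_block)               # blocks before this index need no rebuild
```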
A fully synced array with 3 parity disks can always be fully restored from any combination of 3 or fewer missing / broken disks. If more than 3 disks are lost, then no data can be restored. Examples:
DDDDDDPPP = 6 data disks + 3 parity, nothing to do.
DXDXDXPPP = OK, restore 3 data disks using 3 parity.
DDXDDXPXP = OK, restore 2 data disks using 2 parity + rebuild 1 parity.
DDDDDDXXX = OK, rebuild 3 parity.
DDDDDXXXX = NOK, 1 data disk permanently lost.
DXXXXDPPP = NOK, 4 data disks permanently lost....
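The rule behind these examples can be sketched in a few lines of Python (an illustration, not snapraid code): recovery is possible exactly when the number of lost disks, data or parity alike, does not exceed the parity count.

```python
def recoverable(layout: str, n_parity: int = 3) -> bool:
    # 'D'/'P' = surviving data/parity disk, 'X' = lost disk (either kind)
    return layout.count('X') <= n_parity

cases = {"DDDDDDPPP": True, "DXDXDXPPP": True, "DDXDDXPXP": True,
         "DDDDDDXXX": True, "DDDDDXXXX": False, "DXXXXDPPP": False}
for layout, expected in cases.items():
    assert recoverable(layout) == expected
print("all layouts check out")
```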
Fix will start over each time. If you are restoring a lost disk or missing files, you can work around that by using the -m parameter to only restore missing files, which will decrease in number for each file restored. You will also need to be vigilant regarding partially fixed files that were in the process of being written during the crash, and delete them in order to include them again in the next fix attempt.
You can add multiple mount points on top of the drive letters:
1. Add mountpoints in Disk Manager for all drives (except C:).
2. Make a copy of snapraid.conf.
3. Replace drive letters with mount points in snapraid.conf.
4. Run snapraid diff to confirm that there are 0 files added, removed or modified.
5. Remove the drive letters one by one in Disk Manager and confirm in the DrivePool GUI that they magically change from drive letters to mount points after a few seconds each.
6. Run snapraid diff again to double check that...
It is possible that I am wrong about it not working on higher-level parity, but even in that case the higher parity levels need to read the lower-level parity as part of the parity computation. So at the very minimum you would end up with all parity drives connected, or with additional re-runs after each previous parity level was modified. I think the best (nearest) way to do what you want would be to create one array with single parity for each set of drives. It would require more parity in total but it would...
Maybe I'm too crazy for thinking like this? Yes, it is too crazy. Not because it wouldn't work, but because it is totally impractical. It would have to be a special feature only available for single parity, which is only recommended for up to 4 drives. It would be very slow, since parity blocks would need to be read + written instead of only written. If you were to modify any file on any disk, then you would have to resync all drives. To restore a single file on any drive you would need to access...
Yes, in theory it would be possible to add new drives to single parity without reading already synced drives. But you would still need to be able to connect all drives in order to restore a drive so it would seem quite pointless on top of all other more or less obvious drawbacks?
The chart shows that the array was created 39 days ago and that between 28 and 38 days ago you synced on several occasions, which has caused snapraid to update 28% of the parity, leaving 72% untouched since creation. If you run snapraid scrub, the column on day 39 (far left) will become smaller and a new column at the far right will appear. You will also notice that the symbols in the new column are asterisks instead of "o". Asterisk = Data + parity OK. o = Data (but not parity) OK.
Correct
I finally got around to testing it. With all drives already spinning, I couldn't observe any significant speed increase from snapraid diff. When running sync, all disks were scanned in 0 seconds, so maybe I would need more files per disk to really notice the difference, or do a test where all drives are sleeping. The output issue is definitely fixed in any case.
The output from snapraid diff in snapraid-12.0-rc-5-g2c3f9b8-windows-x64.zip is a bit messed up. C:\Snapraid>snapraid.exe diff --test-fmt disk Loading state from C:/snapraid/snapraid.content... WARNING! With 28 disks it's recommended to use four parity levels. Comparing... addadd add "EC IA1:TV/9-1EBadd EB5_I:TV/9-1-1/9-1-1.s05eadd EB3_H:TV/9-1-1/ ... The total number of lines (243) is correct and the content of some of the lines looks OK but most are corrupted: 1.H.264d DD-a" <-- That is a very small...
I think that you may be able to restore some of the lost data if you manually set up the config file like this:
data disk1 /mnt/disk1/
data disk2 /mnt/disk1/ <-- Yes, same mount point as disk1.
data disk3 /mnt/disk3/ <-- Location of the new d3 replacement drive
parity /mnt/paritydisk/snapraid.parity
And you have a working snapraid.content file and you are able to get rid of any show-stopping errors related to the content file. And finally add -D and possibly -U to the fix command like this: snapraid...
Does the error message really say Disk 'data2' with uuid '4c43e....' not present in the configuration file ? In the config the disk seems to be named BAY2WD4ENTDATA2 instead of data2.
In order to use Snapraid you would need to be able to access and configure the data disks individually like this:
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
data d4 /mnt/disk4/
And I'm guessing that is not possible when they are members of the concatenated JBOD. You would basically need a 36 TB parity disk to protect the 36 TB JBOD data disk.
This is how it works: You edit snapraid.conf like this:
parity Z:\Snapraid.parity
content C:\Snapraid\snapraid.content
content H:\snapraid.content
content I:\snapraid.content
data M1 H:\
data M2 I:\
data M3 J:\
data M4 K:\
data M5 L:\
data M6 M:\
data M7 N:\
data M8 O:\
You run snapraid sync from the command prompt or PowerShell and wait for snapraid to create the following files:
Z:\Snapraid.parity
H:\snapraid.content
I:\snapraid.content
Later you run the following commands: snapraid diff to see what...
There are scripts created by users, but I have no experience using them and I'm not up to date on which are better or more reliable. Personally, I just run a .bat file for sync and another .bat file for scrub every now and then.
To clarify further: Traditional RAID solutions like RAID5 require all drives in a RAID set to spin simultaneously. Since Snapraid is completely passive, it doesn't require any drive at all to spin when not in use, and only a single drive needs to be spinning when a file is accessed, regardless of reading or writing. While parity is being updated, or used to restore files, with Snapraid, all drives need to spin. Unless my understanding of UNRAID is incorrect, it is necessary for all drives to spin when writing...
Unlike UNRAID, Snapraid is completely passive. You set it up via snapraid.conf and then you use the following commands:
snapraid diff to see what has changed since the last sync
snapraid sync to update parity
snapraid scrub to verify that files and parity are OK
snapraid fix to restore files
Another important difference is that snapraid is great for large media files that rarely change, but not so great for lots of small files like system backups. If you want to use it for the latter, I would strongly recommend...
So, first question - when running the SnapRAID fix command, does this also verify vs checksum to confirm there's no data corruption? Yes, but it doesn't verify again after writing new data, so new corruption could sneak in during the write process. Or do I have to run a SnapRAID check after the fix to make sure there were no data errors? Yes, to verify that restored / repaired files really did get written as expected. Personally I would skip this step in most situations and leave it to the normal scrub...
I've had similar issues with DrivePool in the past. Try this: https://wiki.covecube.com/StableBit_DrivePool_Q5510455 Or this: https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/ Basically update owner of absolutely everything in the pool.
The fix3.log file attached in previous post is from 2021-09-26. So, yes it seems like you have not gotten any new logs since then. I think your best option is currently to simply start a command prompt and run the snapraid commands without any log so that you can simply see directly in the command prompt what is happening or not happening. Start with this one: snapraid status
At first snapraid is complaining about data being modified, but when it starts logging error 1167, I think it means that the drives have been disconnected. Since it suddenly happens to multiple disks at once, I would suspect the backplanes or the power supply. The cables that are physically connected to the drives are very unlikely to be the problem, since a cable failure wouldn't affect multiple drives at the same time.
Where did the scrub report go? Rule of thumb is to only sync after you have fixed everything that needs fixing.
I think it is the same thing as observed here: https://sourceforge.net/p/snapraid/discussion/1677233/thread/bab5f97c/#7465 But I'm not sure how OP was able to reproduce it, since it would seem to require 33 TB data disks for it to happen at 67% with blocksize 512, according to my old calculation.
Yes
Snapraid just doesn't introduce any additional need to wake up sleeping drives. Whether or not they sleep to begin with is unrelated to snapraid.
Remove 4-parity from snapraid.conf and delete the 4-parity file from disk. Now you have a replacement drive for 2-parity, so all you need to do is change the location of 2-parity in snapraid.conf and run snapraid sync -F to rebuild / refresh all 3 parity files.
Can the first sync be divided into small runs so that the data disks aren't stressed too much? I see that it can be interrupted (Ctrl-C), but is there a built-in way to schedule periodic pausing (for example, when running 'snapraid sync' overnight)? No. Is there a way to compute parity on one data disk, then spin it down, then compute parity on the next disk, then spin it down, and so on? No, that would multiply the number of required writes on the parity disk(s). What error checking (if any) does...
"-L, --error-limit Sets a new error limit before stopping execution. By default SnapRAID stops if it encounters more than 100 Input/Output errors, meaning that likely a disk is going to die. This options affects "sync" and "scrub", that are allowed to continue after the first bunch of disk errors, to try to complete at most their operations. Instead, "check" and "fix" always stop at the first error." You should check the SMART data for the drive in question. I think it will probably reveal bad sectors...
I have never seen it before and can only speculate. Maybe a bug related to a 0 byte file? A tiny file that was hardlinked to a "copy" inside the array which ultimately didn't need to be recovered? Did snapraid diff show all files as restored?
As far as I can tell you can find the installer under Assets 3 here: https://github.com/Smurf-IV/Elucidate/releases
Run this: snapraid -e fix Don't add -m because that tells snapraid to only restore missing files.
Is the higher block size just to ease the memory requirements at the expense of wasted disk space? The hashes are also stored in the content files. So trying to save disk space can come back and bite you if you use a very small block size. Hash collisions in snapraid would only be a problem for the dupe function. As long as you don't select a very small hash size, it is practically impossible for any random silent error to sneak in undetected.
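A back-of-the-envelope Python sketch of why smaller blocks cost more hash storage (assuming one 16-byte hash per block, which matches snapraid's default hash size; real content files contain more than just hashes):

```python
def hash_storage(disk_bytes: int, block_size: int, hash_size: int = 16) -> int:
    # one stored hash per block, so smaller blocks -> more hashes
    return (disk_bytes // block_size) * hash_size

tb8 = 8 * 10**12
print(hash_storage(tb8, 256 * 1024) // 10**6)  # ~488 MB at 256 KiB blocks
print(hash_storage(tb8, 64 * 1024) // 10**6)   # ~1953 MB at 64 KiB blocks
```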
error:9534031:d8:bkup/Dec.2020/
That makes me think that you have accidentally configured d8 as follows:
data d8 /mnt/SnapRaidArray/SRD8NA1B7/.snapshots/2370/snapshot/
If true, then nothing outside 2370/snapshot/ is protected by snapraid, and you also can't exclude .snapshots/ since it is not inside the array.
exclude /.snapshots/
The above doesn't work because the first slash indicates a root-level folder of the data disks from snapraid's perspective. Try changing it to
exclude .snapshots/
exclude /mnt/SnapRaidArray/SRD8NA1B7/.snapshots/
The above doesn't work because there is no such folder inside the snapraid array.
Yes, I don't think I will ever see something as weird as that again. It was a very small addition of 5 new files and everything appeared normal, including scanning, saving and verifying, until the point where the sync actually began, and I was literally looking at it when it happened, since I was curious to see the changes to the progress indicator line. At first it was only a few characters long: MB processed (and maybe also speed), and it remained like that for maybe 15-45 seconds, but then changed to the...
ETA 60+ hours, but there is 1 file I need done now. Any way to force / ignore the locked file? The easiest solution would be:
1. Press CTRL + C to stop the ongoing activity.
2. Fix the single file.
3. Start the long-running activity again (which will resume where it stopped if it is sync, scrub or fix -m, but not a normal fix without -m).
I think that it is simply a bug / oversight in the design that snapraid does not restore timestamps when fixing already existing files. https://sourceforge.net/p/snapraid/discussion/1677233/thread/5c5ff90441/ Not restoring ownership is a design limitation stated one row above "Getting Started" in the manual: Only file, time-stamps, symlinks and hardlinks are saved. Permissions, ownership and extended attributes are not saved. https://www.snapraid.it/manual
I've tried a few more times without being able to repeat it, and I'm starting to think that it may have been a really odd hardware problem in the file server. The system disk (SSD) has been reporting lots and lots of bad sectors recently, together with some general freezing issues. Even though it is difficult to imagine exactly how, I guess the most likely explanation is nonetheless a hardware problem. Please ignore it for now.
But... uhm... The sync function acted really weird... 5 added files totaling ~10 GB were initially estimated at 1h40m, and the data disks were being read in sequence instead of in parallel, according to disk activity in Task Manager (full-speed reading from the disks at 100-150 MB/s, but only one disk at a time). And then suddenly, after a few minutes, everything became normal, with all disks reading and writing in parallel, and it completed in a few minutes.
I just tested the check function on a specific file and the difference was like night and day. ~30 sec to read the content file / prepare, ~45 sec to check the file and poof, done :-) Thank you both @amadvance and @uhclem
What is a bit unclear is whether -p 22 means 22% of the entire array or 22% of the blocks that are older than 8 days. Based on the screenshots, I guess the answer is up to 22% of the entire array, if there are enough blocks older than 8 days since the last sync / scrub. I think you should simplify things and set "older than days" to 0 and the percentage to 1 or 2. That way you will scrub the entire array every 50 or 100 days.
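The arithmetic behind that claim, spelled out (just the claim above, not snapraid internals):

```python
# With "older than" set to 0 days, every block is eligible, and each run
# scrubs the oldest p percent, so a daily scrub covers the full array in
# roughly 100 / p runs.
for p in (1, 2):
    print(f"-p {p}: full pass in ~{100 // p} daily runs")
```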
I think that if you look closer, this is basically what happens: Block 0 to 10,000,000 = nothing to do; long freeze while evaluating. Block 10,000,001 = do the check or fix for the tiny file matching the filter. Block 10,000,002 to 20,000,000 = nothing to do; long freeze while evaluating. Done, Everything OK. When restoring an entire disk this is insignificant, since it accounts for less than a percent of the total restore time. But when you want to restore something specific, it becomes very noticeable due to the relatively small time spent...
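The scan pattern described above can be sketched like this (an illustration of the idea, not SnapRAID's actual implementation):

```python
def filtered_fix(blocks, wanted_files):
    """Sketch: even with a -f filter, every parity block is visited.
    Almost every iteration decides "nothing to do", which is the long
    silent evaluation phase; only the rare matching block does work.
    """
    fixed = []
    for block_id, files_in_block in enumerate(blocks):
        if any(f in wanted_files for f in files_in_block):
            fixed.append(block_id)  # the one block that matches the filter
        # else: nothing to do for this block, move on
    return fixed
```

So the total time is dominated by the full scan, which is negligible for a whole-disk restore but dominates when only one tiny file needs fixing.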
This is how you should interpret the chart: * = fully scrubbed during scrub. o = partially scrubbed during sync. 100% of the array was scrubbed 1 day ago. A tiny bit of parity has been updated during sync, and it is shown as the 'o' to the left of the single asterisk above day zero. snapraid scrub -p new would explicitly scrub the 'o'. snapraid scrub with any -p / -o values will eventually get to it when done with the 96% on the left side.
What does the chart near the end of snapraid status say about it? If the chart shows 0 to 1 days since all blocks were scrubbed or their related parity blocks were last updated, then there is no problem. If you want to make sure that every single parity block has been scrubbed after a change, you can add snapraid scrub -p new to the script somewhere after snapraid sync.
I think that most of the system file / exclude warnings can easily be explained like this: exclude \Something in snapraid.conf will exclude \Something, but not \PoolPart\Something, which is how you have configured SnapRAID to see things. Generally speaking, SnapRAID and DrivePool play much nicer together if you configure SnapRAID like this: disk diskname mountpoint\PoolPart.abc\ disk diskname mountpoint\PoolPart.def\ That way SnapRAID and the DrivePool device would have a shared definition of the root....
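Applied to snapraid.conf, that idea might look like this (the disk names, drive letters and PoolPart suffixes are placeholders, not your actual values):

```
# Hypothetical snapraid.conf fragment: each data disk points at its
# PoolPart folder, so SnapRAID and DrivePool agree on what the root is.
disk d1 D:\PoolPart.abc\
disk d2 E:\PoolPart.def\

# With the roots above, "exclude \Something" now matches
# D:\PoolPart.abc\Something, as a DrivePool user would expect.
exclude \Something
```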
That is a lot of things to troubleshoot in parallel... New SnapRAID user + DrivePool can be quite a lot in itself. In this case there is also what looks like a UNC-path-related issue with snapraid, and possibly hardware-related issues on top of that. Decreasing the complexity would probably be a good first priority (though it could also introduce additional errors...). The best way to do that would be to at least temporarily assign mount points to the hdd partitions and to use them in snapraid.conf...
The default is 8 percent, ignoring blocks updated in the last 10 days, and the recommendation is to run it about once a week. snapraid scrub -p new is also worth considering. Instead of ignoring newly added or modified files and their related parity, it checks only that. Useful in order to quickly become aware of silent corruption.
If you change to this: percentage = 100, older-than = 0, then it is most likely translated to this: snapraid scrub -p 100 -o 0, which scrubs the entire array. After you have scrubbed the entire array, you should probably lower it to a more reasonable 1-2 percent per day if it is a daily job. The main reason not to overdo the amount of scrubbing is to limit wear on the hard drives. Whether there really is a need for that is pretty much unknown.
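The "1-2 percent per day gives a full pass in 50-100 days" arithmetic is just:

```python
import math

def days_for_full_scrub(percent_per_day: float) -> int:
    # With -o 0, each daily run covers percent_per_day of the array
    # (oldest blocks first), so a full pass takes about 100/p days.
    return math.ceil(100 / percent_per_day)

print(days_for_full_scrub(2))  # 50
print(days_for_full_scrub(1))  # 100
```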
I know there's a description of sync -F and sync -R in the manual, but in which use cases is one used over the other? sync -F rewrites the entire parity exactly as it is already supposed to be, without making any changes to it (unless it is corrupt or missing). Useful in case you want to add additional parity files or suspect the parity to be corrupted. Basically a repair option for the parity. sync -R destroys the existing parity and writes a new one with an optimal layout. It reuses information from the old content...
Deleting all content and parity files will turn a normal snapraid sync into an initial sync. -F and -R would retain all metadata, including timestamps, so they would not solve anything related to this issue.
I thought the whole purpose of running touch is to assign a non-zero sub-second timestamp to all the files in the array? Shouldn't touch be getting rid of these zero sub-second timestamps? Yes and no. The scenario that I think snapraid touch is designed to handle is .0000000 in both the content file and the file system. You run touch and it gets changed to a random number in both the file system and the content file. What happens if you do this? snapraid diff snapraid fix -f "Arrow (2012) - S08E01 - Starling City...
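Why touch matters for change detection can be illustrated with a simplified sketch (an assumption about the general approach, not SnapRAID's actual diff logic): a file is flagged as modified when its size or its full-precision mtime, including the sub-second part, disagrees with what was recorded.

```python
import os

def looks_modified(path, recorded_size, recorded_mtime_ns):
    """Simplified diff-style check: compare current size and
    nanosecond-precision mtime against the recorded values.
    A sub-second-only change is enough to flag the file."""
    st = os.stat(path)
    return (st.st_size != recorded_size
            or st.st_mtime_ns != recorded_mtime_ns)
```

This is why touch must update both the file system and the content file in lockstep: if only one side changed, every touched file would look modified on the next diff.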