Activity for SnapRAID

  • Xyem Xyem posted a comment on discussion Help

    Just started using SnapRAID (as part of OpenMediaVault and with mergerfs, though I am issuing commands at the terminal) and I noticed, when doing a sync -h (sync with prehash), that the prehash stage only processes data at the speed of a single drive. Watching iostat -m confirms it is only reading from 1 drive at a time. Unless there is a specific reason to do it this way, could it be changed to prehash from multiple drives at the same time (it is IO bound, with CPU usage being a mere 4%), or an option...
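
    For anyone wanting to reproduce this observation, a minimal sketch (assuming a Linux box with the sysstat package installed and the array already configured; device activity patterns will depend on your layout):

```shell
# Terminal 1: run the sync with prehash enabled
snapraid -h sync

# Terminal 2: per-device throughput every 5 seconds (MB/s);
# during the prehash phase, watch how many data drives show reads
iostat -m -d 5
```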

  • Jack Jack posted a comment on discussion Help

    Maybe you could even have transferred the parity files as well. At best, the sync could have picked up the parity file and wouldn't have had to start from scratch, speeding up the process.

  • Jeff Jeff modified a comment on discussion Help

    Question still stands, but I decided to delete all of the parity and content files and start over with a full SYNC. It's been running for around 16 to 17 hours so far. When I cleared out the bad drive, I removed the content file from the drive and removed it from the Drivepool pool, and no matter what I tried, SnapRaid kept giving me a notice about restoring missing content file(s) first, so I just deleted them all and started over. EDIT - After doing some more reading, I'm thinking I should have...

  • Jeff Jeff posted a comment on discussion Help

    I have not had SnapRaid (using Drivepool for pooling) running for a while due to a bad drive issue. I manually moved about 15 TB of known good files off of the bad drive over to a good drive in the pool and rebalanced my Drivepool. I deleted everything off of the bad drive and removed it from the SnapRaid config file. I want to get SnapRaid back up and running again; what would be the best course of action? SYNC - Since I manually moved files to good drives and removed the bad drive from the config...

  • Techie 2000v2 Techie 2000v2 posted a comment on discussion Help

    I THINK I found my own answer by delving into the docs some more: the -D option (aka "Force commands with inaccessible/shared disks"). So with disk1-6 re-applied to the configuration I get: Self test... DANGER! Ignoring that disks 'd1' and 'd2' are on the same device DANGER! Ignoring that disks 'd1' and 'd3' are on the same device DANGER! Ignoring that disks 'd1' and 'd4' are on the same device DANGER! Ignoring that disks 'd1' and 'd5' are on the same device DANGER! Ignoring that disks 'd1' and 'd6'...

  • Techie 2000v2 Techie 2000v2 posted a comment on discussion Help

    My original snapraid config saw some data disks and content disks mounted in the server chassis whilst some others were on a disk shelf. I knew I was going to lose the server, but not the disk shelf, so I manually moved all the data from the server chassis disks to the disk shelf. I don't THINK I had a chance to do a final sync before the server left me. I have a new server in place now, the disk shelf is connected up, and the drives are visible to me (and snapraid). However, Current content: "/mnt/disk7/snapraid.content"...

  • AJ Weber AJ Weber posted a comment on discussion Help

    I searched the forum and noticed that the last discussion of Windows VSS / Shadow Copy is 10 years old. Obviously, a lot has changed since then. Has a shadow copy option been added to the x64 application to make snapraid "VSS aware"? That would be tremendously helpful for data disks that have some files locked during sync. If not, do you keep a copy of the latest batch/PS script somewhere in the github? Kudos to the program having such longevity in the first place! :) Thank you!

  • Jack Jack posted a comment on discussion Help

    For 60TB, 60h seems reasonable for a HDD? I have a 24TB hard drive and in the future I will find out how long a FIX takes when I need to replace a bad drive, but even now a Snapraid check takes ages.

  • dor d dor d posted a comment on discussion Help

    That's the point of this exercise: 1.bin should be 'covered' by the first successful sync, but clearly, under certain conditions, a subsequent interrupted sync operation may result in unrecoverable files, ones that were previously synced successfully. You make a big assumption here: "and still has the last state". No, it doesn't matter how I delete the files. The link you referenced is about the concern of deleting from one drive and then trying to recover files that correspond to these deleted...

  • Jack Jack posted a comment on discussion Help

    Do what Gilbert said. Make sure to shut down this server/PC first before removing drives. So if the 3 drives died and you can't read/write anything to these disks, then do the following: Shut down the SnapRAID server/PC. Replace the 3 bad drives. Boot again. Ensure that the SnapRAID configuration file (/etc/snapraid.conf) reflects the changes, so the 3 entries correspond with the 3 replaced drives. Mentioned in the manual: 4.4.1, 4.4.2, 4.4.3. snapraid sync -hv
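
    Sketched as commands (the device names, mount points, and filesystem choice are illustrative assumptions; only `snapraid sync -hv` comes from the post):

```shell
# After shutting down, swapping the 3 bad drives, and booting again:

# Format and mount each replacement at its old mount point
# (example device/mount names -- adjust to your setup, and
# repeat for each replaced drive)
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt/disk1

# Verify /etc/snapraid.conf still lists the same disk names and
# mount points, then run the sync with prehash, verbose:
snapraid sync -hv
```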

  • Jack Jack posted a comment on discussion Help

    Step5 is a correct result of the check: 1. 1.bin is covered by the full Sync at Step2. 2. In step4 you do another sync, but interrupt it. This means that snapraid.parity hasn't been fully updated and still has the last state of the full sync in Step2. So 1.bin is/should still be covered by the full sync from Step2. This is also stated in the FAQ: "You are still able to recover data. In the worst case, you will be able to recover as much data as if the disk would have broken before the "sync". But...

  • dor d dor d posted a comment on discussion Help

    Hello again Jack, and thanks for taking the time. It doesn't matter if I physically remove the drive or just delete the files; either way, the files are unrecoverable. I don't think you performed my test. You mention that you add files to d4, but you don't mention whether the new blocks that you introduce actually correspond to preexisting blocks of other drives. You need the other drives to contain at least the size of the files you intend to 'interrupt' when you add to d4. For example, d1-d2-d3 should...

  • Jack Jack posted a comment on discussion Help

    Hi Michael You probably moved on after a month with no response. * Scrubbing is hard work for the drives, so this could kill them even more. Have you tried a diff or check? * It is better to copy the data to a new drive than to scrub. Scrubbing only works if the drive can read data. So if you can't copy, a scrub will probably fail. * If the parity drive is still working, and you have at least two working snapraid.content on two different drives, a fix should work.

  • Jack Jack posted a comment on discussion Help

    Hi Dor Ok, I understand it better now. In your other post I read it like you physically removed d1, but here you only removed the file on d1. On d4 I have one folder with some stuff, 3.5GB. Sync -hv I added a new different folder with different files. Sync -hv and I interrupted it at 10% only with Ctrl+C before it finished the sync and could update the snapraid.parity. No need for Kill or a simulated power outage. Your output makes sense. The 4.2.bin was added after the successful sync in #2, so...

  • dor d dor d posted a comment on discussion Help

    Hello again Jack, What do you initially sync? Why do you add 6 files to d4? Why do you delete or fix the same drive (d4) you just interrupted? My testcase is not about recovering or fixing the drive with the 'new' files that were not synced successfully, but about the state of the drives/files that correspond to the interrupted operation, ones that were synced successfully. Also, how do you interrupt the sync operation? and how many times do you interrupt it? I experimented with various interruptions,...

  • Jack Jack posted a comment on discussion Help

    You should keep an eye on the task manager (or logs) if OpenMediaVault has one. SnapRAID is only active while you are running a command (it's just a program that runs and exits) or if you have automated the scrub, but even then it doesn't seem possible. So no, I don't think it can be SnapRAID, unless you've automated something, because by default SnapRAID is completely passive.

  • Jack Jack modified a comment on discussion Help

    I now see that you have two discussions running. I have answered here. With check I see. The command line doesn't say anything about read/write parity. But I see reading activity when I use check, which I don't see when I use check -a.

  • Jack Jack modified a comment on discussion Help

    I'm running Debian 12.7 with the latest SnapRAID v12.3. * 6x data HDD * 2x parity (don't bother with single parity (RAID5), always use 2 parity (RAID6) or more) * snapraid.content on each data HDD and boot SSD. (This also ensures that the data drives are never the same size or larger than the parity drive when using drives of the same capacity). I followed the steps from your first post: 1. Sync 2. Added 6x 3.3GB files to d4 3. Sync interrupted at about 10%. 4. Deleted the 6x 3.3GB files 5. snapraid...

  • dor d dor d posted a comment on discussion Help

    Hello Jack, This is my exact testcase. I have 4 data drives and 2 parity drives, and snapraid.content on system drive + each data drive. Drive 1: contains 1 file, 1.bin [1GB] Drive 2: contains 1 file, 2.bin [1GB] Drive 3: contains 1 file, 3.bin [1GB] Drive 4: contains 1 file, 4.bin [512MB] I perform a sync operation and let it complete successfully. I add a new file to drive #4, 4-2.bin [512MB] I perform a sync operation but I interrupt it. I perform snapraid check -v ... I'm being told 1.bin, 2.bin,...
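
    A hedged sketch of how this test case might be scripted (mount points are illustrative; `truncate` creates sparse all-zero files, so a more realistic test may prefer random data from /dev/urandom):

```shell
# Create the initial layout
truncate -s 1G   /mnt/disk1/1.bin
truncate -s 1G   /mnt/disk2/2.bin
truncate -s 1G   /mnt/disk3/3.bin
truncate -s 512M /mnt/disk4/4.bin
snapraid sync                # let this one complete

# Add a file, then interrupt the next sync part-way (Ctrl+C)
truncate -s 512M /mnt/disk4/4-2.bin
snapraid sync

# Inspect what SnapRAID now reports as recoverable
snapraid check -v
```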

  • Jack Jack modified a comment on discussion Help

    I'm using SnapRAID 12.3 So if I am reading correctly, one of your parities is bad? Do you have a 2-parity? How many data drives? How many drives with snapraid.content do you have? (I run snapraid.content on every data drive and (by default) the boot drive). If you use snapraid check -v you'll see that the parity drives and any snapraid.content files are not tested. For example, if you know that your data drives are 100% correct, you could just do a --force-realloc or --force-full.

  • Jack Jack posted a comment on discussion Help

    May I know the reason why you want to change it? 1. Yes, it seems clean in the config. 2. It may work now, but it's not the default, so a SnapRAID update could break it. 3. You might get confused now that you have two parity files with the same naming scheme.

  • Jack Jack modified a comment on discussion Help

    It works great on Debian 12.x. On the data drives there will be a folder .Trash-0 when something has been deleted on that specific drive. sudo nano /etc/snapraid.config Under "Defines files and directories to exclude" add: exclude .Trash-0/ Save. How can I safely move files from one disk to another? Diff -v (to verify that the just-copied files are correctly identified as 'copy'.) Check -v (optional) Sync -h -v Delete the files from the original location. Sync -h -v Afterwards you can remove the deleted...
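
    The move procedure above, sketched as commands (the folder and mount-point names are illustrative assumptions):

```shell
# One-time config change: exclude the per-drive trash folders by
# adding this line to the exclude section of the SnapRAID config:
#   exclude .Trash-0/

# Safe move from disk1 to disk2:
cp -a /mnt/disk1/photos /mnt/disk2/photos
snapraid diff -v      # copied files should be reported as 'copy'
snapraid check -v     # optional
snapraid sync -h -v   # protect the copies first
rm -r /mnt/disk1/photos
snapraid sync -h -v   # then record the deletion
```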

  • UhClem UhClem posted a comment on discussion Help

    Your diff proved nothing.

  • pofo14 pofo14 posted a comment on discussion Help

    Thanks, didn't think of the check command. I rolled the dice and just changed the config, renamed the parity files, and did a snapraid diff; it was all OK.

  • UhClem UhClem posted a comment on discussion Help

    If I rename the parity files, will my system work properly? Just try it ... Do a snapraid check (check is completely read-only) If it gets to the progress phase ( >100 MB processed), [you can kill ^C it now] answer: YES If SnapRAID complains, answer: NO (obviously)
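
    UhClem's read-only test, written out as a sketch (check does not modify parity or content files, so interrupting it is safe):

```shell
# After renaming the parity files and updating the config:
snapraid check
# If it gets past self-test/loading and into the progress phase
# (>100 MB processed), the renamed parity was accepted; you can
# stop it with Ctrl+C.  If SnapRAID complains instead, the rename
# was not picked up.
```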

  • pofo14 pofo14 posted a comment on discussion Help

    My Current Config: parity /mnt/parity1/snapraid.parity 2-parity /mnt/parity2/snapraid.2-parity My Proposed New Config 1-parity /mnt/parity1/snapraid.parity 2-parity /mnt/parity2/snapraid.parity If I rename the parity files, will my system work properly?

  • mazi42 mazi42 posted a comment on discussion Help

    I use Snapraid in the OMV7. I did all the settings through the GUI interface. When I manually start the synchronization, it returns an error: Self test...Loading state from /srv/dev-disk-by-uuid-0f72943d-7e63-4237-8404-dee2d8ddef8f/snapraid.content...WARNING! Content file '/srv/dev-disk-by-uuid-0f72943d-7e63-4237-8404-dee2d8ddef8f/snapraid.content' not found, attempting with another copy...Loading state from /srv/dev-disk-by-uuid-c0007f74-5662-4a22-8702-1c7b344cd4d3/snapraid.content...No content...

  • dor d dor d posted a comment on discussion Help

    Hi, I'm experimenting with inconsistent parities and this is what I'm observing: My parity files are in inconsistent state, if I remove d1 and try to recover it- it's unrecoverable. If I keep d1 and run a check or a scrub, I'm not being made aware of the fact that d1 is unrecoverable. The manual states: "Verify all the files and the parity data." (check) and: "checking for [...] errors in data and parity disks" (scrub) If both the files and the parities are being checked- shouldn't it be reported?...

  • dor d dor d posted a comment on discussion Help

    I think I found a workaround to keep the state of at least 1 parity consistent during a sync operation. I have 2 configs and 2 content files for the same data and parity drives. 1 config is for parity1, and the second config is for parity1 + parity2 (or more parities). I begin with syncing config1; if sync fails and I need to recover, I have parity2 to recover from. If sync of config1 succeeds, then I proceed to syncing config2. If config2 fails, parity1 has already been updated during the first sync...

  • Atomic Atomic posted a comment on discussion Help

    Hello, Last year I built a NAS with an 18 TB Seagate Exos drive and this year I added 2 more 20 TB Seagate Exos drives to it. I started using Snapraid a couple of months ago; one of the 20 TB drives is used as the parity drive. Snapraid is syncing and scrubbing daily: snapraid scrub -p new snapraid scrub -p 12 -o 10 I've noticed that the data drives have very high levels of Bytes Read, 32 PB and 107 PB (yup, P, not T...). I've checked both the smart data from openmediavault and with openSeaChest....
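
    For context, a daily schedule with those two scrub policies might look like this under cron (a hedged sketch; the times and the sync step are assumptions, only the scrub flags come from the post):

```shell
# /etc/cron.d/snapraid -- illustrative schedule
# 03:00: sync, then re-read the blocks that were just written
0 3 * * * root snapraid sync && snapraid scrub -p new
# 05:00: verify 12% of the array, oldest blocks first,
#        only blocks not scrubbed in the last 10 days
0 5 * * * root snapraid scrub -p 12 -o 10
```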

  • UhClem UhClem posted a comment on discussion Help

    You're absolutely correct! My "move away--then move back" is better achieved the way you described.

  • Paul French Paul French posted a comment on discussion Help

    Hi UhClem, Yes, I use this batch file when syncing: "C:\Program Files\Snapraid\snapraid.exe" sync "C:\Program Files\Snapraid\snapraid.exe" touch "C:\Program Files\Snapraid\snapraid.exe" -p new scrub "C:\Program Files\Snapraid\snapraid.exe" status I then follow this up by creating a .sha256 file from the files in the array, then cross-check it with the files in the original location they came from. A thought I had while performing your fix: do I need to even move the file out of the array? As .unrecoverable...

  • UhClem UhClem posted a comment on discussion Help

    "Like deja vu all over again." :) Have you continued the modus operandi of following ALL sync commands with a -p new scrub command? [this practice is GOOD, because it assures that any glitch resulting from the sync is brought to your attention as soon as possible.] Since, like the first time, there was no problem with the file data, the procedure for rectifying things is the same. However, a slight alteration would be, instead of copying the original file (minus the .unrecoverable suffix) off-array,...

  • Paul French Paul French posted a comment on discussion Help

    Hi all, See https://sourceforge.net/p/snapraid/discussion/1677233/thread/5591bd3dcd/ for previous issue. I think I have run into the same problem again, but would like to check with you guys before applying the same fix. C:\Array\Snapraid>"C:\Program Files\Snapraid\snapraid.exe" -o 1 scrub Self test... Loading state from C:/Array/snapraid.content... Using 3697 MiB of memory for the file-system. Initializing... Using 160 MiB of memory for 64 cached blocks. Selecting... Scrubbing... Data error in file...

  • dor d dor d posted a comment on discussion Help

    I also made the following testcase with 4 parities: Drive 1: 1 file, 1GB Drive 2: 1 file, 512MB Drive 3: parity1 Drive 4: parity2 Drive 5: parity3 Drive 6: parity4 Synced. Then added a 512MB file to Drive 2. I performed a sync and interrupted it. Then I removed Drive 1, and only kept parity1... tried to recover: unrecoverable. Then I tried to recover with only parity2... unrecoverable. Then I tried to recover with only parity3... unrecoverable. Then I tried to recover with only parity4... recovered...

  • dor d dor d posted a comment on discussion Help

    Hi, I'm very much new and I want to experiment with SnapRAID features so I will be aware of its capabilities and limitations. I'm experimenting with recovery after a failed sync. This is the testcase: Drive 1: 1 file, 1GB Drive 2: 1 file, 1GB Drive 3: 1 file, 512MB Drive 4: parity1 Sync is being performed, and then a 512MB file is added to Drive 3. Now I perform sync again, and I interrupt it mid-way. Then I remove Drive 1 and try to recover it- and it has an unrecoverable error. I don't think that...

  • Michael Michael modified a comment on discussion Help

    I have two drives dying, but only one parity. This is not a big issue because I've backed up the "less broken" drive already (H:), so I'm only using Snapraid to recover the MUCH worse drive that is readable but won't let me copy anything off of it (I:). When recovering with the fix command it's stopping Snapraid's recovery because of an IO error on H:\, I understand that snapraid needs other drives as part of the recovery process, but is there a way to bypass these IO errors on H:\ so I can recover...

  • Jay K Jay K modified a comment on discussion Help

    Thank you very much. So when trying to fix, how do you propose I go about it? I have some files I know were on the 2nd subvolume still. So I'm thinking, I will rebuild the subvolume structure and mount both of the pseudodisks, though they will be empty. I will then use the following command: snapraid -d d3 -d d4 -l fix.log -i directoryWithSomeMissingFilesFromd4 fix Is this correct? Also one other thing, I have some of the same files from d4 but I don't think the timestamps will match with what is...

  • David David modified a comment on discussion Help

    This seems trivial, and I understood your predicament, but next time maybe not use the same word to describe two different things? First, don't run a sync. I would guess you would be able to recover all of the files on the first disk that did not have any matching files on the second disk. Here's what I mean. (God, this editor is freakin' stupid. Just show the same damn thing in the editor before and after posting) X............X X.....X....X X....X X X X....X X X X X X X X 1 2 3 4 P Here's your...

  • Jay K Jay K modified a comment on discussion Help

    "disk" = btrfs subvolume, disk = physical disk. So I very stupidly was using btrfs subvolumes to have two snapraid data "disks" on the same physical disk, with 2 additional physical disks as snapraid data disks and 1 physical disk as parity. When I lost the underlying physical disk, I lost both data "disks". The 2nd "disk" barely had any data (a handful of files, a few GB at most), but the 1st "disk" had a lot (at least 8TB). Is it even worth attempting recovery in this scenario (I am not expecting...

  • Gilbert Gilbert posted a comment on discussion Help

    Try to check these 3 disks on another machine to make sure it's a disk error; if yes, follow the steps in the manual.

  • Amishman Amishman posted a comment on discussion Help

    I am using Linux Mint and I'm a noob. I have 3 parity drives. The 3 drives I just lost are 1 parity and 2 data. Am I able to recover the data drives first and then the parity? Do I have to replace all of the drives first and then restore the data? How do I recover from this? I'm just not sure how to proceed. I lost one drive a while ago and that was easy to replace and restore the data. I believe I am using version 12.1 from the Mint software manager. Any help is greatly appreciated.

  • crash crash posted a comment on discussion Help

    That will work nicely. Thank you.

  • UhClem UhClem modified a comment on discussion Help

    Using snapraid -v list will add :ss.nnnnnnnnn to the listed date/time. Files with all n's = 0 are what you seek.
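
    The zero sub-second entries can be filtered out of the listing; a hedged sketch (the exact column layout of `snapraid -v list` output is an assumption and may differ by version):

```shell
# Filter for entries whose sub-second part is all zeros
# (the files `snapraid touch` would update).
zero_subsec='/\.000000000/ { print }'

# Real usage would be:
#   snapraid -v list | awk "$zero_subsec"

# Demo on two fake listing lines (format is an assumption):
printf '%s\n' \
  'file 2024/01/31 12:00:00.000000000 movie.mkv' \
  'file 2024/01/31 12:00:00.123456789 song.flac' \
| awk "$zero_subsec"
# only the movie.mkv line is printed
```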

  • UhClem UhClem posted a comment on discussion Help

    Using snapraid -v list will add :ss.nnnnnnnnn to the listed date/time. Files with all n's = 0 are what you seek.

  • crash crash posted a comment on discussion Help

    I'd like to know which files are missing the full timestamp BEFORE snapraid runs TOUCH. Is this possible?
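    Besides snapraid -v list, you can find these files yourself. The sketch below (the /mnt/disk1 path is an example, point it at your own data disk mount) walks a tree and reports every file whose modification time has a zero sub-second component, which is exactly what snapraid touch would update:

```python
#!/usr/bin/env python3
# Sketch: list files whose mtime has a zero sub-second part, i.e. the
# candidates that "snapraid touch" would update. The root path below is
# an example mount point, not something SnapRAID itself defines.
import os

def zero_subsecond_files(root):
    """Yield paths under root whose st_mtime_ns has no sub-second part."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish or are unreadable mid-scan
            if st.st_mtime_ns % 1_000_000_000 == 0:
                yield path

if __name__ == "__main__":
    for p in zero_subsecond_files("/mnt/disk1"):
        print(p)
```

    Run it once per data disk before touching; files it prints are the ones currently lacking a sub-second timestamp.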

  • Bill McClain Bill McClain modified a comment on discussion Help

    Did you sync after the touch? The sub-second timestamp is an optimization feature to help identifying moved or copied files. Your files are protected even without it. From the manual: This improves the SnapRAID capability to recognize moved and copied files as it makes the time-stamp almost unique, removing possible duplicates. More specifically, if the sub-second time-stamp is not zero, a moved or copied file is identified as such if it matches the name, size and time-stamp. If instead the sub-second...

  • Rysz Rysz posted a comment on discussion Help

    This also depends on if your filesystem supports sub-second timestamps. https://stackoverflow.com/questions/14392975/timestamp-accuracy-on-ext4-sub-millsecond Ext4 only supports nanosecond timestamps if the inodes are 256 bytes or larger. The default inode size of mkfs.ext4 depends on your distro. Some "low-power systems" distros still use 128 bytes to save disk space. And some other distributions also have the rule that "if the partition is smaller than 512 MiB, use 128 byte inodes".
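    A quick way to see what your filesystem actually stores (a sketch; nothing here is SnapRAID-specific) is to set a fractional mtime on a scratch file and read it back. On ext4 with 128-byte inodes the fraction comes back as zero:

```python
# Probe (a sketch): does the filesystem under a given directory preserve
# sub-second mtimes? ext4 with 128-byte inodes truncates to whole seconds.
import os
import tempfile

def supports_subsecond_mtime(directory):
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.close(fd)
        want = 1_600_000_000_123_456_789  # arbitrary fractional timestamp
        os.utime(path, ns=(want, want))
        return os.stat(path).st_mtime_ns % 1_000_000_000 != 0
    finally:
        os.remove(path)

if __name__ == "__main__":
    print(supports_subsecond_mtime("."))
```

    If this prints False on a data disk, snapraid touch cannot help there; the timestamps will always round to whole seconds.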

  • Bill McClain Bill McClain posted a comment on discussion Help

    Did you sync after the touch? The sub-second timestamp is an optimization feature to help identifying moved or copied files. Your files are protected even without it. From the manual: This improves the SnapRAID capability to recognize moved and copied files as it makes the time-stamp almost unique, removing possible duplicates. More specifically, if the sub-second time-stamp is not zero, a moved or copied file is identified as such if it matches the name, size and time-stamp. If instead the sub-second...

  • Gilbert Gilbert posted a comment on discussion Help

    btw, i checked the access to the files is properly granted.

  • Gilbert Gilbert posted a comment on discussion Help

    I ran snapraid status and found there are 81 files with a zero sub-second timestamp issue, and ran 'snapraid touch' to fix it. The result is that nothing changed and the error is still there. There is no problem syncing or accessing these files, any comments?

  • Leon Barber Leon Barber posted a comment on discussion Help

    I completed the replacement process as follows, and everything appears to be normal. Copied the existing parity file to the new drive. With both 14TB drives in USB docks, a 7TB parity file, and transfer rate of only 85MB/s, this step took almost 24 hours. Unfortunately, the source drive disconnected with less than 5% to go, and the copy had to be repeated. Started the copy a second time, and for some odd reason, the transfer rate this time was 230MB/s, so it completed in only 9 hours. Placed the...

  • rubylaser rubylaser modified a comment on discussion Help

    The issue is the snapraid process is trying to create a lockfile in /var and your user doesn't have permissions to write there. You could chown those files to your user and get past that error message. Some underlying commands like snapraid smart won't work without elevated permissions.

  • rubylaser rubylaser posted a comment on discussion Help

    The issue is the snapraid process is trying to create a lockfile in /var and your user doesn't have permissions to write there.

  • Gilbert Gilbert posted a comment on discussion Help

    I chose to save the snapraid.content file to /home/user/ to avoid needing sudo to run snapraid.

  • Rysz Rysz modified a comment on discussion Help

    Hello! Since your parity disk is broken, I would just replace the broken disk with the replacement disk and see this as a normal disk recovery. Afterwards I would run: snapraid fix -d PARITYNAME to recover the parity. Then make sure your files and everything are there, then run a snapraid sync. The procedure you linked is mainly for replacing a working parity disk with a (bigger) one. This is the relevant part for you: If you have a partially damaged or totally lost parity file, you can run the "fix"...

  • Rysz Rysz posted a comment on discussion Help

    Hello! Since your parity disk is broken, I would just replace the broken disk with the replacement disk and see this as a normal disk recovery. Afterwards I would run: snapraid fix -d PARITYNAME to recover the parity. Then make sure your files and everything are there, then run a snapraid sync. The procedure you linked is mainly for replacing a working parity disk with a (bigger) one. This is the relevant part for you: If you have a partially damaged or totally lost parity file, you can run the "fix"...

  • Leon Barber Leon Barber modified a comment on discussion Help

    Hello, I have 6 data disks and 2 parity, under Windows 10 Pro, connected to an Areca HBA in JBOD mode. My previous sync failed due to bad sectors on the first parity drive, and data has changed since last successful sync. I attempted to repair with chkdsk /F /R, but it returned insufficient space to replace bad clusters. I'm not clear on what the recovery procedure is with the new replacement disk. Following the FAQ section at https://www.snapraid.it/faq#reppardisk , it suggests first running a sync,...

  • Leon Barber Leon Barber posted a comment on discussion Help

    Hello, I have 6 data disks and 2 parity, connected to an Areca HBA in JBOD mode. My previous sync failed due to bad sectors on the first parity drive, and data has changed since last successful sync. I attempted to repair with chkdsk /F /R, but it returned insufficient space to replace bad clusters. I'm not clear on what the recovery procedure is with the new replacement disk. Following the FAQ section at https://www.snapraid.it/faq#reppardisk , it suggests first running a sync, which is not possible...

  • rubylaser rubylaser posted a comment on discussion Help

    It's a permissions error. Are you trying to run snapraid as a non-root user (your user probably doesn't have permissions to write to /var)?

  • Matthias McCready Matthias McCready posted a comment on discussion Help

    Hello, I am new to Linux and I haven't looked at my server since I set it up a few months ago - which is enough time for the cobwebs to build up! I am running it on Debian. Snapraid was working initially, I am thinking I messed something up. When I try checking the status I am getting the following: "Error creating the lock file '/var/snapraid.content.lock'. Permission denied." Thanks for the assistance.

  • Gilbert Gilbert posted a comment on discussion Help

    From your previous post, the mount point /mnt/parity changed from a 3.7TB disk to a 2.7TB one; I think this is the root cause. You'd better use the UUID to mount disks, like: /dev/disk/by-uuid/ee9298c7-0006-499d-9e30-34ca54b9993f /mnt/parity ext4 defaults 0 0

  • Gilbert Gilbert posted a comment on discussion Help

    You are right, it's nice to have parity disks larger than data disks; it's much safer for SYNC. While a 16TB parity for 14TB data disks is not cost effective, I will choose to use mkfs.ext4 -m 5 to format the largest data disk and leave minfreespace=50G unchanged. For parity disk formatting I run "mkfs.ext4 -m 0 -J size=4 -i 67108864 DEVICE"; it's working well and the parity size is only about 100GB larger than any single data disk. Filesystem Size Used Avail Use% Mounted on tmpfs 38G 21M 38G 1% /run efivarfs...
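    For scale, -i 67108864 allots one inode per 64 MiB of disk. A quick back-of-envelope (a sketch, assuming mkfs.ext4's default 256-byte inode size and a 16 TB disk; real mkfs output will differ slightly) shows why the inode table overhead becomes negligible with that setting:

```python
# Back-of-envelope for "-i 67108864" on a parity disk (a sketch, not
# mkfs output). Assumes mkfs.ext4's default 256-byte inode size.
BYTES_PER_INODE = 67_108_864      # one inode per 64 MiB of disk
INODE_SIZE = 256                  # bytes per inode (assumed default)
DISK_BYTES = 16 * 10**12          # a 16 TB parity disk

inodes = DISK_BYTES // BYTES_PER_INODE
table_bytes = inodes * INODE_SIZE
print(inodes, table_bytes)        # ~238k inodes, ~58 MiB of inode tables
```

    Compare that with the default bytes-per-inode ratio, where the inode tables on a 16 TB disk would consume many gigabytes; a parity disk holding a handful of huge files needs almost no inodes.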

  • Thomas S Robertson Thomas S Robertson posted a comment on discussion Help

    For the curious, it turns out that I'd accidentally installed 32 bit ubuntu, not 64 bit. When I corrected that, everything worked again. So, I'm guessing this is the (an?) error message you get if you run out of addressable memory.

  • David David posted a comment on discussion Help

    You may want to consider using the "autosave" feature.

  • Zammo Zammo posted a comment on discussion Help

    Thanks. So this means 60 hours is expected for a new parity drive on a 60TB array ? What is the logical reason it doesn't just copy the parity file from the Parity 1 drive ? Obviously there must be a logical reason but as a layman I am not sure what.

  • David David modified a comment on discussion Help

    You're creating a new parity file that allows for two drive recovery, not duplicating the first parity file. Data will be safe on the first parity drive. It will be just my luck that a drive or two fails when trying to add more parity. Ironically, this is why you want multilevel parity. When you're recovering a drive, you are hammering all of the drives at 100% for possibly days. If a drive is close to dying, this can push it over the edge. That's why having multiple parity drives is a good thing....
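    The point above can be made concrete with a toy sketch. This is RAID-6-style q-parity over GF(2^8), a simplification of SnapRAID's actual Cauchy-matrix code: the second parity weights each data block by a different factor, so its bytes are not the XOR-only first parity and cannot be obtained by copying it.

```python
# Toy illustration of why 2-parity is computed, not copied (a sketch in
# the spirit of RAID-6 q-parity; SnapRAID's real multi-parity uses a
# Cauchy matrix). P is a plain XOR of the data blocks; Q weights each
# block by a power of the GF(2^8) generator, so P and Q differ.

def gf_mul(a, b):
    """Multiply a*b in GF(2^8) with the 0x11D polynomial."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return r

def g_pow(i):
    """Generator 2 raised to the i-th power in GF(2^8)."""
    r = 1
    for _ in range(i):
        r = gf_mul(r, 2)
    return r

data = [0x11, 0x22, 0x44]     # three one-byte data blocks (made-up values)
p = q = 0
for i, d in enumerate(data):
    p ^= d                    # first parity: plain XOR of the blocks
    q ^= gf_mul(g_pow(i), d)  # second parity: weighted XOR

print(hex(p), hex(q))         # P and Q differ: copying P would not give Q
```

    Since every byte of Q depends on all data disks through those weights, adding a parity level has to re-read the whole array, which is why the sync takes as long as building parity from scratch.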

  • David David posted a comment on discussion Help

    You're creating a new parity file that allows for two drive recovery, not duplicating the first parity file. Data will be safe on the first parity drive. It will be just my luck that a drive or two fails when trying to add more parity. Ironically, this is why you want multilevel parity. When you're recovering a drive, you are hammering all of the drives at 100% for possibly days. If a drive is close to dying, this can push it over the edge. That's why having multiple parity drives is a good thing....

  • Zammo Zammo posted a comment on discussion Help

    I just added a second parity disk to my array (after using just 1 parity for the last year or so) adding the below in config........ 2-parity C:\Mounts\PARITY2\snapraid.2-parity Then I ran a full force sync (using Elucidate) as mentioned in the FAQ, and it says it will take 66 hours to complete. Why doesn't it just duplicate the large file from the 1st parity, which would copy across much quicker? I am not using split parity. It looks like it is going to go through every file on all 11 hard disks,...

  • Thomas S Robertson Thomas S Robertson posted a comment on discussion Help

    I've been using snapraid for a couple years now and just encountered the following error message: Failed call to pthread_create(). Stacktrace of snapraid vnone, gcc 12.2.0, 32-bit, PATH_MAX=4096 Please report this error to the SnapRAID Forum: https://sourceforge.net/p/snapraid/discussion/1677233/ Aborted I'm sending here as requested. This now happens each time I run "snapraid scrub -p new" Here's some additional info: $ uname -a Linux ziggy 6.5.0-1020-raspi #23-Ubuntu SMP PREEMPT Mon Jun 24 13:19:52...

  • Raph Raph Raph Raph posted a comment on discussion Help

    Hey :D Thanks for your reply ^^ And sorry for my late reply :/ I found out that my SATA controller was too puny for RAID usage with as many discs as I have now, so I decided to live somewhat dangerously (since my discs are so new) and wait till my new SATA controller arrived, then did a full sync -F. Works flawlessly now :D

  • David David posted a comment on discussion Help

    If you don't know what you're talking about, don't give advice. Oh no, I misunderstood something about snapraid. A thousand apologies. And I didn't crap on him, I told him to RTFM, since he had no clue what my question was even about, or the previous one from 4 years ago. Yeah, you were being a dick. I didn't care and that's why I let it slide. I figure everyone in this field has limited social skills. And you, what does your response have to do with my situation? Why even reply? He's pointing out...

  • Night Rider Night Rider posted a comment on discussion Help

    If you don't know what you're talking about, don't give advice. And I didn't crap on him, I told him to RTFM, since he had no clue what my question was even about, or the previous one from 4 years ago. And you, what does your response have to do with my situation? Why even reply? I swear, some people just like to post gibberish to increase their post counts. This isn't reddit. And to anyone in the future who is also curious about this, it worked great, go for it. And I am not holding anyone's hand...

  • rubylaser rubylaser modified a comment on discussion Help

    The one person that tried to help, you crapped on. No one else has documented it so far. Try it yourself, and share your findings.

  • rubylaser rubylaser modified a comment on discussion Help

    The one person that tried to help, you crapped on. No one else has documented it so far. Try it yourself, and document your findings.

  • rubylaser rubylaser posted a comment on discussion Help

    The one person that tried to help, you crapped on. Try it yourself, and document your findings.

  • Night Rider Night Rider posted a comment on discussion Help

    You are incorrect. Disk fragmentation is not the same as parity file fragmentation. Parity fragmentation is caused by deleting files on the data drives, thus causing unused blocks in the parity file. It is reported during a status command which was implemented in 4.0 2013/9. Please refer to this post and this post. Actually, I'm gonna guess that is you, the same David, making the exact same comments in that old post? It seems to me like you've been giving inaccurate advice due to your misunderstanding...

  • David David posted a comment on discussion Help

    You can defrag the data and parity drives any time you want.
