Clicking Refresh is exactly what you have to do in this case :) It allows the daemon to read the new state of the array after your manual sync.
The error is gone after I click on "refresh" on the webui. Will update here if anything shows up again.
I have replaced one of my dying data drives, swapping a 4tb drive for a 3tb drive. Data was copied manually. I mounted the 3tb drive and ran snapraid sync. After the initial errors about the UUID changes, it went fine. However, the snapraid daemon is showing the error below:

CRITICAL: ARRAY FAILING
System health is currently FAILING.
Disk d2 has split 0 with UUID changed from b33dc6d0-7671-4ab7-9103-c09ff30xxxxx to a99d7d6a-96dc-431d-a204-ce0612axxxxx.

Tried the below to no avail:
1) restart snapraidd
2) snapraid sync --force-uuid...
Hi, Ubuntu server 24.04.4 LTS. The only thing I did was shut the server down when I couldn't get it working; I only turned it back on to see if there were any replies to this thread, and the web interface worked when I checked it. Time to disable my cron scripts for syncing etc. and figure out how the new interface works. Had a quick look: are there options to scrub newly added files, or set off a manual sync if needed? Found the diff button. Will I get emails on drive health once I set the emails part up,...
Yes. The web interface is inside the commander.zip. If it cannot be found, this is the issue. What system are you using, so that I can try to replicate it? Anyway, as a workaround, try to just copy the commander.zip to the expected place.
No idea what I did, but it appears to be working now for some reason.
Hi, took the plunge and upgraded from 11.3 after many years of it just working. SnapRaid installed fine using:

wget https://github.com/amadvance/snapraid/releases/download/v14.1/snapraid-14.1.tar.gz
tar xzvf snapraid-14.1.tar.gz
cd snapraid-14.1
./configure
make -j"$(nproc)"
make check
make install
cd ..
rm -rf snapraid-14.1*

mike@Altair:~$ snapraid -V
SnapRAID CLI v14.1 by Andrea Mazzoleni, https://www.snapraid.it

Grabbed and installed the Daemon using wget https://github.com/amadvance/snapraid-daemon/releases/download/v1.5/snapraid-daemon-1.5.tar.gz...
These are bugfix releases addressing issues reported by users. It is strongly recommended to update to SnapRAID 14.1, as it fixes a critical regression in the filtering logic for the include/exclude directives that could cause files to be incorrectly excluded from the parity calculation. CHANGES IN SNAPRAID DAEMON 1.4 ============================== * Propagates errors logged to syslog (or EventLog on Windows) to the task log displayed in the UI. This makes troubleshooting script execution problems...
The SnapRAID project is taking a major step forward. We are proud to announce the official release of the SnapRAID Daemon (snapraidd), a specialized background service that transforms the manual SnapRAID CLI into an automated, "always-on" ecosystem. For years, SnapRAID has been a homelab staple for data redundancy through its command-line interface. The Daemon builds upon this foundation, providing a modern orchestration layer that handles array maintenance, monitoring, and reporting so you don't...
I can't see anything wrong, and sync is making progress. So I guess you should simply run sync again to let it finish, and then pay extra attention to diff and status for the next few syncs.
Thanks for the reply. These are the results of your suggestion:

^C%, 37340658 MB, 1238 MB/s, 590 stripe/s, CPU 19%, 23:18 ETA
Stopping for interruption at block 17518161
24% completed, 37340680 MB accessed in 10:22

d1 0% |
d2 0% |
d3 0% |
d4 4% | **
d5 1% |
d6 0% |
d7 0% |
d8 0% |
d9 75% | *********************************************
parity 2% | *
raid 2% | *
hash 11% | *
sched 0% |
misc 0% |
|______________
wait time (total, less is better)

Everything OK
Saving state to /var/snapraid.content... Saving state...
Something definitely went wrong. I would recommend pressing CTRL + C to stop the current sync and paying close attention to what happens. Expected output:

bla bla interrupted bla bla
Saving state to content files bla bla bla
verifying content files bla bla bla
Everything OK

Verify that the above happened as expected. Then run snapraid status to confirm that the progress % is around what would be expected for the amount of time it has been running. Run snapraid diff to confirm that, from snapraid's perspective, nothing...
Hi everyone, I'm new to SnapRAID (first time using it) and not a huge expert, so I could really use some advice. I just ran my first snapraid sync on a setup with 70TB of data across nine disks. It took over 34 hours to complete, which seemed reasonable given the size. After it finished with an "Everything OK" message, I checked snapraid status and it showed a sync still in progress at only 2%. That's weird, right? I ran snapraid sync again, and now it's starting completely from scratch with an ETA of...
using sudo snapraid --conf /etc/snapraidNA2.conf list does not show me what disk # the files are on

You can add this parameter to see the full path in list: --test-fmt path
WARNING! All the files previously present in disk 'd5' at dir '/mnt/SnapRaidArrayNA2/SRD5NA2B4/.snapshots/11960/snapshot/' are now missing or have been rewritten! This could occur when restoring a disk from a backup program that is not setting correctly the timestamps. If you want to 'sync' anyway, use 'snapraid --force-empty sync'.

I believe it's just a timestamp issue, as it's a torrent that's been stuck at 25% for a year. The one and only file that I believe should be on that disk is still there? And...
Yes, fix is supposed to be slower than sync and scrub. I guess the good part about that is that my new server works fine :)

"during such operations you are more interested in reliability than in performance"

I can't imagine anyone disagreeing with that.

"pushing the system to its limit is likely not a good idea"

I think that this is starting to become a bit less true, here in the future. More specifically, it turns out that my ~1.5 days recovery time was only to recover about 6-7 TB. My largest disk...
Welcome back Leifi! Yes, fix is supposed to be slower than sync and scrub. The main reason is that sync and scrub use a multithreaded implementation with one thread for each disk to try to squeeze the maximum performance out of the system. Something similar is technically possible for fix and check as well, but in the end, during such operations you are more interested in reliability than in performance. And pushing the system to its limit is likely not a good idea.
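As a toy illustration of the idea described above (this is not SnapRAID's actual code, just a sketch of the one-thread-per-disk pattern): each disk gets its own reader thread so a slow disk overlaps its I/O with the others, and the blocks gathered at one position are XORed into a single-parity block. The function names and the list-of-blocks disk model are invented for the example.

```python
# Sketch only: per-disk reader threads feeding a byte-wise XOR parity.
import threading
from functools import reduce

def read_block(disk, index, out, slot):
    # Stand-in for disk I/O; a real implementation would read a file here.
    out[slot] = disk[index]

def parity_for_index(disks, index):
    """XOR the block at `index` across all disks, one reader thread per disk."""
    blocks = [None] * len(disks)
    threads = [threading.Thread(target=read_block, args=(d, index, blocks, i))
               for i, d in enumerate(disks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Byte-wise XOR of the gathered blocks gives the single-parity block.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))
```

fix and check, by contrast, would walk the disks sequentially, which fits the reliability-over-performance trade-off described above.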
Hi, long time, no see :) Background: yesterday I reached a major milestone in a little project to replace my old Windows file server from ~2012, with 18 data drives and 3 parities, with a fresh new machine running Linux (Kubuntu). Migrating from Windows to Linux was completely painless (mount all drives, install snapraid v13.0, copy the content file, adjust snapraid.conf and save it in /etc/, run sync, done). Both during the initial sync (with only a few new files), and during a small scrub afterwards,...
This is likely the LAST BETA release before the official launch. This is your final opportunity to provide feedback and help us shape the stable version of the SnapRAID Daemon. WHAT'S NEW? =========== * Extra Disk Monitoring: You can now monitor disks that are not part of the main parity array. These are defined in snapraid.conf using the 'extra' keyword and receive full health and status tracking, though they are excluded from automated power management ('up', 'down', 'down_idle'). * Relocated State:...
Thanks for the report. Right now, Metro is generally better than Spooky2, but the improvement isn't large enough to justify replacing the old, battle-tested Spooky2. Version 14.0 will introduce MuseAir, a new contender expected to deliver even stronger performance. We'll see how MuseAir compares.
c:\snapraid>snapraid -T
snapraid v13.0 by Andrea Mazzoleni, https://www.snapraid.it
Compiler gcc 11.5.0
CPU GenuineIntel, family 6, model 165, flags sse2 ssse3 crc32 avx2
Memory is little-endian 64-bit
Support nanosecond timestamps with futimens()

Speed test using 8 data buffers of 262144 bytes, for a total of 2048 KiB.
Memory blocks have a displacement of 1792 bytes to improve cache performance.
The reported values are the aggregate bandwidth of all data blocks in MB/s, not counting parity blocks....
The SnapRAID project is evolving. We are excited to invite the community to test BETA 2 of the SnapRAID Daemon, a specialized service that transforms the manual SnapRAID CLI into an automated ecosystem. Based on feedback from BETA 1, we have refined the architecture to be more robust, secure, and for the first time, accessible to Windows users. WHAT'S NEW? =========== Windows Support: We are excited to introduce a dedicated Windows installer (.exe). The daemon should be installed in the same directory...
Yes, a Windows version is planned, but the first official release will be Linux-only.
Andrea, thanks a lot! A great tool for great software! Looking very stylish and useful! Are you planning to support the Windows platform in the future? Unfortunately I'm not using SnapRAID on Linux.
Announcing the SnapRAID Daemon Open Beta ======================================== The SnapRAID project is evolving. We are excited to invite the community to test the SnapRAID Daemon (snapraidd), a specialized background service that transforms the manual SnapRAID CLI into an automated, "always-on" ecosystem. This release introduces a modern orchestration layer designed to handle array maintenance so you don't have to. WHAT'S NEW? =========== * Always-On Automation: Scheduled sync and scrub cycles...
Aha - so I found the original source of the problem: there was an excel file that produced an "Unexpected data modification of a file without parity!" warning message in a sync run from several days ago that I had missed. I'm assuming that the file contents were changed without changing the mtime when it was opened for reading by excel, making it no longer match another copy of the file elsewhere in the array. The sync run says as much: This file was detected as a copy of another file with the...
Yep - as expected, running snapraid fix -e resulted in nothing to do:

Self test...
Loading state from /var/snapraid.content...
Searching disk d2...
Searching disk d3...
Searching disk d4...
Selecting...
Using 3212 MiB of memory for the file-system.
Initializing...
Selecting...
Fixing...
Nothing to do
Everything OK

So while I'm still not happy about seeing "WARNING! Unexpected file errors!", I'm not sure there's any more I can do to figure out what happened. I suppose I could run a full "snapraid...
Apologies if this is a stupid question, but I've been using snapraid forever with almost no errors. Primarily, I've seen the excel-touching-file-contents-without-updating-the-timestamp problem, and periodically the zero-file-size problem for things like mailboxes that get emptied. So, practically no genuine file errors. I have an overnight script that runs a sync, and then a scrub of 8% of the array. Last night, it gave the following "WARNING! Unexpected file errors!" message, but snapraid didn't...
Hello, Raphael. I'm currently running SnapRAID v13.0 with Elucidate v2025.9.14, and for now it is running without problems. The only problem that I encountered was when I was upgrading. I was running SnapRAID v11.5 and Elucidate v20.9.1022.1 previously. I upgraded Elucidate first and it didn't find the config file. It seems that there was a change in the config file name between versions: the old version used "snapraid.config", the new version uses "snapraid.conf". I just renamed the old configuration file and it started...
Thanks, it seems to work, as it no longer segfaults. I guess the only bad thing is that if I use a different NTFS driver, it seems like it's going to take a long time to sync even though the files are the same. Even tried the --force-uuid option.
Thanks for the report! This should now be fixed in version 13.1-rc-77-g24aab43, available at: http://beta.snapraid.it/ Aside from the GitHub build servers, I don't have access to a physical macOS machine, so it would really help if you could confirm that it's working as expected. Ciao, Andrea
Hello, not sure if this has been reported, but it seems like when trying to do a sync on an NTFS drive using certain NTFS drivers, I get an immediate segfault. I think I've narrowed it down to some NTFS drivers not reporting a UUID for a volume. Because a UUID isn't reported, snapraid crashes with a segfault when running a sync, since it can't seem to handle a missing UUID, even though my snapraid config is correct. But running some other command like status or list works. eg. diskutil info...
Sounds good to me. Thanks for taking a look, and for your project!
It’s a good idea! I’m adding it to the TODO. However, the name .snapignore can't be used, as it would conflict with the Ubuntu "snap" application ecosystem (they have also discussed using .snapignore). Maybe we can use .snapraidignore instead. It’s a bit longer, but it avoids the risk of conflicts and keeps things clear.
Currently snapraid allows excluding items from within a single snapraid.conf file. For this feature, rather than supporting only a single centrally managed snapraid.conf file, other files deeper in the tree would also manage exclusions, similar to how git projects can have multiple .gitignore files. They could be named something like .snapignore to make them distinct from snapraid.conf, although the existing syntax makes sense to me. Sometimes it's more intuitive/desirable to place an ignore file in the...
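To make the proposal concrete, here is a minimal Python sketch of how nested ignore files could be resolved. Everything here is an assumption for illustration, not existing SnapRAID behavior: the file name ".snapraidignore", the fnmatch-style glob matching, and the rule that each directory's patterns apply to that directory and everything below it, as with .gitignore.

```python
# Hypothetical nested-ignore resolution: patterns accumulate down the tree.
import fnmatch
import os

IGNORE_NAME = ".snapraidignore"  # assumed name, see discussion above

def walk_with_ignores(root):
    """Return relative paths of files kept after applying nested ignore files."""
    kept = []

    def visit(path, patterns):
        ignore_file = os.path.join(path, IGNORE_NAME)
        if os.path.isfile(ignore_file):
            with open(ignore_file) as f:
                # Local patterns extend (never replace) the inherited ones.
                patterns = patterns + [line.strip() for line in f if line.strip()]
        for name in sorted(os.listdir(path)):
            if name == IGNORE_NAME:
                continue
            full = os.path.join(path, name)
            if any(fnmatch.fnmatch(name, p) for p in patterns):
                continue  # excluded by an inherited or local pattern
            if os.path.isdir(full):
                visit(full, patterns)
            else:
                kept.append(os.path.relpath(full, root))

    visit(root, [])
    return kept
```

A real implementation would also need to decide how these interact with the exclude directives already in snapraid.conf, e.g. whether conf-level rules are evaluated first.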
If you pass the --gui option you should get them. See the snapraid log documentation at https://www.snapraid.it/log

Anyway, I'm developing a daemon with the intention of replacing such cron jobs, with more features as well: https://github.com/amadvance/snapraid-daemon It's still in an early stage of development, so it will take some time.
While running snapraid as a cron job or in the background, especially during a full scrub operation, it would be helpful to log some % of progress into the log file. Thanks.
I have an existing 4-drive single parity setup, to which I wish to add a 2nd parity disk. I have the original "parity g:\snapraid.parity" statement in my configuration file and have added "2-parity H:\snapraid.2-parity" statement. The new drive is only a few months old, and is newly formatted with NTFS. I then attempt "snapraid -F sync". After around 10 minutes I get Missing data reading file 'H:/snapraid.2-parity' at offset 258411593728 for size 262144. WARNING! Unexpected read error in the 2-Parity...
I haven't tried your script yet but wanted to thank you for contributing to the community! :)
OK, thanks. Luckily the lost data was just movie rip backups on my plex media server. I can replace them easily enough. Problem is, I'm not sure which movies were on that drive. I'll have to go through several hundred movies just to find which ones are now missing. At least now I know what to do next time, so hopefully I can get it right. I appreciate the help.
If you've synced since, then recovery using snapraid is gone. After running fix, verify that all of the recovered files are there. Snapraid's fix is idempotent, so running it multiple times to ensure all files have been recovered is recommended. I know this doesn't help much now, but always verify after recovery, whenever using any tool, while recovery is still an option.
well, by verify, if you mean checking for restored files when I completed following the steps in the guide, then yes.. Although it sounds like I should have verified this at an earlier point?? I don’t know at what point the files would have shown up or that I should have checked. Like I said I apologize, but I’m kind of new to all this. I muddled my way through setting it all up and had it finally working pretty well I thought. And then I had a drive failure and now I’m trying to recover. I did run...
Didn't you verify the files were restored on the new drive?
Hello. Fairly new to Snapraid so this is my first time having to use it to recover a failed drive. I am using Snapraid along with Mergerfs and OMV. I have 7 drives total (2 parity) and the rest data and/or content drives. One of my data only drives failed completely (no option to recover data). I followed the guide here: https://wiki.omv-extras.org/doku.php?id=omv6:omv6_plugins:snapraid#replacing_a_failed_drive_in_mergerfs I followed all the steps carefully and replaced the failed drive using the...
Could I perhaps request that my script be added to the official SnapRAID webpage here? https://www.snapraid.it/support I feel some of the currently listed ones are no longer maintained, not having received updates in years.
I also will send on a donation. I like the sound of the Temperature limit feature!
Thanks, I'll try that if that error keeps returning after this rebuild. BTW, loving the fast speeds on my new system: 35%, 104526154 MB, 1828 MB/s, 387 stripe/s, CPU 23%, 25:57 ETA
Grok says (vendor tools for marking a bad block):

Seagate: SeaTools for DOS/Windows, "Advanced Tests → Set Bad Sector"
WD: Data Lifeguard Diagnostics, "Write Zeros to Sector" → forces remap
Samsung: Magician, "Diagnostic Scan" → manual sector test
Intel/HGST: Drive Feature Tool, DFT -bad command

And from an elevated PowerShell:

$sector = 12345678
$drive = 0  # \\.\PhysicalDrive0
$fs = [IO.File]::Open("\\.\PhysicalDrive$drive", 'Open', 'ReadWrite', 'None')
$fs.Seek($sector * 512, 'Begin')
$bad = [byte[]]@(0xDE,0xAD,0xBE,0xEF)...
What tool is there that can do that in Windows 10? BTW, I am going to (re)build the new parity with the new version of SnapRAID since that got just released.
Can you use a tool to manually mark the block as bad?
Yeah, me as well. But I tried to do a fix on that parity drive and it seems to stall. I suspect that this particular drive may be failing, but there are no SMART indications that it has any issues. I may do a full sync (delete all prior parity) and see if I can replicate the issue.
Based on the info provided, I'm out of ideas.
Thanks. Another donation sent.
SnapRAID v13.0 has been released at: https://www.snapraid.it/

SnapRAID is a backup program for a disk array. SnapRAID stores parity information in the disk array, and it allows recovering from up to six disk failures.

This is the list of changes:

* Added new thermal protection configuration options:
  - temp_limit TEMPERATURE_CELSIUS
    Sets the maximum allowed disk temperature. When any disk exceeds this limit, SnapRAID stops all operations and spins down the disks to prevent overheating.
  - temp_sleep...
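Based on the option syntax quoted in the release notes, a snapraid.conf fragment enabling the new thermal protection might look like the following. The 55 °C threshold is purely an illustrative value, not a recommended default.

```
# Stop all operations and spin down disks if any disk exceeds 55 °C
temp_limit 55
```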
Nope, same error with that sync option. Frustrating error loop.
Probably needs snapraid --test-force-content-write sync (instead of plain sync).
I did just that. -e fix, sync, -p bad scrub and same exact error showed up.
(Directly) after the snapraid -e fix command, you need to do a snapraid sync command (so that the "fixed" state is actually written to the content files).
I seem to be stuck in a loop... I keep getting this exact same error on scrubs:

Data error in parity 'parity' at position '20452575', diff bits 16440/2097152

Status check shows:

The oldest block was scrubbed 161 days ago, the median 114, the newest 0.
No sync is in progress.
The full array was scrubbed at least one time.
No file has a zero sub-second timestamp.
No rehash is in progress or needed.
DANGER! In the array there are 1 errors! They are from block 20452575 to 20452575, specifically at blocks:...
Just to update people here - I've published a new release on the Github page. I've added the ability to include the output of "snapraid smart" for a nice quick report on the SMART data in the notification email. I've also added the capability to spin down the Hard disks using "snapraid down". Hopefully the script is of use to someone!
Andrea Mazzoleni - Thank you, the changes look good.
Thanks! I've updated the web page.
Please excuse me if this is not the right place to request updates. The web page itself pointed me here. This page: SnapRAID file system comparison could use some updates as below. The word "not" should probably be "non-" here: SnapRAID is only one of the available not standard RAID solutions for disk arrays. In the following row, both ZFS & BTRFS should be "partial" instead of "no". If the "Other failure" is in redundant Metadata, (which both ZFS & BTRFS can support), and another copy is available,...
Hi, I'm having a problem with SnapRaid v12.3. Scenario: I have five data drives, one of which is a parity drive. One of the data drives is empty. 1) I'm running 'sync'. 2) I'm deleting a file on one of the drives. The other drives are unchanged. 3) I'm trying to recover a deleted file using the command: snapraid fix -f full_path_to_file The file isn't being recovered. The messages indicate that nothing can be done. Running 'snapraid diff --test-fmt path' correctly shows that one file has been deleted....
Hi people. This latest version of Elucidate was released just last month, and the system requirements show "SnapRAID 11.5 or lower". I wonder if I still need to use SnapRAID 11.5 or lower, or can I use it with 12.4? Thanks!
PLEASE DELETE.. WRONG TITLE..
Hey guys, can someone please help me with this extremely annoying problem?

msg:progress: Saving state to C:/Users//Documents/SnapRAID.content...
msg:progress: Saving state to U:/SnapRAID.content...
msg:verbose: 10353 files
msg:verbose: 0 hardlinks
msg:verbose: 0 symlinks
msg:verbose: 5 empty dirs
msg:progress: Verifying...
msg:progress: Verified C:/Users//Documents/SnapRAID.content in 0 seconds
msg:progress: Verified U:/SnapRAID.content in 1 seconds
msg:progress: Using 224 MiB of memory for 64 cached...
Are you using a version that originated from the Debian repos? I get the same for version 12.4 on my installation of Debian 13. I found that if I compiled the SnapRAID source from Github into my own package and installed it, this didn't occur. The version was correctly displayed as 12.4, but if I install the version out of the Debian repos I get the above.
I would love to tell you which version has this problem, but er.... yea. snapraid vnone by Andrea Mazzoleni, http://www.snapraid.it
Inspecting the logs from here: https://github.com/Smurf-IV/Elucidate/issues/80 in file "Elucidate.20211105.14.log", at timestamp "2021-11-05 19:49:10.0019", it can be seen that the percentage covered resets to zero for no reason!

2021-11-05 19:49:07.7565 [12] INFO: Elucidate.Controls.RunControl: StdOut[70%, 39398202 MB, 1155 MB/s, 1259 stripe/s, CPU 0%, 3:57 ETA ]
2021-11-05 19:49:07.7565 [12] INFO: Elucidate.Controls.RunControl: StdOut[correct "Muscle Shoals/Muscle Shoals 2013 1080p WEB-DL DD5.1 H.264-HaB.mkv"]...
I am trying to figure out how the parity calculation works. I did not find information about the details. In an old school RAID5 setting with a couple of disks, the calculation is obvious. But how does Snapraid determine its blocks for parity calculation? Given two data disks of the same size and a parity disk of a bigger size, one data disk empty, the other only containing a test file with a length of 1GB: how are blocks for parity calculations determined? If defragmentation does not affect Snapraid,...
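My understanding of the model, offered as a sketch rather than a description of the actual source: each disk's files are split into fixed-size blocks (the real default is 256 KiB), each block gets a position in that disk's sequence, and the parity block at position i is the XOR (for single parity) of every disk's block at position i, with absent blocks, such as those on an empty or shorter disk, treated as all zeros. That is why the parity disk must be at least as large as the biggest data disk. A toy Python model, with a tiny block size for clarity:

```python
# Rough model of block-level single parity; BLOCK_SIZE shrunk for the example.
BLOCK_SIZE = 4  # bytes; SnapRAID's real default block size is 256 KiB

def split_blocks(data):
    """Split a disk's contents into fixed-size blocks, zero-padding the last."""
    return [data[i:i + BLOCK_SIZE].ljust(BLOCK_SIZE, b'\x00')
            for i in range(0, len(data), BLOCK_SIZE)]

def parity(disks):
    """Parity block i = XOR of every disk's block i; missing blocks are zeros."""
    columns = [split_blocks(d) for d in disks]
    positions = max((len(c) for c in columns), default=0)
    out = []
    for i in range(positions):
        block = bytearray(BLOCK_SIZE)  # an empty disk contributes only zeros
        for c in columns:
            if i < len(c):
                for j, byte in enumerate(c[i]):
                    block[j] ^= byte
        out.append(bytes(block))
    return out
```

In this model an empty disk changes nothing (XOR with zeros is the identity), which matches the scenario described above: the parity would simply mirror the single disk holding the 1GB test file.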
Trying to run SnapRAID in a Docker container on macOS (switching to SnapRAID from RAID storage solutions). It seems SnapRAID is not working correctly because it does not recognize volumes as separate drives when Docker volumes are used. Is there an option to suppress the same-device check error? Am I doing something wrong when mapping drives into the Docker container as volumes? I'd appreciate any help, except for suggestions to run Linux instead of macOS. Thank you. In macOS the following separate physical...
I thought I would share the Bash script for automation of SnapRAID that I've been working on for years here. I wrote it back around 2020, when I had problems with some of the existing scripts at the time, and also for my own learning. I've added to it gradually over time and it has worked extremely well for me to date. I thought it a good idea to make it available to others, so I've recently published it to GitHub here: https://github.com/zoot101/snapraid-daily It does the following: By default it...
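Scripts like this are usually driven from cron. Below is a typical /etc/cron.d-style entry as an illustration only; the install path, script options, and log location are assumptions on my part, so check the repository's README for the actual instructions:

```shell
# Run the snapraid-daily script every night at 03:00 and append its output
# to a log file. The path /usr/local/bin/snapraid-daily and the log location
# are illustrative assumptions - see the project's README for real ones.
0 3 * * * root /usr/local/bin/snapraid-daily >> /var/log/snapraid-daily.log 2>&1
```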
Unless I'm wrong, this is your problem: No space left on device.
I think something like this should work: just re-scrub the bad blocks. It will find the errors again, but with the -v option on it should print the affected files, and they should also be captured in the generated log file.

snapraid -p bad -v -l scrub.log scrub

Alternatively, you can specify the block number directly by doing something like this:

snapraid -l scrub.log -v -S 13147 -B 3 scrub

and then repeat for the other listed blocks:

snapraid -l scrub.log -v -S 73271 -B 10 scrub

How did you run the scrub...
[Not certain on this -- I've got no errors, so can't test ...] Does snapraid -v -e check provide any info?
Been running SnapRAID for many years and have occasionally had to replace a failing disk over the years. But in those cases it was always obvious, as the disk was completely dead. I am now getting periodic data errors and a message about failing blocks having been marked as bad, but no info about which disk those failing blocks belong to. I'd like to replace that disk, but how can I find out which one it is? See below my latest output from a scrub:

100% completed, 1641092 MB accessed in 1:35 %, 0:00...
I might be wrong. A scrub would clear those recovered errors.
If it can't really read a particular sector, I'm assuming ddrescue will skip it then? So just use import and then specify the parent dir of where I recovered the d4 files to? What happens if SnapRAID sees a file there with a hash that it already knows: will it still rehash it or something?
Yes, ddrescue will attempt to read even unrecoverable files by making multiple attempts. The --import option will be required because SnapRAID has never seen the new files copied to the array, so it won't be aware of them.
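For reference, a typical recovery sequence looks like the sketch below. The device names and the recovery directory are placeholders, not values from this thread, and these commands write to real devices, so treat this as a sketch to adapt, not a recipe to paste:

```shell
# Image the dying disk onto its replacement. The mapfile lets ddrescue
# resume an interrupted run and retry only the bad areas (-r3 = 3 retry
# passes; -f is required when the destination is a block device).
# /dev/sdX (source) and /dev/sdY (destination) are placeholders.
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map

# Then point SnapRAID at the directory holding the recovered files, so it
# can use them as an extra source during the fix (path is a placeholder):
snapraid --import /mnt/recovered/d4 fix
```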
If ddrescue copies the whole partition, does that mean even unrecoverable files will be copied? If I really can't replace d4 right now, can I copy its contents to either d1, d2, or d5? If I do that, will I still need to use the --import option? If I understand the use of that option correctly, I won't need to do that because the files are still inside the array (in one of the data disks) and all data disks are scanned anyway, correct?