Hello, Raphael. I'm currently running SnapRAID v13.0 with Elucidate v2025.9.14 and for now it is running without problems. The only problem I encountered was when I was upgrading. I was running SnapRAID v11.5 and Elucidate v20.9.1022.1 previously. I upgraded Elucidate first and it didn't find the config file. It seems the file name changed between versions: the old version used "snapraid.config", the new version uses "snapraid.conf". I just renamed the old configuration file and it started...
Thanks, it seems to work as it no longer segfaults. I guess the only bad thing is if I use a different NTFS driver, it seems like it's gonna take a long time to sync even though the files are the same. Even tried the --force-uuid option.
Thanks for the report! This should now be fixed in version 13.1-rc-77-g24aab43, available at: http://beta.snapraid.it/ Aside from the GitHub build servers, I don't have access to a physical macOS machine, so it would really help if you could confirm that it's working as expected. Ciao, Andrea
Hello, not sure if reported, but it seems like when trying to do a sync on an NTFS drive using certain NTFS drivers, I get an immediate segfault. I think I've narrowed it down to some NTFS drivers not reporting a UUID for the volume. Because a UUID isn't reported, snapraid crashes with a segfault when running a sync, since it can't seem to handle a missing UUID even though my snapraid config is correct. But running some other command like status or list works. eg. diskutil info...
Sounds good to me. Thanks for taking a look, and for your project!
It’s a good idea! I’m adding it to the TODO. However, the name .snapignore can't be used, as it would conflict with the Ubuntu "snap" application ecosystem (they have also discussed using .snapignore). Maybe we can use .snapraidignore instead. It’s a bit longer, but it avoids the risk of conflicts and keeps things clear.
Currently snapraid allows excluding items from within a single snapraid.conf file. For this feature, rather than supporting only a single centrally managed snapraid.conf file, other files deeper in the tree would also manage exclusions, similar to how git projects can have multiple .gitignore files. They could be named something like .snapignore to make them distinct from snapraid.conf, although the existing syntax makes sense to me. Sometimes it's more intuitive/desirable to place an ignore file in the...
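Purely to illustrate the idea (this feature does not exist yet; the file name and the reuse of the existing exclude syntax are both hypothetical), such a nested ignore file might look something like:

    # hypothetical .snapraidignore placed at /mnt/data/d1/projects/
    # reuses the existing snapraid.conf exclude syntax, applying only
    # to this directory and below
    exclude *.tmp
    exclude build/
    exclude .cache/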
If you pass the --gui option you should get them. See the snapraid log documentation at https://www.snapraid.it/log Anyway, I'm developing a daemon with the intention of replacing such cron jobs, with more features: https://github.com/amadvance/snapraid-daemon It's still in an early stage of development, so it will take some time.
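For the cron case, a minimal sketch of what that might look like, assuming a nightly sync and a hypothetical log path (--gui is the documented option; the schedule and redirection are just an example):

    # crontab entry: nightly sync at 03:00, with the parseable progress
    # output captured in an example log file
    0 3 * * * snapraid --gui sync >> /var/log/snapraid-sync.log 2>&1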
While running snapraid as a cron job or in the background, especially during a full scrub operation, it would be helpful to log some % of progress into the log file. Thanks.
I have an existing 4-drive single parity setup, to which I wish to add a 2nd parity disk. I have the original "parity g:\snapraid.parity" statement in my configuration file and have added a "2-parity H:\snapraid.2-parity" statement. The new drive is only a few months old and is newly formatted with NTFS. I then attempt "snapraid -F sync". After around 10 minutes I get: Missing data reading file 'H:/snapraid.2-parity' at offset 258411593728 for size 262144. WARNING! Unexpected read error in the 2-Parity...
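For reference, the relevant snapraid.conf lines being described above (taken from the post itself; everything else in the file stays as it was):

    # existing first parity
    parity g:\snapraid.parity
    # newly added second parity on the new NTFS drive
    2-parity H:\snapraid.2-parity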
I haven't tried your script yet but wanted to thank you for contributing to the community! :)
OK, thanks. Luckily the lost data was just movie rip backups on my plex media server. I can replace them easy enough. Problem is, I'm not sure which movies were on that drive. I'll have to go through several hundred movies just to find which ones are now missing. At least now I know what to do next time so hopefully I can get it right. I appreciate the help.
If you've already synced, then recovery using snapraid is gone. Verify after running fix to make sure all of the recovered files are there. Snapraid's fix is idempotent, so running it multiple times to ensure all files have been recovered is recommended. I know this doesn't help much now, but always verify after recovery, whatever tool you use, while recovery is still an option.
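As a concrete example of that verification step, something like this should work, assuming the recovered disk is named d3 in snapraid.conf (the disk name is a placeholder):

    # audit-only check of just the recovered disk: reads the files and
    # compares them against the stored hashes, without touching parity
    snapraid -d d3 -a check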
well, by verify, if you mean checking for restored files when I completed following the steps in the guide, then yes.. Although it sounds like I should have verified this at an earlier point?? I don’t know at what point the files would have shown up or that I should have checked. Like I said I apologize, but I’m kind of new to all this. I muddled my way through setting it all up and had it finally working pretty well I thought. And then I had a drive failure and now I’m trying to recover. I did run...
Didn't you verify the files were restored on the new drive?
Hello. Fairly new to Snapraid so this is my first time having to use it to recover a failed drive. I am using Snapraid along with Mergerfs and OMV. I have 7 drives total (2 parity) and the rest data and/or content drives. One of my data only drives failed completely (no option to recover data). I followed the guide here: https://wiki.omv-extras.org/doku.php?id=omv6:omv6_plugins:snapraid#replacing_a_failed_drive_in_mergerfs I followed all the steps carefully and replaced the failed drive using the...
Could I perhaps request my script be added to the official SnapRAID webpage here? https://www.snapraid.it/support I feel some of the currently listed ones are no longer maintained, not having received updates in years.
I also will send on a donation. I like the sound of the Temperature limit feature!
Thanks, I'll try that if that error keeps returning after this rebuild. BTW, loving the fast speeds on my new system: 35%, 104526154 MB, 1828 MB/s, 387 stripe/s, CPU 23%, 25:57 ETA
Grok says:
Vendor, Tool, How to mark bad block
Seagate, SeaTools for DOS/Windows, "Advanced Tests → Set Bad Sector"
WD, Data Lifeguard Diagnostics, "Write Zeros to Sector" → forces remap
Samsung, Magician, "Diagnostic Scan" → manual sector test
Intel/HGST, Drive Feature Tool, DFT -bad command
And open an elevated PowerShell:
$sector = 12345678
$drive = 0   # \\.\PhysicalDrive0
$fs = [IO.File]::Open("\\.\PhysicalDrive$drive", 'Open', 'ReadWrite', 'None')
$fs.Seek($sector * 512, 'Begin')
$bad = [byte[]]@(0xDE,0xAD,0xBE,0xEF)...
What tool is there that can do that in Windows 10? BTW, I am going to (re)build the new parity with the new version of SnapRAID since that got just released.
Can you use a tool to manually mark the block as bad?
Yeah, me as well. But I tried to do a fix on that parity drive and it seems to stall. I suspect that this particular drive may be failing but there are no SMART indications that it has any issues. I may do a full sync (delete all prior parity) and see if I can replicate the issues.
Based on the info provided, I'm out of ideas.
Thanks. Another donation sent.
SnapRAID v13.0 has been released at: https://www.snapraid.it/
SnapRAID is a backup program for a disk array. SnapRAID stores parity information in the disk array, and it allows recovering from up to six disk failures.
This is the list of changes:
* Added new thermal protection configuration options:
  - temp_limit TEMPERATURE_CELSIUS
    Sets the maximum allowed disk temperature. When any disk exceeds this limit, SnapRAID stops all operations and spins down the disks to prevent overheating.
  - temp_sleep...
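Going by the syntax quoted above, the new option would presumably sit in snapraid.conf like this (45 is an arbitrary example value; check the manual for temp_sleep and the other related options):

    # stop all operations and spin down the disks if any disk exceeds 45 C
    temp_limit 45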
Nope, same error with that sync option. Frustrating error loop.
Probably needs snapraid --test-force-content-write sync (instead of plain sync).
I did just that. -e fix, sync, -p bad scrub and same exact error showed up.
(Directly) after the snapraid -e fix command, you need to do a snapraid sync command (so that the "fixed" state is actually written to the content files).
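Spelled out as commands, that's the sequence being suggested (the same commands used elsewhere in this thread):

    snapraid -e fix        # re-fix the blocks marked as bad
    snapraid sync          # write the "fixed" state to the content files
    snapraid -p bad scrub  # re-scrub only the bad blocks to confirm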
I seem to be stuck in a loop... I keep getting this exact same error on scrubs:
Data error in parity 'parity' at position '20452575', diff bits 16440/2097152
Status check shows:
The oldest block was scrubbed 161 days ago, the median 114, the newest 0.
No sync is in progress.
The full array was scrubbed at least one time.
No file has a zero sub-second timestamp.
No rehash is in progress or needed.
DANGER! In the array there are 1 errors!
They are from block 20452575 to 20452575, specifically at blocks:...
Just to update people here - I've published a new release on the Github page. I've added the ability to include the output of "snapraid smart" for a nice quick report on the SMART data in the notification email. I've also added the capability to spin down the Hard disks using "snapraid down". Hopefully the script is of use to someone!
Andrea Mazzoleni - Thank you, the changes look good.
Thanks! I've updated the web page.
Please excuse me if this is not the right place to request updates; the web page itself pointed me here. The "SnapRAID file system comparison" page could use some updates, as below. The word "not" should probably be "non-" here: "SnapRAID is only one of the available not standard RAID solutions for disk arrays." In the following row, both ZFS & BTRFS should be "partial" instead of "no": if the "Other failure" is in redundant Metadata (which both ZFS & BTRFS can support), and another copy is available,...
Hi, I'm having a problem with SnapRaid v12.3.
Scenario: I have five drives, one of which is the parity drive. One of the data drives is empty.
1) I'm running 'sync'.
2) I'm deleting a file on one of the drives. The other drives are unchanged.
3) I'm trying to recover the deleted file using the command: snapraid fix -f full_path_to_file
The file isn't being recovered. The messages indicate that nothing can be done. Running 'snapraid diff --test-fmt path' correctly shows that one file has been deleted....
Hi people. This latest version of Elucidate was released just last month, but the system requirements show "SnapRAID 11.5 or lower". I wonder if I still need to use SnapRAID 11.5 or lower, or can I use it with 12.4? Thanks!
Hey guys, can someone please help me with this extremely annoying problem?
msg:progress: Saving state to C:/Users//Documents/SnapRAID.content...
msg:progress: Saving state to U:/SnapRAID.content...
msg:verbose: 10353 files
msg:verbose: 0 hardlinks
msg:verbose: 0 symlinks
msg:verbose: 5 empty dirs
msg:progress: Verifying...
msg:progress: Verified C:/Users//Documents/SnapRAID.content in 0 seconds
msg:progress: Verified U:/SnapRAID.content in 1 seconds
msg:progress: Using 224 MiB of memory for 64 cached...
Are you using a version that originated from the Debian repos? I get the same for version 12.4 on my installation of Debian 13. I found that if I compiled the SnapRAID source from Github into my own package and installed it, this didn't occur. The version was correctly displayed as 12.4, but if I install the version out of the Debian repos I get the above.
I would love to tell you which version has this problem, but er.... yea. snapraid vnone by Andrea Mazzoleni, http://www.snapraid.it
Inspecting the logs from here: https://github.com/Smurf-IV/Elucidate/issues/80 in file "Elucidate.20211105.14.log", at timestamp "2021-11-05 19:49:10.0019", it can be seen that the percentage covered resets to zero for no reason!
2021-11-05 19:49:07.7565 [12] INFO: Elucidate.Controls.RunControl: StdOut[70%, 39398202 MB, 1155 MB/s, 1259 stripe/s, CPU 0%, 3:57 ETA ]
2021-11-05 19:49:07.7565 [12] INFO: Elucidate.Controls.RunControl: StdOut[correct "Muscle Shoals/Muscle Shoals 2013 1080p WEB-DL DD5.1 H.264-HaB.mkv"]...
I am trying to figure out how the parity calculation works. I did not find information about the details. In an old school RAID5 setting with a couple of disks, the calculation is obvious. But how does Snapraid determine its blocks for the parity calculation? Given two data disks of the same size and a parity disk of bigger size, one data disk empty, the other only containing a test file with a length of 1GB, how are the blocks for the parity calculation determined? If defragmentation does not affect Snapraid,...
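A rough worked example, assuming the default 256 KiB block size (the value is configurable in snapraid.conf): a 1 GiB test file occupies 1 GiB / 256 KiB = 4096 blocks, and parity block i is computed across block i of every data disk, with the empty disk contributing zeros at every position.

    # blocks needed for a 1 GiB file at a 256 KiB block size
    echo $(( (1024*1024*1024 + 262143) / 262144 ))   # prints 4096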
Trying to run Snapraid in a docker container on MacOS - switching to Snapraid from using RAID storage solutions. Seems like Snapraid is not working correctly because it is not recognizing volumes as separate drives when Docker Volumes are used. Is there an option to suppress checking same device error? Am I doing something wrong when mapping drives into Docker container as volumes? Appreciate any help except for suggestions to run Linux instead of MacOS. Thank you. In MacOS following separate physical...
I thought I would share the Bash Script for automation of SnapRAID that I’ve been working on for years here. I wrote it back in around 2020 when I had problems with some of the existing scripts at the time and also for my own learning. I've added to it gradually over time and it's worked extremely well for me to date. I thought it a good idea to make it available to others so, I’ve recently published it to Github here: https://github.com/zoot101/snapraid-daily You can download it from the releases...
Unless I'm wrong - this is your problem: No space left on device.
I think something like this should work - just re-scrub the bad blocks. It'll find the error again, but with the -v option on, it should print the affected files; these should also be captured in the log file generated.
snapraid -p bad -v -l scrub.log scrub
Alternatively you can specify the block number directly by doing something like this:
snapraid -l scrub.log -v -S 13147 -B 3 scrub
And then repeat for the listed blocks above:
snapraid -l scrub.log -v -S 73271 -B 10 scrub
How did you run the scrub...
[Not certain on this -- I've got no errors, so can't test ...] Does snapraid -v -e check provide any info?
Been running SnapRAID for many years and have had to occasionally replace a failing disk over the years. But in those cases it was always obvious, as the disk was completely dead. I am now getting periodic data errors and a message about failing blocks having been marked as bad, but no info about which disk those failing blocks belong to. I'd like to replace that disk, but how can I find out which one it is? See below my latest output from a scrub:
100% completed, 1641092 MB accessed in 1:35 %, 0:00...
I might be wrong. A scrub would clear those recovered errors.
If it can't really read a particular sector, I'm assuming ddrescue will skip it then? So just use import and then specify the parent dir of where I recovered the d4 files to? What happens if SnapRAID sees a file there that has a hash that it knows already, will it still rehash it or something?
Yes, ddrescue will attempt to read even unrecoverable files by making multiple attempts. The --import option will be required because SnapRAID has never seen the new files copied to the array, so it won't be aware of them.
If ddrescue copies the whole partition, does that mean even unrecoverable files will be copied? If I really can't replace d4 right now, can I copy its contents to either d1, d2, or d5? If I do that, will I still need to use the --import option? If I understand the use of that option correctly, I won't need to do that because the files are still inside the array (in one of the data disks) and all data disks are scanned anyway, correct?
There’s ddrescue, but it works at the device level, meaning it copies the entire partition, not individual files. I'm not aware of a tool that handles this reliably at the file level by default. You can try using cp along with a file list, keeping track of any failures and retrying as needed. ChatGPT can help you write a script for that if needed. If you can replace d4, that's definitely the better option, it will simplify the recovery process with SnapRAID and help restore d3 more efficiently. Otherwise,...
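A rough sketch of the kind of retry loop being suggested, assuming the failing disk is mounted at /mnt/d4 and the destination at /mnt/rescue (paths, retry count, and the failure-list name are all placeholders):

    #!/bin/bash
    # Copy every file from the failing disk, retrying a few times and
    # recording anything that still fails so it can be retried later.
    SRC=/mnt/d4
    DST=/mnt/rescue
    FAILED=failed-files.txt
    : > "$FAILED"
    find "$SRC" -type f -print0 | while IFS= read -r -d '' f; do
        rel="${f#"$SRC"/}"
        mkdir -p "$DST/$(dirname "$rel")"
        ok=0
        for try in 1 2 3; do
            cp -p "$f" "$DST/$rel" && { ok=1; break; }
        done
        [ "$ok" -eq 1 ] || echo "$rel" >> "$FAILED"
    done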
What is the best method to copy data from d4 that can retry reads on bad sectors and skip them if they fail after n retries? And can I copy to one of the other data disks (d1, d2, or d5), since that is effectively adding data to the disk (not replacing anything)? Or is this a bad idea and should I replace d4 with a new drive too, before trying to copy the files?
First, try to copy as much data as possible from d4 before it fails completely. Then, make sure the new d4 has the same directory structure as the original. Finally, use SnapRAID to recover the already failed d3.
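In command form, that order of operations might look roughly like this (device names, mount points and log names are placeholders; ddrescue usage is the standard source/destination/mapfile form):

    # 1. clone as much of the failing d4 as possible onto a new disk
    ddrescue -f /dev/OLD_D4 /dev/NEW_D4 d4-rescue.map
    # 2. mount the clone at d4's original mount point so the directory
    #    structure matches what SnapRAID expects
    # 3. recover the already failed d3 onto its replacement disk
    snapraid -d d3 -l fix-d3.log fix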
Is this project dead already? :(
@amadvance @angrydingo any ideas?
Can anybody help me out here? :(
So apparently, d4 has a lot of smart errors already and is failing too. Is this why I can't recover the files in d3 anymore? Cause I only have one parity drive and one data drive totally failed (click) and the other failing but still accessible? At this point, how can I recover what I can in d3 without the fix operation terminating? Or is it better to go ahead and move the readable contents out of d4 first?
So now I did the fix command on d3 (with a new disk mounted as d3) and get a lot of unrecoverable errors in the fix.log. Also, the command did not finish as it looks like it terminated with this error: msg:fatal: Error reading file '/mnt/disk4/backup/Laptop Backup - Irene/2/Laptop Backup/Pictures/Yobo´s pics/2018/September/Florida/IMG_20180929_114751_1.jpg'. Input/output error. I'm assuming it terminated because I don't see the summary of the fix operation in the log file. FWIW, the snapraid.parity...
So one of my snapraid.content files is older, the one in /var. This is because I got it from an older backup of the rootfs. The ones in my 6 data disks are all the same and are updated. One of my data disks failed and I had to replace it. To continue with the recovery, I removed the var snapraid.content file from snapraid.conf since I know it's outdated anyway. My plan is to bring back that entry later after the recovery process. So did I do this correctly? From what I understand, I only need one...
Thanks for your reply. According to the OS there are no overlapping drives, not based on disk number (PhysicalDeviceX) and not based on the integrated Linux notation (sdX). They neatly range from 0 to 10 and from a to k. Smartctl itself needs the "sat" addition to work for the LSI drives, but on my primary snapraid no drives are connected to the LSI, so it's not a factor in that sense. I tried hardmapping the disks:
smartctl d0 -s standby,now /dev/sdk # F: (Disk 10)
and
smartctl d0 -d ata /dev/pd10...
I did, unfortunately. The first assumption of the recovery chapter is that you have a spare disk lying around, and I did not have a spare. Then my assumption was that I could still use snapraid using only the second parity disk, but then I found out snapraid refuses to run without a primary parity file defined. So I had to order a new disk just to continue using snapraid. After ordering and installing that new disk I could follow the manual, BUT the appearance of the unrecoverable error was troublesome to...
Yes. But you could go larger if the pricing is favorable.
Would you suggest getting two 10Tb drives for parity then?
A single parity drive will allow the recovery of only one data disk, regardless of how large it is when compared to the data disks.
Currently looking into setting up snapraid along with DrivePool. I have 3x 10TB drives with data on them and an 8TB without any. I know I cannot use the 8TB as a parity drive because it's too small, so I was thinking I'll add it to the array once everything is set up. I was looking into 20TB and up drives as my parity. If I get a 20TB drive, will this cover two hard drive failures, as the majority of my drives with data on them are 10TB? Or is it overkill? I know I can get another 10TB to use as parity and it will...
Trying to understand some basics that I can't seem to find answers to online. After running "fix" on a data disk:
snapraid -d data_disk fix
I get:
Error writing file '/srv/dev-disk-by-uuid-a1b90601-c219-4ade-bcd9-16f98fe1458e/path_to_file'. No space left on device.
WARNING! Without a working data disk, it isn't possible to fix errors on it.
Stopping at block 80978
18563 errors
18562 recovered errors
1 UNRECOVERABLE errors
DANGER! There are unrecoverable errors!
When I repeat the command, I get the...
Trying to understand some basics that I can't seem to find answers to online. After a parity-disk failure, when running "fix" on a new replacement parity disk:
snapraid -d parity_disk fix
I get:
11583917 errors
0 recovered errors
0 unrecoverable errors
Everything OK
If all errors seem to be recoverable, why aren't any being recovered? Thanks in advance!
Will it work with snapraid -F sync ?
From reading this I get the feeling that you didn't read the manual and FAQ? You could have read the 'Recover' section and used the 'Fix' function to resolve the initial parity issue. When fixing doesn't work, you could change the disk and just run a forced sync. Alternatively, you could ask Perplexity something like: "I'm using Snapraid with two parity disks (2-parity) on Debian/Ubuntu/Windows, etc. Now my first parity disk has died. I have a new HDD to replace it with."
Mixing onboard SATA and LSI controllers can cause overlapping drive numbering in BIOS/OS (e.g., drives on both controllers labeled 1-4). This misalignment likely causes SnapRAID to reference incorrect device paths. SnapRAID relies on smartctl to fetch disk attributes. RAID controllers often require custom SMART commands.
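If the LSI-attached disks need the "sat" passthrough, snapraid.conf allows per-disk smartctl overrides; a sketch of the idea (disk names and devices are examples, and the exact directive syntax should be checked against the manual):

    # tell SnapRAID how to query SMART for disks behind the LSI HBA;
    # %s is replaced by the device name
    smartctl d1 -d sat %s
    smartctl parity -d sat %s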
I'm using a DIY PC with the Delock 8 port SATA PCI Express x8 card with connection cable. To ensure that SnapRAID never syncs when the controller or a disk fails, I run syncs manually. I recommend doing this because I saw a post about someone who lost data after a disk failed and the automatic sync continued for days without them noticing. Another thing I do when using Linux: I use EXT4 on the SSD that runs Debian, and I highly recommend XFS for the storage disks. I would recommend using Linux (Debian...
How the unscrubbed percentage is determined Parity block revalidation: When you delete files, SnapRAID marks the related parity blocks as "unscrubbed" because it needs to make sure that the parity information is still accurate after the deletion. This is necessary because deleting files changes the layout of data on the drives, which in turn affects how parity is calculated and stored. For example, deleting 3TB from a 7.22TB drive can mark 42 percent of the array as unscrubbed. These parity blocks...
This is precisely why I use the 'DISK_MOUNT_POINT' rather than the 'DISK_NAME'.
snapraid.conf:
parity /mnt/parity/parity1/snapraid.parity
content /mnt/data/data01/snapraid.content
data d1 /mnt/data/data01
fstab:
#1 - data01=ST18_ZRxxxxxx
LABEL=ZRxxxxxx /mnt/data/data01
#ext1 - parity1=ST24_ZYxxxxxx
LABEL=ZYxxxxxx /mnt/parity/parity1
Opting for this setup allows you to replace the HDD/SSD without having to adjust the SnapRAID configuration.
I have a 3 disk mergerfs pool using the mfs policy and ignorepponrename=true on Ubuntu 22.04. When I rename non-hardlinked files I see a "move" action in the snapraid diff as you would expect, but when I rename files that are hardlinked to somewhere else in the pool it tends to show separate "remove" and "add" actions in the diff for the renamed file. To test, I created a text file at the root of my pool, "test file.txt", and then created a link to it with ln, "test file-link.txt", then sync'd those...
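For reference, the test as shell commands (the pool path and the renamed file name are placeholders; the rest follows the description above):

    cd /mnt/pool
    echo test > "test file.txt"
    ln "test file.txt" "test file-link.txt"   # hardlink elsewhere in the pool
    snapraid sync
    mv "test file.txt" "test file renamed.txt"
    snapraid diff    # shows separate remove + add instead of a move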
Sorry for reopening, but if I'm reading this correctly, right now as of version 12.4, to rename a disk all we have to do is change the name in the conf file and then run sync?
Sorry, but SnapRAID (and 99.999% of ALL programs) does not implement DWIM :):). Try --error-limit=0 (or -L0)
I'm experiencing a very unusual and persistent issue with SnapRAID on my Ubuntu 24.04.2 LTS (6.8.0-60-generic x86_64) system. I've tried both the apt package and manual compilation from GitHub, but the problem persists in a baffling way.
Problem description: when attempting to run a snapraid scrub command with specific options (e.g., -p 20 --max-errors 0), SnapRAID consistently exits with the error: Unknown option '?'
Example command causing the error:
sudo snapraid -c /etc/snapraid.conf scrub -p...
I understand your point, however it conflicts with the "x% is unscrubbed" message, which implies a large portion of the data was never scrubbed, even though almost all of the data is years old.
The -o option means "older than", so -o 60 would only scrub data that hasn't been scrubbed in over 60 days. That's why it didn't do anything.
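If the intent is to scrub everything regardless of age, something like this should do it (the "full" plan is documented; -o 0 would likewise drop the age filter):

    snapraid -p full scrub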
Total noob here. I have an Elitedesk G5 that I use for creating subtitle files for movies/shows along with video editing. Decided to make it dual purpose with my Plex library. I have 4ea 8TB HDDs I'd like to add to the PC but I'm limited on available SATA ports. What would you recommend? I've heard SATA expanders are not reliable vs SAS HBAs. Thoughts? I am OK with building an external rack for HDDs and snaking cables out the back of the PC.
When I check snapraid smart I get a list of all disks but it gets the data wrong. In my case I know it's disks 0 and 1, which are also known as sda and sdb inside Windows 10.
Temp Power Error FP Size
C OnDays Count TB Serial Device Disk
34 1674 0 52% 8.0 x /dev/pd0 d1
35 1427 0 75% 8.0 x /dev/pd1 parity
31 3 0 4% 18.0 x /dev/pd2 -
- 40565 - SSD 0.2 x /dev/pd3 -
31 9 0 4% 18.0 x /dev/pd4 -
32 2716 0 84% 8.0 x /dev/pd5 -
32 9 0 4% 18.0 x /dev/pd6 -
29 3095 logfail 84% 8.0 x /dev/pd7 -
34 1674 0 52% 8.0...
Is the file locked? What happens when you copy it? You might need to exclude this file and create a database export as backup and sync that.