Hello,
I am looking for a storage solution for a NAS build.
ZFS looks pretty good, but not being able to add/remove drives makes it much less appealing.
Anyway, I found SnapRAID, but I have some questions after browsing through the site.
1.)
I'm still not sure how it is able to rebuild any failed drive with just one parity drive, whether there is 1 data drive or 100. Even if I add, say, 95 drives afterwards.
As far as I could tell it does not alter any of the data drives.
2.)
On the Compare page, footnote 3 for 'Fix silent errors' says that data is checked before it is used.
Is that correct for every read of any file or just when doing a scrub?
If it is for every read, how would I get notified about that or will it fix the error automatically?
3.)
In snapraid.txt (https://github.com/amadvance/snapraid/blob/e236d7a7e0734633e767e3a9589fa34b6d471bf3/snapraid.txt#L941) it says that each parity level also requires all the files of the previous levels.
Let's say I have 2 parity disks and 3 data disks.
Will I then lose data if one data disk and one parity disk fail at the same time, because the other parity disk can't be used on its own?
Thank you!
Greetings,
Philipp
Last edit: Philipp 2015-07-03
1) That's how parity works. 1 parity drive means you can lose 1 data drive; it doesn't matter whether there are 2 data drives or 200. SR doesn't alter the data. That's one of the big reasons why I like it and use it.
2) That occurs automatically when you run sync. SR will alert you, and you run SR again to fix it.
3) If you have 3 data drives and 2 parity drives, you can lose any 2 drives. If you lose both parity drives, you haven't lost any data. If you lose one of each, you can rebuild the data drive. If you lose 2 data drives, you can rebuild both.
SR is good stuff.
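The reply above can be sketched in a few lines of Python (a toy model, not SnapRAID's actual implementation): with one XOR parity block, any single missing drive is the XOR of the parity with every surviving drive, no matter how many drives there are, and adding a drive only requires updating parity, never touching existing data.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

# Three "data drives", each holding one 4-byte block.
drives = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]

# One parity drive: the XOR of every data block.
parity = xor_blocks(drives)

# Drive 1 fails. XOR the parity with every *surviving* drive to rebuild it.
survivors = [d for i, d in enumerate(drives) if i != 1]
rebuilt = xor_blocks([parity] + survivors)
assert rebuilt == drives[1]

# Adding a new drive never alters existing drives: only parity is updated.
new_drive = b"\x0f\x0f\x0f\x0f"
parity = xor_blocks([parity, new_drive])
drives.append(new_drive)
assert xor_blocks(drives + [parity]) == b"\x00" * 4  # parity still consistent
```

The same cancellation works for 2 drives or 200, which is why the drive count doesn't matter for single-drive recovery.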
1) I've created a Google Sheets simulator which explains how single parity works here. Yes, I was bored...
Or you could just google "RAID 5 parity" and you will find a simple explanation with pictures like this.
Additional parity levels are a different story and much more complicated, but the important thing is that they work :)
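For the multi-level case, here is a hedged sketch of the usual RAID6-style math (P = plain XOR, Q computed with generator 2 over GF(2^8), the field Linux md uses); SnapRAID's actual multi-parity code differs in detail, but the idea is the same: two independent equations let you solve for two lost data blocks.

```python
# GF(2^8) arithmetic with the common reduction polynomial 0x11d.
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^255 == 1 in GF(2^8), so a^254 is a's inverse

# Three data bytes (one per drive), two parity bytes.
data = [0x37, 0xC2, 0x5A]
P = data[0] ^ data[1] ^ data[2]           # level-1 parity: plain XOR
Q = 0
for i, d in enumerate(data):
    Q ^= gf_mul(gf_pow(2, i), d)          # level-2 parity: weighted XOR

# Drives 0 and 2 fail; drive 1, P and Q survive.
x, y = 0, 2
Pxy = P ^ data[1]                         # P with surviving data cancelled out
Qxy = Q ^ gf_mul(gf_pow(2, 1), data[1])   # Q with surviving data cancelled out
# Solve the two equations:  dx ^ dy = Pxy   and   g^x*dx ^ g^y*dy = Qxy
gx, gy = gf_pow(2, x), gf_pow(2, y)
dx = gf_mul(gf_inv(gx ^ gy), Qxy ^ gf_mul(gy, Pxy))
dy = Pxy ^ dx
assert (dx, dy) == (data[0], data[2])     # both lost drives recovered
```

Because Q's weights are all distinct, the 2x2 system is always solvable, which is why losing one data disk and one parity disk (or two data disks) is still recoverable with two parity levels.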
Philipp,
My personal finger of warning, in case you selected ZFS for the right reasons and are now looking at SnapRAID (possibly for the wrong ones).
Please note that I think SnapRAID is excellent for its purpose, but it is not a ZFS replacement:
It is file based (running on top of a filesystem).
File activity on the data disks is not intercepted (for good and bad): there is no read validation as in ZFS or btrfs, and parity/checksum protection is not updated as part of file changes (add/delete/update).
Its key benefits are in write-once, read-many areas: e.g. archives, media files.
It is RAID4 based and not RAID5 based, which makes it perfect for video storage, as only one data disk needs to be active.
Read validation in SnapRAID ensures data quality during parity calculation and data restoration (mitigating the ugly silent-error issues of low-level software RAID such as "md", but also of other file-based parity solutions).
Reading this forum, I think some people are having issues because they don't understand the limitations of SnapRAID, as it is not a regular RAID:
File-balancing "filesystems", e.g. DrivePool: folder balancing distributes the files of one folder to multiple disks, which in the worst case destroys the redundancy of all files on all disks in the same parity block-set when a file is deleted or updated. [Not a showstopper, but it should be handled.] Also think about background balancing ("moving each file from disk to disk"): from SnapRAID's perspective, each balancing move is a file delete plus a file add.
Frequently updated files, e.g. the home/user folder or even the OS: each file update reduces the redundancy of the related parity block-set (at least in the worst case).
..m2c..
/X
Last edit: xad 2015-07-04
Thank you all for your input!
ad 3)
I guess the documentation is just misleading/wrong then, because if it really needed all parity disks it wouldn't be "RAID6".
@xad
I understand the differences between ZFS and SR.
As for the part about balancing:
If I choose SR I will need some way to pool the drives together (media archive). Probably with mhddfs.
I don't think mhddfs does any balancing, at least not moving any old data between drives.
Would any pooling solution do that?
I think what the author is trying to say is that you can't set up level-6 parity without also setting up levels 1-5.
It would be an enormous waste of CPU if SnapRAID had to calculate level-1 parity in order to calculate level-2 parity, then level 3, and so on, just for the purpose of having a single parity calculated as level 6.
But there is definitely room for improvement in the documentation area; most questions in this forum are pretty much evidence of that :)
I see, thank you.