Parity calculations

Started by fantom, 2014-01-16; last post 2014-01-29
  • fantom

    fantom - 2014-01-16

I am trying to understand, at a very high level, how parity is calculated, and it is taking a while to work that out from the source code. I am hoping I can get a quick answer here.

Let's say I have three disks (one parity and two data disks). Each data disk has one file (d1/file1 and d2/file2). I want to understand whether the parity for d1/file1 is calculated using the contents of d2/file2 (and vice versa), or whether the parity data for a single file is calculated from the contents of that file only.

    The use case is this:

    1. sync is run successfully
2. The contents of d1/file1 completely change (every single byte).
    3. Disk2 fails and the file d2/file2 is lost.

Will it be possible to completely restore the file d2/file2 in this case (d1/file1 changed significantly between the last sync and the repair)?

    Thanks in advance.

     

    Last edit: fantom 2014-01-17
  • fantom

    fantom - 2014-01-17

    Just answered my own question: d2/file2 cannot be recovered. A loss of a file on one drive AND a change to the file on another drive that was used in parity calculations constitutes a double failure and the file cannot be recovered (if a single parity is used).
    I tested this scenario and got an unrecoverable error from "fix".
    That means that I have to separate volatile files from static ones, otherwise I might end up in a situation where a file cannot be recovered if some other file changed...
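    A minimal sketch of why this fails, using byte-wise XOR as a stand-in for single parity (the file contents here are made up, and this only models the parity math, not SnapRAID itself):

    ```python
    def xor_blocks(a: bytes, b: bytes) -> bytes:
        """Byte-wise XOR of two equal-length blocks."""
        return bytes(x ^ y for x, y in zip(a, b))

    # 1. sync: single parity is the XOR of the blocks at the same offset
    d1_file1 = b"AAAA"
    d2_file2 = b"BBBB"
    parity = xor_blocks(d1_file1, d2_file2)

    # 2. d1/file1 changes completely after the sync; parity is now stale
    d1_file1 = b"CCCC"

    # 3. disk 2 fails; rebuilding d2/file2 from stale parity gives garbage
    recovered = xor_blocks(parity, d1_file1)
    print(recovered)  # b'@@@@', not the original b'BBBB'
    ```

    With an unchanged d1/file1 the same XOR would return exactly b'BBBB', which is why a change on one data disk plus a loss on another counts as a double failure.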

     
  • JR A.

    JR A. - 2014-01-18

    I'm curious how likely this would be in day to day use.

    Seems from your comments that this could be a pretty big problem if you have files that often change or get deleted/added in between syncs.

     
  • jwill42

    jwill42 - 2014-01-18

    Deletes and adds are not a big problem. With an add, you can only lose the added file. With a delete, just move the file to a temporary Trash folder until the next sync.

In-place changes of a file are a problem if they happen frequently, as with OS files, database files, or VM files. Those can cause data on other drives to become unrecoverable.

That is why it is important to segregate those types of files out of a SnapRAID volume, or, if that is not practical, to use the SnapRAID exclude filter on files that are frequently modified.
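    The trash-folder idea for deletes can be sketched as a pair of helpers: files are moved into a per-disk trash folder instead of being deleted, and the trash is emptied only after a successful sync (`safe_delete`, `empty_trash`, and the `.trash` name are hypothetical, not SnapRAID features):

    ```python
    import os
    import shutil

    def safe_delete(path: str, trash_dir: str) -> None:
        """Move a file into a trash folder instead of deleting it,
        so the existing parity still covers its contents."""
        os.makedirs(trash_dir, exist_ok=True)
        shutil.move(path, os.path.join(trash_dir, os.path.basename(path)))

    def empty_trash(trash_dir: str) -> None:
        """Really delete the files; call only after 'snapraid sync' succeeds."""
        shutil.rmtree(trash_dir, ignore_errors=True)
    ```

    Keeping the trash folder on the same disk as the deleted file makes the move a cheap rename and leaves the data on the disk the parity was computed from.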

     
  • Andrea Mazzoleni

    Hi,

If you have files that change often, a good solution is to put them on the same disk and use one more parity drive.

    Ciao,
    Andrea

     
  • JR A.

    JR A. - 2014-01-18

Working on the 2nd parity drive. I need to buy a SAS expander, but as soon as I do, the 2nd parity drive is on the list.

And I figured files that change often are better served by a single disk. In my Plex Media Server example, its database format contains many thousands of small files, and as many as 70-100k files can change on any given day (each file is at most 1-5KB).

     
  • fantom

    fantom - 2014-01-18

Well, if I put all frequently changed files on one drive, then in order to survive a two-drive failure I actually need three parity drives, i.e. one extra parity for each drive holding frequently changing files...

     
    • jwill42

      jwill42 - 2014-01-18

      If that is a problem, then just follow my suggestion already posted in this thread.

       
      • fantom

        fantom - 2014-01-18

Yeah, now that I understand this limitation, all transient files will go on the exclude list...

         
  • Jens Bornemann

    Jens Bornemann - 2014-01-18

    Hi Igr,
    what about using the read-only disk snapshot features of NTFS, ZFS, or LVM with SnapRAID?
    If your data changes are not extreme, additional space would only be consumed until the snapshots are re-created at the next snapraid sync.
    Your parity would always be aligned with the snapshotted data disks, with no additional parity drives or other changes to your existing disks required (it also ensures that no data changes during a sync).
    Downside: restores require additional steps because the data-disk snapshots are read-only...
    Cheers, Jens.
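    That workflow could be scripted roughly as below; everything here (volume group name, mount points, snapshot size) is made up for illustration, and the LVM flags should be checked against your distribution's man pages:

    ```python
    def snapshot_sync_plan(disks, vg="vg0", snap_size="10G"):
        """Build the ordered command sequence: create a read-only snapshot
        of each data volume, mount the snapshots where snapraid.conf points,
        sync against the frozen view, then tear the snapshots down."""
        plan = []
        for d in disks:
            plan.append(f"lvcreate -s --permission r -L {snap_size} -n {d}_snap {vg}/{d}")
            plan.append(f"mount -o ro /dev/{vg}/{d}_snap /mnt/snap/{d}")
        plan.append("snapraid sync")
        for d in disks:
            plan.append(f"umount /mnt/snap/{d}")
            plan.append(f"lvremove -f {vg}/{d}_snap")
        return plan

    for cmd in snapshot_sync_plan(["d1", "d2"]):
        print(cmd)
    ```

    The point of the ordering is that the data disks are frozen before the sync starts, so the parity always matches a consistent point-in-time view.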

     
  • JR A.

    JR A. - 2014-01-18

Setting aside the "RAID is not a backup" conversation, isn't excluding important files negating the whole point of the snapshot?

    My exclude list really only contains items that are temporary or transient and unimportant.

I find the workaround of putting these types of files on an exclude list to be defeating the purpose, but maybe it's just me. Or maybe I should be running ZFS.

    Who knows.

     
  • jwill42

    jwill42 - 2014-01-18

    If you are talking about the workaround I suggested, then you seem to have missed the main part of it and are complaining about the fallback part.

    And who said anything about excluding "important files"?

    Any frequently-modified "important" files that are not backed up or otherwise recoverable should be handled separately from the rest of the files that are managed by SnapRAID. That is what I mean by segregating those files.

     
  • fantom

    fantom - 2014-01-19

Thanks for the suggestions. I would really like to use ZFS, but I have no intention of managing a Solaris box at home :) I hear ZFS support on 32-bit Linux is lacking, and I will keep running 32-bit until the next major re-install; then I will consider 64-bit and native ZFS.

I experimented with an 8-disk RAID 6, and it is great while it is running, but the rebuild/reshape time is horrendous with >2TB drives.

    I followed jwill42's advice and segregated different kinds of data and it all makes sense now.

     
  • jwill42

    jwill42 - 2014-01-19

    Keep in mind that if you really want to run ZFS (or btrfs, or ext4 over mdadm RAID 1) with a mirror for your frequently modified data, there is no reason why you cannot still run SnapRAID for your static data.

Most people considering SnapRAID probably have a lot more static data than frequently modified data. Take an example where you have 128GB of frequently modified data and many terabytes of static data.

Just partition two (or three, or four) of your drives with a 128GB partition (or larger, to give the frequently modified data room to grow), and put the rest of the free space on each drive into a second partition. Then install ZFS (or btrfs, or ext4 on top of mdadm RAID 1) on the 128GB partitions and set up a mirror (RAID 1). Put your static files on the second partitions and on the single partition of any other drives you have. SnapRAID will happily work with only the partitions holding static data, and ZFS (or btrfs, or ext4/mdadm) will happily mirror your data on the other partitions.

     

    Last edit: jwill42 2014-01-19
  • fantom

    fantom - 2014-01-19

    Thx, but I am done with RAIDs for a while. A week ago I added two drives to a 6x2TB RAID 6 (for a total of 8), and it took >3 days to reshape. Two hours after it finished, smartd reported two drives having 1000+ and 100+ unrecoverable sectors. I had to stop everything to make a backup (which took a few more days), break the RAID, and restore the data onto the good disks, planning to use SnapRAID. Then I upgraded the firmware on all the drives and ran badblocks on the two that seemed to have failed. In the end all the errors disappeared (SMART is now clean); it could have been a bug in the firmware, which is not unheard of. It is not easy to deal with these issues when running a big RAID.
    I would rather deal with one drive at a time from now on: in reality, none of my files are important enough to justify so much time and effort, and so much wear on the drives...

     
  • mikehunt114

    mikehunt114 - 2014-01-20

Perhaps a little off topic, but I'm curious why you would include Plex's metadata in your snapshot. I also run Plex, and to me that would be a lot of extra work for your snapshot when, in the case of a failure, you could instead have Plex rescan/reacquire all that data in a short amount of time. I recently switched OSes and Plex gathered all the metadata for my ~12TB media library in about an hour.

     
    • jwill42

      jwill42 - 2014-01-20

      I also think it is a good idea to exclude those kinds of files. I don't use Plex (I use MediaBrowser), but I have a lot of .xml and .jpg metadata files stored along with my media files, and I exclude them all from SnapRAID. MediaBrowser can automatically download them again, with little or no effort on my part, when it is necessary.

      Here is a portion of my snapraid.conf file:

      exclude /videos/*/*.jpg
      exclude /videos/*/*.nfo
      exclude /videos/*/*.png
      exclude /videos/*/*.tbn
      exclude /videos/*/*.xml
      exclude /videos/*/*/*.jpg
      exclude /videos/*/*/*.nfo
      exclude /videos/*/*/*.png
      exclude /videos/*/*/*.tbn
      exclude /videos/*/*/*.xml
      
       

      Last edit: jwill42 2014-01-20
    • JR A.

      JR A. - 2014-01-29

      You're probably right. I've been meaning to see which files I can safely exclude from Plex in the event of a drive failure without losing information such as watch status.

      I also have a 14TB library with about 14000 TV episodes and 1000+ movies and I would absolutely hate to lose the watch status (granted this is backed up by Crashplan).

       
