
SnapRAID version 11 beta update

rubylaser
2016-06-21
2016-07-27
  • rubylaser

    rubylaser - 2016-06-21

    Hello Andrea,

    I wanted to see how version 11 is coming along. I went to see if there was a newer beta out, but saw that it's still the version released on May 31st which I think you previously said wasn't ready for prime time. I only ask because I picked up a few 8TB disks that I'd like to use as data disks, and thought this would be a perfect time to try out the new split parity. Thanks again for all you do.

     
  • CybrSage

    CybrSage - 2016-06-22

    I am also excited for this new and wondrous feature. Of course, I would rather wait and have it done right than hurry it along, but wanted you to know your work is greatly appreciated.

    Do you have a way for us to donate to you for your time and creativity?

     
    • Leifi Plomeros

      Leifi Plomeros - 2016-06-22

      Paypal donation link is in the bottom left corner here:
      http://www.snapraid.it/

       
  • Andrea Mazzoleni

    Hi rubylaser,

    Beta 33 was released after that "not yet ready" statement :) It's in good shape, and I've been using it regularly for some weeks on my array.

    It's only missing some more testing of the split parity feature with real HDs. Unfortunately I haven't yet found time for that, and so far I've only done simulations with smaller volumes. Anyway, the full regression test passes with split parity.
    If you, or anyone else, would like to test it, now is likely the right time. Any feedback will be very useful. Just be aware that once split parity is enabled, the saved content file can no longer be read by older SnapRAID versions, so it's probably better not to test it on real data yet.

    Another interesting change is the addition of the new --test-io-advise options, which allow tweaking the cache behaviour. On Linux the default strategy is now different, and it should provide better overall performance. On Windows the default is the same as before, but there is --test-io-advise-direct to try.

    More details are in the HISTORY:

    https://github.com/amadvance/snapraid/blob/master/HISTORY
    

    Ciao,
    Andrea

     
    • rubylaser

      rubylaser - 2016-06-22

      Awesome, thank you. I'll likely run both v10 and the v11 beta side by side for a while. I have enough free disks that I can set up split parity on a couple of 4TB disks, and run a secondary snapraid.conf and alternate content files against the same data disks.
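
      Roughly, I'm thinking of something along these lines (the second binary and config file names are just placeholders for how I'd set it up):

      # existing v10 setup, unchanged
      snapraid -c /etc/snapraid.conf sync
      # v11 beta binary with its own config, pointing the same data disks
      # at a separate set of parity files and separate content files
      snapraid11 -c /etc/snapraid11.conf sync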

       
  • rubylaser

    rubylaser - 2016-06-23

    I've got this set up and I'm doing my first sync with the v11 beta on my array (I'm still maintaining the v10 setup and parity by keeping both the v10 and v11 binaries). I love seeing sync speeds of 1.6GB/s :)

    I do have one question though. Let's say I have (2) 3TB disks as parity disks like this...

    parity /mnt/parity/disk1/1.parity,/mnt/parity/disk2/2.parity
    

    Later, I buy 8TB data disks, so my parity disks aren't large enough to accommodate these new disks. I know I could obviously add another disk of 2TB or bigger to this parity line, but let's say I just wanted to replace these two disks with (2) 4TB disks that I have lying around instead (rsync the parity files over to the new parity disks and update the mountpoints).
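
    Concretely, the kind of shuffle I have in mind would look roughly like this (the new mountpoints are just placeholders for wherever the 4TB disks get mounted):

    # copy the existing parity files onto the new, larger disks
    rsync -av /mnt/parity/disk1/1.parity /mnt/new-parity/disk1/
    rsync -av /mnt/parity/disk2/2.parity /mnt/new-parity/disk2/
    # then either remount the new disks at the old mountpoints, or update
    # the parity line in snapraid.conf to point at the new paths:
    # parity /mnt/new-parity/disk1/1.parity,/mnt/new-parity/disk2/2.parity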

    My question is whether SnapRAID will use this new space on each disk. Or are the 1.parity and 2.parity files effectively joined at the point where the parity grew too big for disk1 and spilled over onto disk2, so that SnapRAID can't go back and take advantage of the new space available for 1.parity without recalculating all the parity?

    I know this is a fringe case, but I can see people wanting to do this as they upgrade from 3/4/5TB data disks to 6/8/10TB data disks while trying to re-purpose their older 3/4/5TB disks into a split parity configuration.

    Thanks again!

     

    Last edit: rubylaser 2016-06-24
    • Leifi Plomeros

      Leifi Plomeros - 2016-06-23

      If parity1 contains the first 3 TB of parity and parity2 contains TB 4-6, would it make sense for parity1 to then grow to contain the first 3 TB plus the 7th TB, and parity2 to contain TB 4-6 plus the 8th?

      I think it would make more sense if you could either:
      A) Define parity3 to coexist on disk1 (parity1 would be static at first 3 TB, parity2 would continue to grow to 4 TB until disk2 is full and then parity3 on disk1 would grow up to 1 TB.)
      B) Tell snapraid to fix/rebuild a specific parity or parityfile in such a way that it fills up empty disks of new size (I don't see much point in manually copying the parity files to the new disks if snapraid could just create them directly on the new disks instead).

      In any case I'm happy that you are testing it out because I plan to start using the split parity as soon as I buy my next disk, and knowing that it has been tested in a real world setup before that is of course very nice :)

       
      • rubylaser

        rubylaser - 2016-06-24

        That's a great idea. Maybe it's as simple as adding a new 3.parity file on disk1. I'd be willing to give it a try, but Andrea could likely tell me if it's feasible faster than I could by trying it out :)

        So far, so good on the testing. If you have anything specific you'd like me to test, please let me know. This is on a 54TB backup array, so it's a decent-sized array to test with while still keeping my real parity on v10.

         
        • Leifi Plomeros

          Leifi Plomeros - 2016-06-25

          I'm mostly curious about the practical things.
          1. Can I split my existing parity disks, one parity disk at a time and keep the existing checksums?
          2. Can I do it in reverse if I want to go back to unified parity files later?

          Testing those things blind on a huge array is probably not so fun, so yes, let's see if Andrea can give us some insight first.

           

          Last edit: Leifi Plomeros 2016-06-25
  • rubylaser

    rubylaser - 2016-06-23

    V11 Beta has been working well. I am currently running (2) 4TB disks in split parity and I've run diffs, syncs, scrubs, and even recovered deleted files and directories without issue.

    As an FYI for others as I said previously above, I am still maintaining the v10 binary and all of my v10 triple parity disks and content files for this array. That way I can test the v11 beta without worrying about something going sideways.

     

    Last edit: rubylaser 2016-06-23
  • Andrea Mazzoleni

    Hi rubylaser and Leifi,

    Thanks for your tests.

    At the moment, split parity only allows the last parity file that doesn't have a zero size to grow.
    This means that when starting with an empty array, the first parity file grows until its disk is completely filled. At that point the second file starts to grow until its disk is filled, and so on with the others.
    When a new file starts to grow, the previous ones cannot grow anymore, and their fixed size is saved inside the .content file.
    This means that even if you delete all of these parity files, the "fix" command will restore them to their original size, even if there is more space on the disks.
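
    As a rough illustration of that growth order, using a two-file parity line like the one quoted earlier (the sizes in the comments are only examples):

    parity /mnt/parity/disk1/1.parity,/mnt/parity/disk2/2.parity
    # 1.parity grows first, until disk1 (say a 3 TB disk) is full
    # only then does 2.parity start to grow on disk2; from that moment
    # 1.parity stays frozen at the size recorded in the content file,
    # even if disk1 is later replaced with a larger disk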

    Likely this can be improved in a future release, but for now it's a known limitation.

    Ciao,
    Andrea

     
    • rubylaser

      rubylaser - 2016-06-25

      Thanks Andrea! If I did as Leifi suggested above and added another file on the now-larger disk, can SnapRAID use two files on the same underlying disk?
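
      In other words, a parity line along these lines (just a sketch, reusing the mountpoints from my earlier example):

      parity /mnt/parity/disk1/1.parity,/mnt/parity/disk2/2.parity,/mnt/parity/disk1/3.parity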

       
      • Andrea Mazzoleni

        Hi rubylaser,

        Yes. I just checked. It's expected to work, and I don't see any drawback on it.

        Ciao,
        Andrea

         
        • rubylaser

          rubylaser - 2016-06-25

          Awesome! Thanks for the speedy reply :) I'm still testing, but I'm very close to just running this. I've tested a bunch of scenarios and all have passed with flying colors. I'm currently running 2-parity on 2 sets of (2) 4TB disks.

           
  • rubylaser

    rubylaser - 2016-06-29

    Update: Still running great. I've added disks, changed parity levels from 1 to 2-parity, fixed, scrubbed. Everything is working well. I wish I had some more extra disks laying around, or I'd try to emulate the scenario I mentioned above, but since Andrea already confirmed it should work, I'm not going to worry about it. This software just keeps getting better :)

     
  • Leifi Plomeros

    Leifi Plomeros - 2016-07-01

    Thank you for the update. It seems safe enough for me :)

    I will start using it with split parities this weekend and keep the old snapraid files in parallel with a single parity for a few weeks, just to be on the safe side.

     
  • Leifi Plomeros

    Leifi Plomeros - 2016-07-04

    I have finally gotten around to testing the beta split parity function:
    10 data disks (1x2 TB, 2x4 TB, 7x8 TB), block size 2048, and two parity levels, each split across a pair of 4 TB disks.

    First half of each parity file filled the disks perfectly (880 KiB empty) and the second half of the parity files left about 10 GiB empty disk space each.

    In theory this is perfect. In reality, on Windows 10, it is not so much fun, since it triggers an insanely annoying low-disk-space warning for both of the full parity disks. If I ignore the warnings they come back every 10 minutes. If I click on them I get the option to clean 0 bytes of data, with no option to disable the warning, so 10 minutes later they are back, again and again.

    In Windows Vista, 7 and 8 you can disable the warning by changing a registry key. In Windows 10 it seems like you need to use a third-party tool or follow a lengthy tutorial to get it done.

    Not interested in either of those options, I aborted the sync and deleted the parity and content files.
    I then put a 210 MiB junk file on each parity disk and started the sync again.
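
    For anyone wanting to reproduce the workaround, a placeholder file of roughly that size can be created on Windows with something like this (the path and file name are just examples):

    rem 210 MiB = 220200960 bytes
    fsutil file createnew E:\snapraid-reserve.tmp 220200960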

    After the parity files were created I removed the junk files from the full disks, so now I have at least 210 MiB of empty space on all disks, which prevents the warning message from being triggered, since the trigger level is 200 MiB.

    Even though this is not a snapraid issue by definition, I think it would be a good thing if snapraid, by default, reserved a minimum of 201 MiB of free space for Windows users when splitting the parity.

    Even if some users would prefer to fill the disks to the absolute limit, it seems very likely that most users would prefer not to be constantly informed about an imaginary crisis detected by the OS, or to spend time figuring out how to make it stop.

    Oh well. Estimated 18 hours left at ~1200 MiB/s until sync completes :)

     
  • rubylaser

    rubylaser - 2016-07-07

    I did find one thing that could use a look. snapraid up triggers errors like this on the split parity disks.

    root@loki:~# snapraid up
    Spinup...
    Spunup device '8:177' for disk 'd06' in 8 ms.
    Spunup device '8:129' for disk 'd10' in 10 ms.
    Failed to create dir '/mnt/split-parity/parity2-disk1/.snapraid-spinup'.
    Spunup device '8:145' for disk 'd14' in 16 ms.
    Spunup device '8:193' for disk '2-parity' in 16 ms.
    Spunup device '8:209' for disk 'd01' in 17 ms.
    Spunup device '8:225' for disk 'd15' in 17 ms.
    Failed to create dir '/mnt/split-parity/parity1-disk1/.snapraid-spinup'.
    ...
    Spinup is unsupported in this platform.
    
     
    • Leifi Plomeros

      Leifi Plomeros - 2016-07-08

      No problem with that in windows.

      You do however get that last error message (Spinup is unsupported in this platform) if you don't run snapraid as an administrator in Windows.

      Also worth checking is that you haven't forgotten to set correct smartctl xx -d sat %s in the config.

       
      • rubylaser

        rubylaser - 2016-07-08

        Sorry, I should have explained myself better above. This is running on Ubuntu 16.04 Server and I have smartmontools configured correctly (I don't need to use any of the -d options on my disks to properly query them). This issue is happening because the two disks mentioned are the first disks in my split-parity setup and they are completely out of space. So this ties into your issue above: it appears that SnapRAID should reserve a little space on each disk for common tasks on the system (preventing Windows alerts, allowing the directory that supports spindown to be created, etc.).

        root@loki:~# df -h | grep split | grep disk1
        Filesystem                 Size  Used Avail Use% Mounted on
        /dev/sdp1                  3.7T  3.7T     0 100% /mnt/split-parity/parity2-disk1
        /dev/sdk1                  3.7T  3.7T     0 100% /mnt/split-parity/parity1-disk1
        
         
  • rubylaser

    rubylaser - 2016-07-08

    dupe...

     

    Last edit: rubylaser 2016-07-08
  • Leifi Plomeros

    Leifi Plomeros - 2016-07-08

    I made a small observation which I believe is new and related to the split parity functionality.

    When I sync new files to the fullest data disk, the parity file is expanded before the sync begins and a message about resizing is shown (or it has always been like that and I just didn't notice).

    Wouldn't that point in time be a good opportunity to abort the sync and inform the user that there is too much data on disk xx, instead of attempting to sync when there is not enough space for the parity?

    As far as testing goes, so far so good. Everything seems to be working great and I'm currently doing a huge sync (about 4 TB moved from 2 data disks to 2 other data disks). If that goes well I am very tempted to drop the old v10 setup completely and increase the beta setup to 3 parity levels instead.

     
    • rubylaser

      rubylaser - 2016-07-09

      I did this exact thing a couple of days ago. I dropped v10 completely, and I'm running triple parity on 18 data disks. So far so good :)

       
  • Leifi Plomeros

    Leifi Plomeros - 2016-07-11

    I just noticed that something I requested has turned out a bit too good when using snapraid diff:

    Old behaviour:
    copy E:/Array/Folder/FileName.txt -> F:/Array/Folder/FileName.txt

    New behaviour:
    copy Folder/FileName.txt -> Folder/FileName.txt

    Wanted behaviour:
    copy d1/Folder/FileName.txt -> d2/Folder/FileName.txt

    d1 and d2 are the snapraid names for the data disks.

     

    Last edit: Leifi Plomeros 2016-07-11
    • Leifi Plomeros

      Leifi Plomeros - 2016-07-22

      I just noticed during a fix with -l and -v that snapraid mostly outputs things exactly the way I want them.

      msg:error: Reading data from missing file '<full path>' at offset 2196766720.
      error:3814805:<snapraid disk name>:<local path inside snapraid array>: Read error at position 2095
      entry:0:block:known:bad:<snapraid disk name>:<local path inside snapraid array>:2095:
      fixed:3814805:<snapraid disk name>:<local path inside snapraid array>: Fixed data error at position 2095
      

      So if only snapraid diff did this: <snapraid disk name>:<local path inside snapraid array>, then it would be perfect.

       
