
SnapRAID Help with 72 drive setup...

Help
Vorpel
2014-06-02
2014-11-10
  • Vorpel

    Vorpel - 2014-06-02

    I'm looking for help with setting up SnapRaid on a large home setup (200TB+). Here is my setup:

    Server 1 (96TB raw): Supermicro SC846TQ chassis with 24 Seagate 4TB hard drives connected to 3x Supermicro SAT-MV2 SATA cards. Currently running Windows Server 2008R2, but I'll run whatever is best to get the job done. Supermicro H8DME-2 mobo with dual AMD 6-core Opteron procs and 32GB memory. This server also has an Areca ARC-1680ix-8 card for connecting to servers 2 & 3 HP SAS cards.

    Server 2 (84TB raw): Supermicro SC846TQ chassis with 12 Seagate 4TB hard drives and 12 mixed 3TB hard drives all connected to a HP SAS Expander card (rev 2.10). Currently running Windows Server 2008R2 (this is my Domain Controller). Supermicro H8DME-2 mobo with dual AMD 6-core Opteron procs and 32GB memory.

    Server 3 (46TB raw): Supermicro SC846TQ chassis with 12 mixed 3TB hard drives and 12 empty bays all connected to a HP SAS Expander card (rev 2.08). Currently running Windows Server 2012 (not running much on this - it is primarily for the hard drives). Supermicro H8DME-2 mobo with dual AMD 4-core Opteron procs and 32GB memory.

    I currently have all 60 drives connected to Server #1 and they are mounted as NTFS folders (end goal is 72 drives). All drives are currently formatted with NTFS, and only about 1/3 of the space is currently used (upgraded from a couple of 16x 1.5TB Raid6 arrays, so wanted more space to grow into).

    My initial thoughts were to setup the drives like this:

    Server #1:
            Drives 1-10 set up as data
            Drives 11-12 set up as parity

            Drives 13-22 set up as data
            Drives 23-24 set up as parity

            Drives 25-34 set up as data
            Drives 35-36 set up as parity

            Drives 37-46 set up as data
            Drives 47-48 set up as parity

            Drives 49-58 set up as data
            Drives 59-60 set up as parity


    I am good with committing 1 out of 6 drives for parity.
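
    To make that concrete, a per-group config might look something like the sketch below. The C:\Mounts\... paths and the d01/d02 disk names are placeholders for however the NTFS mount folders are actually named; the parity, 2-parity, content and data lines are the standard snapraid.conf directives:

    ```
    # snapraid.conf for group 1 (drives 1-12) -- all paths are placeholders
    parity C:\Mounts\disk11\snapraid.parity
    2-parity C:\Mounts\disk12\snapraid.2-parity

    # keep more than one copy of the content file, on different disks
    content C:\snapraid\group1\snapraid.content
    content C:\Mounts\disk01\snapraid.content

    data d01 C:\Mounts\disk01\
    data d02 C:\Mounts\disk02\
    ...
    data d10 C:\Mounts\disk10\
    ```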

    The data is 98% video (BluRay rips, DVD rips, tons of video capture) and doesn't ever change.

    I played with SnapRAID yesterday and got it set up on 12 drives (10 data, 2 parity).

    Goals:

    My ultimate goal is to have one large pool of disks that I can share out on my network so that I don't have to manually manage what is where. Along with that, the data needs to be protected.

    Questions:

    Am I heading down the right path with this setup? I need parity as I know hard drives fail, and I want dual parity.

    How do you do multiple SnapRaid setups? I didn't see how to do multiple configs (though it was a quick scan through the readme) - is this accomplished by having multiple SnapRaid setups/directories?

    If I did set this up this way, would I be able to setup disk pooling across all 5 SnapRaid drive groups?

    I welcome all advice and constructive criticism.

    Thanks!

    -dc

     
    • Andrew T Finnell

      Is this a setup you have running already? Were you able to get the 4TB drives to work with your SAT-MV2 controllers? If so, did you have to upgrade the firmware to the 6.6 version? I was having difficulty getting them to recognize a 3TB drive, and I was concerned that updating the firmware would blow away the controller's ability to read my existing drives.

       
  • Leifi Plomeros

    Leifi Plomeros - 2014-06-02

    Yes, to run multiple setups it would be easiest to simply have separate SnapRAID folders with unique config files... Given that you have 200 TB of disk space, you can probably afford a few extra folders of 1500 kilobytes each :)
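
    For example, something along these lines (the directory names here are made up; the -c/--conf option is how snapraid selects which config file to use):

    ```
    C:\snapraid\group1\snapraid.conf    <- disks 1-12
    C:\snapraid\group2\snapraid.conf    <- disks 13-24
    C:\snapraid\group3\snapraid.conf    <- disks 25-36

    snapraid -c C:\snapraid\group1\snapraid.conf sync
    snapraid -c C:\snapraid\group2\snapraid.conf sync
    snapraid -c C:\snapraid\group3\snapraid.conf sync
    ```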

    Not sure why you would want to limit each SnapRAID setup to 12 disks when you could just as well use a bigger setup with ~20 data disks and 3-4 parity disks each.

    As long as all 24 disks are in the same physical server chassis, I would assume it is preferable for any 4 disks to be able to break down without losing any data, compared to 3 unlucky breakdowns resulting in lost data.
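
    The arithmetic behind that point can be sketched quickly (equal-size 4 TB disks assumed for simplicity; the helper name is made up): both layouts spend the same 1 drive in 6 on parity, but the bigger group survives more simultaneous failures.

    ```python
    # Compare the two group layouts discussed above (illustrative sketch only).
    def group_stats(data_disks, parity_disks, tb_per_disk=4):
        total = data_disks + parity_disks
        return {
            "usable_tb": data_disks * tb_per_disk,
            "parity_fraction": parity_disks / total,   # share of drives spent on parity
            "tolerated_failures": parity_disks,        # any N disks in the group may fail
        }

    small = group_stats(10, 2)   # 12-drive group: 10 data + 2 parity
    large = group_stats(20, 4)   # 24-drive group: 20 data + 4 parity

    # Same 1/6 parity overhead either way, but the 24-drive group
    # tolerates any 4 failures where the 12-drive group tolerates only 2.
    print(small)
    print(large)
    ```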

     
  • Vorpel

    Vorpel - 2014-06-03

    Thanks for the reply. I wasn't sure what the performance hit would be with 12 drives (10 data/2 parity) vs 24 drives (20 data/4 parity). If it isn't that big of a deal, then fewer groups is better - i.e. I'll just do 3 x 24-drive groups.

    Regarding the pooling with another product, does it matter if the disks I want to pool are in different SnapRAID groups?

    Thanks again - I'm looking forward to getting this beast setup!

    -dc

     
    • Leifi Plomeros

      Leifi Plomeros - 2014-06-03

      Regarding the performance hit, it shouldn't be any problem in theory, but it would be very simple to test... Just set it up with 10 data disks and start building parity... Then stop after a few minutes and delete the parity and content files.
      Add the remaining disks to the config file and start again to see if there is any performance difference to be concerned about.
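
      That dry run could look roughly like this (all paths are placeholders; interrupting a sync is safe because during sync SnapRAID writes only to the parity and content files, never to the data disks):

      ```
      snapraid -c C:\snapraid\test\snapraid.conf sync    # 10 data disks configured
      (let it run a few minutes, note the throughput, then Ctrl+C)

      del C:\Mounts\disk11\snapraid.parity
      del C:\Mounts\disk12\snapraid.2-parity
      del C:\snapraid\test\snapraid.content

      (add the remaining data disks to the config file)
      snapraid -c C:\snapraid\test\snapraid.conf sync    # compare the throughput
      ```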

      I think there is a bigger risk of the controller cards or some other interface becoming a bottleneck when running 24 disks at full speed in parallel than of SnapRAID itself being one.

      In any case, SnapRAID makes no changes to the data disks, so it is completely safe to test.

      Regarding pooling I have very little advice, but I think many would be interested to know which solution you finally end up with.

       

