
Failed to grow parity file even though taken suggested countermeasures

  • Codicus Maximus

    Codicus Maximus - 2016-08-02

    I am trying to perform my very first snapraid sync and I am having trouble:

    Failed to grow parity file '/mnt/P01/snapraid.parity' to size 4347130806272 using fallocate due lack of space.
    

    I am confused as to why that is, since I have 24 4TB data disks and 4 4TB parity disks -- they are the same size. I read the FAQ and followed its advice to format all four of my parity drives with "mkfs.ext4 -m 0 -T largefile4 DEVICE" so that the parity drives would have a bit more usable capacity than my data drives, so I would expect the parity to fit with no problem.

    Interestingly enough, the first files it seems to have trouble with are on D05 -- that is the disk the first "outofparity" messages complain about. In all, it complains about files on D05, D08, D09, D11, D12, and D14, and about enough files to generate 45MB of logs. For some reason it sees no problem with files on any of the other disks, which seems odd to me.

    Here are my disks:

    Data Drives (df output, sizes in 1K blocks):
    3845578572  3639967560    10243752 100% /mnt/D01
    3845578572  3819000168           0 100% /mnt/D02
    3845578572  3643114840     7096472 100% /mnt/D03
    3845578572  3629738304    20473008 100% /mnt/D04
    3845578572  3665820180           0 100% /mnt/D05
    3845578572  3138248440   511962872  86% /mnt/D06
    3845578572  3251234708   398976604  90% /mnt/D07
    3845578572  3661056388           0 100% /mnt/D08
    3845578572  3610368932    39842380  99% /mnt/D09
    3845578572  3804815420           0 100% /mnt/D10
    3845578572  3842569808           0 100% /mnt/D11
    3845578572  3844447456           0 100% /mnt/D12
    3845578572  3844850996           0 100% /mnt/D13
    3845578572  3842702900           0 100% /mnt/D14
    3845578572  3844808680           0 100% /mnt/D15
    3845578572  1711226268  1938985044  47% /mnt/D16
    3845578572       69632  3650141680   1% /mnt/D17
    3845578572       69632  3650141680   1% /mnt/D18
    3845578572       69632  3650141680   1% /mnt/D19
    3845578572       69632  3650141680   1% /mnt/D20
    3845578572     4384704  3645826608   1% /mnt/D21
    3845578572     4384704  3645826608   1% /mnt/D22
    3845578572     4384704  3645826608   1% /mnt/D23
    3845578572     4384704  3645826608   1% /mnt/D24
    
    Parity Drives (df output, sizes in 1K blocks):
    3906388932  3906372548           0 100% /mnt/P01
    3906388932       69632  3906302916   1% /mnt/P02
    3906388932       69632  3906302916   1% /mnt/P03
    3906388932       69632  3906302916   1% /mnt/P04
    
    /mnt/P01 (parity file size, in bytes):
    4000053751808 Aug  2 11:48 snapraid.parity
    
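    For what it's worth, the allocation snapraid is attempting is about 347 GB larger than the parity filesystem can hold -- a quick sanity check in Python, with the sizes copied from the df output and the error message above:

    # /mnt/P01 capacity in bytes (df reports 1K blocks).
    parity_fs_capacity = 3906388932 * 1024   # = 4,000,142,266,368 bytes
    # Size snapraid tried to grow the parity file to (from the error message).
    fallocate_target = 4347130806272

    deficit = fallocate_target - parity_fs_capacity
    print("short by %.0f GB" % (deficit / 1e9))  # ~347 GB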

    My config file is as follows:

    parity /mnt/P01/snapraid.parity
    2-parity /mnt/P02/snapraid.parity
    3-parity /mnt/P03/snapraid.parity
    4-parity /mnt/P04/snapraid.parity
    content /mnt/D21/snapraid.content
    content /mnt/D22/snapraid.content
    content /mnt/D23/snapraid.content
    content /mnt/D24/snapraid.content
    content /srv/snapraid/snapraid.content
    data d01 /mnt/D01/
    data d02 /mnt/D02/
    data d03 /mnt/D03/
    data d04 /mnt/D04/
    data d05 /mnt/D05/
    data d06 /mnt/D06/
    data d07 /mnt/D07/
    data d08 /mnt/D08/
    data d09 /mnt/D09/
    data d10 /mnt/D10/
    data d11 /mnt/D11/
    data d12 /mnt/D12/
    data d13 /mnt/D13/
    data d14 /mnt/D14/
    data d15 /mnt/D15/
    data d16 /mnt/D16/
    data d17 /mnt/D17/
    data d18 /mnt/D18/
    data d19 /mnt/D19/
    data d20 /mnt/D20/
    data d21 /mnt/D21/
    data d22 /mnt/D22/
    data d23 /mnt/D23/
    data d24 /mnt/D24/
    exclude /lost+found/
    

    Results of the "snapraid status" command:

    Loading state from /mnt/D21/snapraid.content...
    Using 6543 MiB of memory for the FileSystem.
    SnapRAID status report:
    
       Files Fragmented Excess  Wasted  Used    Free  Use Name
                Files  Fragments  GB      GB      GB
       11325       0       0       -    3728     210  94% d01
       13557       0       0       -    3911      27  99% d02
       16985       0       0       -    3730     207  94% d03
       13815       0       0       -    3764     221  94% d04
       45619       0       0       -    4341     194  95% d05
       50686       0       0       -    3404     731  82% d06
       38942       0       0       -    3330     608  84% d07
     1262336       0       0       -    3748     188  95% d08
     2273172       0       0       -    3709     240  94% d09
      333550       0       0       -    3919      41  98% d10
      611813       0       0       -    3941       3  99% d11
      139140       0       0       -    3999       1  99% d12
      109118       0       0       -    3938       0  99% d13
       17144       0       0       -    4126       3  99% d14
        4393       0       0       -    3944       0  99% d15
       10398       0       0       -    1757    2185  44% d16
           0       0       0       -       0       -   -  d17
           0       0       0       -       0       -   -  d18
           0       0       0       -       0       -   -  d19
           0       0       0       -       0       -   -  d20
           0       0       0       -       0       -   -  d21
           0       0       0       -       0       -   -  d22
           0       0       0       -       0       -   -  d23
           0       0       0       -       0       -   -  d24
     --------------------------------------------------------------------------
     4951993       0       0     0.0   59298    4866  92%
    The array is empty.
    
     

    Last edit: Codicus Maximus 2016-08-02
  • Leifi Plomeros

    Leifi Plomeros - 2016-08-02

    From the manual:

    For each file, even of few bytes, a whole block of parity is allocated, and with many files this may result in a lot of unused parity space. And when you completely fill the parity disk, you are not allowed to add more files in the data disks. Anyway, the wasted parity doesn't sum between data disks. Wasted space resulting from a high number of files in a data disk, limits only the amount of data in such data disk and not in others.

    As approximation, you can assume that half of the block size is wasted for each file. For example, with 100000 files and a 256 KiB block size, you are going to waste 13 GB of parity, that may result in 13 GB less space available in the data disk.

    Specific examples in your setup:
    Example d01: 11,325 files x 0.25 MiB / 2 = ~1,415 MiB of free space needed.
    Example d09: 2,273,172 files x 0.25 MiB / 2 = ~284,146 MiB of free space needed.
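
    To reproduce the same estimate for every disk, here is a rough Python sketch (the 256 KiB block size is the snapraid default and an assumption here -- check the "blocksize" option in your config):

    # Wasted parity: roughly half a block per file (see the manual excerpt above).
    BLOCK_SIZE_MIB = 0.25  # 256 KiB default block size

    # File counts per disk, copied from the "snapraid status" output.
    file_counts = {"d01": 11325, "d05": 45619, "d09": 2273172}

    for disk, files in sorted(file_counts.items()):
        wasted_mib = files * BLOCK_SIZE_MIB / 2
        print("%s: ~%.0f MiB of free space needed" % (disk, wasted_mib))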

     
  • Codicus Maximus

    Codicus Maximus - 2016-08-02

    Ah! Thanks a bunch for your help. This is a very critical piece of information, and the fact that it's tucked away in the "BlockSize" setting documentation seems poor to me. The "Getting Started" section neglects to mention anything about parity file waste space, and since other RAID mechanisms operate on the expectation that the parity drive does not need to be larger than the data drives, I think it would be very valuable to include this information in the "Getting Started" section. I also see value in adding it to the FAQ under "Why in 'sync' do I get the error 'Failed to grow parity file 'xxx' to size xxx due lack of space.'?" since the info there right now does not mention this either; it seems to imply that you don't need any additional space beyond the size of the data drive.

    Unfortunately this means that four of the 4TB drives I bought for my server are essentially "useless" -- now I need to buy four 5TB drives. I just wish I had known this beforehand. I'm crossing my fingers that the old clunker I have these in even supports >4TB drives (I had to fiddle with the BIOS, HBA card BIOS, drivers, etc. just to get it to support >2TB).

    Thanks again.

     

    Last edit: Codicus Maximus 2016-08-02
    • Leifi Plomeros

      Leifi Plomeros - 2016-08-02

      Wouldn't it make more sense to leave free space on the data disks?
      Sure, ~300 GiB of free space on the disk with 2 million+ files is a lot, but it is also much less than the unused space you will end up with on the parity disks...

       
      • Codicus Maximus

        Codicus Maximus - 2016-08-03

        I am using mhddfs to pool the drives together, so leaving free space on a drive is not an option -- any free space will get used. I could set its "mlimit" to 300GB to keep that much space free; however, it would then keep that much free space on ALL 24 drives, adding up to a 7200GB loss.

         
  • John

    John - 2016-08-03

    One solution, granted fiddlier than one would like, would be to exclude some files/folders from snapraid on the disks that are close to full and have a large number of files (i.e. you don't need to keep the space free; you can use it, just without snapraid protecting it).

    Apart from things you maybe just don't care about (for example, I have a lot of Linux ISOs and a few Kiwix versions of Wikipedia for offline use -- they have saved the day many times when my internet was down; no point in protecting those too much, since they become obsolete anyway, and if I lose them I just download the next current version when I need it), you might have things that SHOULD NOT BE INCLUDED in snapraid anyway, because they change: virtual machines, temporary download folders, or any other files that are expected to change at any time.
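
    For example, hypothetical exclude rules in snapraid.conf could look like the following (the paths are placeholders, not from your setup; patterns ending in "/" match directories):

    exclude /downloads/incomplete/
    exclude /vms/
    exclude *.tmp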

     
    • Codicus Maximus

      Codicus Maximus - 2016-08-03

      Yeah, that's what I'm doing for now -- a combination of removing backed-up folders from SnapRAID and rebalancing some files between drives in the mhddfs array. Ideally I would like it to be "set it and forget it" and not have to worry about the fiddliness or about how many files are on each drive, especially since I'm using mhddfs to pool all these volumes together; still, it saves me from having to buy more hard drives for now, heh.

      Regarding files which change -- correct me if I'm wrong, but from a data integrity standpoint, files which are excluded from SnapRAID but which still reside on one of the data drives may still make other files unrestorable if the excluded files change. In other words, files which change (such as the temporary downloads folder you mentioned) should not reside on any of the drives configured as "data" in the config at all, not simply be covered by an "exclude" statement.

       
      • Leifi Plomeros

        Leifi Plomeros - 2016-08-03

        Nope -- files outside of, or excluded from, the SnapRAID array have no effect on the files inside it.

        To make sure you are not overfilling disks, you could put X GB of junk files in an excluded folder on each disk. You still need to figure out roughly how many files you plan to put on each disk in order to put the appropriate amount of junk there, though.
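
        Something like this Python sketch would create such a junk file (the path, folder name, and size are examples only; pair it with a matching "exclude" line in snapraid.conf):

        import os

        # Reserve ~300 GB on one data disk via a placeholder file in an excluded folder.
        path = "/mnt/D09/.spacer/reserved.bin"  # pair with: exclude /.spacer/
        size = 300 * 10**9                      # bytes to keep off-limits to real data

        os.makedirs(os.path.dirname(path), exist_ok=True)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
        try:
            # posix_fallocate reserves real blocks (unlike a sparse file),
            # so it fails immediately if the disk lacks the space.
            os.posix_fallocate(fd, 0, size)
        finally:
            os.close(fd)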

         

