
Failed to grow parity file

2014-11-03 – 2014-11-11
  • cannondale0815

    cannondale0815 - 2014-11-03

    I'm getting this message when running sync, after having run my system for several months without issues:

    Failed to grow parity file '/mnt/DiskP1/snapraid.parity' to size 3977654829056 using fallocate due lack of space.
    

    Ubuntu 14.04, 10 disks, 2 of them parity.

    I read the section in the FAQ about how to reformat disks in Linux to get more space, but since this is an existing parity drive, should I still reformat and then sync again?

     

    Last edit: cannondale0815 2014-11-03
  • Leifi Plomeros

    Leifi Plomeros - 2014-11-03

    Based on the first result I found on Google, it doesn't seem possible to change the number of inodes without reformatting:
    http://unix.stackexchange.com/questions/63062/resize-ext4-partition-to-create-more-inodes

    Run snapraid fix instead of sync in order to keep existing file checksums.
    Then after fix is complete, run sync to include the latest files into parity.

    Alternatively get rid of the last files added on the fullest disk and keep enough free space to avoid full parity disk.

     
  • cannondale0815

    cannondale0815 - 2014-11-03

    Thanks -- the snapraid fix command resulted in the same error -- "Failed to grow parity". I suppose I could reformat the affected parity disk without data loss, right?

     
  • John

    John - 2014-11-04

    As said, on the biggest drive(s), move out the files you added last (and some more, for good measure). "Out" can mean:

    • completely removed
    • on some smaller data drives
    • on some other "biggest" data drive that is not as full
    • outside snapraid (not necessarily on another drive; the files can even stay on the same data drive, just excluded in snapraid.conf).

    Yes, the inode stuff is a PITA.

     

    Last edit: John 2014-11-04
  • cannondale0815

    cannondale0815 - 2014-11-04

    Thanks! I have 8 data drives of equal size; four of those are practically 100% full. Couldn't I simply reformat the one parity drive in question, or is that going about it the wrong way?

     

    Last edit: cannondale0815 2014-11-04
  • John

    John - 2014-11-04

    If you want to do the math first, try to do tune2fs -l /dev/sdxy (sdxy = your parity disk) and see:

    Inode count: 476960
    Inode size: 256

    (those values are from my 2TB ext4 partition formatted with -T largefile4). Multiply the two and I get about 122MB used for inodes (not much). I assume you want ext4 as well. Also, I hope you're already using -m 0 on the parity.

    Your inode count is probably much, much higher; multiply to see what you'd reclaim (the usual default settings eat about 30GB on a 2TB drive).
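The arithmetic above can be scripted straight from the tune2fs output; a rough sketch (the helper name is illustrative, and you would substitute your own partition for /dev/sdxy):

```shell
# Estimate the bytes ext4 spends on inode tables:
# "Inode count" multiplied by "Inode size" from tune2fs -l.
inode_overhead_bytes() {
    # $1 = inode count, $2 = inode size in bytes
    echo $(( $1 * $2 ))
}

# The 2TB largefile4 example above: 476960 inodes of 256 bytes each.
inode_overhead_bytes 476960 256    # 122101760 bytes, about 122MB

# To feed in real numbers (needs root):
#   tune2fs -l /dev/sdxy | grep -E '^Inode (count|size)'
```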

    Other than temporarily giving up one parity while reformatting the drive, I don't see any issue. I don't know precisely what to do so snapraid doesn't scream "murder" (the content file still has references to the parity you want to nuke), but I'm sure there is a way to fix everything "on the fly" while still keeping your second parity.

     
    • Mitchell Deoudes

      I use: mkfs.ext4 -m 0 -i 262144 /dev/sdXX

      to format my data drives. It gives about 10 million inodes for a 3TB disk, using an inode ratio 1/16th of the default (so, 1/16 the number of inodes the default gives). That felt like plenty of inodes, and going any lower would bring diminishing returns in space savings. "-m 0" reclaims additional space that doesn't need to be reserved on a non-system drive. Most of the other options I found interesting were already the defaults.
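For context, stock mke2fs defaults to an inode_ratio of 16384 bytes per inode for ordinary ext4 filesystems, so -i 262144 is 16x sparser. A sketch of the resulting counts (pure arithmetic; the 3TB size is taken from the post above, and the default ratio is the common /etc/mke2fs.conf value, not a figure from this thread):

```shell
# Inodes created for a given filesystem size and bytes-per-inode ratio
# (the value passed to mkfs.ext4 -i).
inodes_at_ratio() {
    # $1 = filesystem size in bytes, $2 = bytes per inode
    echo $(( $1 / $2 ))
}

inodes_at_ratio 3000000000000 262144   # Mitchell's ratio: 11444091 (~11.4 million)
inodes_at_ratio 3000000000000 16384    # common ext4 default: 183105468 (~183 million)
```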


       

      Last edit: Mitchell Deoudes 2015-09-12
    • cannondale0815

      cannondale0815 - 2014-11-05

      Hi John, thanks for the continued help, much appreciated. These are the inode numbers for the parity drive (4TB) in question:

      Inode count:              244195328
      Inode size:               256
      

      And yes, it's ext4. Now, how do these numbers tell me whether reformatting the drive with

      mkfs.ext4 -m 0 -T largefile4
      

      as described in the FAQ would help?

      P.S.: I doubt I originally formatted the parity drives with the "-m 0" option. I used the 'gnome-disks' GUI tool to format them with default values.

       

      Last edit: cannondale0815 2014-11-05
      • John

        John - 2014-11-05

        So your inodes are currently using 244195328*256 = 62514003968 bytes, that's 62GB. A pretty nice chunk you can save (the space used for inodes with largefile4 would be considerably smaller, probably a few hundred megs)!

        So for the parity disk(s) the path is clear: -T largefile4.

        Keep in mind NOT to use largefile4 on the non-parity drives, where you might need more than 500 000 - 1 000 000 inodes. You can/should use what Mitchell mentioned, something like:

        "mkfs.ext4 -m 0 -i 262144 /dev/sdXX
        to format my data drives. It gives about 10mil inodes for a 3TB disk"

        Using -T largefile (one inode per MiB, I think) instead of -i 262144 would give you something like 2.8 million inodes per 3TB filesystem.

        It depends on how many files you currently have (df -i or snapraid -v helps here).

        With -m 0 you might get a nice surprise. Run tune2fs -l and look at "Reserved block count". If it is non-zero, you can set it to zero with tune2fs -m 0 /dev/sdXY (you should probably unmount before and remount after). That usually gets you 5% back :-)
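As a ballpark for what -m 0 frees up: the reserve is a percentage of the total block count. A sketch with illustrative numbers (the block count and block size below are hypothetical values for a 4TB filesystem, not taken from this thread):

```shell
# Bytes held back by the ext4 reserved-blocks percentage.
reserved_bytes() {
    # $1 = total block count, $2 = block size in bytes, $3 = reserved percent
    echo $(( $1 * $3 / 100 * $2 ))
}

# Hypothetical 4TB filesystem: 976754646 blocks of 4096 bytes, default 5% reserve.
reserved_bytes 976754646 4096 5    # roughly 200GB

# The real commands (as root, ideally with the filesystem unmounted):
#   tune2fs -l /dev/sdXY | grep 'Reserved block count'
#   tune2fs -m 0 /dev/sdXY
```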

         
  • cannondale0815

    cannondale0815 - 2014-11-10

    I just want to report back. It took me days of work, and it's still in progress. I went about it in a more careful way: I first copied the existing parity files from the parity drives to different locations, then reformatted the parity drives with the largefile option, then copied the parity files back. Given the size of my parity drives, this took a long time. Now I'm running snapraid fix, and it will likely take the better part of the day to finish.

    Also, I noticed in the process that my partitions were misaligned. Apparently the gnome-disks tool is crap and doesn't create properly aligned partitions; I used gdisk to fix that.

    Now, I have 8 data drives in my array, three are still empty. For those, I re-created the partitions the proper way using gdisk. Since they are 4TB, should I be using Mitchell's suggestion of formatting them with a specific number of inodes, or adjust that number a bit, since my disks are 4TB instead of 3TB? I am mainly storing files in the order of between 2GB and 10GB each.

    Thanks!
    -J

     
  • John

    John - 2014-11-11

    Have a look inside /etc/mke2fs.conf
    For me it shows:

        largefile = {
                inode_ratio = 1048576
                blocksize = -1
        }
        largefile4 = {
                inode_ratio = 4194304
                blocksize = -1
        }
    

    So you can have -i 262144 or -i 1048576 (same as -T largefile) or -i 4194304 (same as -T largefile4). Assuming mke2fs.conf configured as mine, of course.

    That is the number of bytes per inode (i.e. how sparse the inodes are, one every so many bytes, not how many bytes each inode occupies).

    So if you use "-T largefile", for a 4TB disk (4*10^12 bytes) you'll have 4*10^12/1048576 = 3814697 inodes.

    Find "Inode size: 256" as above and you can calculate how much you're wasting. Worst-case scenario: 3814697 inodes x 256 bytes/inode = 976562432 bytes (just under 1GB).

    Now check your data with df -i to get an idea of how many inodes you'll need (probably not many for such large files). 3.8 million is surely more than enough, right?

    We should put this into a FAQ or something; as mentioned elsewhere, I just can't find any documentation that says precisely what -T largefile means!
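The two calculations above, put together as a quick sanity check (the disk size and the 256-byte inode size are the figures already quoted in this thread):

```shell
# -T largefile means inode_ratio 1048576 (one inode per MiB),
# so a 4TB disk gets about 3.8 million inodes, and at 256 bytes
# per inode the tables cost just under 1GB.
disk=4000000000000                 # 4TB in bytes
inodes=$(( disk / 1048576 ))
echo "$inodes"                     # 3814697
echo $(( inodes * 256 ))           # 976562432 bytes, just under 1GB
```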

     

    Last edit: John 2014-11-11

