
Content file when disk is 100% full (Linux / EXT4 / mhddfs)

2014-05-13
2014-05-18
  • cannondale0815

    cannondale0815 - 2014-05-13

    One of my data disks, which also holds a copy of the content file, is now 100% full (literally 0 bytes available). Will this pose a problem for the content file on that disk? Do I need to ensure that there are always a few GB of disk space remaining so that the content file has room to grow?

    I am using mhddfs across all my data disks, so it really is mhddfs that filled my first disk to the brim. If I need to keep a few GB free on that disk, what is the best way to go about it with mhddfs?

     
  • rubylaser

    rubylaser - 2014-05-13

    What I usually do is use tune2fs to have all of my data disks leave 1% of their disk space reserved. On my parity disks I have this percentage set to zero. Another method would be to use mhddfs's mlimit option to set a minimum amount of available space per drive.

    -o mlimit=size[m|k|g]
    
        a free space size threshold:
        If a drive has less free space than the specified threshold, another drive will be chosen when creating a new file. If all drives have less free space than the threshold, the drive containing the most free space will be chosen.
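
    For example, applying both suggestions (device names, mount points, and the 10g threshold are only illustrative; tune2fs must be run as root):

    ```shell
    # Leave 1% reserved on a data disk and none on a parity disk (example devices):
    tune2fs -m 1 /dev/sdb1
    tune2fs -m 0 /dev/sdc1

    # Or mount the mhddfs pool with a minimum free-space threshold per drive:
    mhddfs /mnt/Disk1,/mnt/Disk2 /mnt/pool -o mlimit=10g
    ```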
    
     

    Last edit: rubylaser 2014-05-13
  • John

    John - 2014-05-13

    Having no space on the disk(s) holding the content file is asking for trouble, for sure.

    I'm just happy I learned about mhddfs, it's just what the doctor ordered! I'll have to try it.

     
  • cannondale0815

    cannondale0815 - 2014-05-13

    Thanks ruby -- in reading up on the mlimit option, I also learned that the default (when nothing is specified) is 4GB, yet when I do a df on my data disks it says '0' available. The content file itself is only around 500MB, so that alone could not have filled up the 4GB that mhddfs was supposed to leave free on the data disk.

    Something is fishy here. Is there another way (not df) to check for the available disk space? Maybe df isn't reporting it correctly...

    On another note, so far snapraid sync has not complained about not having enough space for the content file. And despite seemingly 0 available space, the file still seems to grow... (so there must still be some disk space left that is unreported).

    Edit: I think I just don't fully understand the output the df command gives me. Here is what I see:

    user@NAS:/mnt/Disk1$ df .
    Filesystem      1K-blocks       Used Available Use% Mounted on
    /dev/sdb1      3845579580 3655185712         0 100% /mnt/Disk1
    

    and also:

    user@NAS:/mnt/Disk1$ df -h .
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdb1       3.6T  3.5T     0 100% /mnt/Disk1
    

    So I do still have space available, if I look at the difference between Size and Used, but the Avail shows 0 in both cases.

     

    Last edit: cannondale0815 2014-05-13
    • jwill42

      jwill42 - 2014-05-18

      Are you running snapraid as root?

      If your content file is indeed growing, then running snapraid as root is the only explanation I can think of. Well, it is possible to define a user ID that is allowed to write to the reserved space, but I doubt you have done that, so we are left with only root.

      The 1K-blocks and Size fields from df include any reserved space in the ext4 filesystem (I'll call the space those fields report "Free space"). The Avail field does NOT include any reserved space: Free space = Available space + Reserved space. A non-root user can only write to the Available space, but root can write to all of the Free space.
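
      Plugging the 1K-block numbers from the df output above into that equation makes the reserved space visible:

      ```shell
      # 1K-block figures taken from the df output earlier in this thread
      total=3845579580
      used=3655185712
      avail=0
      reserved=$((total - used - avail))   # Free space minus Available space
      echo "$reserved"                     # 190393868 1K-blocks, about 181 GiB (~5%)
      ```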

       

      Last edit: jwill42 2014-05-18
  • abubin

    abubin - 2014-05-14

    When it says 0 available, it means there is no space left. You can try copying some files into this partition; you will get a disk-full error.

    Have you tried the command suggested by ruby, using tune2fs to set the reserved space to 0? You don't need reserved space on a non-system partition, so you can free up 5% more space from your drive, which is something like 200GB. That is a lot of space wasted on being reserved.
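
    As a quick sanity check on that figure, 5% of the 1K-block size reported by df earlier in this thread works out to roughly 197 GB:

    ```shell
    size_kb=3845579580                 # 1K-blocks reported by df for /dev/sdb1
    reserved_kb=$((size_kb * 5 / 100)) # the default 5% ext4 reservation
    echo "$reserved_kb"                # 192278979 KiB, i.e. roughly 197 GB
    ```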

    You can also move the content file to another partition that has more free space.

     

    Last edit: abubin 2014-05-14
    • cannondale0815

      cannondale0815 - 2014-05-14

      Thanks -- I have not used tune2fs yet. The content file still grows when I run snapraid sync, even on the supposedly full drive. So it cannot be truly full yet, even though 0 is reported.

      Nevertheless, I will heed your advice to reduce the amount of reserved space, seeing that I truly don't need it on data drives.

      Thanks!

       
  • therealjmc

    therealjmc - 2014-05-14

    Do you have a lot of small files on your disks? Have you checked how many inodes are free on the disk?

    df -i /mnt/Disk1

     

    Last edit: therealjmc 2014-05-14
    • cannondale0815

      cannondale0815 - 2014-05-14

      Thanks, inodes look fine:

      user@NAS:/mnt/Disk1$ df -i .
      Filesystem        Inodes IUsed     IFree IUse% Mounted on
      /dev/sdb1      244195328  4219 244191109    1% /mnt/Disk1
      
       
  • Andrea Mazzoleni

    Hi,

    If the content file on one disk cannot be written because that disk is full, the whole sync process will fail, and none of the content files will be updated.

    SnapRAID first writes a new copy of the content file on every disk, and only if all of them are written correctly are these new copies renamed over the old ones.

    Anyway, if this problem happens, you only need to free some space on the reported disk and run sync again.
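
    That write-then-rename pattern can be sketched like this (the directory and file names here are only illustrative, not SnapRAID's actual paths):

    ```shell
    # Write the new copy under a temporary name first, then atomically
    # rename it over the old one, so a failed write never destroys the
    # previous content file. Names below are hypothetical.
    dir=$(mktemp -d)
    printf 'old content\n' > "$dir/snapraid.content"
    printf 'new content\n' > "$dir/snapraid.content.tmp"
    mv "$dir/snapraid.content.tmp" "$dir/snapraid.content"
    cat "$dir/snapraid.content"   # prints: new content
    rm -r "$dir"
    ```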

    Ciao,
    Andrea

     
  • cannondale0815

    cannondale0815 - 2014-05-14

    Hi Andrea! Thanks and welcome back! :)

     
  • mikehunt114

    mikehunt114 - 2014-05-18

    Note that there can be a discrepancy between what df -h reports and what is actually free/used. The kernel reduces the available space shown by df -h by the amount of space reserved by the filesystem, which defaults to 5% for ext4 unless you specified otherwise when creating the filesystem. Also of note: root can write to this reserved space, but normal users cannot. This means it is possible to write beyond 0% free space if the writing is done by root, so you may or may not notice an issue. FWIW, gparted will display the actual used/free space.

    Hope I've got that right, I'm going from memory and it's been a while since I set my stuff up.

     
    • rubylaser

      rubylaser - 2014-05-18

      You have this correct. tune2fs can be used to set this percentage of reserved space to zero even after the filesystem is created. I like to set my parity disks to have zero reserved space and my data disks to have 2% reserved. This gives me a nice safety cushion.

       
    • jwill42

      jwill42 - 2014-05-18

      You have it mostly correct, but your comment about what "df -h" reports is somewhat misleading. With ext4 the situation is more complicated than has been explained so far in this thread.

      I divide the "missing" space in an ext4 filesystem into three categories. And by "missing", I mean the difference between the size of the partition (block device) on which the filesystem is installed and the space for user data that can be written to the filesystem.

      1) Low-level filesystem overhead : this is mostly accounted for by the inode tables that mkfs.ext4 reserves space for when the filesystem is created. Roughly 1/64 of the size of the partition will be reserved for inode tables. By default, it also includes 128MiB for the journal.

      2) Reserved space : this is the default 5% space that is reserved for only root to write to. As has been stated, this can be reduced using tune2fs.

      3) High-level filesystem overhead : this is the amount of used space (as reported by df) that shows up when the filesystem is completely empty. It depends on the size of the filesystem. It is about 70MB on a 4TB default ext4 filesystem.

      You can determine the size of (1) by subtracting the "1B-blocks" figure reported by df -B1 from the size of the partition (block device) as reported by fdisk -l /dev/sd__.

      The reserved space (2) can be determined with tune2fs -l /dev/sd__. Look at the "Reserved block count" and "Block size" (usually 4096).

      With those preliminaries out of the way, now I can explain what was misleading about your comment on df -h. That command will actually show you BOTH the "Free space" and the "Available space", where I define Free space = Available space + Reserved space. The "Size" field from df -h will be the Free space, and the "Avail" field will be the Available space.
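
      For example, with hypothetical tune2fs -l values chosen to be consistent with the df output earlier in this thread:

      ```shell
      # Hypothetical figures as tune2fs -l /dev/sdb1 might report them:
      reserved_block_count=47598467
      block_size=4096
      echo $((reserved_block_count * block_size))  # 194963320832 bytes reserved (~181 GiB)
      ```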

       

      Last edit: jwill42 2014-05-18
