
Added data disks not being used


    Bel Lockhaven - 2023-07-16

    I feel like I am doing something right and something wrong at the same time. I initially set up SnapRAID some time ago using two 14TB drives, one as parity and one as data. This was working well, and recently I was finally able to pick up a set of drives to add to the pool and increase my storage space. Everything seemed to go well, and I started an rclone transfer to copy my offsite data down to a local copy spread across the drives. However, after some time it started failing with a "no space left on device" error, with the original data drive full but the others untouched.

    df -h is what makes me believe I did something right, as mergerfs shows the 64TB size I expect from the pool:

    Filesystem                         Size  Used Avail Use% Mounted on
    tmpfs                              4.0G  3.9M  4.0G   1% /run
    /dev/mapper/ubuntu--vg-ubuntu--lv   98G   81G   13G  87% /
    tmpfs                               20G     0   20G   0% /dev/shm
    tmpfs                              5.0M     0  5.0M   0% /run/lock
    mergerfs                            64T   12T   51T  20% /mnt/storage
    /dev/sda2                          2.0G  254M  1.6G  14% /boot
    /dev/sda1                          1.1G  6.1M  1.1G   1% /boot/efi
    /dev/sdb1                           13T   28K   13T   1% /mnt/disk03
    /dev/sdh1                           13T   28K   13T   1% /mnt/disk04
    /dev/sdc1                           13T   28K   13T   1% /mnt/disk02
    /dev/sdg1                           13T   25G   13T   1% /mnt/disk05
    /dev/sdf2                          5.5T  1.8T  3.4T  35% /mnt/secondary01
    /dev/sde1                           13T   12T  186G  99% /mnt/disk01
    /dev/sdd1                           13T   12T  182G  99% /mnt/parity01
    gdrive:                            1.1P   66T  1.0P   7% /mnt/gdrive
    seedbox:                           1.0P     0  1.0P   0% /mnt/seedbox
    tmpfs                              4.0G  4.0K  4.0G   1% /run/user/1001
    tmpfs                              4.0G  4.0K  4.0G   1% /run/user/1000
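The imbalance is easy to see if you filter that output down to just the data disks. As a quick illustration, the relevant lines from the paste above are inlined here as sample input:

```shell
# Filter the df -h output down to the SnapRAID data disks and their usage.
# The heredoc lines are copied verbatim from the paste above as sample input.
usage=$(awk '$6 ~ /^\/mnt\/disk0[1-5]$/ { printf "%-12s %s\n", $6, $5 }' <<'EOF'
/dev/sdb1   13T   28K   13T   1% /mnt/disk03
/dev/sdh1   13T   28K   13T   1% /mnt/disk04
/dev/sdc1   13T   28K   13T   1% /mnt/disk02
/dev/sdg1   13T   25G   13T   1% /mnt/disk05
/dev/sde1   13T   12T  186G  99% /mnt/disk01
EOF
)
printf '%s\n' "$usage"
```

Only /mnt/disk01 carries data; disk02 through disk05 are effectively empty.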
    

    Parity and the original data disk are now full, but the recently added data disks have not had any data written to them. While I don't have great familiarity with Linux, I've spent some time with this server and multiple guides now, so I've gotten better with some things. My fstab file:

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    # / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
    /dev/disk/by-id/dm-uuid-LVM-LR3JAffs5mFXqzpLaQWaNEjjPJ4lNvEmeZG6d45IHGWFZfvEEeW3pFC6N0wDXMsd / ext4 defaults 0 1
    # /boot was on /dev/sdb2 during curtin installation
    /dev/disk/by-uuid/c4f1c5b7-dcef-407f-8bfa-ff2651579207 /boot ext4 defaults 0 1
    # /boot/efi was on /dev/sdb1 during curtin installation
    /dev/disk/by-uuid/4227-E912 /boot/efi vfat defaults 0 1
    /swap.img       none    swap    sw      0       0
    #2TB WD - Failed - Removed
    #/dev/disk/by-id/wwn-SSN-REDACTED-part1 /mnt/secondary01 ext4 defaults 0 0
    
    #6TB Replacement for Failed Drive - Slot 02
    /dev/disk/by-id/wwn-SSN-REDACTED-part2 /mnt/secondary01 ext4 defaults 0 0
    
    #14TB slot 03
    /dev/disk/by-id/wwn-SSN-REDACTED-part1 /mnt/parity01 ext4 defaults 0 0
    
    #14TB slot 04
    /dev/disk/by-id/wwn-SSN-REDACTED-part1 /mnt/disk01 ext4 defaults 0 0
    #14TB slot 05 - Seagate Exos
    /dev/disk/by-id/wwn-SSN-REDACTED-part1 /mnt/disk02 ext4 defaults 0 0
    #14TB slot 06 - Seagate Exos
    /dev/disk/by-id/wwn-SSN-REDACTED-part1 /mnt/disk03 ext4 defaults 0 0
    #14TB slot 07 - Seagate Exos
    /dev/disk/by-id/wwn-SSN-REDACTED-part1 /mnt/disk04 ext4 defaults 0 0
    #14TB slot 08 - Seagate Exos
    /dev/disk/by-id/wwn-SSN-REDACTED-part1 /mnt/disk05 ext4 defaults 0 0
    
    
    /mnt/disk* /mnt/storage fuse.mergerfs defaults,nonempty,allow_other,use_ino,cache.files=off,moveonenospc=true,dropcacheonclose=true,minfreespace=200G,fsname=mergerfs 0 0
    

    and my snapraid.conf file:

    # Parity location(s)
    1-parity /mnt/parity01/snapraid.parity
    #2-parity /mnt/parity02/snapraid.parity
    
    # Content file location(s)
    content /var/snapraid.content
    content /mnt/disk01/.snapraid.content
    #content /mnt/disk02/.snapraid.content
    
    # Data disks
    data d1 /mnt/disk01
    data d2 /mnt/disk02
    data d3 /mnt/disk03
    data d4 /mnt/disk04
    data d5 /mnt/disk05
    
    # Excludes hidden files and directories
    exclude *.unrecoverable
    exclude /tmp/
    exclude /lost+found/
    exclude downloads/
    exclude appdata/
    exclude *.!sync
    

    Both seem correct from what I can compare against different guides, so I must be missing something, but I can't figure out what it is.


    rubylaser - 2023-07-16

    This is an issue with your create policy in mergerfs, and is not related to SnapRAID. Since you didn't define a create policy in your mergerfs mount line in /etc/fstab, it is defaulting to a path-preserving create policy. I would suggest you umount /mnt/storage, then add this to your mergerfs mount options in fstab. This policy ("most free space") is not path preserving and will use the disk with the most free space.

    category.create=mfs
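For reference, a complete mount line with the policy added could look like this (this is just the mergerfs line from the fstab above with category.create=mfs appended to the option list; everything else is unchanged):

```
/mnt/disk* /mnt/storage fuse.mergerfs defaults,nonempty,allow_other,use_ino,cache.files=off,moveonenospc=true,dropcacheonclose=true,minfreespace=200G,category.create=mfs,fsname=mergerfs 0 0
```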
    

    Then, remount. Since you already have moveonenospc enabled, that should be all that it takes to get things working.
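To build intuition for why the default behaved that way, here is a toy sketch of a "most free space" pick in shell. This is an illustration only, not mergerfs code: mfs chooses the branch whose backing filesystem has the most available space, while a path-preserving policy first discards any branch where the target path does not already exist, which is why brand-new, empty disks were never chosen. Here /tmp and /var/tmp stand in for the /mnt/disk* branches:

```shell
# Toy sketch of the mergerfs "mfs" create policy (illustration only):
# among the candidate branches, pick the one with the most available space.
pick_mfs() {
  best="" best_avail=-1
  for branch in "$@"; do
    # available KiB on the filesystem backing this branch
    avail=$(df --output=avail -k "$branch" | tail -n 1 | tr -d ' ')
    if [ "$avail" -gt "$best_avail" ]; then
      best_avail=$avail
      best=$branch
    fi
  done
  printf '%s\n' "$best"
}

# A path-preserving policy would first drop any branch lacking the target
# directory, so newly added, empty disks would never make it to this step.
pick_mfs /tmp /var/tmp
```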


    Last edit: rubylaser 2023-07-16

    Bel Lockhaven - 2023-07-17

    This seems to have fixed the issue I was having. Thank you very much for your assistance. Following guides and videos makes it easy to miss something when I'm not familiar with the specifics.


      rubylaser - 2023-07-17

      Great news! I am happy to help. Good luck on your journey to learning more about the great combination of SnapRAID and mergerfs!

