I feel like I am doing something right, and something wrong. I initially set up SnapRAID some time ago using two 14TB drives, one as parity and one as data. This was working well, and recently I was finally able to pick up a set of drives to add to the pool and increase my storage space. Everything seemed to go well, and I started an rclone transfer to pull my offsite data down to a local copy spread across the drives. However, after some time it started throwing a "no space left on device" error, with the original data drive full but the other drives untouched.
df -h is what makes me believe I did something right, as mergerfs is showing the 64TB size I expect for the pool:
Parity and the original data disk are now full, but the recently added data disks have had no data moved to them. While I don't have great familiarity with Linux, I've spent some time with this server now and followed multiple guides, so I've gotten better with some things. My fstab file and snapraid.conf file both seem to be correct from what I can compare with different guides, so I must be missing something but can't figure out what it is.
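For anyone else comparing symptoms, the imbalance shows up clearly when you list the pooled mount alongside the individual branch disks: the mergerfs pool reports aggregate free space even while one branch is completely full.

```shell
# Show free space for every mounted filesystem. The mergerfs pool
# line reports the combined size of all branches, while each branch
# disk is listed separately, so a full branch stands out even when
# the pool itself still shows plenty of free space.
df -h
```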
This is an issue with your create policy in mergerfs, and is not related to SnapRAID. Since you didn't define a create policy in your mergerfs mount line in /etc/fstab, it is defaulting to epmfs, which is a path-preserving create policy. I would suggest you umount /mnt/storage, then add this to your mergerfs mount options in fstab. The mfs policy is not path preserving and will use the disk with the most free space.
category.create=mfs
Then, remount. Since you already have moveonenospc enabled, that should be all that it takes to get things working.
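For reference, a sketch of what the mount line might look like with the policy added (the branch paths and the other options here are illustrative, not the poster's actual entry; keep whatever options you already have and just append category.create=mfs):

```
# Example /etc/fstab entry for a mergerfs pool -- hypothetical
# paths and options; the key addition is category.create=mfs:
/mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,moveonenospc=true,category.create=mfs 0 0
```

After saving fstab, `umount /mnt/storage` and mount it again (e.g. `mount /mnt/storage` or `mount -a`) so mergerfs picks up the new option.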
Last edit: rubylaser 2023-07-16
This seems to have fixed the issue I was having. Thank you very much for your assistance. Following guides and videos makes it easy to miss something when I'm not familiar with the specifics.
Great news! I am happy to help. Good luck on your journey to learning more about the great combination of SnapRAID and mergerfs!