Yes, this is it - I asked for this "feature". Negative is good and "more negative" is better :-)
It started with a "Wasted" value that would only go down to 0. A positive value, say 50GB, means you need to keep 50GB free on that disk or you run out of parity. If you consume space on that disk with anything not included in snapraid, the waste goes down. Originally it stopped at 0, so you couldn't tell how much margin you had (how much parity headroom remained versus a data disk with lots of files). Now you can tell: if you have -0.1GB waste and you fill that data disk with a hundred thousand files, you'll probably run out of parity. If you have -30GB, you're probably perfectly fine.
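The relationship described above can be sketched roughly as follows. This is a hedged illustration of the idea, not SnapRAID's actual formula; the function name and parameters (`data_free_gb`, `parity_headroom_gb`) are hypothetical:

```python
def wasted_gb(data_free_gb, parity_headroom_gb):
    """Illustrative sketch, NOT SnapRAID's real computation.

    Positive result: roughly how much space you must keep free on the
    data disk to avoid running out of parity.
    Negative result: parity still has headroom even if the data disk
    fills up completely ("more negative" is safer).
    """
    return data_free_gb - parity_headroom_gb

# Old behavior clamped the value at 0, hiding the margin:
def wasted_gb_old(data_free_gb, parity_headroom_gb):
    return max(0.0, data_free_gb - parity_headroom_gb)

print(wasted_gb(50.0, 0.0))    # positive: must keep ~50GB free
print(wasted_gb(10.0, 40.0))   # negative: ~30GB of safety margin
print(wasted_gb_old(10.0, 40.0))  # old clamped report: 0, margin invisible
```

With the clamped version, -0.1GB and -30GB both reported as 0, which is exactly the information the unclamped value now exposes.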
Last edit: John 2015-10-19
Edit:
D'oh! Is this the correct answer?
https://sourceforge.net/p/snapraid/discussion/1677233/thread/885dd732/
I have two snapraid arrays that both report negative wasted space.
How is this possible and is this correct? Should I be concerned?
Last edit: MakOwner 2015-10-18