I've read the snapraid manual as well as several posts here about the impact of different block sizes.
I have 8GB of RAM and am considering block_size 128 because I have a lot of smaller files and it seems it will be slightly more space efficient. Is there any reason besides RAM usage that I should use the default (256) or another size?
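My reasoning on space efficiency, sketched out. This assumes each file's tail rounds up to a whole parity block, so the average waste per file is about half a block; that's my understanding of how it works, not something I can quote from the manual:

```python
# Back-of-the-envelope: parity space lost to block rounding.
# Assumption: each file wastes on average block_size / 2 in parity,
# because the last (partial) block still occupies a full block.

def avg_rounding_waste(num_files, block_size_kib):
    """Expected parity overhead in MiB from partially filled blocks."""
    waste_kib = num_files * block_size_kib / 2
    return waste_kib / 1024

# With 100,000 small files, halving block_size halves the overhead:
print(avg_rounding_waste(100_000, 128))  # 6250.0 MiB at 128 KiB blocks
print(avg_rounding_waste(100_000, 256))  # 12500.0 MiB at 256 KiB blocks
```

So for an array with many small files the difference is real, though still small next to the array size.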
On a side note, to test speeds I started two fresh syncs using 128 and then 256 block sizes. I let each sync run for an hour or so, and it appears that the smaller block size results in a faster sync.
The only reason I know of not to use a smaller block_size is if you don't have enough RAM. Remember that as you later add more data and more drives, your RAM usage will increase, so plan ahead. If you decide to change the block_size later, you have to delete the parity and content files and start from scratch.
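For planning ahead, here's a rough sketch of how the estimate scales with array size and block size. The ~28 bytes per block is my assumption based on figures I've seen discussed, not an official constant; check the manual for your version:

```python
# Rough RAM estimate for snapraid's in-memory block table.
BYTES_PER_BLOCK = 28  # assumed metadata cost per block, NOT an official figure

def est_ram_mib(data_tib, block_size_kib, bytes_per_block=BYTES_PER_BLOCK):
    """Estimated RAM in MiB to track data_tib TiB of data."""
    data_kib = data_tib * 1024**3          # TiB -> KiB
    blocks = data_kib / block_size_kib     # number of blocks to track
    return blocks * bytes_per_block / 1024**2

# Doubling block_size halves the estimate:
print(est_ram_mib(22, 256))  # 2464.0 MiB (~2.4 GiB) at 256 KiB blocks
print(est_ram_mib(22, 512))  # 1232.0 MiB (~1.2 GiB) at 512 KiB blocks
```

Whatever the real per-block constant is, the halving relationship holds, so you can calibrate against your own observed usage.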
You didn't say what OS you are using. If Linux, there is no reason the block_size should make a difference in sync speed, assuming your read-ahead is set to at least 128 (the default).
I just wanted to ask another question about block sizes.
I run a rather large array (22TB) on a machine with just 2GB of RAM. A block size of 256 would push my memory usage up to roughly 2GB, which is too much, whereas 512 would use only about half my RAM.
So is there any reason it has to be a power of two (128, 256, 512), or could I go for, say, 384 or 412?
Good question (power of 2?)... Unless versions since 1.7 have relaxed this, state.c *requires* that block_size (in the configuration file) be a power of 2 (else error_exit). I doubt that this is a gratuitous restriction, but I can't point to the specific "dependency" (Andrea, care to shine a light?).
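For what it's worth, such a validation is usually done with the standard bit trick. A Python sketch of that kind of check (an illustration only, not SnapRAID's actual state.c code):

```python
# The usual bit-trick power-of-two test, as a config validator might apply it.

def is_power_of_two(n: int) -> bool:
    # A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
    return n > 0 and (n & (n - 1)) == 0

print([b for b in (128, 256, 384, 412, 512) if is_power_of_two(b)])
# [128, 256, 512] -- 384 and 412 would be rejected by such a check.
```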
In truth, it's a gratuitous limitation. I don't see any reason why it should not work with other values.
All that matters is using a value that matches the OS cache size well, and a power of 2 "usually" gives good results. But that's not necessarily always true.
The block_size setting does not seem to have any effect in snapraid 1.12.
I have a smaller 5TB array (3x1TB + 2TB) on a 64-bit machine with 4GB of RAM (an old DL360 G4p), and "block_size 1024" in my configuration file (which is quite a lot; I tried smaller values like 512 too), but snapraid requires 5GB to run.
The array is 50% used, but each time its usage grows a little, snapraid requires even more memory.
I am afraid that when my array reaches 80% full, even the swap will not be enough.
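If the rough rule of thumb of a few tens of bytes of metadata per block holds (an assumption on my part, based on figures mentioned in this thread, not an official constant), the numbers don't add up at all:

```python
# Sanity check: 5 TiB of data at block_size 1024 (KiB),
# assuming ~28 bytes of metadata per block (NOT an official figure).
blocks = 5 * 1024**3 // 1024      # 5 TiB -> KiB -> number of 1024 KiB blocks
est_mib = blocks * 28 / 1024**2   # estimated block-table size in MiB
print(est_mib)                    # 140.0 -- nowhere near 5GB
```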
What am I doing wrong?