First, sorry I did not look closely enough at z440.txt. I re-ran the tests, attached.
Second, I didn't think you wanted them. time writes to standard error, so it wasn't captured by your pipe to the tee command. I will fix it so it's included; see the sketch below.
Third, yes it took me two weeks to recover a disk (I had to run fix a few times).
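Something like this should do it (a rough sketch only; "snapraid sync" and the log file name are just placeholders for the actual command and log):

    # braces group the command so time's output on stderr is redirected into the pipe too
    { time snapraid sync ; } 2>&1 | tee sync.log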
The 2667v2 CPUs came today; I installed them. New results attached.
Merry Christmas
Just wanted to interject into this thread quickly. UhClem, the time and effort you have put in to support Allan is incredible. Even paid support is rarely as competent and thorough as what you have been giving. Well done!
I agree, and I'll be sure to give back.
I appreciate your nice words, but, in full disclosure, and despite your perception as the beholder, my motivation is that I enjoy solving problems--especially (software/system) performance-related ones. [I've been retired for 25 years, so this is like an old-fart Formula1 chief mechanic helping a neighbor tune up his car.:)] Credit to Allan for being a willing, and capable, co-conspirator. (The benefit part is just a nice side-effect.)
AND everybody should show their thanks and appreciation to Andrea Mazzoleni, the author of SnapRAID. Obviously, you all know how helpful and useful SR is; but I would like to emphasize that it is a very well-designed and skillfully implemented piece of software. (and I do know software)
==End of PSA== (back to the program ... if this forum software isn't misbehaving [i.e. it is NOT good software])
(to be continued, I hope ...)
Sorry it took so long.
Allan,
With the limitations that the ad hoc TestArray[**] approach imposes, in your configuration, the best suggestion I can make, with a high degree of confidence, is to split your 56+6 configuration into 2x 28+3 (or 4) arrays. And, since you have dual CPUs, you can run your 2x SR operations in parallel. E.g., if you had 1TB of new data to be added, you'd put about half of it on each array, then do the 2x syncs concurrently. (With only one CPU, put all the new data on one array and sync; next time, the other array.) Similarly for scrubs. If/when a fix is needed, only the affected array will be involved.
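In practice that is just two config files and two background jobs; a rough sketch (the config paths and names are placeholders, not your actual files):

    # one snapraid.conf per array, launched in parallel
    snapraid -c /etc/snapraid/array1.conf sync &
    snapraid -c /etc/snapraid/array2.conf sync &
    wait    # returns once both syncs have finished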
If you go this route, please let us know the outcome.
Good luck.
[**] The ideal way to do this testing is the premeditated method; I've been doing it since v1.7.
I set up a dedicated 8GB partition at the beginning of every SR drive (data & parity). Thus, file layout, and speed, are optimal and consistent for all tests. The only variable is the config, and the MB/sec (and cpu-time numbers) are very consistent across multiple runs. Something to think about when setting up a new, from-scratch, array.
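For example, when preparing a fresh drive, something along these lines (a sketch only; /dev/sdX is a placeholder, this wipes the drive, and sizes/filesystems are whatever you actually use):

    # small 8GB test partition at the front, main SR partition after it
    parted --script /dev/sdX mklabel gpt \
      mkpart sr-test ext4 1MiB 8193MiB \
      mkpart sr-main ext4 8193MiB 100%
    mkfs.ext4 /dev/sdX1    # test partition
    mkfs.ext4 /dev/sdX2    # data (or parity) partition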
Thank you. I have changed my setup to use two arrays instead of one; with dual procs I am able to run the syncs at the same time and they don't fight over CPU. The initial sync took three days instead of two weeks. Yeah! Big thanks. I have ordered new drives to get my parity up to 4. Setting up a small first partition is a really good idea. It might also be a good place to put the content files.
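For reference, a hypothetical snapraid.conf fragment for one of the arrays, assuming the small first partitions are mounted under /mnt/srtest (all paths are illustrative only):

    content /var/snapraid/array1.content
    content /mnt/srtest/d1/array1.content
    content /mnt/srtest/d2/array1.content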