From: Casper L. <cas...@pr...> - 2019-11-27 16:02:54
Hi Dave,

I would definitely advise goal 3. An undetected bad chunk combined with a single-server outage could make a chunk unavailable.

From a performance perspective it depends on too many factors to say anything sensible: write/read workload ratio, disk speeds, number of disks, file sizes, whether you do many meta-operations, and whether you have steady throughput or spikes in which a lot of data is requested. In our case there is not much performance gain or loss.

When network I/O is the bottleneck, you could reduce the number of disks per chunkserver. If IOPS is the problem, you could reduce the used size per disk (i.e. increase the number of disks).

Increasing the goal from 2 to 3 does not decrease the number of read operations on the cluster as a whole, so you are not really relieving much stress per disk on average. Unless there is one big file that is constantly read, we can assume that read operations are already balanced throughout the cluster. Obviously the total number of write operations on the cluster's disks will increase, but the extra copies are written after the first chunk is stored, so writing itself won't be slower; as more copies are written, though, writes can affect other reads. (If you do decide to raise the goal, it is a one-liner per directory tree; see the sketch below the quoted message.)

Greetings,
Casper

On Tue, 26 Nov 2019 at 20:20, David Myer via moosefs-users <moo...@li...> wrote:

> Dear MFS users,
>
> Out of curiosity, would increasing the number of file replicas across more
> disks reduce read times by spreading the read load? I have a lot of spare
> space and I thought it might be worth using it for this reason. My replica
> goal is currently 2.
>
> Thanks,
> Dave
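For reference, a minimal sketch of the goal change itself, run from any MooseFS client mount (the /mnt/mfs/data path is illustrative; mfsgetgoal, mfssetgoal and mfsfileinfo are the standard MooseFS client-side tools):

    # show the current goal for a directory tree
    mfsgetgoal -r /mnt/mfs/data

    # raise the goal to 3 copies, recursively
    mfssetgoal -r 3 /mnt/mfs/data

    # check where the chunk copies of a single file actually live
    mfsfileinfo /mnt/mfs/data/somefile

Note that after raising the goal, the extra copies are created in the background by the master's replication loop, so the cluster reaches 3 copies gradually rather than instantly.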