From: Zlatko Č. <zca...@bi...> - 2018-05-20 14:10:44
On 20.05.2018 15:38, Marin Bernard wrote:
> I may have some answers.
>
>> There were some posts saying that MooseFS doesn't support fsync at
>> all; is this fixed? Can we force the use of fsync to be sure that an
>> ACK is sent if and only if all data is really written to disk?
>
> MooseFS chunkservers have the HDD_FSYNC_BEFORE_CLOSE config setting
> (default off), which does just that. The equivalent setting on
> LizardFS is PERFORM_FSYNC (default on). The performance penalty with
> PERFORM_FSYNC=1 is very high, though.

Funny thing: just this morning I did some testing with this flag on
and off. I had been running with fsync *on* ever since I set things
up, simply because it felt like the right thing to do. It is the only
change I ever made to mfschunkserver.cfg (see the snippet below), with
all other settings left at their defaults. But just yesterday I
started to wonder whether it actually made any difference in my case,
where the chunkservers are rather far from the mount point (about
10-15 ms away).

So I ran some basic tests with the fsync setting on and off: writing
one large file, and untarring the Linux kernel source (lots of small
files); the commands are sketched at the end of this mail. Much as
expected, there was no difference at all for the large file (the
network interface is saturated either way). For the very small files
of the kernel source, performance per chunkserver improved from about
350 writes/sec with fsync on to about 420 writes/sec with it off.
IOW, depending on your workload, turning HDD_FSYNC_BEFORE_CLOSE on can
cost you anywhere from nothing up to roughly 17% ((420 - 350) / 420 is
about 17%). Of course, depending on your infrastructure (network,
disks...), the results may vary.

I eventually decided it's not a big price to pay, and I'm dealing
mostly with bigger files anyway, so I turned the setting back on and
don't intend to do any more testing.
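For anyone who wants to flip the switch, it is a one-line change in
mfschunkserver.cfg (this is how it looks in my config; in yours the
default may still be commented out):

    # mfschunkserver.cfg
    # call fsync() on a chunk before closing it (default: 0 = off)
    HDD_FSYNC_BEFORE_CLOSE = 1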
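Conceptually, the flag just adds the classic fsync-before-close step:
success is only reported once the kernel confirms the data reached
stable storage. A minimal POSIX sketch of the pattern (my own
illustration, not MooseFS's actual code):

    /* fsync-before-close, sketched: report success only after the
     * kernel confirms the data is on stable storage. */
    #include <fcntl.h>
    #include <unistd.h>

    int write_durably(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = write(fd, p, len);
            if (n < 0)
                return -1;          /* write error */
            p += n;
            len -= (size_t)n;
        }
        /* Without this, close() can return before the data hits the
         * platters, and an ACK sent now may be lost on power failure. */
        if (fsync(fd) < 0)
            return -1;
        return close(fd);
    }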
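And if someone wants to repeat the comparison, the two workloads were
essentially these (mount point and kernel version are placeholders;
run the pair once with the flag off and once with it on):

    # large sequential write - network-bound either way
    dd if=/dev/zero of=/mnt/mfs/bigfile bs=1M count=4096

    # lots of small files - this is where fsync shows up
    time tar xf linux-4.16.tar.xz -C /mnt/mfs/

--
Zlatko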