From: Gandalf C. <gan...@gm...> - 2018-05-20 13:43:32
On Sun, 20 May 2018 at 15:38, Marin Bernard <li...@ol...> wrote:

> MooseFS chunkservers have the HDD_FSYNC_BEFORE_CLOSE config setting
> (default off) which does just that. The equivalent setting on LizardFS
> is PERFORM_FSYNC (default on). The performance penalty with
> PERFORM_FSYNC=1 is very high, though.
>
> If you use ZFS as a backend (which I do), fsync may be enforced at the
> file system level, which is probably more efficient as it bypasses the
> kernel buffer cache (ZFS uses its own). The performance penalty is
> higher than on other file systems because in async mode, ZFS batches
> disk transactions to minimize latency -- a performance boost which is
> lost when fsync is enabled. If performance is critical, you may improve
> it with a separate ZIL log device (SLOG).

No, data reliability and consistency are more important for us.

> I think you can do the same in MooseFS with storage classes: just
> specify a different label expression for the chunk Creation and Keep
> steps. This way, you may even decide to assign newly created chunks to
> specific chunk servers. With LizardFS, there is no way to limit chunk
> creation to a subset of chunkservers: they are always distributed
> randomly. That's a problem when half of your servers are part of
> another site.

Could you please give a real example? Let's assume goal (replica) 4, with
2 SSD chunkservers and 2 HDD chunkservers. I would like the ACK to be
returned to the client once the first 2 SSD servers have written the
data, while the other 2 (up to the goal of 4) are still writing.
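For reference, enforcing sync at the ZFS layer is a one-liner via the
dataset's `sync` property; the pool/dataset name below is hypothetical.
The chunkserver settings mentioned above go in each daemon's config file:

```shell
# Force synchronous semantics for every write on the backing dataset.
# "tank/mfschunks" is a hypothetical pool/dataset name -- adjust to yours.
zfs set sync=always tank/mfschunks

# Alternatively, at the chunkserver level (the settings discussed above):
#   MooseFS  (mfschunkserver.cfg):     HDD_FSYNC_BEFORE_CLOSE = 1
#   LizardFS (chunkserver config):     PERFORM_FSYNC = 1
```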
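A possible MooseFS storage-class setup for that scenario, assuming the
SSD chunkservers are labeled S and the HDD ones H (via LABELS in
mfschunkserver.cfg). The class name and paths are made up, and the exact
label-expression syntax should be checked against mfsscadmin(1):

```shell
# Create chunks on SSD-labeled servers only, then keep 4 copies:
# 2 on SSD-labeled servers and 2 on HDD-labeled servers.
mfsscadmin create -C 2S -K 2S,2H fastwrite

# Apply the class to a directory (hypothetical mount point and path):
mfssetsclass fastwrite /mnt/mfs/data
```

Note that this controls chunk placement, not acknowledgement: whether
the client ACK waits for the Keep copies still depends on the write
path and the fsync settings above.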