From: Marco M. <mar...@gm...> - 2018-05-25 12:18:29
On 05/25/2018 03:10 AM, Alexander AKHOBADZE wrote:
>
> Hi All!
>
> Is it true? I'm asking about "MooseFS Pro also includes a Windows
> client." If yes, where can I read more about it?

MooseFS Pro is not free. I have to pay to get it.

-- Marco

> On 20.05.2018 16:08, Marin Bernard wrote:
>>>> This is very interesting.
>>>> Any official and detailed docs about the HA feature?
>>>>
>>>> Other than this, without starting a flame war, what are the
>>>> differences between MFS4 and LizardFS?
>>>>
>>> Hi again,
>>>
>>> I've been testing both MooseFS 3.0.x and LizardFS 3.1x in
>>> parallel for a few weeks now. Here are the main differences I
>>> found while using them. I think most of them will still be
>>> relevant with MooseFS 4.0.
>>>
>>> * High availability
>>> In theory, LizardFS provides master high availability with
>>> _shadow_ instances. The reality is less glorious: the piece of
>>> software actually implementing master autopromotion (based on
>>> uraft) is still proprietary. It is expected to be GPL'd, yet
>>> nobody knows when. So as of now, if you need HA with LizardFS,
>>> you have to write your own set of scripts and use a third-party
>>> cluster manager such as corosync.
>>>
>>> * POSIX ACLs
>>> Using POSIX ACLs with LizardFS requires a recent Linux kernel
>>> (4.9+), because a version of FUSE with ACL support is needed.
>>> This means ACLs are unusable with most LTS distros, whose
>>> kernels are too old.
>>>
>>> With MooseFS, ACLs work even with older kernels; maybe because
>>> they are implemented at the master level and the client does not
>>> even try to enforce them?
>>>
>>> * FreeBSD support
>>> According to the LizardFS team, all components compile on
>>> FreeBSD. They do not provide a package repository, though, nor
>>> did they succeed in getting LizardFS into the FreeBSD ports tree
>>> (bug #225489 is still open on Phabricator).
>>>
>>> * Storage classes
>>> Erasure coding is supported in LizardFS, and I had no special
>>> issue with it. So far, it works as expected.
>>>
>>> The equivalent of MooseFS storage classes in LizardFS are
>>> _custom goals_. While MooseFS storage classes may be managed
>>> interactively, LizardFS goals are statically defined in a
>>> dedicated config file. MooseFS storage classes allow the use of
>>> different label expressions at each step of a chunk's lifecycle
>>> (different labels for new, kept and archived chunks). LizardFS
>>> has no equivalent.
>>>
>>> One application of MooseFS storage classes is to transparently
>>> delay the geo-replication of a chunk for a given amount of time,
>>> to lower the latency of client I/O operations. As far as I know,
>>> it is not possible to do the same with LizardFS.
>>>
>>> * NFS support
>>> LizardFS supports NFSv4 ACLs. It may also be used with the NFS
>>> Ganesha server to export directories directly through user-space
>>> NFS. I did not test this feature myself. According to several
>>> people, the feature, which is rather young, does work but
>>> performs poorly. Ganesha on top of LizardFS is a multi-tier
>>> setup with a lot of moving parts. I think it will take some time
>>> for it to reach production quality, if ever.
>>>
>>> In theory, Ganesha is compatible with kerberized NFS, which
>>> would be a far more secure solution than the current mfsmount
>>> client, enabling its use in public/hostile environments. I don't
>>> know whether MooseFS 4.0 has improved on this matter.
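Side note on the ACL point above: it is easy to verify from any
client with the standard tools. A minimal check, assuming a mount at
/mnt/mfs and a local user "alice" (both placeholders):

    # grant alice rwx on a directory, then read the ACL back
    setfacl -m u:alice:rwx /mnt/mfs/shared
    getfacl /mnt/mfs/shared

As described above, on a pre-4.9 kernel this should work on a
MooseFS mount but not on a LizardFS one.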
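To make the storage classes / custom goals difference concrete:
LizardFS goals live in mfsgoals.cfg on the master and are read at
startup. A sketch, with invented goal names and chunkserver labels
(see mfsgoals.cfg(5) for the exact syntax):

    # id  name      :  definition ("_" = any chunkserver)
    1     default   :  _ _
    11    two_sites :  siteA siteB
    15    ec32      :  $ec(3,2)

MooseFS storage classes, by contrast, are created at runtime and can
use a different label expression per lifecycle step. A rough sketch
of the delayed geo-replication trick mentioned above, assuming
chunkservers labelled A (local site) and B (remote site); check
mfsscadmin(1) for the exact expression syntax:

    # new chunks: two local copies; kept chunks: one copy per site
    mfsscadmin create -C A,A -K A,B geo_lazy
    mfssetsclass geo_lazy /mnt/mfs/replicated_dir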
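For the Ganesha route, the export block would look roughly like the
following. The FSAL name comes from the LizardFS Ganesha plugin;
everything else (export id, paths, security flavour) is a
placeholder to adapt:

    EXPORT {
        Export_Id = 1;
        Path = "/";
        Pseudo = "/lizardfs";
        Access_Type = RW;
        Protocols = 4;
        SecType = "krb5p";    # kerberized NFS, in theory
        FSAL {
            Name = "LizardFS";
        }
    }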
>>> * Tape server
>>> LizardFS includes a tape server daemon for tape archiving.
>>> That's another way to implement some kind of chunk lifecycle
>>> without storage classes.
>>>
>>> * IO limits
>>> LizardFS includes a new config file dedicated to IO limits. It
>>> allows assigning IO limits to cgroups. The LFS client negotiates
>>> its bandwidth limit with the master and is leased a reserved
>>> bandwidth for a given amount of time. The big limitation of this
>>> feature is that the reserved bandwidth may not be shared with
>>> another client while the original one is not using it. In that
>>> case, the reserved bandwidth is simply lost.
>>>
>>> * Windows client
>>> The paid version of LizardFS includes a native Windows client. I
>>> think it is built upon some kind of FSAL à la Dokan. The client
>>> allows mapping a LizardFS export to a drive letter, and it
>>> supports Windows ACLs (probably stored as NFSv4 ACLs).
>>>
>>> * Removed features
>>> LizardFS removed the chunkserver maintenance mode and the
>>> authentication code (AUTH_CODE). Several tabs from the web UI
>>> are also gone, including the one showing quotas. The original
>>> CLI tools were replaced by their own versions, which I find
>>> harder to use (no more tables, and very verbose output).
>>>
>>> I've been using MooseFS for several years and never had any
>>> problem with it, even in very awkward situations. My feeling is
>>> that it is really a rock-solid, battle-tested product.
>>>
>>> I gave LizardFS a try, mainly for erasure coding and high
>>> availability. While the former worked as expected, the latter
>>> turned out to be a myth: the free version of LizardFS does not
>>> provide more HA than MooseFS CE. In both cases, building an HA
>>> solution requires writing custom scripts and relying on a
>>> cluster manager such as corosync. I see no added value in using
>>> LizardFS for HA.
>>>
>>> On all other aspects, LizardFS does the same as or worse than
>>> MooseFS. I found performance to be roughly equivalent between
>>> the two (provided you disable fsync on LizardFS chunkservers,
>>> where it is enabled by default). Both solutions are still
>>> similar in many aspects, yet LizardFS is clouded by a few
>>> negative points: ACLs are hardly usable, custom goals are less
>>> powerful than storage classes and less convenient for
>>> geo-replication, FreeBSD support is nonexistent, the CLI tools
>>> are less efficient, and native NFS support is too young to be
>>> really usable.
>>>
>>> After a few months, I came to the conclusion that migrating to
>>> LizardFS was not worth it for the single erasure coding feature,
>>> especially now that MooseFS 4.0 CE with EC is officially
>>> announced. I'd rather buy a few more drives and cope with
>>> standard copies for a while than ditch MooseFS reliability for
>>> LizardFS.
>>>
>>> Hope it helps,
>>>
>>> Marin
>>
>> A few corrections:
>>
>> 1. MooseFS Pro also includes a Windows client.
>>
>> 2. LizardFS did not "remove" tabs from the web UI: these tabs
>> were added by MooseFS after LizardFS had forked the code base.
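A note on the IO limits section above: the config file in question
is globaliolimits.cfg on the master. If I read the LizardFS docs
right, the syntax is along these lines (group names and numbers are
made up; limits are in KiB/s):

    # classify clients by blkio cgroup
    subsystem blkio
    # clients outside any listed group share this pool
    limit unclassified 1024
    # clients in the /backup cgroup share this pool
    limit /backup 10240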
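Finally, since "custom scripts plus a cluster manager" applies to
both products, here is what the cluster-manager half could look like
with pacemaker/corosync. This is only a skeleton under assumptions:
the master runs as the systemd unit moosefs-master, clients reach it
via a floating IP, and the metadata replication and promotion logic
(the actual hard part) is handled by your own scripts:

    # floating IP that mfsmount and chunkservers point at
    pcs resource create mfs_vip ocf:heartbeat:IPaddr2 \
        ip=192.168.10.100 cidr_netmask=24
    # master daemon, colocated with the IP and started after it
    pcs resource create mfs_master systemd:moosefs-master
    pcs constraint colocation add mfs_master with mfs_vip INFINITY
    pcs constraint order mfs_vip then mfs_master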