From: Marin B. <li...@ol...> - 2018-03-19 20:16:46
|
Hi,

I have been using MooseFS CE for 4 or 5 years now and I'm really happy with it. Now I have a few questions about the project roadmap and the differences between the MooseFS CE and Pro editions.

As far as I know, MooseFS CE and Pro ship with exactly the same feature set, except for mfsmaster HA, which is only available with a Pro license. However, the moosefs.com website mentions several features which seem not to be included in the current release of MooseFS CE. Here is a short list:

* Computation on Nodes
* Erasure coding with up to 9 parity sums
* Support of standard ACLs in addition to Unix file modes
* SNMP management interface
* Data compression or data deduplication? (not sure about this one; deduced from "MooseFS enables users to save a lot of HDD space maintaining the same data redundancy level.")

I may be wrong, but as far as I know, none of those wonderful features are part of the MooseFS 3.x CE branch. Will these features ship with the next major ("MooseFS 4.0") version? More importantly: will those features require Pro licensing?

Could you please clarify those points for me?

Thank you,

Marin. |
From: Alexander A. <ba...@ya...> - 2018-03-20 06:23:50
|
Hi! As far "Support of standard ACLs in addition to Unix file modes " I my case (v.3.0.100) getfacl and setfacl work fine. Do you mean this? wbr Alexander On 19.03.2018 23:16, Marin Bernard wrote: > Hi, > > I have been using MooseFS CE for 4 or 5 years now and I'm really happy > with it. Now I have a few questions about the project roadmap, and the > differences between MooseFS CE and Pro editions. > > As far I as know, MooseFS CE and Pro edition ship with exactly the > same feature set, except for mfsmaster HA which is only available with > a Pro license.However, the moosefs.com website mentions several > features which seem not included in the current release of MooseFS CE. > Here is a short list: > > * Computation on Nodes > * Erasure coding with up to 9 parity sums > * Support of standard ACLs in addition to Unix file modes > * SNMP management interface > * Data compression or data deduplication? (not sure on this; deduced > from "MooseFS enables users to save a lot of HDD space maintaining the > same data redundancy level.") > > I may be wrong but as far as I know, none of those wonderful features > are part of the MooseFS 3.x CE branch. Will these features ship with > the next major ("MooseFS 4.0") version?More importantly: will those > features require Pro licensing? > > Could you please clarify those points for me? > > Thank you, > > Marin. > > ------------------------------------------------------------------------------ > > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Marin B. <li...@ol...> - 2018-05-19 10:39:48
Attachments:
smime.p7s
|
Hi,

It's been a few weeks since I posted this message, which was left unanswered. Since then, the MooseFS team published a blog post announcing that MooseFS 4.0 is now production ready. This version was long awaited, and this is really great news!

As of now, it seems version 4.0 is yet to be released: I did not find any source or binary package for it. Could someone provide an estimated release date for this version, and a quick recap of the included/paid features?

I do know that the CE edition is best-effort software, but I'm in a position where I have to decide whether to use MooseFS as a mid/long-term storage solution, and I need to know where it is going.

Thanks,

Marin.

> Hi,
>
> I have been using MooseFS CE for 4 or 5 years now and I'm really happy with it. Now I have a few questions about the project roadmap and the differences between the MooseFS CE and Pro editions.
> [snip] |
From: Marco M. <mar...@gm...> - 2018-05-19 15:01:34
|
Marin,

Relax. MooseFS v4 will be free and will have all the features (HA, EC).

If you need professional support, you have to pay for that. Very simple. (Similar to the Ubuntu model.)

-- Marco

On 05/19/2018 06:24 AM, Marin Bernard wrote:
> As of now, it seems version 4.0 is yet to be released: I did not find any source or binary package for it. Could someone provide an estimated release date for this version, and a quick recap of the included/paid features?
> [snip] |
From: Gandalf C. <gan...@gm...> - 2018-05-19 15:29:23
|
Where did you read that v4 will have HA?

On Sat, 19 May 2018 at 17:02, Marco Milano <mar...@gm...> wrote:
> Marin,
>
> Relax. MooseFS v4 will be free and will have all the features (HA, EC).
> [snip] |
From: Marco M. <mar...@gm...> - 2018-05-19 15:48:22
|
Tea Leaves :-)

On 05/19/2018 11:29 AM, Gandalf Corvotempesta wrote:
> Where did you read that v4 will have HA?
> [snip] |
From: Gandalf C. <gan...@gm...> - 2018-05-19 16:42:27
|
On Sat, 19 May 2018 at 17:48, Marco Milano <mar...@gm...> wrote:
> Tea Leaves :-)

Seriously, where did you get this info?
How does HA work in MooseFS? Is failover automatic? |
From: Jakub Kruszona-Z. <ac...@mo...> - 2018-05-20 09:33:40
|
> On 19 May 2018, at 18:42, Gandalf Corvotempesta <gan...@gm...> wrote:
>
> Seriously, where did you get this info?
> How does HA work in MooseFS? Is failover automatic?

MooseFS 4.x is now in a “closed beta” (or rather “closed release-candidate”) stage. We have started to use it ourselves. Once we see that there are no obvious bugs, we will release it as an open-source product under the GPL-v2 (or even GPL-v2+) licence.

In the meantime, if you want to participate in tests of MFS 4.x, please let us know and we will send you packages of MFS 4.x for your OS.

Marco Milano is one of our “testers” and this is why he knows more about MFS 4.x.

HA in MFS 4.x works fully automatically. You just need to define a group of IP numbers in your DNS for your “mfsmaster” name. Then install master servers and run them on the machines with those IP numbers, and that's it. One of them will become the “LEADER” of the group and the others “FOLLOWERS” of the group. If for any reason the “LEADER” goes down, one of the FOLLOWERs is chosen as an “ELECT”, and when more than half of the chunkservers connect to it, it automatically switches to the “LEADER” state and starts working as the main master server. When your previous leader becomes available again, it usually rejoins the group as a “FOLLOWER”. This is rather a “fire and forget” solution. The masters synchronise metadata between themselves automatically, and chunkservers and clients also reconnect to the current leader automatically.

Regards,
Jakub Kruszona-Zawadzki |
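To make the DNS side of the setup Jakub describes above concrete, here is a minimal sketch of the kind of record group he mentions. The zone, host names and addresses are invented for the example; the exact MFS 4.x configuration details were not yet published at this point.

    ; hypothetical BIND zone excerpt: one name, several master candidates
    mfsmaster    IN  A   10.10.0.11   ; master candidate 1
    mfsmaster    IN  A   10.10.0.12   ; master candidate 2
    mfsmaster    IN  A   10.10.0.13   ; master candidate 3

Clients and chunkservers keep pointing at the single name “mfsmaster”; they resolve all of the addresses and, as described above, automatically reconnect to whichever machine currently holds the LEADER role.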
From: Marin B. <li...@ol...> - 2018-05-20 10:01:40
Attachments:
smime.p7s
|
> MooseFS 4.x is now in a “closed beta” (or rather “closed release-candidate”) stage. [snip]
>
> Regards,
> Jakub Kruszona-Zawadzki

Hi Jakub,

Thank you for taking the time to provide these answers!

Marin |
From: Gandalf C. <gan...@gm...> - 2018-05-20 13:19:50
|
On Sun, 20 May 2018 at 11:33, Jakub Kruszona-Zawadzki <ac...@mo...> wrote:
> In the meantime, if you want to participate in tests of MFS 4.x, please let us know and we will send you packages of MFS 4.x for your OS.

I have a test cluster to revive. I had planned to use LizardFS, but if I understood correctly, MFS will have HA even in the FOSS version, so I would be glad to test it, if possible. |
From: Zlatko Č. <zca...@bi...> - 2018-05-20 13:56:17
|
On 20.05.2018 11:33, Jakub Kruszona-Zawadzki wrote:
> HA in MFS 4.x works fully automatically. You just need to define a group of IP numbers in your DNS for your “mfsmaster” name. [snip]
>
> Regards,
> Jakub Kruszona-Zawadzki

Hey Jakub!

Such excellent work on MooseFS, as always! Thank you for letting us know all this great news up front! With the new features announced, I really don't see how any other SDS competitor will be able to compete. Of course, I may be biased, having run MooseFS for several years, but also having been very happy with it all that time. Occasionally I tested other SDS solutions, but none were a match for MooseFS.

For the last half a year, I've been successfully running MooseFS 3.0 as a Kubernetes deployment, and honestly I can't wait to see how the new MooseFS 4.0 HA will fit that scheme. I expect lots of fun, in any case!

Keep up the good work!

--
Zlatko |
From: Gandalf C. <gan...@gm...> - 2018-05-20 17:59:38
|
On Sun, 20 May 2018 at 11:33, Jakub Kruszona-Zawadzki <ac...@mo...> wrote:
> HA in MFS 4.x works fully automatically. You just need to define a group of IP numbers in your DNS for your “mfsmaster” name. Then install master servers and run them on the machines with those IP numbers, and that's it. [snip]

Just thinking about this. So, in MooseFS 4.0, if there is enough RAM on all servers, placing an mfsmaster on each chunkserver should increase reliability (every server can become a leader in case of failure) and the metalogger would become unnecessary.

Which protocol is used for this? Raft? |
From: Gandalf C. <gan...@gm...> - 2018-05-20 10:07:13
|
This is very interesting. Is there any official and detailed documentation about the HA feature?

Other than this, and without starting any flame war: what are the differences between MFS4 and LizardFS?

On Sun, 20 May 2018 at 11:33, Jakub Kruszona-Zawadzki <ac...@mo...> wrote:
> MooseFS 4.x is now in a “closed beta” (or rather “closed release-candidate”) stage. [snip] |
From: Marin B. <li...@ol...> - 2018-05-20 12:53:50
Attachments:
smime.p7s
|
> This is very interesting. Is there any official and detailed documentation about the HA feature?
>
> Other than this, and without starting any flame war: what are the differences between MFS4 and LizardFS?

Hi again,

I've been testing both MooseFS 3.0.x and LizardFS 3.1x in parallel for a few weeks now. Here are the main differences I found while using them. I think most of them will still be relevant with MooseFS 4.0.

* High availability
In theory, LizardFS provides master high availability with _shadow_ instances. The reality is less glorious, as the piece of software actually implementing master autopromotion (based on uraft) is still proprietary. It is expected to be GPL'd, yet nobody knows when. So as of now, if you need HA with LizardFS, you have to write your own set of scripts and use a 3rd-party cluster manager such as Corosync.

* POSIX ACLs
Using POSIX ACLs with LizardFS requires a recent Linux kernel (4.9+), because a version of FUSE with ACL support is needed. This means ACLs are unusable with most LTS distros, whose kernels are too old.

With MooseFS, ACLs do work even with older kernels; maybe because they are implemented at the master level and the client does not even try to enforce them?

* FreeBSD support
According to the LizardFS team, all components do compile on FreeBSD. They do not provide a package repository, though, nor did they succeed in submitting LizardFS to the FreeBSD ports tree (bug #225489 is still open on Phabricator).

* Storage classes
Erasure coding is supported in LizardFS, and I had no special issue with it. So far, it works as expected.

The equivalent of MooseFS storage classes in LizardFS are _custom goals_. While MooseFS storage classes may be managed interactively, LizardFS goals are statically defined in a dedicated config file. MooseFS storage classes allow the use of different label expressions at each step of a chunk's lifecycle (different labels for new, kept and archived chunks). LizardFS has no equivalent.

One application of MooseFS storage classes is to transparently delay the geo-replication of a chunk for a given amount of time, to lower the latency of client I/O operations. As far as I know, it is not possible to do the same with LizardFS.

* NFS support
LizardFS supports NFSv4 ACLs. It may also be used with the NFS-Ganesha server to export directories directly through user-space NFS. I did not test this feature myself. According to several people, the feature, which is rather young, does work but performs poorly. Ganesha on top of LizardFS is a multi-tier setup with a lot of moving parts. I think it will take some time for it to reach production quality, if ever.

In theory, Ganesha is compatible with kerberized NFS, which would be a far more secure solution than the current mfsmount client, enabling its use in public/hostile environments. I don't know if MooseFS 4.0 has improved on this matter.

* Tape server
LizardFS includes a tape server daemon for tape archiving. That's another way to implement some kind of chunk lifecycle without storage classes.

* IO limits
LizardFS includes a new config file dedicated to IO limits. It allows IO limits to be assigned to cgroups. The LFS client negotiates its bandwidth limit with the master and is leased a reserved bandwidth for a given amount of time. The big limitation of this feature is that the reserved bandwidth may not be shared with another client while the original one is not using it. In that case, the reserved bandwidth is simply lost.

* Windows client
The paid version of LizardFS includes a native Windows client.
I think it is built upon some kind of FSAL à la Dokan. The client allows mapping a LizardFS export to a drive letter, and it supports Windows ACLs (probably stored as NFSv4 ACLs).

* Removed features
LizardFS removed the chunkserver maintenance mode and the authentication code (AUTH_CODE). Several tabs from the web UI are also gone, including the one showing quotas. The original CLI tools were replaced by their own versions, which I find harder to use (no more tables, and very verbose output).

I've been using MooseFS for several years and never had any problem with it, even in very awkward situations. My feeling is that it is really a rock-solid, battle-tested product.

I gave LizardFS a try, mainly for erasure coding and high availability. While the former worked as expected, the latter turned out to be a myth: the free version of LizardFS does not provide more HA than MooseFS CE. In both cases, building an HA solution requires writing custom scripts and relying on a cluster manager such as Corosync. I see no added value in using LizardFS for HA.

On all other aspects, LizardFS does the same as or worse than MooseFS. I found performance to be roughly equivalent between the two (provided you disable fsync on LizardFS chunkservers, where it is enabled by default). Both solutions are still similar in many aspects, yet LizardFS is clouded by a few negative points: ACLs are hardly usable, custom goals are less powerful than storage classes and less convenient for geo-replication, FreeBSD support is nonexistent, the CLI tools are less efficient, and native NFS support is too young to be really usable.

After a few months, I came to the conclusion that migrating to LizardFS was not worth it for the erasure coding feature alone, especially now that MooseFS 4.0 CE with EC is officially announced. I'd rather buy a few more drives and cope with standard copies for a while than ditch MooseFS reliability for LizardFS.

Hope it helps,

Marin |
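To make the lifecycle-label point above concrete, here is a minimal, hypothetical storage-class sketch in the mfsscadmin syntax that also appears later in this thread. The labels A (local site) and B (remote site), the 30-day delay, the class name and the paths are all invented for the example:

    # create fast local copies first, geo-replicate in the keep step,
    # and keep chunks untouched for 30 days on the remote site only
    mfsscadmin create -C2A -KA,B -A2B -d30 geo_tiered
    mfssetsclass geo_tiered /mnt/mfs/shared

New chunks are written as 2 copies on A-labelled (local) chunkservers, the keep step then maintains one copy per site asynchronously, and after 30 days without modification the chunks are archived as 2 copies on B-labelled (remote) chunkservers.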
From: Marin B. <li...@ol...> - 2018-05-20 13:08:31
Attachments:
smime.p7s
|
> * Windows client
> The paid version of LizardFS includes a native Windows client. [snip]
>
> * Removed features
> LizardFS removed the chunkserver maintenance mode and the authentication code (AUTH_CODE). Several tabs from the web UI are also gone, including the one showing quotas. [snip]

A few corrections:

1. MooseFS Pro also includes a Windows client.
2. LizardFS did not "remove" tabs from the web UI: these tabs were added by MooseFS after LizardFS had forked the code base. |
From: Gandalf C. <gan...@gm...> - 2018-05-20 13:15:49
|
This is exactly what I was looking for. A well done comparison.

If MFS4 will be released with native HA for everyone, and not only for paying users, MooseFS could easily be considered one of the best SDS out there. There are some missing things that Lizard won't implement due to the lack of manpower in their company; I'll open the same requests on the MFS GitHub repository, let's see if MooseFS is more open...

There were some posts saying that MooseFS doesn't support fsync at all; is this fixed? Can we force the use of fsync to be sure that an ACK is sent if and only if all data is really written to disk?

LizardFS has something interesting: an ACK can be sent after a defined number of copies are done, even if fewer than the goal level. For example, goal 4 with the redundancy level set to 2: after 2 chunkservers have successfully written the data, the ACK is returned, and the missing 2 copies are made in writeback.

Any plan to add support for a qemu driver, completely skipping the FUSE stack? This is a much-awaited feature on the Lizard side and would bring MooseFS to the cloud world, where many hypervisors are qemu/kvm.

On Sun, 20 May 2018 at 14:53, Marin Bernard <li...@ol...> wrote:
> I've been testing both MooseFS 3.0.x and LizardFS 3.1x in parallel for a few weeks now. Here are the main differences I found while using them. I think most of them will still be relevant with MooseFS 4.0.
> [snip] |
From: Marin B. <li...@ol...> - 2018-05-20 13:38:43
Attachments:
smime.p7s
|
I may have some answers.

> There were some posts saying that MooseFS doesn't support fsync at all; is this fixed? Can we force the use of fsync to be sure that an ACK is sent if and only if all data is really written to disk?

MooseFS chunkservers have the HDD_FSYNC_BEFORE_CLOSE config setting (default off) which does just that. The equivalent setting in LizardFS is PERFORM_FSYNC (default on). The performance penalty with PERFORM_FSYNC=1 is very high, though.

If you use ZFS as a backend (which I do), fsync may be enforced at the file-system level, which is probably more efficient as it bypasses the kernel buffer cache (ZFS uses its own). The performance penalty is higher than on other file systems because in async mode ZFS batches disk transactions to minimize latency -- a performance boost which is lost when fsync is enabled. If performance is critical, you may improve it with dedicated ZFS log devices (SLOG).

> LizardFS has something interesting: an ACK can be sent after a defined number of copies are done, even if fewer than the goal level. For example, goal 4 with the redundancy level set to 2: after 2 chunkservers have successfully written the data, the ACK is returned, and the missing 2 copies are made in writeback.

I think you can do the same in MooseFS with storage classes: just specify a different label expression for the chunk Creation and Keep steps. This way, you may even decide to assign newly created chunks to specific chunkservers. With LizardFS, there is no way to limit chunk creation to a subset of chunkservers: they are always distributed randomly. That's a problem when half of your servers are part of another site.

Marin. |
From: Gandalf C. <gan...@gm...> - 2018-05-20 13:43:32
|
On Sun, 20 May 2018 at 15:38, Marin Bernard <li...@ol...> wrote:
> MooseFS chunkservers have the HDD_FSYNC_BEFORE_CLOSE config setting (default off) which does just that. The equivalent setting in LizardFS is PERFORM_FSYNC (default on). The performance penalty with PERFORM_FSYNC=1 is very high, though.
>
> If you use ZFS as a backend (which I do), fsync may be enforced at the file-system level, which is probably more efficient as it bypasses the kernel buffer cache (ZFS uses its own). [snip] If performance is critical, you may improve it with dedicated ZFS log devices (SLOG).

No, data reliability and consistency are more important for us.

> I think you can do the same in MooseFS with storage classes: just specify a different label expression for the chunk Creation and Keep steps. This way, you may even decide to assign newly created chunks to specific chunkservers.

Could you please give a real example? Let's assume goal 4, with 2 SSD chunkservers and 2 HDD chunkservers. I would like the ACK to be returned to the client after the first 2 SSD servers have written the data, while the other 2 (up to the goal of 4) are still writing. |
From: Marin B. <li...@ol...> - 2018-05-20 14:36:28
Attachments:
smime.p7s
|
> Could you please give a real example? Let's assume goal 4, with 2 SSD chunkservers and 2 HDD chunkservers. I would like the ACK to be returned to the client after the first 2 SSD servers have written the data, while the other 2 (up to the goal of 4) are still writing.

The MooseFS team detailed this extensively in the storage classes manual (see https://moosefs.com/Content/Downloads/moosefs-storage-classes-manual.pdf, especially chapter 4 with common use scenarios).

Let:
- SSD chunkservers be labeled 'A'
- HDD chunkservers be labeled 'B'.

You may create the following storage class:

    mfsscadmin create -C2A -K2A,2B tiered4

The class is named 'tiered4'. It stores 2 copies on SSD at chunk creation (-C2A), and then keeps 2 copies on SSD plus 2 more on HDD (-K2A,2B), with the extra HDD copies created asynchronously. Then, you need to assign this storage class to a directory with:

    mfssetsclass tiered4 <directory>

Furthermore, you may also configure your clients to prefer SSD chunkservers for R/W operations by adding the following line to mfsmount.cfg or passing it as a mount option:

    mfspreflabels=A

This would make your clients prefer SSD copies over HDD ones when retrieving or modifying chunks.

Hope it helps you,

Marin. |
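A short usage sketch to complement the example above; the mount point and directory are hypothetical, and the exact command forms should be checked against the mfsmount(1), mfsscadmin(1) and mfsgetsclass(1) man pages for your version:

    mfsmount /mnt/mfs -H mfsmaster -o mfspreflabels=A   # mount, preferring SSD (label A) chunkservers
    mfsscadmin /mnt/mfs list                            # list the defined storage classes
    mfssetsclass tiered4 /mnt/mfs/fastdir               # assign the class to a directory
    mfsgetsclass /mnt/mfs/fastdir                       # confirm which class the directory uses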
From: Zlatko Č. <zca...@bi...> - 2018-05-20 13:56:17
|
On 20.05.2018 14:53, Marin Bernard wrote:
> Hi again,
>
> I've been testing both MooseFS 3.0.x and LizardFS 3.1x in parallel for a few weeks now. Here are the main differences I found while using them. I think most of them will still be relevant with MooseFS 4.0.
>
> [snip]

Hey Marin!

Thank you for sharing your experience with a competing product with us, much appreciated. Being very happy with MooseFS so far, I never had enough motivation to test its fork, so your detailed explanation of the differences perfectly satisfies my curiosity. ;)

Even if LizardFS currently has a feature or two missing from the current MooseFS 3.0, it seems that MooseFS 4.0 will be a clear winner, and definitely worth the wait!

Regards,

--
Zlatko |
From: Gandalf C. <gan...@gm...> - 2018-05-20 14:00:50
|
On Sun, 20 May 2018 at 15:56, Zlatko Čalušić <zca...@bi...> wrote:
> Even if LizardFS currently has a feature or two missing from the current MooseFS 3.0, it seems that MooseFS 4.0 will be a clear winner, and definitely worth the wait!

Totally agree. Probably the real missing thing, IMHO, is the qemu driver. |
From: Zlatko Č. <zca...@bi...> - 2018-05-20 14:10:44
|
On 20.05.2018 15:38, Marin Bernard wrote:
> MooseFS chunkservers have the HDD_FSYNC_BEFORE_CLOSE config setting (default off) which does just that. The equivalent setting in LizardFS is PERFORM_FSYNC (default on). The performance penalty with PERFORM_FSYNC=1 is very high, though.

Funny thing, just this morning I did some testing with this flag on and off.

Basically, I have been running with fsync *on* ever since, because it just felt like the right thing to do. It's the only change I ever made to mfschunkserver.cfg, all other settings being at their default values. But just yesterday I suddenly wondered whether it would make any difference in my case, where the chunkservers are rather far away from the mount point (about 10-15 ms away).

So, I did some basic tests, writing a large file and untarring the Linux kernel source (lots of small files) with the fsync setting on and off.

Much as expected, there was no difference at all for large files (the network interface was saturated anyway). In the case of very small files (the Linux kernel source), performance per chunkserver improved from about 350 writes/sec to about 420 writes/sec with fsync off. IOW, depending on your workload, it seems that turning HDD_FSYNC_BEFORE_CLOSE on could result in a 0-15% performance degradation. Of course, depending on your infrastructure (network, disks...), the results may vary.

I eventually decided it's not a big price to pay, and I'm dealing mostly with bigger files anyway, so I turned the setting back on and don't intend to do any more testing.

--
Zlatko |
From: Marin B. <li...@ol...> - 2018-05-20 15:13:20
Attachments:
smime.p7s
|
> Funny thing, just this morning I did some testing with this flag on and off.
> [snip]
> I eventually decided it's not a big price to pay, and I'm dealing mostly with bigger files anyway, so I turned the setting back on and don't intend to do any more testing.

I saw a much bigger difference with LizardFS on a ZFS backend, even with large files: roughly 50-60% faster without fsync. I never ran the test on MooseFS, so it's hard to tell the difference. You gave me the idea to try it, though!

It is possible that LizardFS performs slower than MooseFS when fsync is enabled. After all, their implementations have had enough time to drift. However, such a performance gap between the two solutions seems unlikely to me.

What is certain, however, is that the performance cost of fsync with ZFS is far higher than with another FS. In standard async mode, ZFS delays and batches disk writes to minimize IO latency. fsync defeats this optimization without disabling it, and as a result performance drops quickly.

I could try to set sync=always on the dataset, leave fsync off at the chunkserver level, and see what happens. This would offer the same guarantee as fsync=on while letting the file system handle synchronous writes by itself (and trigger its own optimization strategies, maybe not using batched transactions at all for synchronous writes).

Thanks for sharing your figures with us!

Marin. |
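A minimal sketch of the experiment Marin describes above; the pool/dataset name is hypothetical, and the right trade-off depends on your hardware:

    zfs set sync=always tank/mfschunks     # force synchronous semantics inside ZFS itself
    zfs get sync,logbias tank/mfschunks    # verify the current settings
    # mfschunkserver.cfg could then keep the default:
    # HDD_FSYNC_BEFORE_CLOSE = 0

With sync=always, every write to the dataset goes through the ZFS intent log before being acknowledged, so a dedicated SLOG device usually matters a lot in this configuration.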
From: Gandalf C. <gan...@gm...> - 2018-05-20 16:48:37
|
On Sun, 20 May 2018 at 17:13, Marin Bernard <li...@ol...> wrote:
> What is certain, however, is that the performance cost of fsync with ZFS is far higher than with another FS. In standard async mode, ZFS delays and batches disk writes to minimize IO latency. fsync defeats this optimization without disabling it, and as a result performance drops quickly.

This is obvious, but the question is not whether to enable fsync or not. The real question is: what happens when HDD_FSYNC_BEFORE_CLOSE is set to 0?

What if a client opens a file with FSYNC set? Will MooseFS honor this and pass it through to the underlying storage regardless of the value of HDD_FSYNC_BEFORE_CLOSE?

In other words: is HDD_FSYNC_BEFORE_CLOSE an additional measure to force an fsync (on file close) even if the client didn't ask for it, or is HDD_FSYNC_BEFORE_CLOSE the only way to issue an fsync in MooseFS? |
From: Marin B. <li...@ol...> - 2018-05-20 20:09:34
Attachments:
smime.p7s
|
> This is obvious, but the question is not whether to enable fsync or not. The real question is: what happens when HDD_FSYNC_BEFORE_CLOSE is set to 0?

The chunkserver acknowledges the write while the data are still pending commit to disk. If the server dies meanwhile, the data are lost. However, if the goal is >= 2 (as it should always be), at least one more copy of the data must already be present on another chunkserver before the acknowledgment is sent.

> What if a client opens a file with FSYNC set? Will MooseFS honor this and pass it through to the underlying storage regardless of the value of HDD_FSYNC_BEFORE_CLOSE? In other words: is HDD_FSYNC_BEFORE_CLOSE an additional measure to force an fsync (on file close) even if the client didn't ask for it?

In my understanding yes, it is a way to enforce fsync at the chunkserver level, regardless of what the client asked. This would apply to all writes on a specific chunkserver. However, this does not enforce fsync on the client side.

> Or is HDD_FSYNC_BEFORE_CLOSE the only way to issue an fsync in MooseFS?

I don't think so. It seems that mfsmount has an undocumented option ``mfsfsyncbeforeclose``, which controls fsync before close for a given mountpoint. See:

https://github.com/moosefs/moosefs/blob/138e149431b47b363de0d32e39629e4036d1cb00/mfsmount/main.c#L295

This option appeared in MooseFS 3.0.5-1 to allow a user to disable fsync before close. According to the commit message, fsync was enabled and mandatory before that change. I suppose the commit also disabled fsync by default in mfsmount, because a few months later, MooseFS 3.0.51-1 reverted it to enabled by default.

The ``mfsfsyncmintime`` option, which is documented, seems to be the official way to deal with this setting. The man page states that ``mfsfsyncmintime=0`` has the same effect as ``mfsfsyncbeforeclose=1``. So setting ``mfsfsyncmintime`` to 0 would force fsync before close regardless of what the client asked, and of the age of the file descriptor.

MooseFS being distributed, a single fsync call on a client-side fd would ideally result in cache flushes both on the client and on the chunkservers. These operations must be orchestrated in a way that makes them both reliable and lightweight, so I'm sure the client was designed to be adaptive enough to find the best compromise in various scenarios. For instance, I would not be surprised to learn that the way fsync is managed depends on write-cache or file-locking settings.

This is a highly technical discussion and I'm mostly making blind suppositions here. The final word may only come from a member of the dev team.

Marin. |
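To illustrate the documented knob discussed above, here is a hypothetical mfsmount.cfg excerpt; the master host name and mount point are invented, and the exact semantics should be checked against the mfsmount(1) man page for your release:

    # /etc/mfs/mfsmount.cfg (example)
    mfsmaster=mfsmaster.example.lan
    # always perform fsync before close, regardless of how long the file was open
    # (per the discussion above, equivalent to mfsfsyncbeforeclose=1)
    mfsfsyncmintime=0

The same option can also be passed on the command line, e.g. mfsmount /mnt/mfs -o mfsfsyncmintime=0.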