From: rxknhe <rx...@gm...> - 2011-06-30 17:23:34
A few drawbacks we found when testing GlusterFS:

- In our GlusterFS setup, metadata access was very slow. This matters for small-file workloads such as a source code repository used by developers (svn, compiling code, etc.). In GlusterFS, metadata is distributed and is not a single point of failure as it is in MooseFS. That is a nice feature, and one of the reasons we selected GlusterFS over MooseFS in the first place. However, we later decided to switch to MooseFS because it offers fast metadata access, and for a few other reasons where GlusterFS wasn't a good fit for us.

- Our GlusterFS cluster stalled completely one day when a disk in one server stopped working. We had to shut down that chunk server (Gluster brick) to bring the whole system back. I can't rule out a tuning/config issue on our side, but I was expecting better fault tolerance from the product.

- GlusterFS stores each complete file on a data server and replicates whole files between data (chunk) servers. This can be slow when writing a large file to a single disk on one server, through a single disk spindle (assuming RAID is not used). MooseFS divides files into chunks (up to 64MB each) and distributes them across servers, so better overall I/O can be expected.

- Setting up MooseFS is extremely simple, and maintenance and administration are very easy. Overall we found MooseFS to be a very smart and efficient design from an operations perspective. This white paper may also help in learning more about MooseFS from an operations perspective: http://contrib.meharwal.com/home/moosefs

- FUSE vs. kernel module: purists may argue about the I/O speed of the two approaches, but as Ricardo mentioned, FUSE is a much more straightforward development path used by many projects. It is a fairly well beaten-up and polished product, so why reinvent the wheel? In most practical situations the speed of FUSE mounts may simply be sufficient, as we found in our environment.

regds,
rxknhe

On Thu, Jun 30, 2011 at 12:34 PM, Ricardo J.
Barberis <ric...@da...> wrote:
> On Wednesday, 29 June 2011, Sébastien Morand wrote:
> > Hi,
> >
> > I'm currently interested in the deployment of a distributed filesystem
> > and read a paper about MooseFS. I have a few questions before starting
> > the job:
>
> The following answers are based on personal experience, YMMV.
>
> > 1/ Why moosefs instead of glusterfs or xtreemfs?
>
> I had some reliability problems with Gluster when I tested it (versions
> 2.0.8 and some of the earlier 3.0.x). It has improved since, but I had
> already bet on Moose :)
>
> XtreemFS seemed more oriented to replication via WAN, I didn't even test
> it.
>
> > 2/ glusterfs has been described in a paper I read from a french
> > university as really faster than moosefs, did you benchmark them too?
>
> I micro-benchmarked the versions mentioned above; for my use case Moose
> was faster than Gluster by a minimal margin.
>
> I also tested Lustre, which had even better performance but doesn't
> provide fault tolerance out of the box.
>
> > 3/ Why fuse and no kernel mode (should be faster)?
>
> There was recently a thread on lkml about this, with divided opinions:
>
> http://thread.gmane.org/gmane.linux.kernel/1148926/
>
> Kernel-based filesystems are usually faster but more complicated to
> develop and deploy. For example, Lustre provides a kernel module for some
> Linux distros, but if you don't use one of those you have to compile your
> own.
>
> Fuse-based filesystems are easier to develop, debug and deploy, but
> usually not as fast as a kernel-based one.
>
> > Thanks in advance,
> > Sébastien
>
> Regards,
> --
> Ricardo J. Barberis
> Senior SysAdmin / ITI
> Dattatec.com :: Soluciones de Web Hosting
> Tu Hosting hecho Simple!
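P.S. For anyone curious about the chunking point above, the idea can be sketched roughly like this. This is an illustrative Python sketch only, not MooseFS code: the round-robin placement and the helper names are my own simplifications (real MooseFS placement also accounts for replication goal and server load), only the 64MB chunk-size cap comes from the discussion above.

```python
# Illustrative sketch: split a file's bytes into fixed-size chunks
# (MooseFS caps chunks at 64MB) and assign each chunk to a chunk
# server round-robin. NOT the actual MooseFS placement algorithm.

CHUNK_SIZE = 64 * 1024 * 1024  # 64MB upper bound per chunk


def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield successive chunks of at most chunk_size bytes."""
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]


def place_chunks(data: bytes, servers: list, chunk_size: int = CHUNK_SIZE):
    """Assign each chunk to a server round-robin; return (server, size) pairs."""
    placement = []
    for i, chunk in enumerate(split_into_chunks(data, chunk_size)):
        placement.append((servers[i % len(servers)], len(chunk)))
    return placement
```

So a 150MB file across three chunk servers becomes three chunks (64MB + 64MB + 22MB), and reads/writes can hit three spindles in parallel instead of one, which is the I/O advantage described above.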