From: Elliot F. <efi...@gm...> - 2014-10-27 19:27:13
Give it enough spindles to handle the IO load you're throwing at it and it works just fine. VMs going into read-only mode is caused by lengthy IO wait, i.e. not enough resources for what you want to do.

On Mon, Oct 27, 2014 at 11:49 AM, WK <wk...@bn...> wrote:
> We tried it.
>
> It worked great until the cluster was ALSO replicating and/or was under
> load.
>
> Then the VMs would start going read-only.
>
> We love MooseFS, but for VMs we use DRBD or GlusterFS with simple
> replication.
>
> On 10/23/2014 2:01 PM, Joseph Love wrote:
> > Hi,
> >
> > I'm curious whether anyone has used MFS as a virtual machine storage
> > system (specifically, as the storage for a VMware environment). The
> > last tidbits I read suggested that the latencies were a bit too high
> > to use MFS this way, and recommended using it as the storage *inside*
> > the virtual machines instead.
> >
> > Some testing on a 3-node cluster with bonnie++, iozone, and tiobench
> > suggests that writes of small, random blocks suffer from poor
> > throughput and high latencies. If I'm not mistaken, that should be
> > pretty similar to what VMware disk updates look like, so it seems
> > like it's possibly a bad idea. Bigger blocks and large sequential
> > data access seem to perform great, though.
> >
> > So, anyone with real-world experience with using MFS in this way?
> >
> > Thanks,
> > -Joe
>
> ------------------------------------------------------------------------------
> _______________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users
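
[Editor's note: the small-random-write pattern discussed above can be reproduced without a full benchmark suite. The sketch below is a rough, self-contained approximation of what iozone's random-write phase measures, not any of the tools named in the thread; the `random_write_latencies` helper, the 4 KiB block size, and all counts are illustrative assumptions. Pointing `path` at a file on a MooseFS mount versus local disk would show the latency gap being described.]

```python
import os
import random
import statistics
import time


def random_write_latencies(path, file_size=16 * 1024 * 1024,
                           block_size=4096, n_writes=64):
    """Time small aligned random-offset writes to `path`.

    Each write is fsync'd so the measured latency includes the round
    trip to stable storage, roughly what a VM's guest filesystem sees
    for a metadata or journal update. Hypothetical helper for
    illustration; parameters are arbitrary assumptions.
    """
    # Pre-allocate the file so writes hit existing blocks, not appends.
    with open(path, "wb") as f:
        f.truncate(file_size)

    latencies = []
    with open(path, "r+b") as f:
        for _ in range(n_writes):
            offset = random.randrange(0, file_size - block_size)
            offset -= offset % block_size  # align to block boundary
            buf = os.urandom(block_size)
            start = time.perf_counter()
            f.seek(offset)
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())
            latencies.append(time.perf_counter() - start)
    return latencies


if __name__ == "__main__":
    # Run against the current directory; set path to a file on an MFS
    # mount to measure the distributed-filesystem case instead.
    lats = random_write_latencies("randwrite-test.bin")
    os.remove("randwrite-test.bin")
    p99 = sorted(lats)[int(len(lats) * 0.99)]
    print("median: %.3f ms  p99: %.3f ms"
          % (statistics.median(lats) * 1e3, p99 * 1e3))
```

A long tail in the p99 number under concurrent replication traffic would match WK's report: guests whose journal commits stall long enough will remount their filesystems read-only.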