From: Quenten G. <QG...@on...> - 2012-02-08 02:01:38
|
Hi Everyone,

We are currently looking at using MooseFS in our environment to store virtual machines/iSCSI targets on. So far I must say it looks excellent compared to the other offerings out there; I still haven't found anything else that is as easy to set up and seems to "just work" so far on Ubuntu 10 LTS.

So I'm after some recommendations on the number of servers and the number of drives per server. I've found a couple of benchmarks around the place and would like to hear from fellow users what kind of implementation you may be using, its use case, and whether you've come across any issues so far.

We are thinking along the lines of: a metadata server, virtualised with HA or FT using VMware; a virtualised backup metadata server; and 4 physical chunk servers, each with 8 x 3TB 7200rpm SATA disks using ZFS on the individual disks for chunk integrity (not using raidz), backed by 4 x 1GbE LACP-linked ports per physical server.

Any information you may be able to share is greatly appreciated :)

Regards,
Quenten Grasso |
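For context on the proposed layout, a quick back-of-the-envelope sketch of raw versus usable capacity. MooseFS stores whole replicas of each chunk according to the configured "goal"; the goal values below are illustrative assumptions, not figures from the thread:

```python
# Capacity sketch for the proposed cluster:
# 4 chunk servers x 8 disks x 3 TB each, with MooseFS keeping
# "goal" full copies of every chunk across the cluster.
# Goal values here are assumptions for illustration only.

SERVERS = 4
DISKS_PER_SERVER = 8
DISK_TB = 3

raw_tb = SERVERS * DISKS_PER_SERVER * DISK_TB  # 96 TB raw

for goal in (2, 3):
    usable_tb = raw_tb / goal
    print(f"goal={goal}: ~{usable_tb:.0f} TB usable of {raw_tb} TB raw")
```

At goal 2 the 96 TB of raw disk yields roughly 48 TB of usable space, and at goal 3 roughly 32 TB, before filesystem and ZFS overheads.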
From: Atom P. <ap...@di...> - 2012-02-08 18:30:15
|
I've just started using MooseFS for shared drives and web data. I chose not to use it for VMs because I couldn't be "five nines" confident in its stability, and VMs get very cranky if they lose the connection to their storage. Based on my experience so far, I think your biggest concern will be disk I/O, which appears to be directly related to the amount of CPU you put in your metamaster. Also, be very, very careful that you never, ever fill up the disk on the metamaster.

On 02/07/2012 05:36 PM, Quenten Grasso wrote:
> [snip]

-- 
-- Perfection is just a word I use occasionally with mustard. --Atom Powers--
Director of IT
DigiPen Institute of Technology
+1 (425) 895-4443 |
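Atom's warning about never filling the metamaster's disk can be enforced with a periodic free-space check. A minimal sketch, which could run from cron; the metadata path and the 10% threshold are assumptions for illustration, not MooseFS defaults you should rely on:

```python
# Minimal free-space check for the partition holding the MooseFS
# master's metadata.  METADATA_PATH and the 10% threshold are
# illustrative assumptions; adjust both for your installation.
import os
import shutil

METADATA_PATH = "/var/lib/mfs"   # assumed mfsmaster DATA_PATH
MIN_FREE_FRACTION = 0.10         # assumed threshold: warn below 10% free

def free_fraction(path: str) -> float:
    """Fraction of the filesystem holding `path` that is still free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

# Fall back to "/" so the sketch also runs where /var/lib/mfs does not exist.
path = METADATA_PATH if os.path.isdir(METADATA_PATH) else "/"
if free_fraction(path) < MIN_FREE_FRACTION:
    print(f"WARNING: only {free_fraction(path):.0%} free on {path}")
```

Wiring the warning into mail or a monitoring system is left out; the point is simply that the check is cheap enough to run every few minutes.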
From: Quenten G. <QG...@on...> - 2012-02-18 10:26:06
|
Hi Elliot,

Thanks for the reply. I've been considering using 3TB disks with ZFS in a 12/16-disk chassis with 2 x 6/8-disk raidz2 vdevs, and using MFS for HA/speed only, with a replication factor (goal) of 2.

The pros and cons I've worked out so far: in a distributed JBOD configuration, which MFS seems to like, I would have to use a replication factor of 3, which I believe would give us 3x the high availability and better recoverability from bit rot, disk failure, or the like. However, in that case, for 2TB of data an MFS goal of 3 means I need to store 6TB, versus only 4TB using ZFS zpools with a goal of 2. Multiply that by 10 and we are at 60TB of raw storage for 20TB of data versus 40TB, which as you can imagine adds up very quickly. I would also like to have a second offsite replica at another datacentre, so the storage requirements for 20TB of data double again: 120TB versus 80TB.

Using ZFS, as I see it, would protect us from bit rot, bad sectors, and failed drives, and would also reduce rebuild times since rebuilds would be handled by ZFS, while MFS provides high availability, replication, and speed (striping). What do you think?

On another note, are you using NFS or iSCSI targets? From the MFS share, if I do a "dd if=/dev/zero of=ddfile bs=32k count=100000" I get around ~70MB/s; however, over iSCSI or NFS I'm only getting 10-18MB/s. Our dev config is 4 servers, each with 1 x 500GB SATA drive, plus a 5th as the metadata server, running Ubuntu 11 with ext4 as the disk filesystem. I also tried FreeBSD (FreeNAS with pkg_add -r moosefs-client), which didn't seem to make any difference, except that I couldn't use NFS on that setup.

Cheers,
Quenten |
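The overhead arithmetic in the message above can be sketched as follows. Like the original back-of-the-envelope numbers, this ignores raidz2's own parity overhead on top of goal 2, so it is a simplification, not a full capacity model:

```python
# Raw storage needed under the two layouts discussed above:
# JBOD chunk servers at MooseFS goal 3, versus raidz2-backed
# chunk servers at goal 2.  raidz2 parity overhead is ignored,
# matching the back-of-the-envelope figures in the thread.

def raw_needed(data_tb: float, goal: int, sites: int = 1) -> float:
    """Raw TB required to hold data_tb at a given goal, per site."""
    return data_tb * goal * sites

for data_tb in (2, 20):
    print(f"{data_tb} TB data: goal 3 -> {raw_needed(data_tb, 3):.0f} TB, "
          f"goal 2 -> {raw_needed(data_tb, 2):.0f} TB")

# With a second offsite replica of the whole cluster:
print(f"20 TB data, 2 sites: goal 3 -> {raw_needed(20, 3, 2):.0f} TB, "
      f"goal 2 -> {raw_needed(20, 2, 2):.0f} TB")
```

This reproduces the figures quoted above: 6TB vs 4TB for 2TB of data, 60TB vs 40TB for 20TB, and 120TB vs 80TB once the offsite replica is included.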