From: Allen, B. S <bs...@la...> - 2012-04-23 19:42:06
I agree with this approach as well for most general-purpose deployments, unless you're trying to optimize for single-stream performance. In that case I'd suggest going back to the large-storage-node approach, with RAID sets instead of showing MFS each spindle. For example, I'm using a handful of SuperMicro's 36-drive chassis, 10GbE, an Illumos-based OS, and ZFS RAIDZ2 sets, ending up with 43TB per node. I present MFS with a single HD per node, made up of 4x 8-drive RAIDZ2 vdevs. It's going to be extremely painful when I need to migrate data off a node, but I get good single-stream performance.

I could likely strike a better balance between single-stream performance and time to migrate by going to 16-drive chassis or similar; however, overall cost per TB would increase, since you're now buying more CPUs, etc. You could drop down to a single socket, less RAM per node, and so on to offset the extra cost. Like everything, it's a trade-off.

Ben

On Apr 21, 2012, at 5:09 AM, Steve Thompson wrote:

> On Fri, 20 Apr 2012, Atom Powers wrote:
>
>> Because Moose is so good at dealing with system failure but slow to
>> re-balance chunks, I would recommend several "smaller"-capacity servers
>> over a few very large ones. Even at 10TB per server it takes a very long
>> time to re-balance when I add or remove a system from the cluster; I
>> would avoid going over about 10TB per server. Less is more in this case.
>
> I have to agree with this. I have two chunkservers with 25TB of storage, as
> well as several smaller chunkservers, and I recently removed 20TB of disk
> from one of them (about 70% full). It took a little over 10 days to
> replicate the removed chunks.
>
> Steve
>
> _______________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users
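
A rough sketch of the capacity arithmetic behind the layout Ben describes (4x 8-drive RAIDZ2 vdevs per node, presented to MFS as one volume). The drive size is an assumption, since the post doesn't state it, but 2TB drives land close to the quoted 43TB per node:

    # Capacity check for 4 x 8-drive RAIDZ2 vdevs per node.
    # DRIVE_TB is an assumption (not stated in the post).

    DRIVE_TB = 2.0          # assumed drive size in decimal TB (10**12 bytes)
    VDEVS = 4               # RAIDZ2 vdevs per node
    DRIVES_PER_VDEV = 8     # drives in each vdev
    PARITY_PER_VDEV = 2     # RAIDZ2 reserves two drives' worth of parity per vdev

    data_drives = VDEVS * (DRIVES_PER_VDEV - PARITY_PER_VDEV)   # 24
    usable_tb = data_drives * DRIVE_TB                          # 48 decimal TB
    usable_tib = usable_tb * 1e12 / 2**40                       # ~43.7 TiB

    print(f"data drives: {data_drives}")
    print(f"usable (decimal TB): {usable_tb:.1f}")
    print(f"usable (TiB, before ZFS metadata overhead): {usable_tib:.1f}")

Under those assumptions the node ends up around 43-44 TiB usable, which matches the figure in the post.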
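
And a back-of-the-envelope estimate of the effective replication rate in the case Steve mentions (roughly 70% of 20TB re-replicated in a little over 10 days). The fill level and duration are approximations taken from the post, so the rate is only indicative:

    # Rough effective replication rate for the quoted example.

    removed_tb = 20.0        # disk removed from the chunkserver
    fill_fraction = 0.70     # "about 70% full"
    days = 10.0              # "a little over 10 days"

    data_tb = removed_tb * fill_fraction                 # ~14 TB of chunks to copy
    rate_mb_s = data_tb * 1e12 / (days * 86400) / 1e6    # ~16 MB/s effective

    print(f"chunk data re-replicated: {data_tb:.0f} TB")
    print(f"effective replication rate: {rate_mb_s:.0f} MB/s")

An effective rate in the tens of MB/s across the whole rebalance is the reason both posters recommend keeping individual chunkservers small.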