From: Elliot F. <efi...@gm...> - 2011-07-24 01:19:22
I forgot to add that my question implies lots of very small files. Would I be able to scale a chunkserver by using sufficient cores/spindles to the point of saturating the 6Gbps SAS2 link with a workload of lots of very small files?

On Sat, Jul 23, 2011 at 7:16 PM, Elliot Finley <efi...@gm...> wrote:
> On Thu, Jul 21, 2011 at 8:12 PM, Robert Sandilands
> <rsa...@ne...> wrote:
>> The only time that many spindles will be remotely useful on 1 controller
>> is if you are doing a large amount of small random read/write accesses.
>> For anything else you are more likely to saturate the bus connecting the
>> machine to the controller/expander.
>
> I appreciate you taking the time to answer. What you're saying here
> is a common-sense answer, but it doesn't really answer my specific
> question, most likely because I didn't state it clearly enough. So
> here is my question again in a (hopefully) clearer form:
>
> Is MooseFS sufficiently multithreaded/multi-process to make use of
> lots of spindles and handle lots of IOPS... to the point of
> saturating the 6Gbps SAS2 link? This obviously implies a 10G Ethernet
> link to the chunkserver.
>
> Asked another way: does having more cores == more performance?
> i.e., can MooseFS take advantage of multiple cores, and to what extent?
>
> I ask "to what extent" because a multithreaded/multi-process daemon can
> get hung up on lock contention.
>
> TIA
> Elliot
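For context, here is a rough back-of-envelope sketch of what "saturating a 6Gbps SAS2 link with very small files" implies. The 4 KB file size and ~150 random IOPS per spindle are assumed illustrative values, not MooseFS measurements; only the 6 Gb/s line rate and its 8b/10b encoding overhead come from the SAS2 spec.

#!/usr/bin/env python
# Back-of-envelope: IOPS needed to fill a 6 Gb/s SAS2 link with small
# files, and roughly how many spindles that would take.
# FILE_SIZE_BYTES and IOPS_PER_SPINDLE are assumptions for illustration.

LINK_GBPS = 6.0                            # SAS2 line rate
LINK_BYTES_PER_SEC = LINK_GBPS * 1e9 / 10  # ~600 MB/s usable after 8b/10b encoding
FILE_SIZE_BYTES = 4 * 1024                 # assumed "very small file" size
IOPS_PER_SPINDLE = 150                     # assumed random IOPS for a 7200 RPM disk

iops_to_saturate = LINK_BYTES_PER_SEC / FILE_SIZE_BYTES
spindles_needed = iops_to_saturate / IOPS_PER_SPINDLE

print("IOPS to saturate the link: %.0f" % iops_to_saturate)
print("Spindles needed at %d IOPS each: %.0f" % (IOPS_PER_SPINDLE, spindles_needed))

Run as-is this prints roughly 146,000 IOPS and on the order of a thousand 7200 RPM spindles, which suggests that a purely small-file random workload tends to be spindle/IOPS bound (or served from cache) long before the SAS2 link itself becomes the bottleneck.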