From: Steve W. <st...@pu...> - 2012-05-17 20:22:21
On 05/17/2012 04:17 PM, Steve Wilson wrote:
> On 05/17/2012 04:05 PM, Atom Powers wrote:
>> On 05/17/2012 12:44 PM, Steve Wilson wrote:
>>> On 05/17/2012 03:26 PM, Atom Powers wrote:
>>>> * Compression, 1.16x in my environment
>>> I don't know if 1.16x would give me much improvement in performance.
>>> I typically see about 1.4x on my ZFS backup servers, which made me
>>> think that this reduction in disk I/O could result in improved
>>> overall performance for MooseFS.
>> Not for performance, for disk efficiency. Ostensibly those 64MiB chunks
>> won't always use 64MiB with compression on, especially for smaller
>> files.
>
> This is a good point, and it might help where it's most needed: all
> those small configuration files, etc. that have a large impact on the
> user's perception of disk performance.
>
>>>> Bad: * high RAM requirement
>>> Is the high RAM due to using raidz{2-3}? I was thinking of making
>>> each disk a separate ZFS volume and then letting MooseFS combine the
>>> disks into an MFS volume (i.e., no raidz). I realize that greater
>>> performance could be achieved by striping across disks in the chunk
>>> servers, but I'm willing to trade off that performance gain for
>>> higher redundancy (compared to simple striping) and/or greater
>>> capacity (compared to raidz, raidz2, or raidz3).
>> ZFS does a lot of caching in RAM. My chunk servers use hardware RAID,
>> not raidz, and still use several hundred MiB of RAM.
>>
>> Personally, I would prefer to use raidz for multiple disks over MooseFS,
>> because managing individual disks and disk failures should be much
>> better. For example, to minimize the amount of re-balancing MooseFS
>> needs to do; not to mention the possible performance benefit. But I can
>> think of no reason why you couldn't do a combination of both.
>>
>
> That is certainly worth considering. I hope to have enough time with
> the new chunk servers to try out different configurations before I
> have to put them into service.
>
> Steve
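
As a reference point, a minimal sketch of the per-disk layout discussed
above (device names, pool names, and mount points here are hypothetical,
and compression is shown with the default algorithm rather than any
particular setting from the thread):

  # one single-disk pool per drive, no raidz, compression enabled
  zpool create -m /mnt/chunk1 chunk1 da1
  zpool create -m /mnt/chunk2 chunk2 da2
  zfs set compression=on chunk1
  zfs set compression=on chunk2

  # report the achieved compression ratio (e.g. 1.16x or 1.4x)
  zfs get compressratio chunk1 chunk2

  # /etc/mfshdd.cfg on the chunk server: one mount point per line,
  # letting MooseFS combine the individual disks into the MFS volume
  /mnt/chunk1
  /mnt/chunk2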