From: Chris P. <ch...@ec...> - 2012-03-30 18:25:30
On 2012/03/30 7:41 PM, Steve Thompson wrote:
> On Fri, 30 Mar 2012, Ricardo J. Barberis wrote:
>
>> Do those drives happen to have 4 KB physical block size?
>> That combined with unaligned partitions could explain such bad
>> performance.
>
> I always use whole disks combined into RAID sets via hardware RAID, so
> partition alignment is not an issue, and then use ext4 file systems as
> chunk volumes. Raw I/O performance of the volumes is excellent, and
> indeed the I/O performance numbers shown by the MFS CGI script are also
> excellent (both read and write are greater than gigabit bandwidth). I use
> high-end Dell servers with 3+ GHz processors and dual bonded gigabit
> links for I/O, but nevertheless the resulting MFS I/O performance, which
> was good initially when I built a testing setup, has plummeted under a
> real-world I/O load, to the point where I am getting a lot of complaints.
> This looks like a show stopper to me, which is somewhat upsetting. I'm
> not sure what I can do at this point to tune it further.

Hi Steve,

Do those servers have battery-backed cache on the RAID controllers? If so,
is it set to write-back or write-through?

I have found that when running on standard SATA disks, the constant fsync
is what slows things down tremendously. A battery-backed write-back cache
on the RAID card would help a lot there.

Chris
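For anyone following up on Ricardo's alignment question: a rough sketch of the arithmetic involved. On Linux the physical block size is readable from /sys/block/<dev>/queue/physical_block_size and a partition's start sector from /sys/block/<dev>/<part>/start (in 512-byte units); the values below (4096-byte physical sectors, start sectors 63 and 2048) are illustrative assumptions, not taken from Steve's setup.

```python
# Alignment check for 4 KiB physical-sector ("Advanced Format") drives.
# A partition is aligned when its byte offset is a multiple of the
# drive's physical block size.

SECTOR = 512   # logical sector size (units used by the 'start' value)
PHYS = 4096    # assumed physical block size of a 4 KiB drive

def is_aligned(start_sector: int, phys: int = PHYS) -> bool:
    """True when the partition's byte offset falls on a physical-block boundary."""
    return (start_sector * SECTOR) % phys == 0

print(is_aligned(63))    # old fdisk default start sector: False (misaligned)
print(is_aligned(2048))  # modern 1 MiB-aligned default: True
```

A misaligned partition turns many single-block writes into read-modify-write cycles on the drive, which is why it can wreck performance; using whole disks, as Steve does, sidesteps the issue.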
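To see how much the constant fsync costs on a given chunkserver, a minimal timing sketch (file name and iteration count are arbitrary; run it on the filesystem that holds the chunks). On bare SATA disks each fsync waits for the platters, typically several milliseconds; with a battery-backed write-back cache it returns once the data is in controller RAM, usually well under a millisecond.

```python
# Measure average fsync latency on a scratch file.
import os
import tempfile
import time

def avg_fsync_ms(n: int = 100, size: int = 4096) -> float:
    """Write and fsync a small block n times; return mean latency in ms."""
    fd, path = tempfile.mkstemp()
    try:
        data = b"\0" * size
        t0 = time.perf_counter()
        for _ in range(n):
            os.write(fd, data)
            os.fsync(fd)  # force the data through to stable storage
        return (time.perf_counter() - t0) / n * 1000.0
    finally:
        os.close(fd)
        os.unlink(path)

print(f"avg fsync: {avg_fsync_ms():.3f} ms")
```

Comparing the number with write-back cache enabled versus disabled (or versus a plain SATA disk) would confirm whether fsync is the bottleneck here.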