From: Josh M. <jm...@nr...> - 2010-05-12 14:52:39
> Something less than a dozen is a small number. With 3, every write is
> going to require every disk head to seek and you'll wait until the
> slowest one completes before the next operation.

Not to be contrary, but I'm running just fine on a 4-spindle md-raid5
system under RHEL5 for my BackupPC server. The array has 4 400GB WD SATA
disks. I think it helps to tune the array slightly:

    Chunk Size : 256K

Also, make sure native command queueing is enabled and working
(hdparm -I /dev/sdx).

BackupPC_nightly runs in 15 minutes across a pool like:

# Pool is 345.69GB comprising 1212100 files and 4369 directories (as of 5/12 02:07),
# Pool hashing gives 8331 repeated files with longest chain 192,
# Nightly cleanup removed 380 files of size 0.19GB (around 5/12 02:07),
# Pool file system was recently at 37% (5/12 09:54), today's max is 37% (5/12 02:00) and yesterday's max was 37%.

Also, read performance shouldn't be a problem on a raid-5, but you can
increase the per-spindle read-ahead using:

    blockdev --setra 4096 /dev/sdx

This reads ahead 4096 512-byte sectors (2 MB) from each spindle. I
haven't set this on this machine, but I do on others. This is a per-boot
setting, so do it in rc.local or equivalent.

-Josh

--
--------------------------------------------------------
Joshua Malone          Systems Administrator
(jm...@nr...)          NRAO Charlottesville
434-296-0263           www.cv.nrao.edu
434-249-5699 (mobile)

BOFH excuse #360: Your parity check is overdrawn
and you're out of cache.
--------------------------------------------------------
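
P.S. A minimal sketch of the per-boot read-ahead setup described above,
suitable for rc.local. The device glob /dev/sd[b-e] is an assumption;
substitute the actual array members shown by `cat /proc/mdstat`.

```shell
#!/bin/sh
# Set per-spindle read-ahead on the raid-5 member disks at boot.
# NOTE: /dev/sd[b-e] is a placeholder; use your real md member devices.

SECTORS=4096                 # blockdev --setra counts 512-byte sectors
BYTES=$((SECTORS * 512))     # 4096 * 512 = 2097152 bytes = 2 MB
echo "read-ahead: ${SECTORS} sectors = ${BYTES} bytes per spindle"

for dev in /dev/sd[b-e]; do
    [ -b "$dev" ] || continue                       # skip if not a block device
    blockdev --setra "$SECTORS" "$dev" 2>/dev/null \
        || echo "could not set read-ahead on $dev (need root?)"
done
```

Verify the result with `blockdev --getra /dev/sdx`, which prints the
current read-ahead in sectors.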