From: Fyodor U. <uf...@uf...> - 2011-04-04 20:11:18
On 04/04/2011 10:51 PM, Fyodor Ustinov wrote:
> Hi.
>
> ceph osd pool set data size 1
>
> dd if=/dev/zero of=aaa bs=1024000 count=4000
> 4096000000 bytes (4.1 GB) copied, 31.3153 s, 131 MB/s
>
> ceph osd pool set data size 2
> 4096000000 bytes (4.1 GB) copied, 72.7146 s, 56.3 MB/s
>
> ceph osd pool set data size 3
> 4096000000 bytes (4.1 GB) copied, 136.263 s, 30.1 MB/s
>
> Why? I thought an increase in the number of copies should improve
> performance (or, in the worst case, not affect it).
>
> WBR,
> Fyodor.

Oops. Not about moose :) I'm testing ceph and moosefs simultaneously; this one is about ceph. :)

About moosefs - bonnie++ shows a dramatic slowdown on the rewrite test. :(
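For what it's worth, the numbers roughly match a simple model in which each client write is synchronously replicated `size` times over the same disks and network, so client-visible throughput is at best the single-copy rate divided by the replica count (a back-of-the-envelope sketch, not a statement about Ceph internals; the observed values are somewhat worse, presumably due to journaling and network overhead):

```python
# Toy model: with synchronous replication, each client byte is stored
# `size` times, so expected throughput ~ single-copy rate / size.
single_copy_mbps = 131.0  # measured above with pool size 1

for size, observed in [(1, 131.0), (2, 56.3), (3, 30.1)]:
    expected = single_copy_mbps / size
    print(f"size={size}: expected ~{expected:.1f} MB/s, observed {observed} MB/s")
```

This predicts ~65.5 MB/s for size 2 and ~43.7 MB/s for size 3, versus the 56.3 and 30.1 MB/s measured above.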