From: Razvan D. <ra...@re...> - 2011-04-14 17:27:53
Hello guys! First of all I have to say you've done a great job with MooseFS: of all the open source distributed filesystems I've looked over, this is by far the best in features, documentation and plain efficiency (installation, configuration, tools). Great piece of work! Now, in order to be in pure ecstasy while watching how well my storage performs, I need an answer to one question :).

I have set up MooseFS on a server box: 2 x Xeon X5355, 4 GB RAM, Intel 5000P board, 2 x Areca 1321 controllers, 24 x ST3750640NS disks. What puzzles me is that when I mount MooseFS on the same machine and run different transfers, the bottleneck I hit is always around 1 Gbit. My two storage volumes, sdc and sdd, are both only about 15-20% busy while the transfer from MooseFS reaches ~130 MB/s (both controllers deliver around 600 MB/s read and write). For this local test the loopback device is used (the filesystem is mounted on the same machine) and, from what I can see in atop, the only thing that seems to hit a ceiling is the loopback network device, at a value slightly under 1 Gbit. Using iperf on the same machine I get 20 Gbit/s, so the loopback device itself can't be blamed for the limitation.

Mounts:

storage-02 / # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              149G   15G  135G  10% /
udev                   10M  192K  9.9M   2% /dev
shm                   2.0G     0  2.0G   0% /dev/shm
/dev/sdc              6.9T   31G  6.8T   1% /mnt/osd0
/dev/sdd              6.9T   31G  6.8T   1% /mnt/osd1
192.168.8.88:9421      14T   62G   14T   1% /mnt/mfs

Network test:

storage-02 / # iperf -s -B 127.0.0.1
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 127.0.0.1
TCP window size: 1.00 MByte (default)
------------------------------------------------------------
[  4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 52464
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-20.0 sec  47.6 GBytes  20.4 Gbits/sec

storage-02 / # ls -la /mnt/mfs/
-rw-r--r-- 1 root root 17179869184 Apr 14 18:43 test.file

Read test from MooseFS:

dd if=/mnt/mfs/test.file of=/dev/null bs=4k
4194304+0 records in
4194304+0 records out
17179869184 bytes (17 GB) copied, 139.273 s, 123 MB/s

atop while doing the read test:

PRC | sys    4.28s | user   3.10s | #proc    120 | #zombie     0 | #exit      0 |
CPU | sys      58% | user     43% | irq       3% | idle     617% | wait      80% |
cpu | sys      12% | user     12% | irq       1% | idle      57% | cpu003 w  18% |
cpu | sys      14% | user      7% | irq       0% | idle      57% | cpu007 w  21% |
cpu | sys       9% | user      6% | irq       1% | idle      64% | cpu000 w  20% |
cpu | sys       9% | user      7% | irq       0% | idle      71% | cpu004 w  13% |
cpu | sys       5% | user      5% | irq       0% | idle      90% | cpu005 w   0% |
cpu | sys       5% | user      3% | irq       1% | idle      87% | cpu001 w   4% |
cpu | sys       2% | user      2% | irq       0% | idle      96% | cpu002 w   0% |
cpu | sys       3% | user      2% | irq       0% | idle      92% | cpu006 w   3% |
CPL | avg1    0.68 | avg5    0.24 | avg15   0.15 | csw    123346 | intr    61464 |
MEM | tot     3.9G | free   33.3M | cache   3.5G | buff     0.0M | slab   184.4M |
SWP | tot   988.4M | free  983.9M |              | vmcom  344.5M | vmlim    2.9G |
PAG | scan  297315 | stall      0 |              | swin        0 | swout       0 |
DSK |          sdd | busy     10% | read     540 | write       1 | avio     0 ms |
DSK |          sda | busy      6% | read       0 | write       6 | avio    50 ms |
DSK |          sdc | busy      6% | read     692 | write       0 | avio     0 ms |
DSK |          sdb | busy      1% | read       0 | write       6 | avio     6 ms |
NET | transport    | tcpi   62264 | tcpo   80771 | udpi        0 | udpo        0 |
NET | network      | ipi    62277 | ipo    62251 | ipfrw       0 | deliv   62270 |
NET | eth1      0% | pcki      75 | pcko       7 | si     9 Kbps | so     6 Kbps |
NET | eth0      0% | pcki      22 | pcko      24 | si     3 Kbps | so     3 Kbps |
NET | lo      ---- | pcki   62221 | pcko   62221 | si   978 Mbps | so   978 Mbps |
NET | bond0   ---- | pcki      97 | pcko      31 | si    12 Kbps | so    10 Kbps |

  PID SYSCPU USRCPU  VGROW  RGROW USERNAME THR ST EXC S  CPU CMD            1/1
28379  1.11s  1.31s   -44K   408K mfs       24 --   - S  52% mfschunkserver
28459  1.53s  0.75s     0K     0K root      19 --   - S  49% mfsmount
28425  0.93s  1.02s    16K  -256K mfs       24 --   - S  42% mfschunkserver
28687  0.48s  0.01s     0K     0K root       1 --   - D  10% dd
  437  0.20s  0.00s     0K     0K root       1 --   - S   4% kswapd0
28697  0.02s  0.00s  8676K   516K root       1 --   - R   0% atop
28373  0.00s  0.01s     0K     0K mfs        1 --   - S   0% mfsmaster
28510  0.01s  0.00s     0K     0K root       1 --   - S   0% kworker/7:0
 4574  0.00s  0.00s     0K     0K root       1 --   - S   0% kworker/u:2
13251  0.00s  0.00s     0K     0K root       1 --   - S   0% md1_raid1
17959  0.00s  0.00s     0K     0K root       1 --   - S   0% xfsbufd/sdc
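One more check I can run, in case it helps narrow this down, is reading the same file with several streams in parallel to see whether the ~1 Gbit ceiling is per connection or a global cap. I have in mind something along these lines (the stream count, offsets and block size are only a rough sketch, not something I have measured yet):

# Rough sketch, not yet run: four parallel readers, each covering a different
# 4 GiB quarter of the 16 GiB test.file (skip/count are in 1 MiB blocks).
for i in 0 1 2 3; do
    dd if=/mnt/mfs/test.file of=/dev/null bs=1M skip=$((i*4096)) count=4096 &
done
wait

If the aggregate throughput scales with the number of streams, the limit would seem to sit in a single client/chunkserver connection rather than in the disks.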
Is there a limitation in MooseFS or in the FUSE client that sets the maximum socket transfer at 1 Gbit? Is there a trick to unleash the moose beast? :P If you have any idea how I can get the full potential out of this storage, please let me know.

Looking forward to your reply!

Kind regards,

Razvan Dumitrescu
System engineer
Realitatea TV
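P.S. If it would help with the diagnosis, I can also post a plain sequential read done directly against one of the XFS mounts that back the chunkservers, so the raw array speed can be compared with the MooseFS figure. Something like this (the file name is only a placeholder for whatever large file happens to be on the array):

# Placeholder file name; any sufficiently large file on /mnt/osd0 will do.
dd if=/mnt/osd0/some.large.file of=/dev/null bs=1M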