From: Razvan D. <ra...@re...> - 2011-04-15 16:03:54
Hello again Michal!

I agree with you that I won't get 600MB/s with 10 reads and 10 writes of video files, but the total should come close to that value, because in my case we deal only with large video files, 2 to 10 GB in size, and I can tell from experience that this storage, used directly via ftp, delivers around 400-500MB/s to the users on each controller, with about 5 reads and 5 writes of ~45MB/s each. From everything I could see in the tests I've done, it looks like somewhere along the way the filesystem is not reaching its true transfer potential. I say this because when I read at 123MB/s, the disks are 15% used each, 70% of the CPU power is free, RAM is also plentiful, yet the mfschunkserver processes sit in sleep state (S) at 40% CPU time. So what I understand from this is that instead of delivering data as fast as possible, the chunkservers just sit and take a nap ( :D ) for 60% of the CPU time. If there is some filesystem IO moderation in action that is part of how the system works, then it's understandable, but if they are just sleeping, then we can say MooseFS is also a "Green" (ECO-friendly) filesystem :P since it consumes less CPU power. I will also try to look into the code in the following days and see if I can answer these questions myself and get a better understanding of how the FS works. Well, I hope I haven't been too obnoxious with my jokes; I'm looking forward to your reply :).
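As a side note (not part of the original logs): to tell whether a process in state S is throttling itself or just parked waiting for I/O or socket events, you can look at its kernel wait channel. A minimal sketch using only standard `ps` and `/proc` interfaces; the `show_wait` helper name is my own:

```shell
# show_wait: print scheduler state (R/S/D) and kernel wait channel
# for each PID given, e.g. the mfschunkserver PIDs.
show_wait() {
    for pid in "$@"; do
        printf 'pid=%s state=%s wchan=%s\n' \
            "$pid" \
            "$(ps -o state= -p "$pid" | tr -d ' ')" \
            "$(cat /proc/"$pid"/wchan 2>/dev/null || echo '?')"
    done
}

# e.g.: show_wait $(pgrep mfschunkserver)
```

If wchan shows something like `poll_schedule_timeout`, the process is blocked in poll() waiting for socket or disk events rather than deliberately pacing itself.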
Kind regards,
Razvan Dumitrescu

PS: To back up my conclusions and answer the questions you asked, I also submit the following test logs:

====================================== start test logs =================================

newsdesk-storage-02 / # uname -a
Linux newsdesk-storage-02 2.6.38-gentoo-r1 #1 SMP Tue Apr 5 18:28:19 EEST 2011 x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux

RAID 6, 128k stripe size, 12 disks total: 10 data disks + 2 parity disks.
Each volume was formatted with XFS, aligned to the corresponding RAID array stripe:

mkfs.xfs -f -d su=128k,sw=10 -l version=2,su=128k /dev/sdc
mkfs.xfs -f -d su=128k,sw=10 -l version=2,su=128k /dev/sdd

READ & WRITE TEST DIRECTLY TO DISKS
===================================
(reads were done simultaneously on both controllers, /mnt/osd0 and /mnt/osd1)
(writes were also done simultaneously on both controllers, as you can see in the atop screen below)

controller 1

write:
newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd0/test.file bs=4k count=2048k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 14.3845 s, 597 MB/s
newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd0/test1.file bs=4k count=2048k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 14.2794 s, 602 MB/s
newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd0/test2.file bs=4k count=2048k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 14.9359 s, 575 MB/s

read:
newsdesk-storage-02 / # dd if=/mnt/osd0/test.file of=/dev/null bs=4k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 11.7127 s, 733 MB/s
newsdesk-storage-02 / # dd if=/mnt/osd0/test1.file of=/dev/null bs=4k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 12.3034 s, 698 MB/s
newsdesk-storage-02 / # dd if=/mnt/osd0/test2.file of=/dev/null bs=4k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 13.2949 s, 646 MB/s
controller 2

write:
newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd1/test.file bs=4k count=2048k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 14.8654 s, 578 MB/s
newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd1/test1.file bs=4k count=2048k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 14.9257 s, 576 MB/s
newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd1/test2.file bs=4k count=2048k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 14.6867 s, 585 MB/s

read:
newsdesk-storage-02 / # dd if=/mnt/osd1/test.file of=/dev/null bs=4k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 11.5464 s, 744 MB/s
newsdesk-storage-02 / # dd if=/mnt/osd1/test1.file of=/dev/null bs=4k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 11.6056 s, 740 MB/s
newsdesk-storage-02 / # dd if=/mnt/osd1/test2.file of=/dev/null bs=4k
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 12.2195 s, 703 MB/s

ATOP - newsdesk-storage-0  2011/04/15 16:40:41  5 seconds elapsed
PRC | sys 7.70s | user 0.22s | #proc 136 | #zombie 0 | #exit 0 |
CPU | sys 147% | user 4% | irq 12% | idle 573% | wait 64% |
cpu | sys 55% | user 3% | irq 5% | idle 11% | cpu000 w 27% |
cpu | sys 51% | user 1% | irq 6% | idle 12% | cpu002 w 29% |
cpu | sys 22% | user 0% | irq 0% | idle 78% | cpu001 w 0% |
cpu | sys 7% | user 0% | irq 1% | idle 88% | cpu006 w 4% |
cpu | sys 6% | user 0% | irq 0% | idle 91% | cpu004 w 2% |
cpu | sys 5% | user 0% | irq 0% | idle 95% | cpu005 w 0% |
CPL | avg1 1.05 | avg5 0.54 | avg15 0.31 | csw 10777 | intr 28418 |
MEM | tot 3.9G | free 34.6M | cache 3.6G | buff 0.0M | slab 38.3M |
SWP | tot 988.4M | free 979.4M | | vmcom 513.7M | vmlim 2.9G |
PAG | scan 1769e3 | stall 0 | | swin 0 | swout 0 |
DSK | sdc | busy 101% | read 6635 | write 0 | avio 0 ms |
DSK | sdd | busy 101% | read 7144 | write 0 | avio 0 ms |
NET | transport | tcpi 29 | tcpo 29 | udpi 0 | udpo 0 |
NET | network | ipi 46 | ipo 29 | ipfrw 0 | deliv 37 |
NET | eth1 0% | pcki 54 | pcko 7 | si 6 Kbps | so 4 Kbps |
NET | eth0 0% | pcki 34 | pcko 14 | si 4 Kbps | so 1 Kbps |
NET | bond0 ---- | pcki 88 | pcko 21 | si 11 Kbps | so 6 Kbps |
NET | lo ---- | pcki 8 | pcko 8 | si 0 Kbps | so 0 Kbps |

PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1
17681 3.19s 0.13s 0K 0K root 1 -- - R 67% dd
17680 3.07s 0.08s 0K 0K root 1 -- - R 64% dd
437 1.39s 0.00s 0K 0K root 1 -- - R 28% kswapd0
17450 0.00s 0.01s 0K 0K mfs 1 -- - S 0% mfsmaster
17677 0.01s 0.00s 0K 0K root 1 -- - R 0% atop
9 0.01s 0.00s 0K 0K root 1 -- - S 0% ksoftirqd/1
12 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/2:0
24 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/6:0
16809 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/0:2
4574 0.00s 0.00s 0K 0K root 1 -- - S 0% kworker/u:2
13791 0.00s 0.00s 0K 0K root 1 -- - S 0% xfsbufd/sdc
17648 0.00s 0.00s 0K 0K root 1 -- - S 0% flush-8:48

As atop shows, this is the maximum transfer the RAID provides, sdc/sdd being 100% busy, which is quite OK since it means around 70MB/s read and 60MB/s write for each of the 10 data disks in each array. So far so good, and I'm pleased with how things work. When I try to copy from one storage to the other I get the following results, which again I can understand, since the limitation there is the CPU.
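As a sanity check on the per-disk figures just quoted (this calculation is mine, not part of the original logs), the aggregate dd throughput can be divided across the 10 data disks of one RAID 6 array; the timings are representative values from the dd runs above:

```shell
# Back-of-envelope: spread aggregate dd throughput over the 10 data
# disks of one RAID 6 array (12 disks minus 2 parity).
bytes=8589934592   # size of each dd test file
read_s=11.7        # representative read time from the logs
write_s=14.4       # representative write time from the logs
data_disks=10

awk -v b="$bytes" -v r="$read_s" -v w="$write_s" -v n="$data_disks" 'BEGIN {
    printf "per-disk read:  ~%.0f MB/s\n", b / r / n / 1e6
    printf "per-disk write: ~%.0f MB/s\n", b / w / n / 1e6
}'
# prints roughly 73 MB/s read and 60 MB/s write per data disk
```

which matches the ~70MB/s read / ~60MB/s write per disk claimed above.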
copy from storage to storage using cp:

newsdesk-storage-02 / # time cp /mnt/osd0/test2.file /mnt/osd1/test6.file
real 0m19.376s
user 0m0.050s
sys 0m15.280s
newsdesk-storage-02 / # time cp /mnt/osd1/test2.file /mnt/osd0/test6.file
real 0m15.691s
user 0m0.030s
sys 0m15.560s
newsdesk-storage-02 / # time cp /mnt/osd0/test4.file /mnt/osd1/test7.file
real 0m16.652s
user 0m0.070s
sys 0m16.390s
newsdesk-storage-02 / # time cp /mnt/osd1/test4.file /mnt/osd0/test7.file
real 0m16.551s
user 0m0.030s
sys 0m15.760s

8589934592 bytes (8.6 GB) copied in 19.376s = 422.79 MB/s
8589934592 bytes (8.6 GB) copied in 15.691s = 522.08 MB/s
8589934592 bytes (8.6 GB) copied in 16.652s = 491.95 MB/s
8589934592 bytes (8.6 GB) copied in 16.551s = 494.95 MB/s

The transfer reaches only these values because cp caps on CPU while the disks run at half their available speed, as you can see below:

ATOP - newsdesk-storage-0  2011/04/15 17:04:36  5 seconds elapsed
PRC | sys 7.39s | user 0.01s | #proc 133 | #zombie 0 | #exit 0 |
CPU | sys 141% | user 0% | irq 5% | idle 655% | wait 0% |
cpu | sys 96% | user 0% | irq 4% | idle 0% | cpu001 w 0% |
cpu | sys 27% | user 0% | irq 0% | idle 73% | cpu006 w 0% |
cpu | sys 18% | user 0% | irq 0% | idle 82% | cpu002 w 0% |
cpu | sys 0% | user 0% | irq 0% | idle 100% | cpu000 w 0% |
cpu | sys 0% | user 0% | irq 0% | idle 100% | cpu005 w 0% |
cpu | sys 0% | user 0% | irq 0% | idle 100% | cpu007 w 0% |
CPL | avg1 0.11 | avg5 0.27 | avg15 0.26 | csw 6104 | intr 23475 |
MEM | tot 3.9G | free 31.6M | cache 3.5G | buff 0.0M | slab 160.8M |
SWP | tot 988.4M | free 978.4M | | vmcom 513.4M | vmlim 2.9G |
PAG | scan 1339e3 | stall 0 | | swin 0 | swout 0 |
DSK | sdc | busy 56% | read 0 | write 5905 | avio 0 ms |
DSK | sdd | busy 40% | read 5216 | write 1 | avio 0 ms |
DSK | sda | busy 3% | read 0 | write 3 | avio 43 ms |
DSK | sdb | busy 1% | read 0 | write 3 | avio 10 ms |
NET | transport | tcpi 42 | tcpo 41 | udpi 0 | udpo 0 |
NET | network | ipi 85 | ipo 41 | ipfrw 0 | deliv 56 |
NET | eth0 0% | pcki 70 | pcko 22 | si 10 Kbps | so 2 Kbps |
NET | eth1 0% | pcki 61 | pcko 9 | si 7 Kbps | so 3 Kbps |
NET | bond0 ---- | pcki 131 | pcko 31 | si 18 Kbps | so 6 Kbps |
NET | lo ---- | pcki 8 | pcko 8 | si 0 Kbps | so 0 Kbps |

PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1
17745 4.97s 0.01s 0K -12K root 1 -- - R 100% cp
437 1.23s 0.00s 0K 0K root 1 -- - S 25% kswapd0
17700 0.76s 0.00s 0K 0K root 1 -- - S 15% flush-8:32
17650 0.19s 0.00s 0K 0K root 1 -- - S 4% kworker/2:2
17702 0.13s 0.00s 0K 0K root 1 -- - S 3% kworker/2:1
24 0.05s 0.00s 0K 0K root 1 -- - S 1% kworker/6:0
17742 0.02s 0.00s 0K 0K root 1 -- - R 0% atop
17450 0.01s 0.00s 0K 0K mfs 1 -- - S 0% mfsmaster
17456 0.01s 0.00s 0K 0K mfs 24 -- - S 0% mfschunkserver
581 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/1:2
17645 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/1:0
17234 0.00s 0.00s 0K -4K root 1 -- - S 0% bash
17483 0.00s 0.00s 0K 0K mfs 24 -- - S 0% mfschunkserver
121 0.00s 0.00s 0K 0K root 1 -- - S 0% sync_supers
13251 0.00s 0.00s 0K 0K root 1 -- - S 0% md1_raid1

While doing simultaneous dd reads from 2 clients through MooseFS I get these results:
- disks barely used
- CPU barely used
- memory again plentiful

newsdesk-storage-01 / # dd if=/mnt/mfs/test2.file of=/dev/null bs=16k
524288+0 records in
524288+0 records out
8589934592 bytes (8.6 GB) copied, 201.276 s, 42.7 MB/s
newsdesk-storage-01 / # dd if=/mnt/mfs/test2.file of=/dev/null bs=16k
524288+0 records in
524288+0 records out
8589934592 bytes (8.6 GB) copied, 183.349 s, 46.9 MB/s
fluxuri / # dd if=/mnt/mfs/test1.file of=/dev/null bs=16k
524288+0 records in
524288+0 records out
8589934592 bytes (8.6 GB) copied, 144.376 s, 59.5 MB/s
fluxuri / # dd if=/mnt/mfs/test1.file of=/dev/null bs=16k
524288+0 records in
524288+0 records out
8589934592 bytes (8.6 GB) copied, 144.132 s, 59.6 MB/s

ATOP - newsdesk-storage-0  2011/04/15 16:21:24  5 seconds elapsed
PRC | sys 1.72s | user 2.27s | #proc 132 | #zombie 0 | #exit 0 |
CPU | sys 18% | user 23% | irq 6% | idle 750% | wait 4% |
cpu | sys 3% | user 5% | irq 1% | idle 91% | cpu005 w 0% |
cpu | sys 3% | user 3% | irq 1% | idle 94% | cpu001 w 0% |
cpu | sys 2% | user 2% | irq 1% | idle 94% | cpu007 w 1% |
cpu | sys 1% | user 3% | irq 1% | idle 94% | cpu003 w 2% |
cpu | sys 3% | user 2% | irq 0% | idle 94% | cpu000 w 1% |
cpu | sys 3% | user 3% | irq 1% | idle 93% | cpu002 w 0% |
cpu | sys 2% | user 2% | irq 0% | idle 96% | cpu004 w 0% |
cpu | sys 1% | user 3% | irq 1% | idle 94% | cpu006 w 0% |
CPL | avg1 0.08 | avg5 0.11 | avg15 0.07 | csw 61782 | intr 71792 |
MEM | tot 3.9G | free 32.1M | cache 3.6G | buff 0.0M | slab 37.8M |
SWP | tot 988.4M | free 979.6M | | vmcom 530.8M | vmlim 2.9G |
PAG | scan 121157 | stall 0 | | swin 0 | swout 0 |
DSK | sdd | busy 11% | read 492 | write 0 | avio 1 ms |
DSK | sdc | busy 5% | read 671 | write 0 | avio 0 ms |
DSK | sda | busy 4% | read 2 | write 3 | avio 38 ms |
DSK | sdb | busy 1% | read 1 | write 3 | avio 12 ms |
NET | transport | tcpi 137827 | tcpo 356731 | udpi 0 | udpo 0 |
NET | network | ipi 137862 | ipo 27661 | ipfrw 0 | deliv 137832 |
NET | eth0 46% | pcki 70537 | pcko 193565 | si 7525 Kbps | so 463 Mbps |
NET | eth1 39% | pcki 67369 | pcko 163226 | si 7221 Kbps | so 392 Mbps |
NET | bond0 ---- | pcki 137906 | pcko 356791 | si 14 Mbps | so 856 Mbps |
NET | lo ---- | pcki 8 | pcko 8 | si 0 Kbps | so 0 Kbps |

PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1
17456 0.79s 1.22s 144K -28K mfs 24 -- - S 42% mfschunkserver
17483 0.82s 1.04s -44K -192K mfs 24 -- - S 39% mfschunkserver
437 0.08s 0.00s 0K 0K root 1 -- - S 2% kswapd0
17613 0.02s 0.00s 8676K 516K root 1 -- - R 0% atop
17450 0.01s 0.00s 0K 0K mfs 1 -- - S 0% mfsmaster
16376 0.00s 0.01s 0K 0K root 1 -- - S 0% apache2
12 0.00s 0.00s 0K 0K root 1 -- - R 0% kworker/2:0
13251 0.00s 0.00s 0K 0K root 1 -- - S 0% md1_raid1
13277 0.00s 0.00s 0K 0K root 1 -- - S 0% xfsbufd/md1

Bonding works fine on newsdesk-storage-02: when I use iperf from the same 2 clients I get around 1.9Gbit/s.

ATOP - newsdesk-storage-0  2011/04/15 17:29:05  5 seconds elapsed
PRC | sys 0.96s | user 0.04s | #proc 132 | #zombie 0 | #exit 0 |
CPU | sys 10% | user 1% | irq 2% | idle 788% | wait 0% |
cpu | sys 2% | user 0% | irq 1% | idle 97% | cpu005 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 98% | cpu003 w 0% |
cpu | sys 2% | user 0% | irq 0% | idle 98% | cpu000 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 98% | cpu001 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 98% | cpu006 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu004 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu002 w 0% |
cpu | sys 0% | user 0% | irq 0% | idle 100% | cpu007 w 0% |
CPL | avg1 0.06 | avg5 0.23 | avg15 0.28 | csw 18503 | intr 45720 |
MEM | tot 3.9G | free 40.7M | cache 3.5G | buff 0.0M | slab 158.8M |
SWP | tot 988.4M | free 978.4M | | vmcom 545.8M | vmlim 2.9G |
DSK | sda | busy 3% | read 0 | write 4 | avio 37 ms |
DSK | sdb | busy 0% | read 0 | write 4 | avio 2 ms |
DSK | sdc | busy 0% | read 0 | write 1 | avio 0 ms |
DSK | sdd | busy 0% | read 0 | write 1 | avio 0 ms |
NET | transport | tcpi 782002 | tcpo 372077 | udpi 0 | udpo 0 |
NET | network | ipi 782031 | ipo 372084 | ipfrw 0 | deliv 782021 |
NET | eth0 98% | pcki 406520 | pcko 372072 | si 984 Mbps | so 39 Mbps |
NET | eth1 90% | pcki 375564 | pcko 7 | si 909 Mbps | so 2 Kbps |
NET | bond0 ---- | pcki 782084 | pcko 372079 | si 1893 Mbps | so 39 Mbps |
NET | lo ---- | pcki 8 | pcko 8 | si 0 Kbps | so 0 Kbps |

PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1
17785 0.92s 0.03s 0K 0K root 5 -- - S 19% iperf
17450 0.01s 0.00s 0K 0K mfs 1 -- - S 0% mfsmaster
17789 0.01s 0.00s 0K 0K root 1 -- - R 0% atop
17456 0.01s 0.00s 0K 0K mfs 24 -- - S 0% mfschunkserver
17483 0.00s 0.01s 0K 0K mfs 24 -- - S 0% mfschunkserver
16809 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/0:2
4574 0.00s 0.00s 0K 0K root 1 -- - S 0% kworker/u:2
13251 0.00s 0.00s 0K 0K root 1 -- - S 0% md1_raid1
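For reference, the "copied in Xs = Y MB/s" lines in the cp test above were derived by dividing the byte count by the elapsed time; they use binary megabytes (2^20 bytes), which is why 8589934592 bytes in 19.376s comes out as 422.79 rather than 443. A small helper reproducing those figures (the `mbps` name is my own):

```shell
# mbps BYTES SECONDS - throughput in binary megabytes per second,
# matching the MB/s convention used in the cp figures above.
mbps() {
    awk -v b="$1" -v t="$2" 'BEGIN { printf "%.2f MB/s\n", b / t / 1048576 }'
}

mbps 8589934592 19.376   # -> 422.79 MB/s
mbps 8589934592 15.691   # -> 522.08 MB/s
```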
======================================= end test logs ==================================

On Fri, 15 Apr 2011 14:51:03 +0200, Michal Borychowski wrote:
> Hi Razvan!
>
> Thanks for the nice words about MooseFS! :)
>
> There is absolutely no built-in limit for transfers in MooseFS. And 130 MB/s
> for SATA2 disks is a very good result - compare it to these benchmarks:
> http://hothardware.com/printarticle.aspx?articleid=881. I would never expect
> 600MB/s on SATA2 disks. And what are the results of 'dd' for the disks alone?
> But remember that such tests do not reflect real-life environments where
> lots of users use the system at the same time.
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> Gemius S.A.
> ul. Wołoska 7, 02-672 Warszawa
> Budynek MARS, klatka D
> Tel.: +4822 874-41-00
> Fax : +4822 874-41-01
>
>
> -----Original Message-----
> From: Razvan Dumitrescu [mailto:ra...@re...]
> Sent: Thursday, April 14, 2011 7:11 PM
> To: moo...@li...
> Subject: [Moosefs-users] Is there a limit set for network transfer of 1Gbit
> set in moosefs-fuse?
>
> Hello guys!
>
> First of all I have to say you made a great job with MooseFS; of all the
> open source distributed filesystems I looked over, this is by far the best
> in features, documentation and simple efficiency (installation,
> configuration, tools). Great piece of work!
> Now, in order to be in pure ecstasy while looking at how well my storage
> performs, I need to get an answer :).
> I have set up MooseFS on a server box: 2 x Xeon X5355, 4GB RAM,
> Intel 5000P board, 2 x Areca 1321 controllers, 24 x ST3750640NS disks.
> What puzzles me is that when I mount MooseFS on the same machine and
> try different transfers, the bottleneck I hit is always around 1Gbit.
> My 2 storages, sdc and sdd, are both used around 15-20% while the transfer
> from MooseFS reaches ~130MB/s (both controllers deliver around 600MB/s
> read and write).
> For this local test the loopback device is used (mounted on the same
> machine), and from what I can see using atop, the only section that seems
> to cap is the network at the loopback device, with a value slightly under
> 1Gbit. Using iperf on the same machine I get 20Gb/s, so it seems the
> loopback device can't be blamed for the limitation.
>
> Mounts:
>
> storage-02 / # df -h
> Filesystem        Size  Used Avail Use% Mounted on
> /dev/md1          149G   15G  135G  10% /
> udev               10M  192K  9.9M   2% /dev
> shm               2.0G     0  2.0G   0% /dev/shm
> /dev/sdc          6.9T   31G  6.8T   1% /mnt/osd0
> /dev/sdd          6.9T   31G  6.8T   1% /mnt/osd1
> 192.168.8.88:9421  14T   62G   14T   1% /mnt/mfs
>
> Network test:
>
> storage-02 / # iperf -s -B 127.0.0.1
> ------------------------------------------------------------
> Server listening on TCP port 5001
> Binding to local address 127.0.0.1
> TCP window size: 1.00 MByte (default)
> ------------------------------------------------------------
> [  4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 52464
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-20.0 sec  47.6 GBytes  20.4 Gbits/sec
>
> storage-02 / # ls -la /mnt/mfs/
> -rw-r--r-- 1 root root 17179869184 Apr 14 18:43 test.file
>
> Read test from MooseFS:
>
> dd if=/mnt/mfs/test.file of=/dev/null bs=4k
> 4194304+0 records in
> 4194304+0 records out
> 17179869184 bytes (17 GB) copied, 139.273 s, 123 MB/s
>
> atop while doing the read test:
>
> PRC | sys 4.28s | user 3.10s | #proc 120 | #zombie 0 | #exit 0 |
> CPU | sys 58% | user 43% | irq 3% | idle 617% | wait 80% |
> cpu | sys 12% | user 12% | irq 1% | idle 57% | cpu003 w 18% |
> cpu | sys 14% | user 7% | irq 0% | idle 57% | cpu007 w 21% |
> cpu | sys 9% | user 6% | irq 1% | idle 64% | cpu000 w 20% |
> cpu | sys 9% | user 7% | irq 0% | idle 71% | cpu004 w 13% |
> cpu | sys 5% | user 5% | irq 0% | idle 90% | cpu005 w 0% |
> cpu | sys 5% | user 3% | irq 1% | idle 87% | cpu001 w 4% |
> cpu | sys 2% | user 2% | irq 0% | idle 96% | cpu002 w 0% |
> cpu | sys 3% | user 2% | irq 0% | idle 92% | cpu006 w 3% |
> CPL | avg1 0.68 | avg5 0.24 | avg15 0.15 | csw 123346 | intr 61464 |
> MEM | tot 3.9G | free 33.3M | cache 3.5G | buff 0.0M | slab 184.4M |
> SWP | tot 988.4M | free 983.9M | | vmcom 344.5M | vmlim 2.9G |
> PAG | scan 297315 | stall 0 | | swin 0 | swout 0 |
> DSK | sdd | busy 10% | read 540 | write 1 | avio 0 ms |
> DSK | sda | busy 6% | read 0 | write 6 | avio 50 ms |
> DSK | sdc | busy 6% | read 692 | write 0 | avio 0 ms |
> DSK | sdb | busy 1% | read 0 | write 6 | avio 6 ms |
> NET | transport | tcpi 62264 | tcpo 80771 | udpi 0 | udpo 0 |
> NET | network | ipi 62277 | ipo 62251 | ipfrw 0 | deliv 62270 |
> NET | eth1 0% | pcki 75 | pcko 7 | si 9 Kbps | so 6 Kbps |
> NET | eth0 0% | pcki 22 | pcko 24 | si 3 Kbps | so 3 Kbps |
> NET | lo ---- | pcki 62221 | pcko 62221 | si 978 Mbps | so 978 Mbps |
> NET | bond0 ---- | pcki 97 | pcko 31 | si 12 Kbps | so 10 Kbps |
>
> PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1
> 28379 1.11s 1.31s -44K 408K mfs 24 -- - S 52% mfschunkserver
> 28459 1.53s 0.75s 0K 0K root 19 -- - S 49% mfsmount
> 28425 0.93s 1.02s 16K -256K mfs 24 -- - S 42% mfschunkserver
> 28687 0.48s 0.01s 0K 0K root 1 -- - D 10% dd
> 437 0.20s 0.00s 0K 0K root 1 -- - S 4% kswapd0
> 28697 0.02s 0.00s 8676K 516K root 1 -- - R 0% atop
> 28373 0.00s 0.01s 0K 0K mfs 1 -- - S 0% mfsmaster
> 28510 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/7:0
> 4574 0.00s 0.00s 0K 0K root 1 -- - S 0% kworker/u:2
> 13251 0.00s 0.00s 0K 0K root 1 -- - S 0% md1_raid1
> 17959 0.00s 0.00s 0K 0K root 1 -- - S 0% xfsbufd/sdc
>
> Is there a limitation set in MooseFS or the fuse client that sets the
> maximum socket transfer at 1Gbit?
> Is there a trick to unleash the moose beast? :P
> If you have any idea how I can get the full potential from my storage,
> please let me know.
> Looking forward to your reply!
>
> Kind regards,
>
> Razvan Dumitrescu
> System engineer
> Realitatea TV
>
> _______________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users