From: Michał B. <mic...@ge...> - 2010-11-25 09:55:29
Hi!

As written on http://www.moosefs.org/moosefs-faq.html#goal, increasing the goal may only increase the reading speed under certain conditions. You can simply try increasing the goal, wait for the replication to finish and see if it helps.

Kind regards
Michał Borychowski
MooseFS Support Manager

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01


From: 陈轶博 [mailto:che...@dv...]
Sent: Monday, November 15, 2010 8:37 AM
To: moosefs-users
Subject: [Moosefs-users] A problem of reading the same file at the same moment

Hi, I am jacky.

I am using MFS to build a movie center for my own video-streaming server, and while doing stress testing I ran into a reading problem.

First, the requirement in my application: it happens all the time that many processes read the same file at the same moment, so my movie center should support as many processes as possible reading one file simultaneously.

Second, please allow me to introduce my test environment:

hardware:
  master: IBM 3550
  chunkservers and clients are the same kind of server:
    cpu: Intel Xeon 5520 * 2, 2.6G (quad-core)
    mem: 16G
    RAID card: Adaptec 52445
    disks: 450G * 24, SAS
    nic: 3 * PCI-E GE
  switch: Gigabit switch H3 S9306

software:
  MFS version: 1.6.17 (http://www.moosefs.org/download.html)
  OS: CentOS 5.4, 64-bit, installed on one disk
  FS: XFS for CentOS 5.4
  nic: bonding of 4 GE ports, mode=6
  RAID: the other 23 disks in RAID6
  mfs goal = 1

network structure: (diagram not included here)

Third, the results of my testing:

Sequential read tests:
  # cat /dev/sda1 > /dev/null .................. 189 MB/s   (sda is the single disk holding the OS)
  # cat /dev/sdb1 > /dev/null .................. 383 MB/s   (sdb1 is the RAID6 array)
  # dd if=/dev/sdb1 of=/dev/null bs=4M ......... 413 MB/s

Random read tests on one client (carbon is my test program, written in C; it is multi-threaded, each thread takes one random file and simply reads it into a buffer, then drops the buffer):
  # ./carbon fp=/mnt/fent fn=1000 tn=8 bs=8M ------------------ 250 MB/s
  # ./carbon fp=/mnt/fent fn=1000 tn=16 bs=8M ----------------- 260 MB/s
  # ./carbon fp=/mnt/fent fn=1000 tn=32 bs=8M ----------------- 240 MB/s
  # ./carbon fp=/mnt/fent fn=1000 tn=64 bs=8M ----------------- 260 MB/s
  fp = path of the files to read, fn = number of files, tn = number of threads, bs = block size (KB)

Fourth, the problem:

There are 3 clients. When I ran {# ./carbon fp=/mnt/fent fn=1000 tn=8 bs=8M} on each client, I found that the third client (it may be any one of the clients) kept waiting to read; only when clients 1 and 2 had finished reading some files did the third begin to read.

Then I confirmed the problem in another way. I rebuilt the environment with PCs and otherwise the same configuration, and on each client I ran, for each of the 8 files: {# dd if=/mnt/fent?.ts of=/dev/null bs=4M} (where ? is 1 to 8). I found that:

  run on the first client only: the read speed is 70 MB/s
  run on the first and second clients: the read speed is 30-40 MB/s
  run on all 3 clients: the read speed is < 10 MB/s

In my opinion this means: the more processes (whether on one client or on different clients) read the same files at the same moment, the worse the reading performance becomes.

I could also set the goal to a bigger value to improve the performance, but in my application each movie file is about 3 GB in size, and a bigger goal means more storage. The biggest goal I can afford is 3, and I am afraid that cannot solve the reading problem for me.

Finally: is there anything I can do, except raising the goal value?

2010-11-15
陈轶博
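
For reference, the goal change suggested in the reply above is done from any client mount with the MooseFS command-line tools; the directory and file names below are placeholders, and the exact options (for example a recursive flag for whole directories) should be checked against the tools' usage output for your 1.6.x version:

  mfsgetgoal /mnt/mfs/movies/somefile.ts      # show the current goal of a file (placeholder path)
  mfssetgoal 2 /mnt/mfs/movies/somefile.ts    # raise the goal of that file to 2
  mfsfileinfo /mnt/mfs/movies/somefile.ts     # after replication, list where each chunk copy lives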
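
As a rough point of comparison with the carbon test described above, a similar concurrent random-read load can be generated with plain shell tools. This is only a sketch: /mnt/fent and the 8M block size come from the posting, while the flat file layout and the fixed reader count are assumptions:

  FP=/mnt/fent                                   # directory holding the test files
  files=("$FP"/*)                                # bash array of all files in that directory
  for t in 1 2 3 4 5 6 7 8; do                   # 8 parallel readers, like tn=8
      f=${files[$((RANDOM % ${#files[@]}))]}     # pick one file at random
      dd if="$f" of=/dev/null bs=8M &            # read it and discard the data
  done
  wait                                           # each dd prints its own throughput on exit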
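
The multi-client dd reproduction can likewise be written out as a small loop and started on every client at the same time; whether the original eight dd processes ran one after another or in parallel is not stated, and the file names fent1.ts ... fent8.ts are only a guess at the naming used in the posting:

  for i in 1 2 3 4 5 6 7 8; do
      dd if=/mnt/fent${i}.ts of=/dev/null bs=4M &   # 8 concurrent readers per client
  done
  wait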