Hi,
I use MooseFS for movies/music in addition to personal data. My hardware is
mainly second-hand Dells of 533 to 866 MHz with minimal RAM for the majority
of the chunkservers; one is a newer Atom-based box. My master (256 MB RAM) and
Samba server also live in Dells, on a 100 Mbit LAN. This 6.2 TB feeds my
Netgear media player and 3 PCs via Samba shares.
While I've never done any analysis or deliberately tried to hammer it, in
real life we have no problems.
Typical movies here are about 1.5 GB in size; are you trying to move something
less compressed?
I use ext4 and no RAID.
My goal for movies isn't as high as 3.
Steve
-------Original Message-------
From: 陈轶博
Date: 22/11/2010 08:05:22
To: moosefs-users
Subject: [Moosefs-users] A problem of reading the same file at the same moment
Hi
I am Jacky.
I'm using MFS to build a movie center for my own video streaming server, and
when I did stress testing I ran into a reading problem.
First, the requirement in my application:
it happens all the time that many processes read the same file at the same
moment, so my movie center must support as many processes as possible
reading a file at the same moment.
Second, please allow me to introduce my test environment:
hardware:
master: IBM 3550
chunkservers and clients on the same servers:
CPU: Intel Xeon 5520 * 2, 2.6 GHz (quad-core)
mem: 16 GB
RAID card: Adaptec 52445
disk: 450 GB * 24, SAS
nic: 3 * PCI-E GE
switch: Gigabit switch H3C S9306
software:
MFS version: 1.6.17 (http://www.moosefs.org/download.html)
OS: CentOS 5.4, 64-bit, on one disk
FS: XFS for CentOS 5.4
nic: bonding 4 GE, mode=6 (balance-alb)
RAID: the other 23 disks in RAID6
mfs goal = 1
network structure: [diagram omitted]
Third, the results of my testing follow:
Sequential read testing:
#cat /dev/sda1 > /dev/null .................. 189 MB/s (sda is the single
disk for the OS)
#cat /dev/sdb1 > /dev/null .................. 383 MB/s (sdb1 is the RAID6)
#dd if=/dev/sdb1 of=/dev/null bs=4M ......... 413 MB/s
Random read testing on one client (carbon is my test program, written in C,
multi-threaded: each thread picks one random file and just reads it into a
buffer, then drops the buffer; a simplified sketch follows after the results):
#./carbon fp=/mnt/fent fn=1000 TN=8 BS=8M ------------------ 250 MB/s
#./carbon fp=/mnt/fent fn=1000 TN=16 BS=8M ----------------- 260 MB/s
#./carbon fp=/mnt/fent fn=1000 TN=32 BS=8M ----------------- 240 MB/s
#./carbon fp=/mnt/fent fn=1000 TN=64 BS=8M ----------------- 260 MB/s
fp = path of the files to read
fn = number of files
TN = number of threads
BS = block size (8M = 8 MB)
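To make this concrete, here is a minimal sketch of carbon's reading logic
(simplified, so treat it only as an illustration: the parameters are
hardcoded instead of parsed from the command line, the timing/throughput
code is left out, and the FP/<i>.ts file-name scheme is just an example):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <fcntl.h>
#include <unistd.h>

#define FP "/mnt/fent"      /* fp: directory that holds the test files */
#define FN 1000             /* fn: number of files to choose from      */
#define TN 8                /* TN: number of reader threads            */
#define BS (8 << 20)        /* BS: bytes per read() call (8 MB)        */

/* Each thread opens one randomly chosen file, reads it into a scratch
   buffer until EOF, and simply drops the data. */
static void *reader(void *arg)
{
    char path[256];
    char *buf = malloc(BS);
    (void)arg;
    if (buf == NULL)
        return NULL;
    /* example file-name scheme: /mnt/fent/1.ts .. /mnt/fent/1000.ts */
    snprintf(path, sizeof(path), "%s/%ld.ts", FP, 1 + random() % FN);
    int fd = open(path, O_RDONLY);
    if (fd >= 0) {
        while (read(fd, buf, BS) > 0)
            ;               /* read into the buffer, then discard it */
        close(fd);
    }
    free(buf);
    return NULL;
}

int main(void)
{
    pthread_t tid[TN];
    int i;
    srandom((unsigned)time(NULL));
    for (i = 0; i < TN; i++)
        pthread_create(&tid[i], NULL, reader, NULL);
    for (i = 0; i < TN; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

It builds with gcc -pthread; the full program also times the reads to
produce the MB/s figures above.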
Fourth, the problem:
There are 3 clients. When I ran {#./carbon fp=/mnt/fent fn=1000 TN=8
BS=8M} on each client,
I found that the third client (it may be any one of the clients) was always
waiting to read; only when clients 1 and 2 had finished reading some files
did the third begin to read.
Then I confirmed this problem in another way:
I rebuilt the environment with PCs and otherwise the same configuration.
On each client I ran eight concurrent dd processes, one per file:
for i in $(seq 1 8); do dd if=/mnt/fent$i.ts of=/dev/null bs=4M & done
and I found that:
run on the first client: the read speed is 70 MB/s
run on the first and second clients: the read speed is 30~40 MB/s
run on all 3 clients: the read speed is < 10 MB/s.
In my opinion, the result means that the more processes (whether on one
client or on different clients) read the same file at the same moment, the
worse the read performance gets. I could set the goal to a bigger value
(e.g. with mfssetgoal) to improve performance, but in my application each
movie file is about 3 GB, and a bigger goal means more storage: with
goal = 3, each 3 GB movie occupies about 9 GB of raw disk. The biggest goal
value I can afford is 3, and I'm afraid that can't solve the reading problem
for me.
Finally, is there anything I can do, except raising the goal value?
2010-11-15
陈轶博