Hi!
The system continuously tests the chunks. By default each chunkserver reads
one chunk every 10 seconds and verifies its checksum; this background reading
is the read traffic you see on idle chunkservers.
This traffic usually doesn't noticeably affect other operations, and we don't
recommend running these tests less often. (In our environment one full pass
over all chunks takes about 50 days.) If you really need to change this, you
can raise HDD_TEST_FREQ in mfschunkserver.cfg from 10 seconds to 60 or 100.
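A minimal sketch of the change (the config path below is the usual default
and may differ on your install; reload support is assumed here, and a plain
restart of the chunkserver also works):

    # /etc/mfs/mfschunkserver.cfg
    # chunk test period in seconds (default: 10)
    HDD_TEST_FREQ = 60

    # then ask the chunkserver to re-read its configuration
    mfschunkserver reload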
Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01
-----Original Message-----
From: i- [mailto:it...@it...]
Sent: Wednesday, July 06, 2011 1:52 PM
To: moo...@li...
Subject: [Moosefs-users] CGI interface charts + performance
Hi all,
I'm running a very simple cluster with 4 machines (1 master+meta+chunk,
2 chunk-only, 1 client, all on the same gigabit network, all servers
using SSDs) and I have 2 questions:
1/ In the server charts, my chunkservers all show a lot of bytes read
(~7 MB/s) even though they are idle and no client is doing anything. How is
this possible? I also noticed there are two colors in the charts: light
green and dark green. The chart shows both dark and light green when I use
the cluster, and only light green when I don't. What do the colors mean?
2/ I'm using bonnie++ for some basic performance testing (an example
invocation follows the list below) and it reports the following figures:
- Write: ~40 MB/s
- Rewrite: ~4 MB/s
- Read: ~50 MB/s
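For reference, a bonnie++ run of that kind might look like this sketch; the
mount point /mnt/mfs and the user are assumptions, not details taken from
this message:

    # run against the MooseFS mount point; -s is the test file size in MB,
    # -n 0 skips the small-file creation phase, -u sets the user to run as
    bonnie++ -d /mnt/mfs -s 4096 -n 0 -u nobody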
I guess network latency is the bottleneck here, because "iostat -m -x"
shows low CPU load and low SSD utilization on all machines. How can I
verify that?
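One way to check is to measure the network path directly and watch the disks
during the benchmark. A sketch, assuming iperf and sysstat are installed and
using a placeholder host name:

    # raw TCP throughput between the client and a chunkserver
    iperf -s                   # on the chunkserver
    iperf -c chunkserver1      # on the client
    # round-trip latency
    ping -c 10 chunkserver1
    # per-device utilization, refreshed every second during the benchmark
    iostat -m -x 1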
Thank you very much!