moosefs-users archive (messages per month):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2009 |     |     |     |     |     |     |     |     |     |     |     | 4   |
| 2010 | 20  | 11  | 11  | 9   | 22  | 85  | 94  | 80  | 72  | 64  | 69  | 89  |
| 2011 | 72  | 109 | 116 | 117 | 117 | 102 | 91  | 72  | 51  | 41  | 55  | 74  |
| 2012 | 45  | 77  | 99  | 113 | 132 | 75  | 70  | 58  | 58  | 37  | 51  | 15  |
| 2013 | 28  | 16  | 25  | 38  | 23  | 39  | 42  | 19  | 41  | 31  | 18  | 18  |
| 2014 | 17  | 19  | 39  | 16  | 10  | 13  | 17  | 13  | 8   | 53  | 23  | 7   |
| 2015 | 35  | 13  | 14  | 56  | 8   | 18  | 26  | 33  | 40  | 37  | 24  | 20  |
| 2016 | 38  | 20  | 25  | 14  | 6   | 36  | 27  | 19  | 36  | 24  | 15  | 16  |
| 2017 | 8   | 13  | 17  | 20  | 28  | 10  | 20  | 3   | 18  | 8   |     | 5   |
| 2018 | 15  | 9   | 12  | 7   | 123 | 41  |     | 14  |     | 15  |     | 7   |
| 2019 | 2   | 9   | 2   | 9   |     |     | 2   |     | 6   | 1   | 12  | 2   |
| 2020 | 2   |     |     | 3   |     | 4   | 4   | 1   | 18  | 2   |     |     |
| 2021 |     | 3   |     |     |     |     | 6   |     | 5   | 5   | 3   |     |
| 2022 |     |     | 3   |     |     |     |     |     |     |     |     |     |
From: Neddy, N. N. <na...@nd...> - 2015-04-20 15:43:46
|
Hi, I'm not a smart man, so can somebody help me find the equation to calculate the total space of MooseFS? For example: 4 chunkservers which have 2x4TB HDDs each, and goal = 3? Thanks, |
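A rough sketch of the arithmetic (an editorial aside, not from the thread; it ignores filesystem and chunk overhead): with goal = 3 every chunk is stored three times, so usable capacity is roughly the raw capacity divided by the goal.

```sh
# 4 chunkservers x 2 drives x 4 TB = 32 TB raw; every chunk kept in 3 copies.
raw_tb=$((4 * 2 * 4))
echo "raw capacity:       ${raw_tb} TB"
echo "usable at goal = 3: $(echo "scale=2; $raw_tb / 3" | bc) TB"   # ~10.66 TB
```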
From: Tom I. H. <ti...@ha...> - 2015-04-20 10:33:32
|
Aleksander Wieliczko <ale...@mo...> writes: > Leak error was found and removed from 2.0.50 version. Descriptors number > ware increased in 3.0 version. > So upgrading to latest MooseFS version will solve all this problems. Terrific! Thanks for your help! :) -tih -- Popularity is the hallmark of mediocrity. --Niles Crane, "Frasier" |
From: Aleksander W. <ale...@mo...> - 2015-04-20 08:56:02
|
Hi. The leak was found and fixed in version 2.0.50. The descriptor limit was increased in version 3.0. So upgrading to the latest MooseFS version will solve all of these problems. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 04/20/2015 09:31 AM, Tom Ivar Helbekkmo wrote: > Running moosefs-pro-master-2.0.43 on RedHat RHEL6, we've observed an > mfsmaster crash related to its open file limit. The syslog entry is: > > mfsmaster[5732]: main master server module: accept error: EMFILE (Too many open files) > > ...and the other mfsmaster logs: > > mfsmaster[5734]: connection with MASTER-SYNC(1.2.3.4) has been closed by peer > > Now, since the file limit is hardcoded in the source (at least in the > community edition) to 4096, I guess it's not all that easily changeable > by me. > > Could this be a resource leak bug? Is it known? Would upgrading help? > > -tih |
From: Tom I. H. <ti...@ha...> - 2015-04-20 07:48:26
|
Running moosefs-pro-master-2.0.43 on RedHat RHEL6, we've observed an mfsmaster crash related to its open file limit. The syslog entry is: mfsmaster[5732]: main master server module: accept error: EMFILE (Too many open files) ...and the other mfsmaster logs: mfsmaster[5734]: connection with MASTER-SYNC(1.2.3.4) has been closed by peer Now, since the file limit is hardcoded in the source (at least in the community edition) to 4096, I guess it's not all that easily changeable by me. Could this be a resource leak bug? Is it known? Would upgrading help? -tih -- Popularity is the hallmark of mediocrity. --Niles Crane, "Frasier" |
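For anyone hitting the same EMFILE crash, a small monitoring sketch may help show how close the master is to its descriptor limit. This is an editorial addition, assuming Linux with /proc, a single mfsmaster process, and root privileges; it is not part of the original thread.

```sh
# Compare the master's open-descriptor count with its per-process limit.
pid=$(pidof mfsmaster) || exit 1
grep 'Max open files' "/proc/$pid/limits"
while sleep 60; do
    printf '%s  open fds: %s\n' "$(date '+%F %T')" "$(ls "/proc/$pid/fd" | wc -l)"
done
```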
From: Neddy, N. N. <na...@nd...> - 2015-04-16 14:48:02
|
Hi, I got the same errors when copy from Netware to my PC, so it's possibly wrong on Netware side. It's old data from years ago, perhaps bits rot or something similar that occurred and error happens. I also tried to open FTP from Samba server to Netware but I was unable to retrieve data. I'm not Novell admin, so I don't know how to fix it. About mounting Netware share folders to Samba, I haven't thought about it yet but will try it later. Thanks for your replies, ~Nedd On Thu, Apr 16, 2015 at 5:45 PM, Akhobadze Alexander <akh...@ri...> wrote: > > Hi All! > > And what happens when you copy files from NetWare to some other drive type (NO to SAMBA-MooseFS) ? > Does the error represent if not using SAMBA-MooseFS ? > > Where there is in Your opinion error occurs while reading from NetWare or while writing to SAMBA-MooseFS ? > > May be it is a good idea to mount NetWare volume directly on MooseFS client side and copy files without Windows ? > > Also You can try to copy files from Windows to MooseFS via ftp or sftp. > > --- >> -----Original Message----- >> From: Neddy, NH. Nam [mailto:na...@nd...] >> Sent: Wednesday, April 08, 2015 12:01 PM >> To: MooseFS Users >> Cc: moo...@li... >> Subject: Re: [MooseFS-Users] "An unexpected network error occurred" >> >> Hi Aleksander, >> >> On Samba (3.6.6) server, I switched to different protocols than >> default but still no luck. >> >> client max protocol = LANMAN1 >> >> Also use option -o mfscachemode=NO but the result is the same. >> >> Best regards, >> ~Nedd >> >> On Wed, Apr 8, 2015 at 2:58 PM, MooseFS Users <us...@mo...> >> wrote: >> > Hi. >> > We have two ideas that you can test in your environment. >> > >> > First of all try to force SAMBA server to use only SMB2 or even SMB1 >> > protocol. >> > More details about configuration you can find at: >> > www.samba.org/samba/history/samba-4.1.0.html >> > >> > Another idea is to disable caching in mfsmount: >> > mfsmount -o mfscachemode=NO -H mfsmaster.lan /mount/point >> > >> > We are waiting for your feedback. >> > >> > Best regards >> > Aleksander Wieliczko > > > |
From: Akhobadze A. <akh...@ri...> - 2015-04-16 10:45:35
|
Hi All! And what happens when you copy files from NetWare to some other drive type (NO to SAMBA-MooseFS) ? Does the error represent if not using SAMBA-MooseFS ? Where there is in Your opinion error occurs while reading from NetWare or while writing to SAMBA-MooseFS ? May be it is a good idea to mount NetWare volume directly on MooseFS client side and copy files without Windows ? Also You can try to copy files from Windows to MooseFS via ftp or sftp. --- > -----Original Message----- > From: Neddy, NH. Nam [mailto:na...@nd...] > Sent: Wednesday, April 08, 2015 12:01 PM > To: MooseFS Users > Cc: moo...@li... > Subject: Re: [MooseFS-Users] "An unexpected network error occurred" > > Hi Aleksander, > > On Samba (3.6.6) server, I switched to different protocols than > default but still no luck. > > client max protocol = LANMAN1 > > Also use option -o mfscachemode=NO but the result is the same. > > Best regards, > ~Nedd > > On Wed, Apr 8, 2015 at 2:58 PM, MooseFS Users <us...@mo...> > wrote: > > Hi. > > We have two ideas that you can test in your environment. > > > > First of all try to force SAMBA server to use only SMB2 or even SMB1 > > protocol. > > More details about configuration you can find at: > > www.samba.org/samba/history/samba-4.1.0.html > > > > Another idea is to disable caching in mfsmount: > > mfsmount -o mfscachemode=NO -H mfsmaster.lan /mount/point > > > > We are waiting for your feedback. > > > > Best regards > > Aleksander Wieliczko |
From: Aleksander W. <ale...@mo...> - 2015-04-14 09:40:53
|
Hi Peter Thank you for your replay. First of all. Reading speed will be slower than writing because this data need to be read physically from all hard drives. Writing data can be done asynchronously - I/O is directed to cache and completion is immediately confirmed to the host. This results in low latency and high throughput for write-intensive applications. This mechanism is called write-back cache. Network latency is very important not only for MooseFS but for all distributed file systems. We need to remember that each file is generating many ACK to all components in cluster. So if you want to copy file from MFS to your local disc from folder with goal 3, client need to ask master where is the file. Then master responds to the client and tells where are chunks and on which chunkserver - 27.0 ms Than client starts to communicate with all chunkservers to download file. Now you can see that the time, which is taken to talk with all components is quite high. By default MFS performs read-ahead using 1MB block. Read of each block will be delayed by about 54ms (master+chunkserver latency), so in one second it will perform only about 20 reads 1MB each, so with such latency maximum read speed is about 20MB/s. To increase throughput you may increase read-ahead block size (and also other parameters). For example you may try such parameters for mfsmount: -o mfsreadaheadsize=256 -o mfsreadaheadleng=4194304 -o mfsreadaheadtrigger=8388608 This will set read-ahead block size to 4MB and in your environment it should increase linear reading speed significantly. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 04/13/2015 02:57 PM, Peter wrote: > Hi,Aleks, > > According to your reply, I list out all the detail information about > my moosefs file system. you can see below, > > After several test, I find that , the read and write speed diff > because of the network latency. > > When the client and mfs master in the same IDC, the reading and > writing speed ,almost the same. > but when the client and mfs master in different city, it show a result > that the reading speed much slow than the write speed. > > can you explain why the network latency make such difference in > reading and writing ? > Also ,according to the test, seems that the reading speed never faster > than the write speed. it's strange, mostly, reading should be faster > than writing? > > > network topology: > ---------------------------------------------------------- > > master: > 10.153.136.230 > > master -metalogger: > 10.153.136.227 > > chunkserver: > 10.153.142.237 > 10.153.142.239 > 10.153.143.98 > > client: > same city: > 10.135.32.170 > > another city: > 10.149.131.111 > > mfs master , mfs metalogger, mfs chunk server are in same IDC. > > client 10.135.32.170 in same city ,different IDC. > > client 10.149.131.111 in other city. > > > Network Latency Test: > -------------------------------------------------------------------------------- > I do this test from the moosefs master server, ping all the other > component. > > mfs master ----> chunkserver > ping 10.153.142.237 > PING 10.153.142.237 (10.153.142.237) 56(84) bytes of data. > 64 bytes from 10.153.142.237 <http://10.153.142.237>: icmp_seq=1 > ttl=61 time=0.102 ms > 64 bytes from 10.153.142.237 <http://10.153.142.237>: icmp_seq=2 > ttl=61 time=0.084 ms > > > mfs master -----> another city client > ping 10.149.131.111 > PING 10.149.131.111 (10.149.131.111) 56(84) bytes of data. 
> 64 bytes from 10.149.131.111 <http://10.149.131.111>: icmp_seq=1 > ttl=55 time=27.0 ms > 64 bytes from 10.149.131.111 <http://10.149.131.111>: icmp_seq=2 > ttl=55 time=27.0 ms > > > mfs master -----------> same city client > ping 10.135.32.170 > PING 10.135.32.170 (10.135.32.170) 56(84) bytes of data. > 64 bytes from 10.135.32.170 <http://10.135.32.170>: icmp_seq=2 ttl=57 > time=2.28 ms > 64 bytes from 10.135.32.170 <http://10.135.32.170>: icmp_seq=3 ttl=57 > time=2.27 ms > > > mfs version information: > ----------------------------------------------------------------------------------- > the mfs version(current version): > ./mfsmaster -v > version: 2.0.61-1 > > also, I do the test in > ./mfsmaster -v > version: 1.6.27 > > ./mfsmount -V > MFS version 2.0.61-1 > FUSE library version: 2.8.3 > fusermount version: 2.8.3 > > > > *client read and write test:* > ------------------------------------------------------------------------------------------- > > > *Test1: same city client: 10.135.32.170* > read from mfs: > [root@TENCENT64 /mnt/mfs]# time dd if=/mnt/mfs/1-1.iso of=/dev/null > > 1024000+0 records in > 1024000+0 records out > 524288000 bytes (524 MB) copied, 12.8321 s, 40.9 MB/s > > real 0m12.841s > user 0m0.092s > sys 0m0.364s > [root@TENCENT64 /mnt/mfs]# time dd if=/mnt/mfs/1-2.iso of=/dev/null > 1024000+0 records in > 1024000+0 records out > 524288000 bytes (524 MB) copied, 10.148 s, 51.7 MB/s > > real 0m10.154s > user 0m0.120s > sys 0m0.328s > > write to mfs: > [root@TENCENT64 /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/2-2.iso > bs=1M count=500 > 500+0 records in > 500+0 records out > 524288000 bytes (524 MB) copied, 5.04111 s, 104 MB/s > > real 0m5.047s > user 0m0.000s > sys 0m0.204s > [root@TENCENT64 /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/2-3.iso > bs=1M count=500 > 500+0 records in > 500+0 records out > 524288000 bytes (524 MB) copied, 5.46042 s, 96.0 MB/s > > real 0m5.467s > user 0m0.000s > sys 0m0.208s > > > *another client in same city:* > read from mfs: > [root@TENCENT64 /mnt/mfs]# time cp /mnt/mfs/555-new.iso /data/ > > real 0m9.869s > user 0m0.000s > sys 0m0.669s > > [root@TENCENT64 /mnt/mfs]# time cp /mnt/mfs/555.iso /data/ > > real 0m9.980s > user 0m0.004s > sys 0m0.687s > > write to mfs: > [root@TENCENT64 /data]# time dd if=/dev/zero of=/mnt/mfs/1-1.iso bs=1M > count=500 > 500+0 records in > 500+0 records out > 524288000 bytes (524 MB) copied, 5.30263 s, 98.9 MB/s > > real 0m5.332s > user 0m0.000s > sys 0m0.349s > [root@TENCENT64 /data]# time dd if=/dev/zero of=/mnt/mfs/1-2.iso bs=1M > count=500 > 500+0 records in > 500+0 records out > 524288000 bytes (524 MB) copied, 5.12036 s, 102 MB/s > > real 0m5.128s > user 0m0.004s > sys 0m0.345s > > > > *client in other city test:* > > write to mfs: > [root@Tencent-SNG /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/3-1.iso > bs=1M count=500 > 500+0 records in > 500+0 records out > 524288000 bytes (524 MB) copied, 13.2458 s, 39.6 MB/s > > real 0m13.332s > user 0m0.002s > sys 0m0.467s > [root@Tencent-SNG /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/3-2.iso > bs=1M count=500 > 500+0 records in > 500+0 records out > 524288000 bytes (524 MB) copied, 12.1528 s, 43.1 MB/s > > real 0m12.211s > user 0m0.002s > sys 0m0.414s > > > > read from mfs: > > [root@Tencent-SNG /mnt/mfs]# time dd if=/mnt/mfs/1-1.iso of=/dev/null > 1024000+0 records in > 1024000+0 records out > 524288000 bytes (524 MB) copied, 44.355 s, 11.8 MB/s > > real 0m44.413s > user 0m0.202s > sys 0m0.653s > [root@Tencent-SNG /mnt/mfs]# time dd if=/mnt/mfs/1-2.iso of=/dev/null 
> 1024000+0 records in > 1024000+0 records out > 524288000 bytes (524 MB) copied, 45.0923 s, 11.6 MB/s > > real 0m45.151s > user 0m0.197s > sys 0m0.743s > > > > *last , test the client and mfs master in the same server (means in > same IDC):* > write to mfs: > > root@Tencent-SNG /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/4-1.iso > bs=1M count=500 > 500+0 records in > 500+0 records out > 524288000 bytes (524 MB) copied, 4.71052 s, 111 MB/s > > real 0m4.712s > user 0m0.000s > sys 0m0.272s > [root@Tencent-SNG /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/4-2.iso > bs=1M count=500 > 500+0 records in > 500+0 records out > 524288000 bytes (524 MB) copied, 4.63003 s, 113 MB/s > > real 0m4.632s > user 0m0.000s > sys 0m0.284s > > > read from mfs : > > [root@Tencent-SNG /mnt/mfs]# time dd if=/mnt/mfs/1-1.iso of=/dev/null > 1024000+0 records in > 1024000+0 records out > 524288000 bytes (524 MB) copied, 4.68863 s, 112 MB/s > > real 0m4.690s > user 0m0.140s > sys 0m0.412s > [root@Tencent-SNG /mnt/mfs]# time dd if=/mnt/mfs/1-2.iso of=/dev/null > 1024000+0 records in > 1024000+0 records out > 524288000 bytes (524 MB) copied, 4.68907 s, 112 MB/s > > real 0m4.690s > user 0m0.120s > sys 0m0.412s > > > > > > > > > > On Mon, Apr 13, 2015 at 2:57 AM, Aleksander Wieliczko > <ale...@mo... > <mailto:ale...@mo...>> wrote: > > Hi Peter > > Which version of MooseFS are you using? > > Can you tell something more about your topology - configuration? > Is your MooseFS components are in the same LAN ? > Can you check latency between all MooseFS components? > > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <http://moosefs.com> > > > On 12.04.2015 09:52, Peter wrote: >> Hi moosefs team, >> >> Recently, I setup a moosefs filesystem. >> with the following topology: >> >> Master server: 1 >> Mesta logger server: 1 >> chunkserver: 3 >> >> and each file will keep 3 copy. >> >> base in this topology, I have run some test: >> >> *and I find that write data to the mfs file system is much faster >> than the reading speed, this test result is confusing me. can you >> guys explain a little bit this for me?* >> >> >> all the test file are the same size >> /mnt/mfs --> moosefs file system. >> /data --> local disk >> --------------------------------------------------------------------------------------------- >> >> [root@TENCENT64 /mnt/mfs]# time cp 111.iso /mnt/mfs/ >> cp: cannot stat `111.iso': No such file or directory >> >> real 0m0.002s >> user 0m0.000s >> sys 0m0.000s >> [root@TENCENT64 /mnt/mfs]# time cp /data/111.iso /mnt/mfs/ >> >> real 0m5.065s >> user 0m0.000s >> sys 0m0.316s >> [root@TENCENT64 /mnt/mfs]# time cp 555.iso /data/ >> >> real 0m11.156s >> user 0m0.000s >> sys 0m0.780s >> [root@TENCENT64 /mnt/mfs]# ^C >> [root@TENCENT64 /mnt/mfs]# time cp 666.iso /data >> >> real 0m11.165s >> user 0m0.004s >> sys 0m0.864s >> [root@TENCENT64 /mnt/mfs]# time cp /data/555.iso /mnt/mfs/555-new.iso >> >> real 0m5.045s >> user 0m0.000s >> sys 0m0.300s >> [root@TENCENT64 /mnt/mfs]# ll -h >> total 3.0G >> -rw-r--r-- 1 root root 500M Apr 12 14:36 111.iso >> -rw-r--r-- 1 root root 500M Apr 12 15:33 555-new.iso >> -rw-r--r-- 1 root root 500M Apr 11 01:43 555.iso >> -rw-r--r-- 1 root root 500M Apr 11 01:46 666.iso >> -rw-r--r-- 1 root root 500M Apr 11 01:06 aaa.iso >> -rw-r--r-- 1 root root 500M Apr 11 01:39 bbb.iso >> >> >> also, i use another client server to run the test , (the client >> server in other city) >> >> we can see, the read speed is much more slow than the writing speed. 
>> >> [root@Tencent-SNG /mnt/mfs]# time cp 111.iso /data/ >> >> real 0m44.736s >> user 0m0.005s >> sys 0m1.299s >> [root@Tencent-SNG /mnt/mfs]# time cp /data/222.iso ./ >> >> [root@Tencent-SNG /data]# time cp 333.iso /mnt/mfs/333-new.iso >> >> real 0m12.432s >> user 0m0.008s >> sys 0m0.614s >> >> Also, in case there is some kernel or linux version problem, >> I setup the moosefs master in centos 6.3 and centos 7.0. >> still have the same problem. >> >> I also search the internet , and find that, the other also have >> the same problem. >> >> Other people's performance test data: >> Block size 1M Filesize20G >> Client1 write:68.4MB/s read:25.3MB/s >> Client2 write:67.5MB/s read:24.7MB/s >> >> >> >> ------------------------------------------------------------------------------ >> BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT >> Develop your own process in accordance with the BPMN 2 standard >> Learn Process modeling best practices with Bonita BPM through live exercises >> http://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual- event?utm_ >> source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF >> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... <mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
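The estimate above can be reproduced from the thread's own numbers: with read-ahead issued one block at a time, sequential read throughput is roughly block size divided by the combined round-trip latency. A quick sketch (the mount line only repeats the options suggested above; the hostname and mount point are placeholders):

```sh
# Latency-bound read throughput ~= block_size / round_trip_time (~54 ms here).
echo "1 MiB blocks: $(echo "scale=1; 1 / 0.054" | bc) MiB/s"   # ~18.5 MiB/s
echo "4 MiB blocks: $(echo "scale=1; 4 / 0.054" | bc) MiB/s"   # ~74.0 MiB/s

# Hypothetical mount line combining the suggested read-ahead options:
mfsmount -H mfsmaster.lan \
    -o mfsreadaheadsize=256 \
    -o mfsreadaheadleng=4194304 \
    -o mfsreadaheadtrigger=8388608 \
    /mnt/mfs
```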
From: Peter <new...@gm...> - 2015-04-13 12:57:22
|
Hi,Aleks, According to your reply, I list out all the detail information about my moosefs file system. you can see below, After several test, I find that , the read and write speed diff because of the network latency. When the client and mfs master in the same IDC, the reading and writing speed ,almost the same. but when the client and mfs master in different city, it show a result that the reading speed much slow than the write speed. can you explain why the network latency make such difference in reading and writing ? Also ,according to the test, seems that the reading speed never faster than the write speed. it's strange, mostly, reading should be faster than writing? network topology: ---------------------------------------------------------- master: 10.153.136.230 master -metalogger: 10.153.136.227 chunkserver: 10.153.142.237 10.153.142.239 10.153.143.98 client: same city: 10.135.32.170 another city: 10.149.131.111 mfs master , mfs metalogger, mfs chunk server are in same IDC. client 10.135.32.170 in same city ,different IDC. client 10.149.131.111 in other city. all these server has : 1000Mb/s speed. ------------------------------------------------------------ ethtool eth1 Settings for eth1: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: Symmetric Advertised auto-negotiation: Yes Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: on Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) Network Latency Test: -------------------------------------------------------------------------------- I do this test from the moosefs master server, ping all the other component. mfs master ----> chunkserver ping 10.153.142.237 PING 10.153.142.237 (10.153.142.237) 56(84) bytes of data. 64 bytes from 10.153.142.237: icmp_seq=1 ttl=61 time=0.102 ms 64 bytes from 10.153.142.237: icmp_seq=2 ttl=61 time=0.084 ms mfs master -----> another city client ping 10.149.131.111 PING 10.149.131.111 (10.149.131.111) 56(84) bytes of data. 64 bytes from 10.149.131.111: icmp_seq=1 ttl=55 time=27.0 ms 64 bytes from 10.149.131.111: icmp_seq=2 ttl=55 time=27.0 ms mfs master -----------> same city client ping 10.135.32.170 PING 10.135.32.170 (10.135.32.170) 56(84) bytes of data. 
64 bytes from 10.135.32.170: icmp_seq=2 ttl=57 time=2.28 ms 64 bytes from 10.135.32.170: icmp_seq=3 ttl=57 time=2.27 ms mfs version information: ----------------------------------------------------------------------------------- the mfs version(current version): ./mfsmaster -v version: 2.0.61-1 also, I do the test in ./mfsmaster -v version: 1.6.27 ./mfsmount -V MFS version 2.0.61-1 FUSE library version: 2.8.3 fusermount version: 2.8.3 *client read and write test:* ------------------------------------------------------------------------------------------- *Test1: same city client: 10.135.32.170* read from mfs: [root@TENCENT64 /mnt/mfs]# time dd if=/mnt/mfs/1-1.iso of=/dev/null 1024000+0 records in 1024000+0 records out 524288000 bytes (524 MB) copied, 12.8321 s, 40.9 MB/s real 0m12.841s user 0m0.092s sys 0m0.364s [root@TENCENT64 /mnt/mfs]# time dd if=/mnt/mfs/1-2.iso of=/dev/null 1024000+0 records in 1024000+0 records out 524288000 bytes (524 MB) copied, 10.148 s, 51.7 MB/s real 0m10.154s user 0m0.120s sys 0m0.328s write to mfs: [root@TENCENT64 /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/2-2.iso bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes (524 MB) copied, 5.04111 s, 104 MB/s real 0m5.047s user 0m0.000s sys 0m0.204s [root@TENCENT64 /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/2-3.iso bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes (524 MB) copied, 5.46042 s, 96.0 MB/s real 0m5.467s user 0m0.000s sys 0m0.208s *another client in same city:* read from mfs: [root@TENCENT64 /mnt/mfs]# time cp /mnt/mfs/555-new.iso /data/ real 0m9.869s user 0m0.000s sys 0m0.669s [root@TENCENT64 /mnt/mfs]# time cp /mnt/mfs/555.iso /data/ real 0m9.980s user 0m0.004s sys 0m0.687s write to mfs: [root@TENCENT64 /data]# time dd if=/dev/zero of=/mnt/mfs/1-1.iso bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes (524 MB) copied, 5.30263 s, 98.9 MB/s real 0m5.332s user 0m0.000s sys 0m0.349s [root@TENCENT64 /data]# time dd if=/dev/zero of=/mnt/mfs/1-2.iso bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes (524 MB) copied, 5.12036 s, 102 MB/s real 0m5.128s user 0m0.004s sys 0m0.345s *client in other city test:* write to mfs: [root@Tencent-SNG /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/3-1.iso bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes (524 MB) copied, 13.2458 s, 39.6 MB/s real 0m13.332s user 0m0.002s sys 0m0.467s [root@Tencent-SNG /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/3-2.iso bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes (524 MB) copied, 12.1528 s, 43.1 MB/s real 0m12.211s user 0m0.002s sys 0m0.414s read from mfs: [root@Tencent-SNG /mnt/mfs]# time dd if=/mnt/mfs/1-1.iso of=/dev/null 1024000+0 records in 1024000+0 records out 524288000 bytes (524 MB) copied, 44.355 s, 11.8 MB/s real 0m44.413s user 0m0.202s sys 0m0.653s [root@Tencent-SNG /mnt/mfs]# time dd if=/mnt/mfs/1-2.iso of=/dev/null 1024000+0 records in 1024000+0 records out 524288000 bytes (524 MB) copied, 45.0923 s, 11.6 MB/s real 0m45.151s user 0m0.197s sys 0m0.743s *last , test the client and mfs master in the same server (means in same IDC):* write to mfs: root@Tencent-SNG /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/4-1.iso bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes (524 MB) copied, 4.71052 s, 111 MB/s real 0m4.712s user 0m0.000s sys 0m0.272s [root@Tencent-SNG /mnt/mfs]# time dd if=/dev/zero of=/mnt/mfs/4-2.iso bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes (524 MB) copied, 4.63003 s, 113 MB/s real 
0m4.632s user 0m0.000s sys 0m0.284s read from mfs : [root@Tencent-SNG /mnt/mfs]# time dd if=/mnt/mfs/1-1.iso of=/dev/null 1024000+0 records in 1024000+0 records out 524288000 bytes (524 MB) copied, 4.68863 s, 112 MB/s real 0m4.690s user 0m0.140s sys 0m0.412s [root@Tencent-SNG /mnt/mfs]# time dd if=/mnt/mfs/1-2.iso of=/dev/null 1024000+0 records in 1024000+0 records out 524288000 bytes (524 MB) copied, 4.68907 s, 112 MB/s real 0m4.690s user 0m0.120s sys 0m0.412s On Mon, Apr 13, 2015 at 2:57 AM, Aleksander Wieliczko < ale...@mo...> wrote: > Hi Peter > > Which version of MooseFS are you using? > > Can you tell something more about your topology - configuration? > Is your MooseFS components are in the same LAN ? > Can you check latency between all MooseFS components? > > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <http://moosefs.com> > > > On 12.04.2015 09:52, Peter wrote: > > Hi moosefs team, > > Recently, I setup a moosefs filesystem. > with the following topology: > > Master server: 1 > Mesta logger server: 1 > chunkserver: 3 > > and each file will keep 3 copy. > > base in this topology, I have run some test: > > *and I find that write data to the mfs file system is much faster than > the reading speed, this test result is confusing me. can you guys explain a > little bit this for me?* > > > all the test file are the same size > /mnt/mfs --> moosefs file system. > /data --> local disk > > --------------------------------------------------------------------------------------------- > > [root@TENCENT64 /mnt/mfs]# time cp 111.iso /mnt/mfs/ > cp: cannot stat `111.iso': No such file or directory > > real 0m0.002s > user 0m0.000s > sys 0m0.000s > [root@TENCENT64 /mnt/mfs]# time cp /data/111.iso /mnt/mfs/ > > real 0m5.065s > user 0m0.000s > sys 0m0.316s > [root@TENCENT64 /mnt/mfs]# time cp 555.iso /data/ > > real 0m11.156s > user 0m0.000s > sys 0m0.780s > [root@TENCENT64 /mnt/mfs]# ^C > [root@TENCENT64 /mnt/mfs]# time cp 666.iso /data > > real 0m11.165s > user 0m0.004s > sys 0m0.864s > [root@TENCENT64 /mnt/mfs]# time cp /data/555.iso /mnt/mfs/555-new.iso > > real 0m5.045s > user 0m0.000s > sys 0m0.300s > [root@TENCENT64 /mnt/mfs]# ll -h > total 3.0G > -rw-r--r-- 1 root root 500M Apr 12 14:36 111.iso > -rw-r--r-- 1 root root 500M Apr 12 15:33 555-new.iso > -rw-r--r-- 1 root root 500M Apr 11 01:43 555.iso > -rw-r--r-- 1 root root 500M Apr 11 01:46 666.iso > -rw-r--r-- 1 root root 500M Apr 11 01:06 aaa.iso > -rw-r--r-- 1 root root 500M Apr 11 01:39 bbb.iso > > > also, i use another client server to run the test , (the client server > in other city) > > we can see, the read speed is much more slow than the writing speed. > > [root@Tencent-SNG /mnt/mfs]# time cp 111.iso /data/ > > real 0m44.736s > user 0m0.005s > sys 0m1.299s > [root@Tencent-SNG /mnt/mfs]# time cp /data/222.iso ./ > > [root@Tencent-SNG /data]# time cp 333.iso /mnt/mfs/333-new.iso > > real 0m12.432s > user 0m0.008s > sys 0m0.614s > > Also, in case there is some kernel or linux version problem, > I setup the moosefs master in centos 6.3 and centos 7.0. > still have the same problem. > > I also search the internet , and find that, the other also have the same > problem. 
> > Other people's performance test data: > Block size 1M Filesize20G > Client1 write:68.4MB/s read:25.3MB/s > Client2 write:67.5MB/s read:24.7MB/s > > > > ------------------------------------------------------------------------------ > BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT > Develop your own process in accordance with the BPMN 2 standard > Learn Process modeling best practices with Bonita BPM through live exerciseshttp://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual- event?utm_ > source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF > > > > _________________________________________ > moosefs-users mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > |
From: Piotr R. K. <pio...@mo...> - 2015-04-13 10:13:38
|
Hello, we are in the middle of testing MooseFS 3.0 in a production environment. We think it should become stable by the end of April. -- Best regards, Piotr Robert Konopelko *MooseFS Technical Support Engineer* | moosefs.com[1] -------- [1] http://moosefs.com |
From: Aleksander W. <ale...@mo...> - 2015-04-12 18:57:51
|
Hi Peter Which version of MooseFS are you using? Can you tell something more about your topology - configuration? Is your MooseFS components are in the same LAN ? Can you check latency between all MooseFS components? Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 12.04.2015 09:52, Peter wrote: > Hi moosefs team, > > Recently, I setup a moosefs filesystem. > with the following topology: > > Master server: 1 > Mesta logger server: 1 > chunkserver: 3 > > and each file will keep 3 copy. > > base in this topology, I have run some test: > > *and I find that write data to the mfs file system is much faster than > the reading speed, this test result is confusing me. can you guys > explain a little bit this for me?* > > > all the test file are the same size > /mnt/mfs --> moosefs file system. > /data --> local disk > --------------------------------------------------------------------------------------------- > > [root@TENCENT64 /mnt/mfs]# time cp 111.iso /mnt/mfs/ > cp: cannot stat `111.iso': No such file or directory > > real 0m0.002s > user 0m0.000s > sys 0m0.000s > [root@TENCENT64 /mnt/mfs]# time cp /data/111.iso /mnt/mfs/ > > real 0m5.065s > user 0m0.000s > sys 0m0.316s > [root@TENCENT64 /mnt/mfs]# time cp 555.iso /data/ > > real 0m11.156s > user 0m0.000s > sys 0m0.780s > [root@TENCENT64 /mnt/mfs]# ^C > [root@TENCENT64 /mnt/mfs]# time cp 666.iso /data > > real 0m11.165s > user 0m0.004s > sys 0m0.864s > [root@TENCENT64 /mnt/mfs]# time cp /data/555.iso /mnt/mfs/555-new.iso > > real 0m5.045s > user 0m0.000s > sys 0m0.300s > [root@TENCENT64 /mnt/mfs]# ll -h > total 3.0G > -rw-r--r-- 1 root root 500M Apr 12 14:36 111.iso > -rw-r--r-- 1 root root 500M Apr 12 15:33 555-new.iso > -rw-r--r-- 1 root root 500M Apr 11 01:43 555.iso > -rw-r--r-- 1 root root 500M Apr 11 01:46 666.iso > -rw-r--r-- 1 root root 500M Apr 11 01:06 aaa.iso > -rw-r--r-- 1 root root 500M Apr 11 01:39 bbb.iso > > > also, i use another client server to run the test , (the client server > in other city) > > we can see, the read speed is much more slow than the writing speed. > > [root@Tencent-SNG /mnt/mfs]# time cp 111.iso /data/ > > real 0m44.736s > user 0m0.005s > sys 0m1.299s > [root@Tencent-SNG /mnt/mfs]# time cp /data/222.iso ./ > > [root@Tencent-SNG /data]# time cp 333.iso /mnt/mfs/333-new.iso > > real 0m12.432s > user 0m0.008s > sys 0m0.614s > > Also, in case there is some kernel or linux version problem, > I setup the moosefs master in centos 6.3 and centos 7.0. > still have the same problem. > > I also search the internet , and find that, the other also have the > same problem. > > Other people's performance test data: > Block size 1M Filesize20G > Client1 write:68.4MB/s read:25.3MB/s > Client2 write:67.5MB/s read:24.7MB/s > > > > ------------------------------------------------------------------------------ > BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT > Develop your own process in accordance with the BPMN 2 standard > Learn Process modeling best practices with Bonita BPM through live exercises > http://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual- event?utm_ > source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Peter <new...@gm...> - 2015-04-12 07:52:21
|
Hi moosefs team, Recently, I setup a moosefs filesystem. with the following topology: Master server: 1 Mesta logger server: 1 chunkserver: 3 and each file will keep 3 copy. base in this topology, I have run some test: *and I find that write data to the mfs file system is much faster than the reading speed, this test result is confusing me. can you guys explain a little bit this for me?* all the test file are the same size /mnt/mfs --> moosefs file system. /data --> local disk --------------------------------------------------------------------------------------------- [root@TENCENT64 /mnt/mfs]# time cp 111.iso /mnt/mfs/ cp: cannot stat `111.iso': No such file or directory real 0m0.002s user 0m0.000s sys 0m0.000s [root@TENCENT64 /mnt/mfs]# time cp /data/111.iso /mnt/mfs/ real 0m5.065s user 0m0.000s sys 0m0.316s [root@TENCENT64 /mnt/mfs]# time cp 555.iso /data/ real 0m11.156s user 0m0.000s sys 0m0.780s [root@TENCENT64 /mnt/mfs]# ^C [root@TENCENT64 /mnt/mfs]# time cp 666.iso /data real 0m11.165s user 0m0.004s sys 0m0.864s [root@TENCENT64 /mnt/mfs]# time cp /data/555.iso /mnt/mfs/555-new.iso real 0m5.045s user 0m0.000s sys 0m0.300s [root@TENCENT64 /mnt/mfs]# ll -h total 3.0G -rw-r--r-- 1 root root 500M Apr 12 14:36 111.iso -rw-r--r-- 1 root root 500M Apr 12 15:33 555-new.iso -rw-r--r-- 1 root root 500M Apr 11 01:43 555.iso -rw-r--r-- 1 root root 500M Apr 11 01:46 666.iso -rw-r--r-- 1 root root 500M Apr 11 01:06 aaa.iso -rw-r--r-- 1 root root 500M Apr 11 01:39 bbb.iso also, i use another client server to run the test , (the client server in other city) we can see, the read speed is much more slow than the writing speed. [root@Tencent-SNG /mnt/mfs]# time cp 111.iso /data/ real 0m44.736s user 0m0.005s sys 0m1.299s [root@Tencent-SNG /mnt/mfs]# time cp /data/222.iso ./ [root@Tencent-SNG /data]# time cp 333.iso /mnt/mfs/333-new.iso real 0m12.432s user 0m0.008s sys 0m0.614s Also, in case there is some kernel or linux version problem, I setup the moosefs master in centos 6.3 and centos 7.0. still have the same problem. I also search the internet , and find that, the other also have the same problem. Other people's performance test data: Block size 1M Filesize20G Client1 write:68.4MB/s read:25.3MB/s Client2 write:67.5MB/s read:24.7MB/s |
From: Michael T. <mic...@ho...> - 2015-04-11 02:54:50
|
In another thread (see the "Running multiple chunk servers on one machine" thread): "By the way in our last test in 10Gb LAN with 10 chunkservers each 3x 7200 RPM HDD we achieved 900MB/s on one mfsmount during writing file." This got me thinking: which is better performance-wise, a few chunkservers with lots of HDDs/SSDs, or more chunkservers with fewer disks? --- mike t. |
From: Piotr R. K. <pio...@mo...> - 2015-04-10 20:49:18
|
Hello, CGI Server is, as before, in package moosefs-cgiserv (previously it was named moosefs-ce-cgiserv). -- Best regards, Piotr Robert Konopelko MooseFS Technical Support Engineer // Sent from my phone, sorry for condensed form On Apr 10, 2015 10:44 PM, web user <web...@gm...> wrote: > > Yes. I understand that and I was able to install the moosefs-master and moosefs-chunkserver and moosefs-client. However, I'm missing the the cgi server for the web admin page. Which package is it in? > > On Fri, Apr 10, 2015 at 4:37 PM, Aleksander Wieliczko <ale...@mo...> wrote: >> >> In new MooseFS 2.0.60 GPLv2 and above package names have been changed. >> Now to install MooseFS components just use moosefs-master, moosefs-chunkserver, moosefs-client instead of moosefs-ce-master, moosefs-ce-chunkserver, moosefs-ce-client. >> >> To get more information on how to install MooseFS GPLv2 product please read the MooseFS 2.0.60 manual instructions specific to your platform. >> >> Best regards >> Aleksander Wieliczko >> Technical Support Engineer >> MooseFS.com >> >> On 10.04.2015 19:24, web user wrote: >>> >>> Hi, >>> >>> I was just about to install moosefs server on a new machine and can't seem to find the following package: >>> >>> moosefs-ce-cgiserv >>> >>> I see that it is still there in the pro version. This was there a few months back and seems to have recently removed. Is this now only available in the pro version. If so, that sucks, since it was a nice to have tool... >>> >>> Thanks, >>> >>> WU >>> >>> >>> ------------------------------------------------------------------------------ >>> >>> BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT >>> >>> Develop your own process in accordance with the BPMN 2 standard >>> >>> Learn Process modeling best practices with Bonita BPM through live exercises >>> >>> http://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual- event?utm_ >>> >>> source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF >>> >>> >>> >>> _________________________________________ >>> >>> moosefs-users mailing list >>> >>> moo...@li... >>> >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>> >> > |
From: web u. <web...@gm...> - 2015-04-10 20:44:30
|
Yes. I understand that and I was able to install the moosefs-master and moosefs-chunkserver and moosefs-client. However, I'm missing the the cgi server for the web admin page. Which package is it in? On Fri, Apr 10, 2015 at 4:37 PM, Aleksander Wieliczko < ale...@mo...> wrote: > In new MooseFS 2.0.60 GPLv2 and above package names have been changed. > Now to install MooseFS components just use moosefs-master, > moosefs-chunkserver, moosefs-client instead of moosefs-ce-master, > moosefs-ce-chunkserver, moosefs-ce-client. > > To get more information on how to install MooseFS GPLv2 product please > read the MooseFS 2.0.60 manual instructions specific to your platform. > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <http://moosefs.com> > > On 10.04.2015 19:24, web user wrote: > > Hi, > > I was just about to install moosefs server on a new machine and can't > seem to find the following package: > > moosefs-ce-cgiserv > > I see that it is still there in the pro version. This was there a few > months back and seems to have recently removed. Is this now only available > in the pro version. If so, that sucks, since it was a nice to have tool... > > Thanks, > > WU > > > ------------------------------------------------------------------------------ > BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT > Develop your own process in accordance with the BPMN 2 standard > Learn Process modeling best practices with Bonita BPM through live exerciseshttp://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual- event?utm_ > source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF > > > > _________________________________________ > moosefs-users mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > |
From: Aleksander W. <ale...@mo...> - 2015-04-10 20:38:00
|
In new MooseFS 2.0.60 GPLv2 and above package names have been changed. Now to install MooseFS components just use moosefs-master, moosefs-chunkserver, moosefs-client instead of moosefs-ce-master, moosefs-ce-chunkserver, moosefs-ce-client. To get more information on how to install MooseFS GPLv2 product please read the MooseFS 2.0.60 manual instructions specific to your platform. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 10.04.2015 19:24, web user wrote: > Hi, > > I was just about to install moosefs server on a new machine and can't > seem to find the following package: > > moosefs-ce-cgiserv > > I see that it is still there in the pro version. This was there a few > months back and seems to have recently removed. Is this now only > available in the pro version. If so, that sucks, since it was a nice > to have tool... > > Thanks, > > WU > > > ------------------------------------------------------------------------------ > BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT > Develop your own process in accordance with the BPMN 2 standard > Learn Process modeling best practices with Bonita BPM through live exercises > http://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual- event?utm_ > source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: web u. <web...@gm...> - 2015-04-10 18:33:39
|
Figure it out. I cleaned up /var/lib/mfs and also removed the .metaid file in each of my disks. All looks good... On Fri, Apr 10, 2015 at 2:16 PM, web user <web...@gm...> wrote: > I'm in the process of setting up a fast moosefs (built on top of ssds) in > addition to mastermfs running. > > I did this first without creating a new dns server and the chunkservers > registered with my old moosefs. I realized this quickly and changed the > configuration files to create a new alias "mfsmaterfast" > > However, when I now start up the services, they all startup but somehow > the disks are not registering with the mfsmasterfast (running on the same > host). Here is the error message from syslog: > > Apr 10 14:06:07 msi-90 mfsmasterfast[1596]: connection with > CS(192.168.4.90) has been closed by peer > Apr 10 14:06:07 msi-90 mfsmasterfast[1596]: chunkserver disconnected - ip: > 192.168.4.90 / port: 9422, usedspace: 2054701056 (1.91 GiB), totalspace: > 3300572712960 (3073.90 GiB) > Apr 10 14:06:07 msi-90 mfsmasterfast[1596]: server ip: 192.168.4.90 / > port: 9422 has been fully removed from data structures > Apr 10 14:06:12 msi-90 mfschunkserverfast[1599]: connecting ... > Apr 10 14:06:12 msi-90 mfschunkserverfast[1599]: connected to Master > Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: csdb: found cs using ip:port > and csid (192.168.4.90:9422,3) > Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: chunkserver register begin > (packet version: 6) - ip: 192.168.4.90 / port: 9422, usedspace: 2054701056 > (1.91 GiB), totalspace: 3300572712960 (3073.90 GiB) > Apr 10 14:06:12 msi-90 mfschunkserverfast[1599]: MATOCS_MASTER_ACK - wrong > meta data id. Can't connect to master > Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: connection with > CS(192.168.4.90) has been closed by peer > Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: chunkserver disconnected - ip: > 192.168.4.90 / port: 9422, usedspace: 2054701056 (1.91 GiB), totalspace: > 3300572712960 (3073.90 GiB) > Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: server ip: 192.168.4.90 / > port: 9422 has been fully removed from data structures > > Any ideas on what I'm doing wrong? > |
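For the record, a hedged sketch of the cleanup described above. The service name, disk paths and data directory are guesses for this second "fast" instance, and wiping metadata identity is only safe on a brand-new, empty cluster.

```sh
# Stop the second chunkserver, drop the per-disk metadata ids left over from
# the accidental registration with the old master, clear its state, restart.
service mfschunkserverfast stop
rm -f /mnt/ssd*/.metaid      # one .metaid per data disk (paths assumed)
rm -rf /var/lib/mfs/*        # state directory of the NEW instance only
service mfschunkserverfast start
```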
From: web u. <web...@gm...> - 2015-04-10 18:16:13
|
I'm in the process of setting up a fast moosefs (built on top of ssds) in addition to mastermfs running. I did this first without creating a new dns server and the chunkservers registered with my old moosefs. I realized this quickly and changed the configuration files to create a new alias "mfsmaterfast" However, when I now start up the services, they all startup but somehow the disks are not registering with the mfsmasterfast (running on the same host). Here is the error message from syslog: Apr 10 14:06:07 msi-90 mfsmasterfast[1596]: connection with CS(192.168.4.90) has been closed by peer Apr 10 14:06:07 msi-90 mfsmasterfast[1596]: chunkserver disconnected - ip: 192.168.4.90 / port: 9422, usedspace: 2054701056 (1.91 GiB), totalspace: 3300572712960 (3073.90 GiB) Apr 10 14:06:07 msi-90 mfsmasterfast[1596]: server ip: 192.168.4.90 / port: 9422 has been fully removed from data structures Apr 10 14:06:12 msi-90 mfschunkserverfast[1599]: connecting ... Apr 10 14:06:12 msi-90 mfschunkserverfast[1599]: connected to Master Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: csdb: found cs using ip:port and csid (192.168.4.90:9422,3) Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: chunkserver register begin (packet version: 6) - ip: 192.168.4.90 / port: 9422, usedspace: 2054701056 (1.91 GiB), totalspace: 3300572712960 (3073.90 GiB) Apr 10 14:06:12 msi-90 mfschunkserverfast[1599]: MATOCS_MASTER_ACK - wrong meta data id. Can't connect to master Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: connection with CS(192.168.4.90) has been closed by peer Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: chunkserver disconnected - ip: 192.168.4.90 / port: 9422, usedspace: 2054701056 (1.91 GiB), totalspace: 3300572712960 (3073.90 GiB) Apr 10 14:06:12 msi-90 mfsmasterfast[1596]: server ip: 192.168.4.90 / port: 9422 has been fully removed from data structures Any ideas on what I'm doing wrong? |
From: web u. <web...@gm...> - 2015-04-10 17:24:58
|
Hi, I was just about to install moosefs server on a new machine and can't seem to find the following package: moosefs-ce-cgiserv I see that it is still there in the pro version. This was there a few months back and seems to have recently removed. Is this now only available in the pro version. If so, that sucks, since it was a nice to have tool... Thanks, WU |
From: Aleksander W. <ale...@mo...> - 2015-04-10 15:17:03
|
Hi > I'm setting up a fast network store for temp data backed up by multiple ssd drives. The server is dedicated for moosefs. I don't care about redundancy for now. Is it going to be faster to > run multiple chunk servers on one machine? We doubt that you will achieve better performance in this configuration. Our manual does not advise such a configuration, but we will check it during our next internal tests. By the way, in our last test on a 10Gb LAN with 10 chunkservers, each with 3x 7200 RPM HDDs, we achieved 900MB/s on one mfsmount while writing a file. P.S. If you really don't care about redundancy you can try to set up RAID0 across the SSD devices and connect it to mfschunkserver. Remember that we don't advise using RAID with MooseFS, but in this specific case you can try this configuration. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> |
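If anyone wants to try the RAID0 suggestion, here is a minimal sketch using Linux md. Device names, the filesystem choice and the mfshdd.cfg path are assumptions; RAID0 offers no redundancy at all, so a single SSD failure loses the whole array.

```sh
# Stripe two SSDs, format the array, and hand the mount point to the chunkserver.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mkdir -p /mnt/mfschunks-ssd
mount /dev/md0 /mnt/mfschunks-ssd
echo "/mnt/mfschunks-ssd" >> /etc/mfs/mfshdd.cfg   # then restart mfschunkserver
```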
From: Neddy, N. N. <na...@nd...> - 2015-04-10 14:20:26
|
Hi, May I know the roadmap of Moosefs 3 when will it become stable? I've read its label function and it looks like very promising. Thanks,. On Fri, Apr 3, 2015 at 6:10 PM, Krzysztof Kielak <krz...@mo...> wrote: > Dear MooseFS Community, > > A new version of MooseFS is coming to you on Easter with some new exciting > features and specially built packages for “MooseFS 3.0 Easter Edition for > Raspberry Pi 2” running on Raspbian Wheezy :) > > Among many updates and fixes, that are laboriously listed in source packages > change log, here’s a quick view on most important changes and new features: > > - Added new functionality for Storage Tiering (possibility to assign labels > to chunkservers and use those labels in mfssetgoal expressions when defining > policies for storing, keeping and archiving data), > - Performance improvement for small file random I/O due to the new semantics > for fsync() system call, which now by default has the same behaviour as on > any local Linux/FreeBSD filesystem, > - Support for global locks compatible with POSIX locks and flock() advisory > lock mechanism when using fuse 2.9+. > > Simple test of unpacking complete source tree for latest Linux kernel (more > than 50k objects with cumulative size of more than 630 MB when unpacked) > shows 100% speed improvement when going from MooseFS version 2.0.x to > version 3.0.x. > > Be sure to check the documentation on new storage tiering feature and > installation instructions at http://moosefs.com. > > Best Regards, > Krzysztof Kielak > Director of Operations and Customer Support > Mobile: +48 601 476 440 > > > ------------------------------------------------------------------------------ > Dive into the World of Parallel Programming The Go Parallel Website, > sponsored > by Intel and developed in partnership with Slashdot Media, is your hub for > all > things parallel software development, from weekly thought leadership blogs > to > news, videos, case studies, tutorials and more. Take a look and join the > conversation now. http://goparallel.sourceforge.net/ > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: web u. <web...@gm...> - 2015-04-10 13:45:06
|
I'm setting up a fast network store for temp data backed by multiple SSD drives. The server is dedicated to MooseFS. I don't care about redundancy for now. Is it going to be faster to run multiple chunk servers on one machine? I'm running on Ubuntu with the default ppm. How do I set up multiple chunk servers to use different config files... Thanks, WU |
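One possible way to run a second chunkserver on the same host, sketched under assumptions (the -c option and the option names below are taken from mfschunkserver.cfg as commonly documented; verify them against your installed version). Each instance needs its own listen port, data directory and disk list.

```sh
# Clone the config for a second instance and point it at its own resources.
cp /etc/mfs/mfschunkserver.cfg /etc/mfs/mfschunkserver2.cfg
# In mfschunkserver2.cfg set at least:
#   CSSERV_LISTEN_PORT = 9522          # default instance keeps 9422
#   DATA_PATH = /var/lib/mfs2
#   HDD_CONF_FILENAME = /etc/mfs/mfshdd2.cfg
mfschunkserver -c /etc/mfs/mfschunkserver2.cfg start
```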
From: Aleksander W. <ale...@mo...> - 2015-04-10 09:54:30
|
Hi Check if you have admin flag added in your mfsexports.cfg file. For tests you can add this flag for "*" addresses and root folder. * / rw,alldirs,admin,maproot=0 This option was deployed in MooseFS 3.0 to prevent changing goals by default. Remember that new mfssetgoal tool is able to change not only goals but labels. So good idea is to give this rights only for administrator. More about labels you can find in our document "MooseFS 3.0 LABELS Manual" at: https://moosefs.com/documentation.html Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 04/10/2015 11:10 AM, Ben Harker wrote: > > Hi all, by no means a pro here, but i've had successful mfs instances > running before. > > > I'm trying out 3.0.13, with 6 or so chunkservers running on top of > fedora 21 / XFS and master + metalogger ubuntu boxes, everything is > running great - everything /except/ for mfssetgoal, which simply > returns [even when run as root] that i don't have permission to change > goals. > > > --- > > > administrator@mfsmaster:~$ sudo mfssetgoal -r 2 > /mnt/MFS/FLASH_CC_2014_IN_Install-1\ 2.dmg > > /mnt/MFS/FLASH_CC_2014_IN_Install-1 2.dmg: > > inodes with goal changed: 0 > > inodes with goal not changed: 0 > > inodes with permission denied: 1 > > > --- > > > am I missing something really simple here? I'm aware of the new Labels > function but thought that mfssetgoal should still work as it did before? > > > many thanks in advance for any help you can provide. > > > Ben // Southampton, UK. > > > Ben Harker > > Apple / Linux Specialist > > Barton Peveril College > Mob: 07525175882 > Tel: 02380367224 > Ext: 2731 > > > > ------------------------------------------------------------------------ > This message is sent in confidence for the addressee only. It may > contain confidential or sensitive information. The contents are not to > be disclosed to any one other than the addressee. Unauthorised > recipients are requested to preserve this confidentiality and to > advise us of any errors in transmission. Barton Peveril College > reserves the right to monitor emails as defined in current > legislation, and regards this notice to you as notification of such a > possibility. > ------------------------------------------------------------------------ > > > ------------------------------------------------------------------------------ > BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT > Develop your own process in accordance with the BPMN 2 standard > Learn Process modeling best practices with Bonita BPM through live exercises > http://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual- event?utm_ > source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
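A minimal sketch of applying that change (the export line is copied from the message above; the reload step assumes the master's standard reload action and the usual config path):

```sh
# Grant the admin flag so goal/label changes are allowed, then re-read exports.
echo '*  /  rw,alldirs,admin,maproot=0' >> /etc/mfs/mfsexports.cfg
mfsmaster reload
```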
From: Neddy, N. N. <na...@nd...> - 2015-04-08 09:00:54
|
Hi Aleksander, On Samba (3.6.6) server, I switched to different protocols than default but still no luck. client max protocol = LANMAN1 Also use option -o mfscachemode=NO but the result is the same. Best regards, ~Nedd On Wed, Apr 8, 2015 at 2:58 PM, MooseFS Users <us...@mo...> wrote: > Hi. > We have two ideas that you can test in your environment. > > First of all try to force SAMBA server to use only SMB2 or even SMB1 > protocol. > More details about configuration you can find at: > www.samba.org/samba/history/samba-4.1.0.html > > Another idea is to disable caching in mfsmount: > mfsmount -o mfscachemode=NO -H mfsmaster.lan /mount/point > > We are waiting for your feedback. > > Best regards > Aleksander Wieliczko > Technical Support Engineer |
From: MooseFS U. <us...@mo...> - 2015-04-08 07:58:19
|
Hi. We have two ideas that you can test in your environment. First of all, try to force the SAMBA server to use only the SMB2 or even the SMB1 protocol. You can find more details about the configuration at: www.samba.org/samba/history/samba-4.1.0.html Another idea is to disable caching in mfsmount: mfsmount -o mfscachemode=NO -H mfsmaster.lan /mount/point We are waiting for your feedback. Best regards Aleksander Wieliczko Technical Support Engineer |
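A hedged example of the first suggestion. The exact parameter name depends on the Samba version ("max protocol" on 3.6.x, "server max protocol" on 4.x), so verify it before use:

```ini
; /etc/samba/smb.conf (sketch only)
[global]
    max protocol = SMB2
```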
From: Krzysztof K. <krz...@mo...> - 2015-04-07 21:00:47
|
Dear Eduardo, We have tested and confirmed that both parameters (CHUNKS_SOFT_DEL_LIMIT and CHUNKS_HARD_DEL_LIMIT) work as expected in MooseFS 2.0, but by design they work quite differently than in MooseFS 1.6: * in MooseFS 1.6.x the limits define how many deletions per second may be performed by any chunkserver * in MooseFS 2.0.x and above the limits define how many concurrent deletions may be running on a chunkserver The tests we have performed show that even with the parameters set to the lowest values (CHUNKS_SOFT_DEL_LIMIT=1, CHUNKS_HARD_DEL_LIMIT=2) we were able to see almost 17k deletions per minute on a three-node MooseFS 2.0 instance. This translates to ~95 deletions per second per chunkserver in MooseFS 1.6.x terms; probably you don't have that many deletions, which is why you don't see a difference when changing the profiles. MooseFS 2.0.x changed the way it interprets the parameters and the way it handles deletions. At the moment it is not even possible to limit deletions as in MooseFS 1.6.x. Generally speaking, MooseFS 2.0 employs improved algorithms for chunk deletions and can handle many more deletions without significant impact on overall system performance. Do you see any performance problems with deletions on your MooseFS 2.0.x system, or are you just concerned that the behaviour changed after the upgrade? Best Regards, Krzysztof Kielak Director of Operations and Customer Support Mobile: +48 601 476 440 |
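The quoted figure is easy to sanity-check (editorial aside):

```sh
# 17,000 deletions per minute spread over a 3-chunkserver cluster:
echo "scale=1; 17000 / 60 / 3" | bc    # ~94.4 deletions/s per chunkserver
```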