From: web u. <web...@gm...> - 2015-10-16 13:30:03
|
Someone on this list pointed me to LizardFS. Will that client work with the original MooseFS distribution? If not, can we not bring the work done on the LizardFS Windows client into the MooseFS distribution? On Fri, Oct 16, 2015 at 9:04 AM, web user <web...@gm...> wrote: > Would love to be able to read and write to an MFS share from my Windows > laptop. Anyone know of a solution? Or is this something that is being > worked on.... > |
From: web u. <web...@gm...> - 2015-10-16 13:04:35
|
Would love to be able to read and write to an MFS share from my Windows laptop. Anyone know of a solution? Or is this something that is being worked on.... |
From: Aleksander W. <ale...@mo...> - 2015-10-15 19:50:11
|
Who is the owner of your test/ directory? Did you remount the MooseFS client after the mfsexports file modification and mfsmaster reload? Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 15.10.2015 21:04, Angus Yang 杨阳 wrote: > All files are good. > > I set allroot=www:www in the exports configuration file, so all files and folders > are www:www (uid:gid). > > I just want to chmod the folder to 777 so that other users like mysql > can write (it is 755 now). > > > Sent from NetEase Mail Master <http://u.163.com/signature> > > > On 2015-10-16 02:58 , Aleksander Wieliczko > <mailto:ale...@mo...> wrote: > > Hi. > Do you have the same error when you create a file? > If yes - check your firewall. Check if you have open port 9422 on > all chunkservers. > If no - check the /etc/mfs/mfsexports.cfg entries. > > By the way - these ports should be open on: > > mfsmaster: > MATOML_LISTEN_PORT = 9419 > MATOCS_LISTEN_PORT = 9420 > MATOCL_LISTEN_PORT = 9421 > > chunkserver: > CSSERV_LISTEN_PORT = 9422 > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <moosefs.com> > On 15.10.2015 18:02, Angus Yang 杨阳 wrote: >> Hi All, >> >> I am new to MFS, and it is very helpful for me. >> >> I have a question I want to ask. >> >> When I have already mounted MFS and use chmod or >> chown on an MFS folder, like: >> >> chmod -R 777 test, it shows an error: chmod: changing permissions >> of `test/': Operation not permitted. >> >> If I want to change folder permissions, how do I do that? >> >> Thanks very much. >> >> >> ------------------------------------------------------------------------------ >> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
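A minimal sketch of the reload-and-remount sequence asked about above, assuming a Linux client; the mount point and master hostname are placeholders, not taken from the thread:

    # on the master: re-read mfsexports.cfg without restarting the process
    mfsmaster reload

    # on the client: remount so the new export options take effect
    umount /mnt/mfs
    mfsmount /mnt/mfs -H mfsmaster.example.lan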
From: Piotr R. K. <pio...@mo...> - 2015-10-15 19:24:57
|
Hi Michael, > When is MooseFS 3.0 expected to supplant 2.0 and become the "stable" version? It is hard to say. Theoretically, we have a Release Candidate now (3.0.53, will be published on our website today, it is already in “current” repo). But we are still in progress of tests in different environments (including production). We want to release a very good piece of software, well-tested. Because we know, that in our users' environments it is crucial to have a rock-solid and stable data backend. So MFS 3 needs a lot of different tests. You can take a look at file I'm attaching (you can also always find such file, named "NEWS" in sources tarball - https://moosefs.com/download/sources.html <https://moosefs.com/download/sources.html>). Please pay attention, that last two months, or ever longer are mainly fixes. And testing software takes time. We want to release it as soon as possible, but I am not able to say specific date, I'm sure, that now you understand, why. Maybe, but only maybe, it will be by the end of the year. But unfortunately I can't guarantee. Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 13 Oct 2015, at 7:54 am, Michael Tinsay <mic...@ho...> wrote: > > Hi! > > When is MooseFS 3.0 expected to supplant 2.0 and become the "stable" version? > > Best regards. > > > --- mike t. > ------------------------------------------------------------------------------ > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Angus Y. 杨阳 <ang...@vi...> - 2015-10-15 19:21:24
|
All files are good. I set allroot=www:www in the exports configuration file, so all files and folders are www:www (uid:gid). I just want to chmod the folder to 777 so that other users like mysql can write (it is 755 now). Sent from NetEase Mail Master On 2015-10-16 02:58 , Aleksander Wieliczko wrote: Hi. Do you have the same error when you create a file? If yes - check your firewall. Check if you have open port 9422 on all chunkservers. If no - check the /etc/mfs/mfsexports.cfg entries. By the way - these ports should be open on: mfsmaster: MATOML_LISTEN_PORT = 9419 MATOCS_LISTEN_PORT = 9420 MATOCL_LISTEN_PORT = 9421 chunkserver: CSSERV_LISTEN_PORT = 9422 Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com On 15.10.2015 18:02, Angus Yang 杨阳 wrote: Hi All, I am new to MFS, and it is very helpful for me. I have a question I want to ask. When I have already mounted MFS and use chmod or chown on an MFS folder, like: chmod -R 777 test, it shows an error: chmod: changing permissions of `test/': Operation not permitted. If I want to change folder permissions, how do I do that? Thanks very much. ------------------------------------------------------------------------------ _________________________________________ moosefs-users mailing list moo...@li...https://lists.sourceforge.net/lists/listinfo/moosefs-users |
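For reference, the uid/gid mapping described above is normally done with the maproot/mapall options in mfsexports.cfg (the "allroot" spelling above is presumably mapall). A hedged example entry - the network range is made up for illustration:

    # /etc/mfs/mfsexports.cfg  --  format: ADDRESS  DIRECTORY  OPTIONS
    192.168.1.0/24  /  rw,alldirs,mapall=www:www

After editing this file the master has to reload it and clients have to remount, as in the sketch a few messages above.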
From: Aleksander W. <ale...@mo...> - 2015-10-15 18:58:36
|
Hi. Do you have the same error when you create a file? If yes - check your firewall. Check if you have open port 9422 on all chunkservers. If no - check the /etc/mfs/mfsexports.cfg entries. By the way - these ports should be open on: mfsmaster: MATOML_LISTEN_PORT = 9419 MATOCS_LISTEN_PORT = 9420 MATOCL_LISTEN_PORT = 9421 chunkserver: CSSERV_LISTEN_PORT = 9422 Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 15.10.2015 18:02, Angus Yang 杨阳 wrote: > Hi All, > > I am new to MFS, and it is very helpful for me. > > I have a question I want to ask. > > When I have already mounted MFS and use chmod or chown on an MFS folder, like: > > chmod -R 777 test, it shows an error: chmod: changing permissions of > `test/': Operation not permitted. > > If I want to change folder permissions, how do I do that? > > Thanks very much. > > > ------------------------------------------------------------------------------ > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
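A quick way to check that the ports listed above are actually reachable (a sketch; the hostnames are placeholders and nc must be installed):

    # client -> master (MATOCL)
    nc -vz mfsmaster.example.lan 9421
    # chunkserver -> master (MATOCS), run from a chunkserver
    nc -vz mfsmaster.example.lan 9420
    # client and chunkservers -> chunkserver data port (CSSERV)
    nc -vz chunkserver1.example.lan 9422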
From: Angus Y. 杨阳 <ang...@vi...> - 2015-10-15 16:02:38
|
Hi All, I am new to MFS, and it is very helpful for me. I have a question I want to ask. When I have already mounted MFS and use chmod or chown on an MFS folder, like: chmod -R 777 test, it shows an error: chmod: changing permissions of `test/': Operation not permitted. If I want to change folder permissions, how do I do that? Thanks very much. |
From: Wolfgang <moo...@wo...> - 2015-10-14 13:32:40
|
Hi! On 2015-10-14 14:13, Boli wrote: > That's very slow. Was it FTP/SSL or unencrypted FTP? It was unencrypted FTP (good old plain FTP). As written in my initial mail to the list - write performance directly on the Banana Pi to the SATA disc via dd is about 50MB/s. Greetings Wolfgang > > On 14/10/2015 11:50, Wolfgang wrote: >> >>> >>> Can you set up some ftp or SAMBA server on chunkserver and tell as >>> what transfer speed you can achieve? >> I installed vsftp on one chunkserver - transfer 2GB file via FTP - to >> external PC >> Speed for Up-/ and Download was about 37MB/s >> >> >> Thanks for your effort and greetings >> Wolfgang >> > > > > ------------------------------------------------------------------------------ > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Boli <bo...@le...> - 2015-10-14 12:35:29
|
That's very slow. Was it FTP/SSL or unencrypted FTP? On 14/10/2015 11:50, Wolfgang wrote: > >> >> Can you set up some ftp or SAMBA server on chunkserver and tell as >> what transfer speed you can achieve? > I installed vsftp on one chunkserver - transfer 2GB file via FTP - to > external PC > Speed for Up-/ and Download was about 37MB/s > > > Thanks for your effort and greetings > Wolfgang > |
From: Wolfgang <moo...@wo...> - 2015-10-14 08:50:43
|
Hi! On 2015-10-12 08:29, Aleksander Wieliczko wrote: > Hi! > Can you tell as some details about your configuration like: > > - MooseFS version? Master (Ubuntu 14.04) 3.0.51 Chunkservers (from arm rp2 Repository) 3.0.13 > - Ping time from chunkserver to mfsmaster? 0.252ms > - Chunkserver(bananaPI) NIC speed? 1Gbit > > Can you set up some ftp or SAMBA server on chunkserver and tell as > what transfer speed you can achieve? I installed vsftp on one chunkserver - transfer 2GB file via FTP - to external PC Speed for Up-/ and Download was about 37MB/s Thanks for your effort and greetings Wolfgang > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <moosefs.com> > > On 10/10/2015 11:40 AM, Wolfgang wrote: >> Hi List! >> >> I'm currently experimenting with 8x Banabapi [1] (similar to Raspberry >> Pi Computers) as my Chunkservers >> and a virtual Ubuntu as Master. >> Advantage of Bananapi to the better known Raspberry Pi is 1Gbit NIC and >> the direct SATA support for 2.5'' HDD's (Sata and Power plug) [2] >> I'm using it in my private Gbit LAN at home. >> Performance for sequental data stream (dd if=/dev/zero ...) and goal=1 >> is good - but performance for goal = 2 is just abt. 20MB/s and with >> small files coming from attic backup [3] is poor ~ 2MB/s) >> Write performance (dd) direct on one of my nodes to the SATA disc is >> abt. 50MB/s >> >> Is anyone in the list also fiddeling around with this type of setup? >> Is there any configuration which I can do to get more performance? (more >> nodes?) >> >> Thank you. >> >> Greetings >> Wolfgang >> >> >> [1] ... >> http://www.pollin.de/shop/dt/MTA4NzkyOTk-/Bausaetze_Module/Entwicklerboards/Banana_Pi_Dual_Core_1_GB_DDR3_SATA_G_LAN.html >> [2] ... >> http://www.pollin.de/shop/dt/NzI3NzkyOTk-/Bausaetze_Module/Entwicklerboards/Banana_Pi_SATA_Kabel.html >> [3] ...https://attic-backup.org/ >> >> ------------------------------------------------------------------------------ >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > ------------------------------------------------------------------------------ > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michael T. <mic...@ho...> - 2015-10-13 05:54:32
|
Hi! When is MooseFS 3.0 expected to supplant 2.0 and become the "stable" version? Best regards. --- mike t. |
From: Aleksander W. <ale...@mo...> - 2015-10-12 06:29:37
|
Hi! Can you tell as some details about your configuration like: - MooseFS version? - Ping time from chunkserver to mfsmaster? - Chunkserver(bananaPI) NIC speed? Can you set up some ftp or SAMBA server on chunkserver and tell as what transfer speed you can achieve? Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 10/10/2015 11:40 AM, Wolfgang wrote: > Hi List! > > I'm currently experimenting with 8x Banabapi [1] (similar to Raspberry > Pi Computers) as my Chunkservers > and a virtual Ubuntu as Master. > Advantage of Bananapi to the better known Raspberry Pi is 1Gbit NIC and > the direct SATA support for 2.5'' HDD's (Sata and Power plug) [2] > I'm using it in my private Gbit LAN at home. > Performance for sequental data stream (dd if=/dev/zero ...) and goal=1 > is good - but performance for goal = 2 is just abt. 20MB/s and with > small files coming from attic backup [3] is poor ~ 2MB/s) > Write performance (dd) direct on one of my nodes to the SATA disc is > abt. 50MB/s > > Is anyone in the list also fiddeling around with this type of setup? > Is there any configuration which I can do to get more performance? (more > nodes?) > > Thank you. > > Greetings > Wolfgang > > > [1] ... > http://www.pollin.de/shop/dt/MTA4NzkyOTk-/Bausaetze_Module/Entwicklerboards/Banana_Pi_Dual_Core_1_GB_DDR3_SATA_G_LAN.html > [2] ... > http://www.pollin.de/shop/dt/NzI3NzkyOTk-/Bausaetze_Module/Entwicklerboards/Banana_Pi_SATA_Kabel.html > [3] ... https://attic-backup.org/ > > ------------------------------------------------------------------------------ > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Casper L. <ca...@la...> - 2015-10-10 17:58:57
|
Hi Wolfgang, I have a BananaPi - OrangePi pair with a custom replication script using rsync. The BananaPi serving an ownCloud instance (I love ownCloud almost as much as I love MooseFS) and the OrangePi is only used as 'slave'. I also use MooseFS professionally, which is why I'm listening in on this mailinglist. I am not surprised by your throughput. You should test a netcat read from disk to another computer. My tests: root@bananapi:/srv# dd if=/srv/swapfile bs=1M > /dev/null ^C1838+0 records in 1837+0 records out 1926234112 bytes (1.9 GB) copied, 13.3472 s, 144 MB/s My harddisk gets a base speed of 144MB/s, on the swapfile (which is not fragmented) on a Toshiba MQ01ABB200 disk. I started netcat listening on port 2000 on my linux desktop machine: nc -kl -p 2000 > /dev/null Then read from disk and write to that machine: root@bananapi:/srv# dd if=/srv/swapfile bs=1M | nc <linux-desktop-ip> 2000 ^C192+0 records in 191+0 records out 200278016 bytes (200 MB) copied, 7.7876 s, 25.7 MB/s So I get only 25.7 MB/s. Add a bit of overhead for moosefs chunkservers. I think you are lucky to 'pay' only 20% speed reduction to get the redundancy and flexibility moosefs brings to the table ;-) Reading from /dev/zero only gets me a little more throughput: root@bananapi:/srv# dd if=/dev/zero bs=1M | nc <linux-desktop-ip> 2000 ^C303+0 records in 302+0 records out 316669952 bytes (317 MB) copied, 10.3139 s, 30.7 MB/s Because I also own an OrangePi, I tested that one too and was a little surprised it performed worse: root@orangepi:~# dd if=/dev/zero bs=1M | nc <linux-desktop-ip> 2000 ^C197+0 records in 196+0 records out 205520896 bytes (206 MB) copied, 11.1619 s, 18.4 MB/s In these tests, all machines are connected to a small gigabit switch. Nothing fancy, but definitely not the cause for these limits. I have to add that the Pi's where still doing their normal stuff, I haven't stopped any running services on them, but load average was 0.04, 0.05, 0.05 on the OrangePi and even lower on the BananaPi. According to this benchmark http://www.htpcguides.com/raspberry-pi-vs-pi-2-vs-banana-pi-pro-benchmarks/ The BananaPi Pro should have radically improved network speed, I would like to see a dd if=/dev/zero bs=1M | nc <linux-desktop-ip> 2000 If anyone on this list happens to own one... Greetings, Casper |
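A hedged alternative to the dd-over-netcat tests above for measuring raw TCP throughput between two boards is iperf3; the address below is a placeholder:

    # on the receiving machine
    iperf3 -s
    # on the Banana Pi / Orange Pi
    iperf3 -c 192.168.1.10 -t 30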
From: Wolfgang <moo...@wo...> - 2015-10-10 09:57:14
|
Hi List! I'm currently experimenting with 8x Banana Pi [1] (similar to Raspberry Pi computers) as my chunkservers and a virtual Ubuntu machine as master. The advantage of the Banana Pi over the better-known Raspberry Pi is the 1Gbit NIC and the direct SATA support for 2.5'' HDDs (SATA and power plug) [2]. I'm using it in my private Gbit LAN at home. Performance for a sequential data stream (dd if=/dev/zero ...) with goal=1 is good - but performance for goal=2 is just about 20MB/s, and with small files coming from attic backup [3] it is poor (~2MB/s). Write performance (dd) directly on one of my nodes to the SATA disc is about 50MB/s. Is anyone on the list also fiddling around with this type of setup? Is there any configuration I can change to get more performance? (More nodes?) Thank you. Greetings Wolfgang [1] ... http://www.pollin.de/shop/dt/MTA4NzkyOTk-/Bausaetze_Module/Entwicklerboards/Banana_Pi_Dual_Core_1_GB_DDR3_SATA_G_LAN.html [2] ... http://www.pollin.de/shop/dt/NzI3NzkyOTk-/Bausaetze_Module/Entwicklerboards/Banana_Pi_SATA_Kabel.html [3] ... https://attic-backup.org/ |
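A sketch of the comparison described above - write throughput at goal 1 versus goal 2 on a MooseFS mount; the mount point and file size are illustrative:

    mkdir /mnt/mfs/bench
    mfssetgoal 1 /mnt/mfs/bench     # new files in this directory inherit goal 1
    dd if=/dev/zero of=/mnt/mfs/bench/goal1.bin bs=1M count=1024 conv=fdatasync

    mfssetgoal 2 /mnt/mfs/bench
    dd if=/dev/zero of=/mnt/mfs/bench/goal2.bin bs=1M count=1024 conv=fdatasync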
From: 叶俊[技术中心] <jo...@vi...> - 2015-10-08 15:06:31
|
Dear Piotr 1. Thanks for your reply . regarding the log meesage , it have something to do with replication process . We do have MFS upgrade experience for another MFS Cluster A(from 1.6.x to 2.0.x), we will also deploy upgrade to this issue Cluster B. 2. But we have over 100TB data in this issued MFS cluster , restart master would take over 2 hours; regarding to log err , it seems replication process cause that problem . is it able to dig further info , like master can't connect chunks' port 9422, port 9422 timeout ? ________________________________ 发件人: Piotr Robert Konopelko <pio...@mo...> 发送时间: 2015年10月8日 20:37 收件人: 叶俊[技术中心] 抄送: moo...@li... 主题: Re: [MooseFS-Users] reply: reply: mooseFS_issue Hi John, the solution for problems you encounter is to upgrade your MooseFS instance to version 2.0. In MooseFS 2.0 a lot of problems and issues have been fixed. Also - in 2.0 a lot of algorithms (including replication algorithms) have been improved. MooseFS 2.0 is released for more than a year already and it is really stable version. Frankly, MFS 1.6 is no longer supported and we strongly recommend to do the upgrade. Upgrade is a simple process - mainly it is 1. update package version 2. restart the service. But some crucial aspects (like configuration files paths change, order of upgrade) are described in manual. Please take a look at MooseFS Upgrade Manual and MooseFS Step by Step installation Guide. You can find these documents here: https://moosefs.com/documentation/moosefs-2-0.html Please remember, that we support only upgrade from 1.6.27-5 MooseFS version, so if any of your components (especially Master, excepting mounts) is running in different (older) MFS version, you first of all need to upgrade them to 1.6.27-5, an then to the newest MFS 2.0 (2.0.77 at the time of writing this message). Before starting the upgrade process please remember to do a backup of metadata.mfs file. In case of any further questions or problems, you can contact me directly. Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com<https://moosefs.com/> On 08 Oct 2015, at 10:57 am, 叶俊[技术中心] <jo...@vi...<mailto:jo...@vi...>> wrote: 1. With more information , MFS GUI info is as below : 2. We have 3 MFS cluster , Cluster A rebalance number is 0 , somehow Cluster B with replication disconnect trouble , replication number is 53432; <image001.png> 发件人: 叶俊[技术中心] 发送时间: 2015年10月8日 15:17 收件人: 'moo...@li...<mailto:moo...@li...>' 主题: reply: mooseFS_issue 1. On addtion, the File system is ext4; 2. The old cluster with 8 old chunk server runs well, didn’t meet any issue before; John.ye 发件人: 叶俊[技术中心] 发送时间: 2015年10月8日 15:13 收件人: 'moo...@li...<mailto:moo...@li...>' 主题: mooseFS_issue Dear support team, This is John from vip.com<http://vip.com/>; system administrator team ; our mfs system is : OS: CentOS 6.3 Master: 1 Metalog:1 Chunk:8 (16TB / server) version: 1.6.27 Expend an other chunk: 8 (16TB / Server) version: 1.6.27 Total chunk: 8+8=16 1. Why we expand: since old chunkx8 is over 90% disk usage 2. 
What issue we meet: After expand another 8 chunk server , old chunk try to replicate data to new chunk , but it fail due to some reason, /var/log/message log of master : Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000AF7D9DC replication status: Disconnected Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.171:9422) chunk: 000000000BB7D9DC replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.180:9422) chunk: 000000000B68703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.173:9422) chunk: 000000000B48703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000BB8703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.172:9422) chunk: 000000000B88703F replication status: Disconnected 3. the error log beyond come out once we setup another 8 chunk server; error log detect every day; Can pls help to give some advice to solve this replication failure issue? a. Why replication fail b. What we can do 4. master config is as below: cat /usr/local/mfs/etc/mfs/mfsmaster.cfg # WORKING_USER = mfs # WORKING_GROUP = mfs # SYSLOG_IDENT = mfsmaster # LOCK_MEMORY = 0 # NICE_LEVEL = -19 # EXPORTS_FILENAME = /usr/local/mfs/etc/mfs/mfsexports.cfg # TOPOLOGY_FILENAME = /usr/local/mfs/etc/mfs/mfstopology.cfg # DATA_PATH = /usr/local/mfs/lib/mfs # BACK_LOGS = 50 # BACK_META_KEEP_PREVIOUS = 1 # REPLICATIONS_DELAY_INIT = 300 # REPLICATIONS_DELAY_DISCONNECT = 3600 # MATOML_LISTEN_HOST = * # MATOML_LISTEN_PORT = 9419 # MATOML_LOG_PRESERVE_SECONDS = 600 # MATOCS_LISTEN_HOST = * # MATOCS_LISTEN_PORT = 9420 # MATOCL_LISTEN_HOST = * # MATOCL_LISTEN_PORT = 9421 # CHUNKS_LOOP_MAX_CPS = 100000 # CHUNKS_LOOP_MIN_TIME = 300 # CHUNKS_SOFT_DEL_LIMIT = 10 # CHUNKS_HARD_DEL_LIMIT = 25 # CHUNKS_WRITE_REP_LIMIT = 2 # CHUNKS_READ_REP_LIMIT = 10 # ACCEPTABLE_DIFFERENCE = 0.1 # SESSION_SUSTAIN_TIME = 86400 # REJECT_OLD_CLIENTS = 0 # deprecated: # CHUNKS_DEL_LIMIT - use CHUNKS_SOFT_DEL_LIMIT instead # LOCK_FILE - lock system has been changed, and this option is used only to search for old lockfile Best regards John.ye System administraotr email: jo...@vi...<mailto:jo...@vi...> VIP.com<http://vip.com/> | 唯品会 本电子邮件可能为保密文件。如果阁下非电子邮件所指定之收件人,谨请立即通知本人。敬请阁下不要使用、保存、复印、打印、散布本电子邮件及其内容,或将其用于其他任何目的或向任何人披露。谢谢您的合作! This communication is intended only for the addressee(s) and may contain information that is privileged and confidential. You are hereby notified that, if you are not an intended recipient listed above, or an authorized employee or agent of an addressee of this communication responsible for delivering e-mail messages to an intended recipient, any dissemination, distribution or reproduction of this communication (including any attachments hereto) is strictly prohibited. If you have received this communication in error, please notify us immediately by a reply e-mail addressed to the sender and permanently delete the original e-mail communication and any attachments from all storage devices without making or otherwise retaining a copy. 
------------------------------------------------------------------------------ _________________________________________ moosefs-users mailing list moo...@li...<mailto:moo...@li...> https://lists.sourceforge.net/lists/listinfo/moosefs-users 本电子邮件可能为保密文件。如果阁下非电子邮件所指定之收件人,谨请立即通知本人。敬请阁下不要使用、保存、复印、打印、散布本电子邮件及其内容,或将其用于其他任何目的或向任何人披露。谢谢您的合作! This communication is intended only for the addressee(s) and may contain information that is privileged and confidential. You are hereby notified that, if you are not an intended recipient listed above, or an authorized employee or agent of an addressee of this communication responsible for delivering e-mail messages to an intended recipient, any dissemination, distribution or reproduction of this communication (including any attachments hereto) is strictly prohibited. If you have received this communication in error, please notify us immediately by a reply e-mail addressed to the sender and permanently delete the original e-mail communication and any attachments from all storage devices without making or otherwise retaining a copy. |
From: Piotr R. K. <pio...@mo...> - 2015-10-08 12:38:07
|
Hi John, the solution for problems you encounter is to upgrade your MooseFS instance to version 2.0. In MooseFS 2.0 a lot of problems and issues have been fixed. Also - in 2.0 a lot of algorithms (including replication algorithms) have been improved. MooseFS 2.0 is released for more than a year already and it is really stable version. Frankly, MFS 1.6 is no longer supported and we strongly recommend to do the upgrade. Upgrade is a simple process - mainly it is 1. update package version 2. restart the service. But some crucial aspects (like configuration files paths change, order of upgrade) are described in manual. Please take a look at MooseFS Upgrade Manual and MooseFS Step by Step installation Guide. You can find these documents here: https://moosefs.com/documentation/moosefs-2-0.html <https://moosefs.com/documentation/moosefs-2-0.html> Please remember, that we support only upgrade from 1.6.27-5 MooseFS version, so if any of your components (especially Master, excepting mounts) is running in different (older) MFS version, you first of all need to upgrade them to 1.6.27-5, an then to the newest MFS 2.0 (2.0.77 at the time of writing this message). Before starting the upgrade process please remember to do a backup of metadata.mfs file. In case of any further questions or problems, you can contact me directly. Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 08 Oct 2015, at 10:57 am, 叶俊[技术中心] <jo...@vi...> wrote: > > 1. With more information , MFS GUI info is as below : > 2. We have 3 MFS cluster , Cluster A rebalance number is 0 , somehow Cluster B with replication disconnect trouble , replication number is 53432; > > <image001.png> > > > 发件人: 叶俊[技术中心] > 发送时间: 2015年10月8日 15:17 > 收件人: 'moo...@li... <mailto:moo...@li...>' > 主题: reply: mooseFS_issue > > 1. On addtion, the File system is ext4; > 2. The old cluster with 8 old chunk server runs well, didn’t meet any issue before; > > John.ye > > 发件人: 叶俊[技术中心] > 发送时间: 2015年10月8日 15:13 > 收件人: 'moo...@li... <mailto:moo...@li...>' > 主题: mooseFS_issue > > Dear support team, > > > This is John from vip.com <http://vip.com/>; system administrator team ; our mfs system is : > OS: CentOS 6.3 > Master: 1 > Metalog:1 > Chunk:8 (16TB / server) version: 1.6.27 > Expend an other chunk: 8 (16TB / Server) version: 1.6.27 > Total chunk: 8+8=16 > > > 1. Why we expand: > since old chunkx8 is over 90% disk usage > > 2. What issue we meet: > After expand another 8 chunk server , old chunk try to replicate data to new chunk , but it fail due to some reason, > /var/log/message log of master : > Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000AF7D9DC replication status: Disconnected > Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.171:9422) chunk: 000000000BB7D9DC replication status: Disconnected > Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.180:9422) chunk: 000000000B68703F replication status: Disconnected > Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.173:9422) chunk: 000000000B48703F replication status: Disconnected > Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000BB8703F replication status: Disconnected > Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.172:9422) chunk: 000000000B88703F replication status: Disconnected > > 3. 
the error log beyond come out once we setup another 8 chunk server; error log detect every day; > > > Can pls help to give some advice to solve this replication failure issue? > a. Why replication fail > b. What we can do > > > > 4. master config is as below: > > > cat /usr/local/mfs/etc/mfs/mfsmaster.cfg > # WORKING_USER = mfs > # WORKING_GROUP = mfs > # SYSLOG_IDENT = mfsmaster > # LOCK_MEMORY = 0 > # NICE_LEVEL = -19 > > # EXPORTS_FILENAME = /usr/local/mfs/etc/mfs/mfsexports.cfg > > # TOPOLOGY_FILENAME = /usr/local/mfs/etc/mfs/mfstopology.cfg > > # DATA_PATH = /usr/local/mfs/lib/mfs > > # BACK_LOGS = 50 > # BACK_META_KEEP_PREVIOUS = 1 > > # REPLICATIONS_DELAY_INIT = 300 > # REPLICATIONS_DELAY_DISCONNECT = 3600 > > # MATOML_LISTEN_HOST = * > # MATOML_LISTEN_PORT = 9419 > # MATOML_LOG_PRESERVE_SECONDS = 600 > > # MATOCS_LISTEN_HOST = * > # MATOCS_LISTEN_PORT = 9420 > > # MATOCL_LISTEN_HOST = * > # MATOCL_LISTEN_PORT = 9421 > > # CHUNKS_LOOP_MAX_CPS = 100000 > # CHUNKS_LOOP_MIN_TIME = 300 > > # CHUNKS_SOFT_DEL_LIMIT = 10 > # CHUNKS_HARD_DEL_LIMIT = 25 > # CHUNKS_WRITE_REP_LIMIT = 2 > # CHUNKS_READ_REP_LIMIT = 10 > # ACCEPTABLE_DIFFERENCE = 0.1 > > # SESSION_SUSTAIN_TIME = 86400 > # REJECT_OLD_CLIENTS = 0 > > # deprecated: > # CHUNKS_DEL_LIMIT - use CHUNKS_SOFT_DEL_LIMIT instead > # LOCK_FILE - lock system has been changed, and this option is used only to search for old lockfile > > > > Best regards > > > > John.ye > System administraotr > email: jo...@vi... <mailto:jo...@vi...> > VIP.com <http://vip.com/> | 唯品会 > > 本电子邮件可能为保密文件。如果阁下非电子邮件所指定之收件人,谨请立即通知本人。敬请阁下不要使用、保存、复印、打印、散布本电子邮件及其内容,或将其用于其他任何目的或向任何人披露。谢谢您的合作! This communication is intended only for the addressee(s) and may contain information that is privileged and confidential. You are hereby notified that, if you are not an intended recipient listed above, or an authorized employee or agent of an addressee of this communication responsible for delivering e-mail messages to an intended recipient, any dissemination, distribution or reproduction of this communication (including any attachments hereto) is strictly prohibited. If you have received this communication in error, please notify us immediately by a reply e-mail addressed to the sender and permanently delete the original e-mail communication and any attachments from all storage devices without making or otherwise retaining a copy. ------------------------------------------------------------------------------ > _________________________________________ > moosefs-users mailing list > moo...@li... <mailto:moo...@li...> > https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> |
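A hedged sketch of the metadata backup and package upgrade Piotr describes above, assuming Debian/Ubuntu-style packages from the MooseFS repository and the packaged default DATA_PATH of /var/lib/mfs (a source install like the one quoted in this thread keeps metadata in its own DATA_PATH, e.g. /usr/local/mfs/lib/mfs). One possible order is shown; the exact order of components is the one given in the Upgrade Manual referenced above:

    # on the master: back up metadata first
    cp -a /var/lib/mfs/metadata.mfs /root/metadata.mfs.backup

    # then upgrade and restart component by component
    apt-get update
    apt-get install moosefs-master         # on the master
    apt-get install moosefs-chunkserver    # on each chunkserver
    apt-get install moosefs-client         # on each client machine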
From: 叶俊[技术中心] <jo...@vi...> - 2015-10-08 08:57:58
|
1. With more information , MFS GUI info is as below : 2. We have 3 MFS cluster , Cluster A rebalance number is 0 , somehow Cluster B with replication disconnect trouble , replication number is 53432; [cid:image001.png@01D101EA.7A269920] 发件人: 叶俊[技术中心] 发送时间: 2015年10月8日 15:17 收件人: 'moo...@li...' 主题: reply: mooseFS_issue 1. On addtion, the File system is ext4; 2. The old cluster with 8 old chunk server runs well, didn’t meet any issue before; John.ye 发件人: 叶俊[技术中心] 发送时间: 2015年10月8日 15:13 收件人: 'moo...@li...' 主题: mooseFS_issue Dear support team, This is John from vip.com; system administrator team ; our mfs system is : OS: CentOS 6.3 Master: 1 Metalog:1 Chunk:8 (16TB / server) version: 1.6.27 Expend an other chunk: 8 (16TB / Server) version: 1.6.27 Total chunk: 8+8=16 1. Why we expand: since old chunkx8 is over 90% disk usage 2. What issue we meet: After expand another 8 chunk server , old chunk try to replicate data to new chunk , but it fail due to some reason, /var/log/message log of master : Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000AF7D9DC replication status: Disconnected Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.171:9422) chunk: 000000000BB7D9DC replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.180:9422) chunk: 000000000B68703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.173:9422) chunk: 000000000B48703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000BB8703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.172:9422) chunk: 000000000B88703F replication status: Disconnected 3. the error log beyond come out once we setup another 8 chunk server; error log detect every day; Can pls help to give some advice to solve this replication failure issue? a. Why replication fail b. What we can do 4. master config is as below: cat /usr/local/mfs/etc/mfs/mfsmaster.cfg # WORKING_USER = mfs # WORKING_GROUP = mfs # SYSLOG_IDENT = mfsmaster # LOCK_MEMORY = 0 # NICE_LEVEL = -19 # EXPORTS_FILENAME = /usr/local/mfs/etc/mfs/mfsexports.cfg # TOPOLOGY_FILENAME = /usr/local/mfs/etc/mfs/mfstopology.cfg # DATA_PATH = /usr/local/mfs/lib/mfs # BACK_LOGS = 50 # BACK_META_KEEP_PREVIOUS = 1 # REPLICATIONS_DELAY_INIT = 300 # REPLICATIONS_DELAY_DISCONNECT = 3600 # MATOML_LISTEN_HOST = * # MATOML_LISTEN_PORT = 9419 # MATOML_LOG_PRESERVE_SECONDS = 600 # MATOCS_LISTEN_HOST = * # MATOCS_LISTEN_PORT = 9420 # MATOCL_LISTEN_HOST = * # MATOCL_LISTEN_PORT = 9421 # CHUNKS_LOOP_MAX_CPS = 100000 # CHUNKS_LOOP_MIN_TIME = 300 # CHUNKS_SOFT_DEL_LIMIT = 10 # CHUNKS_HARD_DEL_LIMIT = 25 # CHUNKS_WRITE_REP_LIMIT = 2 # CHUNKS_READ_REP_LIMIT = 10 # ACCEPTABLE_DIFFERENCE = 0.1 # SESSION_SUSTAIN_TIME = 86400 # REJECT_OLD_CLIENTS = 0 # deprecated: # CHUNKS_DEL_LIMIT - use CHUNKS_SOFT_DEL_LIMIT instead # LOCK_FILE - lock system has been changed, and this option is used only to search for old lockfile Best regards John.ye System administraotr email: jo...@vi...<mailto:jo...@vi...> VIP.com | 唯品会 本电子邮件可能为保密文件。如果阁下非电子邮件所指定之收件人,谨请立即通知本人。敬请阁下不要使用、保存、复印、打印、散布本电子邮件及其内容,或将其用于其他任何目的或向任何人披露。谢谢您的合作! This communication is intended only for the addressee(s) and may contain information that is privileged and confidential. 
You are hereby notified that, if you are not an intended recipient listed above, or an authorized employee or agent of an addressee of this communication responsible for delivering e-mail messages to an intended recipient, any dissemination, distribution or reproduction of this communication (including any attachments hereto) is strictly prohibited. If you have received this communication in error, please notify us immediately by a reply e-mail addressed to the sender and permanently delete the original e-mail communication and any attachments from all storage devices without making or otherwise retaining a copy. |
From: 叶俊[技术中心] <jo...@vi...> - 2015-10-08 07:27:40
|
1. On addtion, the File system is ext4; 2. The old cluster with 8 old chunk server runs well, didn’t meet any issue before; John.ye 发件人: 叶俊[技术中心] 发送时间: 2015年10月8日 15:13 收件人: 'moo...@li...' 主题: mooseFS_issue Dear support team, This is John from vip.com; system administrator team ; our mfs system is : OS: CentOS 6.3 Master: 1 Metalog:1 Chunk:8 (16TB / server) version: 1.6.27 Expend an other chunk: 8 (16TB / Server) version: 1.6.27 Total chunk: 8+8=16 1. Why we expand: since old chunkx8 is over 90% disk usage 2. What issue we meet: After expand another 8 chunk server , old chunk try to replicate data to new chunk , but it fail due to some reason, /var/log/message log of master : Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000AF7D9DC replication status: Disconnected Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.171:9422) chunk: 000000000BB7D9DC replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.180:9422) chunk: 000000000B68703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.173:9422) chunk: 000000000B48703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000BB8703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.172:9422) chunk: 000000000B88703F replication status: Disconnected 3. the error log beyond come out once we setup another 8 chunk server; error log detect every day; Can pls help to give some advice to solve this replication failure issue? a. Why replication fail b. What we can do 4. master config is as below: cat /usr/local/mfs/etc/mfs/mfsmaster.cfg # WORKING_USER = mfs # WORKING_GROUP = mfs # SYSLOG_IDENT = mfsmaster # LOCK_MEMORY = 0 # NICE_LEVEL = -19 # EXPORTS_FILENAME = /usr/local/mfs/etc/mfs/mfsexports.cfg # TOPOLOGY_FILENAME = /usr/local/mfs/etc/mfs/mfstopology.cfg # DATA_PATH = /usr/local/mfs/lib/mfs # BACK_LOGS = 50 # BACK_META_KEEP_PREVIOUS = 1 # REPLICATIONS_DELAY_INIT = 300 # REPLICATIONS_DELAY_DISCONNECT = 3600 # MATOML_LISTEN_HOST = * # MATOML_LISTEN_PORT = 9419 # MATOML_LOG_PRESERVE_SECONDS = 600 # MATOCS_LISTEN_HOST = * # MATOCS_LISTEN_PORT = 9420 # MATOCL_LISTEN_HOST = * # MATOCL_LISTEN_PORT = 9421 # CHUNKS_LOOP_MAX_CPS = 100000 # CHUNKS_LOOP_MIN_TIME = 300 # CHUNKS_SOFT_DEL_LIMIT = 10 # CHUNKS_HARD_DEL_LIMIT = 25 # CHUNKS_WRITE_REP_LIMIT = 2 # CHUNKS_READ_REP_LIMIT = 10 # ACCEPTABLE_DIFFERENCE = 0.1 # SESSION_SUSTAIN_TIME = 86400 # REJECT_OLD_CLIENTS = 0 # deprecated: # CHUNKS_DEL_LIMIT - use CHUNKS_SOFT_DEL_LIMIT instead # LOCK_FILE - lock system has been changed, and this option is used only to search for old lockfile Best regards John.ye System administraotr email: jo...@vi...<mailto:jo...@vi...> VIP.com | 唯品会 本电子邮件可能为保密文件。如果阁下非电子邮件所指定之收件人,谨请立即通知本人。敬请阁下不要使用、保存、复印、打印、散布本电子邮件及其内容,或将其用于其他任何目的或向任何人披露。谢谢您的合作! This communication is intended only for the addressee(s) and may contain information that is privileged and confidential. You are hereby notified that, if you are not an intended recipient listed above, or an authorized employee or agent of an addressee of this communication responsible for delivering e-mail messages to an intended recipient, any dissemination, distribution or reproduction of this communication (including any attachments hereto) is strictly prohibited. 
If you have received this communication in error, please notify us immediately by a reply e-mail addressed to the sender and permanently delete the original e-mail communication and any attachments from all storage devices without making or otherwise retaining a copy. |
From: 叶俊[技术中心] <jo...@vi...> - 2015-10-08 07:25:28
|
Dear support team, This is John from the vip.com system administrator team; our MFS system is: OS: CentOS 6.3 Master: 1 Metalogger: 1 Chunkservers: 8 (16TB / server) version: 1.6.27 Expanded by another 8 chunkservers (16TB / server) version: 1.6.27 Total chunkservers: 8+8=16 1. Why we expanded: the old 8 chunkservers are over 90% disk usage 2. What issue we meet: After expanding by another 8 chunkservers, the old chunkservers try to replicate data to the new ones, but it fails for some reason. /var/log/message log of the master: Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000AF7D9DC replication status: Disconnected Oct 8 14:29:35 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.171:9422) chunk: 000000000BB7D9DC replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.180:9422) chunk: 000000000B68703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.173:9422) chunk: 000000000B48703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.175:9422) chunk: 000000000BB8703F replication status: Disconnected Oct 8 14:29:36 GD6-MFS-MASTER-FLASHCACHE-001 mfsmaster[2168]: (10.201.70.172:9422) chunk: 000000000B88703F replication status: Disconnected 3. The error log above started appearing once we set up the other 8 chunkservers; the errors are detected every day. Can you please give some advice to solve this replication failure issue? a. Why does replication fail? b. What can we do? 4. The master config is as below: cat /usr/local/mfs/etc/mfs/mfsmaster.cfg # WORKING_USER = mfs # WORKING_GROUP = mfs # SYSLOG_IDENT = mfsmaster # LOCK_MEMORY = 0 # NICE_LEVEL = -19 # EXPORTS_FILENAME = /usr/local/mfs/etc/mfs/mfsexports.cfg # TOPOLOGY_FILENAME = /usr/local/mfs/etc/mfs/mfstopology.cfg # DATA_PATH = /usr/local/mfs/lib/mfs # BACK_LOGS = 50 # BACK_META_KEEP_PREVIOUS = 1 # REPLICATIONS_DELAY_INIT = 300 # REPLICATIONS_DELAY_DISCONNECT = 3600 # MATOML_LISTEN_HOST = * # MATOML_LISTEN_PORT = 9419 # MATOML_LOG_PRESERVE_SECONDS = 600 # MATOCS_LISTEN_HOST = * # MATOCS_LISTEN_PORT = 9420 # MATOCL_LISTEN_HOST = * # MATOCL_LISTEN_PORT = 9421 # CHUNKS_LOOP_MAX_CPS = 100000 # CHUNKS_LOOP_MIN_TIME = 300 # CHUNKS_SOFT_DEL_LIMIT = 10 # CHUNKS_HARD_DEL_LIMIT = 25 # CHUNKS_WRITE_REP_LIMIT = 2 # CHUNKS_READ_REP_LIMIT = 10 # ACCEPTABLE_DIFFERENCE = 0.1 # SESSION_SUSTAIN_TIME = 86400 # REJECT_OLD_CLIENTS = 0 # deprecated: # CHUNKS_DEL_LIMIT - use CHUNKS_SOFT_DEL_LIMIT instead # LOCK_FILE - lock system has been changed, and this option is used only to search for old lockfile Best regards John.ye System administrator email: jo...@vi...<mailto:jo...@vi...> VIP.com | 唯品会 This communication is intended only for the addressee(s) and may contain information that is privileged and confidential. You are hereby notified that, if you are not an intended recipient listed above, or an authorized employee or agent of an addressee of this communication responsible for delivering e-mail messages to an intended recipient, any dissemination, distribution or reproduction of this communication (including any attachments hereto) is strictly prohibited. If you have received this communication in error, please notify us immediately by a reply e-mail addressed to the sender and permanently delete the original e-mail communication and any attachments from all storage devices without making or otherwise retaining a copy. |
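For reference, the replication throttles in the mfsmaster.cfg quoted above are the commented-out CHUNKS_*_REP_LIMIT and REPLICATIONS_DELAY_* options. This is not a diagnosis of the "Disconnected" errors, just a hedged example of where those knobs live; the values below are purely illustrative and the master must reload its configuration afterwards:

    # /usr/local/mfs/etc/mfs/mfsmaster.cfg
    CHUNKS_WRITE_REP_LIMIT = 5       # default 2: limit on chunks replicated to one chunkserver
    CHUNKS_READ_REP_LIMIT = 25       # default 10: limit on chunks replicated from one chunkserver
    REPLICATIONS_DELAY_INIT = 60     # default 300: seconds after master start before replication begins

    # then: mfsmaster reload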
From: Joseph L. <jo...@ge...> - 2015-10-02 18:33:14
|
Hi Aleksander, Thanks for getting the Moose team to look into this. I was able to get a single stream with mfscachemode=DIRECT to be a few hundred MB/s (my test cluster isn’t large enough to saturate 10gbe), which was definitely better than the 100MB/s I was capping out at before. Some synthetic testing showed that other performance characteristics differed a lot, presumably because the client-side caching was no longer there. Thanks again, -Joe > On Sep 23, 2015, at 1:59 AM, Aleksander Wieliczko <ale...@mo...> wrote: > > Hello Joe > > First of all thank You for pointing the problem to us. > > The reason why you don't get high performance is FreeBSD design. > We made some investigations and it appears that the block size in all I/O is only 4kB. > All operating systems use cache during I/O. The standard size of cache block is 4kB (standard page size), so transfers via cache are done using the same block size. > In some operating systems (for example Linux) there are algorithms that join these small blocks into larger groups and therefore increase performance of I/O, but that's not the case in FreeBSD. > So in FreeBSD, even when you set block size to 1M, inside the kernel all operations are split into 4k blocks (because of the cache). > > Our developer noticed, that during DIRECT I/O operations (without using cache), all I/O are split into 128k blocks (maximum allowable transfer size sent by our mfsmount to the kernel). It increases performance significantly. In our test environment we reached 900MB/s on 10Gb network. Be aware that in this case cache is not used at all. > > So to sum it all up: FreeBSD can use block size larger than 4k but only without cache. > > Mainly for FreeBSD we added special cache option for MooseFS client called DIRECT. > This option is available in MooseFS client since version 3.0.49. > To disable local cache and enable DIRECT communication please use this option during mount: > > mfsmount -H mfsmaster.your.domain.com -o mfscachemode=DIRECT /mount/point > > More details about current version of MooseFS you can find on http://moosefs.com/download.html <http://moosefs.com/download.html> page. > > > Please test this option. > We are waiting for your feedback. > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <x-msg://30/moosefs.com> > > On 10.09.2015 14:56, Joseph Love wrote: >> Thanks. I look forward to hearing what you’ve uncovered. >> >> -Joe >> >>> On Sep 10, 2015, at 6:53 AM, Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> wrote: >>> >>> Hi. >>> >>> Yes. >>> We are in progress of resolving this problem. >>> We found few potential reasons of this behaviour, but we need some more time to find the best solution. >>> >>> If we solve this problem, we respond as quick as it is possible with some more details. >>> >>> Best regards >>> Aleksander Wieliczko >>> Technical Support Engineer >>> MooseFS.com <x-msg://11/moosefs.com> >>> >>> On 09.09.2015 22:23, Joseph Love wrote: >>>> Hi Aleksander, >>>> >>>> Will there be time for some investigation into the performance of the FreeBSD client coming up? >>>> >>>> Thanks, >>>> -Joe >>>> >>>>> On Aug 28, 2015, at 1:38 AM, Aleksander Wieliczko < <mailto:ale...@mo...>ale...@mo... <mailto:ale...@mo...>> wrote: >>>>> >>>>> Hi. >>>>> Thank you for this information. >>>>> >>>>> Can you do one more simple test. >>>>> What network bandwidth can you achieve between two FreeBSD machines? 
>>>>> >>>>> I mean, something like: >>>>> FreeBSD 1 /dev/zero > 10Gb NIC > FreeBSD 2 /dev/null >>>>> (simple nc and dd tool will tell you a lot.) >>>>> >>>>> We know that FUSE on FreeBSD systems had some problems but we need to take close look to this issue. >>>>> We will try to repeat this scenario in our test environment and return to you after 08.09.2015, because we are in progress of different tests till this day. >>>>> >>>>> I would like to add one more aspect. >>>>> Mfsmaster application is single-thread so we compared your cpu with our. >>>>> This are the results: >>>>> (source: cpubenchmark.net <http://cpubenchmark.net/>) >>>>> >>>>> CPU Xeon E5-1620 v2 @ 3.70GHz >>>>> Avarage points: 9508 >>>>> Single Thread points: 1920 >>>>> >>>>> CPU Intel Atom C2758 @ 2.40GHz >>>>> Avarage points: 3620 >>>>> Single Thread points: 520 >>>>> >>>>> Best regards >>>>> Aleksander Wieliczko >>>>> Technical Support Engineer >>>>> MooseFS.com <x-msg://9/moosefs.com> >>>>> >>>> >>> >> > |
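A sketch of a before/after comparison for the DIRECT cache mode discussed above, using FreeBSD-style dd; the mount points and master hostname are placeholders:

    # default (cached) mount
    mfsmount /mnt/mfs-cached -H mfsmaster.example.lan
    dd if=/dev/zero of=/mnt/mfs-cached/test.bin bs=1m count=2048

    # DIRECT mount - bypasses the 4kB-per-request cache path described above
    mfsmount /mnt/mfs-direct -H mfsmaster.example.lan -o mfscachemode=DIRECT
    dd if=/dev/zero of=/mnt/mfs-direct/test.bin bs=1m count=2048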
From: Aleksander W. <ale...@mo...> - 2015-09-29 09:10:38
|
Hi We made a quick fix in MooseFS 2.0.77 few days ago. Please use the latest MooseFS version to see if your problem is solved. http://moosefs.com/download.html Right now we are working on better solution, but this kind of problem needs some time to be solved well. We will inform you when new implementation will appear. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 09/25/2015 09:26 AM, Krzysztof Kielak wrote: > Hi Matt, > > Thank you for your submission. We confirm that the problem exists in > Ubuntu 15.04 with MooseFS 2.0.76. It's connected with mechanism for > reusing inodes that was changed in MooseFS 2.0.74 to use kernel FORGET > functionality. Both version 2.0.74 and 2.0.76 were tested on Ubuntu LTS > editions and worked fine. > > Unfortunately the new kernel in Ubuntu 15.04 changed ABI for working > with FORGET's for inodes. With this new ABI kernel does not even send > any request to mfsmount when invoking the syscall GETCWD. > > Quick fix might be to execute script as follows: > > [ ~]$ cat test.rb > #!/bin/env ruby > puts Dir.pwd > [ ~]$ cd ; ./test.rb > /home/mfs > > In this situation the "cd" command before actual script executes LOOKUP > syscall for current directory and next GETCWD syscall - that usually is > invoked in the beginning of the script - works fine. > > Anyway we'll start working on this problem and solve it in the next > version of MooseFS. > > Best Regards, > Krzysztof Kielak > Director of Operations and Customer Support > > > W dniu 2015-09-25 05:58, Matt Welland napisał(a): >> I'm on Ubuntu 15.04 using moose 2.0.76. I have a master + chunk server >> running on a Ubuntu 14.10 machine and moosefs is mounted on that >> machine without problems. On the 15.04 machine the getcwd call seems >> to be failing completely. Note that ls, cat, vi and other commands all >> work fine. >> >> Examples: >> >> 1. Fossil can't see the current directory >> $ fossil status >> cannot find current working directory; No such file or directory >> >> 2. Ruby call to getcwd fails >> >> $ irb >> irb(main):001:0> Dir.pwd >> Errno::ENOENT: No such file or directory - getcwd >> from (irb):1:in `pwd' >> from (irb):1 >> from /usr/bin/irb:11:in `<main>' >> >> 3. Chicken scheme call to getcwd fails >> csi> (current-directory) >> >> Error: (current-directory) cannot retrieve current directory >> >> Call history: >> >> <syntax> (current-directory) >> <eval> (current-directory) <-- >> >> this was working fine up until a few days ago, I think a recent >> apt-get update;apt-get upgrade might have coincided with it breaking. >> I haven't changed any configurations. Any help fixing this >> appreciated. >> >> >> ------------------------------------------------------------------------------ >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > ------------------------------------------------------------------------------ > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Matt W. <mat...@gm...> - 2015-09-26 20:10:40
|
Thanks Krzysztof, I'll use a different system and wait for the fix, this is the price I pay for using the bleeding edge :) On Sep 25, 2015 12:27 AM, "Krzysztof Kielak" <krz...@mo...> wrote: > > Hi Matt, > > Thank you for your submission. We confirm that the problem exists in > Ubuntu 15.04 with MooseFS 2.0.76. It's connected with mechanism for > reusing inodes that was changed in MooseFS 2.0.74 to use kernel FORGET > functionality. Both version 2.0.74 and 2.0.76 were tested on Ubuntu LTS > editions and worked fine. > > Unfortunately the new kernel in Ubuntu 15.04 changed ABI for working > with FORGET's for inodes. With this new ABI kernel does not even send > any request to mfsmount when invoking the syscall GETCWD. > > Quick fix might be to execute script as follows: > > [ ~]$ cat test.rb > #!/bin/env ruby > puts Dir.pwd > [ ~]$ cd ; ./test.rb > /home/mfs > > In this situation the "cd" command before actual script executes LOOKUP > syscall for current directory and next GETCWD syscall - that usually is > invoked in the beginning of the script - works fine. > > Anyway we'll start working on this problem and solve it in the next > version of MooseFS. > > Best Regards, > Krzysztof Kielak > Director of Operations and Customer Support > > > W dniu 2015-09-25 05:58, Matt Welland napisał(a): > > I'm on Ubuntu 15.04 using moose 2.0.76. I have a master + chunk server > > running on a Ubuntu 14.10 machine and moosefs is mounted on that > > machine without problems. On the 15.04 machine the getcwd call seems > > to be failing completely. Note that ls, cat, vi and other commands all > > work fine. > > > > Examples: > > > > 1. Fossil can't see the current directory > > $ fossil status > > cannot find current working directory; No such file or directory > > > > 2. Ruby call to getcwd fails > > > > $ irb > > irb(main):001:0> Dir.pwd > > Errno::ENOENT: No such file or directory - getcwd > > from (irb):1:in `pwd' > > from (irb):1 > > from /usr/bin/irb:11:in `<main>' > > > > 3. Chicken scheme call to getcwd fails > > csi> (current-directory) > > > > Error: (current-directory) cannot retrieve current directory > > > > Call history: > > > > <syntax> (current-directory) > > <eval> (current-directory) <-- > > > > this was working fine up until a few days ago, I think a recent > > apt-get update;apt-get upgrade might have coincided with it breaking. > > I haven't changed any configurations. Any help fixing this > > appreciated. > > > > > > ------------------------------------------------------------------------------ > > > > _________________________________________ > > moosefs-users mailing list > > moo...@li... > > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > ------------------------------------------------------------------------------ > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Krzysztof K. <krz...@mo...> - 2015-09-25 07:26:57
|
Hi Matt, Thank you for your submission. We confirm that the problem exists in Ubuntu 15.04 with MooseFS 2.0.76. It's connected with mechanism for reusing inodes that was changed in MooseFS 2.0.74 to use kernel FORGET functionality. Both version 2.0.74 and 2.0.76 were tested on Ubuntu LTS editions and worked fine. Unfortunately the new kernel in Ubuntu 15.04 changed ABI for working with FORGET's for inodes. With this new ABI kernel does not even send any request to mfsmount when invoking the syscall GETCWD. Quick fix might be to execute script as follows: [ ~]$ cat test.rb #!/bin/env ruby puts Dir.pwd [ ~]$ cd ; ./test.rb /home/mfs In this situation the "cd" command before actual script executes LOOKUP syscall for current directory and next GETCWD syscall - that usually is invoked in the beginning of the script - works fine. Anyway we'll start working on this problem and solve it in the next version of MooseFS. Best Regards, Krzysztof Kielak Director of Operations and Customer Support W dniu 2015-09-25 05:58, Matt Welland napisał(a): > I'm on Ubuntu 15.04 using moose 2.0.76. I have a master + chunk server > running on a Ubuntu 14.10 machine and moosefs is mounted on that > machine without problems. On the 15.04 machine the getcwd call seems > to be failing completely. Note that ls, cat, vi and other commands all > work fine. > > Examples: > > 1. Fossil can't see the current directory > $ fossil status > cannot find current working directory; No such file or directory > > 2. Ruby call to getcwd fails > > $ irb > irb(main):001:0> Dir.pwd > Errno::ENOENT: No such file or directory - getcwd > from (irb):1:in `pwd' > from (irb):1 > from /usr/bin/irb:11:in `<main>' > > 3. Chicken scheme call to getcwd fails > csi> (current-directory) > > Error: (current-directory) cannot retrieve current directory > > Call history: > > <syntax> (current-directory) > <eval> (current-directory) <-- > > this was working fine up until a few days ago, I think a recent > apt-get update;apt-get upgrade might have coincided with it breaking. > I haven't changed any configurations. Any help fixing this > appreciated. > > > ------------------------------------------------------------------------------ > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
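A tiny sketch of applying the workaround described above to an arbitrary directory: cd into it immediately before launching the program, so the shell's LOOKUP precedes the program's getcwd call (the path and script name are placeholders):

    cd /mnt/mfs/project && ./myscript.rb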
From: Matt W. <mat...@gm...> - 2015-09-25 03:58:43
|
I'm on Ubuntu 15.04 using moose 2.0.76. I have a master + chunk server running on a Ubuntu 14.10 machine and moosefs is mounted on that machine without problems. On the 15.04 machine the getcwd call seems to be failing completely. Note that ls, cat, vi and other commands all work fine. Examples: 1. Fossil can't see the current directory $ fossil status cannot find current working directory; No such file or directory 2. Ruby call to getcwd fails $ irb irb(main):001:0> Dir.pwd Errno::ENOENT: No such file or directory - getcwd from (irb):1:in `pwd' from (irb):1 from /usr/bin/irb:11:in `<main>' 3. Chicken scheme call to getcwd fails csi> (current-directory) Error: (current-directory) cannot retrieve current directory Call history: <syntax> (current-directory) <eval> (current-directory) <-- this was working fine up until a few days ago, I think a recent apt-get update;apt-get upgrade might have coincided with it breaking. I haven't changed any configurations. Any help fixing this appreciated. |
From: Piotr R. K. <pio...@mo...> - 2015-09-24 10:31:15
|
Dear William, > Does mfsappendchunks work the same way? (with the ability to merge multiple files) > Almost. mfsappendchunks (equivalent of mfssnapshot from MooseFS 1.5) appends a lazy copy of specified file(s) to specified snapshot file ("lazy" means that creation of new chunks is delayed to the moment one copy is modified). If multiple files are given, they are merged into one target file in the way that each file begins at chunk (64MB) boundary; padding space is left empty. mfsmakesnapshot makes a "real" snapshot (lazy copy, like in case of mfsappendchunks) of some object(s) or subtree (similarly to cp -r command). It's atomic with respect to each SOURCE argument separately. If DESTINATION points to already existing file, error will be reported unless -o (overwrite) option is given. Note: if SOURCE is a directory, it's copied as a whole; but if it's followed by trailing slash, only directory content is copied. Please find the example below. As you can see, after mfsappendchunks, file “a” have length over 64M. It is because first chunk is filled up with zeroes: root@ts10:/mnt/mfs/mfsappendchunks# echo "A" > a root@ts10:/mnt/mfs/mfsappendchunks# echo "B" > b root@ts10:/mnt/mfs/mfsappendchunks# ls -l total 1 -rw-r--r-- 1 root root 2 Sep 24 12:25 a -rw-r--r-- 1 root root 2 Sep 24 12:25 b root@ts10:/mnt/mfs/mfsappendchunks# mfsfileinfo a a: chunk 0: 0000000002912925_00000001 / (id:43067685 ver:1) copy 1: 10.10.10.6:9422 (status:VALID) copy 2: 10.10.10.7:9422 (status:VALID) root@ts10:/mnt/mfs/mfsappendchunks# mfsfileinfo b b: chunk 0: 0000000002912926_00000001 / (id:43067686 ver:1) copy 1: 10.10.10.4:9422 (status:VALID) copy 2: 10.10.10.10:9422 (status:VALID) root@ts10:/mnt/mfs/mfsappendchunks# mfsappendchunks a b root@ts10:/mnt/mfs/mfsappendchunks# mfsfileinfo a a: chunk 0: 0000000002912925_00000001 / (id:43067685 ver:1) copy 1: 10.10.10.6:9422 (status:VALID) copy 2: 10.10.10.7:9422 (status:VALID) chunk 1: 0000000002912926_00000001 / (id:43067686 ver:1) copy 1: 10.10.10.4:9422 (status:VALID) copy 2: 10.10.10.10:9422 (status:VALID) root@ts10:/mnt/mfs/mfsappendchunks# mfsfileinfo b b: chunk 0: 0000000002912926_00000001 / (id:43067686 ver:1) copy 1: 10.10.10.4:9422 (status:VALID) copy 2: 10.10.10.10:9422 (status:VALID) root@ts10:/mnt/mfs/mfsappendchunks# hexdump -C a 00000000 41 0a 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |A...............| 00000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 04000000 42 0a |B.| 04000002 root@ts10:/mnt/mfs/mfsappendchunks# ls -lha total 70M drwxr-xr-x 2 root root 2.0M Sep 24 12:25 . drwxr-xr-x 12 root root 3.9M Sep 24 12:19 .. -rw-r--r-- 1 root root 65M Sep 24 12:26 a -rw-r--r-- 1 root root 2 Sep 24 12:25 b root@ts10:/mnt/mfs/mfsappendchunks# Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 19 Sep 2015, at 5:13 am, William Kern <wk...@bn...> wrote: > > So I imagine that > mfsmakesnapshot is really just cloning the metadata and then fills in after the fact when the source changes. > Does mfsappendchunks work the same way? (with the ability to merge multiple files) > Is there an advantage to one method or the other (especially in regards to a VM image)? > -wk > > > ------------------------------------------------------------------------------ > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
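A short usage sketch tying the two commands back to the VM-image question above; the file names are placeholders:

    # lazy copy of a VM image - chunks are only duplicated when one copy is later modified
    mfsmakesnapshot vm-disk.img vm-disk.img.snap

    # append the chunks of two source files to a target, each starting at a 64MB chunk boundary
    mfsappendchunks merged.img part1.img part2.img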