From: Aleksander W. <ale...@mo...> - 2015-11-12 06:43:15

Hi Michael,

Exactly. You can upgrade the old MooseFS master to the new 2.0 and start the whole cluster with the old components: chunkservers, clients, metaloggers. After this step you can upgrade the other MooseFS components one by one while the MooseFS cluster is running.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 11/12/2015 03:01 AM, Michael Tinsay wrote:
> Does this mean that once I have upgraded the master node to 2.x, the
> 1.6.27-5 metaloggers, chunkservers, and clients can be started up and
> used while I upgrade them to 2.x one at a time?
>
> -- mike t.
>
> > Date: Fri, 30 Oct 2015 12:02:13 +0100
> > From: Piotr Robert Konopelko <pio...@mo...>
> > Subject: Re: [MooseFS-Users] Access to data while upgrading from
> > 1.6.27 to 2.xxx?
> > To: Andreas Hirczy <ah...@it...>
> > Cc: MooseFS-Users <moo...@li...>
> >
> > Hi Andreas,
> >
> > yes, you'll be able to access your data with your 1.6.27-5 clients.
> >
> > Please remember to make a backup of the metadata, and please remember that
> > we recommend doing the upgrade in the following order:
> >
> > 1. MFS Master (needs to be done first of all)
> > 2. MFS Metalogger
> > 3. MFS Chunkservers
> > 4. MFS Clients (Mounts)
> >
> > Please also read this document before starting the upgrade:
> > https://moosefs.com/Content/Downloads/moosefs-upgrade.pdf
> >
> > In case of any questions, please don't hesitate to contact us.
> >
> > Best regards,
> > --
> > Piotr Robert Konopelko

_________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users
From: Angus Y. 杨阳 <ang...@vi...> - 2015-11-12 02:55:50

Dear All,

I was using MFS to collect my servers' logs (JBoss and nginx logs). When I checked yesterday, I found that log writing had stopped and several hours of log messages were lost. My server's /var/log/messages shows:

Nov 11 09:30:11 localhost mfsmount[26420]: file: 662, index: 4, chunk: 9463, version: 3 - writeworker: connection with (10.172.217.237:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1)
Nov 11 09:30:12 localhost mfsmount[26420]: file: 662, index: 4 - fs_writechunk returned status: Chunk lost
Nov 11 09:30:12 localhost mfsmount[26420]: error writing file number 662: ENXIO (No such device or address)

What does "ENXIO (No such device or address)" mean?
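Some context on the errno in the last log line above (a general POSIX note, not an official MooseFS answer): ENXIO is a standard kernel error code that mfsmount passes through to the writing application once the chunk backing the write is lost. A quick Python sketch for looking it up:

```python
import errno
import os

# ENXIO is a standard POSIX errno ("No such device or address").
# In the log above, mfsmount returns it to the writing application
# because the chunk it needs ("Chunk lost" in the previous line)
# is no longer reachable on any chunkserver.
code = errno.ENXIO
print(errno.errorcode[code], "=", code, "->", os.strerror(code))
```

The same lookup works for any errno name that appears in mfsmount logs (EMFILE, EACCES, and so on).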
From: Michael T. <mic...@ho...> - 2015-11-12 02:01:39

Does this mean that once I have upgraded the master node to 2.x, the 1.6.27-5 metaloggers, chunkservers, and clients can be started up and used while I upgrade them to 2.x one at a time?

-- mike t.

> Date: Fri, 30 Oct 2015 12:02:13 +0100
> From: Piotr Robert Konopelko <pio...@mo...>
> Subject: Re: [MooseFS-Users] Access to data while upgrading from
> 1.6.27 to 2.xxx?
> To: Andreas Hirczy <ah...@it...>
> Cc: MooseFS-Users <moo...@li...>
>
> Hi Andreas,
>
> yes, you'll be able to access your data with your 1.6.27-5 clients.
>
> Please remember to make a backup of the metadata, and please remember that
> we recommend doing the upgrade in the following order:
>
> 1. MFS Master (needs to be done first of all)
> 2. MFS Metalogger
> 3. MFS Chunkservers
> 4. MFS Clients (Mounts)
>
> Please also read this document before starting the upgrade:
> https://moosefs.com/Content/Downloads/moosefs-upgrade.pdf
>
> In case of any questions, please don't hesitate to contact us.
>
> Best regards,
> --
> Piotr Robert Konopelko
From: Wilson, S. M <st...@pu...> - 2015-11-05 14:53:17

Dear Krzysztof,

Thanks for looking into it and locating the cause!

Regards,
Steve

On Thu, 2015-11-05 at 09:07 +0100, Krzysztof Kielak wrote:

Dear Steve,

Thank you for pointing this out. We have checked the issue, and the problem you reported is connected with one of mfsmount's threads, which is waking up too often (when looking at strace, you'll see a lot of nanosleep syscalls). We will tweak the timing configuration for this thread in the next release of MooseFS.

Best Regards,
Krzysztof Kielak
MooseFS.com | Director of Operations and Customer Support
Mobile: +48 601 476 440

On 04 Nov 2015, at 19:03, Wilson, Steven M <st...@pu...> wrote:

Hi,

I've noticed that recent versions of mfsmount (after about version 2.73) are always consuming CPU cycles even though there is no activity on the mounted MooseFS file system. On one client, for example, top shows between 1.0% and 2.0% CPU utilization for six mfsmount processes and fairly uniform accumulated run times for each of these mfsmount processes. But the user consistently only accesses the same two mounted file systems; the other four aren't accessed. Here's the output of ps on that system filtered for mfsmount processes:

root@puma:~# ps axu | grep mfsmount | grep -v grep
root 2854 1.4 0.2 529200 140036 ? S<sl Oct28 152:15 /usr/bin/mfsmount /net/em -o mfsbind=10.163.216.126 -Hem-data.bio.purdue.edu -S/data
root 2877 1.5 0.2 786140 155568 ? S<sl Oct28 154:19 /usr/bin/mfsmount /net/apps -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/apps
root 2893 1.4 0.2 529200 140016 ? S<sl Oct28 152:12 /usr/bin/mfsmount /net/cosit -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/cosit
root 2909 1.4 0.2 529200 140012 ? S<sl Oct28 152:09 /usr/bin/mfsmount /net/cosit-scratch -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/cosit-scratch
root 4870 1.5 0.0 759432 35536 ? S<sl Oct28 158:53 /usr/bin/mfsmount /net/kihara -o mfsbind=10.163.216.126 -Hkihara-data.bio.purdue.edu -S/home
root 4887 1.5 0.0 463664 18976 ? S<sl Oct28 152:33 /usr/bin/mfsmount /net/kihara-scratch -o mfsbind=10.163.216.126 -Hkihara-data.bio.purdue.edu -S/scratch

I have about a dozen clients using mfsmount version 2.76 or higher and they all show this behavior. The remaining ~75 clients use mfsmount version 2.73 or lower, and they show the expected behavior (i.e., mfsmount CPU utilization correlates with user access of the mounted file systems). Here's the typical ps output for a system using one of the older clients (note the highly variable accumulated run times that correlate with usage):

root@noro:~# ps axu | grep mfsmount | grep -v grep
root 2283 0.0 0.0 536556 4392 ? S<sl Sep09 8:11 /usr/bin/mfsmount /net/em -o mfsbind=10.163.216.90 -Hem-data.bio.purdue.edu -S/data
root 2298 0.3 0.3 825344 38308 ? S<sl Sep09 250:02 /usr/bin/mfsmount /net/jiang -o mfsbind=10.163.216.90 -Hjiang-data.bio.purdue.edu -S/home
root 2316 0.0 0.0 774608 1800 ? S<sl Sep09 16:13 /usr/bin/mfsmount /net/jiang-scratch -o mfsbind=10.163.216.90 -Hjiang-data.bio.purdue.edu -S/scratch
root 2333 0.0 0.3 815896 40220 ? S<sl Sep09 21:07 /usr/bin/mfsmount /net/apps -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/apps
root 2348 0.0 0.0 462620 3052 ? S<sl Sep09 11:27 /usr/bin/mfsmount /net/cosit -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/cosit
root 2363 0.0 0.0 462620 3172 ? S<sl Sep09 9:56 /usr/bin/mfsmount /net/cosit-scratch -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/cosit-scratch

What is mfsmount doing that would consume CPU cycles when there's no activity on the file system that it has mounted? I didn't notice anything in the change log that would account for this.

Thanks!

Steve
From: Krzysztof K. <krz...@mo...> - 2015-11-05 08:07:21

Dear Steve,

Thank you for pointing this out. We have checked the issue, and the problem you reported is connected with one of mfsmount's threads, which is waking up too often (when looking at strace, you'll see a lot of nanosleep syscalls). We will tweak the timing configuration for this thread in the next release of MooseFS.

Best Regards,
Krzysztof Kielak
MooseFS.com | Director of Operations and Customer Support
Mobile: +48 601 476 440

> On 04 Nov 2015, at 19:03, Wilson, Steven M <st...@pu...> wrote:
>
> Hi,
>
> I've noticed that recent versions of mfsmount (after about version 2.73)
> are always consuming CPU cycles even though there is no activity on the
> mounted MooseFS file system. On one client, for example, top shows
> between 1.0% and 2.0% CPU utilization for six mfsmount processes and
> fairly uniform accumulated run times for each of these mfsmount
> processes. But the user consistently only accesses the same two mounted
> file systems; the other four aren't accessed. Here's the output of ps
> on that system filtered for mfsmount processes:
>
> root@puma:~# ps axu | grep mfsmount | grep -v grep
> root 2854 1.4 0.2 529200 140036 ? S<sl Oct28 152:15 /usr/bin/mfsmount /net/em -o mfsbind=10.163.216.126 -Hem-data.bio.purdue.edu -S/data
> root 2877 1.5 0.2 786140 155568 ? S<sl Oct28 154:19 /usr/bin/mfsmount /net/apps -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/apps
> root 2893 1.4 0.2 529200 140016 ? S<sl Oct28 152:12 /usr/bin/mfsmount /net/cosit -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/cosit
> root 2909 1.4 0.2 529200 140012 ? S<sl Oct28 152:09 /usr/bin/mfsmount /net/cosit-scratch -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/cosit-scratch
> root 4870 1.5 0.0 759432 35536 ? S<sl Oct28 158:53 /usr/bin/mfsmount /net/kihara -o mfsbind=10.163.216.126 -Hkihara-data.bio.purdue.edu -S/home
> root 4887 1.5 0.0 463664 18976 ? S<sl Oct28 152:33 /usr/bin/mfsmount /net/kihara-scratch -o mfsbind=10.163.216.126 -Hkihara-data.bio.purdue.edu -S/scratch
>
> I have about a dozen clients using mfsmount version 2.76 or higher and
> they all show this behavior. The remaining ~75 clients use mfsmount
> version 2.73 or lower, and they show the expected behavior (i.e.,
> mfsmount CPU utilization correlates with user access of the mounted file
> systems). Here's the typical ps output for a system using one of the
> older clients (note the highly variable accumulated run times that
> correlate with usage):
>
> root@noro:~# ps axu | grep mfsmount | grep -v grep
> root 2283 0.0 0.0 536556 4392 ? S<sl Sep09 8:11 /usr/bin/mfsmount /net/em -o mfsbind=10.163.216.90 -Hem-data.bio.purdue.edu -S/data
> root 2298 0.3 0.3 825344 38308 ? S<sl Sep09 250:02 /usr/bin/mfsmount /net/jiang -o mfsbind=10.163.216.90 -Hjiang-data.bio.purdue.edu -S/home
> root 2316 0.0 0.0 774608 1800 ? S<sl Sep09 16:13 /usr/bin/mfsmount /net/jiang-scratch -o mfsbind=10.163.216.90 -Hjiang-data.bio.purdue.edu -S/scratch
> root 2333 0.0 0.3 815896 40220 ? S<sl Sep09 21:07 /usr/bin/mfsmount /net/apps -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/apps
> root 2348 0.0 0.0 462620 3052 ? S<sl Sep09 11:27 /usr/bin/mfsmount /net/cosit -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/cosit
> root 2363 0.0 0.0 462620 3172 ? S<sl Sep09 9:56 /usr/bin/mfsmount /net/cosit-scratch -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/cosit-scratch
>
> What is mfsmount doing that would consume CPU cycles when there's no
> activity on the file system that it has mounted? I didn't notice
> anything in the change log that would account for this.
>
> Thanks!
>
> Steve
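Krzysztof's diagnosis can be checked on a client by running `strace -c -p <mfsmount pid>` for a few seconds and looking at the call counts. The sketch below parses the summary table that `strace -c` prints; the sample table here is made up for illustration and is not real mfsmount output:

```python
# Parse the summary table printed by `strace -c` and report the
# syscall with the highest call count. Feed in real `strace -c`
# output instead of this illustrative sample.
sample = """\
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 62.10    0.004971           0     49710           nanosleep
 21.45    0.001717           0      1201           futex
 16.45    0.001317           0       880           poll
"""

def busiest_syscall(strace_c_output):
    best_name, best_calls = None, -1
    for line in strace_c_output.splitlines():
        parts = line.split()
        # Data rows start with a numeric "% time" value; columns are
        # %time, seconds, usecs/call, calls, [errors], syscall name.
        if len(parts) >= 5 and parts[0].replace(".", "").isdigit():
            calls, name = int(parts[3]), parts[-1]
            if calls > best_calls:
                best_name, best_calls = name, calls
    return best_name, best_calls

print(busiest_syscall(sample))
```

A mount showing the bug should report nanosleep dominating the counts even while the file system is idle.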
From: Wilson, S. M <st...@pu...> - 2015-11-04 18:03:25

Hi,

I've noticed that recent versions of mfsmount (after about version 2.73) are always consuming CPU cycles even though there is no activity on the mounted MooseFS file system. On one client, for example, top shows between 1.0% and 2.0% CPU utilization for six mfsmount processes and fairly uniform accumulated run times for each of these mfsmount processes. But the user consistently only accesses the same two mounted file systems; the other four aren't accessed. Here's the output of ps on that system filtered for mfsmount processes:

root@puma:~# ps axu | grep mfsmount | grep -v grep
root 2854 1.4 0.2 529200 140036 ? S<sl Oct28 152:15 /usr/bin/mfsmount /net/em -o mfsbind=10.163.216.126 -Hem-data.bio.purdue.edu -S/data
root 2877 1.5 0.2 786140 155568 ? S<sl Oct28 154:19 /usr/bin/mfsmount /net/apps -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/apps
root 2893 1.4 0.2 529200 140016 ? S<sl Oct28 152:12 /usr/bin/mfsmount /net/cosit -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/cosit
root 2909 1.4 0.2 529200 140012 ? S<sl Oct28 152:09 /usr/bin/mfsmount /net/cosit-scratch -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/cosit-scratch
root 4870 1.5 0.0 759432 35536 ? S<sl Oct28 158:53 /usr/bin/mfsmount /net/kihara -o mfsbind=10.163.216.126 -Hkihara-data.bio.purdue.edu -S/home
root 4887 1.5 0.0 463664 18976 ? S<sl Oct28 152:33 /usr/bin/mfsmount /net/kihara-scratch -o mfsbind=10.163.216.126 -Hkihara-data.bio.purdue.edu -S/scratch

I have about a dozen clients using mfsmount version 2.76 or higher and they all show this behavior. The remaining ~75 clients use mfsmount version 2.73 or lower, and they show the expected behavior (i.e., mfsmount CPU utilization correlates with user access of the mounted file systems). Here's the typical ps output for a system using one of the older clients (note the highly variable accumulated run times that correlate with usage):

root@noro:~# ps axu | grep mfsmount | grep -v grep
root 2283 0.0 0.0 536556 4392 ? S<sl Sep09 8:11 /usr/bin/mfsmount /net/em -o mfsbind=10.163.216.90 -Hem-data.bio.purdue.edu -S/data
root 2298 0.3 0.3 825344 38308 ? S<sl Sep09 250:02 /usr/bin/mfsmount /net/jiang -o mfsbind=10.163.216.90 -Hjiang-data.bio.purdue.edu -S/home
root 2316 0.0 0.0 774608 1800 ? S<sl Sep09 16:13 /usr/bin/mfsmount /net/jiang-scratch -o mfsbind=10.163.216.90 -Hjiang-data.bio.purdue.edu -S/scratch
root 2333 0.0 0.3 815896 40220 ? S<sl Sep09 21:07 /usr/bin/mfsmount /net/apps -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/apps
root 2348 0.0 0.0 462620 3052 ? S<sl Sep09 11:27 /usr/bin/mfsmount /net/cosit -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/cosit
root 2363 0.0 0.0 462620 3172 ? S<sl Sep09 9:56 /usr/bin/mfsmount /net/cosit-scratch -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/cosit-scratch

What is mfsmount doing that would consume CPU cycles when there's no activity on the file system that it has mounted? I didn't notice anything in the change log that would account for this.

Thanks!

Steve
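A low-tech way to spot the always-busy mounts Steve describes is to compare the accumulated TIME column of `ps axu` across mfsmount processes. A sketch (column positions assume the `ps axu` layout shown above; the sample lines are taken from the post):

```python
def cputime_minutes(time_field):
    # ps prints accumulated CPU time as MM:SS or HH:MM:SS, e.g. "152:15".
    # Day-prefixed values ([DD-]HH:MM:SS) would need extra handling.
    parts = [int(p) for p in time_field.split(":")]
    seconds = 0
    for p in parts:
        seconds = seconds * 60 + p
    return seconds / 60.0

def mfsmount_times(ps_output):
    # Map mount point -> accumulated CPU minutes, using `ps axu` columns:
    # TIME is the 10th column, the mount point follows the binary path.
    times = {}
    for line in ps_output.splitlines():
        cols = line.split()
        if len(cols) > 11 and cols[10].endswith("mfsmount"):
            times[cols[11]] = cputime_minutes(cols[9])
    return times

ps_sample = (
    "root 2854 1.4 0.2 529200 140036 ? S<sl Oct28 152:15 "
    "/usr/bin/mfsmount /net/em -o mfsbind=10.163.216.126 -Hem-data.bio.purdue.edu -S/data\n"
    "root 2298 0.3 0.3 825344 38308 ? S<sl Sep09 250:02 "
    "/usr/bin/mfsmount /net/jiang -o mfsbind=10.163.216.90 -Hjiang-data.bio.purdue.edu -S/home\n"
)
print(mfsmount_times(ps_sample))
```

Mounts whose accumulated minutes keep climbing despite no user access are the ones exhibiting the idle-spin behavior.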
From: Aleksander W. <ale...@mo...> - 2015-11-02 07:08:39

Hi,

Can you send us some log files from the master? This is not an error as such; it is only mfsmaster information that one of the chunks is in a locked state. Do you see any information (files) in the MFS CGI web page, in the "Filesystem check info" section at the bottom of the page?

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 11/01/2015 01:32 PM, 240036175 wrote:
> Hi, I installed MFS 2.0 on CentOS 6.5 x86-64. Yesterday it failed with
> "pipe error: EMFILE (Too many open files)"; I increased the ulimit to
> 655350, and now MFS reports "file: 1170 fs_writechunk returned status:
> Chunk locked". I searched Google and found no answer. How can I fix it?
> Thank you very much; forgive my poor English.
From: 2. <240...@qq...> - 2015-11-01 12:33:00

Hi, I installed MFS 2.0 on CentOS 6.5 x86-64. Yesterday it failed with "pipe error: EMFILE (Too many open files)"; I increased the ulimit to 655350, and now MFS reports "file: 1170 fs_writechunk returned status: Chunk locked". I searched Google and found no answer. How can I fix it?

Thank you very much; forgive my poor English.
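For background (a general POSIX note, not MooseFS-specific advice): EMFILE means the process ran out of file descriptors, which is why raising the ulimit helped. The limit a process actually inherited can be inspected with Python's standard `resource` module (POSIX only):

```python
import resource

def nofile_limits():
    # RLIMIT_NOFILE is the per-process cap on open file descriptors;
    # a process gets EMFILE from the kernel once this limit is exhausted.
    return resource.getrlimit(resource.RLIMIT_NOFILE)

soft, hard = nofile_limits()
print("soft:", soft, "hard:", hard)

# A process needing more descriptors can raise its soft limit up to the
# hard limit (raising the hard limit itself requires privileges).
try:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
except (ValueError, OSError) as exc:
    print("could not raise soft limit:", exc)
```

The same check is useful before starting mfsmaster or mfschunkserver, since a ulimit set in a shell does not necessarily apply to a daemon started by init or systemd.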
From: Xavier R. <XR...@ne...> - 2015-10-30 11:33:10

Hello,

I've been testing v3 and I have some doubts about the new and very interesting CREATE/KEEP/ARCHIVE feature:

- Can I monitor somehow how much time remains before a file is moved to ARCHIVE?
- I see that modified files (ctime) move back from ARCHIVE to KEEP; wouldn't it be nice to also allow moving back on read (atime)? Does that apply to individual chunks or to the whole file?
  - Looking at this, I see that files in MooseFS 3 don't update their atime. Is that right?
- I wasn't able to set an archive delay lower than 1d, e.g. 8h. Is that not supported?

A suggestion that would fit my case perfectly: the chance to also define a delay (also based on atime/ctime) for the migration from CREATE to KEEP. That way I could retain my very hot data in the CREATE pool.

Best regards,
Xavier Romero
From: Piotr R. K. <pio...@mo...> - 2015-10-30 11:02:25

Hi Andreas,

yes, you'll be able to access your data with your 1.6.27-5 clients.

Please remember to make a backup of the metadata, and please remember that we recommend doing the upgrade in the following order:

1. MFS Master (needs to be done first of all)
2. MFS Metalogger
3. MFS Chunkservers
4. MFS Clients (Mounts)

Please also read this document before starting the upgrade:
https://moosefs.com/Content/Downloads/moosefs-upgrade.pdf

In case of any questions, please don't hesitate to contact us.

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

> On 30 Oct 2015, at 11:18 AM, Andreas Hirczy <ah...@it...> wrote:
>
> Hi!
>
> I plan to upgrade MooseFS from 1.6.27-5 to 2.0.8x in the near future.
> Can I expect to access the data on MooseFS with the old client software
> while upgrading the Chunkservers and already running the new Master?
>
> Best regards,
> Andreas
From: Andreas H. <ah...@it...> - 2015-10-30 10:48:01

Hi!

I plan to upgrade MooseFS from 1.6.27-5 to 2.0.8x in the near future. Can I expect to access the data on MooseFS with the old client software while upgrading the Chunkservers and already running the new Master?

Best regards,
Andreas

--
Andreas Hirczy <ah...@it...>                          https://itp.tugraz.at/~ahi/
Graz University of Technology                        phone: +43/316/873- 8190
Institute of Theoretical and Computational Physics   fax: +43/316/873-10 8190
Petersgasse 16, A-8010 Graz                          mobile: +43/664/859 23 57
From: Piotr R. K. <pio...@mo...> - 2015-10-30 09:15:42

Dear MooseFS Users,

We would like to inform you about some changes that were made in MooseFS recently:

- We added a changelog of changes in MooseFS on our website:
  https://moosefs.com/documentation/changes-in-moosefs-2-0.html
  https://moosefs.com/documentation/changes-in-moosefs-3-0.html
- We fixed a problem with installing MooseFS packages on Mac OS X 10.11
- We published MooseFS v. 2.0.80 (stable) and 3.0.57 (current / testing)
- We published a newer version of the packages in the Raspberry Pi repo (3.0.55)
- Several bug fixes

Please keep in mind that MooseFS 3.0 is not yet a stable version and is not recommended for production environments.

Please find below the changes made in MooseFS 2.0 this month:

MooseFS 2.0.80-1 (2015-10-27)
- (cs, master, metalogger) added 1 second timeout when connecting to master
- (cs) force disconnection from master a couple of seconds after the term signal (frozen I/O threads can prevent CS from terminating)
- (systemd) fixed typo in mfscgiserv service file
- (macosx) fixed packages to be compatible with OS X 10.11+

MooseFS 2.0.79-1 (2015-10-16)
- (master) fixed setting version of new chunks registered as 'marked for removal'
- (master) added stronger condition for deleting invalid chunks

MooseFS 2.0.78-1 (2015-10-09)
- (rpm) added network-online.target to Wants and After in systemd service files (startup issues after reboot)

MooseFS 2.0.77-1 (2015-09-25)
- (mount) removed use of the fuse notify/forget mechanism in kernels with fuse api 7.23+ (due to unexpected kernel behaviour - getcwd returns ENOENT)

MooseFS 2.0.76-1 (2015-09-11)
- (mount) fixed rare bug in writing module (unrecoverable write error could lead to infinite loop during write)

MooseFS 2.0.75-1 (2015-09-10)
- (mount) fixed data-cache issue (delete only directories from kernel dentry cache)
- (mount) inserting "nonexistent" xattr "security.capability" into xattr cache after file creation (speeds up writing small files)
- (master) fixed scenario causing deletion of chunks from chunkservers marked for removal

Please find below the changes made in MooseFS 3.0 this month:

MooseFS 3.0.57-1 (2015-10-27)
- (metalogger) added 1 second timeout when connecting to master
- (systemd) fixed typo in mfscgiserv service file
- (macosx) fixed packages to be compatible with OS X 10.11+

MooseFS 3.0.56-1 (2015-10-26)
- (mount) fixed reading scenario: (read from empty chunk -> write chunk -> read this chunk again)

MooseFS 3.0.55-1 (2015-10-20)
- (master+cs) added 1 second timeout when connecting to master

MooseFS 3.0.54-1 (2015-10-16)
- (master) fixed setting version of new chunks registered as 'marked for removal'
- (master) added stronger condition for deleting invalid chunks
- (cs) changed condition for number of blocks to change to mark disk as damaged (allow changes up to 10%)

MooseFS 3.0.53-1 (2015-10-13)
- (cs) fixed typo (cnunk)
- (mount) create in deleted directory returns EACCES only in OS X (ENOENT in other systems)

MooseFS 3.0.52-1 (2015-10-09)
- (mount) added new mechanism for sustaining working directories (replaces mechanism added in 3.0.40)
- (cs) force disconnection from master a couple of seconds after the term signal (frozen I/O threads can prevent CS from terminating)
- (cs) when RO/RW status or total blocks changes, the device is automatically marked as damaged
- (master) added support for root inode and deleted inodes in MASS_RESOLVE_PATHS
- (cli) fixed error displaying disconnected chunkservers
- (rpm) added network-online.target to Wants and After in systemd service files (startup issues after reboot)

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com
From: Michael T. <mic...@ho...> - 2015-10-29 02:09:29

Hi Piotr,

It seems my earlier reply didn't get through. Anyway, thank you for the information. We currently have a v1 installation and a second installation already on v2. I have plans to upgrade the v1 site, but I'm thinking it may be more practical to wait for the GA release of v3 rather than upgrade now and then upgrade again after only a few months.

Regards,

--- mike t.

________________________________
From: Piotr Robert Konopelko <pio...@mo...>
Sent: 10/29/2015 12:34 AM
To: moo...@li...
Cc: Michael Tinsay <mic...@ho...>
Subject: Re: [MooseFS-Users] MFS 3.0.x

Hi,

I also want to inform all MooseFS users that we published the log of changes in MooseFS on our website. You can reach it here:

MooseFS 3.0 changelog: https://moosefs.com/documentation/changes-in-moosefs-3-0.html
MooseFS 2.0 changelog: https://moosefs.com/documentation/changes-in-moosefs-2-0.html

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

> On 15 Oct 2015, at 9:24 PM, Piotr Robert Konopelko <pio...@mo...> wrote:
>
> Hi Michael,
>
>> When is MooseFS 3.0 expected to supplant 2.0 and become the "stable" version?
>
> It is hard to say.
> Theoretically, we have a Release Candidate now (3.0.53; it will be published on our website today and is already in the "current" repo). But we are still in the process of testing in different environments (including production).
> We want to release a very good, well-tested piece of software, because we know that in our users' environments it is crucial to have a rock-solid and stable data backend. So MFS 3 needs a lot of different tests.
>
> You can take a look at the file I'm attaching (you can also always find such a file, named "NEWS", in the sources tarball - https://moosefs.com/download/sources.html).
> Please note that the last two months, or even longer, are mainly fixes.
> And testing software takes time.
>
> We want to release it as soon as possible, but I am not able to give a specific date; I'm sure that now you understand why. Maybe, but only maybe, it will be by the end of the year. But unfortunately I can't guarantee it.
>
> <NEWS>
>
> Best regards,
>
> --
> Piotr Robert Konopelko
> MooseFS Technical Support Engineer | moosefs.com
>
>> On 13 Oct 2015, at 7:54 am, Michael Tinsay <mic...@ho...> wrote:
>>
>> Hi!
>>
>> When is MooseFS 3.0 expected to supplant 2.0 and become the "stable" version?
>>
>> Best regards.
>>
>> --- mike t.
From: Piotr R. K. <pio...@mo...> - 2015-10-28 16:34:26

Hi,

I also want to inform all MooseFS users that we published the log of changes in MooseFS on our website. You can reach it here:

MooseFS 3.0 changelog: https://moosefs.com/documentation/changes-in-moosefs-3-0.html
MooseFS 2.0 changelog: https://moosefs.com/documentation/changes-in-moosefs-2-0.html

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

> On 15 Oct 2015, at 9:24 PM, Piotr Robert Konopelko <pio...@mo...> wrote:
>
> Hi Michael,
>
>> When is MooseFS 3.0 expected to supplant 2.0 and become the "stable" version?
>
> It is hard to say.
> Theoretically, we have a Release Candidate now (3.0.53; it will be published on our website today and is already in the "current" repo). But we are still in the process of testing in different environments (including production).
> We want to release a very good, well-tested piece of software, because we know that in our users' environments it is crucial to have a rock-solid and stable data backend. So MFS 3 needs a lot of different tests.
>
> You can take a look at the file I'm attaching (you can also always find such a file, named "NEWS", in the sources tarball - https://moosefs.com/download/sources.html).
> Please note that the last two months, or even longer, are mainly fixes.
> And testing software takes time.
>
> We want to release it as soon as possible, but I am not able to give a specific date; I'm sure that now you understand why. Maybe, but only maybe, it will be by the end of the year. But unfortunately I can't guarantee it.
>
> <NEWS>
>
> Best regards,
>
> --
> Piotr Robert Konopelko
> MooseFS Technical Support Engineer | moosefs.com
>
>> On 13 Oct 2015, at 7:54 am, Michael Tinsay <mic...@ho...> wrote:
>>
>> Hi!
>>
>> When is MooseFS 3.0 expected to supplant 2.0 and become the "stable" version?
>>
>> Best regards.
>>
>> --- mike t.
From: Angus Y. 杨阳 <ang...@vi...> - 2015-10-27 03:02:48

Dear All,

I found that my MFS mount server shows the errors below in /var/log/messages. What happened, and how can I resolve it?

Oct 27 10:28:20 localhost mfsmount[26420]: file: 489, index: 1, chunk: 8590, version: 1 - writeworker: connection with (172.0.10.190:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1)
Oct 27 10:33:28 localhost mfsmount[26420]: file: 489, index: 1, chunk: 8590, version: 1 - writeworker: connection with (172.0.10.190:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1)
Oct 27 10:33:29 localhost mfsmount[26420]: file: 489, index: 1, chunk: 8590, version: 1 - writeworker: connection with (172.0.10.190:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 2)
Oct 27 10:43:59 localhost mfsmount[26420]: file: 489, index: 1, chunk: 8590, version: 1 - writeworker: connection with (172.0.10.190:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1)
Oct 27 10:50:18 localhost mfsmount[26420]: file: 489, index: 1, chunk: 8590, version: 1 - writeworker: connection with (172.0.10.190:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1)
Oct 27 10:51:30 localhost mfsmount[26420]: file: 489, index: 1, chunk: 8590, version: 1 - writeworker: connection with (172.0.10.190:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1)
Oct 27 10:55:48 localhost mfsmount[26420]: file: 489, index: 1, chunk: 8590, version: 1 - writeworker: connection with (172.0.10.190:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1)
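When writeworker resets pile up like this, it can help to tally them per chunkserver peer to see whether one server is the common factor. A sketch that counts resets per peer address from syslog lines in the exact format shown above (adjust the regex if your mfsmount version logs differently):

```python
import re
from collections import Counter

# Matches the peer "ip:port" in mfsmount writeworker reset messages.
PEER_RE = re.compile(
    r"writeworker: connection with \(([\d.]+:\d+)\) was reset by peer"
)

def count_resets(log_lines):
    # Tally how many reset-by-peer events each chunkserver produced.
    peers = Counter()
    for line in log_lines:
        m = PEER_RE.search(line)
        if m:
            peers[m.group(1)] += 1
    return peers

sample = [
    "Oct 27 10:28:20 localhost mfsmount[26420]: file: 489, index: 1, "
    "chunk: 8590, version: 1 - writeworker: connection with "
    "(172.0.10.190:9422) was reset by peer / ZEROREAD "
    "(unfinished writes: 1; try counter: 1)",
    "Oct 27 10:33:28 localhost mfsmount[26420]: file: 489, index: 1, "
    "chunk: 8590, version: 1 - writeworker: connection with "
    "(172.0.10.190:9422) was reset by peer / ZEROREAD "
    "(unfinished writes: 2; try counter: 1)",
]
print(count_resets(sample))
```

In the log above every reset points at 172.0.10.190:9422, which suggests checking that particular chunkserver's health and network path first.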
From: Krzysztof K. <krz...@mo...> - 2015-10-26 06:32:21
|
Dear Edward, Unfortunately MooseFS does not support distributed inotify. Chris > On 24 Oct 2015, at 20:08, Edward Ned Harvey (moosefs) <mo...@ne...> wrote: > > Does MooseFS support distributed inotify, or anything similar? > In other words, if several clients are all "watching" some directory, and something changes in that directory, can they all be notified? |
From: Edward N. H. (moosefs) <mo...@ne...> - 2015-10-24 19:42:57
|
Does MooseFS support distributed inotify, or anything similar? In other words, if several clients are all "watching" some directory, and something changes in that directory, can they all be notified? |
From: Piotr R. K. <pio...@mo...> - 2015-10-23 14:53:16
|
For MooseFS-Users information: > On 23 Oct 2015, at 10:19 am, che...@st... wrote: > > Thanks for your reply. > After reading your suggestion I checked my network configuration and found that the firewall had not been stopped. I stopped the firewall and restarted the MooseFS services, and the problem was solved. > > Thanks very much. > Best regards, > che...@st... Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> |
From: Piotr R. K. <pio...@mo...> - 2015-10-23 02:22:59
|
Hi, your problem probably lies in the network configuration - please check that ports 9419-9425 are open between all MooseFS components (mfsmaster, mfschunkserver(s) and clients (mounts)). By the way - you can also install MooseFS from our official repository, which makes package management much easier. You can find instructions on how to add and use the repo here: https://moosefs.com/download/centosfedorarhel.html Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 23 Oct 2015, at 3:17 am, che...@st... wrote: > > Hello, we installed MooseFS v2.0 on CentOS. The network environment is as follows: > 172.18.70.1 (CentOS 6, master server) > 172.18.70.2 (CentOS 6, configured as a DNS server, MooseFS client) > 172.18.70.3 (CentOS 6, chunkserver) > 172.18.70.4 (CentOS 6, MooseFS client) > > In mfsmaster.cfg we kept the default values and changed only the listen address (MATOCS_LISTEN_HOST = 172.18.70.1). > In mfschunkserver.cfg we kept the default values and changed only the master address (MASTER_HOST = 172.18.70.1). > > On 172.18.70.4 we installed FUSE v2.8.3.4 and mounted MooseFS at /mnt/mfs. We can create directories, but we cannot write any data into files: created files appear by name only, with size zero. Copying a file with cp blocks, and the file appears on MooseFS with no data. > > che...@st... |
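The port check suggested in this reply can be scripted: attempt a TCP connection to each port in the 9419-9425 range on the host in question. A sketch only - the timeout and host are illustrative, and a refused or timed-out connection may mean a firewall rather than a stopped service:

```python
# Probe TCP reachability of the MooseFS port range (9419-9425) on a host.
import socket

MOOSEFS_PORTS = range(9419, 9426)

def check_ports(host: str, timeout: float = 1.0) -> dict:
    """Return {port: True if a TCP connect succeeded, else False}."""
    status = {}
    for port in MOOSEFS_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                status[port] = True
        except OSError:
            status[port] = False
    return status
```

Running it from each client and chunkserver against the master (and from clients against each chunkserver) quickly shows whether a firewall is blocking part of the range.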
From: Piotr R. K. <pio...@mo...> - 2015-10-23 02:18:41
|
Hi, > I'm using Debian Wheezy and fuse-utils wasn't installed. After installing the missing package, it can mount without error. And you're right that _netdev doesn't matter. Excellent! > Thanks for your support. No problem. Feel free to write to our mailing list in the future. I also invite you to subscribe to the MooseFS-Users mailing list at https://lists.sourceforge.net/lists/listinfo/moosefs-users. Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 23 Oct 2015, at 4:12 am, 18...@gm... wrote: > > Hi, > > I'm using Debian Wheezy and fuse-utils wasn't installed. After installing the missing package, it can mount without error. And you're right that _netdev doesn't matter. > > Thanks for your support. |
From: <18...@gm...> - 2015-10-23 02:12:43
|
Hi, I'm using Debian Wheezy and fuse-utils wasn't installed. After installing the missing package, it can mount without error. And you're right that _netdev doesn't matter. Thanks for your support. On Thu, Oct 22, 2015 at 5:19 PM, Piotr Robert Konopelko <pio...@mo...> wrote: > Hello, > > what Linux distribution are you using? > > Do you have the package “fuse-utils” installed? The lack of this package is probably what causes the problem. > > The entry should look like this: > > mfsmount /mnt/mfs fuse defaults 0 0 > > There’s no need to set mfsmaster=mfsmaster and mfsport=9421, since these are the default values. If they differ in your environment, include them, e.g.: > > mfsmount /mnt/mfs fuse defaults,mfsmaster=mymaster.xyz.lan,mfsport=9921 0 0 > > As far as I remember, there’s no need to use _netdev now. > > Best regards, > -- > Piotr Robert Konopelko > MooseFS Technical Support Engineer | moosefs.com > > On 22 Oct 2015, at 4:56 am, 18...@gm... wrote: > > I'm using MooseFS 2.0.77 but I can't put mfsmount in /etc/fstab > > # cat /etc/fstab | grep mfsmount > mfsmount /mnt/mfs fuse mfsmaster=mfsmaster,mfsport=9421,_netdev 0 0 > > # mount /mnt/mfs > mount: wrong fs type, bad option, bad superblock on mfsmount, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program) > In some cases useful info is found in syslog - try dmesg | tail or so > > Then I have to manually run mfsmount. > > What can I do to fix this? > Thanks, |
From: <che...@st...> - 2015-10-23 01:27:28
|
Hello, we installed MooseFS v2.0 on CentOS. The network environment is as follows:

172.18.70.1 (CentOS 6, master server)
172.18.70.2 (CentOS 6, configured as a DNS server, MooseFS client)
172.18.70.3 (CentOS 6, chunkserver)
172.18.70.4 (CentOS 6, MooseFS client)

In mfsmaster.cfg we kept the default values and changed only the listen address (MATOCS_LISTEN_HOST = 172.18.70.1). In mfschunkserver.cfg we kept the default values and changed only the master address (MASTER_HOST = 172.18.70.1). On 172.18.70.4 we installed FUSE v2.8.3.4 and mounted MooseFS at /mnt/mfs. We can create directories, but we cannot write any data into files: created files appear by name only, with size zero. Copying a file with cp blocks, and the file appears on MooseFS with no data. che...@st... |
From: Piotr R. K. <pio...@mo...> - 2015-10-22 10:19:14
|
Hello, what Linux distribution are you using? Do you have the package “fuse-utils” installed? The lack of this package is probably what causes the problem.

The entry should look like this:

mfsmount /mnt/mfs fuse defaults 0 0

There’s no need to set mfsmaster=mfsmaster and mfsport=9421, since these are the default values. If they differ in your environment, include them, e.g.:

mfsmount /mnt/mfs fuse defaults,mfsmaster=mymaster.xyz.lan,mfsport=9921 0 0

As far as I remember, there’s no need to use _netdev now.

Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 22 Oct 2015, at 4:56 am, 18...@gm... wrote: > > I'm using MooseFS 2.0.77 but I can't put mfsmount in /etc/fstab > > # cat /etc/fstab | grep mfsmount > mfsmount /mnt/mfs fuse mfsmaster=mfsmaster,mfsport=9421,_netdev 0 0 > > # mount /mnt/mfs > mount: wrong fs type, bad option, bad superblock on mfsmount, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program) > In some cases useful info is found in syslog - try dmesg | tail or so > > Then I have to manually run mfsmount. > > What can I do to fix this? > Thanks, |
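Before relying on the fstab entry, it can help to confirm the pieces the mount needs: the mfsmount helper on the PATH and the fuse device node. A hedged sketch (package names vary by distribution, e.g. "fuse-utils" on Debian Wheezy):

```python
# Report whether the prerequisites for mounting via /etc/fstab are present:
# the mfsmount binary and /dev/fuse (provided by the fuse kernel module).
import os
import shutil

def mount_prereqs() -> dict:
    """Return a dict of boolean checks for the fstab-based mfsmount."""
    return {
        "mfsmount_in_path": shutil.which("mfsmount") is not None,
        "dev_fuse_exists": os.path.exists("/dev/fuse"),
    }
```

If either check is False, `mount /mnt/mfs` will fail with the "wrong fs type, bad option, bad superblock" error quoted in this thread, regardless of what the fstab line says.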
From: <18...@gm...> - 2015-10-22 02:56:24
|
I'm using MooseFS 2.0.77, but I can't put mfsmount in /etc/fstab:

# cat /etc/fstab | grep mfsmount
mfsmount /mnt/mfs fuse mfsmaster=mfsmaster,mfsport=9421,_netdev 0 0

# mount /mnt/mfs
mount: wrong fs type, bad option, bad superblock on mfsmount,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so

Then I have to run mfsmount manually. What can I do to fix this? Thanks, |
From: Aleksander W. <ale...@mo...> - 2015-10-16 13:45:42
|
Hello, we are currently working on a native MooseFS Windows client. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> |