From: Piotr R. K. <pio...@mo...> - 2016-02-01 16:48:47

Hi Alex,

Did the upgrade to 3.0.71 resolve the issue? Sorry for the delay, but we can test it on Thursday. If upgrading resolves the issue, there is probably nothing to test :)

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail : pio...@mo...
www : https://moosefs.com

From: Alex C. <ac...@in...> - 2016-01-28 20:08:49

Hi Piotr,

I am using the latest CentOS (equivalent to RHEL 7.2) and the chunkservers were on the latest repo from zfsonlinux.org. For now I'm running a git pull of ZoL due to a fix in the L2ARC eviction code that caused cache eviction to use 100% CPU. The kernel is what comes with the OS, no custom stuff.

I've tuned the kernel on the masters and chunkservers like this:

vm.vfs_cache_pressure = 1000
vm.swappiness = 25

to reduce thrashing to swap.

We have no plans to ever move away from CentOS, RHEL, or any other tracking derivative. We're a small, low-risk financial intermediary with good profits, so we see no reason to risk anything else. MooseFS gives us a layer of protection over the ZoL code, which gives us the benefit of compression, checksums and flexible usage of raw disks without the worry of RAID controllers going out of date.

Target usage is Windows clients accessing CIFS via Samba+CTDB, and backup from this cluster to another, both clusters split over sites about 2-3 km apart, so network latency is well below disk.

I hope this is enough info for you!

Many thanks

Alex

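The two sysctl values above can be applied at runtime and persisted across reboots; a minimal sketch for a CentOS 7 host, where the file name under /etc/sysctl.d/ is an arbitrary choice and not something taken from the thread:

# apply Alex's VM tuning immediately (values taken from the message above)
sysctl -w vm.vfs_cache_pressure=1000
sysctl -w vm.swappiness=25

# persist the settings across reboots; the file name is arbitrary
cat > /etc/sysctl.d/90-mfs-tuning.conf <<'EOF'
vm.vfs_cache_pressure = 1000
vm.swappiness = 25
EOF
sysctl --system   # re-read all sysctl configuration files
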
From: Piotr R. K. <pio...@mo...> - 2016-01-28 19:32:39

Hi Alex,

> I saw that just now. I seriously love you guys, your responsiveness is amazing.

It's really nice to read this :) Thanks!

Could you let us know which OS you use on the Master Server, so we can try to reproduce it in a similar environment?

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

From: Alex C. <ac...@in...> - 2016-01-28 19:21:05

I saw that just now. I seriously love you guys, your responsiveness is amazing.

I've upgraded both machines and am just waiting for a scan to finish.

Brilliant!

Alex

From: Piotr R. K. <pio...@mo...> - 2016-01-28 17:18:44

Dear Alex,

It may be connected with a segfault fixed recently:

https://moosefs.com/documentation/changes-in-moosefs-3-0.html

MooseFS 3.0.71-1 (2016-01-21)
* (master) fixed emptying trash issue (intr. in 3.0.64)
* (master) fixed possible segfault in chunkservers database (intr. in 3.0.67)
* (master) changed trash part choice from nondeterministic to deterministic

We can check it tomorrow; for now please upgrade to 3.0.71-1 and let us know if the problem persists. We recommend always using the newest version (especially if you use the 3.0/unstable/testing branch).

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

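A rough sketch of the upgrade Piotr suggests, assuming a CentOS 7 host installed from the official MooseFS yum repository with the stock package names (moosefs-master and friends); adjust to however the packages were actually installed:

# check what is installed (Alex was on 3.0.69)
rpm -qa | grep -i moosefs

# pull 3.0.71-1 from the configured MooseFS repository
yum clean metadata
yum update "moosefs-*"

# restart the master so the fixed binary is actually running
# (use the packaged systemd unit instead, if one is installed)
mfsmaster restart
mfsmaster info
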
From: Alex C. <ac...@in...> - 2016-01-28 15:40:00

Hi,

I have a pair of servers running 3.0.69 and under high load (many rsync processes) the master (running on one of the servers) is segfaulting:

[1044721.599157] mfsmaster[18388]: segfault at 15 ip 00000000004415ac sp 00007ffdf4df0f70 error 4 in mfsmaster[400000+9a000]
[1107293.383944] mfsmaster[18907]: segfault at 15 ip 00000000004415ac sp 00007ffed9db6780 error 4 in mfsmaster[400000+9a000]

Any ideas of how to fix this?

Many thanks

Alex

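Before upgrading, it can be worth capturing a core dump and a backtrace from one of these crashes; a sketch under the assumption that mfsmaster runs as root, was started from a shell, and lives at /usr/sbin/mfsmaster (paths, limits and the core pattern are illustrative, not from the thread):

# allow core dumps for processes started from this shell, and pick a location
ulimit -c unlimited
echo '/var/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern

# after the next segfault, extract a backtrace from the core file
# (readable symbols require a debug build or matching debuginfo)
gdb -batch -ex 'bt full' /usr/sbin/mfsmaster /var/tmp/core.mfsmaster.<pid>
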
From: Piotr R. K. <pio...@mo...> - 2016-01-27 16:51:41

Dear MooseFS Users,

today we published the newest version from the 3.x branch: 3.0.71. Please find the changes since 3.0.69 below:

MooseFS 3.0.71-1 (2016-01-21)
* (master) fixed emptying trash issue (intr. in 3.0.64)
* (master) fixed possible segfault in chunkservers database (intr. in 3.0.67)
* (master) changed trash part choice from nondeterministic to deterministic

MooseFS 3.0.70-1 (2016-01-19)
* (cgi+cli) fixed displaying info when there are no active masters (intr. in 3.0.67)
* (mount+common) refactoring code to be Windows ready
* (mount) added option 'mfsflattrash' (makes trash look like before version 3.0.64)
* (mount) added fixes for NetBSD (patch contributed by Tom Ivar Helbekkmo)

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

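The 'mfsflattrash' entry above is a client-side mount option; a quick illustration of how it might be passed to mfsmount, where the mount point and the master host name are placeholders:

# mount MooseFS with the pre-3.0.64 flat trash layout
mfsmount /mnt/mfs -H mfsmaster -o mfsflattrash
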
From: Piotr R. K. <pio...@mo...> - 2016-01-25 11:29:36

Hi (Cześć) Leszek,

> Just one question - is this undocumented feature with < and > also available in LizardFS, which is an open source clone of MooseFS?

No, this feature is not available in LizardFS.

And MooseFS is Open Source software released under GPL-2. You can download the sources here: https://moosefs.com/download/sources.html

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

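For anyone following the sources link, MooseFS builds with a standard autotools flow; a hedged sketch in which the tarball name/version and the install prefix are only examples:

# unpack a source tarball downloaded from https://moosefs.com/download/sources.html
tar xzf moosefs-3.0.71-1.tar.gz     # example file name
cd moosefs-3.0.71

# standard autotools build; ./configure --help lists component switches
./configure --prefix=/usr/local
make
make install
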
From: Piotr R. K. <pio...@mo...> - 2016-01-25 09:52:02

Hello Wolfgang,

> So when my drive has no more chunks on it, is it safe to comment out the line in mfshdd.cfg?

Yes, when all chunks are moved out from this drive (in the MFS CGI, the "chunks" column next to this drive shows 0), you can safely comment this drive out in mfshdd.cfg.

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

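A small sketch of that final step on the chunkserver, using the drive paths that appear elsewhere in this thread and the default config location /etc/mfs/mfshdd.cfg:

# /etc/mfs/mfshdd.cfg once the CGI shows 0 chunks on the retired drive:
#
#   #/data/disc1    <- retired drive, now commented out
#   /data/disc2
#   /data/disc3

# make the running chunkserver re-read mfshdd.cfg
mfschunkserver reload
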
From: Tom I. H. <ti...@ha...> - 2016-01-23 10:30:04

Piotr Robert Konopelko <pio...@mo...> writes:

> we applied the [NetBSD] patches you sent, they are available in
> sources since MFS 3.0.70 (not released yet).

Cool! :)

In case someone needs them, here are the startup scripts I use for
running MooseFS under NetBSD:

# This is a shell archive.  Save it in a file, remove anything before
# this line, and then unpack it by entering "sh file".  Note, it may
# create directories; files and directories will be owned by you and
# have default permissions.
#
# This archive contains:
#
# mfscgiserv
# mfschunkserver
# mfsmaster
# mfsmetalogger
#
echo x - mfscgiserv
sed 's/^X//' >mfscgiserv << 'END-of-mfscgiserv'
X#!/bin/sh
X
X# PROVIDE: mfscgiserv
X# REQUIRE: DAEMON
X# KEYWORD: shutdown
X#
X# Add the following to /etc/rc.conf to enable mfscgiserv:
X#
X# mfscgiserv="YES"
X#
X
Xif [ -f /etc/rc.subr ]; then
X    . /etc/rc.subr
Xfi
X
Xname="mfscgiserv"
Xrcvar=$name
Xcommand="/usr/local/sbin/mfscgiserv"
X
Xstart_cmd="mfs_doit start"
Xstop_cmd="mfs_doit stop"
Xrestart_cmd="mfs_doit restart"
Xtest_cmd="mfs_doit test"
X
Xmfs_doit()
X{
X    ${command} $*
X}
X
Xif [ -f /etc/rc.subr ]; then
X    load_rc_config $name
X    run_rc_command "$1"
Xelse
X    echo -n " ${name}"
X    ${command} $*
Xfi
END-of-mfscgiserv
echo x - mfschunkserver
sed 's/^X//' >mfschunkserver << 'END-of-mfschunkserver'
X#!/bin/sh
X
X# PROVIDE: mfschunkserver
X# REQUIRE: DAEMON
X# KEYWORD: shutdown
X#
X# Add the following to /etc/rc.conf to enable mfschunkserver:
X#
X# mfschunkserver="YES"
X#
X
Xif [ -f /etc/rc.subr ]; then
X    . /etc/rc.subr
Xfi
X
Xname="mfschunkserver"
Xrcvar=$name
Xcommand="/usr/local/sbin/mfschunkserver"
X
Xstart_cmd="mfs_doit start"
Xstop_cmd="mfs_doit stop"
Xrestart_cmd="mfs_doit restart"
Xreload_cmd="mfs_doit reload"
Xinfo_cmd="mfs_doit info"
Xtest_cmd="mfs_doit test"
Xkill_cmd="mfs_doit kill"
X
Xmfs_doit()
X{
X    ${command} $*
X}
X
Xif [ -f /etc/rc.subr ]; then
X    load_rc_config $name
X    run_rc_command "$1"
Xelse
X    echo -n " ${name}"
X    ${command} $*
Xfi
END-of-mfschunkserver
echo x - mfsmaster
sed 's/^X//' >mfsmaster << 'END-of-mfsmaster'
X#!/bin/sh
X
X# PROVIDE: mfsmaster
X# REQUIRE: DAEMON
X# KEYWORD: shutdown
X#
X# Add the following to /etc/rc.conf to enable mfsmaster:
X#
X# mfsmaster="YES"
X#
X
Xif [ -f /etc/rc.subr ]; then
X    . /etc/rc.subr
Xfi
X
Xname="mfsmaster"
Xrcvar=$name
Xcommand="/usr/local/sbin/mfsmaster"
X
Xstart_cmd="mfs_doit start"
Xstop_cmd="mfs_doit stop"
Xrestart_cmd="mfs_doit restart"
Xreload_cmd="mfs_doit reload"
Xinfo_cmd="mfs_doit info"
Xtest_cmd="mfs_doit test"
Xkill_cmd="mfs_doit kill"
X
Xmfs_doit()
X{
X    ${command} $*
X}
X
Xif [ -f /etc/rc.subr ]; then
X    load_rc_config $name
X    run_rc_command "$1"
Xelse
X    echo -n " ${name}"
X    ${command} $*
Xfi
END-of-mfsmaster
echo x - mfsmetalogger
sed 's/^X//' >mfsmetalogger << 'END-of-mfsmetalogger'
X#!/bin/sh
X
X# PROVIDE: mfsmetalogger
X# REQUIRE: DAEMON
X# KEYWORD: shutdown
X#
X# Add the following to /etc/rc.conf to enable mfsmetalogger:
X#
X# mfsmetalogger="YES"
X#
X
Xif [ -f /etc/rc.subr ]; then
X    . /etc/rc.subr
Xfi
X
Xname="mfsmetalogger"
Xrcvar=$name
Xcommand="/usr/local/sbin/mfsmetalogger"
X
Xstart_cmd="mfs_doit start"
Xstop_cmd="mfs_doit stop"
Xrestart_cmd="mfs_doit restart"
Xreload_cmd="mfs_doit reload"
Xinfo_cmd="mfs_doit info"
Xtest_cmd="mfs_doit test"
Xkill_cmd="mfs_doit kill"
X
Xmfs_doit()
X{
X    ${command} $*
X}
X
Xif [ -f /etc/rc.subr ]; then
X    load_rc_config $name
X    run_rc_command "$1"
Xelse
X    echo -n " ${name}"
X    ${command} $*
Xfi
END-of-mfsmetalogger
exit

-tih
--
Elections cannot be allowed to change anything.  --Dr. Wolfgang Schäuble

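A short usage sketch for Tom's scripts; /etc/rc.d and the rc.conf variables follow standard NetBSD conventions, the shar file name is arbitrary, and which daemons to enable depends on the host's role:

# unpack the shell archive above into the four rc scripts
sh moosefs-rc.shar

# install them as NetBSD rc.d scripts
cp mfsmaster mfschunkserver mfsmetalogger mfscgiserv /etc/rc.d/
chmod 755 /etc/rc.d/mfsmaster /etc/rc.d/mfschunkserver /etc/rc.d/mfsmetalogger /etc/rc.d/mfscgiserv

# enable and start only what this host should run, e.g. a master:
echo 'mfsmaster="YES"' >> /etc/rc.conf
/etc/rc.d/mfsmaster start
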
From: Wolfgang <moo...@wo...> - 2016-01-22 10:14:04

Hi Piotr Robert!

I think using "<" moves the data - my marked drive is slowly decreasing in chunk count. Syslog also says so:

---
move chunk /data/disc1/mfschunk/ -> /data/disc2/mfschunk/ (chunk: 0000000000026440_00000001)
---

So when my drive has no more chunks on it, is it safe to comment out the line in mfshdd.cfg, or should this drive then be marked with * and I should wait for "READY" in the MFS CGI?

Thanks for your always detailed answers!

Greetings
Wolfgang

From: Piotr R. K. <pio...@mo...> - 2016-01-21 23:03:50

Hi again,

> On 21 Jan 2016, at 11:55 PM, Piotr Robert Konopelko <pio...@mo...> wrote:
>
>> or either
>> * /data/disc1
>> < /data/disc1
>
> Yes

Small errata one more time ;) To make it clear: of course you shouldn't duplicate a drive entry like above, so e.g. the following example mfshdd.cfg is right:

* /data/disc1
< /data/disc2
> /data/disc3
> /data/disc4
/data/disc5

(Chunks from /data/disc1 are replicated to anywhere in the cluster, chunks from /data/disc2 are moved to /data/disc3 or /data/disc4, and chunks on /data/disc5 are used normally in the cluster and not touched.)

One more note :) The thing you observed and described in your first mail in this topic (that when you use *, chunks are not replicated to other drives on the same chunkserver) is normal and correct. One of MooseFS's principles is not to store more than one copy of a chunk on the same machine (for data safety, of course).

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

From: Piotr R. K. <pio...@mo...> - 2016-01-21 22:55:53

Hi Wolfgang,

> On 21 Jan 2016, at 10:55 PM, Wolfgang <moo...@wo...> wrote:
>
> Hi Piotr Robert!
>
> When using < or > should I use the * also?

No, you shouldn't use both * and (< or >).

> like:
> <* /data/disc1
> when disc1 should be removed in near future?

No

> or either
> * /data/disc1
> < /data/disc1

Yes

> So to make this clear:
> data from discs marked with * will be moved anywhere in the cluster

Yes

> data from discs marked with < will be moved to other discs (or, if set, only to the ">" marked discs) on the same chunkserver

Yes

> So I guess using * or < but not both - am I right?

Yes :)

One more note: when you use * before a disk path, data (chunks) are not moved, they are replicated, so an extra copy of the chunks from a disk marked for removal with * is made by MFS. (You can see a lot of overgoal chunks in blue on the "all chunks state matrix" in the MFS CGI, and when you disconnect such a drive once it is "ready", no undergoal chunks should be visible - only most of the overgoal chunks should disappear from the chunks matrix.)

I'm not sure at this moment, but as far as I remember, when you use "<" (and/or ">"), data (chunks) are not replicated, they are moved to another drive (or drives) on the same machine.

I haven't used / tested this feature ("<" and ">") for a long time, so I'm not sure here whether it replicates the data or moves it, but as I said above, it probably moves. (I can run a test when I'm at the office - most probably tomorrow afternoon, so you'll probably be faster than me :)

> Thank you & Greetings
> Wolfgang

Best,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

From: Wolfgang <moo...@wo...> - 2016-01-21 21:55:47

Hi Piotr Robert!

When using < or >, should I use the * also? Like:

<* /data/disc1

when disc1 should be removed in the near future? Or either:

* /data/disc1
< /data/disc1

So to make this clear:
data from discs marked with * will be moved anywhere in the cluster
data from discs marked with < will be moved to other discs (or, if set, only to the ">" marked discs) on the same chunkserver

So I guess using * or < but not both - am I right?

Thank you & Greetings
Wolfgang

From: Piotr R. K. <pio...@mo...> - 2016-01-21 16:35:22

Hi Wolfgang,

small errata:

> Chunks are replicated to another Chunkservers

Should be: "Tells MooseFS to replicate chunks from such drive to another".

Greetings from also-snowy Poland :)

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

From: Piotr R. K. <pio...@mo...> - 2016-01-21 16:32:10

Marking a disk for removal just informs MooseFS that:

1. It should not write new chunks to this drive (as you observed)
2. Chunks are replicated to other Chunkservers

There is a feature available to achieve what you want, but it is not documented yet (it will be).

You can migrate chunks from one (or more) drive(s) on a chunkserver to other drive(s) on the same chunkserver.

If you put the sign '>' (without quotes, of course) before a specific path in mfshdd.cfg, that disk is marked as a target/destination for migration. If you put the sign '<', that disk is marked as a source for migration.

If you define more than one source or destination drive in mfshdd.cfg, all of them will be grouped - source ones and destination ones respectively. If you mark only source drive(s), all others will be treated as targets (so in your case - mark only one as a source), and similarly, if you mark only target drive(s), all others will be treated as sources.

Target drives can be filled up to 99% max (it is a hard-coded constant; maybe it will be configurable for specific HDD(s) in the future).

And one more note: if any drive is marked as a source or target drive, the space usage equalisation algorithm is disabled.

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

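Applied to Wolfgang's case (drain one disk to the two remaining disks on the same chunkserver), mfshdd.cfg could look like the sketch below; the paths mirror the thread's examples and /etc/mfs/mfshdd.cfg is the usual location:

# /etc/mfs/mfshdd.cfg - '<' marks the source disk to drain;
# the unmarked disks on this chunkserver become the migration targets
< /data/disc1
/data/disc2
/data/disc3

After editing the file, running "mfschunkserver reload" makes the running chunkserver pick up the change without a restart.
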
From: Wolfgang <moo...@wo...> - 2016-01-21 16:02:09
|
Hi List!

In my cluster I have a chunkserver with 3 discs mounted to separate directories, showing as 3 lines in the "Disks" overview.

When I mark one of these discs for removal (by prepending a * in mfshdd.cfg and reloading), the disc gets the status "marked for removal (not ready)" and doesn't get any new chunks, which is what I expected.

When I now watch the chunks being copied so this disc can be removed, it seems to me that the 2 other discs of this chunkserver (or the whole chunkserver) don't get any new chunks either? Why is this? Wouldn't it be fastest to move the chunks from the "marked for removal" disc to the other discs on the same chunkserver?

Thanks for the clarification.

Greetings from snowy Austria!
Wolfgang
|
From: Piotr R. K. <pio...@mo...> - 2016-01-21 14:19:55
|
Dear Tom,

we applied the patches you sent; they are available in the sources since MFS 3.0.70 (not released yet).

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>

> On 14 Jan 2016, at 2:46 PM, Tom Ivar Helbekkmo <ti...@ha...> wrote:
>
> Piotr Robert Konopelko <pio...@mo...> writes:
>
>> (mount) use direct I/O as a default mode on Mac OS X (due to
>> keep_cache bug in kernel/fuse)
>> [...]
>> (mount) use direct I/O as a default mode on FreeBSD (due to keep_cache
>> bug in kernel/fuse)
>
> Do you have any more information on this? And maybe a way to test
> whether one has this bug? I'm running MooseFS 3 on NetBSD, and I'm
> wondering whether I ought to do the same thing there. If the problem is
> in the kernel, as opposed to the userland FUSE software, we probably
> don't have it...
>
> Meanwhile, here's a couple of NetBSD patches I use. The /proc file
> system in NetBSD is modeled on the one in Linux, but is slightly
> different: under NetBSD, 'cat /proc/*/status' is a sort of ps. ;)
>
> -tih
>
> --- mfsmount/getgroups.c.orig	2016-01-13 18:37:01.000000000 +0100
> +++ mfsmount/getgroups.c	2016-01-14 08:59:16.000000000 +0100
> @@ -45,11 +45,15 @@
>  static int keep_alive;
>
>  uint32_t get_groups(pid_t pid,gid_t gid,uint32_t **gidtab) {
> -#if defined(__linux__)
> +#if defined(__linux__) || defined(__NetBSD__)
>  // Linux - supplementary groups are in file:
>  // /proc/<PID>/status
>  // line:
>  // Groups: <GID1> <GID2> <GID3> ...
> +//
> +// NetBSD - supplementary groups are in file:
> +// /proc/<PID>/status
> +// as comma separated list of gids at end of (single) line.
>  	char proc_filename[50];
>  	char linebuff[4096];
>  	char *ptr;
> @@ -67,11 +71,18 @@
>  		return 1;
>  	}
>  	while (fgets(linebuff,4096,fd)) {
> +#if defined(__NetBSD__)
> +		if ((ptr = strrchr(linebuff, ' '))) {
> +			if (strlen(linebuff) > (2 * strlen(ptr) + 8)) {
> +				sprintf(linebuff, "Groups: %s", ptr);
> +			}
> +		}
> +#endif
>  		if (strncmp(linebuff,"Groups:",7)==0) {
>  			gcount = 1;
>  			ptr = linebuff+7;
>  			do {
> -				while (*ptr==' ' || *ptr=='\t') {
> +				while (*ptr==' ' || *ptr=='\t' || *ptr==',') {
>  					ptr++;
>  				}
>  				if (*ptr>='0' && *ptr<='9') {
> @@ -80,14 +91,14 @@
>  					gcount++;
>  				}
>  			}
> -			} while (*ptr==' ' || *ptr=='\t');
> +			} while (*ptr==' ' || *ptr=='\t' || *ptr==',');
>  			*gidtab = malloc(sizeof(uint32_t)*gcount);
>  			passert(*gidtab);
>  			(*gidtab)[0] = gid;
>  			n = 1;
>  			ptr = linebuff+7;
>  			do {
> -				while (*ptr==' ' || *ptr=='\t') {
> +				while (*ptr==' ' || *ptr=='\t' || *ptr==',') {
>  					ptr++;
>  				}
>  				if (*ptr>='0' && *ptr<='9') {
> @@ -97,7 +108,7 @@
>  					n++;
>  				}
>  			}
> -			} while ((*ptr==' ' || *ptr=='\t') && n<gcount);
> +			} while ((*ptr==' ' || *ptr=='\t' || *ptr==',') && n<gcount);
>  			fclose(fd);
>  			return n;
>  		}
> --- mfsmount/sustained_inodes.c.orig	2016-01-12 08:20:34.000000000 +0100
> +++ mfsmount/sustained_inodes.c	2016-01-14 09:08:00.000000000 +0100
> @@ -130,7 +130,7 @@
>  }
>
>
> -#if defined(__linux__)
> +#if defined(__linux__) || defined(__NetBSD__)
>  #include <dirent.h>
>  #elif defined(__APPLE__)
>  #include <sys/types.h>
> @@ -150,7 +150,7 @@
>  	uint64_t inode;
>  //	printf("pid: %d\n",ki->ki_pid);
>
> -#if defined(__linux__)
> +#if defined(__linux__) || defined(__NetBSD__)
>  	char path[100];
>  	struct stat st;
>  	snprintf(path,100,"/proc/%lld/cwd",(long long int)pid);
> @@ -251,7 +251,7 @@
>  }
>
>  void sinodes_all_pids(void) {
> -#if defined(__linux__)
> +#if defined(__linux__) || defined(__NetBSD__)
>  	DIR *dd;
>  	struct dirent *de,*destorage;
>  	const char *np;
>
>
> --
> Elections cannot be allowed to change anything.  --Dr. Wolfgang Schäuble
|
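For anyone following along, the procfs difference the patch compensates for can be seen directly on both systems. This is only an illustrative sketch (it assumes procfs is mounted at /proc; output details vary by kernel version):

  # Linux: supplementary groups appear on a dedicated "Groups:" line
  grep '^Groups:' /proc/$$/status

  # NetBSD: /proc/<PID>/status is a single ps-like line, and the
  # supplementary group IDs are the comma-separated list at its end
  cat /proc/$$/status

On NetBSD the patched parser first rewrites that single status line so the trailing comma-separated gid list looks like a Linux "Groups:" line, which is why the only other change needed was accepting ',' as a separator.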
From: Piotr R. K. <pio...@mo...> - 2016-01-21 13:55:09
|
Dear Michael, as long as "try counter" < 5, you can ignore this message, it is only an information, that write was not successful (and has been re-tried). Regarding > (10.255.230.42:9422) error: EPIPE (Broken pipe) / NEGWRITE (unfinished writes: 1; try counter: 1) please check your network, because it looks like some network issue. Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 21 Jan 2016, at 11:41 AM, Michael Suhanov <mas...@li...> wrote: > > > I have a problem on the mfsmount server > he show that error in /var/log/message > > > Jan 21 11:48:17 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - readworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (received: 0/4096; try counter: 1) > Jan 21 11:48:19 imap2 mfsmount[4108]: file: 3240052, index: 0, chunk: 19955390, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:48:46 imap2 mfsmount[4108]: file: 3240307, index: 0, chunk: 19955639, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:48:56 imap2 mfsmount[4108]: file: 3239249, index: 0, chunk: 19954619, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:48:57 imap2 mfsmount[4108]: file: 3238059, index: 0, chunk: 19953349, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:48:57 imap2 mfsmount[4108]: file: 3239249, index: 0, chunk: 19954619, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:49:07 imap2 mfsmount[4108]: file: 3240544, index: 0, chunk: 19955855, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:49:24 imap2 mfsmount[4108]: file: 3181237, index: 0, chunk: 5569545, version: 69 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:49:33 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:49:42 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:49:44 imap2 mfsmount[4108]: file: 3240929, index: 0, chunk: 19956213, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:49:44 imap2 mfsmount[4108]: file: 3240930, index: 0, chunk: 19956214, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) > Jan 21 11:50:02 imap2 mfsmount[4108]: file: 3241111, index: 0, chunk: 19956382, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) > Jan 21 11:50:10 imap2 mfsmount[4108]: file: 3241169, index: 0, chunk: 19956437, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD 
(unfinished writes: 1; try counter: 1) > Jan 21 11:50:10 imap2 mfsmount[4108]: file: 3241172, index: 0, chunk: 19956440, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) > Jan 21 11:50:11 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:50:12 imap2 mfsmount[4108]: file: 3241198, index: 0, chunk: 19956465, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) > Jan 21 11:50:24 imap2 mfsmount[4108]: file: 3241328, index: 0, chunk: 19956587, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:50:33 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:50:42 imap2 mfsmount[4108]: file: 3241490, index: 0, chunk: 19956741, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:50:56 imap2 mfsmount[4108]: file: 3241548, index: 0, chunk: 19956798, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) > Jan 21 11:51:10 imap2 mfsmount[4108]: file: 2206886, index: 0, chunk: 19956978, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) > Jan 21 11:51:18 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:51:26 imap2 mfsmount[4108]: file: 3241548, index: 0, chunk: 19956798, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:51:34 imap2 mfsmount[4108]: file: 3241548, index: 0, chunk: 19956798, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:51:34 imap2 mfsmount[4108]: file: 3241719, index: 0, chunk: 19957201, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / POLLHUP (unfinished writes: 2; try counter: 1) > Jan 21 11:51:46 imap2 mfsmount[4108]: file: 3241717, index: 0, chunk: 19957199, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:51:55 imap2 mfsmount[4108]: file: 2205974, index: 0, chunk: 19956909, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:51:56 imap2 mfsmount[4108]: file: 2205974, index: 0, chunk: 19956909, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 2) > Jan 21 11:52:06 imap2 mfsmount[4108]: file: 3241717, index: 0, chunk: 19957199, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:52:23 imap2 mfsmount[4108]: file: 3240927, index: 0, chunk: 19956212, version: 1 - writeworker: 
connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:52:24 imap2 mfsmount[4108]: file: 3242168, index: 0, chunk: 19957632, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) > Jan 21 11:52:34 imap2 mfsmount[4108]: file: 3241548, index: 0, chunk: 19956798, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:52:34 imap2 mfsmount[4108]: file: 1989091, index: 0, chunk: 3122942, version: 25 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) > Jan 21 11:52:43 imap2 mfsmount[4108]: file: 3241717, index: 0, chunk: 19957199, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:52:49 imap2 mfsmount[4108]: file: 3242404, index: 0, chunk: 19957863, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) > Jan 21 11:52:49 imap2 mfsmount[4108]: file: 3242405, index: 0, chunk: 19957864, version: 1 - writeworker: write to (10.255.230.42:9422) error: EPIPE (Broken pipe) / NEGWRITE (unfinished writes: 1; try counter: 1) > > > what could be the problem? > > > Centos 6.7 x64 > kernel 2.6.32-573.12.1.el6.x86_64 > moosefs 2.0.83 versions > > Regards, > Michael Suhanov > mas...@li... > ------------------------------------------------------------------------------ > Site24x7 APM Insight: Get Deep Visibility into Application Performance > APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month > Monitor end-to-end web transactions and take corrective actions now > Troubleshoot faster and improve end-user experience. Signup Now! > http://pubads.g.doubleclick.net/gampad/clk?id=267308311&iu=/4140_________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
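Building on the rule of thumb above (retries with a "try counter" below 5 are harmless noise), one way to surface only the worrying entries is to filter the client log for higher counters. This is just a sketch; it assumes the exact message format shown in this thread and the /var/log/messages location from Michael's CentOS setup:

  # show only mfsmount retry messages that have already failed 5 or more times
  grep 'mfsmount' /var/log/messages | grep -E 'try counter: ([5-9]|[0-9]{2,})\)'

Anything that shows up here, or a steady stream of EPIPE / "reset by peer" lines between the same client and chunkserver, is worth correlating with network-level checks (interface errors, switch logs) on the path between the client and the chunkservers.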
From: Michael S. <mas...@li...> - 2016-01-21 10:41:59
|
I have a problem on the mfsmount server he show that error in /var/log/message Jan 21 11:48:17 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - readworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (received: 0/4096; try counter: 1) Jan 21 11:48:19 imap2 mfsmount[4108]: file: 3240052, index: 0, chunk: 19955390, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:48:46 imap2 mfsmount[4108]: file: 3240307, index: 0, chunk: 19955639, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:48:56 imap2 mfsmount[4108]: file: 3239249, index: 0, chunk: 19954619, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:48:57 imap2 mfsmount[4108]: file: 3238059, index: 0, chunk: 19953349, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:48:57 imap2 mfsmount[4108]: file: 3239249, index: 0, chunk: 19954619, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:49:07 imap2 mfsmount[4108]: file: 3240544, index: 0, chunk: 19955855, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:49:24 imap2 mfsmount[4108]: file: 3181237, index: 0, chunk: 5569545, version: 69 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:49:33 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:49:42 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:49:44 imap2 mfsmount[4108]: file: 3240929, index: 0, chunk: 19956213, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:49:44 imap2 mfsmount[4108]: file: 3240930, index: 0, chunk: 19956214, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) Jan 21 11:50:02 imap2 mfsmount[4108]: file: 3241111, index: 0, chunk: 19956382, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) Jan 21 11:50:10 imap2 mfsmount[4108]: file: 3241169, index: 0, chunk: 19956437, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:50:10 imap2 mfsmount[4108]: file: 3241172, index: 0, chunk: 19956440, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) Jan 21 11:50:11 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:50:12 imap2 mfsmount[4108]: file: 3241198, index: 0, chunk: 19956465, version: 1 - 
writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) Jan 21 11:50:24 imap2 mfsmount[4108]: file: 3241328, index: 0, chunk: 19956587, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:50:33 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:50:42 imap2 mfsmount[4108]: file: 3241490, index: 0, chunk: 19956741, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:50:56 imap2 mfsmount[4108]: file: 3241548, index: 0, chunk: 19956798, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) Jan 21 11:51:10 imap2 mfsmount[4108]: file: 2206886, index: 0, chunk: 19956978, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) Jan 21 11:51:18 imap2 mfsmount[4108]: file: 3239955, index: 0, chunk: 19955295, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:51:26 imap2 mfsmount[4108]: file: 3241548, index: 0, chunk: 19956798, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:51:34 imap2 mfsmount[4108]: file: 3241548, index: 0, chunk: 19956798, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:51:34 imap2 mfsmount[4108]: file: 3241719, index: 0, chunk: 19957201, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / POLLHUP (unfinished writes: 2; try counter: 1) Jan 21 11:51:46 imap2 mfsmount[4108]: file: 3241717, index: 0, chunk: 19957199, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:51:55 imap2 mfsmount[4108]: file: 2205974, index: 0, chunk: 19956909, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:51:56 imap2 mfsmount[4108]: file: 2205974, index: 0, chunk: 19956909, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 2) Jan 21 11:52:06 imap2 mfsmount[4108]: file: 3241717, index: 0, chunk: 19957199, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:52:23 imap2 mfsmount[4108]: file: 3240927, index: 0, chunk: 19956212, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:52:24 imap2 mfsmount[4108]: file: 3242168, index: 0, chunk: 19957632, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) Jan 21 11:52:34 imap2 mfsmount[4108]: file: 3241548, index: 0, chunk: 19956798, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:52:34 imap2 mfsmount[4108]: file: 1989091, index: 
0, chunk: 3122942, version: 25 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 2; try counter: 1) Jan 21 11:52:43 imap2 mfsmount[4108]: file: 3241717, index: 0, chunk: 19957199, version: 1 - writeworker: connection with (10.255.230.42:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:52:49 imap2 mfsmount[4108]: file: 3242404, index: 0, chunk: 19957863, version: 1 - writeworker: connection with (10.255.230.41:9422) was reset by peer / ZEROREAD (unfinished writes: 1; try counter: 1) Jan 21 11:52:49 imap2 mfsmount[4108]: file: 3242405, index: 0, chunk: 19957864, version: 1 - writeworker: write to (10.255.230.42:9422) error: EPIPE (Broken pipe) / NEGWRITE (unfinished writes: 1; try counter: 1) what could be the problem? Centos 6.7 x64 kernel 2.6.32-573.12.1.el6.x86_64 moosefs 2.0.83 versions Regards, Michael Suhanov mas...@li... |
From: Piotr R. K. <pio...@mo...> - 2016-01-20 16:41:42
|
Dear Philipp,

I am sorry for the late reply. We had a problem with our repository at that time; it was fixed the same day.

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>

> On 08 Jan 2016, at 5:43 PM, Kaiser, Philipp <ph...@fr...> wrote:
>
> Dear Sir or Madam,
>
> I just tried to install MooseFS on one of our new servers, and it seems that the stable repo contains the package for moosefs-client 3.0.64-1. We use Ubuntu Precise and this package source:
>
> deb http://ppa.moosefs.com/stable/apt/ubuntu/precise <http://ppa.moosefs.com/stable/apt/ubuntu/precise> precise main
>
> I also tried this, with the same effect:
>
> deb http://ppa.moosefs.com/apt/ubuntu/precise <http://ppa.moosefs.com/apt/ubuntu/precise> precise main
>
> When I try to mount a share provided by our 2.x MFS Master, I get:
>
> incompatible mfsmaster version
>
> I was looking for an announcement of the stable release for this version, but I couldn't find any. Also, there seems to be no documentation for the upgrade from 2.x to 3.x. Is it possible that version 3 was accidentally put into the stable branch?
>
> Kind regards,
>
> Philipp
> ------------------------------------------------------------------------------
> Site24x7 APM Insight: Get Deep Visibility into Application Performance
> APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month
> Monitor end-to-end web transactions and take corrective actions now
> Troubleshoot faster and improve end-user experience. Signup Now!
> http://pubads.g.doubleclick.net/gampad/clk?id=267308311&iu=/4140
> _________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users
|
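After a repository hiccup like this, it can be worth confirming what version the configured repo would actually install before pulling packages. A hedged example for an Ubuntu/Debian client like Philipp's, using only standard APT commands (nothing MooseFS-specific):

  sudo apt-get update
  apt-cache policy moosefs-client   # shows the candidate version and which repo it comes from

If the candidate version unexpectedly jumps a major release (e.g. from 2.0.x to 3.0.x), holding off until the master has been upgraded or the repo entry corrected avoids exactly the "incompatible mfsmaster version" error Philipp quotes above.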
From: Ricardo J. B. <ric...@do...> - 2016-01-15 14:28:50
|
On Friday 15/01/2016, Aleksander Wieliczko wrote:
> Hi,
> I would like to inform you that the fuse package is needed only when we
> want to mount mfsclient from fstab.
> To use mfsmount from the command line, fuse-libs is enough.

Ah, that makes sense.

> We will discuss your request with our developers and give you feedback.

Thanks!

> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com <moosefs.com>
>
> On 01/14/2016 11:54 PM, Ricardo J. Barberis wrote:
> > Hello,
> >
> > I noticed that the moosefs-client rpm depends on fuse-libs, but on CentOS
> > 6 and 7 mfsmount doesn't work till I install fuse.
> >
> > Could that package be added as a Requires in moosefs-client's .spec file?
> >
> > Regards,

--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
_____
Confidentiality Note: This message and any attachments (the message) are confidential and intended solely for the addressees. Any unauthorised use or dissemination is prohibited by DonWeb.com. DonWeb.com shall not be liable for the message if altered or falsified. If you are not the intended addressee of this message, please cancel it immediately and inform the sender.
|
From: Wilson, S. M <st...@pu...> - 2016-01-15 13:57:39
|
Thanks, Aleksander. I'm glad to hear that you were able to duplicate the problem and are working to fix it.

Best regards,
Steve

On Fri, 2016-01-15 at 12:29 +0100, Aleksander Wieliczko wrote:

Hi,
First of all, we would like to thank you for reporting this issue.
We can confirm that the problem appears in mfsmount 2.0.72 and even in the latest release, 2.0.83.
The problem does not occur in MooseFS mount 3.0.69.
Right now we are working to solve this problem.
We will inform you about the progress of our work.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com <moosefs.com>

On 01/14/2016 11:37 PM, Wilson, Steven M wrote:

Hi,

When I attempt to fetch an SVN repository using the "git svn fetch" command, it always fails when writing to a MooseFS volume. At some point during the process I get a "checksum mismatch" error like the following:

Checksum mismatch: testmails/positive/348
expected: 42723d0ae0353368e9adb648da2eb6bc
got: 637d201f8f22d9d71ba12bf7f39f14c8

I've tried this on two different MooseFS volumes from different servers (running MooseFS v. 2.0.72) with the same results. If I fetch to a local disk formatted in XFS, though, I don't get a checksum error. These are the commands that initially demonstrated the problem for me:

git svn init http://emg.nysbc.org/svn/myami -s
git svn fetch

But I've since tested it with Spamassassin and had the same results:

git svn init --prefix=origin/ -s https://svn.apache.org/repos/asf/spamassassin
git svn fetch

Could someone try this out on their own MooseFS installation to see if it also gives checksum errors? Is this a known bug?

Thanks!

Steve
|
From: Aleksander W. <ale...@mo...> - 2016-01-15 12:15:59
|
Hi,
I would like to inform you that the fuse package is needed only when we want to mount mfsclient from fstab.
To use mfsmount from the command line, fuse-libs is enough.
We will discuss your request with our developers and give you feedback.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com <moosefs.com>

On 01/14/2016 11:54 PM, Ricardo J. Barberis wrote:
> Hello,
>
> I noticed that the moosefs-client rpm depends on fuse-libs, but on CentOS 6
> and 7 mfsmount doesn't work till I install fuse.
>
> Could that package be added as a Requires in moosefs-client's .spec file?
>
> Regards,
|
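Ricardo's request boils down to one extra dependency line in the RPM packaging. The snippet below is only a hypothetical sketch of how such a requirement is usually expressed in a .spec file; the real moosefs-client spec is organised differently, and the package names shown are simply the CentOS ones discussed in this thread:

  # hypothetical excerpt from a moosefs-client .spec file
  Name:           moosefs-client
  Summary:        MooseFS client (mfsmount)
  Requires:       fuse-libs
  # pulling in the fuse package as well provides the userspace helpers
  # (fusermount, mount.fuse) needed for fstab-based mounts
  Requires:       fuse

Until the packaging changes, the same effect can be had manually on the client host with: yum install fuse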
From: Aleksander W. <ale...@mo...> - 2016-01-15 11:29:09
|
Hi,
First of all, we would like to thank you for reporting this issue.
We can confirm that the problem appears in mfsmount 2.0.72 and even in the latest release, 2.0.83.
The problem does not occur in MooseFS mount 3.0.69.
Right now we are working to solve this problem.
We will inform you about the progress of our work.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com <moosefs.com>

On 01/14/2016 11:37 PM, Wilson, Steven M wrote:
> Hi,
>
> When I attempt to fetch an SVN repository using the "git svn fetch"
> command, it always fails when writing to a MooseFS volume. At some
> point during the process I get a "checksum mismatch" error like the
> following:
> Checksum mismatch: testmails/positive/348
> expected: 42723d0ae0353368e9adb648da2eb6bc
> got: 637d201f8f22d9d71ba12bf7f39f14c8
>
> I've tried this on two different MooseFS volumes from different
> servers (running MooseFS v. 2.0.72) with the same results. If I fetch
> to a local disk formatted in XFS, though, I don't get a checksum
> error. These are the commands that initially demonstrated the problem
> for me:
> git svn init http://emg.nysbc.org/svn/myami -s
> git svn fetch
>
> But I've since tested it with Spamassassin and had the same results:
> git svn init --prefix=origin/ -s
> https://svn.apache.org/repos/asf/spamassassin
> git svn fetch
>
> Could someone try this out on their own MooseFS installation to see if
> it also gives checksum errors? Is this a known bug?
>
> Thanks!
>
> Steve
>
|