You can subscribe to this list here.
2009 |
Jan
|
Feb
|
Mar
|
Apr
|
May
|
Jun
|
Jul
|
Aug
|
Sep
|
Oct
|
Nov
|
Dec
(4) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
2010 |
Jan
(20) |
Feb
(11) |
Mar
(11) |
Apr
(9) |
May
(22) |
Jun
(85) |
Jul
(94) |
Aug
(80) |
Sep
(72) |
Oct
(64) |
Nov
(69) |
Dec
(89) |
2011 |
Jan
(72) |
Feb
(109) |
Mar
(116) |
Apr
(117) |
May
(117) |
Jun
(102) |
Jul
(91) |
Aug
(72) |
Sep
(51) |
Oct
(41) |
Nov
(55) |
Dec
(74) |
2012 |
Jan
(45) |
Feb
(77) |
Mar
(99) |
Apr
(113) |
May
(132) |
Jun
(75) |
Jul
(70) |
Aug
(58) |
Sep
(58) |
Oct
(37) |
Nov
(51) |
Dec
(15) |
2013 |
Jan
(28) |
Feb
(16) |
Mar
(25) |
Apr
(38) |
May
(23) |
Jun
(39) |
Jul
(42) |
Aug
(19) |
Sep
(41) |
Oct
(31) |
Nov
(18) |
Dec
(18) |
2014 |
Jan
(17) |
Feb
(19) |
Mar
(39) |
Apr
(16) |
May
(10) |
Jun
(13) |
Jul
(17) |
Aug
(13) |
Sep
(8) |
Oct
(53) |
Nov
(23) |
Dec
(7) |
2015 |
Jan
(35) |
Feb
(13) |
Mar
(14) |
Apr
(56) |
May
(8) |
Jun
(18) |
Jul
(26) |
Aug
(33) |
Sep
(40) |
Oct
(37) |
Nov
(24) |
Dec
(20) |
2016 |
Jan
(38) |
Feb
(20) |
Mar
(25) |
Apr
(14) |
May
(6) |
Jun
(36) |
Jul
(27) |
Aug
(19) |
Sep
(36) |
Oct
(24) |
Nov
(15) |
Dec
(16) |
2017 |
Jan
(8) |
Feb
(13) |
Mar
(17) |
Apr
(20) |
May
(28) |
Jun
(10) |
Jul
(20) |
Aug
(3) |
Sep
(18) |
Oct
(8) |
Nov
|
Dec
(5) |
2018 |
Jan
(15) |
Feb
(9) |
Mar
(12) |
Apr
(7) |
May
(123) |
Jun
(41) |
Jul
|
Aug
(14) |
Sep
|
Oct
(15) |
Nov
|
Dec
(7) |
2019 |
Jan
(2) |
Feb
(9) |
Mar
(2) |
Apr
(9) |
May
|
Jun
|
Jul
(2) |
Aug
|
Sep
(6) |
Oct
(1) |
Nov
(12) |
Dec
(2) |
2020 |
Jan
(2) |
Feb
|
Mar
|
Apr
(3) |
May
|
Jun
(4) |
Jul
(4) |
Aug
(1) |
Sep
(18) |
Oct
(2) |
Nov
|
Dec
|
2021 |
Jan
|
Feb
(3) |
Mar
|
Apr
|
May
|
Jun
|
Jul
(6) |
Aug
|
Sep
(5) |
Oct
(5) |
Nov
(3) |
Dec
|
2022 |
Jan
|
Feb
|
Mar
(3) |
Apr
|
May
|
Jun
|
Jul
|
Aug
|
Sep
|
Oct
|
Nov
|
Dec
|
From: Piotr R. K. <pio...@ge...> - 2019-02-05 13:12:31
|
Hi Tom, We have changed humanize_number() name in MooseFS sources in order not to collide with NetBSD :) Best, Piotr Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 20 Jan 2019, at 6:59 PM, Tom Ivar Helbekkmo via moosefs-users <moo...@li...> wrote: > > I run MooseFS on NetBSD, and while the new mfsmetadirinfo is a nice > addition, it doesn't compile cleanly for me out of the box. The reason > is that NetBSD, like the other BSDs, has a humanize_number() already, so > when you create your own function by that name, there's a collision. > > You haven't observed this on FreeBSD, because they want you to link with > libutil, and #include <libutil.h> in your source code, to get access to > this function. In NetBSD, it's in the standard C library, and stdlib.h. > > Our humanize_number() originated in NetBSD, but has been adopted by the > other BSDs, and is also available on Linux, where you link with libbsd, > and #include <bsd/stdlib.h>, to get access to it. It works exactly the > same way on all these systems. > > I'm appending the change I've made locally - if you'd like to do > something similar in the official distribution, to avoid maintaining > your own humanize_number() in the future, you might apply this, or > something like it, with the addition of #ifdef bits to add the right > #include directives for your supported operating systems. > > Alternatively, if you'd rather keep your own implementation, consider > this a polite request to change its name so it doesn't clash with the > existing humanize_number() function... ;) > > My local modification: > > diff --git a/mfsmetatools/mfsmetadirinfo.c b/mfsmetatools/mfsmetadirinfo.c > index 66cffe5..672e186 100644 > --- a/mfsmetatools/mfsmetadirinfo.c > +++ b/mfsmetatools/mfsmetadirinfo.c > @@ -14,68 +14,22 @@ > //static uint8_t humode=0; > //static uint8_t numbermode=0; > > +// For the humanized representation of numbers, decide how many > +// digits will be used before moving to the next scale factor. > +// The value 4 here means you'll get up to "9999 MiB" before it > +// changes to "10 GiB", while 3 jumps from "999 MiB" to "1 GiB". > +// The magic '3' in the calculation below is for a space, a > +// scaling letter, and a terminating 0 byte for the result. 
> +#define HUMAN_DIGITS 4 > +#define HUMAN_SUFFIX "iB" > +#define HUMAN_LENGTH (HUMAN_DIGITS+3+strlen(HUMAN_SUFFIX)) > + > enum { > STATUS_OK = 0, > STATUS_ENOENT = 1, > STATUS_ANY = 2 > }; > > -#define PHN_USESI 0x01 > -#define PHN_USEIEC 0x00 > -char* humanize_number(uint64_t number,uint8_t flags) { > - static char numbuf[6]; // [ "xxx" , "xx" , "x" , "x.x" ] + ["" , "X" , "Xi"] > - uint64_t divisor; > - uint16_t b; > - uint8_t i; > - uint8_t scale; > - > - if (flags & PHN_USESI) { > - divisor = 1000; > - } else { > - divisor = 1024; > - } > - if (number>(UINT64_MAX/100)) { > - number /= divisor; > - number *= 100; > - scale = 1; > - } else { > - number *= 100; > - scale = 0; > - } > - while (number>=99950) { > - number /= divisor; > - scale+=1; > - } > - i=0; > - if (number<995 && scale>0) { > - b = ((uint32_t)number + 5) / 10; > - numbuf[i++]=(b/10)+'0'; > - numbuf[i++]='.'; > - numbuf[i++]=(b%10)+'0'; > - } else { > - b = ((uint32_t)number + 50) / 100; > - if (b>=100) { > - numbuf[i++]=(b/100)+'0'; > - b%=100; > - } > - if (b>=10 || i>0) { > - numbuf[i++]=(b/10)+'0'; > - b%=10; > - } > - numbuf[i++]=b+'0'; > - } > - if (scale>0) { > - if (flags&PHN_USESI) { > - numbuf[i++]="-kMGTPE"[scale]; > - } else { > - numbuf[i++]="-KMGTPE"[scale]; > - numbuf[i++]='i'; > - } > - } > - numbuf[i++]='\0'; > - return numbuf; > -} > - > typedef struct _metasection { > off_t offset; > uint64_t length; > @@ -633,6 +587,7 @@ int calc_dirinfos(FILE *fd) { > > void print_result_plain(FILE *ofd) { > dirinfostate *dis; > + char numbuf[16]; > fprintf(ofd,"------------------------------\n"); > for (dis = dishead ; dis!=NULL ; dis=dis->next) { > fprintf(ofd,"path: %s\n",dis->path); > @@ -643,11 +598,16 @@ void print_result_plain(FILE *ofd) { > fprintf(ofd,"chunks: %"PRIu64"\n",liset_card(dis->chunk_liset)); > fprintf(ofd," keep chunks: %"PRIu64"\n",dis->s.kchunks); > fprintf(ofd," arch chunks: %"PRIu64"\n",dis->s.achunks); > - fprintf(ofd,"length: %"PRIu64" = %5sB\n",dis->s.length,humanize_number(dis->s.length,PHN_USEIEC)); > - fprintf(ofd,"size: %"PRIu64" = %5sB\n",dis->s.size,humanize_number(dis->s.size,PHN_USEIEC)); > - fprintf(ofd,"keep size: %"PRIu64" = %5sB\n",dis->s.keeprsize,humanize_number(dis->s.keeprsize,PHN_USEIEC)); > - fprintf(ofd,"arch size: %"PRIu64" = %5sB\n",dis->s.archrsize,humanize_number(dis->s.archrsize,PHN_USEIEC)); > - fprintf(ofd,"real size: %"PRIu64" = %5sB\n",dis->s.rsize,humanize_number(dis->s.rsize,PHN_USEIEC)); > + humanize_number(numbuf,HUMAN_LENGTH,dis->s.length,HUMAN_SUFFIX,HN_AUTOSCALE,0); > + fprintf(ofd,"length: %"PRIu64" = %s\n",dis->s.length,numbuf); > + humanize_number(numbuf,HUMAN_LENGTH,dis->s.size,HUMAN_SUFFIX,HN_AUTOSCALE,0); > + fprintf(ofd,"size: %"PRIu64" = %s\n",dis->s.size,numbuf); > + humanize_number(numbuf,HUMAN_LENGTH,dis->s.keeprsize,HUMAN_SUFFIX,HN_AUTOSCALE,0); > + fprintf(ofd,"keep size: %"PRIu64" = %s\n",dis->s.keeprsize,numbuf); > + humanize_number(numbuf,HUMAN_LENGTH,dis->s.archrsize,HUMAN_SUFFIX,HN_AUTOSCALE,0); > + fprintf(ofd,"arch size: %"PRIu64" = %s\n",dis->s.archrsize,numbuf); > + humanize_number(numbuf,HUMAN_LENGTH,dis->s.rsize,HUMAN_SUFFIX,HN_AUTOSCALE,0); > + fprintf(ofd,"real size: %"PRIu64" = %s\n",dis->s.rsize,numbuf); > } else { > fprintf(ofd,"path not found !!!\n"); > } > > -tih > -- > Most people who graduate with CS degrees don't understand the significance > of Lisp. Lisp is the most important idea in computer science. --Alan Kay > _________________________________________ > moosefs-users mailing list > moo...@li... 
> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Tom I. H. <ti...@ha...> - 2019-01-20 18:24:54
|
I run MooseFS on NetBSD, and while the new mfsmetadirinfo is a nice addition, it doesn't compile cleanly for me out of the box. The reason is that NetBSD, like the other BSDs, has a humanize_number() already, so when you create your own function by that name, there's a collision. You haven't observed this on FreeBSD, because they want you to link with libutil, and #include <libutil.h> in your source code, to get access to this function. In NetBSD, it's in the standard C library, and stdlib.h. Our humanize_number() originated in NetBSD, but has been adopted by the other BSDs, and is also available on Linux, where you link with libbsd, and #include <bsd/stdlib.h>, to get access to it. It works exactly the same way on all these systems. I'm appending the change I've made locally - if you'd like to do something similar in the official distribution, to avoid maintaining your own humanize_number() in the future, you might apply this, or something like it, with the addition of #ifdef bits to add the right #include directives for your supported operating systems. Alternatively, if you'd rather keep your own implementation, consider this a polite request to change its name so it doesn't clash with the existing humanize_number() function... ;) My local modification: |
From: Piotr R. K. <pio...@ge...> - 2019-01-17 17:29:58
|
Hi Alex, We have updated ports for MooseFS 3.0.103 and raised a new bug 235028 <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235028> on FreeBSD Bugzilla in order to have them merged with ports tree. Hope it will happen soon :) See also corresponding change on GitHub <https://github.com/moosefs/moosefs-freebsd-ports/commit/3c30528ef6498cd51743791724ea9812a6e0183f>. Hope it helps! Thanks, Piotr Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 26 Dec 2018, at 10:05 AM, Alexander AKHOBADZE <ba...@ya...> wrote: > > Hi! > > Yes. You are right. 3.0.103 :--) > > Great news! Thanks! > > > On 26.12.2018 8:02, Piotr Robert Konopelko wrote: >> Hi Alex, >> I think you meant 3.0.103. Yes, it will be prepared in coming days and posted to FreeBSD bugzilla in order to be added to FreeBSD ports tree. >> >> Best regards, >> Peter >> >> Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> >> Business & Technical Support Manager >> MooseFS Client Support Team >> >> WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> >> >>> On 25 Dec 2018, at 8:04 AM, Alexander AKHOBADZE <ba...@ya... <mailto:ba...@ya...>> wrote: >>> >>> Hi dear developers! >>> >>> Let me ask will you make a FreeBSD port for current 2.0.103 MooseFS version? >>> >>> >>> WBR >>> Alexander >>> >>> >>> >>> _________________________________________ >>> moosefs-users mailing list >>> moo...@li... <mailto:moo...@li...> >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >> > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Alexander A. <ba...@ya...> - 2018-12-26 09:05:34
|
Hi! Yes. You are right. 3.0.103 :--) Great news! Thanks! On 26.12.2018 8:02, Piotr Robert Konopelko wrote: > Hi Alex, > I think you meant 3.0.103. Yes, it will be prepared in coming days and > posted to FreeBSD bugzilla in order to be added to FreeBSD ports tree. > > Best regards, > Peter > > *Piotr Robert Konopelko*| m:+48 601 476 440 | e: > pio...@mo... <mailto:pio...@mo...> > *Business & Technical Support Manager* > MooseFS Client Support Team > > WWW <http://moosefs.com/> | GitHub > <https://github.com/moosefs/moosefs> | Twitter > <https://twitter.com/moosefs> | Facebook > <https://www.facebook.com/moosefs> | LinkedIn > <https://www.linkedin.com/company/moosefs> > >> On 25 Dec 2018, at 8:04 AM, Alexander AKHOBADZE <ba...@ya... >> <mailto:ba...@ya...>> wrote: >> >> Hi dear developers! >> >> Let me ask will you make a FreeBSD port for current 2.0.103 MooseFS >> version? >> >> >> WBR >> Alexander >> >> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> <mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Piotr R. K. <pio...@ge...> - 2018-12-26 05:02:51
|
Hi Alex, I think you meant 3.0.103. Yes, it will be prepared in coming days and posted to FreeBSD bugzilla in order to be added to FreeBSD ports tree. Best regards, Peter Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 25 Dec 2018, at 8:04 AM, Alexander AKHOBADZE <ba...@ya...> wrote: > > Hi dear developers! > > Let me ask will you make a FreeBSD port for current 2.0.103 MooseFS version? > > > WBR > Alexander > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Alexander A. <ba...@ya...> - 2018-12-25 07:04:51
|
Hi dear developers! Let me ask will you make a FreeBSD port for current 2.0.103 MooseFS version? WBR Alexander |
From: Piotr R. K. <pio...@ge...> - 2018-12-22 04:00:27
|
Hi Mike, > Other clients, who are still in a previous version on moosefs-client, do not exhibit this. I was thinking on downgrading the moosefs-client package on the affected host, but they are not available for download anymore in ppa.moosefs.com <http://ppa.moosefs.com/>. Is there any place where I can download it? They are available – the instruction is here: https://moosefs.com/download/#older <https://moosefs.com/download/#older>. In two words just replace "moosefs-3" string in your repo URL with desired version number, e.g. "3.0.101" Best regards, Peter Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 22 Dec 2018, at 2:19 AM, Michael Tinsay <mic...@ho...> wrote: > > Hi, > > Since I upgraded my mfs set (master, metalogger, chunkservers) to the the latest (3.0.103), I've noticed an increased frequency of the following entries in the logs: > > Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: chunkserver disconnected - ip: 10.77.77.103 / port: 9422, usedspace: 16885627727872 (15725.97 GiB), totalspace: 19796317044736 (18436.76 GiB) > Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: chunkserver disconnected - ip: 10.77.77.102 / port: 9422, usedspace: 16800836857856 (15647.00 GiB), totalspace: 18827063713792 (17534.07 GiB) > Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: chunkserver disconnected - ip: 10.77.77.101 / port: 9422, usedspace: 16997673164800 (15830.32 GiB), totalspace: 19796319059968 (18436.76 GiB) > Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: connection with client(ip:10.77.77.112) has been closed by peer > Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: child finished > > And every time this happens, the following happens on my busiest mfsmount client: > > Dec 21 10:00:21 KVM02 mfsmount[2822]: master: connection lost (data) > Dec 21 10:00:21 KVM02 mfsmount[2840]: master: connection lost (data) > Dec 21 10:00:21 KVM02 mfsmount[2822]: registered to master > Dec 21 10:00:21 KVM02 mfsmount[2840]: registered to master > Dec 21 10:00:22 KVM02 mfsmount[2840]: file: 4315203, index: 20 - fs_writechunk returned status: No space left > Dec 21 10:00:22 KVM02 mfsmount[2840]: error writing file number 4315203: ENOSPC (No space left on device) > Dec 21 10:00:22 KVM02 mfsmount[2840]: file: 22817113, index: 28 - fs_writechunk returned status: No space left > Dec 21 10:00:22 KVM02 mfsmount[2840]: error writing file number 22817113: ENOSPC (No space left on device) > Dec 21 10:00:22 KVM02 mfsmount[2840]: error writing file number 4315203: ENOSPC (No space left on device) > Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 407 - fs_writechunk returned status: No space left > Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) > Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 460 - fs_writechunk returned status: No space left > Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 476 - fs_writechunk returned status: No space left > Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) > Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 47 - fs_writechunk returned status: No space left > Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing 
file number 23977380: ENOSPC (No space left on device) > Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 22895109, index: 48 - fs_writechunk returned status: No space left > Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 22895109, index: 269 - fs_writechunk returned status: No space left > Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 48 - fs_writechunk returned status: No space left > Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) > Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 341 - fs_writechunk returned status: No space left > Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 22895109: ENOSPC (No space left on device) > Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) > Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 22895109: ENOSPC (No space left on device) > Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) > Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 9939425, index: 34 - fs_writechunk returned status: No space left > Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 9939425: ENOSPC (No space left on device) > > This particular client is a KVM server with about 10 VM guests whose disk images are stored on a 'mfsmount'ed folder. The VM guests are then "paused" by the host and are not able to be manually unpaused/resumed (whether via GUI admin or cli command). > > Other clients, who are still in a previous version on moosefs-client, do not exhibit this. I was thinking on downgrading the moosefs-client package on the affected host, but they are not available for download anymore in ppa.moosefs.com <http://ppa.moosefs.com/>. Is there any place where I can download it? > > Best Regards, > > > --- mike t. > > _________________________________________ > moosefs-users mailing list > moo...@li... <mailto:moo...@li...> > https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> |
From: Michael T. <mic...@ho...> - 2018-12-22 03:46:54
|
Thanks Peter. I will try that. Happy Holidays!!! --- mike t. ________________________________ From: Piotr Robert Konopelko <pio...@ge...> Sent: Saturday, 22 December 2018 11:42 AM To: Michael Tinsay Cc: MooseFS-Users Subject: Re: [MooseFS-Users] Frequent disconnections to master since upgrade to 2.0.103 Hi Mike, Other clients, who are still in a previous version on moosefs-client, do not exhibit this. I was thinking on downgrading the moosefs-client package on the affected host, but they are not available for download anymore in ppa.moosefs.com<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fppa.moosefs.com%2F&data=02%7C01%7C%7Caee1e454defe4e75bea308d667bf7925%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C636810469400123108&sdata=dF9sUR1FCozrVfldCF3RI3NGgdtnteMA6Ivjj%2BI%2B%2Bbw%3D&reserved=0>. Is there any place where I can download it? They are available – the instruction is here: https://moosefs.com/download/#older<https://nam01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fmoosefs.com%2Fdownload%2F%23older&data=02%7C01%7C%7Caee1e454defe4e75bea308d667bf7925%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C636810469400123108&sdata=irgLVNdaHUk4hgnhLG9MUsClvxQA6E5fqNK7uJqpTUM%3D&reserved=0>. In two words just replace "moosefs-3" string in your repo URL with desired version number, e.g. "3.0.101" Best regards, Peter Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo...<mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmoosefs.com%2F&data=02%7C01%7C%7Caee1e454defe4e75bea308d667bf7925%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C636810469400123108&sdata=HTFucqtvfy0qArOf5aKeA%2FvTsUUu8lJGoi%2B7faUnD2o%3D&reserved=0> | GitHub<https://nam01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fmoosefs%2Fmoosefs&data=02%7C01%7C%7Caee1e454defe4e75bea308d667bf7925%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C636810469400123108&sdata=Hi6hKrtVcDZ08hO2cdAZjRaKpecjLtrqGL%2By72NKrew%3D&reserved=0> | Twitter<https://nam01.safelinks.protection.outlook.com/?url=https%3A%2F%2Ftwitter.com%2Fmoosefs&data=02%7C01%7C%7Caee1e454defe4e75bea308d667bf7925%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C636810469400123108&sdata=pZd16HwCW8bxYrx%2FvxxTRIlLZEUKRYC9Qj425RbOhqc%3D&reserved=0> | Facebook<https://nam01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.facebook.com%2Fmoosefs&data=02%7C01%7C%7Caee1e454defe4e75bea308d667bf7925%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C636810469400279362&sdata=ihwp227KF0LOe51aPOEsh%2FnbX5iR9y41J5LNT%2F6j5fk%3D&reserved=0> | LinkedIn<https://nam01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.linkedin.com%2Fcompany%2Fmoosefs&data=02%7C01%7C%7Caee1e454defe4e75bea308d667bf7925%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C636810469400279362&sdata=35o3by2Zz1xAOYizr9voG%2FuhgxRSa7MwtMH404W9XOY%3D&reserved=0> On 22 Dec 2018, at 2:19 AM, Michael Tinsay <mic...@ho...<mailto:mic...@ho...>> wrote: Hi, Since I upgraded my mfs set (master, metalogger, chunkservers) to the the latest (3.0.103), I've noticed an increased frequency of the following entries in the logs: Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: chunkserver disconnected - ip: 10.77.77.103 / port: 9422, usedspace: 16885627727872 (15725.97 GiB), totalspace: 19796317044736 (18436.76 GiB) Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: chunkserver disconnected - ip: 10.77.77.102 / port: 9422, usedspace: 16800836857856 (15647.00 GiB), totalspace: 18827063713792 (17534.07 
GiB) Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: chunkserver disconnected - ip: 10.77.77.101 / port: 9422, usedspace: 16997673164800 (15830.32 GiB), totalspace: 19796319059968 (18436.76 GiB) Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: connection with client(ip:10.77.77.112) has been closed by peer Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: child finished And every time this happens, the following happens on my busiest mfsmount client: Dec 21 10:00:21 KVM02 mfsmount[2822]: master: connection lost (data) Dec 21 10:00:21 KVM02 mfsmount[2840]: master: connection lost (data) Dec 21 10:00:21 KVM02 mfsmount[2822]: registered to master Dec 21 10:00:21 KVM02 mfsmount[2840]: registered to master Dec 21 10:00:22 KVM02 mfsmount[2840]: file: 4315203, index: 20 - fs_writechunk returned status: No space left Dec 21 10:00:22 KVM02 mfsmount[2840]: error writing file number 4315203: ENOSPC (No space left on device) Dec 21 10:00:22 KVM02 mfsmount[2840]: file: 22817113, index: 28 - fs_writechunk returned status: No space left Dec 21 10:00:22 KVM02 mfsmount[2840]: error writing file number 22817113: ENOSPC (No space left on device) Dec 21 10:00:22 KVM02 mfsmount[2840]: error writing file number 4315203: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 407 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 460 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 476 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 47 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 22895109, index: 48 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 22895109, index: 269 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 48 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 341 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 22895109: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 22895109: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 9939425, index: 34 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 9939425: ENOSPC (No space left on device) This particular client is a KVM server with about 10 VM guests whose disk images are stored on a 'mfsmount'ed folder. The VM guests are then "paused" by the host and are not able to be manually unpaused/resumed (whether via GUI admin or cli command). Other clients, who are still in a previous version on moosefs-client, do not exhibit this. 
I was thinking on downgrading the moosefs-client package on the affected host, but they are not available for download anymore in ppa.moosefs.com<https://nam01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fppa.moosefs.com%2F&data=02%7C01%7C%7Caee1e454defe4e75bea308d667bf7925%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C636810469400279362&sdata=quop3tPL3TcykdeDpZ8Gr4N9ytJILnap4HVPp%2FzYXd8%3D&reserved=0>. Is there any place where I can download it? Best Regards, --- mike t. _________________________________________ moosefs-users mailing list moo...@li...<mailto:moo...@li...> https://lists.sourceforge.net/lists/listinfo/moosefs-users<https://nam01.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.sourceforge.net%2Flists%2Flistinfo%2Fmoosefs-users&data=02%7C01%7C%7Caee1e454defe4e75bea308d667bf7925%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C636810469400279362&sdata=piwDsStoP%2F1IoFflxcw%2BDufIhBaokoJ861y8DqB%2F904%3D&reserved=0> |
From: Michael T. <mic...@ho...> - 2018-12-22 01:19:53
|
Hi, Since I upgraded my mfs set (master, metalogger, chunkservers) to the the latest (3.0.103), I've noticed an increased frequency of the following entries in the logs: Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: chunkserver disconnected - ip: 10.77.77.103 / port: 9422, usedspace: 16885627727872 (15725.97 GiB), totalspace: 19796317044736 (18436.76 GiB) Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: chunkserver disconnected - ip: 10.77.77.102 / port: 9422, usedspace: 16800836857856 (15647.00 GiB), totalspace: 18827063713792 (17534.07 GiB) Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: chunkserver disconnected - ip: 10.77.77.101 / port: 9422, usedspace: 16997673164800 (15830.32 GiB), totalspace: 19796319059968 (18436.76 GiB) Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: connection with client(ip:10.77.77.112) has been closed by peer Dec 21 10:00:21 HO-MFSMaster01 mfsmaster[848]: child finished And every time this happens, the following happens on my busiest mfsmount client: Dec 21 10:00:21 KVM02 mfsmount[2822]: master: connection lost (data) Dec 21 10:00:21 KVM02 mfsmount[2840]: master: connection lost (data) Dec 21 10:00:21 KVM02 mfsmount[2822]: registered to master Dec 21 10:00:21 KVM02 mfsmount[2840]: registered to master Dec 21 10:00:22 KVM02 mfsmount[2840]: file: 4315203, index: 20 - fs_writechunk returned status: No space left Dec 21 10:00:22 KVM02 mfsmount[2840]: error writing file number 4315203: ENOSPC (No space left on device) Dec 21 10:00:22 KVM02 mfsmount[2840]: file: 22817113, index: 28 - fs_writechunk returned status: No space left Dec 21 10:00:22 KVM02 mfsmount[2840]: error writing file number 22817113: ENOSPC (No space left on device) Dec 21 10:00:22 KVM02 mfsmount[2840]: error writing file number 4315203: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 407 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 460 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 476 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 47 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 22895109, index: 48 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 22895109, index: 269 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 48 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: file: 23977380, index: 341 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 22895109: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 22895109: ENOSPC (No space left on device) Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 23977380: ENOSPC (No space left on device) Dec 21 10:00:27 
KVM02 mfsmount[2840]: file: 9939425, index: 34 - fs_writechunk returned status: No space left Dec 21 10:00:27 KVM02 mfsmount[2840]: error writing file number 9939425: ENOSPC (No space left on device) This particular client is a KVM server with about 10 VM guests whose disk images are stored on a 'mfsmount'ed folder. The VM guests are then "paused" by the host and are not able to be manually unpaused/resumed (whether via GUI admin or cli command). Other clients, who are still in a previous version on moosefs-client, do not exhibit this. I was thinking on downgrading the moosefs-client package on the affected host, but they are not available for download anymore in ppa.moosefs.com. Is there any place where I can download it? Best Regards, --- mike t. |
From: Alexander A. <ba...@ya...> - 2018-12-20 09:55:15
|
Hi colleagues! My question is about mfsbdev - a new great feature appeared in v. 3.0.103 What is the right way to make /dev/nbd0 appear automatically after reboot? WBR Alexander |
From: Wilson, S. M <st...@pu...> - 2018-10-19 19:26:36
|
Hi Peter, Let me get some of this information to you. Ping results: Client to Master Server (which is also Chunkserver #1) 25 packets transmitted, 25 received, 0% packet loss, time 24581ms rtt min/avg/max/mdev = 0.082/0.108/0.147/0.023 ms Client to Chunkserver #2 25 packets transmitted, 25 received, 0% packet loss, time 24567ms rtt min/avg/max/mdev = 0.102/0.118/0.135/0.016 ms Client to Chunkserver #3 25 packets transmitted, 25 received, 0% packet loss, time 24556ms rtt min/avg/max/mdev = 0.104/0.126/0.153/0.014 ms Client to Chunkserver #4 25 packets transmitted, 25 received, 0% packet loss, time 24557ms rtt min/avg/max/mdev = 0.067/0.073/0.081/0.012 ms Three tests run at the same time from different clients (using the results from client #1): Create IOPS: 174 Create MB/s: 0.68 Read IOPS: 3166 Read MB/s: 12.37 Append IOPS: 1221 Append MB/s: 4.77 Rename files/s: 1158 tar -xf linux-4.9-rc3.tar: 824 secs Single test run on client #1: Create IOPS: 281 Create MB/s: 1.1 Read IOPS: 3351 Read MB/s: 13.09 Append IOPS: 1678 Append MB/s: 6.55 Rename files/s: 1319 tar -xf linux-4.9-rc3.tar: 937 secs (not sure why this took longer... perhaps due to an increase in other activity) These tests took place between 14:00 and 15:15 so you can see the related activity on the attached Master Charts images. Thanks! Steve ________________________________ From: Piotr Robert Konopelko <pio...@ge...> Sent: Friday, October 19, 2018 12:58 PM To: Wilson, Steven M Cc: moo...@li... Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files Hi Steve, what is the latency: Client <----> Master Server Client <----> Chunkservers? When we consider small files operations, latency becomes the most crucial parameter. Could you please provide us with some ping test results? Also, what are results (of single tests and summed up) if you run e.g. two, three, more such tests at the same time? Can you plz paste also some Master charts from the time of tests are being ran? Thank you, Best regards, Peter Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo...<mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW<http://moosefs.com/> | GitHub<https://github.com/moosefs/moosefs> | Twitter<https://twitter.com/moosefs> | Facebook<https://www.facebook.com/moosefs> | LinkedIn<https://www.linkedin.com/company/moosefs> On 19 Oct 2018, at 6:36 PM, Marco Milano <mar...@gm...<mailto:mar...@gm...>> wrote: Steve, My wild guess is that somehow the master server is not very efficient to handle that many files. (Obviously if this is the case, it is very bad.) I will do some tests with the 4.x series and 0.5 billion files and let you know.(it will take me several weeks to create that test environment.) In the meantime, you can split the namespace into two which may help on the same hardware. (Basically you can run as many namespaces as you want on the same hardware setup, however this will require a lot of work to setup and migrate, in this case, there will be two master server processes running on your master server hardware just at different ports) Or, just hope that the performance of the master server is better with version 4.x -- Marco On 10/19/18 11:10 AM, Wilson, Steven M wrote: Hi Diego, I appreciate you taking the time to run these tests on your own setup! 
My parameters to the smallfile benchmark were a little different (I took them from some GlusterFS benchmarking documentation): smallfile_cli.py --top smallfile-tests --threads 4 --file-size 4 --files 10000 --response-times Y Steve ------------------------------------------------------------------------ *From:* Remolina, Diego J <dij...@ae...<mailto:dij...@ae...>> *Sent:* Friday, October 19, 2018 7:19 AM *To:* MooseFS-Users *Subject:* Re: [MooseFS-Users] Performance suggestions for millions of small files Hi Steve, I have by no means a similar amount of files and space, as I am just testing, but this is what I see with MooseFS 4.6.0 and goal=3 on a pretty new (in testing phase, no load) 3-way server setup: time tar -xf linux-4.9-rc3.tar real 4m0.332s user 0m1.668s sys 0m9.517s python /tmp/smallfile/smallfile_cli.py --operation create --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test total threads = 8 total files = 15948 total IOPS = 15948 total data = 0.015 GiB 97.34% of requested files processed, minimum is 90.00 elapsed time = 11.608 files/sec = 1373.870032 IOPS = 1373.870032 MiB/sec = 1.341670 python /tmp/smallfile/smallfile_cli.py --operation read --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test total threads = 8 total files = 16384 total IOPS = 16384 total data = 0.016 GiB 100.00% of requested files processed, minimum is 90.00 elapsed time = 2.553 files/sec = 6416.909838 IOPS = 6416.909838 MiB/sec = 6.266514 python /tmp/smallfile/smallfile_cli.py --operation append --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test total threads = 8 total files = 15348 total IOPS = 15348 total data = 0.015 GiB 93.68% of requested files processed, minimum is 90.00 elapsed time = 8.018 files/sec = 1914.272783 IOPS = 1914.272783 MiB/sec = 1.869407 I will be happy to adjust the smallfile test settings if any of my tests are useful to you and re-run them for comparison. Diego ------------------------------------------------------------------------ *From:* Wilson, Steven M <st...@pu...<mailto:st...@pu...>> *Sent:* Thursday, October 18, 2018 4:47:14 PM *To:* MooseFS-Users *Subject:* [MooseFS-Users] Performance suggestions for millions of small files Hi, We have ten different MooseFS installations in our research group and one, in particular, is struggling with poor I/O performance. This installation currently has 315 million files occupying 170TB of disk space (goal = 2). If anyone else has a similar installation, I would like to hear what you have done to maintain performance at a reasonable level. Here are some metrics to give a basic idea of the performance characteristics. I'll include in parentheses the range of measurements from other MFS installations with far fewer files for comparison. * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! * smallfile test, read MB/s: 10.7 (12.8 - 15.4) * smallfile test, append MB/s: 6.1 (3.0 - 7.7) It looks file creation is where I'm losing most of my performance compared to the other installations. My master server has a Xeon E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. I tried several mfsmount options but the only one that showed any significant improvement was the mfsfsyncmintime option ("mfsfsyncmintime=5"). As to be expected, the improvement gained was during the write/append operation. 
Here are the results using the same tests as above: * tar xf linux-4.9-rc3.tar: 683 secs * smallfile test, create MB/s: 1.2 * smallfile test, read MB/s: 11.7 * smallfile test, append MB/s: 11.4 <== Dramatic improvement over 6.1 MB/s The smallfile benchmark test I used is from https://github.com/distributed-system-analysis/smallfile. Thanks for any suggestions you might have! Regards, Steve _________________________________________ moosefs-users mailing list moo...@li...<mailto:moo...@li...> https://lists.sourceforge.net/lists/listinfo/moosefs-users _________________________________________ moosefs-users mailing list moo...@li...<mailto:moo...@li...> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Wilson, S. M <st...@pu...> - 2018-10-19 17:19:43
|
Marco, Ah, yes... I could also run multiple instances of MooseFS to help divide the load. I was hoping to avoid doing something like that, though. I will keep it mind as a last resort. For the moment, I'm seeing about a 50% improvement in the benchmarks with adding the mfsmount option "mfsfsyncmintime=5" so I'll see how that plays out using real workloads. Thanks for your willingness to try this out in your own environment. It is very kind of you but please do not make too much work for yourself! Regards, Steve ________________________________________ From: Marco Milano <mar...@gm...> Sent: Friday, October 19, 2018 12:36 PM To: moo...@li... Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files Steve, My wild guess is that somehow the master server is not very efficient to handle that many files. (Obviously if this is the case, it is very bad.) I will do some tests with the 4.x series and 0.5 billion files and let you know.(it will take me several weeks to create that test environment.) In the meantime, you can split the namespace into two which may help on the same hardware. (Basically you can run as many namespaces as you want on the same hardware setup, however this will require a lot of work to setup and migrate, in this case, there will be two master server processes running on your master server hardware just at different ports) Or, just hope that the performance of the master server is better with version 4.x -- Marco On 10/19/18 11:10 AM, Wilson, Steven M wrote: > Hi Diego, > > > I appreciate you taking the time to run these tests on your own setup! > My parameters to the smallfile benchmark were a little different (I took > them from some GlusterFS benchmarking documentation): > > smallfile_cli.py --top smallfile-tests --threads 4 --file-size 4 > --files 10000 --response-times Y > > > > Steve > > > ------------------------------------------------------------------------ > *From:* Remolina, Diego J <dij...@ae...> > *Sent:* Friday, October 19, 2018 7:19 AM > *To:* MooseFS-Users > *Subject:* Re: [MooseFS-Users] Performance suggestions for millions of > small files > > Hi Steve, > > > I have by no means a similar amount of files and space, as I am just > testing, but this is what I see with MooseFS 4.6.0 and goal=3 on a > pretty new (in testing phase, no load) 3-way server setup: > > > time tar -xf linux-4.9-rc3.tar > > real 4m0.332s > user 0m1.668s > sys 0m9.517s > > python /tmp/smallfile/smallfile_cli.py --operation create --threads 8 > --file-size 1 --files 2048 --top /nethome/dijuremo/test > > total threads = 8 > total files = 15948 > total IOPS = 15948 > total data = 0.015 GiB > 97.34% of requested files processed, minimum is 90.00 > elapsed time = 11.608 > files/sec = 1373.870032 > IOPS = 1373.870032 > MiB/sec = 1.341670 > > > python /tmp/smallfile/smallfile_cli.py --operation read --threads 8 > --file-size 1 --files 2048 --top /nethome/dijuremo/test > > total threads = 8 > total files = 16384 > total IOPS = 16384 > total data = 0.016 GiB > 100.00% of requested files processed, minimum is 90.00 > elapsed time = 2.553 > files/sec = 6416.909838 > IOPS = 6416.909838 > MiB/sec = 6.266514 > > python /tmp/smallfile/smallfile_cli.py --operation append --threads 8 > --file-size 1 --files 2048 --top /nethome/dijuremo/test > > > total threads = 8 > total files = 15348 > total IOPS = 15348 > total data = 0.015 GiB > 93.68% of requested files processed, minimum is 90.00 > elapsed time = 8.018 > files/sec = 1914.272783 > IOPS = 1914.272783 > MiB/sec = 1.869407 > > > 
> > I will be happy to adjust the smallfile test settings if any of my tests > are useful to you and re-run them for comparison. > > > Diego > > ------------------------------------------------------------------------ > *From:* Wilson, Steven M <st...@pu...> > *Sent:* Thursday, October 18, 2018 4:47:14 PM > *To:* MooseFS-Users > *Subject:* [MooseFS-Users] Performance suggestions for millions of small > files > > Hi, > > > We have ten different MooseFS installations in our research group and > one, in particular, is struggling with poor I/O performance. This > installation currently has 315 million files occupying 170TB of disk > space (goal = 2). If anyone else has a similar installation, I would > like to hear what you have done to maintain performance at a reasonable > level. > > > Here are some metrics to give a basic idea of the performance > characteristics. I'll include in parentheses the range of measurements > from other MFS installations with far fewer files for comparison. > > * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) > > * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! > > * smallfile test, read MB/s: 10.7 (12.8 - 15.4) > > * smallfile test, append MB/s: 6.1 (3.0 - 7.7) > > > It looks file creation is where I'm losing most of my performance > compared to the other installations. My master server has a Xeon > E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. > > > I tried several mfsmount options but the only one that showed any > significant improvement was the mfsfsyncmintime option > ("mfsfsyncmintime=5"). As to be expected, the improvement gained was > during the write/append operation. Here are the results using the same > tests as above: > > * tar xf linux-4.9-rc3.tar: 683 secs > * smallfile test, create MB/s: 1.2 > * smallfile test, read MB/s: 11.7 > * smallfile test, append MB/s: 11.4 <== Dramatic improvement over > 6.1 MB/s > > > The smallfile benchmark test I used is from > https://github.com/distributed-system-analysis/smallfile. > > > Thanks for any suggestions you might have! > > > Regards, > > Steve > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > _________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Piotr R. K. <pio...@ge...> - 2018-10-19 17:14:33
|
Hi Steve, what is the latency: Client <----> Master Server Client <----> Chunkservers? When we consider small files operations, latency becomes the most crucial parameter. Could you please provide us with some ping test results? Also, what are results (of single tests and summed up) if you run e.g. two, three, more such tests at the same time? Can you plz paste also some Master charts from the time of tests are being ran? Thank you, Best regards, Peter Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 19 Oct 2018, at 6:36 PM, Marco Milano <mar...@gm...> wrote: > > Steve, > > My wild guess is that somehow the master server is not very efficient > to handle that many files. > (Obviously if this is the case, it is very bad.) > > I will do some tests with the 4.x series and 0.5 billion files > and let you know.(it will take me several weeks to create that test > environment.) > > In the meantime, you can split the namespace into two which may help > on the same hardware. (Basically you can run as many namespaces > as you want on the same hardware setup, however this will require > a lot of work to setup and migrate, in this case, there will be > two master server processes running on your master server hardware > just at different ports) > > Or, just hope that the performance of the master server is better > with version 4.x > > -- Marco > > On 10/19/18 11:10 AM, Wilson, Steven M wrote: >> Hi Diego, >> I appreciate you taking the time to run these tests on your own setup! 
My parameters to the smallfile benchmark were a little different (I took them from some GlusterFS benchmarking documentation): >> smallfile_cli.py --top smallfile-tests --threads 4 --file-size 4 >> --files 10000 --response-times Y >> >> Steve >> ------------------------------------------------------------------------ >> *From:* Remolina, Diego J <dij...@ae...> >> *Sent:* Friday, October 19, 2018 7:19 AM >> *To:* MooseFS-Users >> *Subject:* Re: [MooseFS-Users] Performance suggestions for millions of small files >> Hi Steve, >> I have by no means a similar amount of files and space, as I am just testing, but this is what I see with MooseFS 4.6.0 and goal=3 on a pretty new (in testing phase, no load) 3-way server setup: >> time tar -xf linux-4.9-rc3.tar >> real 4m0.332s >> user 0m1.668s >> sys 0m9.517s >> python /tmp/smallfile/smallfile_cli.py --operation create --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test >> total threads = 8 >> total files = 15948 >> total IOPS = 15948 >> total data = 0.015 GiB >> 97.34% of requested files processed, minimum is 90.00 >> elapsed time = 11.608 >> files/sec = 1373.870032 >> IOPS = 1373.870032 >> MiB/sec = 1.341670 >> python /tmp/smallfile/smallfile_cli.py --operation read --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test >> total threads = 8 >> total files = 16384 >> total IOPS = 16384 >> total data = 0.016 GiB >> 100.00% of requested files processed, minimum is 90.00 >> elapsed time = 2.553 >> files/sec = 6416.909838 >> IOPS = 6416.909838 >> MiB/sec = 6.266514 >> python /tmp/smallfile/smallfile_cli.py --operation append --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test >> total threads = 8 >> total files = 15348 >> total IOPS = 15348 >> total data = 0.015 GiB >> 93.68% of requested files processed, minimum is 90.00 >> elapsed time = 8.018 >> files/sec = 1914.272783 >> IOPS = 1914.272783 >> MiB/sec = 1.869407 >> I will be happy to adjust the smallfile test settings if any of my tests are useful to you and re-run them for comparison. >> Diego >> ------------------------------------------------------------------------ >> *From:* Wilson, Steven M <st...@pu...> >> *Sent:* Thursday, October 18, 2018 4:47:14 PM >> *To:* MooseFS-Users >> *Subject:* [MooseFS-Users] Performance suggestions for millions of small files >> Hi, >> We have ten different MooseFS installations in our research group and one, in particular, is struggling with poor I/O performance. This installation currently has 315 million files occupying 170TB of disk space (goal = 2). If anyone else has a similar installation, I would like to hear what you have done to maintain performance at a reasonable level. >> Here are some metrics to give a basic idea of the performance characteristics. I'll include in parentheses the range of measurements from other MFS installations with far fewer files for comparison. >> * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) >> * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! >> * smallfile test, read MB/s: 10.7 (12.8 - 15.4) >> * smallfile test, append MB/s: 6.1 (3.0 - 7.7) >> It looks file creation is where I'm losing most of my performance compared to the other installations. My master server has a Xeon E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. >> I tried several mfsmount options but the only one that showed any significant improvement was the mfsfsyncmintime option ("mfsfsyncmintime=5"). As to be expected, the improvement gained was during the write/append operation. 
Here are the results using the same tests as above: >> * tar xf linux-4.9-rc3.tar: 683 secs >> * smallfile test, create MB/s: 1.2 >> * smallfile test, read MB/s: 11.7 >> * smallfile test, append MB/s: 11.4 <== Dramatic improvement over 6.1 MB/s >> The smallfile benchmark test I used is from https://github.com/distributed-system-analysis/smallfile. >> Thanks for any suggestions you might have! >> Regards, >> Steve >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Marco M. <mar...@gm...> - 2018-10-19 16:36:36
|
Steve, My wild guess is that somehow the master server is not very efficient to handle that many files. (Obviously if this is the case, it is very bad.) I will do some tests with the 4.x series and 0.5 billion files and let you know.(it will take me several weeks to create that test environment.) In the meantime, you can split the namespace into two which may help on the same hardware. (Basically you can run as many namespaces as you want on the same hardware setup, however this will require a lot of work to setup and migrate, in this case, there will be two master server processes running on your master server hardware just at different ports) Or, just hope that the performance of the master server is better with version 4.x -- Marco On 10/19/18 11:10 AM, Wilson, Steven M wrote: > Hi Diego, > > > I appreciate you taking the time to run these tests on your own setup! > My parameters to the smallfile benchmark were a little different (I took > them from some GlusterFS benchmarking documentation): > > smallfile_cli.py --top smallfile-tests --threads 4 --file-size 4 > --files 10000 --response-times Y > > > > Steve > > > ------------------------------------------------------------------------ > *From:* Remolina, Diego J <dij...@ae...> > *Sent:* Friday, October 19, 2018 7:19 AM > *To:* MooseFS-Users > *Subject:* Re: [MooseFS-Users] Performance suggestions for millions of > small files > > Hi Steve, > > > I have by no means a similar amount of files and space, as I am just > testing, but this is what I see with MooseFS 4.6.0 and goal=3 on a > pretty new (in testing phase, no load) 3-way server setup: > > > time tar -xf linux-4.9-rc3.tar > > real 4m0.332s > user 0m1.668s > sys 0m9.517s > > python /tmp/smallfile/smallfile_cli.py --operation create --threads 8 > --file-size 1 --files 2048 --top /nethome/dijuremo/test > > total threads = 8 > total files = 15948 > total IOPS = 15948 > total data = 0.015 GiB > 97.34% of requested files processed, minimum is 90.00 > elapsed time = 11.608 > files/sec = 1373.870032 > IOPS = 1373.870032 > MiB/sec = 1.341670 > > > python /tmp/smallfile/smallfile_cli.py --operation read --threads 8 > --file-size 1 --files 2048 --top /nethome/dijuremo/test > > total threads = 8 > total files = 16384 > total IOPS = 16384 > total data = 0.016 GiB > 100.00% of requested files processed, minimum is 90.00 > elapsed time = 2.553 > files/sec = 6416.909838 > IOPS = 6416.909838 > MiB/sec = 6.266514 > > python /tmp/smallfile/smallfile_cli.py --operation append --threads 8 > --file-size 1 --files 2048 --top /nethome/dijuremo/test > > > total threads = 8 > total files = 15348 > total IOPS = 15348 > total data = 0.015 GiB > 93.68% of requested files processed, minimum is 90.00 > elapsed time = 8.018 > files/sec = 1914.272783 > IOPS = 1914.272783 > MiB/sec = 1.869407 > > > > > I will be happy to adjust the smallfile test settings if any of my tests > are useful to you and re-run them for comparison. > > > Diego > > ------------------------------------------------------------------------ > *From:* Wilson, Steven M <st...@pu...> > *Sent:* Thursday, October 18, 2018 4:47:14 PM > *To:* MooseFS-Users > *Subject:* [MooseFS-Users] Performance suggestions for millions of small > files > > Hi, > > > We have ten different MooseFS installations in our research group and > one, in particular, is struggling with poor I/O performance. This > installation currently has 315 million files occupying 170TB of disk > space (goal = 2). 
If anyone else has a similar installation, I would > like to hear what you have done to maintain performance at a reasonable > level. > > > Here are some metrics to give a basic idea of the performance > characteristics. I'll include in parentheses the range of measurements > from other MFS installations with far fewer files for comparison. > > * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) > > * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! > > * smallfile test, read MB/s: 10.7 (12.8 - 15.4) > > * smallfile test, append MB/s: 6.1 (3.0 - 7.7) > > > It looks file creation is where I'm losing most of my performance > compared to the other installations. My master server has a Xeon > E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. > > > I tried several mfsmount options but the only one that showed any > significant improvement was the mfsfsyncmintime option > ("mfsfsyncmintime=5"). As to be expected, the improvement gained was > during the write/append operation. Here are the results using the same > tests as above: > > * tar xf linux-4.9-rc3.tar: 683 secs > * smallfile test, create MB/s: 1.2 > * smallfile test, read MB/s: 11.7 > * smallfile test, append MB/s: 11.4 <== Dramatic improvement over > 6.1 MB/s > > > The smallfile benchmark test I used is from > https://github.com/distributed-system-analysis/smallfile. > > > Thanks for any suggestions you might have! > > > Regards, > > Steve > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Wilson, S. M <st...@pu...> - 2018-10-19 15:10:12
|
Hi Diego, I appreciate you taking the time to run these tests on your own setup! My parameters to the smallfile benchmark were a little different (I took them from some GlusterFS benchmarking documentation): smallfile_cli.py --top smallfile-tests --threads 4 --file-size 4 --files 10000 --response-times Y Steve ________________________________ From: Remolina, Diego J <dij...@ae...> Sent: Friday, October 19, 2018 7:19 AM To: MooseFS-Users Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files Hi Steve, I have by no means a similar amount of files and space, as I am just testing, but this is what I see with MooseFS 4.6.0 and goal=3 on a pretty new (in testing phase, no load) 3-way server setup: time tar -xf linux-4.9-rc3.tar real 4m0.332s user 0m1.668s sys 0m9.517s python /tmp/smallfile/smallfile_cli.py --operation create --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test total threads = 8 total files = 15948 total IOPS = 15948 total data = 0.015 GiB 97.34% of requested files processed, minimum is 90.00 elapsed time = 11.608 files/sec = 1373.870032 IOPS = 1373.870032 MiB/sec = 1.341670 python /tmp/smallfile/smallfile_cli.py --operation read --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test total threads = 8 total files = 16384 total IOPS = 16384 total data = 0.016 GiB 100.00% of requested files processed, minimum is 90.00 elapsed time = 2.553 files/sec = 6416.909838 IOPS = 6416.909838 MiB/sec = 6.266514 python /tmp/smallfile/smallfile_cli.py --operation append --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test total threads = 8 total files = 15348 total IOPS = 15348 total data = 0.015 GiB 93.68% of requested files processed, minimum is 90.00 elapsed time = 8.018 files/sec = 1914.272783 IOPS = 1914.272783 MiB/sec = 1.869407 I will be happy to adjust the smallfile test settings if any of my tests are useful to you and re-run them for comparison. Diego ________________________________ From: Wilson, Steven M <st...@pu...> Sent: Thursday, October 18, 2018 4:47:14 PM To: MooseFS-Users Subject: [MooseFS-Users] Performance suggestions for millions of small files Hi, We have ten different MooseFS installations in our research group and one, in particular, is struggling with poor I/O performance. This installation currently has 315 million files occupying 170TB of disk space (goal = 2). If anyone else has a similar installation, I would like to hear what you have done to maintain performance at a reasonable level. Here are some metrics to give a basic idea of the performance characteristics. I'll include in parentheses the range of measurements from other MFS installations with far fewer files for comparison. * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! * smallfile test, read MB/s: 10.7 (12.8 - 15.4) * smallfile test, append MB/s: 6.1 (3.0 - 7.7) It looks file creation is where I'm losing most of my performance compared to the other installations. My master server has a Xeon E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. I tried several mfsmount options but the only one that showed any significant improvement was the mfsfsyncmintime option ("mfsfsyncmintime=5"). As to be expected, the improvement gained was during the write/append operation.
Here are the results using the same tests as above: * tar xf linux-4.9-rc3.tar: 683 secs * smallfile test, create MB/s: 1.2 * smallfile test, read MB/s: 11.7 * smallfile test, append MB/s: 11.4 <== Dramatic improvement over 6.1 MB/s The smallfile benchmark test I used is from https://github.com/distributed-system-analysis/smallfile. Thanks for any suggestions you might have! Regards, Steve |
From: Remolina, D. J <dij...@ae...> - 2018-10-19 14:52:33
|
Hi Steve, I have by no means a similar amount of files and space, as I am just testing, but this is what I see with MooseFS 4.6.0 and goal=3 on a pretty new (in testing phase, no load) 3-way server setup: time tar -xf linux-4.9-rc3.tar real 4m0.332s user 0m1.668s sys 0m9.517s python /tmp/smallfile/smallfile_cli.py --operation create --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test total threads = 8 total files = 15948 total IOPS = 15948 total data = 0.015 GiB 97.34% of requested files processed, minimum is 90.00 elapsed time = 11.608 files/sec = 1373.870032 IOPS = 1373.870032 MiB/sec = 1.341670 python /tmp/smallfile/smallfile_cli.py --operation read --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test total threads = 8 total files = 16384 total IOPS = 16384 total data = 0.016 GiB 100.00% of requested files processed, minimum is 90.00 elapsed time = 2.553 files/sec = 6416.909838 IOPS = 6416.909838 MiB/sec = 6.266514 python /tmp/smallfile/smallfile_cli.py --operation append --threads 8 --file-size 1 --files 2048 --top /nethome/dijuremo/test total threads = 8 total files = 15348 total IOPS = 15348 total data = 0.015 GiB 93.68% of requested files processed, minimum is 90.00 elapsed time = 8.018 files/sec = 1914.272783 IOPS = 1914.272783 MiB/sec = 1.869407 I will be happy to adjust the smallfile test settings if any of my tests are useful to you and re-run them for comparison. Diego ________________________________ From: Wilson, Steven M <st...@pu...> Sent: Thursday, October 18, 2018 4:47:14 PM To: MooseFS-Users Subject: [MooseFS-Users] Performance suggestions for millions of small files Hi, We have ten different MooseFS installations in our research group and one, in particular, is struggling with poor I/O performance. This installation currently has 315 million files occupying 170TB of disk space (goal = 2). If anyone else has a similar installation, I would like to hear what you have done to maintain performance at a reasonable level. Here are some metrics to give a basic idea of the performance characteristics. I'll include in parentheses the range of measurements from other MFS installations with far fewer files for comparison. * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! * smallfile test, read MB/s: 10.7 (12.8 - 15.4) * smallfile test, append MB/s: 6.1 (3.0 - 7.7) It looks file creation is where I'm losing most of my performance compared to the other installations. My master server has a Xeon E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. I tried several mfsmount options but the only one that showed any significant improvement was the mfsfsyncmintime option ("mfsfsyncmintime=5"). As to be expected, the improvement gained was during the write/append operation. Here are the results using the same tests as above: * tar xf linux-4.9-rc3.tar: 683 secs * smallfile test, create MB/s: 1.2 * smallfile test, read MB/s: 11.7 * smallfile test, append MB/s: 11.4 <== Dramatic improvement over 6.1 MB/s The smallfile benchmark test I used is from https://github.com/distributed-system-analysis/smallfile. Thanks for any suggestions you might have! Regards, Steve |
From: Wilson, S. M <st...@pu...> - 2018-10-19 14:44:40
|
Thanks! Yes, it is very good to be thorough and this is a good reminder to double-check the things that you mentioned. I am also planning to get all the systems updated to 3.0.101 as soon as possible. Steve ________________________________________ From: Zlatko Čalušić <zca...@bi...> Sent: Friday, October 19, 2018 9:28 AM To: Wilson, Steven M; Marco Milano; moo...@li... Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files Hello Steve, It's of course hard to debug problems without knowing all the details, so I can only suggest what I would look for, if I were in your shoes. If you suspect that system is slower than it should be, try to find what's it's weakest link, i.e. if there's any resource in the cluster that is exhausted. Since you provided info that actually file creation is the weakest link, definitely start from the master server: - what is CPU usage %? - what is /var/lib/mfs disk busy %? - any other issue, memory contention, swapping? If you can eliminate master server as the culprit, then proceed to chunkservers, where you also want to know: - disk busy % per each spindle you have in the pool? - CPU usage % per each chunk server? - swapping, etc...? It would be best to collect all those metrics while you're running creation test (say, in a loop). Then you look, if any particular resource is exhausted. Finally, check the network, any packet loss in any segment which would provoke TCP retransmissions would slow down whole cluster a lot. Yeah, a lot of stuff to check, but then again you do have a hefty 170 TB cluster, with lots of moving parts in it, aren't you. :) I'd also suggest upgrading the whole cluster to the newest 3.0.101 version, which has some caching improvements, i.e. might utilize memory on chunk servers slightly better than any previous version. Hope it helps! On 19. 10. 2018. 15:02, Wilson, Steven M wrote: > Marco, > > I re-ran the benchmark test asking for four threads to create four 10GB files and the problematic cluster shows 45.8 MB/s while one of the other clusters shows 77.3 MB/s. Certainly much better than many small files being created! > > And in answer to your second question, the versions of MooseFS on the slow cluster are mixed (two chunkservers and many clients running 3.0.97, master server and two other chunkservers running 3.0.101). > > Steve > > > ________________________________________ > From: Marco Milano <mar...@gm...> > Sent: Friday, October 19, 2018 6:26 AM > To: moo...@li... > Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files > > Steve, > > I don't have a solution for this problem. > Just out of curiosity: > > -- what is your large file create speed on this cluster > compared to your other clusters? > (i.e how long does it take to create a single 10GB file > on this one compared to others ?) > > -- You said you have a mix of 3.0.97 and 3.0.101, > are the versions of MooseFS uniform on this "slow cluster" ? > > -- Marco > > On 10/18/18 8:35 PM, Wilson, Steven M wrote: >> We have a mix of 3.0.97 and 3.0.101. >> >> Steve >> >> ________________________________________ >> From: Marco Milano <mar...@gm...> >> Sent: Thursday, October 18, 2018 5:43 PM >> To: moo...@li... >> Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files >> >> Steve, >> >> What is the version of the MooseFS ? 
>> >> -- Marco >> >> On 10/18/18 4:47 PM, Wilson, Steven M wrote: >>> Hi, >>> >>> >>> We have ten different MooseFS installations in our research group and >>> one, in particular, is struggling with poor I/O performance. This >>> installation currently has 315 million files occupying 170TB of disk >>> space (goal = 2). If anyone else has a similar installation, I would >>> like to hear what you have done to maintain performance at a reasonable >>> level. >>> >>> >>> Here are some metrics to give a basic idea of the performance >>> characteristics. I'll include in parentheses the range of measurements >>> from other MFS installations with far fewer files for comparison. >>> >>> * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) >>> >>> * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! >>> >>> * smallfile test, read MB/s: 10.7 (12.8 - 15.4) >>> >>> * smallfile test, append MB/s: 6.1 (3.0 - 7.7) >>> >>> >>> It looks file creation is where I'm losing most of my performance >>> compared to the other installations. My master server has a Xeon >>> E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. >>> >>> >>> I tried several mfsmount options but the only one that showed any >>> significant improvement was the mfsfsyncmintime option >>> ("mfsfsyncmintime=5"). As to be expected, the improvement gained was >>> during the write/append operation. Here are the results using the same >>> tests as above: >>> >>> * tar xf linux-4.9-rc3.tar: 683 secs >>> * smallfile test, create MB/s: 1.2 >>> * smallfile test, read MB/s: 11.7 >>> * smallfile test, append MB/s: 11.4 <== Dramatic improvement over >>> 6.1 MB/s >>> >>> >>> The smallfile benchmark test I used is from >>> https://github.com/distributed-system-analysis/smallfile. >>> >>> >>> Thanks for any suggestions you might have! >>> >>> >>> Regards, >>> >>> Steve >>> >>> >>> >>> _________________________________________ >>> moosefs-users mailing list >>> moo...@li... >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users -- Zlatko |
From: Zlatko Č. <zca...@bi...> - 2018-10-19 13:46:21
|
Hello Steve, It's of course hard to debug problems without knowing all the details, so I can only suggest what I would look for, if I were in your shoes. If you suspect that the system is slower than it should be, try to find what's its weakest link, i.e. if there's any resource in the cluster that is exhausted. Since you provided info that actually file creation is the weakest link, definitely start from the master server: - what is CPU usage %? - what is /var/lib/mfs disk busy %? - any other issue, memory contention, swapping? If you can eliminate master server as the culprit, then proceed to chunkservers, where you also want to know: - disk busy % per each spindle you have in the pool? - CPU usage % per each chunk server? - swapping, etc...? It would be best to collect all those metrics while you're running the creation test (say, in a loop); a rough collection sketch follows this message. Then you look, if any particular resource is exhausted. Finally, check the network, any packet loss in any segment which would provoke TCP retransmissions would slow down whole cluster a lot. Yeah, a lot of stuff to check, but then again you do have a hefty 170 TB cluster, with lots of moving parts in it, don't you? :) I'd also suggest upgrading the whole cluster to the newest 3.0.101 version, which has some caching improvements, i.e. might utilize memory on chunk servers slightly better than any previous version. Hope it helps! On 19. 10. 2018. 15:02, Wilson, Steven M wrote: > Marco, > > I re-ran the benchmark test asking for four threads to create four 10GB files and the problematic cluster shows 45.8 MB/s while one of the other clusters shows 77.3 MB/s. Certainly much better than many small files being created! > > And in answer to your second question, the versions of MooseFS on the slow cluster are mixed (two chunkservers and many clients running 3.0.97, master server and two other chunkservers running 3.0.101). > > Steve > > > ________________________________________ > From: Marco Milano <mar...@gm...> > Sent: Friday, October 19, 2018 6:26 AM > To: moo...@li... > Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files > > Steve, > > I don't have a solution for this problem. > Just out of curiosity: > > -- what is your large file create speed on this cluster > compared to your other clusters? > (i.e how long does it take to create a single 10GB file > on this one compared to others ?) > > -- You said you have a mix of 3.0.97 and 3.0.101, > are the versions of MooseFS uniform on this "slow cluster" ? > > -- Marco > > On 10/18/18 8:35 PM, Wilson, Steven M wrote: >> We have a mix of 3.0.97 and 3.0.101. >> >> Steve >> >> ________________________________________ >> From: Marco Milano <mar...@gm...> >> Sent: Thursday, October 18, 2018 5:43 PM >> To: moo...@li... >> Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files >> >> Steve, >> >> What is the version of the MooseFS ? >> >> -- Marco >> >> On 10/18/18 4:47 PM, Wilson, Steven M wrote: >>> Hi, >>> >>> >>> We have ten different MooseFS installations in our research group and >>> one, in particular, is struggling with poor I/O performance. This >>> installation currently has 315 million files occupying 170TB of disk >>> space (goal = 2). If anyone else has a similar installation, I would >>> like to hear what you have done to maintain performance at a reasonable >>> level. >>> >>> >>> Here are some metrics to give a basic idea of the performance >>> characteristics.
I'll include in parentheses the range of measurements >>> from other MFS installations with far fewer files for comparison. >>> >>> * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) >>> >>> * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! >>> >>> * smallfile test, read MB/s: 10.7 (12.8 - 15.4) >>> >>> * smallfile test, append MB/s: 6.1 (3.0 - 7.7) >>> >>> >>> It looks file creation is where I'm losing most of my performance >>> compared to the other installations. My master server has a Xeon >>> E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. >>> >>> >>> I tried several mfsmount options but the only one that showed any >>> significant improvement was the mfsfsyncmintime option >>> ("mfsfsyncmintime=5"). As to be expected, the improvement gained was >>> during the write/append operation. Here are the results using the same >>> tests as above: >>> >>> * tar xf linux-4.9-rc3.tar: 683 secs >>> * smallfile test, create MB/s: 1.2 >>> * smallfile test, read MB/s: 11.7 >>> * smallfile test, append MB/s: 11.4 <== Dramatic improvement over >>> 6.1 MB/s >>> >>> >>> The smallfile benchmark test I used is from >>> https://github.com/distributed-system-analysis/smallfile. >>> >>> >>> Thanks for any suggestions you might have! >>> >>> >>> Regards, >>> >>> Steve >>> >>> >>> >>> _________________________________________ >>> moosefs-users mailing list >>> moo...@li... >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users -- Zlatko |
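A minimal way to collect the numbers Zlatko lists above, sketched as a few background collectors. It assumes the sysstat tools (iostat, mpstat) are installed and would be run on the master and on each chunkserver for the duration of one create run; the output directory and the 5-second interval are arbitrary choices.

OUT=/tmp/mfs-metrics; mkdir -p "$OUT"
# per-disk utilisation and latency, per-core CPU, memory/swap activity
iostat -x 5 > "$OUT/iostat.$(hostname).log" &
mpstat -P ALL 5 > "$OUT/mpstat.$(hostname).log" &
vmstat 5 > "$OUT/vmstat.$(hostname).log" &
# TCP retransmission counters before and after the run; a large delta
# during the test points at packet loss somewhere on the network path
netstat -s | grep -i retrans > "$OUT/retrans.before"
#   ... run the smallfile create test on a client here ...
netstat -s | grep -i retrans > "$OUT/retrans.after"
kill $(jobs -p)    # stop the background collectors when done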
From: Wilson, S. M <st...@pu...> - 2018-10-19 13:02:26
|
Marco, I re-ran the benchmark test asking for four threads to create four 10GB files and the problematic cluster shows 45.8 MB/s while one of the other clusters shows 77.3 MB/s. Certainly much better than many small files being created! And in answer to your second question, the versions of MooseFS on the slow cluster are mixed (two chunkservers and many clients running 3.0.97, master server and two other chunkservers running 3.0.101). Steve ________________________________________ From: Marco Milano <mar...@gm...> Sent: Friday, October 19, 2018 6:26 AM To: moo...@li... Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files Steve, I don't have a solution for this problem. Just out of curiosity: -- what is your large file create speed on this cluster compared to your other clusters? (i.e how long does it take to create a single 10GB file on this one compared to others ?) -- You said you have a mix of 3.0.97 and 3.0.101, are the versions of MooseFS uniform on this "slow cluster" ? -- Marco On 10/18/18 8:35 PM, Wilson, Steven M wrote: > We have a mix of 3.0.97 and 3.0.101. > > Steve > > ________________________________________ > From: Marco Milano <mar...@gm...> > Sent: Thursday, October 18, 2018 5:43 PM > To: moo...@li... > Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files > > Steve, > > What is the version of the MooseFS ? > > -- Marco > > On 10/18/18 4:47 PM, Wilson, Steven M wrote: >> Hi, >> >> >> We have ten different MooseFS installations in our research group and >> one, in particular, is struggling with poor I/O performance. This >> installation currently has 315 million files occupying 170TB of disk >> space (goal = 2). If anyone else has a similar installation, I would >> like to hear what you have done to maintain performance at a reasonable >> level. >> >> >> Here are some metrics to give a basic idea of the performance >> characteristics. I'll include in parentheses the range of measurements >> from other MFS installations with far fewer files for comparison. >> >> * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) >> >> * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! >> >> * smallfile test, read MB/s: 10.7 (12.8 - 15.4) >> >> * smallfile test, append MB/s: 6.1 (3.0 - 7.7) >> >> >> It looks file creation is where I'm losing most of my performance >> compared to the other installations. My master server has a Xeon >> E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. >> >> >> I tried several mfsmount options but the only one that showed any >> significant improvement was the mfsfsyncmintime option >> ("mfsfsyncmintime=5"). As to be expected, the improvement gained was >> during the write/append operation. Here are the results using the same >> tests as above: >> >> * tar xf linux-4.9-rc3.tar: 683 secs >> * smallfile test, create MB/s: 1.2 >> * smallfile test, read MB/s: 11.7 >> * smallfile test, append MB/s: 11.4 <== Dramatic improvement over >> 6.1 MB/s >> >> >> The smallfile benchmark test I used is from >> https://github.com/distributed-system-analysis/smallfile. >> >> >> Thanks for any suggestions you might have! >> >> >> Regards, >> >> Steve >> >> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > > > _________________________________________ > moosefs-users mailing list > moo...@li... 
> https://lists.sourceforge.net/lists/listinfo/moosefs-users > _________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Wilson, S. M <st...@pu...> - 2018-10-19 12:47:38
|
Thanks for the suggestion! I had thought about that also but a lot of our files need to be accessed simultaneously from different clients and, as you mentioned, this approach doesn't support that. Steve ________________________________ From: Alexander AKHOBADZE <ale...@op...> Sent: Friday, October 19, 2018 1:55 AM To: Wilson, Steven M; MooseFS-Users Subject: RE: Performance suggestions for millions of small files Hi Steve! You know... I'v made a conclution not store millions of small files on MooseFS ;--) In such case I use a big file stored on MooseFS as container, format it as EXT4 or XFS and then mount it on client side. Yes. I know that in such case small files are not clustered anymore ... it's a pity but here you are. Wbr Alexander (Anri) Akhobadze, ale...@op... System administrator, DATAVISION NN From: Wilson, Steven M [mailto:st...@pu...] Sent: Thursday, October 18, 2018 11:47 PM To: MooseFS-Users Subject: [MooseFS-Users] Performance suggestions for millions of small files Hi, We have ten different MooseFS installations in our research group and one, in particular, is struggling with poor I/O performance. This installation currently has 315 million files occupying 170TB of disk space (goal = 2). If anyone else has a similar installation, I would like to hear what you have done to maintain performance at a reasonable level. Here are some metrics to give a basic idea of the performance characteristics. I'll include in parentheses the range of measurements from other MFS installations with far fewer files for comparison. * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! * smallfile test, read MB/s: 10.7 (12.8 - 15.4) * smallfile test, append MB/s: 6.1 (3.0 - 7.7) It looks file creation is where I'm losing most of my performance compared to the other installations. My master server has a Xeon E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. I tried several mfsmount options but the only one that showed any significant improvement was the mfsfsyncmintime option ("mfsfsyncmintime=5"). As to be expected, the improvement gained was during the write/append operation. Here are the results using the same tests as above: * tar xf linux-4.9-rc3.tar: 683 secs * smallfile test, create MB/s: 1.2 * smallfile test, read MB/s: 11.7 * smallfile test, append MB/s: 11.4 <== Dramatic improvement over 6.1 MB/s The smallfile benchmark test I used is from https://github.com/distributed-system-analysis/smallfile. Thanks for any suggestions you might have! Regards, Steve |
From: Marco M. <mar...@gm...> - 2018-10-19 10:27:06
|
Steve, I don't have a solution for this problem. Just out of curiosity: -- what is your large file create speed on this cluster compared to your other clusters? (i.e how long does it take to create a single 10GB file on this one compared to others ?) -- You said you have a mix of 3.0.97 and 3.0.101, are the versions of MooseFS uniform on this "slow cluster" ? -- Marco On 10/18/18 8:35 PM, Wilson, Steven M wrote: > We have a mix of 3.0.97 and 3.0.101. > > Steve > > ________________________________________ > From: Marco Milano <mar...@gm...> > Sent: Thursday, October 18, 2018 5:43 PM > To: moo...@li... > Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files > > Steve, > > What is the version of the MooseFS ? > > -- Marco > > On 10/18/18 4:47 PM, Wilson, Steven M wrote: >> Hi, >> >> >> We have ten different MooseFS installations in our research group and >> one, in particular, is struggling with poor I/O performance. This >> installation currently has 315 million files occupying 170TB of disk >> space (goal = 2). If anyone else has a similar installation, I would >> like to hear what you have done to maintain performance at a reasonable >> level. >> >> >> Here are some metrics to give a basic idea of the performance >> characteristics. I'll include in parentheses the range of measurements >> from other MFS installations with far fewer files for comparison. >> >> * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) >> >> * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! >> >> * smallfile test, read MB/s: 10.7 (12.8 - 15.4) >> >> * smallfile test, append MB/s: 6.1 (3.0 - 7.7) >> >> >> It looks file creation is where I'm losing most of my performance >> compared to the other installations. My master server has a Xeon >> E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. >> >> >> I tried several mfsmount options but the only one that showed any >> significant improvement was the mfsfsyncmintime option >> ("mfsfsyncmintime=5"). As to be expected, the improvement gained was >> during the write/append operation. Here are the results using the same >> tests as above: >> >> * tar xf linux-4.9-rc3.tar: 683 secs >> * smallfile test, create MB/s: 1.2 >> * smallfile test, read MB/s: 11.7 >> * smallfile test, append MB/s: 11.4 <== Dramatic improvement over >> 6.1 MB/s >> >> >> The smallfile benchmark test I used is from >> https://github.com/distributed-system-analysis/smallfile. >> >> >> Thanks for any suggestions you might have! >> >> >> Regards, >> >> Steve >> >> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
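For Marco's single-file question, a plain dd run against the mount gives a directly comparable number between clusters; a sketch, assuming /mnt/mfs is the MooseFS mount point on the client:

# Write one 10 GiB file sequentially; conv=fsync makes dd wait for the
# data to be flushed, so the page cache does not flatter the result.
time dd if=/dev/zero of=/mnt/mfs/bigfile.test bs=1M count=10240 conv=fsync
rm /mnt/mfs/bigfile.test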
From: Alexander A. <ale...@op...> - 2018-10-19 06:11:26
|
Hi Steve! You know… I've come to the conclusion not to store millions of small files on MooseFS ;--) In such a case I use a big file stored on MooseFS as a container, format it as EXT4 or XFS and then mount it on the client side. Yes, I know that the small files are then not clustered anymore … it's a pity, but there you are. Wbr Alexander (Anri) Akhobadze, ale...@op... System administrator, DATAVISION NN From: Wilson, Steven M [mailto:st...@pu...] Sent: Thursday, October 18, 2018 11:47 PM To: MooseFS-Users Subject: [MooseFS-Users] Performance suggestions for millions of small files Hi, We have ten different MooseFS installations in our research group and one, in particular, is struggling with poor I/O performance. This installation currently has 315 million files occupying 170TB of disk space (goal = 2). If anyone else has a similar installation, I would like to hear what you have done to maintain performance at a reasonable level. Here are some metrics to give a basic idea of the performance characteristics. I'll include in parentheses the range of measurements from other MFS installations with far fewer files for comparison. * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! * smallfile test, read MB/s: 10.7 (12.8 - 15.4) * smallfile test, append MB/s: 6.1 (3.0 - 7.7) It looks file creation is where I'm losing most of my performance compared to the other installations. My master server has a Xeon E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. I tried several mfsmount options but the only one that showed any significant improvement was the mfsfsyncmintime option ("mfsfsyncmintime=5"). As to be expected, the improvement gained was during the write/append operation. Here are the results using the same tests as above: * tar xf linux-4.9-rc3.tar: 683 secs * smallfile test, create MB/s: 1.2 * smallfile test, read MB/s: 11.7 * smallfile test, append MB/s: 11.4 <== Dramatic improvement over 6.1 MB/s The smallfile benchmark test I used is from https://github.com/distributed-system-analysis/smallfile. Thanks for any suggestions you might have! Regards, Steve |
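A minimal sketch of the container approach Alexander describes, with an arbitrary image path, size and filesystem; note that the read-write loop mount can live on only one client at a time, which is the limitation Steve points out in his reply above.

# One big (sparse) file on MooseFS, formatted and loop-mounted on a client.
mkdir -p /mnt/mfs/containers
truncate -s 500G /mnt/mfs/containers/smallfiles.img
mkfs.ext4 -F /mnt/mfs/containers/smallfiles.img   # -F: target is a regular file
mkdir -p /srv/smallfiles
mount -o loop /mnt/mfs/containers/smallfiles.img /srv/smallfiles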
From: Wilson, S. M <st...@pu...> - 2018-10-19 00:36:12
|
We have a mix of 3.0.97 and 3.0.101. Steve ________________________________________ From: Marco Milano <mar...@gm...> Sent: Thursday, October 18, 2018 5:43 PM To: moo...@li... Subject: Re: [MooseFS-Users] Performance suggestions for millions of small files Steve, What is the version of the MooseFS ? -- Marco On 10/18/18 4:47 PM, Wilson, Steven M wrote: > Hi, > > > We have ten different MooseFS installations in our research group and > one, in particular, is struggling with poor I/O performance. This > installation currently has 315 million files occupying 170TB of disk > space (goal = 2). If anyone else has a similar installation, I would > like to hear what you have done to maintain performance at a reasonable > level. > > > Here are some metrics to give a basic idea of the performance > characteristics. I'll include in parentheses the range of measurements > from other MFS installations with far fewer files for comparison. > > * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) > > * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! > > * smallfile test, read MB/s: 10.7 (12.8 - 15.4) > > * smallfile test, append MB/s: 6.1 (3.0 - 7.7) > > > It looks file creation is where I'm losing most of my performance > compared to the other installations. My master server has a Xeon > E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. > > > I tried several mfsmount options but the only one that showed any > significant improvement was the mfsfsyncmintime option > ("mfsfsyncmintime=5"). As to be expected, the improvement gained was > during the write/append operation. Here are the results using the same > tests as above: > > * tar xf linux-4.9-rc3.tar: 683 secs > * smallfile test, create MB/s: 1.2 > * smallfile test, read MB/s: 11.7 > * smallfile test, append MB/s: 11.4 <== Dramatic improvement over > 6.1 MB/s > > > The smallfile benchmark test I used is from > https://github.com/distributed-system-analysis/smallfile. > > > Thanks for any suggestions you might have! > > > Regards, > > Steve > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > _________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Marco M. <mar...@gm...> - 2018-10-18 21:43:36
|
Steve, What is the version of the MooseFS ? -- Marco On 10/18/18 4:47 PM, Wilson, Steven M wrote: > Hi, > > > We have ten different MooseFS installations in our research group and > one, in particular, is struggling with poor I/O performance. This > installation currently has 315 million files occupying 170TB of disk > space (goal = 2). If anyone else has a similar installation, I would > like to hear what you have done to maintain performance at a reasonable > level. > > > Here are some metrics to give a basic idea of the performance > characteristics. I'll include in parentheses the range of measurements > from other MFS installations with far fewer files for comparison. > > * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) > > * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! > > * smallfile test, read MB/s: 10.7 (12.8 - 15.4) > > * smallfile test, append MB/s: 6.1 (3.0 - 7.7) > > > It looks file creation is where I'm losing most of my performance > compared to the other installations. My master server has a Xeon > E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. > > > I tried several mfsmount options but the only one that showed any > significant improvement was the mfsfsyncmintime option > ("mfsfsyncmintime=5"). As to be expected, the improvement gained was > during the write/append operation. Here are the results using the same > tests as above: > > * tar xf linux-4.9-rc3.tar: 683 secs > * smallfile test, create MB/s: 1.2 > * smallfile test, read MB/s: 11.7 > * smallfile test, append MB/s: 11.4 <== Dramatic improvement over > 6.1 MB/s > > > The smallfile benchmark test I used is from > https://github.com/distributed-system-analysis/smallfile. > > > Thanks for any suggestions you might have! > > > Regards, > > Steve > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Wilson, S. M <st...@pu...> - 2018-10-18 21:22:20
|
Hi, We have ten different MooseFS installations in our research group and one, in particular, is struggling with poor I/O performance. This installation currently has 315 million files occupying 170TB of disk space (goal = 2). If anyone else has a similar installation, I would like to hear what you have done to maintain performance at a reasonable level. Here are some metrics to give a basic idea of the performance characteristics. I'll include in parentheses the range of measurements from other MFS installations with far fewer files for comparison. * tar xf linux-4.9-rc3.tar: 1185 secs (220 - 296 secs) * smallfile test, create MB/s: 0.8 (2.3 - 4.8) <== Ouch! * smallfile test, read MB/s: 10.7 (12.8 - 15.4) * smallfile test, append MB/s: 6.1 (3.0 - 7.7) It looks like file creation is where I'm losing most of my performance compared to the other installations. My master server has a Xeon E5-1630v3 3.7GHz CPU with 256GB of DDR4 2133MHz memory. I tried several mfsmount options but the only one that showed any significant improvement was the mfsfsyncmintime option ("mfsfsyncmintime=5"). As expected, the improvement gained was during the write/append operation. Here are the results using the same tests as above: * tar xf linux-4.9-rc3.tar: 683 secs * smallfile test, create MB/s: 1.2 * smallfile test, read MB/s: 11.7 * smallfile test, append MB/s: 11.4 <== Dramatic improvement over 6.1 MB/s The smallfile benchmark test I used is from https://github.com/distributed-system-analysis/smallfile. Thanks for any suggestions you might have! Regards, Steve |
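For reference, the mfsfsyncmintime option Steve mentions is passed to mfsmount like any other mount option; a sketch with a placeholder mount point and master host name (see mfsmount(1) for the exact semantics of the threshold).

# Mount with the option that gave the append/write improvement above.
mfsmount /mnt/mfs -H mfsmaster -o mfsfsyncmintime=5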