Messages per month in the moosefs-users archive:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2009 | | | | | | | | | | | | 4 |
| 2010 | 20 | 11 | 11 | 9 | 22 | 85 | 94 | 80 | 72 | 64 | 69 | 89 |
| 2011 | 72 | 109 | 116 | 117 | 117 | 102 | 91 | 72 | 51 | 41 | 55 | 74 |
| 2012 | 45 | 77 | 99 | 113 | 132 | 75 | 70 | 58 | 58 | 37 | 51 | 15 |
| 2013 | 28 | 16 | 25 | 38 | 23 | 39 | 42 | 19 | 41 | 31 | 18 | 18 |
| 2014 | 17 | 19 | 39 | 16 | 10 | 13 | 17 | 13 | 8 | 53 | 23 | 7 |
| 2015 | 35 | 13 | 14 | 56 | 8 | 18 | 26 | 33 | 40 | 37 | 24 | 20 |
| 2016 | 38 | 20 | 25 | 14 | 6 | 36 | 27 | 19 | 36 | 24 | 15 | 16 |
| 2017 | 8 | 13 | 17 | 20 | 28 | 10 | 20 | 3 | 18 | 8 | | 5 |
| 2018 | 15 | 9 | 12 | 7 | 123 | 41 | | 14 | | 15 | | 7 |
| 2019 | 2 | 9 | 2 | 9 | | | 2 | | 6 | 1 | 12 | 2 |
| 2020 | 2 | | | 3 | | 4 | 4 | 1 | 18 | 2 | | |
| 2021 | | 3 | | | | | 6 | | 5 | 5 | 3 | |
| 2022 | | | 3 | | | | | | | | | |
From: Piotr R. K. <pio...@mo...> - 2017-04-07 12:41:48
|
Hey Zhongbo, We've followed you suggestion and added this change to MooseFS sources. Thanks! Please find an URL to appropriate commit below: https://github.com/moosefs/moosefs/commit/709eba4e72c997888f7e319740dcf45cbde5acbd <https://github.com/moosefs/moosefs/commit/709eba4e72c997888f7e319740dcf45cbde5acbd> Best regards, Peter -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 30 Mar 2017, at 5:02 AM, 田忠博(Zhongbo Tian) <win...@gm...> wrote: > > Hi Aleksander, > > Thanks for the quick reply. 3.0.90 DO fix the issue! Thank you. > > And on our production cluster, we found a lot of unconsumed messages in client's TCP receive queue. This led to periodically high load. After some investigation, we guess the client's `conncache` is too slow to digest KEEPALIVE messages. So we modified the source code to decrease the sleep time, and it seemed working for us. Here is our patch: > > """ > diff --git a/mfscommon/conncache.c b/mfscommon/conncache.c > index 4d33c19..b7a99bf 100644 > --- a/mfscommon/conncache.c > +++ b/mfscommon/conncache.c > @@ -161,7 +161,7 @@ void* conncache_keepalive_thread(void* arg) { > } > ka = keep_alive; > zassert(pthread_mutex_unlock(&glock)); > - portable_usleep(10000); > + portable_usleep(5000); > } > return arg; > } > """ > > Finally, I am curious on the progress of MooseFS 4.0. we are looking forward for the erase-coding implementation for a quite long time. And we also want to know how the MooseFS guys's option on Container Storage Interface (CSI), here you can find more details on it: https://github.com/docker/docker/issues/31923 <https://github.com/docker/docker/issues/31923> > > > And at the end, thank you for this excellent project. > > On Wed, Mar 29, 2017 at 6:04 PM Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> wrote: > Hi. > Did you tried the last stable MooseFS version 3.0.90? > > MooseFS 3.0.86 client has a few bugs, but they were fixed. > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <http://moosefs.com/> > On 29.03.2017 11:46, 田忠博(Zhongbo Tian) wrote: >> Hi all, >> >> We had encountered a weird issue after upgrading to moosefs 3.0.86. When we try to run ' TMPDIR=/some/moosefs/path python -c "import ctypes" ', we end up with a SIGBUS. 
>> After some investigations, we found it seems related with mmap, and we can reproduce this bug using following C code: >> """ >> >> #include <stdio.h> >> #include <fcntl.h> >> #include <unistd.h> >> #include <sys/mman.h> >> >> int main(int argc, char** argv) { >> int fd; >> char* filename; >> char *c2; >> if (argc != 2) { >> fprintf(stderr, "usage: %s <file>\n", argv[0]); >> return 1; >> } >> filename = argv[1]; >> unlink(filename); >> fd = open(filename, O_RDWR|O_CREAT, 0600); >> ftruncate(fd, 4096); >> c2 = mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0); >> *c2 = '\0'; // SIGBUS >> return 0; >> } >> >> """ >> Here is the strace for when we run this on a moosefs path: >> >> """ >> >> $ strace ./test /mfs/user/tianzhongbo/temp/test >> execve("./test", ["./test", "/mfs/user/tianzhongbo/temp/test"], [/* 52 vars */]) = 0 >> brk(0) = 0x949000 >> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d85a000 >> access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) >> open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 >> fstat(3, {st_mode=S_IFREG|0644, st_size=114873, ...}) = 0 >> mmap(NULL, 114873, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f825d83d000 >> close(3) = 0 >> open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 >> read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\t\2\0\0\0\0\0"..., 832) = 832 >> fstat(3, {st_mode=S_IFREG|0755, st_size=1697568, ...}) = 0 >> mmap(NULL, 3804928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f825d299000 >> mprotect(0x7f825d430000, 2097152, PROT_NONE) = 0 >> mmap(0x7f825d630000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x197000) = 0x7f825d630000 >> mmap(0x7f825d636000, 16128, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f825d636000 >> close(3) = 0 >> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d83c000 >> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d83b000 >> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d83a000 >> arch_prctl(ARCH_SET_FS, 0x7f825d83b700) = 0 >> mprotect(0x7f825d630000, 16384, PROT_READ) = 0 >> mprotect(0x600000, 4096, PROT_READ) = 0 >> mprotect(0x7f825d85b000, 4096, PROT_READ) = 0 >> munmap(0x7f825d83d000, 114873) = 0 >> unlink("/mfs/user/tianzhongbo/temp/test") = 0 >> open("/mfs/user/tianzhongbo/temp/test", O_RDWR|O_CREAT, 0600) = 3 >> ftruncate(3, 4096) = 0 >> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0x7f825d859000 >> --- SIGBUS {si_signo=SIGBUS, si_code=BUS_ADRERR, si_addr=0x7f825d859000} --- >> +++ killed by SIGBUS +++ >> Bus error >> >> """ >> >> Can anyone help to resolve this? >> >> > >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot <http://sdm.link/slashdot> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... <mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot_________________________________________ > moosefs-users mailing list > moo...@li... 
> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
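To review that change locally, the referenced commit can be inspected straight from the repository (a sketch; assumes git is installed):

```sh
# fetch the MooseFS sources and show the commit mentioned above
git clone https://github.com/moosefs/moosefs.git
cd moosefs
git show 709eba4e72c997888f7e319740dcf45cbde5acbd
```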
From: Wilson, S. M <st...@pu...> - 2017-04-07 12:09:41
|
No, the goal is set at 2 for this file system. Steve ________________________________ From: Casper Langemeijer <cas...@pr...> Sent: Friday, April 7, 2017 4:37 AM To: Wilson, Steven M; MooseFS-Users Subject: Re: [MooseFS-Users] Fw: Number of files > number of chunks? A file of zero length does not have any chunks. But still: a number of chunks very similar to number of files. Are you running a system with a goal of 1? Op do 6 apr. 2017 om 14:31 schreef Wilson, Steven M <st...@pu...<mailto:st...@pu...>>: ?Ah, yes, that's probably what happening here. Thanks! Steve On Apr 5, 2017 5:50 PM, "Ricardo J. Barberis" <ric...@do...<mailto:ric...@do...>> wrote: In my case, whenever I see more files than chunks it's almost always empty files, like temporary locks. El Miércoles 05/04/2017 a las 17:26, Wilson, Steven M escribió: > Hi, > > I was looking at one of our MooseFS file systems today and noticed that it > has more files than it has chunks. Can anyone explain how that can happen? > Here's the output of mfsdirinfo at the root of the file system: > ? > > inodes: 224665855 > directories: 6659685 > files: 217339466 > chunks: 215970818 > length: 44007258442765 > size: 57959637532672 > realsize: 115919275065344 > > Thanks, > Steve Cheers, -- Ricardo J. Barberis Senior SysAdmin / IT Architect DonWeb La Actitud Es Todo www.DonWeb.com<http://www.DonWeb.com> _____ ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot_________________________________________ moosefs-users mailing list moo...@li...<mailto:moo...@li...> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Casper L. <cas...@pr...> - 2017-04-07 09:07:02
|
A file of zero length does not have any chunks. But still: a number of chunks very similar to number of files. Are you running a system with a goal of 1? Op do 6 apr. 2017 om 14:31 schreef Wilson, Steven M <st...@pu...>: > Ah, yes, that's probably what happening here. Thanks! > > Steve > > > On Apr 5, 2017 5:50 PM, "Ricardo J. Barberis" <ric...@do...> > wrote: > > In my case, whenever I see more files than chunks it's almost always empty > files, like temporary locks. > > El Miércoles 05/04/2017 a las 17:26, Wilson, Steven M escribió: > > Hi, > > > > I was looking at one of our MooseFS file systems today and noticed that > it > > has more files than it has chunks. Can anyone explain how that can > happen? > > Here's the output of mfsdirinfo at the root of the file system: > > ? > > > > inodes: 224665855 > > directories: 6659685 > > files: 217339466 > > chunks: 215970818 > > length: 44007258442765 > > size: 57959637532672 > > realsize: 115919275065344 > > > > Thanks, > > Steve > > Cheers, > -- > Ricardo J. Barberis > Senior SysAdmin / IT Architect > DonWeb > La Actitud Es Todo > www.DonWeb.com > _____ > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
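A quick way to test that explanation is to count zero-length files under the mount point and compare the result with the files/chunks gap that mfsdirinfo reports (a sketch; /mnt/mfs is a placeholder mount point, and a full scan of a tree with ~217 million files will take a long time):

```sh
# zero-length files have no chunks at all, so they widen the files/chunks gap
find /mnt/mfs -type f -size 0 | wc -l
```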
From: Wilson, S. M <st...@pu...> - 2017-04-06 12:30:50
|
Ah, yes, that's probably what's happening here. Thanks! Steve On Apr 5, 2017 5:50 PM, "Ricardo J. Barberis" <ric...@do...> wrote: In my case, whenever I see more files than chunks it's almost always empty files, like temporary locks. On Wednesday 05/04/2017 at 17:26, Wilson, Steven M wrote: > Hi, > > I was looking at one of our MooseFS file systems today and noticed that it > has more files than it has chunks. Can anyone explain how that can happen? > Here's the output of mfsdirinfo at the root of the file system: > > inodes: 224665855 > directories: 6659685 > files: 217339466 > chunks: 215970818 > length: 44007258442765 > size: 57959637532672 > realsize: 115919275065344 > > Thanks, > Steve Cheers, -- Ricardo J. Barberis Senior SysAdmin / IT Architect DonWeb La Actitud Es Todo www.DonWeb.com _____ |
From: Ben H. <bj...@ba...> - 2017-04-06 05:34:08
|
I was of the understanding that files smaller than 64mb were concatenated together into a single chunk, unless I've been getting that wrong for like... Four years? |
From: Ricardo J. B. <ric...@do...> - 2017-04-05 22:15:11
|
In my case, whenever I see more files than chunks it's almost always empty files, like temporary locks. On Wednesday 05/04/2017 at 17:26, Wilson, Steven M wrote: > Hi, > > I was looking at one of our MooseFS file systems today and noticed that it > has more files than it has chunks. Can anyone explain how that can happen? > Here's the output of mfsdirinfo at the root of the file system: > > inodes: 224665855 > directories: 6659685 > files: 217339466 > chunks: 215970818 > length: 44007258442765 > size: 57959637532672 > realsize: 115919275065344 > > Thanks, > Steve Cheers, -- Ricardo J. Barberis Senior SysAdmin / IT Architect DonWeb La Actitud Es Todo www.DonWeb.com _____ |
From: Wilson, S. M <st...@pu...> - 2017-04-05 20:40:39
|
Hi, I was looking at one of our MooseFS file systems today and noticed that it has more files than it has chunks. Can anyone explain how that can happen? Here's the output of mfsdirinfo at the root of the file system: inodes: 224665855 directories: 6659685 files: 217339466 chunks: 215970818 length: 44007258442765 size: 57959637532672 realsize: 115919275065344 Thanks, Steve |
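For the counters quoted above, the gap between files and chunks works out as below; if empty files are the explanation, there are at least this many zero-length files in the tree (more if some files span several 64 MiB chunks):

```sh
# files minus chunks, taken from the mfsdirinfo output above
echo $((217339466 - 215970818))   # prints 1368648
```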
From: 田忠博(Zhongbo T. <win...@gm...> - 2017-03-30 03:02:20
|
Hi Aleksander, Thanks for the quick reply. 3.0.90 DO fix the issue! Thank you. And on our production cluster, we found a lot of unconsumed messages in client's TCP receive queue. This led to periodically high load. After some investigation, we guess the client's `conncache` is too slow to digest KEEPALIVE messages. So we modified the source code to decrease the sleep time, and it seemed working for us. Here is our patch: """ diff --git a/mfscommon/conncache.c b/mfscommon/conncache.c index 4d33c19..b7a99bf 100644 --- a/mfscommon/conncache.c +++ b/mfscommon/conncache.c @@ -161,7 +161,7 @@ void* conncache_keepalive_thread(void* arg) { } ka = keep_alive; zassert(pthread_mutex_unlock(&glock)); - portable_usleep(10000); + portable_usleep(5000); } return arg; } """ Finally, I am curious on the progress of MooseFS 4.0. we are looking forward for the erase-coding implementation for a quite long time. And we also want to know how the MooseFS guys's option on Container Storage Interface (CSI), here you can find more details on it: https://github.com/docker/docker/issues/31923 And at the end, thank you for this excellent project. On Wed, Mar 29, 2017 at 6:04 PM Aleksander Wieliczko < ale...@mo...> wrote: > Hi. > Did you tried the last stable MooseFS version 3.0.90? > > MooseFS 3.0.86 client has a few bugs, but they were fixed. > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <http://moosefs.com> > On 29.03.2017 11:46, 田忠博(Zhongbo Tian) wrote: > > Hi all, > > We had encountered a weird issue after upgrading to moosefs 3.0.86. > When we try to run ' TMPDIR=/some/moosefs/path python -c "import ctypes" ', > we end up with a SIGBUS. > After some investigations, we found it seems related with mmap, and we > can reproduce this bug using following C code: > """ > > #include <stdio.h> > #include <fcntl.h> > #include <unistd.h> > #include <sys/mman.h> > > int main(int argc, char** argv) { > int fd; > char* filename; > char *c2; > if (argc != 2) { > fprintf(stderr, "usage: %s <file>\n", argv[0]); > return 1; > } > filename = argv[1]; > unlink(filename); > fd = open(filename, O_RDWR|O_CREAT, 0600); > ftruncate(fd, 4096); > c2 = mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0); > *c2 = '\0'; // SIGBUS > return 0; > } > > """ > Here is the strace for when we run this on a moosefs path: > > """ > > $ strace ./test /mfs/user/tianzhongbo/temp/test > execve("./test", ["./test", "/mfs/user/tianzhongbo/temp/test"], [/* 52 > vars */]) = 0 > brk(0) = 0x949000 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = > 0x7f825d85a000 > access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or > directory) > open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 > fstat(3, {st_mode=S_IFREG|0644, st_size=114873, ...}) = 0 > mmap(NULL, 114873, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f825d83d000 > close(3) = 0 > open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 > read(3, > "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\t\2\0\0\0\0\0"..., 832) = > 832 > fstat(3, {st_mode=S_IFREG|0755, st_size=1697568, ...}) = 0 > mmap(NULL, 3804928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) > = 0x7f825d299000 > mprotect(0x7f825d430000, 2097152, PROT_NONE) = 0 > mmap(0x7f825d630000, 24576, PROT_READ|PROT_WRITE, > MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x197000) = 0x7f825d630000 > mmap(0x7f825d636000, 16128, PROT_READ|PROT_WRITE, > MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f825d636000 > close(3) = 0 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 
0) = > 0x7f825d83c000 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = > 0x7f825d83b000 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = > 0x7f825d83a000 > arch_prctl(ARCH_SET_FS, 0x7f825d83b700) = 0 > mprotect(0x7f825d630000, 16384, PROT_READ) = 0 > mprotect(0x600000, 4096, PROT_READ) = 0 > mprotect(0x7f825d85b000, 4096, PROT_READ) = 0 > munmap(0x7f825d83d000, 114873) = 0 > unlink("/mfs/user/tianzhongbo/temp/test") = 0 > open("/mfs/user/tianzhongbo/temp/test", O_RDWR|O_CREAT, 0600) = 3 > ftruncate(3, 4096) = 0 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0x7f825d859000 > --- SIGBUS {si_signo=SIGBUS, si_code=BUS_ADRERR, si_addr=0x7f825d859000} > --- > +++ killed by SIGBUS +++ > Bus error > > """ > > Can anyone help to resolve this? > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > > _________________________________________ > moosefs-users mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > |
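For anyone who wants to try the same conncache change before it is merged upstream, the diff above can be applied to a source tree and the client rebuilt. This is only a sketch: it assumes a release tarball built with the usual configure/make flow, and the patch file name and version directory are placeholders.

```sh
# save the diff above as conncache-usleep.patch inside the unpacked source tree
cd moosefs-3.0.90
patch -p1 < conncache-usleep.patch
./configure && make && sudo make install
```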
From: Aleksander W. <ale...@mo...> - 2017-03-29 10:04:29
|
Hi. Did you tried the last stable MooseFS version 3.0.90? MooseFS 3.0.86 client has a few bugs, but they were fixed. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 29.03.2017 11:46, 田忠博(Zhongbo Tian) wrote: > Hi all, > > We had encountered a weird issue after upgrading to moosefs > 3.0.86. When we try to run ' TMPDIR=/some/moosefs/path python -c > "import ctypes" ', we end up with a SIGBUS. > After some investigations, we found it seems related with mmap, > and we can reproduce this bug using following C code: > """ > > #include <stdio.h> > #include <fcntl.h> > #include <unistd.h> > #include <sys/mman.h> > > int main(int argc, char** argv) { > int fd; > char* filename; > char *c2; > if (argc != 2) { > fprintf(stderr, "usage: %s <file>\n", argv[0]); > return 1; > } > filename = argv[1]; > unlink(filename); > fd = open(filename, O_RDWR|O_CREAT, 0600); > ftruncate(fd, 4096); > c2 = mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0); > *c2 = '\0'; // SIGBUS > return 0; > } > > """ > Here is the strace for when we run this on a moosefs path: > > """ > > $ strace ./test /mfs/user/tianzhongbo/temp/test > execve("./test", ["./test", "/mfs/user/tianzhongbo/temp/test"], [/* 52 > vars */]) = 0 > brk(0) = 0x949000 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, > 0) = 0x7f825d85a000 > access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or > directory) > open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 > fstat(3, {st_mode=S_IFREG|0644, st_size=114873, ...}) = 0 > mmap(NULL, 114873, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f825d83d000 > close(3) = 0 > open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 > read(3, > "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\t\2\0\0\0\0\0"..., > 832) = 832 > fstat(3, {st_mode=S_IFREG|0755, st_size=1697568, ...}) = 0 > mmap(NULL, 3804928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, > 0) = 0x7f825d299000 > mprotect(0x7f825d430000, 2097152, PROT_NONE) = 0 > mmap(0x7f825d630000, 24576, PROT_READ|PROT_WRITE, > MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x197000) = 0x7f825d630000 > mmap(0x7f825d636000, 16128, PROT_READ|PROT_WRITE, > MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f825d636000 > close(3) = 0 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, > 0) = 0x7f825d83c000 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, > 0) = 0x7f825d83b000 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, > 0) = 0x7f825d83a000 > arch_prctl(ARCH_SET_FS, 0x7f825d83b700) = 0 > mprotect(0x7f825d630000, 16384, PROT_READ) = 0 > mprotect(0x600000, 4096, PROT_READ) = 0 > mprotect(0x7f825d85b000, 4096, PROT_READ) = 0 > munmap(0x7f825d83d000, 114873) = 0 > unlink("/mfs/user/tianzhongbo/temp/test") = 0 > open("/mfs/user/tianzhongbo/temp/test", O_RDWR|O_CREAT, 0600) = 3 > ftruncate(3, 4096) = 0 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0x7f825d859000 > --- SIGBUS {si_signo=SIGBUS, si_code=BUS_ADRERR, > si_addr=0x7f825d859000} --- > +++ killed by SIGBUS +++ > Bus error > > """ > > Can anyone help to resolve this? > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: 田忠博(Zhongbo T. <win...@gm...> - 2017-03-29 09:47:12
|
Hi all, We had encountered a weird issue after upgrading to moosefs 3.0.86. When we try to run ' TMPDIR=/some/moosefs/path python -c "import ctypes" ', we end up with a SIGBUS. After some investigations, we found it seems related with mmap, and we can reproduce this bug using following C code: """ #include <stdio.h> #include <fcntl.h> #include <unistd.h> #include <sys/mman.h> int main(int argc, char** argv) { int fd; char* filename; char *c2; if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; } filename = argv[1]; unlink(filename); fd = open(filename, O_RDWR|O_CREAT, 0600); ftruncate(fd, 4096); c2 = mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0); *c2 = '\0'; // SIGBUS return 0; } """ Here is the strace for when we run this on a moosefs path: """ $ strace ./test /mfs/user/tianzhongbo/temp/test execve("./test", ["./test", "/mfs/user/tianzhongbo/temp/test"], [/* 52 vars */]) = 0 brk(0) = 0x949000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d85a000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=114873, ...}) = 0 mmap(NULL, 114873, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f825d83d000 close(3) = 0 open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\t\2\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1697568, ...}) = 0 mmap(NULL, 3804928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f825d299000 mprotect(0x7f825d430000, 2097152, PROT_NONE) = 0 mmap(0x7f825d630000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x197000) = 0x7f825d630000 mmap(0x7f825d636000, 16128, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f825d636000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d83c000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d83b000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d83a000 arch_prctl(ARCH_SET_FS, 0x7f825d83b700) = 0 mprotect(0x7f825d630000, 16384, PROT_READ) = 0 mprotect(0x600000, 4096, PROT_READ) = 0 mprotect(0x7f825d85b000, 4096, PROT_READ) = 0 munmap(0x7f825d83d000, 114873) = 0 unlink("/mfs/user/tianzhongbo/temp/test") = 0 open("/mfs/user/tianzhongbo/temp/test", O_RDWR|O_CREAT, 0600) = 3 ftruncate(3, 4096) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0x7f825d859000 --- SIGBUS {si_signo=SIGBUS, si_code=BUS_ADRERR, si_addr=0x7f825d859000} --- +++ killed by SIGBUS +++ Bus error """ Can anyone help to resolve this? |
From: Michael T. <mic...@ho...> - 2017-03-29 01:32:50
|
Hi. I recently upgraded my setup from 2.0 to 3.0. My master, all metaloggers, and all chunkservers are already upgraded. Most of my clients are still on 2.0 and will be upgraded this weekend. A couple of days ago, one of the chunkservers had a disk start to fail. So I marked it for removal and restarted the chunkserver service, and MFS started removing the chunks. Not long after, the disk totally failed, but the removal of the chunks had not finished. No big deal though. So I replaced the disk with a larger one: old=2TB, new=4TB. Restarted the chunkserver and the new disk shows up in the CGI GUI. 24 hours later, however, I noticed that the number of undergoal chunks did not seem to shrink much. Looking at the logs, it seems that internal rebalancing (i.e. moving chunks from other disks onto the new one) is taking priority over reaching the goal for undergoal chunks. How can I get the master to do the reverse, i.e. prioritize reaching the goal for undergoal chunks over internal rebalancing? --- mike t. |
From: Piotr R. K. <pio...@mo...> - 2017-03-24 17:05:35
|
Hi Ali, if there's a possibility to install Linux on Isilon nodes, it probably will be possible to run MooseFS on them. You can check out supported OS-es for MooseFS here: https://moosefs.com/download.html <https://moosefs.com/download.html>. Best regards, Peter -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 22 Mar 2017, at 8:37 PM, Ali Moeinvaziri <moe...@gm...> wrote: > > Hi, > > We have a few end of support isilon nodes (NL36) that would like to load moosefs on them > and use. I wonder if anybody has already gone down this road. > > I appreciate any notes, thoughts, or comments. > -AM > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot_________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Ali M. <moe...@gm...> - 2017-03-22 19:37:26
|
Hi, We have a few end of support isilon nodes (NL36) that would like to load moosefs on them and use. I wonder if anybody has already gone down this road. I appreciate any notes, thoughts, or comments. -AM |
From: Marin B. <li...@ol...> - 2017-03-20 15:41:25
|
On 03 March 2017 at 18:44, Warren Myers wrote: > What practices does the community suggest for encrypting files in MooseFS? > > > > Does Moose support the underlying filesystems being encrypted? > > > > Is there a way to incorporate TLS or similar between chunkservers and masters? > > > > Thanks, > > > > Warren Myers > https://antipaucity.com > https://www.digitalocean.com/?refcode=d197a961987a > |
From: Marin B. <li...@ol...> - 2017-03-20 15:32:14
|
Hi, I would really love to see TLS or Kerberos support added to MooseFS, and md5 password hashes replaced by proper HMAC. The lack of such features makes MooseFS vulnerable to several classes of exploits, mostly similar to those affecting unencrypted NFS. As of now, using MooseFS on unsecure / public networks is fairly dangerous. Here is what we did to help securing the whole cluster: - We emulate TLS with IPsec. We encrypt MooseFS traffic with IPsec between all members of the cluster (including clients). Masters and chunk servers are on the same subnet and use IPsec in transport mode, while clients use tunnel mode through our gateways, even internally. The overhead is minimal with recent CPUs and NICs. - We use certificates to authenticate IPsec peers. This allows us to easily revoke the access of a compromised system by revoking its certificate. - We use secrets to authenticate masters and chunk servers. It's not secure, but doesn't harm. - Out of the box, MooseFS relies on DNS to locate the cluster master by querying the 'mfsmaster' host name. This makes the cluster vulnerable to DDoS and DNS spoofing. We use DNSSEC to authenticate DNS answers. - We encrypt metadata (master / metaloggers) and data (chunk servers) with OS level encryption. We use GELI on FreeBSD. - Sensitive information may be further encrypted on client-side. We use PEFS on FreeBSD, which is able to encrypt selected files or folders. - Since user authentication happens on client-side, a smart user with administrative access may use bogus UID/GID to bypass POSIX file system permissions. Kerberos authentication would not let such a scenario happen, but since it is not an option, we decided to consider file system permissions as unsecure and not to rely on them. Instead, we handle access control with exports: we use one export per shared directory and ACL. Each export is protected by a md5 hash and we rely on the export password to authenticate remote systems. Each export line includes a 'mapall' option which maps any remote UIDs/GIDs to an unprivilegied local user/group whose permissions are limited to the shared location only. This configuration protects the cluster from most exploits, except if the export password is leaked. Hope that helps, Marin. |
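As a concrete illustration of the per-directory exports described above, an entry along these lines could be appended to the exports file. This is only a sketch: the subnet, path, account name, hash, and config file location are placeholders, and the option names should be double-checked against the mfsexports.cfg(5) man page for your MooseFS version.

```sh
# one export per shared directory: one subnet, password-protected,
# and all remote UIDs/GIDs mapped to an unprivileged local account
cat >> /etc/mfs/mfsexports.cfg <<'EOF'
192.168.10.0/24  /projects/alpha  rw,alldirs,mapall=mfsalpha:mfsalpha,md5pass=0123456789abcdef0123456789abcdef
EOF
# ask the master to re-read its configuration
mfsmaster reload
```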
From: Casper L. <cas...@pr...> - 2017-03-06 06:38:13
|
Hi WU, I run something similar, I have two different versions of mfs running simultaneously, because I am transitioning from a platter-disk mfs2 cluster to an ssd mfs3 pro cluster. I'm not experiencing any problems except that with normal Debian packages there is no way to simultaneously use two client versions. Greetings, Casper Op zo 5 mrt. 2017 23:39 schreef web user <web...@gm...>: > I'm trying to create a second mfs server and chunks on my network > consisting of only ssd drives. To do this I wanted to use available space > on existing ssd drives on some of my compute servers. > > I see that the option of using a virtual drive is not mentioned in the > existing docs for (2.x and 3.x). Here are the steps that I found from an > old doc: > > #mkdir -p /storage/mfschunks > #dd if=/dev/zero of=/storage/mfschunks/mfschunks1 bs=1024 count=1 > seek=$((2*1024*1024-1)) > #mkfs -t ext3 /storage/mfschunks/mfschunks1 > #mkdir -p /mnt/mfschunks1 > #mount -t ext3 -o loop /storage/mfschunks/mfschunks1 /mnt/mfschunks1 > #dd if=/dev/zero of=/storage/mfschunks/mfschunks2 bs=1024 count=1 > seek=$((2*1024*1024-1)) > #mkfs -t ext3 /storage/mfschunks/mfschunks2 > #mkdir -p /mnt/mfschunks2 > #mount -t ext3 -o loop /storage/mfschunks/mfschunks2 /mnt/mfsc > > I have the following questions: > > 1. The data for this partition is scratch and I'm not too worried if I > loose all of it. Are the instructions still valid? I'm using ubuntu > 14.04LTS. Any downsides that I should be aware off > 2. Is there any harm of having two mfs servers running on two different > machines? Will I able to mound two different mfs drives on one client > machine. > > mfsmount /mnt/mfs -H mfsserever > mfsmount /mnt/mfsssd -H mfssereverssd > > Thanks in advance, > > WU > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, SlashDot.org! http://sdm.link/slashdot > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: web u. <web...@gm...> - 2017-03-05 22:38:22
|
I'm trying to create a second MFS server and chunkservers on my network consisting only of SSD drives. To do this I wanted to use available space on existing SSD drives on some of my compute servers. I see that the option of using a virtual drive is not mentioned in the existing docs (2.x and 3.x). Here are the steps that I found in an old doc: #mkdir -p /storage/mfschunks #dd if=/dev/zero of=/storage/mfschunks/mfschunks1 bs=1024 count=1 seek=$((2*1024*1024-1)) #mkfs -t ext3 /storage/mfschunks/mfschunks1 #mkdir -p /mnt/mfschunks1 #mount -t ext3 -o loop /storage/mfschunks/mfschunks1 /mnt/mfschunks1 #dd if=/dev/zero of=/storage/mfschunks/mfschunks2 bs=1024 count=1 seek=$((2*1024*1024-1)) #mkfs -t ext3 /storage/mfschunks/mfschunks2 #mkdir -p /mnt/mfschunks2 #mount -t ext3 -o loop /storage/mfschunks/mfschunks2 /mnt/mfsc I have the following questions: 1. The data for this partition is scratch and I'm not too worried if I lose all of it. Are the instructions still valid? I'm using Ubuntu 14.04 LTS. Are there any downsides that I should be aware of? 2. Is there any harm in having two MFS servers running on two different machines? Will I be able to mount two different MFS file systems on one client machine? mfsmount /mnt/mfs -H mfsserever mfsmount /mnt/mfsssd -H mfssereverssd Thanks in advance, WU |
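The loop-device approach above still works on current releases; below is a lightly modernized sketch of the same idea (ext4, a sparse backing file, and an fstab entry so the mount survives a reboot). The paths, the size, the mfs user, and the config file location are assumptions to adapt to your setup.

```sh
# create and format a 100 GiB sparse backing file
mkdir -p /storage/mfschunks /mnt/mfschunks1
truncate -s 100G /storage/mfschunks/mfschunks1.img
mkfs.ext4 -F /storage/mfschunks/mfschunks1.img

# mount it through a loop device, now and at every boot
echo '/storage/mfschunks/mfschunks1.img /mnt/mfschunks1 ext4 loop,defaults 0 0' >> /etc/fstab
mount /mnt/mfschunks1
chown mfs:mfs /mnt/mfschunks1        # the chunkserver must be able to write here

# register the directory with the chunkserver and reload it
echo '/mnt/mfschunks1' >> /etc/mfs/mfshdd.cfg
mfschunkserver reload
```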
From: Warren M. <wa...@an...> - 2017-03-03 17:57:03
|
What practices does the community suggest for encrypting files in MooseFS? Does Moose support the underlying filesystems being encrypted? Is there a way to incorporate TLS or similar between chunkservers and masters? Thanks, Warren Myers https://antipaucity.com https://www.digitalocean.com/?refcode=d197a961987a |
From: Michael T. <mic...@ho...> - 2017-03-03 02:57:01
|
First of all, my apologies if this reply is messing up the threading as I'm replying to a digest feed instead of the actual thread. As far as trash is concerned, based on the GUI: Trash files: 4,996 Trash size: 26GiB Chunks with Goal=0: 1,329 There is a 19GB file named core in /var/lib/mfs: -rw------- 1 mfs mfs 19G Mar 2 11:01 core Is this the core dump file? If so, I'll upload it somewhere. Please note though that the date stamp is 1 day later, if it matters. As far as syslog goes, I did not find any logged crash/abort entry around the time of the crash. Just a lot of messages like this: "Mar 1 10:01:19 HO-MFSMaster01 mfsmaster[1035]: chunkserver has nonexistent chunk (0000000001AC5130_00000001), so create it for future deletion" --- mike t. ________________________________ Date: Thu, 2 Mar 2017 10:31:38 +0100 From: Aleksander Wieliczko <ale...@mo...> Subject: Re: [MooseFS-Users] MFS 2.0.91 Master is crashing To: moo...@li... Message-ID: <b1e...@mo...> Content-Type: text/plain; charset="windows-1252" Hi. First of all we would like to know how many objects you have in trash? Secondly. Do you have core dump file? System log is always good option to see, so please send it on list or directly to 'support at moosefs.com' I'm looking forward to hearing from you. Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 02.03.2017 03:59, Michael Tinsay wrote: > > Hi. > > > MFSmaster crashed a couple of times when I did the following: > > > 1. log in to mfsmaster machine. Then su into root. > > 2. execute: mfsmount -m -p /mnt/mfsmeta > > 3. execute: cd /mnt/mfsmeta/trash > > 4. execute: ls -hl <== in less than a minute mfsmaster will crash. > > > If I don't do step 4, mfsmaster runs just fine. > > > The mfsmaster is installed a dedicated, bare metal machine for > mfsmaster, not a VM. It is running Ubuntu 14.04 with the latest > updates applied. The machine has 32GB of ram, and mfsmaster is using > 20GB. > > > Please advise what other info is needed. Or log files. Or whatever. > > > Regards. > > > > --- mike t. > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, SlashDot.org! http://sdm.link/slashdot Slashdot: News for nerds, stuff that matters<http://sdm.link/slashdot> sdm.link Slashdot: News for nerds, stuff that matters. Timely news source for technology related news with a heavy slant towards Linux and Open Source issues. > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users moosefs-users Info Page - SourceForge<https://lists.sourceforge.net/lists/listinfo/moosefs-users> lists.sourceforge.net To see the collection of prior postings to the list, visit the moosefs-users Archives. Using moosefs-users: To post a message to all the list members ... -------------- next part -------------- An HTML attachment was scrubbed... ------------------------------ ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, SlashDot.org! http://sdm.link/slashdot ------------------------------ _______________________________________________ moosefs-users mailing list moo...@li... 
https://lists.sourceforge.net/lists/listinfo/moosefs-users moosefs-users Info Page - SourceForge<https://lists.sourceforge.net/lists/listinfo/moosefs-users> lists.sourceforge.net To see the collection of prior postings to the list, visit the moosefs-users Archives. Using moosefs-users: To post a message to all the list members ... End of moosefs-users Digest, Vol 87, Issue 1 ******************************************** |
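If that 19 GB core file did come from the crashing mfsmaster process, a backtrace extracted from it is usually the most useful thing to send along with (or instead of) the full dump. A sketch, assuming gdb is installed, the binary path is /usr/sbin/mfsmaster, and the binary has not been upgraded since the crash:

```sh
# load the core against the mfsmaster binary and capture backtraces of all threads
gdb -batch -ex 'thread apply all bt full' /usr/sbin/mfsmaster /var/lib/mfs/core > mfsmaster-backtrace.txt
```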
From: Aleksander W. <ale...@mo...> - 2017-03-02 09:31:52
|
Hi. First of all we would like to know how many objects you have in trash? Secondly. Do you have core dump file? System log is always good option to see, so please send it on list or directly to 'support at moosefs.com' I'm looking forward to hearing from you. Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 02.03.2017 03:59, Michael Tinsay wrote: > > Hi. > > > MFSmaster crashed a couple of times when I did the following: > > > 1. log in to mfsmaster machine. Then su into root. > > 2. execute: mfsmount -m -p /mnt/mfsmeta > > 3. execute: cd /mnt/mfsmeta/trash > > 4. execute: ls -hl <== in less than a minute mfsmaster will crash. > > > If I don't do step 4, mfsmaster runs just fine. > > > The mfsmaster is installed a dedicated, bare metal machine for > mfsmaster, not a VM. It is running Ubuntu 14.04 with the latest > updates applied. The machine has 32GB of ram, and mfsmaster is using > 20GB. > > > Please advise what other info is needed. Or log files. Or whatever. > > > Regards. > > > > --- mike t. > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, SlashDot.org! http://sdm.link/slashdot > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michael T. <mic...@ho...> - 2017-03-02 02:59:55
|
Hi. MFSmaster crashed a couple of times when I did the following: 1. log in to the mfsmaster machine, then su to root. 2. execute: mfsmount -m -p /mnt/mfsmeta 3. execute: cd /mnt/mfsmeta/trash 4. execute: ls -hl <== in less than a minute mfsmaster will crash. If I don't do step 4, mfsmaster runs just fine. mfsmaster is installed on a dedicated, bare-metal machine, not a VM. It is running Ubuntu 14.04 with the latest updates applied. The machine has 32 GB of RAM, and mfsmaster is using 20 GB. Please advise what other info is needed. Or log files. Or whatever. Regards. --- mike t. |
From: Aleksander W. <ale...@mo...> - 2017-02-20 06:08:06
|
Hi. There will be no problems. If you like, you can stop the whole cluster and then execute the update. You can find instructions on how to stop a MooseFS cluster in our documentation, but basically these are the steps you have to follow to stop the whole cluster: 1. Unmount all clients 2. Stop the master 3. Stop the metaloggers 4. Stop all chunkservers Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 02/16/2017 11:41 PM, Wolfgang wrote: > > Hi! > > I'm running Moosesfs 3.0.86 on Ubuntu 14.04 master and chunkservers, > metaloggers and would like to upgrade to > 3.0.88 > > In the documentation I found following order to upgrade: > > 1. master > 2. cigserv > 3. metaloggers (one-by-one) > 4. chunkservers (one-by-one) > 5. mfs-clients (one-by-one) > > In my case I can allow my cluster a downtime > > So I could stop all mfs processes (master,chunkserver,metaloggers) and > do an > aptitude update > aptitude safe-upgrade > > on each machine. > Is this procedure recommended or will there be any problem with this ? > > Thank you & greetings > Wolfgang > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, SlashDot.org! http://sdm.link/slashdot > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
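For reference, a sketch of that full-downtime stop → upgrade → start sequence on Ubuntu. The mount point, master host name, and direct daemon invocations are assumptions (adjust to the init scripts or service names your packages actually use):

```sh
# 1. unmount every client
umount /mnt/mfs                     # repeat on each client machine

# 2-4. stop the daemons: master, metaloggers, chunkservers
mfsmaster stop                      # on the master
mfsmetalogger stop                  # on each metalogger
mfschunkserver stop                 # on each chunkserver

# upgrade the packages on every machine (as Wolfgang planned)
aptitude update && aptitude safe-upgrade

# start again: master first, then chunkservers and metaloggers, then clients
mfsmaster start
mfschunkserver start
mfsmetalogger start
mfsmount /mnt/mfs -H mfsmaster      # remount on each client
```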