From: Steve B. <bo...@st...> - 2010-07-12 15:31:17

I've just followed the instructions in the step-by-step tutorial for installing MooseFS on one server. FUSE installed fine, the CGI monitor is working fine, and I can see the two chunks. Everything seems to work until mounting the filesystem:

    /usr/bin/mfsmount /mnt/mfs -H mfsmaster

but /usr/bin/mfsmount doesn't exist. Have I missed something simple here? Thanks!

--Steve
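[Note: a likely cause, offered here as an assumption rather than something confirmed in the thread, is that the build skips mfsmount when the FUSE development headers are missing at configure time. A sketch of the check for a 1.6.x source build; package names vary by distribution, and a default prefix may install to /usr/local/bin rather than /usr/bin:]

    yum install fuse-devel       # Debian/Ubuntu: apt-get install libfuse-dev
    ./configure                  # re-check the output for FUSE detection
    make && make install
    which mfsmount               # confirm the binary now exists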
From: Michał B. <mic...@ge...> - 2010-07-12 07:33:19

Yes, probably these patches will be applied to the new version, or we will implement a still better solution for registering large numbers of files.

Regards
Michal

From: Stas Oskin [mailto:sta...@gm...]
Sent: Friday, July 09, 2010 2:47 PM
To: Michał Borychowski
Cc: moo...@li...; marco lu
Subject: Re: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000)

Hi.

2010/6/21 Michał Borychowski <mic...@ge...>
> We give you here some quick patches you can implement to the master server to improve its performance for that amount of files:
>
> In matocsserv.c in mfsmaster you need to change this line:
>     #define MaxPacketSize 50000000
> into this:
>     #define MaxPacketSize 500000000
>
> Also we suggest a change in filesystem.c in mfsmaster, in the "fs_test_files" function. Change this line:
>     if ((uint32_t)(main_time())<=starttime+150) {
> into:
>     if ((uint32_t)(main_time())<=starttime+900) {
>
> And also change this line:
>     for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) {
> into this:
>     for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) {
>
> You need to recompile the master server and start it again. The above changes should make the master server work more stably with large numbers of files.

Can these changes be added to the next MFS release? Or do they impact performance in any way for smaller amounts?

Regards.
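[Note: each of the patches above is a one-line constant change, so the rebuild can be scripted. A sketch, assuming an unpacked 1.6.x source tree laid out as the message describes; verify that each sed actually matched before rebuilding:]

    sed -i 's/MaxPacketSize 50000000/MaxPacketSize 500000000/' mfsmaster/matocsserv.c
    sed -i 's/starttime+150/starttime+900/' mfsmaster/filesystem.c
    sed -i 's|NODEHASHSIZE/3600|NODEHASHSIZE/14400|' mfsmaster/filesystem.c
    ./configure && make && make install    # then restart the master server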
From: Michał B. <mic...@ge...> - 2010-07-12 07:16:45

The proper way to mount MooseFS from fstab is to put there a line similar to this one:

    mfsmount /mnt/mfs fuse mfsmaster=mfsmaster.gem.lan,mfsport=9421,_netdev 0 0

You can also use all the available options (the -o parameters) for mfsmount (apart from "debug"). MooseFS resources should then be mounted automatically during system startup. Without restarting, you can test this entry by issuing:

    mount -a -t fuse

which remounts all filesystems (of the given types) mentioned in fstab.

Regards
Michal

> -----Original Message-----
> From: Laurent Wandrebeck [mailto:lw...@hy...]
> Sent: Friday, July 09, 2010 10:17 AM
> To: Stas Oskin
> Cc: Scoleri, Steven; moo...@li...
> Subject: Re: [Moosefs-users] MooseFS init files
>
> On Wed, 7 Jul 2010 11:55:48 +0300 Stas Oskin <sta...@gm...> wrote:
>> Hi.
>>
>>> This is my example (/etc/fstab):
>>>
>>>     mfsmount /mnt/mfs fuse mfswritecachesize=0,mfsmaster=secintmoosemaster01,mfssubfolder=/virt-repo 0 0
>>
>> Thanks for the example.
>>
>> Laurent, this probably means that there is no need for an mfsmount init file, as it will be mounted straight from fstab.
>>
>> In case mfsmount crashes, what would bring it back online? fstab?
>
> mount -a will do the trick.
> --
> Laurent Wandrebeck
> HYGEOS, Earth Observation Department / Observation de la Terre
> Euratechnologies
> 165 Avenue de Bretagne
> 59000 Lille, France
> tel: +33 3 20 08 24 98
> http://www.hygeos.com
> GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
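[Note: the two fstab examples in this thread can be combined; hostnames, ports and paths below are illustrative. The mfssubfolder option, taken from the later example, mounts only the given MooseFS subtree:]

    # /etc/fstab entry (hostname, port and subfolder are examples)
    mfsmount /mnt/mfs fuse mfsmaster=mfsmaster.gem.lan,mfsport=9421,mfssubfolder=/virt-repo,_netdev 0 0

    # apply without rebooting:
    mount -a -t fuse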
From: Fabien G. <fab...@gm...> - 2010-07-11 20:54:58

Hi,

On Fri, Jul 9, 2010 at 2:27 PM, Stas Oskin <sta...@gm...> wrote:
>> As for the time it takes, it depends on the number of chunks you have, and the hardware server you use (CPU + HDD speed). For example, in our case (15 million chunks, 6 GB of metadata), it takes between 1 and 2 minutes on a Xeon processor.
>
> I actually meant: how much time backwards could be recovered by replaying all the logs?
> Michael said that the backup log is created every hour, so up to 1.5 hours can potentially be lost.
> If all the logs are replayed, can the data be consistent up to the moment of the crash?

By default, logs are written to disk every hour. So you can lose up to 1 hour if your mfsmaster crashes just before a new disk flush occurs (everything is stored in memory for performance reasons: http://www.moosefs.org/moosefs-faq.html#cluster).

Fabien
From: Stas O. <sta...@gm...> - 2010-07-09 22:23:28

Hi.

> This is my example (/etc/fstab):
>
>     mfsmount /mnt/mfs fuse mfswritecachesize=0,mfsmaster=secintmoosemaster01,mfssubfolder=/virt-repo 0 0

Can you please tell why you disable the write cache completely in your example? Is there any issue with using the default 128 MB cache?

Regards.
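[Note: per this thread the write cache defaults to 128 MB; it can also be set explicitly rather than disabled. A sketch of a command-line mount using the same option as the fstab example above; the hostname is illustrative:]

    mfsmount /mnt/mfs -H secintmoosemaster01 -o mfswritecachesize=128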
From: Laurent W. <lw...@hy...> - 2010-07-09 14:18:20

On Fri, 9 Jul 2010 17:01:11 +0300 Stas Oskin <sta...@gm...> wrote:
> Hi.
>
>> Please find attached the .spec used for rpms creation.
>
> Is this version 1.6.15-2 the latest of the spec file? Were there any other updates?

It's the latest one I created.

> I would like to do some changes, and want to make sure it's the latest version.

What do you plan to do?

> Also, it would probably be worthwhile to commit it to the MFS git afterward.

I've asked the devs if they could add it to git. The problem is, git is not up to date: neither the stable branch nor dev, which doesn't even exist publicly. So I've had to create the RPM from the 1.6.15 tarball. I plan to add init files in a -3 release (with the metarestore option), and maybe a -4 with user creation if possible, once I've read how to do such a thing. That *may* be done this week-end. Did I say it's unsure? :)

--
Laurent Wandrebeck
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Stas O. <sta...@gm...> - 2010-07-09 14:01:32

Hi.

> Please find attached the .spec used for rpms creation.

Is this version 1.6.15-2 the latest of the spec file? Were there any other updates? I would like to do some changes, and want to make sure it's the latest version.

Also, it would probably be worthwhile to commit it to the MFS git afterward.

Regards.
From: Stas O. <sta...@gm...> - 2010-07-09 13:29:55

Hi.

> As for the time it takes, it depends on the number of chunks you have, and the hardware server you use (CPU + HDD speed). For example, in our case (15 million chunks, 6 GB of metadata), it takes between 1 and 2 minutes on a Xeon processor.

I actually meant: how much time backwards could be recovered by replaying all the logs?

Michael said that the backup log is created every hour, so up to 1.5 hours can potentially be lost. If all the logs are replayed, can the data be consistent up to the moment of the crash?

Regards.
From: Stas O. <sta...@gm...> - 2010-07-09 12:57:10

> Sure! But the problem is more general than 32-bit machines: just catching the "no more memory available" error would be great, since it can happen on both 32-bit and 64-bit machines. For example, on our 64-bit machine with a 64-bit compiled mfsmaster binary, the metadata became so big that mfsmaster crashed, and we can't even restore it since it takes too much memory and ends with a segmentation fault:
>
> [root@mfsmaster ~]# mfsmetarestore -a -d /data/MFS/
> loading objects (files,directories,etc.) ... ok
> loading names ... ok
> loading deletion timestamps ... ok
> checking filesystem consistency ... ok
> loading chunks data ... Segmentation fault
>
> [root@mfsmaster ~]# strace mfsmetarestore -a -d /data/MFS/
> [...]
> read(3, "\0\0\0\0\36\347\314\0\0\0\1\0\0\0\0\0\0\0\0\0\35\347\314\0\0\0\1\0\0\0\0\0"..., 4096) = 4096
> mmap2(NULL, 561152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> brk(0xb08e8000) = 0xffffffffb0844000
> mmap2(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> mmap2(NULL, 1048576, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> mmap2(NULL, 1048576, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> --- SIGSEGV (Segmentation fault) @ 0 (0) ---
> +++ killed by SIGSEGV +++
> [root@mfsmaster ~]#
>
> [MB] We'll look into it

Another suggestion: perhaps it's possible to measure the total memory available to the MFS master / logger, and show via a chart how much is left? Similar to how disk space is measured today per chunk server. That would allow planning memory expansion in advance, rather than scrambling to locate and add more memory modules after the MFS master / logger has crashed (or even stopped normally) due to insufficient memory.

Regards.
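[Note: the chart suggested above did not exist at the time; a crude stopgap is to sample the master's resident memory from cron and graph the log afterwards. A sketch, assuming the process is named mfsmaster; in cron files the % signs must be escaped:]

    # /etc/cron.d/mfsmaster-rss: append a timestamped RSS sample (in kB) every 5 minutes
    */5 * * * * root echo "$(date '+\%F \%T') $(ps -o rss= -C mfsmaster)" >> /var/log/mfsmaster-rss.log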
From: Stas O. <sta...@gm...> - 2010-07-09 12:47:42

Hi.

2010/6/21 Michał Borychowski <mic...@ge...>
> We give you here some quick patches you can implement to the master server to improve its performance for that amount of files:
>
> In matocsserv.c in mfsmaster you need to change this line:
>     #define MaxPacketSize 50000000
> into this:
>     #define MaxPacketSize 500000000
>
> Also we suggest a change in filesystem.c in mfsmaster in the "fs_test_files" function. Change this line:
>     if ((uint32_t)(main_time())<=starttime+150) {
> into:
>     if ((uint32_t)(main_time())<=starttime+900) {
>
> And also change this line:
>     for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) {
> into this:
>     for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) {
>
> You need to recompile the master server and start it again. The above changes should make the master server work more stably with large numbers of files.

Can these changes be added to the next MFS release? Or do they impact performance in any way for smaller amounts?

Regards.
From: Stas O. <sta...@gm...> - 2010-07-09 12:43:02

Hi.

> Sure! But the problem is more general than 32-bit machines: just catching the "no more memory available" error would be great, since it can happen on both 32-bit and 64-bit machines. For example, on our 64-bit machine with a 64-bit compiled mfsmaster binary, the metadata became so big that mfsmaster crashed, and we can't even restore it since it takes too much memory and ends with a segmentation fault.

Can you tell the total number of files stored, and the total space used, at which you hit this issue? Also, how much does the metadata take, and how much memory do you have?

Regards.
From: Laurent W. <lw...@hy...> - 2010-07-09 09:35:22

Hi,

The previous version of the scripts was buggy: I forgot to change the sendmail part in the reload function (the moose scripts are based on the sendmail one). That has been fixed, and a « metarestore » argument has been added to the mfsmaster script. It stops the master and tries a metarestore. If the restore fails, the master is not started again; if it succeeds, the master is respawned. DATA_PATH is taken care of. Executables are assumed to be in /usr/sbin.

Feedback welcome!

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
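[Note: the script itself was attached to the list post and is not reproduced here; a minimal sketch of the described "metarestore" action, assuming a SysV-style init script with DATA_PATH already read from the master configuration:]

    metarestore)
        /usr/sbin/mfsmaster stop
        # replay changelogs onto the last metadata dump
        if /usr/sbin/mfsmetarestore -a -d "$DATA_PATH"; then
            /usr/sbin/mfsmaster start    # restore succeeded: respawn the master
        else
            echo "metarestore failed; mfsmaster left stopped" >&2
            exit 1
        fi
        ;;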
From: Michał B. <mic...@ge...> - 2010-07-09 08:59:43

From: Fabien Germain [mailto:fab...@gm...]
Sent: Friday, July 09, 2010 10:48 AM
To: Michał Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000)

Hi,

2010/7/9 Michał Borychowski <mic...@ge...>
> It's just a mistake I made to compile it on a 32-bit platform. But maybe you could tell the dev team that in case of memory allocation failure, mfsmaster crashes without a message... well, if we consider that "segmentation fault" is not a real error message :-)
>
> [MB] It's on our todo list (but to be honest, with low priority – one cannot expect a 32bit machine to work with more than 4GB RAM :))

Sure! But the problem is more general than 32-bit machines: just catching the "no more memory available" error would be great, since it can happen on both 32-bit and 64-bit machines. For example, on our 64-bit machine with a 64-bit compiled mfsmaster binary, the metadata became so big that mfsmaster crashed, and we can't even restore it since it takes too much memory and ends with a segmentation fault:

[root@mfsmaster ~]# mfsmetarestore -a -d /data/MFS/
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
checking filesystem consistency ... ok
loading chunks data ... Segmentation fault

[root@mfsmaster ~]# strace mfsmetarestore -a -d /data/MFS/
[...]
read(3, "\0\0\0\0\36\347\314\0\0\0\1\0\0\0\0\0\0\0\0\0\35\347\314\0\0\0\1\0\0\0\0\0"..., 4096) = 4096
mmap2(NULL, 561152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
brk(0xb08e8000) = 0xffffffffb0844000
mmap2(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 1048576, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 1048576, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++
[root@mfsmaster ~]#

[MB] We'll look into it

[...]

> So this is a safe operation but still is not recommended. It is better when different processes on different machines write to different files and later some other system combines this data from many files into one target file (something like in "map-reduce" processing).

I totally agree with you Michal, but we do webhosting for thousands of customers and most of them don't even know what a cluster is ;-)

[MB] So what kind of simultaneous writing happens there, mainly? Could you give us some examples?

Michal
From: Fabien G. <fab...@gm...> - 2010-07-09 08:48:01

Hi,

2010/7/9 Michał Borychowski <mic...@ge...>
>> It's just a mistake I made to compile it on a 32-bit platform. But maybe you could tell the dev team that in case of memory allocation failure, mfsmaster crashes without a message... well, if we consider that "segmentation fault" is not a real error message :-)
>
> [MB] It's on our todo list (but to be honest, with low priority – one cannot expect a 32bit machine to work with more than 4GB RAM :))

Sure! But the problem is more general than 32-bit machines: just catching the "no more memory available" error would be great, since it can happen on both 32-bit and 64-bit machines. For example, on our 64-bit machine with a 64-bit compiled mfsmaster binary, the metadata became so big that mfsmaster crashed, and we can't even restore it since it takes too much memory and ends with a segmentation fault:

[root@mfsmaster ~]# mfsmetarestore -a -d /data/MFS/
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
checking filesystem consistency ... ok
loading chunks data ... Segmentation fault

[root@mfsmaster ~]# strace mfsmetarestore -a -d /data/MFS/
[...]
read(3, "\0\0\0\0\36\347\314\0\0\0\1\0\0\0\0\0\0\0\0\0\35\347\314\0\0\0\1\0\0\0\0\0"..., 4096) = 4096
mmap2(NULL, 561152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
brk(0xb08e8000) = 0xffffffffb0844000
mmap2(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 1048576, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 1048576, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++
[root@mfsmaster ~]#

> [MB] What for do you need LOCK for webhosting?
>
>> Dynamic websites, writing information to files. Several web servers using the same MooseFS could try to write to the same file at the same time.
>
> [MB] It should not be a problem for you.
> There is a mechanism of chunk locking for write, but the writing process would be slow. There is no mechanism of informing the client waiting to write that the lock had been released (probably we'll implement it one time). So for now a client which couldn't start the writing process will try again every second. This solution can in theory lead to starvation, but practically it shouldn't.

Oh, OK! I missed that part of the documentation, shame on me. Thank you for the information.

> So this is a safe operation but still is not recommended. It is better when different processes on different machines write to different files and later some other system combines this data from many files into one target file (something like in "map-reduce" processing).

I totally agree with you Michal, but we do webhosting for thousands of customers and most of them don't even know what a cluster is ;-)

Fabien
From: Fabien G. <fab...@gm...> - 2010-07-09 08:34:49

Hello Stas,

On Fri, Jul 9, 2010 at 10:15 AM, Stas Oskin <sta...@gm...> wrote:
> Can I also save the change logs, and replay them into the metadata?
> How much time will it be able to recover, up to the point of the crash?

Yes: you must save everything in /var/mfs/ (or wherever you placed it). When you launch "mfsmetarestore -a", it will use the last dumped metadata file and replay the changelogs on it. From that, you'll get an up-to-date metadata file.

As for the time it takes, it depends on the number of chunks you have, and the hardware server you use (CPU + HDD speed). For example, in our case (15 million chunks, 6 GB of metadata), it takes between 1 and 2 minutes on a Xeon processor.

Fabien
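[Note: putting this answer together with the rest of the thread, a crash recovery on the master host might look as follows. A sketch only: /var/mfs is the default data path and may differ on your installation:]

    cp -a /var/mfs /var/mfs.bak            # keep a copy of the dump + changelogs first
    mfsmetarestore -a -d /var/mfs          # replay changelogs onto the last metadata dump
    mfsmaster start                        # start the master only after a successful restore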
From: Laurent W. <lw...@hy...> - 2010-07-09 08:33:04

Stas,

Did you have time to test the init scripts I posted?

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Michał B. <mic...@ge...> - 2010-07-09 08:30:44

Hi!

As we wrote on the website: "While it is possible to create a script to perform the metadata restore operation, it is recommended that the metadata restoring commands be issued manually to observe any errors that might occur during the fixing operation."

So we do not recommend it :) It can be done automatically, but only when someone fully understands what he is doing and what (bad) consequences it could bring. Possibly it could be done as an option. And if some errors appear, the admin should somehow be informed about them.

Regards
Michał

> -----Original Message-----
> From: Laurent Wandrebeck [mailto:lw...@hy...]
> Sent: Friday, July 09, 2010 10:21 AM
> To: Stas Oskin
> Cc: moo...@li...
> Subject: Re: [Moosefs-users] MooseFS init files
>
> On Thu, 8 Jul 2010 22:04:56 +0300 Stas Oskin <sta...@gm...> wrote:
>> By the way, the master file should have the snippet from here added:
>> http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html
>>
>> This would enable it to automatically replay the logs.
>
> Looks like a good idea. Michal, do you see any point in not adding this to the init master script?
> --
> Laurent Wandrebeck
> HYGEOS, Earth Observation Department / Observation de la Terre
> Euratechnologies
> 165 Avenue de Bretagne
> 59000 Lille, France
> tel: +33 3 20 08 24 98
> http://www.hygeos.com
> GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-09 08:27:44

On Thu, 8 Jul 2010 21:51:23 +0300 Stas Oskin <sta...@gm...> wrote:
> Sorry, forwarding to the list as well.
>
> ---------- Forwarded message ----------
> Hi.
>
>> Generally speaking, MooseFS should not be run as the user nobody. If user nobody has some other services running and somebody gets control over one service, it can interfere with the other services run as the same user. We recommend just creating a user mfs and a group mfs.
>
> This makes sense.
>
> Laurent, that should be a good addition to that RPM ;).

I need to take a look at how RPM handles user creation. I don't know how it acts when auth deals with nis/ldap/whatever. Added to my TODO list, as well as the log replay on startup, if Michal confirms it's a good idea.

> Maybe we should also consider placing a script which would assign proper permissions to the selected data folders.

As said in a previous mail, the problem is that DATA_PATH is configurable after installation.

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Michał B. <mic...@ge...> - 2010-07-09 08:26:02

> Yes, I read that page (actually I read every page of moosefs.org to really understand how it works!).

[MB] Perfect :)

> It's just a mistake I made to compile it on a 32-bit platform. But maybe you could tell the dev team that in case of memory allocation failure, mfsmaster crashes without a message... well, if we consider that "segmentation fault" is not a real error message :-)

[MB] It's on our todo list (but to be honest, with low priority – one cannot expect a 32bit machine to work with more than 4GB RAM :))

> Maybe later we'd like to use it for webhosting. But since LOCK is not supported, it's not yet possible.

[MB] What for do you need LOCK for webhosting?

> Dynamic websites, writing information to files. Several web servers using the same MooseFS could try to write to the same file at the same time.

[MB] It should not be a problem for you. There is a mechanism of chunk locking for write, but the writing process would be slow. There is no mechanism of informing the client waiting to write that the lock had been released (probably we'll implement it one time). So for now a client which couldn't start the writing process will try again every second. This solution can in theory lead to starvation, but practically it shouldn't.

So this is a safe operation but still not recommended. It is better when different processes on different machines write to different files and later some other system combines this data from many files into one target file (something like in "map-reduce" processing). The only problem would be with simultaneous appending (writing at the end) to the same file by two clients.

Please also read the thread "Append and seek while writing functionality" in the group archive:
http://sourceforge.net/mailarchive/forum.php?forum_name=moosefs-users&max_rows=25&style=ultimate&viewmonth=201006

Regards
Michał
From: Laurent W. <lw...@hy...> - 2010-07-09 08:21:29

On Thu, 8 Jul 2010 22:04:56 +0300 Stas Oskin <sta...@gm...> wrote:
> By the way, the master file should have the snippet from here added:
> http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html
>
> This would enable it to automatically replay the logs.

Looks like a good idea. Michal, do you see any point in not adding this to the init master script?

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-09 08:20:08

On Thu, 8 Jul 2010 21:53:04 +0300 Stas Oskin <sta...@gm...> wrote:
>> It seems difficult to me, as the user it runs with is configurable. Or did I misunderstand your point?
>
> I meant the /var/mfs folder for master and logger, which is a good location and can be prepared in advance.

It can't either, as you can also configure DATA_PATH in the master and metalogger.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-07-09 08:17:33

On Wed, 7 Jul 2010 11:55:48 +0300 Stas Oskin <sta...@gm...> wrote:
> Hi.
>
>> This is my example (/etc/fstab):
>>
>>     mfsmount /mnt/mfs fuse mfswritecachesize=0,mfsmaster=secintmoosemaster01,mfssubfolder=/virt-repo 0 0
>
> Thanks for the example.
>
> Laurent, this probably means that there is no need for an mfsmount init file, as it will be mounted straight from fstab.
>
> In case mfsmount crashes, what would bring it back online? fstab?

mount -a will do the trick.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Stas O. <sta...@gm...> - 2010-07-09 08:15:35

Can I also save the change logs and replay them into the metadata? How much time will it be able to recover, up to the point of the crash?

Regards.
From: Fabien G. <fab...@gm...> - 2010-07-08 23:41:09

Hi all,

2010/6/29 Michał Borychowski <mic...@ge...>
>> * 'mfsmaster' process on master: 4.9 GB (64-bit recompilation required: the 32-bit version of mfsmaster crashed without a message when it came to 4 GB)
>> * 'mfschunkserver' process on chunkservers: 580 MB
>
> [MB] 32-bit machines are not capable of addressing more than 4GB, so that was quite normal behaviour.
>
> Regarding memory, please have a look at this FAQ entry:
> http://www.moosefs.org/moosefs-faq.html#cpu – there is information about the CPU load and RAM usage. Keep in mind that RAM usage depends on the total number of files and folders (not on their size), and the CPU load on mfsmaster depends on the number of operations taking place in the filesystem.

Yes, I read that page (actually I read every page of moosefs.org to really understand how it works!). It's just a mistake I made to compile it on a 32-bit platform. But maybe you could tell the dev team that in case of memory allocation failure, mfsmaster crashes without a message... well, if we consider that "segmentation fault" is not a real error message :-)

>> Maybe later we'd like to use it for webhosting. But since LOCK is not supported, it's not yet possible.
>
> [MB] What for do you need LOCK for webhosting?

Dynamic websites, writing information to files. Several web servers using the same MooseFS could try to write to the same file at the same time.

Fabien
From: Fabien G. <fab...@gm...> - 2010-07-08 23:29:49

Hi Stas,

2010/7/8 Stas Oskin <sta...@gm...>
>> If you have not read this entry, we strongly recommend it:
>> http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html
>
> Thanks, now it's much clearer.
>
> Speaking of crashes, will the master detect that it has crashed, and will it try to replay the logs itself?
> Or is it recommended to use the script described at the end of the entry, to always check for replayable logs?

If the mfsmaster process crashes, you won't be able to start it again unless you use "mfsmetarestore" first.

Best regards,
Fabien