From: Michał B. <mic...@ge...> - 2010-07-28 11:44:58
|
First of all, if you want better performance you should use a gigabit network. We see writes of about 20-30 MiB/s (have a look here: http://www.moosefs.org/moosefs-faq.html#average). You can also have a look here: http://www.moosefs.org/moosefs-faq.html#mtu for some network tips. PS. Talking about 3 copies, do you mean setting goal=3 or copying 3 files simultaneously? Kind regards Michal Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Chen, Alvin [mailto:alv...@in...] Sent: Tuesday, July 27, 2010 10:52 AM To: moo...@li... Subject: [Moosefs-users] How fast can you copy files to your Moosefs ? Hi guys, I am a new user of MooseFS. I have 3 chunk servers and one master server on a 100 Mbps network. I just copied a 4 GB file from one client machine to MooseFS; the copying speed can reach 9 MB/s with just one copy, but only 500 KB/s with 3 copies. How fast can your MooseFS go? Does anybody get better performance? Best regards, Alvin Chen ICFS Platform Engineering Solution Flex Services (CMMI Level 3, IQA2005, IQA2008), Greater Asia Region Intel Information Technology Tel. 010-82171960 inet.8-7581960 Email. alv...@in... |
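The gap Alvin reports is roughly what a link-budget estimate predicts. A back-of-the-envelope sketch (the function name and the simplifying assumption that all replica traffic shares the client's single 100 Mbps link are mine, not from the thread — the actual MooseFS write pipeline may chain replicas between chunkservers):

```python
def max_write_mibs(link_mbps: float, copies: int, efficiency: float = 0.75) -> float:
    """Rough ceiling on client write speed when `copies` replicas share
    one link; `efficiency` discounts TCP/FUSE/protocol overhead (a guess)."""
    link_mib_s = link_mbps / 8 / 1.048576  # Mbps -> MiB/s
    return link_mib_s * efficiency / copies

# One copy on 100 Mbps: roughly 9 MiB/s, matching the reported figure.
print(round(max_write_mibs(100, 1), 1))
# Three copies: the ceiling drops to roughly 3 MiB/s; the reported 500 KB/s
# suggests extra serialization beyond raw bandwidth (or a saturated switch).
print(round(max_write_mibs(100, 3), 1))
```

This is why the first advice in the reply is simply "use gigabit": the replication factor divides whatever the slowest link provides.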
From: Michał B. <mic...@ge...> - 2010-07-28 11:40:07
|
You should not be worried; this is just the structure of your data. In one of our environments there are 3,934,705 directories and MooseFS deals with it. Regards Michal From: kuer ku [mailto:ku...@gm...] Sent: Wednesday, July 28, 2010 12:52 PM To: moo...@li... Subject: [Moosefs-users] Is it harmful when too many directories exist in moosefs Hi, all. In my MooseFS, from mfs.cgi, I find there are 97374 directories, 63279 files, and 160653 fs objects in total. There are far more directories than files; I want to know whether too many directories would eat up too much metadata memory and slow down the master. Is the situation dangerous? Thanks -- kuer |
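A quick way to sanity-check worries like this is to estimate master RAM from the object count. MooseFS documentation of that era suggested on the order of a few hundred bytes of master memory per filesystem object — treat the constant below as an assumption to calibrate against your own master, not a specification:

```python
BYTES_PER_FS_OBJECT = 300  # rough per-object cost; an assumption, not a spec

def master_ram_mib(fs_objects: int) -> float:
    """Estimate master metadata RAM in MiB for a given fs-object count."""
    return fs_objects * BYTES_PER_FS_OBJECT / 2**20

# kuer's tree: 160,653 objects -> well under 100 MiB; nothing to worry about.
print(round(master_ram_mib(160_653), 1))
# Even ~4 million directories (the Gemius example) stays around ~1.1 GiB.
print(round(master_ram_mib(3_934_705) / 1024, 2))
```

Directories and files cost roughly the same as metadata entries, so a directory-heavy tree is not inherently worse than a file-heavy one of the same size.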
From: kuer ku <ku...@gm...> - 2010-07-28 10:51:38
|
Hi, all. In my MooseFS, from mfs.cgi, I find there are 97374 directories, 63279 files, and 160653 fs objects in total. There are far more directories than files; I want to know whether too many directories would eat up too much metadata memory and slow down the master. Is the situation dangerous? Thanks -- kuer |
From: Chen, A. <alv...@in...> - 2010-07-27 08:52:05
|
Hi guys, I am a new user of MooseFS. I have 3 chunk servers and one master server on a 100 Mbps network. I just copied a 4 GB file from one client machine to MooseFS; the copying speed can reach 9 MB/s with just one copy, but only 500 KB/s with 3 copies. How fast can your MooseFS go? Does anybody get better performance? Best regards, Alvin Chen ICFS Platform Engineering Solution Flex Services (CMMI Level 3, IQA2005, IQA2008), Greater Asia Region Intel Information Technology Tel. 010-82171960 inet.8-7581960 Email. alv...@in... |
From: Ricardo J. B. <ric...@da...> - 2010-07-26 18:28:41
|
On Sun, 25 July 2010, Stas Oskin wrote: > Hi. Hi! > > Mfsmount /mnt/mfs fuse > > mfsmaster=mfsmaster.gem.lan,mfsport=9421,_netdev 0 0 > > What is the purpose of _netdev in this line? > > Regards. It tells the mount init scripts that you need to have networking up before mounting those filesystems, as local filesystems usually get mounted earlier in the boot process. At least that's true for RedHat and its derivatives. Regards, -- Ricardo J. Barberis Senior SysAdmin - I+D Dattatec.com :: Soluciones de Web Hosting Su Hosting hecho Simple..! |
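Putting the thread together, the complete /etc/fstab entry under discussion looks like this (master hostname and port are the ones quoted in the thread; substitute your own):

```
# /etc/fstab — _netdev defers mounting until networking is up,
# since a MooseFS mount cannot succeed before the master is reachable
mfsmount  /mnt/mfs  fuse  mfsmaster=mfsmaster.gem.lan,mfsport=9421,_netdev  0 0
```

The trailing `0 0` disables dump and fsck ordering, which is appropriate for a network filesystem.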
From: Stas O. <sta...@gm...> - 2010-07-25 09:54:49
|
Hi. I checked the latest rpmforge mfs and it worked great. I did notice the mfs-cgi package requires a full-fledged httpd server. As mfs-cgi can work on its own, I wondered, Steve, if you could create an additional init file to launch the CGI server on start-up. Thanks in advance! |
From: Stas O. <sta...@gm...> - 2010-07-25 09:09:47
|
Hi. > Mfsmount /mnt/mfs fuse mfsmaster=mfsmaster.gem.lan,mfsport=9421,_netdev > 0 0 > > What is the purpose of _netdev in this line? Regards. |
From: Stas O. <sta...@gm...> - 2010-07-25 09:02:09
|
By the way, Steve init files are working great. On Fri, Jul 9, 2010 at 11:32 AM, Laurent Wandrebeck <lw...@hy...> wrote: > Stas, > > Did you have time to test the init scripts I posted ? > > Regards, > -- > Laurent Wandrebeck > HYGEOS, Earth Observation Department / Observation de la Terre > Euratechnologies > 165 Avenue de Bretagne > 59000 Lille, France > tel: +33 3 20 08 24 98 > http://www.hygeos.com > GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C > D17C F64C > |
From: Laurent W. <lw...@hy...> - 2010-07-22 10:01:26
|
On Tue, 20 Jul 2010 20:54:05 +0200, Michał Borychowski <mic...@ge...> wrote: > Hi! > > Thanks to Steve Huff we have ready rpms here: > > http://orannis.hmdc.harvard.edu/rpmforge/mfs/ [1] > > Feedback is welcome > > Regards > > Michal > Hi, That was an unofficial url. Please use only rpmforge, 1.6.16 is available, i386 and x86_64, for C3 through C5. (I'll be offline for a week from now on, have fun with mfs !) -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Michał B. <mic...@ge...> - 2010-07-20 18:54:23
|
Hi! Thanks to Steve Huff we have ready rpms here: http://orannis.hmdc.harvard.edu/rpmforge/mfs/ Feedback is welcome Regards Michal From: Stas Oskin [mailto:sta...@gm...] Sent: Tuesday, July 20, 2010 8:39 PM To: moo...@li... Subject: [Moosefs-users] Fwd: Latest stable release of MooseFS 1.6.16 Hi. Good news about update. Is the update as simple as building RPM's with spec which passed through here, and installing via RPM update? Regards. ---------- Forwarded message ---------- From: MooseFS <co...@mo...> Date: Tue, Jul 20, 2010 at 2:37 PM Subject: Latest stable release of MooseFS 1.6.16 To: co...@mo... We are pleased to announce the latest stable release of MooseFS 1.6.16. You can download it from http://www.moosefs.org/download.html webpage. More information about this release is available here: http://moosefs.org/news-reader/items/moose-file-system-v-1616-released.html If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 |
From: Stas O. <sta...@gm...> - 2010-07-20 18:39:00
|
Hi. Good news about update. Is the update as simple as building RPM's with spec which passed through here, and installing via RPM update? Regards. ---------- Forwarded message ---------- From: MooseFS <co...@mo...> Date: Tue, Jul 20, 2010 at 2:37 PM Subject: Latest stable release of MooseFS 1.6.16 To: co...@mo... We are pleased to announce the latest stable release of MooseFS 1.6.16. You can download it from http://www.moosefs.org/download.html webpage. More information about this release is available here: http://moosefs.org/news-reader/items/moose-file-system-v-1616-released.html If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 |
From: Roast <zha...@gm...> - 2010-07-20 08:38:50
|
Thanks, Michał Borychowski. After sending this mail yesterday, I deleted all change log files at the metalogger server and restarted the metalogger process; metadata_ml.mfs.back has now been created at the metalogger server, so it seems OK now. 2010/7/20 Michał Borychowski <mic...@ge...> > We have some questions: > > 1. Do you have "sessions_ml.mfs" file? > > 2. Do you have any temp files: "metadata_ml.tmp" and/or > "sessions_ml.tmp"? > > 3. Is there any strange message from metalogger in the logs? > > 4. Have you changed any options in config file? > > Regards > > Michal > > *From:* Roast [mailto:zha...@gm...] > *Sent:* Monday, July 19, 2010 2:09 PM > *To:* moosefs-users > *Subject:* [Moosefs-users] metadata do not sync to metalogger server > > Hi, all. > > At my metalogger server, there is no metadata_ml.mfs.back file, but > changelog_ml.0.mfs ~ changelog_ml.N.mfs were created and updated. It seems > the master server does not sync the metadata to the logger server. > > All those servers were set up about 2 weeks ago. > > So I do not know why that is. How can I fix this problem? > > Thanks all. > > -- > The time you enjoy wasting is not wasted time! -- The time you enjoy wasting is not wasted time! |
From: Michał B. <mic...@ge...> - 2010-07-20 07:20:19
|
We have some questions: 1. Do you have "sessions_ml.mfs" file? 2. Do you have any temp files: "metadata_ml.tmp" and/or "sessions_ml.tmp"? 3. Is there any strange message from metalogger in the logs? 4. Have you changed any options in config file? Regards Michal From: Roast [mailto:zha...@gm...] Sent: Monday, July 19, 2010 2:09 PM To: moosefs-users Subject: [Moosefs-users] metadata do not sync to metalogger server Hi, all. At my metalogger server, there is no metadata_ml.mfs.back file, but changelog_ml.0.mfs ~ changelog_ml.N.mfs was created and updated. It seems the master server do not sync the metadata to logger server. And all those servers was setup for about 2 weeks. So I do not know why was that? And how to fix this problem? Thanks all. -- The time you enjoy wasting is not wasted time! |
From: Roast <zha...@gm...> - 2010-07-19 12:09:11
|
Hi, all. At my metalogger server, there is no metadata_ml.mfs.back file, but changelog_ml.0.mfs ~ changelog_ml.N.mfs were created and updated. It seems the master server does not sync the metadata to the logger server. All those servers were set up about 2 weeks ago. So I do not know why that is. How can I fix this problem? Thanks all. -- The time you enjoy wasting is not wasted time! |
From: Roast <zha...@gm...> - 2010-07-19 08:34:26
|
Thanks, Michał Borychowski. 2010/7/19 Michał Borychowski <mic...@ge...> > Please read this article: > http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html > > The most important part for you is here: > > *The metalogger will download the metadata.mfs.back file on a regular > basis (by default every 24 hours) from the master server. The downloaded > file is saved with the file name metadata_ml.mfs.back. Similarly, it also > continuously receives the current changes from the master server and writes > them into its own text change log named changelog_ml.0.mfs. These > files are also rotated every hour up to the configured maximum number of > change log files (see man mfsmetalogger.cfg).* > > Yes, you can fully restore metadata from files saved by metalogger. > > If you need any further assistance please let us know. > > Kind regards > > Michał Borychowski > > MooseFS Support Manager > > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > > Gemius S.A. > > ul. Wołoska 7, 02-672 Warszawa > > Budynek MARS, klatka D > > Tel.: +4822 874-41-00 > > Fax : +4822 874-41-01 > > *From:* Roast [mailto:zha...@gm...] > *Sent:* Friday, July 16, 2010 1:45 PM > *To:* Michał Borychowski > *Cc:* Stas Oskin; moosefs-users > *Subject:* Re: [Moosefs-users] Backing up MFS metadata > > Another question. > > Does master server sync the change log to metalogger server in real time? > > And if so, we can restore the full meta info from the metalogger server. Am > I right? > > 2010/7/15 Michał Borychowski <mic...@ge...> > > You can backup changelog files (you would need just the two newest ones - > "0" and "1"). You can make these backups even every minute. So you can have > potentially lost information for about 1-2 minutes. > > But the question is - why to back up changelogs manually? Metalogger > machines are dedicated to this. You can have as many metalogger machines on > the network as you like. And metalogger process can be run on any computer, > even an older one. > > Regards > > Michał > > *From:* Stas Oskin [mailto:sta...@gm...] > *Sent:* Friday, July 09, 2010 2:27 PM > *To:* Fabien Germain > *Cc:* moo...@li...; Michał Borychowski > *Subject:* Re: [Moosefs-users] Backing up MFS metadata > > Hi. > > As for the time it takes, it depends on the number of chunks you have, and > the hardware server you use (CPU + HDD speed). For example in our case (15 > million chunks, 6 GB of metadata), it takes between 1 and 2 minutes on a > Xeon processor. > > I actually meant, how much time backwards could be recovered by replaying > all the logs? > > Michael said that backup log is created every hour so up to 1.5 hours can be > potentially lost. > If all the logs are replayed, can the data be consistent up to the moment > of crash? > > Regards. > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Sprint > What will you do first with EVO, the first 4G phone? > Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > -- > The time you enjoy wasting is not wasted time! -- The time you enjoy wasting is not wasted time! |
From: Michał B. <mic...@ge...> - 2010-07-19 06:45:09
|
Please read this article: http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html The most important part for you is here: The metalogger will download the metadata.mfs.back file on a regular basis (by default every 24 hours) from the master server. The downloaded file is saved with the file name metadata_ml.mfs.back. Similarly, it also continuously receives the current changes from the master server and writes them into its own text change log named changelog_ml.0.mfs. These files are also rotated every hour up to the configured maximum number of change log files (see man mfsmetalogger.cfg). Yes, you can fully restore metadata from files saved by metalogger. If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Roast [mailto:zha...@gm...] Sent: Friday, July 16, 2010 1:45 PM To: Michał Borychowski Cc: Stas Oskin; moosefs-users Subject: Re: [Moosefs-users] Backing up MFS metadata Another question. Does master server sync the change log to metalogger server in real time? And if so, we can restore the full meta info from the metalogger server. Am I right? 2010/7/15 Michał Borychowski <mic...@ge...> You can backup changelog files (you would need just the two newest ones – "0" and "1"). You can make these backups even every minute. So you can have potentially lost information for about 1-2 minutes. But the question is – why to back up changelogs manually? Metalogger machines are dedicated to this. You can have as many metalogger machines on the network as you like. And metalogger process can be run on any computer, even an older one. Regards Michał From: Stas Oskin [mailto:sta...@gm...] Sent: Friday, July 09, 2010 2:27 PM To: Fabien Germain Cc: moo...@li...; Michał Borychowski Subject: Re: [Moosefs-users] Backing up MFS metadata Hi. As for the time it takes, it depends on the number of chunks you have, and the hardware server you use (CPU + HDD speed). For example in our case (15 million chunks, 6 GB of metadata), it takes between 1 and 2 minutes on a Xeon processor. I actually meant, how much time backwards could be recovered by replaying all the logs? Michael said that backup log is created every hour so up to 1.5 hours can be potentially lost. If all the logs are replayed, can the data be consistent up to the moment of crash? Regards. ------------------------------------------------------------------------------ This SF.net email is sponsored by Sprint What will you do first with EVO, the first 4G phone? Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users -- The time you enjoy wasting is not wasted time! |
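The "fully restore metadata from files saved by metalogger" step mentioned above can be sketched with the mfsmetarestore tool that shipped with MooseFS 1.6 — a sketch only; check `man mfsmetarestore` on your version for exact options, and note the file names are the metalogger defaults cited in the article:

```
# On the metalogger host: rebuild a fresh master metadata file from the
# last downloaded snapshot plus the accumulated change logs.
mfsmetarestore -m metadata_ml.mfs.back -o metadata.mfs changelog_ml.*.mfs

# Then copy metadata.mfs into the master's data directory and start mfsmaster.
```

Because the metalogger receives changes continuously, a restore performed this way should lose at most the changes in flight at the moment of the crash.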
From: Roast <zha...@gm...> - 2010-07-16 11:45:36
|
Another question. Does master server sync the change log to metalogger server in real time? And if so, we can restore the full meta info from the metalogger server. Am I right? 2010/7/15 Michał Borychowski <mic...@ge...> > You can backup changelog files (you would need just the two newest ones – > “0” and “1”). You can make these backups even every minute. So you can have > potentially lost information for about 1-2 minutes. > > > > But the question is – why to back up changelogs manually? Metalogger > machines are dedicated to this. You can have as many metalogger machines on > the network as you like. And metalogger process can be run on any computer, > even an older one. > > > > > > Regards > > Michał > > > > > > > > *From:* Stas Oskin [mailto:sta...@gm...] > *Sent:* Friday, July 09, 2010 2:27 PM > *To:* Fabien Germain > *Cc:* moo...@li...; Michał Borychowski > *Subject:* Re: [Moosefs-users] Backing up MFS metadata > > > > Hi. > > > As for the time it takes, it depends on the number of chunks you have, and > the hardware server you use (CPU + HDD speed). For example in our case (15 > million chunks, 6 GB of metadata), it takes between 1 and 2 minutes on a > Xeon processor. > > > I actually meant, how much time backwards could be recovered by replaying > all the logs? > > Michael said that backup log is created every hour so up to 1.5 hour can be > potentially lost. > If all the logs are replayed, can the data be consistent up to the moment > of crash? > > Regards. > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Sprint > What will you do first with EVO, the first 4G phone? > Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > -- The time you enjoy wasting is not wasted time! |
From: Travis <tra...@tr...> - 2010-07-15 12:47:07
|
On 07/12/2010 11:11 AM, Steve Boese wrote: > I've just followed the instructions in the step-by-step tutorial for > Installing MooseFS on one server. Fuse installed fine, the CGI > monitor is working fine, I can see the two chunks. > > Everything seems to work until mounting the system: > > /usr/bin/mfsmount /mnt/mfs -H mfsmaster > > but /usr/bin/mfsmount doesn't exist. > > Have I missed something simple here? > > Thanks! > > --Steve I have seen, when compiling MooseFS on a machine without the libfuse-dev package installed, that the configure script emits a warning about not being able to find the FUSE headers, and then does not build the mfsmount executable. So, make sure you have the libfuse-dev package (.rpm, .deb, etc.) installed on your system, then in the MooseFS sources folder do a make distclean, and re-run configure, make, make install. |
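The fix above, summarized as commands — package names vary by distribution, and the source directory name is illustrative:

```
# Debian/Ubuntu: apt-get install libfuse-dev
# RHEL/CentOS:   yum install fuse-devel
cd mfs-1.6.x/        # your MooseFS source directory
make distclean       # discard configure results cached without FUSE present
./configure
make
make install         # mfsmount should now be built and installed
```

The `make distclean` matters: re-running configure without it can keep the stale "no FUSE" result and skip mfsmount again.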
From: Michał B. <mic...@ge...> - 2010-07-15 08:48:25
|
From: Stas Oskin [mailto:sta...@gm...] Sent: Friday, July 09, 2010 2:57 PM To: Michał Borychowski Cc: moo...@li... Subject: Re: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000) Another suggestion: perhaps it's possible to measure the total memory available to the MFS master / logger, and show via a chart how much is left? Similar to how disk space is measured today per chunk server. That would allow planning memory expansion in advance, rather than scrambling to locate and add more memory modules after the MFS master / logger has crashed (or, once this feature is added, stopped normally) due to insufficient memory. [MB] Probably we could quite easily check how much memory a given process occupies. We have to see how all the supported operating systems return this value. But it would be much more difficult to check how much memory or swap is still left. So yes, we can add "RAM usage" information for the master server to the CGI Monitor, but the admin would still have to judge whether it is a lot or not. Regards Michał |
From: Michał B. <mic...@ge...> - 2010-07-15 08:33:13
|
You can back up changelog files (you would need just the two newest ones - "0" and "1"). You can make these backups even every minute, so you would potentially lose information for only about 1-2 minutes. But the question is - why back up changelogs manually? Metalogger machines are dedicated to this. You can have as many metalogger machines on the network as you like, and the metalogger process can be run on any computer, even an older one. Regards Michał From: Stas Oskin [mailto:sta...@gm...] Sent: Friday, July 09, 2010 2:27 PM To: Fabien Germain Cc: moo...@li...; Michał Borychowski Subject: Re: [Moosefs-users] Backing up MFS metadata Hi. As for the time it takes, it depends on the number of chunks you have, and the hardware server you use (CPU + HDD speed). For example in our case (15 million chunks, 6 GB of metadata), it takes between 1 and 2 minutes on a Xeon processor. I actually meant, how much time backwards could be recovered by replaying all the logs? Michael said that the backup log is created every hour, so up to 1.5 hours can potentially be lost. If all the logs are replayed, can the data be consistent up to the moment of crash? Regards. |
From: Laurent W. <lw...@hy...> - 2010-07-14 15:52:44
|
On Wed, 14 Jul 2010 11:13:17 -0400, Steve Huff <sh...@ve...> wrote: <snip> > > maybe; certainly a dedicated user would limit potential damage more, but > on the other hand: > > * the daemon user already is a non-interactive user > * as a personal preference, i prefer to avoid making local user accounts > needlessly when there's an OK alternative > * nothing stops a security-conscious admin from making a mfs user > themselves and changing the config; the config files are marked as > %config(noreplace), so later versions of the package won't clobber the > admin's changes agreed, fine with me. > <snip> > yes; my officemate and i are running a little 2-node MFS cluster using > these packages, i on i386 and he on x86_64. i'm the metadata server, and > we're both chunkservers. i had to power-cycle my workstation recently, > and i successfully did a mfsmetarestore afterwards. MooseFS really is > pretty neat; i've recommended it to a few colleagues already. my bosses > are currently looking into deploying HDFS in our cluster, since we may be > deploying Hadoop as well, but i'm offering MooseFS as another candidate. Nice to know :) Another heavy advantage is that code doesn't need to be changed to be able to use MooseFS, contrary to HDFS, AFAIK. > > i'd like to run a larger-scale test on our cluster, so that i can see how > uid/gid mapping works when the clients and servers are all looking at LDAP > for directory information instead of using local users, but i haven't > gotten around to that yet. if you're using MooseFS in production now, i'd > like to hear how well it's working out for you. It's still in a testbed for me, no problems up to now. I'll deploy it on about 70TB by the end of summer. We're not using LDAP but NIS; works like a charm. > >> Do you plan to maintain the rpm for a long time ? Do you need help for >> version updates and such ? > > if you're going to be following the moosefs-users list, i'd appreciate it > if you'd forward to <su...@li...> any announcements of > version updates; it doesn't look like they have a list just for > announcements or an RSS feed, and i'm not excited at the prospect of > joining another users list :) but yes, i'm happy to continue maintaining > the mfs packages in RPMforge. OK, I'll forward version updates to that list. Thanks for maintaining the package ! > <snip> > > i don't want to do that sort of thing automatically; since the local admin > needs to make some decisions when configuring each service, i don't want > the services to be able to start up until the admin has at least looked at > the config files (the same reason why i don't chkconfig all the services on > by default). also, this way package updates will replace the *.cfg.dist > files (if there are any changes to the default configs) but not replace the > running *.cfg files. > > does that answer your questions? my goal (as is generally the case with > RPMforge) is to provide you with a package that's good enough that you > don't have to make your own custom package. Everything is fine for me. > > -steve > > p.s. in case you had overlooked it, i also made a mfs-cgi package, which > takes the MooseFS management cgi and installs it so that it can be served > out by Apache instead of the little standalone webserver. Actually, I saw it ;) Just having nothing special to say about it, as it's a nice addon to be able to run that cgi under Apache. Regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Laurent W. <lw...@hy...> - 2010-07-14 15:03:50
|
typo in ml address, forwarding. -------- Original Message -------- Subject: Re: mfs & rpmforge Date: Wed, 14 Jul 2010 16:49:38 +0200 From: Laurent Wandrebeck <lw...@hy...> To: Steve Huff <sh...@ve...>, <moo...@li...> On Wed, 14 Jul 2010 09:55:39 -0400, Steve Huff <sh...@ve...> wrote: > > hello Laurent! thank you for reminding me; i had intended to drop you a > line once the package was built and pushed to the repository. No problem. I'm really glad you worked on it, as I'm hopeless when it comes to real .spec files :) > > is it really necessary to have three separate mfs server packages? i had > been thinking that it would be easier for users if there was one package > that provided all the necessary server functionality (mfs) and one that > provided all the client and admin functionality (mfs-client). since none > of the services in the mfs package are chkconfig'ed on by default, i > figured the deployment procedure would be to install the mfs package on any > system that had any mfs server functionality, then use chkconfig to > determine which server components should run. Oh well, I thought at first it was better to have a package per service, but I like your approach. A couple of questions anyway: Why use user/group daemon? Wouldn't a dedicated user be safer from a security viewpoint? Did you test the resulting RPMs? I'm running (for now) -2, with a regular mfs user; I don't know (yet) if the absence of a real shell could cause problems. Do you plan to maintain the rpm for a long time? Do you need help with version updates and such? Some actions are to be taken when you install, for example, the master: you need to change .cfg.dist into .cfg. I've found nothing about it in the spec file; can you enlighten me? Thanks, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Laurent W. <lw...@hy...> - 2010-07-14 10:13:27
|
Hi Steve, Just found out that you added mfs on rpmforge, and tweaked the .spec file. I'm glad you did so…but I'd have liked to hear from you about that. Working together is better than doing things alone :) I had just begun my own work on splitting packages, so I'll rip it off for now, as your version is better. Could you also add master package ( --disable-mfschunkserver --disable-mfsmount ) and metalogger one ( --disable-mfschunkserver --disable-mfsmount ) ? Thanks ! -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Michał B. <mic...@ge...> - 2010-07-13 08:33:30
|
Hi Steve! Have you used these configure options? ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs Maybe by mistake you used "--disable-mfsmount"? Or try to issue: "whereis mfsmount" - maybe it is installed in different location? If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Steve Boese [mailto:bo...@st...] Sent: Monday, July 12, 2010 5:12 PM To: moo...@li... Subject: [Moosefs-users] mfsmount not found I've just followed the instructions in the step-by-step tutorial for Installing MooseFS on one server. Fuse installed fine, the CGI monitor is working fine, I can see the two chunks. Everything seems to work until mounting the system: /usr/bin/mfsmount /mnt/mfs -H mfsmaster but /usr/bin/mfsmount doesn't exist. Have I missed something simple here? Thanks! --Steve |
From: Scoleri, S. <Sco...@gs...> - 2010-07-13 08:32:50
|
Is it in /usr/local/bin? If you did a source compile straight it's probably in /usr/local/bin. -Scoleri From: Steve Boese [mailto:bo...@st...] Sent: Monday, July 12, 2010 11:12 AM To: moo...@li... Subject: [Moosefs-users] mfsmount not found I've just followed the instructions in the step-by-step tutorial for Installing MooseFS on one server. Fuse installed fine, the CGI monitor is working fine, I can see the two chunks. Everything seems to work until mounting the system: /usr/bin/mfsmount /mnt/mfs -H mfsmaster but /usr/bin/mfsmount doesn't exist. Have I missed something simple here? Thanks! --Steve |