From: Laurent W. <lw...@hy...> - 2010-08-16 08:03:42
|
On Sun, 15 Aug 2010 22:55:15 +0300 Stas Oskin <sta...@gm...> wrote: > Hi. > > I have a couple of questions I wanted to ask about MFS: > > 1) In case the MFS chunk-server writes data, does it recognize this and > write to itself first, and also read from itself first, in order to > speed up operations? Hmmm. The master decides where data is to be written. Chunkservers read or write data when they are commanded to. Chunkservers do not, afaik, take decisions on their own about reads or writes. > > 2) Also, does MFS support rack awareness, where it replicates the files > within the same sub-net first, and only later moves them to other sub-nets? I think you're mixing up racks and subnets; you can perfectly well have the same subnet in several racks. Anyway, rack awareness is not yet implemented. It's on the todo list as far as I can remember (I can't verify as I can't access the web for now). Hope it helps, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
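To make the point above concrete - placement is the master's decision, not the chunkservers' - here is a toy, self-contained C sketch (not MooseFS source; the server names and usage figures are invented) of a master picking the least-used chunkservers for a new chunk's copies:

#include <stdio.h>
#include <stdlib.h>

/* Toy model: the master knows each chunkserver's space usage and picks
   write targets itself; chunkservers never choose where data goes. */
struct cs { const char *name; double usage; };

static int by_usage(const void *a, const void *b) {
    double d = ((const struct cs *)a)->usage - ((const struct cs *)b)->usage;
    return (d > 0) - (d < 0);
}

int main(void) {
    struct cs servers[] = { {"cs1", 0.63}, {"cs2", 0.58}, {"cs3", 0.71}, {"cs4", 0.49} };
    int goal = 2;  /* requested number of copies */
    qsort(servers, sizeof servers / sizeof *servers, sizeof *servers, by_usage);
    printf("master-chosen write chain:");
    for (int i = 0; i < goal; i++)
        printf(" %s", servers[i].name);   /* -> cs4 cs2 */
    printf("\n");
    return 0;
}

MooseFS's actual selection logic is more involved than a least-used sort; the sketch only illustrates where the decision lives.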
From: Stas O. <sta...@gm...> - 2010-08-15 19:55:42
|
Hi. I have a couple of questions I wanted to ask about MFS: 1) In case the MFS chunk-server writes data, does it recognize this and write to itself first, and also read from itself first, in order to speed up operations? 2) Also, does MFS support rack awareness, where it replicates the files within the same sub-net first, and only later moves them to other sub-nets? Regards. |
From: Michał B. <mic...@ge...> - 2010-08-13 09:31:12
|
The whole communication with the master is on the level of i-nodes, not paths. So the system has to be able to map a path to an i-node number. Generally, most regular file systems under unix work this way. CUTOMA_FUSE_LOOKUP finds the i-node number M of the object with name Y whose parent is i-node N: (N, Y)->M. For example, to find the i-node number of the object “foo/has/bar” you need to make three LOOKUP operations: 1. LOOKUP(1, “foo”)->X 2. LOOKUP(X, “has”)->Y 3. LOOKUP(Y, “bar”)->Z And then we know that the object “foo/has/bar” has i-node number Z. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: 夏亮 [mailto:xia...@zh...] Sent: Thursday, August 12, 2010 5:33 AM To: 'Michał Borychowski' Subject: Re: [Moosefs-users] help Hi! I want to know, when the client sends CUTOMA_FUSE_LOOKUP to the master, what the inode field should be set to. thanks _____ From: Michał Borychowski [mailto:mic...@ge...] Sent: 2010-08-10 15:47 To: '夏亮' Cc: moo...@li... Subject: RE: [Moosefs-users] help Hi! Please share with us where you see the usage of such an interface? We already have a mini-library for this but it is not very efficient. It would need some additional mechanisms such as read-ahead, etc. On the other hand FUSE works nicely and smoothly, so we are not sure there is really a need for an additional C library. If you would like to contribute to the project, there would probably be some other issues you could take care of :) Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: 夏亮 [mailto:xia...@zh...] Sent: Friday, August 06, 2010 4:41 AM To: moo...@li... Subject: [Moosefs-users] help Hi: I am a programmer using the C language. I want to add a C interface to moosefs, to be used like libhdfs in Hadoop. Could you give me some advice? thanks |
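A self-contained C sketch of that walk, with a toy (parent i-node, name) -> i-node table standing in for the master's metadata (the table contents are invented for illustration, not MooseFS source):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

struct dentry { uint32_t parent; const char *name; uint32_t inode; };

/* toy metadata: the root directory has i-node 1 */
static const struct dentry table[] = {
    { 1, "foo", 2 }, { 2, "has", 3 }, { 3, "bar", 4 },
};

static uint32_t lookup(uint32_t parent, const char *name) {  /* one LOOKUP(N, Y)->M */
    for (size_t i = 0; i < sizeof table / sizeof *table; i++)
        if (table[i].parent == parent && strcmp(table[i].name, name) == 0)
            return table[i].inode;
    return 0;  /* not found */
}

static uint32_t resolve(const char *path) {
    char buf[256];
    strncpy(buf, path, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    uint32_t inode = 1;  /* start at the root i-node */
    for (char *tok = strtok(buf, "/"); tok && inode; tok = strtok(NULL, "/"))
        inode = lookup(inode, tok);  /* one LOOKUP per path component */
    return inode;
}

int main(void) {
    printf("foo/has/bar -> i-node %u\n", resolve("foo/has/bar"));  /* prints 4 */
    return 0;
}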
From: Anh K. H. <ky...@vi...> - 2010-08-12 11:09:47
|
Hi, I am a moosefs newbie. I am using EC2 instances and I intend to build a moosefs system to share EBS disks between instances. My question is: can I use moosefs for logs? My applications (web server, applications) need to write to log files, but I don't know if there's any performance problem when logs are written to a moosefs disk. Thank you for your help, Regards, -- Anh Ky Huynh |
From: Michał B. <mic...@ge...> - 2010-08-11 09:33:35
|
Yes, we can think of the metalogger as a "spare" master. When the master fails, the metalogger can easily and quickly take over its functions. The process of switching to the metalogger by using CARP is described here: http://www.moosefs.org/mini-howtos.html#redundant-master (we have not yet tried Heartbeat and as far as we know nobody from the community has done it yet either). And with the metalogger it is easy to back up the metadata on the fly. Of course, you can additionally back it up to external resources, but you need to prepare extra scripts. Regards Michal From: Stas Oskin [mailto:sta...@gm...] Sent: Wednesday, August 11, 2010 10:47 AM To: Michał Borychowski Cc: moosefs-users Subject: Re: [Moosefs-users] Fwd: Fwd: Backing up MFS metadata Hi. Thanks a lot - much appreciate your answers :). > If you have any other doubts please let us know. Just one question - what is the main benefit of keeping metaloggers, considering the metadata can be backed up to an external additional medium? Is it the option of high availability, where the metalogger could be quickly brought down, and a master brought up using the metalogger data? Regards. |
From: Stas O. <sta...@gm...> - 2010-08-11 08:47:08
|
Hi. Thanks a lot - much appreciate your answers :). > If you have any other doubts please let us know. Just one question - what is the main benefit of keeping metaloggers, considering the metadata can be backed up to an external additional medium? Is it the option of high availability, where the metalogger could be quickly brought down, and a master brought up using the metalogger data? Regards. |
From: Michał B. <mic...@ge...> - 2010-08-11 07:48:11
|
From: Stas Oskin [mailto:sta...@gm...] Sent: Monday, August 09, 2010 5:18 PM To: moosefs-users Subject: [Moosefs-users] Fwd: Fwd: Backing up MFS metadata Hi. After reading the "Metadata files ins and outs" article, and this email thread, I'm still left with several questions that I would like to clarify: 1) Does mfsmaster dump to the checkpoint files an up-to-date representation of the memory, at periods of 1-2 minutes maximum? Meaning, will backing up the /var/mfs directory and restoring it leave me with at most 1-2 minutes of data loss? [MB] The "mfsmaster.mfs.back" file by default is saved every hour, but "changelog.0.mfs" is saved on the fly, so the delay is just the milliseconds needed for the writing process. So a data loss of 1-2 minutes would really be the most negative scenario. 2) Does the metalogger pull an up-to-date version of the master checkpoint files as well? If I move the master to the metalogger and restore the checkpoint files, will it also leave me with at most 1-2 minutes of data loss? [MB] This delay should also be close to 0 (on condition that the metalogger has been working all the time). The file metadata_ml.mfs.back is dumped every 24 hours, but changelogs are saved on the fly. Each change that the master saves to a file is also sent to the metalogger. In case of master server failure you would lose only the data stuck in the buffer of the outgoing network interface; again a matter of milliseconds. 3) Is there any benefit to lowering the default 24-hour download frequency? Or will the streamed updates from master to metalogger be enough? [MB] When the metalogger restarts you would have incomplete changelogs (because while it is down it doesn't save the data, and after a restart the master server doesn't resend the not-yet-saved changes - but this behavior will be changed soon). So if you set the frequency to e.g. 6 hours you decrease the probability of "holes" in the changelogs in case the metalogger goes down. 4) In case I do want to back up to external storage (a NAS for example), which would be the better source machine - the master (sounds more logical) or the metalogger? [MB] Yes, the master. 5) If I have 3 metaloggers, for example, how much load will it incur on the master server? [MB] For the master the load is really negligible. If you have any other doubts please let us know. Regards Michal Thanks in advance for answers, as it would clarify the backup topic for me completely! Regards. |
From: Zhaowen Li <liz...@gm...> - 2010-08-11 03:28:42
|
Hi, recently I've started to read the source code of mfs. The version I downloaded is mfs-1.6.17. I want to work out the logical relationships between the modules and get an overall view of the architecture of the source code. But I found there are few comments in the source files, and I can't figure out what a given function or source file is responsible for. Where can I get more information about the source code's architecture? Are there manuals or doxygen files that could help? Or can you provide a more detailed description of each directory's and each source file's functionality? E.g., what is \mfs-mount\cscomm.h/c used for? This kind of information would already help me a lot. Expecting your reply! Thanks! |
From: Stas O. <sta...@gm...> - 2010-08-10 13:52:28
|
Hi. > > Thanks for pointing this out. We will implement something like FILE_UMASK in > a future version so it will be possible to decide about the access > rights. > Great to know :). Regards. |
From: Michał B. <mic...@ge...> - 2010-08-10 10:35:48
|
Hi! Thanks for pointing this out. We will implement something like FILE_UMASK in a future version so it will be possible to decide about the access rights. Regards Michal From: Stas Oskin [mailto:sta...@gm...] Sent: Monday, August 09, 2010 7:29 PM To: moosefs-users Subject: [Moosefs-users] Changing metadata permissions Hi. I'm trying to set up an external process which would back up the metadata. Unfortunately the metadata files are readable by the mfs user only. Is it possible to pass any parameter to MFS to create the files readable by other users as well? Or is the only solution to sudo the backup process to the mfs user? Regards. |
From: Michał B. <mic...@ge...> - 2010-08-10 10:34:13
|
Has the metalogger been restarted after downloading the metadata_ml.mfs.back file? When you restore data on the metalogger machine you do not need the master server. But please mind that you cannot issue a command like this: mfsmetarestore -m /data/mfs/meta/metadata_ml.mfs.back -o /data/mfs/meta/metadata.mfs /data/mfs/meta/changelog_ml.*.mfs As stated on the website I gave you below: When you pass changelog filenames as parameters to the mfsmetarestore command line tool, it is necessary to pass them in chronological order. We therefore recommend using mfsmetarestore -a, which will automatically find and apply the changelog files in their proper order. So please just use "mfsmetarestore -a" on the metalogger machine. If you still have problems you can send us the gzipped metadata files and we will look into why the problem appears. Regards Michał From: Roast [mailto:zha...@gm...] Sent: Tuesday, August 10, 2010 10:58 AM To: Michał Borychowski Cc: moosefs-users Subject: Re: [Moosefs-users] problem with mfsmetarestore. Thanks Michał. I want to restore the metadata from the metalogger machine, so there is no mfsdata.mfs.back on the metalogger machine. And on the metalogger server, metadata_ml.mfs.back syncs from the master every 24 hours by default, so I think we can restore the metadata with metadata_ml.mfs.back and changelog_ml.0.mfs ~ changelog_ml.24.mfs. But we met version mismatch and Data mismatch. Here are some new logs. ----------------------------------------------------------------------------------------------------------------------- version after applying changelog: 2072593 applying changes from file: /usr/local/mfs/var/mfs/changelog_ml.1.mfs meta data version: 2072593 version after applying changelog: 2077544 applying changes from file: /usr/local/mfs/var/mfs/changelog_ml.0.mfs meta data version: 2077544 2081473: error: 32 (Data mismatch) ----------------------------------------------------------------------------------------------------------------------- And can you tell me how to fix this problem? Or how to restore the metadata without the master? Thanks again. 2010/8/10 Michał Borychowski <mic...@ge...> Normally you do not run the master process on the metalogger machine. You do it only when the main master server breaks down. In this situation you stop the metalogger, do the metarestore, and run the master process. You do not need to do anything special to generate metadata_ml.mfs.back on the metalogger machine. It will just be created. See http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html and read about recovering metadata with mfsmetarestore. Generally you would need to run "mfsmetarestore -a". Regards Michał From: Roast [mailto:zha...@gm...] Sent: Thursday, July 29, 2010 4:11 AM To: moosefs-users Subject: [Moosefs-users] problem with mfsmetarestore. Hi, all. I want to set up the metalogger server as the backup for the master, so I use mfsmetarestore to generate the metadata.mfs to start the master on the metalogger server. But it doesn't work. Here are some logs: ----------------------------------------------------------------------------------------------------------------------- [root@localhost ~]# /usr/local/mfs/sbin/mfsmetarestore -m /data/mfs/meta/metadata_ml.mfs.back -o /data/mfs/meta/metadata.mfs /data/mfs/meta/changelog_ml.*.mfs loading objects (files,directories,etc.) ... ok loading names ... ok loading deletion timestamps ... ok checking filesystem consistency ... ok loading chunks data ... ok connecting files and chunks ... 
ok applying changes from file: /data/mfs/meta/changelog_ml.0.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.10.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.11.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.12.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.13.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.14.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.15.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.16.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.17.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.18.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.19.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.1.mfs meta data version: 961564 963911: version mismatch [root@localhost ~]# ---------------------------------------------------------------------------- ------------------------------------------- version mismatch? How to fix this problem? Thanks. -- The time you enjoy wasting is not wasted time! -- The time you enjoy wasting is not wasted time! |
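The "version mismatch" in the log above is what out-of-order application produces. A toy C sketch of the rule (not mfsmetarestore source; the entry below is invented): each changelog entry carries a version number, and an entry can only be applied when it matches the metadata's next expected version:

#include <stdio.h>
#include <inttypes.h>

struct entry { uint64_t version; const char *op; };

/* Apply entries only when they match the expected metadata version;
   an entry from "the future" means some changelog was skipped or reordered. */
static int apply_log(uint64_t *meta_version, const struct entry *log, int n) {
    for (int i = 0; i < n; i++) {
        if (log[i].version < *meta_version)
            continue;  /* already applied earlier, skip */
        if (log[i].version > *meta_version) {
            printf("%" PRIu64 ": version mismatch\n", log[i].version);
            return -1;  /* a gap: a changelog is missing or out of order */
        }
        /* ... apply log[i].op to the metadata here ... */
        (*meta_version)++;
    }
    return 0;
}

int main(void) {
    uint64_t version = 961564;  /* version loaded from metadata_ml.mfs.back */
    struct entry too_new[] = { { 963911, "CREATE" } };
    apply_log(&version, too_new, 1);  /* prints "963911: version mismatch" */
    return 0;
}

This is why "mfsmetarestore -a" is recommended: it orders the changelogs itself instead of trusting shell glob order.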
From: Scoleri, S. <Sco...@gs...> - 2010-08-10 10:30:16
|
Ah yes, sorry. From: Michał Borychowski [mailto:mic...@ge...] Sent: Tuesday, August 10, 2010 5:10 AM To: Scoleri, Steven Cc: moo...@li... Subject: RE: [Moosefs-users] Request Hi Steven! The master server version is available on the Info tab in the very first column, "version". Kind regards Michal Borychowski From: Scoleri, Steven [mailto:Sco...@gs...] Sent: Wednesday, July 28, 2010 3:53 PM To: moo...@li... Subject: [Moosefs-users] Request Any chance we can get the mfsmaster version to show up in the CGI? The chunkserver versions show up, but it would be cool for the metaserver version to be there as well. BTW - updated my mfsmaster this morning (flawless). Thanks, -Scoleri |
From: Michał B. <mic...@ge...> - 2010-08-10 09:19:43
|
At our company we have two vmware machines running in a production environment. One image is now about 4.5GB, the second about 2.5GB. Both images expand as the virtual machine needs more space, and Linux is installed on them. The images are stored in MooseFS with goal=2. We do not notice any performance drawbacks even though the machines are quite busy. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Ólafur Ósvaldsson [mailto:osv...@ne...] Sent: Friday, August 06, 2010 12:50 PM To: moo...@li... Subject: [Moosefs-users] Using MooseFS as a VM diskimage storage Hi, I was wondering if anyone has experience with using MFS to store disk images used by Xen or KVM virtual machines? If yes, are there any known issues with this setup, be it performance or other? /Oli -- Ólafur Osvaldsson System Administrator e-mail: osv...@ne... phone: +354 517 3418 |
From: Michał B. <mic...@ge...> - 2010-08-10 09:10:05
|
Hi Steven! The master server version is available on the Info tab in the very first column, "version". Kind regards Michal Borychowski From: Scoleri, Steven [mailto:Sco...@gs...] Sent: Wednesday, July 28, 2010 3:53 PM To: moo...@li... Subject: [Moosefs-users] Request Any chance we can get the mfsmaster version to show up in the CGI? The chunkserver versions show up, but it would be cool for the metaserver version to be there as well. BTW - updated my mfsmaster this morning (flawless). Thanks, -Scoleri |
From: Roast <zha...@gm...> - 2010-08-10 08:58:50
|
Thanks Michał. I want to restore the metadata from the metalogger machine, so there is no mfsdata.mfs.back on the metalogger machine. And on the metalogger server, metadata_ml.mfs.back syncs from the master every 24 hours by default, so I think we can restore the metadata with metadata_ml.mfs.back and changelog_ml.0.mfs ~ changelog_ml.24.mfs. But we met version mismatch and Data mismatch. Here are some new logs. ----------------------------------------------------------------------------------------------------------------------- version after applying changelog: 2072593 applying changes from file: /usr/local/mfs/var/mfs/changelog_ml.1.mfs meta data version: 2072593 version after applying changelog: 2077544 applying changes from file: /usr/local/mfs/var/mfs/changelog_ml.0.mfs meta data version: 2077544 2081473: error: 32 (Data mismatch) ----------------------------------------------------------------------------------------------------------------------- And can you tell me how to fix this problem? Or how to restore the metadata without the master? Thanks again. 2010/8/10 Michał Borychowski <mic...@ge...> > Normally you do not run the master process on the metalogger machine. You > do it only when the main master server breaks down. In this situation you > stop the metalogger, do the metarestore, and run the master process. > > You do not need to do anything special to generate metadata_ml.mfs.back on > the metalogger machine. It will just be created. > > See http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html and read about recovering metadata with mfsmetarestore. Generally you would > need to run “mfsmetarestore -a”. > > Regards > > Michał > > From: Roast [mailto:zha...@gm...] > Sent: Thursday, July 29, 2010 4:11 AM > To: moosefs-users > Subject: [Moosefs-users] problem with mfsmetarestore. > > Hi, all. > > I want to set up the metalogger server as the backup for the master, so I > use mfsmetarestore to generate the metadata.mfs to start the master on the > metalogger server. But it doesn't work. > > Here are some logs: > > ----------------------------------------------------------------------------------------------------------------------- > [root@localhost ~]# /usr/local/mfs/sbin/mfsmetarestore -m > /data/mfs/meta/metadata_ml.mfs.back -o /data/mfs/meta/metadata.mfs > /data/mfs/meta/changelog_ml.*.mfs > loading objects (files,directories,etc.) ... ok > loading names ... ok > loading deletion timestamps ... ok > checking filesystem consistency ... ok > loading chunks data ... ok > connecting files and chunks ... ok > applying changes from file: /data/mfs/meta/changelog_ml.0.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.10.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.11.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.12.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.13.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.14.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.15.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.16.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.17.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.18.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.19.mfs > meta data version: 961564 > version after applying changelog: 961564 > applying changes from file: /data/mfs/meta/changelog_ml.1.mfs > meta data version: 961564 > 963911: version mismatch > [root@localhost ~]# > > ----------------------------------------------------------------------------------------------------------------------- > > version mismatch? How to fix this problem? > > Thanks. > > -- > The time you enjoy wasting is not wasted time! -- The time you enjoy wasting is not wasted time! |
From: Michał B. <mic...@ge...> - 2010-08-10 08:00:26
|
Hi Kuer! No, there is no limit on i-nodes in the master server. This message just shows how many i-nodes were deleted by the FREEINODES function, which releases unused i-nodes. In your example the function just deleted 0 unused i-nodes. BTW - maybe we will stop logging this information at all, as nothing was released and so the metadata was not changed. So these are really normal messages, nothing to worry about. Doing "kill -9 mfsmaster" you would not lose anything. There is a tool, mfsmetarestore, which restores the metadata. If you feel you need it, you can make a backup of /usr/local/var/mfs and additionally issue: "/usr/local/sbin/mfsmetadump /usr/local/var/mfs/metadata.mfs.back" and check if all the files are visible in this dump. Kind regards Michał From: kuer ku [mailto:ku...@gm...] Sent: Tuesday, August 10, 2010 3:38 AM To: Michał Borychowski Subject: Re: [Moosefs-users] what should I do IF cannot shutdown master normally ?? Hi, Michal, I do NOT want to "kill -9 master", to avoid losing information. There are no other special messages in syslog when MFS is hung up. I noticed another message: $ tail -f changelog.0.mfs 143616569: 1280322540|EMPTYRESERVED():0 143616570: 1280322600|FREEINODES():0 143616571: 1280322600|EMPTYRESERVED():0 143616572: 1280322600|EMPTYTRASH():0,0 143616573: 1280322660|FREEINODES():0 ^^^^^^^^^^^^^^^^^^^^^^ FREEINODES() == 0 I guess there are no more i-nodes in the master, so no new files can be created, and so MFS hangs up. BUT I do not understand the relationship between MFS disk space and i-node capacity. SO I attached another mfschunkserver to the master, and the system ran again. QUESTION: is there any document describing the relationship between disk space and i-node capacity?? -- kuer 2010/8/9 Michał Borychowski <mic...@ge...> These messages (143616569: 1280322540|EMPTYRESERVED():0) are normal ones. Periodically run functions delete free i-nodes and deleted/"reserved" files. 0 means there was nothing to do. It may happen that there are still some operations on the clients' side. If nothing else happens you can always kill the master server process (in this case it would be "kill -9 23172"), wait till the process ends, run "mfsmetarestore -a" and start the master again with "mfsmaster start". Are there any other messages in syslog close to these operations? Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: kuer ku [mailto:ku...@gm...] Sent: Wednesday, July 28, 2010 3:22 PM To: moo...@li... Subject: Re: [Moosefs-users] what should I do IF cannot shutdown master normally ?? hi, all, I found some strange things in the changelog, $ tail -f changelog.0.mfs 143616569: 1280322540|EMPTYRESERVED():0 143616570: 1280322600|FREEINODES():0 143616571: 1280322600|EMPTYRESERVED():0 143616572: 1280322600|EMPTYTRASH():0,0 143616573: 1280322660|FREEINODES():0 143616574: 1280322660|EMPTYRESERVED():0 143616575: 1280322720|FREEINODES():0 143616576: 1280322720|EMPTYRESERVED():0 143616577: 1280322780|FREEINODES():0 143616578: 1280322780|EMPTYRESERVED():0 what does this mean, and how do I fix it? 
Jul 28 21:19:00 meta1 mfsmaster[23172]: chunkservers status: Jul 28 21:19:00 meta1 mfsmaster[23172]: server 1 (ip: 221.194.134.189, port: 19322): usedspace: 943864180736 (879.04 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 62.92% Jul 28 21:19:00 meta1 mfsmaster[23172]: server 2 (ip: 221.194.134.187, port: 19322): usedspace: 957016182784 (891.29 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.79% Jul 28 21:19:00 meta1 mfsmaster[23172]: server 3 (ip: 221.194.134.181, port: 19322): usedspace: 1898559021056 (1768.17 GiB), totalspace: 3000328257536 (2794.27 GiB), usage: 63.28% Jul 28 21:19:00 meta1 mfsmaster[23172]: server 4 (ip: 221.194.134.186, port: 19322): usedspace: 940963352576 (876.34 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 62.72% Jul 28 21:19:00 meta1 mfsmaster[23172]: server 5 (ip: 221.194.134.184, port: 19322): usedspace: 944276942848 (879.43 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 62.94% Jul 28 21:19:00 meta1 mfsmaster[23172]: server 6 (ip: 221.194.134.190, port: 19322): usedspace: 1893327695872 (1763.30 GiB), totalspace: 3000328257536 (2794.27 GiB), usage: 63.10% Jul 28 21:19:00 meta1 mfsmaster[23172]: server 7 (ip: 221.194.134.188, port: 19322): usedspace: 957261549568 (891.52 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.81% Jul 28 21:19:00 meta1 mfsmaster[23172]: server 8 (ip: 221.194.134.185, port: 19322): usedspace: 957269495808 (891.53 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.81% Jul 28 21:19:00 meta1 mfsmaster[23172]: server 9 (ip: 221.194.134.183, port: 19322): usedspace: 957314211840 (891.57 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.81% Jul 28 21:19:00 meta1 mfsmaster[23172]: server 10 (ip: 221.194.134.182, port: 19322): usedspace: 956960980992 (891.24 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.79% Jul 28 21:19:00 meta1 mfsmaster[23172]: total: usedspace: 11406813614080 (10623.42 GiB), totalspace: 18001969545216 (16765.64 GiB), usage: 63.36% There is still free space on the chunkservers (almost 40% free); what resource ran out, and how do I fix it? thanks -- kuer On Wed, Jul 28, 2010 at 8:59 PM, kuer ku <ku...@gm...> wrote: hi, I just want to know why the master got stuck, but I find nothing wrong in /var/log/messages. From /var/log/messages, it seems that the master still works, but why does it NOT exit when it gets SIGTERM? Is there any other way I can find some useful messages? thanks -- kuer On Wed, Jul 28, 2010 at 8:52 PM, kuer ku <ku...@gm...> wrote: hi, all, I cannot shut down the moosefs master normally. On shutdown, it shows: working directory: /usr/local/moosefs/bin/master sending SIGTERM to lock owner (pid:23172) waiting for termination ... 10s 20s 30s 40s 50s give up Something must be wrong, what should I do? thanks -- kuer |
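For reference, those changelog lines follow the pattern "<id>: <timestamp>|<OPERATION>(<args>):<result>", so FREEINODES():0 means the periodic free-i-node pass released 0 i-nodes. A minimal C sketch that picks one such line apart (illustrative only; it handles just the no-argument form):

#include <stdio.h>
#include <inttypes.h>

int main(void) {
    const char *line = "143616573: 1280322660|FREEINODES():0";
    uint64_t id, ts;
    char op[32];
    unsigned result;
    /* "<id>: <timestamp>|<OP>():<result>" */
    if (sscanf(line, "%" SCNu64 ": %" SCNu64 "|%31[^(]():%u", &id, &ts, op, &result) == 4)
        printf("entry %" PRIu64 ": %s released %u i-nodes at unix time %" PRIu64 "\n",
               id, op, result, ts);
    return 0;
}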
From: Michał B. <mic...@ge...> - 2010-08-10 07:47:32
|
Hi! Please share with us where you see the usage of such an interface? We already have a mini-library for this but it is not very efficient. It would need some additional mechanisms such as read-ahead, etc. On the other hand FUSE works nicely and smoothly, so we are not sure there is really a need for an additional C library. If you would like to contribute to the project, there would probably be some other issues you could take care of :) Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: 夏亮 [mailto:xia...@zh...] Sent: Friday, August 06, 2010 4:41 AM To: moo...@li... Subject: [Moosefs-users] help Hi: I am a programmer using the C language. I want to add a C interface to moosefs, to be used like libhdfs in Hadoop. Could you give me some advice? thanks |
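For illustration, a libhdfs-style binding might be shaped like the stub sketch below - every name in it is hypothetical, invented for this example, and not an existing MooseFS library; a real implementation would speak the master/chunkserver protocol the way mfsmount does:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

typedef struct mfs_conn { char *master_host; int port; } mfs_conn;  /* connection to mfsmaster */
typedef struct mfs_file { mfs_conn *conn; char *path; } mfs_file;   /* open file handle */

mfs_conn *mfs_connect(const char *host, int port) {
    mfs_conn *c = malloc(sizeof *c);
    if (c) { c->master_host = strdup(host); c->port = port; }
    return c;  /* stub: a real binding would open a socket to the master here */
}

mfs_file *mfs_open(mfs_conn *c, const char *path, int flags) {
    (void)flags;
    mfs_file *f = malloc(sizeof *f);
    if (f) { f->conn = c; f->path = strdup(path); }
    return f;  /* stub: would resolve the path with LOOKUPs and open on the master */
}

ssize_t mfs_pread(mfs_file *f, void *buf, size_t len, off_t offset) {
    (void)f; (void)buf; (void)len; (void)offset;
    return -1;  /* stub: would fetch chunk locations and read from chunkservers */
}

int main(void) {
    mfs_conn *c = mfs_connect("mfsmaster", 9421);  /* 9421 is the client port mentioned in this thread */
    mfs_file *f = mfs_open(c, "/some/file", 0);
    printf("conn=%p file=%p\n", (void *)c, (void *)f);
    return 0;
}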
From: Michał B. <mic...@ge...> - 2010-08-10 07:42:48
|
This is just the way the chunkserver works. It does not store any additional information on the system disk; at every startup it scans the disks holding the data. If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Ólafur Ósvaldsson [mailto:osv...@ne...] Sent: Friday, August 06, 2010 3:32 PM To: moo...@li... Subject: [Moosefs-users] Questions regarding chunkservers Hi, I'm looking at the pros and cons of using MFS in our environment and there are a few things that I have not found an answer to yet. How much CPU and RAM is required for chunkservers apart from the requirements of the OS itself? How much info does the chunkserver need after a reboot other than the chunks themselves? I'm thinking about running the OS from RAM and having the system disks only for MFS; that would result in the chunkserver having no data available at startup except for chunks written before the reboot and the initial config. Would this result in the chunkserver losing all its chunks, or would it recover and just notify the master of the chunks that it finds on its drives? Hopefully my questions make sense. /Oli -- Ólafur Osvaldsson System Administrator e-mail: osv...@ne... phone: +354 517 3418 |
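A toy C sketch of that startup behavior (illustrative only - the real chunk file naming and directory layout differ): the chunkserver keeps nothing on the system disk, rescans its data directories at startup, and reports what it finds to the master:

#include <stdio.h>
#include <string.h>
#include <dirent.h>

int main(void) {
    const char *dir = "/mnt/mfschunks1";  /* hypothetical chunkserver data directory */
    DIR *d = opendir(dir);
    if (!d) { perror("opendir"); return 1; }
    struct dirent *e;
    unsigned long found = 0;
    while ((e = readdir(d)) != NULL)
        if (strncmp(e->d_name, "chunk_", 6) == 0)  /* file name starts with chunk_ */
            found++;
    closedir(d);
    printf("found %lu chunk files to report to the master\n", found);
    return 0;
}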
From: Michał B. <mic...@ge...> - 2010-08-10 07:32:45
|
Normally you do not run the master process on the metalogger machine. You do it only when the main master server breaks down. In this situation you stop the metalogger, do the metarestore, and run the master process. You do not need to do anything special to generate metadata_ml.mfs.back on the metalogger machine. It will just be created. See http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html and read about recovering metadata with mfsmetarestore. Generally you would need to run "mfsmetarestore -a". Regards Michał From: Roast [mailto:zha...@gm...] Sent: Thursday, July 29, 2010 4:11 AM To: moosefs-users Subject: [Moosefs-users] problem with mfsmetarestore. Hi, all. I want to set up the metalogger server as the backup for the master, so I use mfsmetarestore to generate the metadata.mfs to start the master on the metalogger server. But it doesn't work. Here are some logs: ----------------------------------------------------------------------------------------------------------------------- [root@localhost ~]# /usr/local/mfs/sbin/mfsmetarestore -m /data/mfs/meta/metadata_ml.mfs.back -o /data/mfs/meta/metadata.mfs /data/mfs/meta/changelog_ml.*.mfs loading objects (files,directories,etc.) ... ok loading names ... ok loading deletion timestamps ... ok checking filesystem consistency ... ok loading chunks data ... ok connecting files and chunks ... ok applying changes from file: /data/mfs/meta/changelog_ml.0.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.10.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.11.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.12.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.13.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.14.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.15.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.16.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.17.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.18.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.19.mfs meta data version: 961564 version after applying changelog: 961564 applying changes from file: /data/mfs/meta/changelog_ml.1.mfs meta data version: 961564 963911: version mismatch [root@localhost ~]# ----------------------------------------------------------------------------------------------------------------------- version mismatch? How to fix this problem? Thanks. -- The time you enjoy wasting is not wasted time! |
From: Stas O. <sta...@gm...> - 2010-08-09 17:29:48
|
Hi. I'm trying to set up an external process which would back up the metadata. Unfortunately the metadata files are readable by the mfs user only. Is it possible to pass any parameter to MFS to create the files readable by other users as well? Or is the only solution to sudo the backup process to the mfs user? Regards. |
From: Stas O. <sta...@gm...> - 2010-08-09 15:18:51
|
Hi. After reading the "Metadata files ins and outs" article, and this email thread, I'm still left with several questions that I would like to clarify: 1) Does mfsmaster dump to the checkpoint files an up-to-date representation of the memory, at periods of 1-2 minutes maximum? Meaning, will backing up the /var/mfs directory and restoring it leave me with at most 1-2 minutes of data loss? 2) Does the metalogger pull an up-to-date version of the master checkpoint files as well? If I move the master to the metalogger and restore the checkpoint files, will it also leave me with at most 1-2 minutes of data loss? 3) Is there any benefit to lowering the default 24-hour download frequency? Or will the streamed updates from master to metalogger be enough? 4) In case I do want to back up to external storage (a NAS for example), which would be the better source machine - the master (sounds more logical) or the metalogger? 5) If I have 3 metaloggers, for example, how much load will it incur on the master server? Thanks in advance for answers, as it would clarify the backup topic for me completely! Regards. |
From: Michał B. <mic...@ge...> - 2010-08-09 13:24:55
|
MooseFS is POSIX compliant so you can normally set read / write / execute rights for owner / group / others. Probably in the near future we'll additionally introduce ACL rights management (but it is not yet ready). If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: li...@ci... [mailto:li...@ci...] Sent: Monday, August 09, 2010 7:37 AM To: moo...@li... Subject: [Moosefs-users] moosefs file permission issue Hi all experts: Thanks for developing this great file system. One thing I want to know is whether MooseFS can offer the same access control as NFS. As far as I know, NFS can map the local system's user rights if they are the same as on the NFS server, but when I mount MooseFS, what I've seen is that all users have read/write access to everyone's files. Is there any way to achieve that each user can only read & write their own files? Best regards! Alex Li 李杰 Firm Management 中国国际金融有限公司 China International Capital Corporation Limited Tel : (86 10) 6505-1166 ext. 3485 Fax: (86 10) 6505-9539 E-mail: li...@ci... Mobile: 13651054608 |
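Since the mount point behaves like any POSIX filesystem, the usual permission calls apply to it; a minimal C check (the path is hypothetical and assumed to exist on an MFS mount):

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "/mnt/mfs/example.txt";  /* assumed existing file on the mount */
    if (chmod(path, 0600) != 0) { perror("chmod"); return 1; }  /* owner read/write only */
    struct stat st;
    if (stat(path, &st) != 0) { perror("stat"); return 1; }
    printf("mode: %o\n", (unsigned)(st.st_mode & 07777));  /* prints 600 */
    return 0;
}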
From: Michał B. <mic...@ge...> - 2010-08-09 13:15:50
|
Shen, thanks for the reply :) Tian, these limits have been changed in 1.6.16 and the latest stable is now 1.6.17, so we would recommend you just update the master server to 1.6.17. If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Shen Guowen [mailto:sh...@ui...] Sent: Monday, August 09, 2010 4:42 AM To: TianYuchuan(田玉川) Cc: moo...@li... Subject: Re: [Moosefs-users] mfs-master[10546]: CS(192.168.0.125) packet too long (115289537/50000000) Don't worry! This is because some of your chunk servers are currently unreachable; the master server notices it, then modifies the metadata of files on those chunk servers to set "allvalidcopies" to 0 in "struct chunk". When the master is rescanning the files (fs_test_files() in filesystem.c), it finds out that the number of valid copies is 0, then prints information to the syslog file, just as listed below. However, the printing process is quite time-consuming, especially when the amount of files is large. During this period, the master ignores the chunk servers' connections (because it is in a big loop testing files, and a single thread does this; maybe this is a pitfall). So even though you make sure the chunk server is working correctly, it is useless (you can see the reconnecting information in the chunk server's syslog file). You can let the master finish printing; then it will reconnect with the chunk servers, notice the files are there, and set "allvalidcopies" to a correct value. Then it works normally. Or you can re-compile the program with lines 5512 and 5482 in filesystem.c (mfs-1.6.15) commented out. It will skip the print messages and, of course, reduce the fs test time. Below is from Michal: ----------------------------------------------------------------------- We give you here some quick patches you can apply to the master server to improve its performance for that amount of files: In matocsserv.c in mfsmaster you need to change this line: #define MaxPacketSize 50000000 into this: #define MaxPacketSize 500000000 Also we suggest a change in filesystem.c in mfsmaster in the "fs_test_files" function. Change this line: if ((uint32_t)(main_time())<=starttime+150) { into: if ((uint32_t)(main_time())<=starttime+900) { And also change this line: for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) { into this: for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) { You need to recompile the master server and start it again. The above changes should make the master server work more stably with a large amount of files. Another suggestion would be to create two MooseFS instances (eg. 2 x 200 million files). One master server could also be the metalogger for the other system and vice versa. Kind regards Michał ----------------------------------------------------------------------------- -- Guowen Shen On Sun, 2010-08-08 at 22:51 +0800, TianYuchuan(田玉川) wrote: > > > hello, everyone! > I have a big question, please help me, thank you very much. > We intend to use moosefs in our production environment as the storage of > our online photo service. > We'll store about 200 million photo files. > I've built one master server (48G mem), one metalogger server, and eight > chunk servers (8*1T SATA). When I copy photo files to the moosefs > system, at the start everything is good. 
But after I had copied 57 > million files, the master machine's CPU was at 100%. > I stopped the master using “/user/local/mfs/sbin/mfsmasterserver > -s”, then I started the master again. But there was a big problem: the > master had not read my files. These files are important to me, I > am very anxious, please help me recover them, thanks. > > I got many error messages in syslog from the master server: > > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 41991323: 2668/2526212449954462668/176s.jpg > Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable > chunk 00000000043CD358 (inode: 50379931 ; index: 0) > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 50379931: 2926/4294909215566102926/163b.jpg > Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable > chunk 00000000002966C3 (inode: 48284 ; index: 0) > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 48284: bookdata/178/8533354296639220178/180b.jpg > Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable > chunk 0000000000594726 (inode: 4242588 ; index: 0) > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 4242588: bookdata/6631/4300989258725036631/85s.jpg > Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable > chunk 0000000000993541 (inode: 8436892 ; index: 0) > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 8436892: bookdata/7534/3147352338521267534/122b.jpg > Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable > chunk 0000000000D906E6 (inode: 12631196 ; index: 0) > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 12631196: bookdata/8691/11879047433161548691/164s.jpg > Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable > chunk 000000000118DC1E (inode: 16825500 ; index: 0) > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 16825500: bookdata/1232/17850056326363351232/166b.jpg > Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable > chunk 0000000001681BC7 (inode: 21019804 ; index: 0) > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 21019804: bookdata/26/12779298489336140026/246s.jpg > Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable > chunk 0000000001A804E1 (inode: 25214108 ; index: 0) > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 25214108: bookdata/3886/8729781571075193886/30s.jpg > Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable > chunk 0000000001E7E826 (inode: 29408412 ; index: 0) > Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable > file 29408412: bookdata/4757/142868991575144757/316b.jpg > > > Aug 7 23:56:36 localhost mfsmaster[10546]: CS(192.168.0.124) packet > too long (115289537/50000000) > Aug 7 23:56:36 localhost mfsmaster[10546]: chunkserver disconnected - > ip: 192.168.0.124, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 > (0.00 GiB) > Aug 8 00:08:14 localhost mfsmaster[10546]: CS(192.168.0.127) packet > too long (104113889/50000000) > Aug 8 00:08:14 localhost mfsmaster[10546]: chunkserver disconnected - > ip: 192.168.0.127, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 > (0.00 GiB) > Aug 8 00:21:03 localhost mfsmaster[10546]: CS(192.168.0.120) packet > too long (117046565/50000000) > Aug 8 00:21:03 localhost mfsmaster[10546]: chunkserver disconnected - > ip: 192.168.0.120, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 > (0.00 GiB) > > when I visited the mfscgi, the error was “Can't connect to MFS master > (IP:127.0.0.1 ; PORT:9421)”. > > Thanks all! |
From: Michał B. <mic...@ge...> - 2010-08-09 13:06:26
|
If the server already has apache / lighttpd for other websites, it is reasonable to serve the CGI monitor files from the existing server. On the other hand, we introduced mfscgiserv in case somebody doesn't have any other webserver. For sure mfscgiserv needs fewer resources than a full apache. As far as I remember there is a configure option for deciding whether we want mfscgiserv. We leave the choice to users in the rpms. Regards Michał -----Original Message----- From: Ricardo J. Barberis [mailto:ric...@da...] Sent: Thursday, August 05, 2010 7:23 PM To: moo...@li... Subject: Re: [Moosefs-users] standalone mfs cgi serv On Thu, 05 August 2010, Laurent Wandrebeck wrote: > Hi, Hi Laurent! > I'm in touch with the RPM maintainer about adding an init script for > mfscgiserv. He has concerns about its behaviour in a loaded env, and > he's wondering if it's a good idea to let it run for a long time as a > standalone, arguing that if it has to run all the time, you'd better > run it with apache (he provides a package especially for that). > What are your experiences with mfscgiserv ? FWIW, I have had mfscgiserv running for almost four months (since April 20) without any issue on CentOS 5.5 x86_64. This is of course on an internal network, only accessible from our NOC, but it might be nice to have it on the public internet. For that you'd need some authorization mechanism, and IMHO the rpm does the right thing. That said, the init script could be chkconfig'ed off and the admin would choose the way to run the cgi, either through apache or standalone. Regards, -- Ricardo J. Barberis Senior SysAdmin - I+D Dattatec.com :: Soluciones de Web Hosting Su Hosting hecho Simple..! |
From: Michał B. <mic...@ge...> - 2010-08-09 12:59:16
|
This is not normal behavior. With goal=3 data transmission looks like this: client <=> cs1 <=> cs2 <=> cs3 (where the order of chunkservers is different for different chunks) and it should normally work at speeds similar to goal=1 (for a 100Mbit/s network we would expect 6-7MB/s). But please check if you have full-duplex enabled. If you only have half-duplex, speed can be substantially lower. As stated before, we definitely recommend 1Gbit/s. And please check that the existing network is properly configured. Kind regards Michał Borychowski From: Chen, Alvin [mailto:alv...@in...] Sent: Thursday, July 29, 2010 3:57 AM To: Michał Borychowski Cc: moo...@li... Subject: Re: [Moosefs-users] How fast can you copy files to your Moosefs ? I mean setting the goal to 3. With the goal set to 1, the writing speed is 9MBytes/sec for 100Mbps networking, but with the goal set to 3, the speed is just 500KBytes/sec. For 100Mbps networking, the speed should reach 12.5MBytes/sec, so with goal=1, 9MBytes/sec is reasonable, but with goal=3 the speed should be around 3MBytes/sec. By the way, I just use scp to copy a 4GB data file to the mount folder. Best regards, Alvin Chen ICFS Platform Engineering Solution Flex Services (CMMI Level 3, IQA2005, IQA2008), Greater Asia Region Intel Information Technology Tel. 010-82171960 inet.8-7581960 Email. alv...@in... From: mic...@ge... [mailto:mic...@ge...] Sent: Wednesday, July 28, 2010 7:45 PM To: Chen, Alvin Cc: moo...@li... Subject: RE: [Moosefs-users] How fast can you copy files to your Moosefs ? First of all, if you want better performance you should use a gigabit network. We have writes of about 20-30MiB/s (have a look here: http://www.moosefs.org/moosefs-faq.html#average). You can also have a look here: http://www.moosefs.org/moosefs-faq.html#mtu for some network tips. PS. Talking about 3 copies, do you mean setting goal=3 or copying 3 files simultaneously? Kind regards Michal Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Chen, Alvin [mailto:alv...@in...] Sent: Tuesday, July 27, 2010 10:52 AM To: moo...@li... Subject: [Moosefs-users] How fast can you copy files to your Moosefs ? Hi guys, I am a new user of moosefs. I have 3 chunk servers and one master server on a 100Mbps network. I just copied a 4GB file from one client machine to MooseFS; the copying speed can reach 9MB/s with just one copy, but is just 500KB/s with 3 copies. How fast can your MooseFS go? Does anybody get better performance? Best regards, Alvin Chen ICFS Platform Engineering Solution Flex Services (CMMI Level 3, IQA2005, IQA2008), Greater Asia Region Intel Information Technology Tel. 010-82171960 inet.8-7581960 Email. alv...@in... |