From: Laurent W. <lw...@hy...> - 2010-06-15 13:12:42
|
Hi, please find attached two patch files (one against 1.6.15, one against git (which is still outdated, btw)). Several fwrite return codes were not checked, causing warnings during compilation. Return codes are now verified, and problems are logged. Compile-tested only. Feedback welcome. I may have made some mistakes; it's my very first dive into the mfs code :) Thanks, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
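[Editorial note: the warnings the patch refers to are typically GCC's warn_unused_result diagnostics on fwrite(). A minimal way to list the affected call sites during a build, assuming a gcc/glibc toolchain (the exact warning wording may vary per distro):

    # Rebuild and collect the warn_unused_result warnings; fwrite() is among them
    ./configure && make 2>&1 | grep -n "ignoring return value"
]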
From: Laurent W. <lw...@hy...> - 2010-06-15 08:00:12
|
On Tue, 15 Jun 2010 10:50:54 +0300 Stas Oskin <sta...@gm...> wrote: > Hi. Hi, > > In addition to my previous emails, I would like to know how it's possible to > specify the amount of replicas for each file in MooseFS? See http://www.moosefs.org/reference-guide.html#using-moosefs Regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
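[Editorial note: the per-file replica count in MooseFS is called the "goal" and is set from a client mount with the mfssetgoal/mfsgetgoal tools. A rough usage sketch; the paths are examples only:

    mfsgetgoal /mnt/mfs/photos/img001.jpg      # show the current goal of one file
    mfssetgoal 3 /mnt/mfs/photos/img001.jpg    # keep 3 copies of this file
    mfssetgoal -r 2 /mnt/mfs/photos            # recursively set goal=2 on a directory
]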
From: Stas O. <sta...@gm...> - 2010-06-15 07:51:26
|
Hi. In addition to my previous emails, I would like to know how it's possible to specify the number of replicas for each file in MooseFS? Thanks. |
From: Laurent W. <lw...@hy...> - 2010-06-15 06:31:13
|
On Tue, 15 Jun 2010 13:36:03 +0800 Roast <zha...@gm...> wrote: > Hello, everyone. Hi Zhang, > > We are plan to use moosefs at our product environment as the storage of our > online photo service. > > But since we got that master server store all the metadata at memory, but we > will store for about a hundred million photo files, so I wonder how much > memory should prepare for the master server? And how to calculate this > number? According to FAQ, http://www.moosefs.org/moosefs-faq.html#cpu , at gemius, 8GB ram is used by master for 25 millions files. So, for a hundred millions, you'd need 32GB. > > If the memory is not enough, what will happened with master server's? I guess it'll swap. > > And I still wonder the performance about master server when use moosefs to > store a hundred million photo files? Anyone can give me some more > information? I've no experience with such a large setup. I guess memory caching used to prevent bottleneck on master will still do the trick. regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
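[Editorial note: the arithmetic behind that estimate, as a quick sanity check. The numbers are the FAQ figures quoted above; real memory usage also depends on path lengths and chunk counts:

    # 8 GiB of master RAM for 25M files, scaled linearly
    files=100000000
    echo "$(( files * 8 / 25000000 )) GiB"     # => 32 GiB for 100M files
]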
From: Roast <zha...@gm...> - 2010-06-15 05:36:33
|
Hello, everyone. We plan to use moosefs in our production environment as the storage for our online photo service. Since the master server stores all metadata in memory, and we will store about a hundred million photo files, I wonder how much memory we should prepare for the master server, and how to calculate this number. If the memory is not enough, what will happen to the master server? I also wonder about the master server's performance when MooseFS stores a hundred million photo files. Can anyone give me some more information? Thanks all. -- The time you enjoy wasting is not wasted time! |
From: Stas O. <sta...@gm...> - 2010-06-14 15:56:35
|
Actually in more detail: 1) Pause writing of new file, skip to any position, and update the data. 2) Open existing file, skip to any position and update the data. Regards. 2010/6/14 Stas Oskin <sta...@gm...> > Hi. > > Thanks for the explanation. > > So this covers appending, but can a write operation to file pause, jump to > file start for example, and update some data? > > Regards. > > 2010/6/14 Michał Borychowski <mic...@ge...> > > MooseFS fully supports appending to a file and writing to any position of >> a file. It also supports creating sparse files. >> >> >> >> Two independent processes / machines may write in parallel to the same >> file at different positions. If two positions are located in different >> chunks (pos1 / 64Mi != pos2 / 64Mi) the writing process would be run at >> normal speed; if two positions are in the same chunk you can expect a >> substantial loss of writing speed (due to chunk lock for writing). >> >> >> >> >> >> Kind regards >> >> Michał Borychowski >> >> MooseFS Support Manager >> >> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ >> >> Gemius S.A. >> >> ul. Wołoska 7, 02-672 Warszawa >> >> Budynek MARS, klatka D >> >> Tel.: +4822 874-41-00 >> >> Fax : +4822 874-41-01 >> >> >> >> >> >> >> >> *From:* Stas Oskin [mailto:sta...@gm...] >> *Sent:* Thursday, June 10, 2010 2:13 PM >> *To:* moo...@li... >> *Cc:* MooseFS >> *Subject:* [Moosefs-users] Append and seek while writing functionality >> >> >> >> Hi. >> >> Does MooseFS supports append functionality? >> >> Also, does it support the ability to seek file being it's being written >> and write data in other place (like regular file system)? >> >> Thanks! >> > > |
From: Laurent W. <lw...@hy...> - 2010-06-14 13:43:24
|
Hi, Is the registration of a moosefs channel on an IRC network planned? It'd be nice for devs and users to keep in touch, IMHO. I checked the freenode and OFTC networks; #moosefs and #mfs are available. Thanks, PS: I can take care of the registration if needed. -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Laurent W. <lw...@hy...> - 2010-06-14 12:59:24
|
On Mon, 14 Jun 2010 15:47:52 +0300 Stas Oskin <sta...@gm...> wrote: > Hi. > > I'm using the initial rpm I built two weeks ago without problem up to > > now. 6 machines involved (1 master, 1 metalogger, 4 chunks). > > > > Can you share your typical loads? We're still in testing, so nothing close to our real workload. I've made a (little) 1TB volume, when we work on 70… I guess you'd have a realistic answer asking Michał Borychowski or some people from redefine.pl :) Or wait a couple months until I deploy it for production. thanks, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Stas O. <sta...@gm...> - 2010-06-14 12:48:20
|
Hi. I'm using the initial rpm I built two weeks ago without problem up to > now. 6 machines involved (1 master, 1 metalogger, 4 chunks). > Can you share your typical loads? > PS: please don't forget to post to the list. > Sure. |
From: Laurent W. <lw...@hy...> - 2010-06-14 11:57:53
|
On Mon, 14 Jun 2010 11:55:10 +0200 Michał Borychowski <mic...@ge...> wrote: > > > > -----Original Message----- > > From: Laurent Wandrebeck [mailto:lw...@hy...] > > Sent: Monday, June 14, 2010 10:46 AM > > To: moo...@li... > > Subject: Re: [Moosefs-users] git outdated > > [MB] Mfs resources are locally mounted (just as msf client with mfsmount) by chunkservers. Great, I thought so. There is really no reason for it not to work, just wanted to be sure :) > > [ ... snip ... ] > > [MB] We plan to make it more automatic but I'm afraid it won't be in the next release. Do you mean 1.6.16 or 1.7 ? :) > > [ ... snip ... ] > > > Another point, I've « advertised » my rpm repo on CentOS mailing-list. > [MB] Thanks! <snip> > [MB] MooseFS is still developed mainly by Gemius as it is widely used in the company and the development is backed up by the community. OK. I haven't been able to find a place explaining code structure (call graphs, etc.). Does it already exist ? > > > And also thank you for the CentOS repo! You're welcome. Some people on the CentOS mailing-list expressed concerns about the security of the RPMS. Quite normal: as I'm an anonymous, recent user of MooseFS, I could have put anything in the source before building the repo (of course I did NOT do that:). Do you have a bit of workforce to verify* they are OK so you can « officially » back them (I'll maintain them happily if you want), or to host them so such doubt would not happen anymore ? Regards, * rpm -Uvh http://centos.kodros.fr/5/SRPMS/mfs-1.6.15-2.src.rpm ; wget http://sourceforge.net/projects/moosefs/files/moosefs/1.6.15/mfs-1.6.15.tar.gz/download ; md5sum mfs-1.6.15.tar.gz /usr/src/redhat/SOURCES/mfs-1.6.15.tar.gz ; the result is 90749a0fb0e55c0013fae6e23a990bd9 for both here. And don't forget to take a look at /usr/src/redhat/SPEC/mfs.spec to check that nothing is applied to the sources :) Thanks, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Laurent W. <lw...@hy...> - 2010-06-14 11:35:40
|
On Mon, 14 Jun 2010 13:41:48 +0300 Stas Oskin <sta...@gm...> wrote: > Hi. > > The repo works great, thanks! > > How stable this build in overall? I'm using the initial rpm I built two weeks ago without problem up to now. 6 machines involved (1 master, 1 metalogger, 4 chunks). The only change I made in -2 rpm release is the config files directory. So nothing in the build chain has changed. PS: please don't forget to post to the list. -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Stas O. <sta...@gm...> - 2010-06-14 11:27:23
|
Also, can any process / machine read file being written by other process / machine? (I presume it can from the ability to write to same file from multiple machines). 2010/6/14 Stas Oskin <sta...@gm...> Hi. > > Thanks for the explanation. > > So this covers appending, but can a write operation to file pause, jump to > file start for example, and update some data? > > Regards. > > 2010/6/14 Michał Borychowski <mic...@ge...> > > MooseFS fully supports appending to a file and writing to any position of >> a file. It also supports creating sparse files. >> >> >> >> Two independent processes / machines may write in parallel to the same >> file at different positions. If two positions are located in different >> chunks (pos1 / 64Mi != pos2 / 64Mi) the writing process would be run at >> normal speed; if two positions are in the same chunk you can expect a >> substantial loss of writing speed (due to chunk lock for writing). >> >> >> >> >> >> Kind regards >> >> Michał Borychowski >> >> MooseFS Support Manager >> >> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ >> >> Gemius S.A. >> >> ul. Wołoska 7, 02-672 Warszawa >> >> Budynek MARS, klatka D >> >> Tel.: +4822 874-41-00 >> >> Fax : +4822 874-41-01 >> >> >> >> >> >> >> >> *From:* Stas Oskin [mailto:sta...@gm...] >> *Sent:* Thursday, June 10, 2010 2:13 PM >> *To:* moo...@li... >> *Cc:* MooseFS >> *Subject:* [Moosefs-users] Append and seek while writing functionality >> >> >> >> Hi. >> >> Does MooseFS supports append functionality? >> >> Also, does it support the ability to seek file being it's being written >> and write data in other place (like regular file system)? >> >> Thanks! >> > > |
From: Stas O. <sta...@gm...> - 2010-06-14 10:39:32
|
Hi. Thanks for the explanation. So this covers appending, but can a write operation to file pause, jump to file start for example, and update some data? Regards. 2010/6/14 Michał Borychowski <mic...@ge...> > MooseFS fully supports appending to a file and writing to any position of > a file. It also supports creating sparse files. > > > > Two independent processes / machines may write in parallel to the same file > at different positions. If two positions are located in different chunks > (pos1 / 64Mi != pos2 / 64Mi) the writing process would be run at normal > speed; if two positions are in the same chunk you can expect a substantial > loss of writing speed (due to chunk lock for writing). > > > > > > Kind regards > > Michał Borychowski > > MooseFS Support Manager > > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > > Gemius S.A. > > ul. Wołoska 7, 02-672 Warszawa > > Budynek MARS, klatka D > > Tel.: +4822 874-41-00 > > Fax : +4822 874-41-01 > > > > > > > > *From:* Stas Oskin [mailto:sta...@gm...] > *Sent:* Thursday, June 10, 2010 2:13 PM > *To:* moo...@li... > *Cc:* MooseFS > *Subject:* [Moosefs-users] Append and seek while writing functionality > > > > Hi. > > Does MooseFS supports append functionality? > > Also, does it support the ability to seek file being it's being written and > write data in other place (like regular file system)? > > Thanks! > |
From: Michał B. <mic...@ge...> - 2010-06-14 10:11:15
|
> > You can also use Heartbeat for the master/slave, with a haresources > > file looking like "mfsmaster IPaddr::192.168.0.100/24/eth1 > > switchMFSmaster", and creating a script switchMFSmaster launching the > > following lines when switching from slave to master : > > > > mfsmetalogger stop > > mfsmetarestore -a > > mfsmaster start > > > > And on the master node becoming slave : > > > > mfsmaster stop > > mfsmetalogger start > You're on the right track to get a nice and complete howto, don't stop there > :) [MB] Yes, Fabien, you are encouraged to prepare a mini howto which we could post on http://www.moosefs.org/mini-howtos.html page. Thank you! Michał > <snip> > > PS : thanks for the CentOS repo ;-) > You're welcome ! Nice to read it's useful ! > -- > Laurent Wandrebeck > HYGEOS, Earth Observation Department / Observation de la Terre > Euratechnologies > 165 Avenue de Bretagne > 59000 Lille, France > tel: +33 3 20 08 24 98 > http://www.hygeos.com > GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C > F64C |
From: Michał B. <mic...@ge...> - 2010-06-14 09:55:26
|
> -----Original Message----- > From: Laurent Wandrebeck [mailto:lw...@hy...] > Sent: Monday, June 14, 2010 10:46 AM > To: moo...@li... > Subject: Re: [Moosefs-users] git outdated [ ... snip ... ] > > [MB] At our company (http://www.gemius.com) we have four deployments, the > biggest has almost 30 million files distributed over 70 chunk servers having a > total space of 570TiB. Chunkserver machines at the same time are used to make > other calculations. > Do the chunkservers use the mfs volume they export via a local mount for their > calculations ? [MB] Mfs resources are locally mounted (just as msf client with mfsmount) by chunkservers. [ ... snip ... ] > > [MB] You can also refer to this mini how-to: > > http://www.moosefs.org/mini-howtos.html#redundant-master > > > > and see how it is possible to create a fail proof solution using CARP. > Well, the only CARP setting I've done is for pfsense, and it's integrated (as > in click, click, done:). > MooseFS is especially sweet to configure and deploy. Not so for master > failover :) Do you plan to enhance that point in an upcoming version so it > becomes quick (and easy) to get ? > It'd be a really nice feature, and could push MooseFS into HA world. [MB] We plan to make it more automatic but I'm afraid it won't be in the next release. [ ... snip ... ] > Another point, I've « advertised » my rpm repo on CentOS mailing-list. [MB] Thanks! > Someone > asked if MooseFS was backed by a company or if it was developped on spare time > by freelance devs. I know it was developped by Gemius for their internal > needs, but now it's been freed, does the company still backs the software, or > does devs work on it in their spare time ? [MB] MooseFS is still developed mainly by Gemius as it is widely used in the company and the development is backed up by the community. And also thank you for the CentOS repo! Regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 > > Thanks, > -- > Laurent Wandrebeck > HYGEOS, Earth Observation Department / Observation de la Terre > Euratechnologies > 165 Avenue de Bretagne > 59000 Lille, France > tel: +33 3 20 08 24 98 > http://www.hygeos.com > GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C > F64C |
From: Laurent W. <lw...@hy...> - 2010-06-14 09:31:37
|
On Mon, 14 Jun 2010 11:24:29 +0200 Fabien Germain <fab...@gm...> wrote: > Hello, > > On Mon, Jun 14, 2010 at 10:45 AM, Laurent Wandrebeck <lw...@hy...> wrote: > > You can also use Heartbeat for the master/slave, with a haresources file > looking like "mfsmaster IPaddr::192.168.0.100/24/eth1 switchMFSmaster", and > creating a script switchMFSmaster launching the following lines when > switching from slave to master : > > mfsmetalogger stop > mfsmetarestore -a > mfsmaster start > > And on the master node becoming slave : > > mfsmaster stop > mfsmetalogger start You're on the right track to get a nice and complete howto, don't stop there :) > <snip> > PS : thanks for the CentOS repo ;-) You're welcome ! Nice to read it's useful ! -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Fabien G. <fab...@gm...> - 2010-06-14 09:24:57
|
Hello, On Mon, Jun 14, 2010 at 10:45 AM, Laurent Wandrebeck <lw...@hy...> wrote: > > http://www.moosefs.org/mini-howtos.html#redundant-master > > > > and see how it is possible to create a fail proof solution using CARP. > > [MB] You can also refer to this mini how-to: > Well, the only CARP setting I've done is for pfsense, and it's integrated > (as in click, click, done:). > MooseFS is especially sweet to configure and deploy. Not so for master > failover :) > You can also use Heartbeat for the master/slave, with a haresources file looking like "mfsmaster IPaddr::192.168.0.100/24/eth1 switchMFSmaster", and creating a script switchMFSmaster launching the following lines when switching from slave to master : mfsmetalogger stop mfsmetarestore -a mfsmaster start And on the master node becoming slave : mfsmaster stop mfsmetalogger start > Do you plan to enhance that point in an upcoming version so it becomes > quick (and easy) to get ? > It'd be a really nice feature, and could push MooseFS into HA world. > I agree with you, it would be really nice to have an integrated way to do so. Fabien PS : thanks for the CentOS repo ;-) |
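[Editorial note: pulling Fabien's commands together, the switchMFSmaster resource could look roughly like the sketch below. This is only an illustration, not a tested script: Heartbeat haresources resource scripts are invoked init-style with start/stop arguments, and the path shown is an assumption.

    #!/bin/sh
    # /etc/ha.d/resource.d/switchMFSmaster  (illustrative path)
    case "$1" in
      start)                      # this node is being promoted to master
        mfsmetalogger stop
        mfsmetarestore -a         # rebuild metadata from the metalogger's changelogs
        mfsmaster start
        ;;
      stop)                       # this node is being demoted back to metalogger
        mfsmaster stop
        mfsmetalogger start
        ;;
    esac
]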
From: Laurent W. <lw...@hy...> - 2010-06-14 08:45:52
|
On Thu, 10 Jun 2010 10:26:46 +0200 Michał Borychowski <mic...@ge...> wrote: > > > El Jue 03 Junio 2010, Laurent Wandrebeck escribió: > > > On Thu, 3 Jun 2010 09:02:29 +0200 > > > - Do you know of any « big user » relying on mfs ? I've been able to find > > > several for glusterfs for example, nothing for moosefs. Such entries would > > > be nice on the website, and reassuring for potential users. > > > > Well, I was pretty sure I saw a "Who's using" section on the website but I > > can't find it. Indeed it would be nice to have one. > [MB] No, it has not been yet created. We plan to implement it. Nice. > > [MB] At our company (http://www.gemius.com) we have four deployments, the biggest has almost 30 million files distributed over 70 chunk servers having a total space of 570TiB. Chunkserver machines at the same time are used to make other calculations. Do the chunkservers use the mfs volume they export via a local mount for their calculations ? > > [MB] Another big Polish company which uses MooseFS for data storage is Redefine (http://www.redefine.pl/). > > > > > > I've read that you have something like half a PB. We're up to 70TB, > > > going to 200 in the next months. Are there any known limits, bottlenecks, > > > loads that push systems/network on their knees ? We are processing satellite > > > images, so I/O is quite heavy, and I'm worrying a bit about the behaviour > > > during real processing load. > [MB] You can have a look at this FAQ entry: > http://www.moosefs.org/moosefs-faq.html#mtu Thanks for the link. I've read it before, I was just wondering if there were any other recipe :) > > [MB] At our environment we use SATA disks and while making lots of additional calculations on chunkservers we even do not fully use the available bandwidth of the network. If you will use SAS disks it can happen that there would appear some problems we have not yet encountered. We're 3ware+SATA everywhere here. So I guess it'll work. > > > > [ ... snip ... ] > > > master failover is a bit tricky, which is really annoying for HA. > > > > That's probably a point for Gluster as it doesn't have a metadata server, but > > actually there is a master (sort of) which is the one the clients connect to. > > > > If it goes away, there's a delay till another node becomes master, at least in > > theory as I didn't test that part. > [MB] You can also refer to this mini how-to: > http://www.moosefs.org/mini-howtos.html#redundant-master > > and see how it is possible to create a fail proof solution using CARP. Well, the only CARP setting I've done is for pfsense, and it's integrated (as in click, click, done:). MooseFS is especially sweet to configure and deploy. Not so for master failover :) Do you plan to enhance that point in an upcoming version so it becomes quick (and easy) to get ? It'd be a really nice feature, and could push MooseFS into HA world. > > > > [ ... snip ... ] > > Bigger files are divided into fragments of 64MB and each of them can be stored on different chunkservers. So there is a quite substantial probability that a big file with goal=1 will be unavailable (or at least its part(s)) if one of the chunks has been stored on the failed chunkserver. > > The general rule is to use goal=2 for normal files and goal=3 for files that are especially important to you. Thanks for the clarification. Another point, I've « advertised » my rpm repo on CentOS mailing-list. Someone asked if MooseFS was backed by a company or if it was developped on spare time by freelance devs. 
I know it was developed by Gemius for their internal needs, but now that it's been freed, does the company still back the software, or do the devs work on it in their spare time ? Thanks, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Michał B. <mic...@ge...> - 2010-06-14 07:19:24
|
MooseFS fully supports appending to a file and writing to any position of a file. It also supports creating sparse files. Two independent processes / machines may write in parallel to the same file at different positions. If two positions are located in different chunks (pos1 / 64Mi != pos2 / 64Mi) the writing process would be run at normal speed; if two positions are in the same chunk you can expect a substantial loss of writing speed (due to chunk lock for writing). Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Stas Oskin [mailto:sta...@gm...] Sent: Thursday, June 10, 2010 2:13 PM To: moo...@li... Cc: MooseFS Subject: [Moosefs-users] Append and seek while writing functionality Hi. Does MooseFS supports append functionality? Also, does it support the ability to seek file being it's being written and write data in other place (like regular file system)? Thanks! |
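[Editorial note: to make the pos1 / 64Mi != pos2 / 64Mi rule concrete, here is a small sketch that checks whether two write offsets land in the same 64 MiB chunk; the offsets are made-up examples:

    CHUNK=$((64 * 1024 * 1024))         # MooseFS chunk size, 64Mi
    pos1=50000000 pos2=200000000        # hypothetical write offsets
    if [ $((pos1 / CHUNK)) -eq $((pos2 / CHUNK)) ]; then
      echo "same chunk: expect slower parallel writes (chunk lock)"
    else
      echo "different chunks: parallel writes run at normal speed"
    fi
]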
From: Laurent W. <lw...@hy...> - 2010-06-11 09:52:49
|
Hi, you can cd /etc/yum.repos.d; wget http://centos.kodros.fr/moosefs.repo; yum install mfs SRPMS, i386 and x86_64 are available. Two points: - DNS may not be up to date where you are. The subdomain has just been created. Please be patient. - I have made an update to the .spec file, to move config files to /etc/mfs instead of /etc. I had no time to test it yet. I guess it works :) Feedback welcome ! Oh, and don't worry about the GPG key, @gmail address is another mailbox I use, and the same key is used. Thanks, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Ricardo J. B. <ric...@da...> - 2010-06-10 23:21:13
|
El Jue 10 Junio 2010, Michał Borychowski escribió: > > El Jue 03 Junio 2010, Laurent Wandrebeck escribió: > > > On Thu, 3 Jun 2010 09:02:29 +0200 > > > master failover is a bit tricky, which is really annoying for HA. > > > > That's probably a point for Gluster as it doesn't have a metadata server, > > but actually there is a master (sort of) which is the one the clients > > connect to. > > > > If it goes away, there's a delay till another node becomes master, at > > least in theory as I didn't test that part. > > [MB] You can also refer to this mini how-to: > http://www.moosefs.org/mini-howtos.html#redundant-master > > and see how it is possible to create a fail proof solution using CARP. Yup, but notice I was talking about Gluster, which takes care of HA itself: that's more integrated but less flexible, since with CARP or another HA solution you can probably tune it a little bit more. Regards, -- Ricardo J. Barberis Senior SysAdmin - I+D Dattatec.com :: Soluciones de Web Hosting Su Hosting hecho Simple..! |
From: Laurent W. <lw...@hy...> - 2010-06-10 15:35:13
|
On Thu, 10 Jun 2010 15:29:23 +0300 Stas Oskin <sta...@gm...> wrote: > Hi. > > Any idea when the RPM's will be available? I'll try to create the repo tomorrow. Will give you the url ASAP. Which arch/distro do you need ? I can easily do i386/x86_64 for F13 and CentOS 5.5. Any other distro/arch would be problematic for me. -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Stas O. <sta...@gm...> - 2010-06-10 12:51:41
|
Hi. Any idea when the RPM's will be available? Thanks. On Sat, Jun 5, 2010 at 10:43 PM, <lw...@hy...> wrote: > Hi, > > Please find attached an updated spec file for rpm creation, based on the > one Kirby Zhou provided. > Tested on CentOS 5.5 x86_64 and i386. > Arch built: i386,i486,i586,i686,athlon,pentium2/3/4,x86_64. > Tested on Fedora 13 x86_64. > Arch built: x86_64. > Could you please add the file in the git repo ? After all, debian is there > :) > I can provide a repo (probably CentOS 5 only) if you want. > > Thanks, > > Laurent. > > PS: not even a warning during compilation, nice ! > > ------------------------------------------------------------------------------ > ThinkGeek and WIRED's GeekDad team up for the Ultimate > GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the > lucky parental unit. See the prize list and enter to win: > http://p.sf.net/sfu/thinkgeek-promo > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
From: Stas O. <sta...@gm...> - 2010-06-10 12:44:25
|
Hi. Does MooseFS support append functionality? Also, does it support the ability to seek within a file while it's being written and write data in another place (like a regular file system)? Thanks! |
From: Michał B. <mic...@ge...> - 2010-06-10 08:46:00
|
From: kuer ku [mailto:ku...@gm...] Sent: Wednesday, June 09, 2010 4:19 AM To: Michał Borychowski Subject: Re: [Moosefs-users] how to fix unavailabe chunk ?? Thanks, Michal. 1. Reboot just took place on one of mfsmount box, none of metaservers or chunkservers reboot. So what cause the file lost, it is very interesting. and I can not understand why and how. [MB] Unfortunately we also do not know why it happened, we have too little detailed information on this matter. Interruption of the writing process on a client side could cause that the file would be shorter (lacking some data at the end) or probably could also cause a wrong version number of the chunk. But we hardly imagine why the chunk could have disappeared. Client machine has nothing to do with creating or deleting chunks or with assigning chunks to files. These operations are made only on the level of communication between master and chunkservers. So if none of the chunkservers or the master server had not been rebooted this situation is really unlikely. 2. I try to mfsfilerepair the file, and it worked. I can view the content of the file after repair. But, on web interface, it still shows that : "xxx file currently unavailable. ". How it get these information? and can I find this information somewhere else? [MB] Interface shows data collected with one hour lag so only after one hour you would have updated information. Regards Michał Borychowski thanks all. 2010/6/8 Michał Borychowski <mic...@ge...> Hi! The system says that chunk numbered "D710" is not available (none copy of the 3 set in goal). If all chunkservers and all the disks are connected it means that this chunk simply does not exist. If reboot took place while the file had been written it can happen that such a chunk will be lost. The important question is - was it the reboot of the master server, chunkservers or the whole system? An abrupt reboot of the whole system (eg. lack of electricity) could cause something like this. Fsck on chunkserver could have unfortunately deleted this chunk. It may be worthy to look into "lost+found" on disks connected on mfschunkservers. You can also issue "mfsfilerepair", but this will help only by creating zeros in the "damaged" place of the file. The system would not try to read it (to be exact system does not hang up, it makes lots of retries to read it - waits for the file to show up and after several minutes it gives up). If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: kuer ku [mailto:ku...@gm...] Sent: Saturday, June 05, 2010 12:05 PM To: moo...@li... Subject: [Moosefs-users] how to fix unavailabe chunk ?? Hi, all, I setup a moosefs storage with 1 metaserver + 4 chunkserver. today I found some error messages on http interface : there are some files lost. currently unavailable chunk 000000000000D710 (inode: 331 ; index: 0) * currently unavailable file 331: sink/fifodata/00126/20100604/00126_20100604164805 On box where mfsmount, when executing 'ls' command, it shows : -rw-rw-rw- 1 sea sea 2778996 6 4 17:32 00126_20100604164805 There is a system reboot occurs on 06/04 17:32; it is the last time when file was written. Now, at present, I can list it, but I cannot cat content of the files. Moreover, when you cat this file, the command would hang. 
I can find some error message in /var/log/messages : Jun 5 17:51:26 nbase07 mfsmount[6625]: file: 331, index: 0, chunk: 55056, version: 2 - there are no valid copies Jun 5 17:51:26 nbase07 mfsmount[6625]: file: 331, index: 0 - can't connect to proper chunkserver (try counter: 15) Jun 5 17:52:26 nbase07 mfsmount[6625]: file: 331, index: 0, chunk: 55056, version: 2 - there are no valid copies Jun 5 17:52:26 nbase07 mfsmount[6625]: file: 331, index: 0 - can't connect to proper chunkserver (try counter: 22) Jun 5 17:53:26 nbase07 mfsmount[6625]: file: 331, index: 0, chunk: 55056, version: 2 - there are no valid copies Jun 5 17:53:26 nbase07 mfsmount[6625]: file: 331, index: 0 - can't connect to proper chunkserver (try counter: 29) and, the goal of the file should be 3, because I set goal of its parent-directory is 3. What is the problem ? how to fix it ?? My environment : metaserver : moosefs 1.6.13 build on CentOS 5.3 x86_64 chunkserver : moosefs 1.6.13 build on CentOS 5.3 x86_64 mfsmount : MFS version 1.6.15 (FUSE library version: 2.7.4) on FreeBSD 6.2 thanks, - kuer |
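[Editorial note: as a practical follow-up to the advice above, the usual sequence from a client mount is to inspect the chunk copies first and only repair as a last resort. A sketch using the standard 1.6 client tools, on the file path from this thread:

    mfscheckfile  sink/fifodata/00126/20100604/00126_20100604164805   # count valid copies per chunk
    mfsfileinfo   sink/fifodata/00126/20100604/00126_20100604164805   # show which chunkservers hold them
    mfsfilerepair sink/fifodata/00126/20100604/00126_20100604164805   # last resort: missing chunks are zero-filled
]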