From: Laurent W. <lw...@hy...> - 2010-06-02 16:02:47
|
Hi,

The git repo is outdated. Where are up-to-date sources available?

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Michał B. <mic...@ge...> - 2010-06-03 07:03:05
|
It will be updated when we push the next release (probably next week).

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

> -----Original Message-----
> From: Laurent Wandrebeck [mailto:lw...@hy...]
> Sent: Wednesday, June 02, 2010 5:36 PM
> To: moo...@li...
> Subject: [Moosefs-users] git outdated
>
> Hi,
>
> The git repo is outdated. Where are up-to-date sources available?
>
> Thanks, |
From: Laurent W. <lw...@hy...> - 2010-06-03 08:33:57
|
On Thu, 3 Jun 2010 09:02:29 +0200
Michał Borychowski <mic...@ge...> wrote:

> It will be updated when we push the next release (probably next week).
Nice. Four months without a commit, and four new stable versions in the meantime, was a bit annoying. What about 1.7? Is there a not-yet-published dev branch?

> If you need any further assistance please let us know.
Thanks. Now some technical points. For testing purposes I've been able to deploy six boxes: 1 master, 1 metalogger and 4 chunkservers, for a little less than 1TB.

- Are there any known problems with a machine being both an mfs client and a chunkserver? Right now every machine has its own storage; having a single volume is nice, but we can't afford to lose their processing power. I've done some quick tests and it seems to work fine.

- I've read that you have something like half a PB. We're up to 70TB, going to 200 in the next months. Are there any known limits, bottlenecks, or loads that push systems/network to their knees? We are processing satellite images, so I/O is quite heavy, and I'm worrying a bit about the behaviour under real processing load.

- Do you know of any « big user » relying on mfs? I've been able to find several for glusterfs, for example, but nothing for moosefs. Such entries would be nice on the website, and reassuring for potential users.

- How does moosefs compare to glusterfs? What are their respective pros and cons? I haven't been able to find a comprehensive list. moose is quite easy to deploy (easier than glusterfs, I think, but I haven't tested the latter yet). Master failover is a bit tricky, which is really annoying for HA. Goal is just beautiful. Other than that, stability/performance wise, I have no idea.

- At last, just to be sure I understood correctly: files are automatically striped across the available chunkservers, so for all files with goal set to 1, if a single chunkserver falls, files are unavailable, unless they are smaller than 64MB and not stored on the failed chunkserver. Correct?

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre |
From: Ricardo J. B. <ric...@da...> - 2010-06-03 15:38:57
|
On Thursday, 03 June 2010, Laurent Wandrebeck wrote:
> On Thu, 3 Jun 2010 09:02:29 +0200
[ ... snip ... ]
> - Do you know of any « big user » relying on mfs? I've been able to find
> several for glusterfs, for example, but nothing for moosefs. Such entries
> would be nice on the website, and reassuring for potential users.

Well, I was pretty sure I saw a "Who's using" section on the website but I can't find it. Indeed it would be nice to have one.

> - How does moosefs compare to glusterfs? What are their respective pros
> and cons? I haven't been able to find a comprehensive list.

I have tested both (and also Lustre), so here are my two cents:

> moose is quite easy to deploy (easier than glusterfs, I think, but I
> haven't tested the latter yet).

Yes, I think Moose is the easiest of the three.

> Master failover is a bit tricky, which is really annoying for HA.

That's probably a point for Gluster as it doesn't have a metadata server, but actually there is a master (sort of) which is the one the clients connect to. If it goes away, there's a delay till another node becomes master, at least in theory, as I didn't test that part.

> Goal is just beautiful.

Yes, and IMHO this is a big advantage of Moose. Lustre doesn't even have replication, and with Gluster the number of copies of a file is determined by how many storage nodes you configure as replicas.

> Other than that, stability/performance wise, I have no idea.

My tests showed Moose had the best performance of the three. My Moose cluster (1 master + 1 metalogger + 3 chunkservers = 5.3 TB, with 84 clients doing nightly backups) has been running for only 3 months, but without any problems so far. I never had any stability issues with Gluster or Lustre either, but I only ran some tests and never put them in production.

> - At last, just to be sure I understood correctly: files are automatically
> striped across the available chunkservers, so for all files with goal set
> to 1, if a single chunkserver falls, files are unavailable, unless they are
> smaller than 64MB and not stored on the failed chunkserver. Correct?

I believe you're correct, and that's why you should always have at least a goal of 2. I mean, if you consider your data important ;)

Best regards,
--
Ricardo J. Barberis
Senior SysAdmin - I+D
Dattatec.com :: Soluciones de Web Hosting
Su Hosting hecho Simple..! |
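As a concrete illustration of the goal handling discussed above: goals can be raised and checked on a mounted volume with the standard MooseFS client tools. A minimal sketch, assuming the volume is mounted at /mnt/mfs (the paths are examples only, not from this thread):

  # raise the goal to 2 on a whole directory tree
  mfssetgoal -r 2 /mnt/mfs/data
  # check the effective goal of a file or directory
  mfsgetgoal /mnt/mfs/data

The -r flag applies the change recursively; the extra chunk copies are then created in the background by the master.
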
From: Michał B. <mic...@ge...> - 2010-06-10 08:27:04
|
> On Thursday, 03 June 2010, Laurent Wandrebeck wrote:
> > On Thu, 3 Jun 2010 09:02:29 +0200
> > - Do you know of any « big user » relying on mfs? I've been able to find
> > several for glusterfs, for example, but nothing for moosefs. Such entries
> > would be nice on the website, and reassuring for potential users.
>
> Well, I was pretty sure I saw a "Who's using" section on the website but I
> can't find it. Indeed it would be nice to have one.

[MB] No, it has not been created yet. We plan to add one.

[MB] At our company (http://www.gemius.com) we have four deployments; the biggest has almost 30 million files distributed over 70 chunkservers with a total space of 570TiB. The chunkserver machines are used for other calculations at the same time.

[MB] Another big Polish company which uses MooseFS for data storage is Redefine (http://www.redefine.pl/).

> > I've read that you have something like half a PB. We're up to 70TB,
> > going to 200 in the next months. Are there any known limits, bottlenecks,
> > or loads that push systems/network to their knees? We are processing
> > satellite images, so I/O is quite heavy, and I'm worrying a bit about the
> > behaviour under real processing load.

[MB] You can have a look at this FAQ entry:
http://www.moosefs.org/moosefs-faq.html#mtu

[MB] In our environment we use SATA disks, and even while doing lots of additional calculations on the chunkservers we do not fully use the available network bandwidth. If you use SAS disks, problems we have not yet encountered may appear.

[ ... snip ... ]

> > Master failover is a bit tricky, which is really annoying for HA.
>
> That's probably a point for Gluster as it doesn't have a metadata server,
> but actually there is a master (sort of) which is the one the clients
> connect to. If it goes away, there's a delay till another node becomes
> master, at least in theory, as I didn't test that part.

[MB] You can also refer to this mini how-to:
http://www.moosefs.org/mini-howtos.html#redundant-master
and see how it is possible to create a fail-proof solution using CARP.

[ ... snip ... ]

> > - At last, just to be sure I understood correctly: files are automatically
> > striped across the available chunkservers, so for all files with goal set
> > to 1, if a single chunkserver falls, files are unavailable, unless they are
> > smaller than 64MB and not stored on the failed chunkserver. Correct?
>
> I believe you're correct, and that's why you should always have at least a
> goal of 2. I mean, if you consider your data important ;)

[MB] Files smaller than 64MB are kept in one chunk; if you set goal=1 and the chunkserver storing that chunk fails, the file is not available. Bigger files are divided into chunks of 64MB, and each of them can be stored on a different chunkserver. So there is quite a substantial probability that a big file with goal=1 will be unavailable (or at least part of it) if one of its chunks was stored on the failed chunkserver. The general rule is to use goal=2 for normal files and goal=3 for files that are especially important to you.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A. |
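To see that chunk layout for yourself on a mounted volume, the client tools report where each 64MB chunk of a file lives and how many copies exist. A minimal sketch (the path is an example only):

  # list each chunk of a file with the chunkservers holding its copies
  mfsfileinfo /mnt/mfs/images/scene_001.tif
  # summarise how many chunks currently have 0, 1, 2, ... valid copies
  mfscheckfile /mnt/mfs/images/scene_001.tif

With goal=1 every chunk shows a single location, so losing that chunkserver makes the affected chunks, and therefore the file, unreadable until the server comes back.
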
From: Ricardo J. B. <ric...@da...> - 2010-06-10 23:21:13
|
On Thursday, 10 June 2010, Michał Borychowski wrote:
> > On Thursday, 03 June 2010, Laurent Wandrebeck wrote:
> > > On Thu, 3 Jun 2010 09:02:29 +0200
> > > Master failover is a bit tricky, which is really annoying for HA.
> >
> > That's probably a point for Gluster as it doesn't have a metadata server,
> > but actually there is a master (sort of) which is the one the clients
> > connect to. If it goes away, there's a delay till another node becomes
> > master, at least in theory, as I didn't test that part.
>
> [MB] You can also refer to this mini how-to:
> http://www.moosefs.org/mini-howtos.html#redundant-master
> and see how it is possible to create a fail-proof solution using CARP.

Yup, but notice I was talking about Gluster, which takes care of HA itself: that's more integrated but less flexible, since with CARP or another HA solution you can probably tune things a little bit more.

Regards,
--
Ricardo J. Barberis
Senior SysAdmin - I+D
Dattatec.com :: Soluciones de Web Hosting
Su Hosting hecho Simple..! |
From: Laurent W. <lw...@hy...> - 2010-06-14 08:45:52
|
On Thu, 10 Jun 2010 10:26:46 +0200
Michał Borychowski <mic...@ge...> wrote:

> > Well, I was pretty sure I saw a "Who's using" section on the website but I
> > can't find it. Indeed it would be nice to have one.
> [MB] No, it has not been created yet. We plan to add one.
Nice.

> [MB] At our company (http://www.gemius.com) we have four deployments; the
> biggest has almost 30 million files distributed over 70 chunkservers with a
> total space of 570TiB. The chunkserver machines are used for other
> calculations at the same time.
Do the chunkservers use the mfs volume they export, via a local mount, for their calculations?

> [MB] Another big Polish company which uses MooseFS for data storage is
> Redefine (http://www.redefine.pl/).

> [MB] You can have a look at this FAQ entry:
> http://www.moosefs.org/moosefs-faq.html#mtu
Thanks for the link. I had read it before; I was just wondering if there was any other recipe :)

> [MB] In our environment we use SATA disks, and even while doing lots of
> additional calculations on the chunkservers we do not fully use the
> available network bandwidth. If you use SAS disks, problems we have not yet
> encountered may appear.
We're 3ware+SATA everywhere here, so I guess it'll work.

[ ... snip ... ]

> [MB] You can also refer to this mini how-to:
> http://www.moosefs.org/mini-howtos.html#redundant-master
> and see how it is possible to create a fail-proof solution using CARP.
Well, the only CARP setup I've done was for pfSense, and there it's integrated (as in click, click, done :). MooseFS is especially sweet to configure and deploy. Not so for master failover :) Do you plan to enhance that point in an upcoming version so it becomes quick (and easy) to set up? It would be a really nice feature, and could push MooseFS into the HA world.

[ ... snip ... ]

> Bigger files are divided into chunks of 64MB, and each of them can be stored
> on a different chunkserver. So there is quite a substantial probability that
> a big file with goal=1 will be unavailable (or at least part of it) if one
> of its chunks was stored on the failed chunkserver.
> The general rule is to use goal=2 for normal files and goal=3 for files that
> are especially important to you.
Thanks for the clarification.

Another point: I've « advertised » my rpm repo on the CentOS mailing list. Someone asked whether MooseFS is backed by a company or developed in their spare time by freelance devs. I know it was developed by Gemius for their internal needs, but now that it has been freed, does the company still back the software, or do the devs work on it in their spare time?

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre |
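Until something more integrated exists, one way to approximate the CARP setup from the mini-howto on Linux is ucarp, which floats a virtual IP between the master and the metalogger box and runs a script on promotion/demotion. A rough sketch only, not an official recipe; addresses, password and script paths are placeholders:

  # run on both candidate master machines; whichever node holds the
  # virtual IP is the one clients reach as "mfsmaster"
  ucarp --interface=eth0 --srcip=192.168.0.11 --vhid=10 --pass=secret \
        --addr=192.168.0.100 \
        --upscript=/usr/local/sbin/mfsmaster-up.sh \
        --downscript=/usr/local/sbin/mfsmaster-down.sh &

The up/down scripts would do the same promotion and demotion steps as the Heartbeat approach described later in the thread (stop the metalogger, run mfsmetarestore -a, start mfsmaster, and the reverse on demotion).
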
From: Fabien G. <fab...@gm...> - 2010-06-14 09:24:57
|
Hello,

On Mon, Jun 14, 2010 at 10:45 AM, Laurent Wandrebeck <lw...@hy...> wrote:
> > [MB] You can also refer to this mini how-to:
> > http://www.moosefs.org/mini-howtos.html#redundant-master
> > and see how it is possible to create a fail-proof solution using CARP.
> Well, the only CARP setup I've done was for pfSense, and there it's
> integrated (as in click, click, done :).
> MooseFS is especially sweet to configure and deploy. Not so for master
> failover :)

You can also use Heartbeat for the master/slave, with a haresources file looking like
"mfsmaster IPaddr::192.168.0.100/24/eth1 switchMFSmaster",
and a switchMFSmaster script running the following lines when a node switches from slave to master:

mfsmetalogger stop
mfsmetarestore -a
mfsmaster start

And on the master node becoming slave:

mfsmaster stop
mfsmetalogger start

> Do you plan to enhance that point in an upcoming version so it becomes
> quick (and easy) to set up?
> It would be a really nice feature, and could push MooseFS into the HA world.

I agree with you, it would be really nice to have an integrated way to do so.

Fabien

PS: thanks for the CentOS repo ;-) |
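For completeness, the switchMFSmaster resource named in that haresources line would be a small init-style script; Heartbeat calls it with "start" on the node taking over and "stop" on the node giving the role up. The wrapper below is only a sketch built from the commands Fabien lists (the script name comes from his example, the location is an assumption):

  #!/bin/sh
  # /etc/ha.d/resource.d/switchMFSmaster (hypothetical location)
  case "$1" in
    start)
      # promote: stop the metalogger, rebuild metadata from changelogs, start master
      mfsmetalogger stop
      mfsmetarestore -a
      mfsmaster start
      ;;
    stop)
      # demote: stop the master, go back to being a metalogger
      mfsmaster stop
      mfsmetalogger start
      ;;
    *)
      echo "Usage: $0 {start|stop}"
      exit 1
      ;;
  esac
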
From: Laurent W. <lw...@hy...> - 2010-06-14 09:31:37
|
On Mon, 14 Jun 2010 11:24:29 +0200
Fabien Germain <fab...@gm...> wrote:

> You can also use Heartbeat for the master/slave, with a haresources file
> looking like "mfsmaster IPaddr::192.168.0.100/24/eth1 switchMFSmaster",
> and a switchMFSmaster script running the following lines when a node
> switches from slave to master:
>
> mfsmetalogger stop
> mfsmetarestore -a
> mfsmaster start
>
> And on the master node becoming slave:
>
> mfsmaster stop
> mfsmetalogger start

You're on the right track to get a nice and complete howto, don't stop there :)

> PS: thanks for the CentOS repo ;-)

You're welcome! Nice to read it's useful!
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre |
From: Michał B. <mic...@ge...> - 2010-06-14 10:11:15
|
> > You can also use Heartbeat for the master/slave, with a haresources file
> > looking like "mfsmaster IPaddr::192.168.0.100/24/eth1 switchMFSmaster",
> > and a switchMFSmaster script running the following lines when a node
> > switches from slave to master:
> >
> > mfsmetalogger stop
> > mfsmetarestore -a
> > mfsmaster start
> >
> > And on the master node becoming slave:
> >
> > mfsmaster stop
> > mfsmetalogger start
> You're on the right track to get a nice and complete howto, don't stop there :)

[MB] Yes, Fabien, you are encouraged to prepare a mini howto which we could post on the http://www.moosefs.org/mini-howtos.html page. Thank you!

Michał |
From: Michał B. <mic...@ge...> - 2010-06-14 09:55:26
|
> -----Original Message-----
> From: Laurent Wandrebeck [mailto:lw...@hy...]
> Sent: Monday, June 14, 2010 10:46 AM
> To: moo...@li...
> Subject: Re: [Moosefs-users] git outdated

[ ... snip ... ]

> > [MB] At our company (http://www.gemius.com) we have four deployments; the
> > biggest has almost 30 million files distributed over 70 chunkservers with
> > a total space of 570TiB. The chunkserver machines are used for other
> > calculations at the same time.
> Do the chunkservers use the mfs volume they export, via a local mount, for
> their calculations?

[MB] Mfs resources are mounted locally by the chunkservers (just as an mfs client would, with mfsmount).

[ ... snip ... ]

> > [MB] You can also refer to this mini how-to:
> > http://www.moosefs.org/mini-howtos.html#redundant-master
> > and see how it is possible to create a fail-proof solution using CARP.
> Well, the only CARP setup I've done was for pfSense, and there it's
> integrated (as in click, click, done :).
> MooseFS is especially sweet to configure and deploy. Not so for master
> failover :) Do you plan to enhance that point in an upcoming version so it
> becomes quick (and easy) to set up?
> It would be a really nice feature, and could push MooseFS into the HA world.

[MB] We plan to make it more automatic, but I'm afraid it won't be in the next release.

[ ... snip ... ]

> Another point: I've « advertised » my rpm repo on the CentOS mailing list.

[MB] Thanks!

> Someone asked whether MooseFS is backed by a company or developed in their
> spare time by freelance devs. I know it was developed by Gemius for their
> internal needs, but now that it has been freed, does the company still back
> the software, or do the devs work on it in their spare time?

[MB] MooseFS is still developed mainly by Gemius, as it is widely used in the company, and the development is backed up by the community.

And also thank you for the CentOS repo!

Regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A. |
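In practice that local mount on a chunkserver is the same mfsmount invocation any client uses. A minimal sketch (the mount point is an example; "mfsmaster" is assumed to resolve to the master host):

  # mount the MooseFS namespace locally on a chunkserver
  mkdir -p /mnt/mfs
  mfsmount /mnt/mfs -H mfsmaster
  # unmount later with
  fusermount -u /mnt/mfs
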
From: Laurent W. <lw...@hy...> - 2010-06-14 11:57:53
|
On Mon, 14 Jun 2010 11:55:10 +0200
Michał Borychowski <mic...@ge...> wrote:

> [MB] Mfs resources are mounted locally by the chunkservers (just as an mfs
> client would, with mfsmount).
Great, I thought so. There is really no reason for it not to work, I just wanted to be sure :)

[ ... snip ... ]

> [MB] We plan to make it more automatic, but I'm afraid it won't be in the
> next release.
Do you mean 1.6.16 or 1.7? :)

[ ... snip ... ]

> [MB] MooseFS is still developed mainly by Gemius, as it is widely used in
> the company, and the development is backed up by the community.
OK. I haven't been able to find a place explaining the code structure (call graphs, etc.). Does it already exist?

> And also thank you for the CentOS repo!
You're welcome. Some people on the CentOS mailing list expressed concerns about the security of the RPMS. Quite normal: as I'm anonymous and a recent user of MooseFS, I could have put anything in the source before building the repo (of course I did NOT do that :). Do you have a bit of workforce to verify* that they are OK, so you can « officially » back them (I'll maintain them happily if you want), or to host them so such doubts would not arise anymore?

Regards,

*
rpm -Uvh http://centos.kodros.fr/5/SRPMS/mfs-1.6.15-2.src.rpm
wget http://sourceforge.net/projects/moosefs/files/moosefs/1.6.15/mfs-1.6.15.tar.gz/download
md5sum mfs-1.6.15.tar.gz /usr/src/redhat/SOURCES/mfs-1.6.15.tar.gz
The result is 90749a0fb0e55c0013fae6e23a990bd9 for both here.
And don't forget to take a look at /usr/src/redhat/SPECS/mfs.spec to check that nothing is applied to the sources :)

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre |
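If the checksums and the spec look clean, rebuilding the binary packages from the verified source RPM is one extra step. A sketch for CentOS 5, using the paths from the message above (build dependencies such as fuse-devel need to be installed first):

  # rebuild binary and source RPMs from the verified spec and sources
  rpmbuild -ba /usr/src/redhat/SPECS/mfs.spec
  # the resulting packages end up under /usr/src/redhat/RPMS/<arch>/
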
From: Michał B. <mic...@ge...> - 2010-06-17 11:51:08
|
[ ... snip ... ]

> > [MB] We plan to make it more automatic, but I'm afraid it won't be in the
> > next release.
> Do you mean 1.6.16 or 1.7? :)

[MB] For sure not 1.6.16; maybe in some 1.7.x version.

[ ... snip ... ]

> > [MB] MooseFS is still developed mainly by Gemius, as it is widely used in
> > the company, and the development is backed up by the community.
> OK. I haven't been able to find a place explaining the code structure (call
> graphs, etc.). Does it already exist?

[MB] No, there are no call graphs and no code structure overview prepared by us. Possibly Doxygen could generate most of it?

> > And also thank you for the CentOS repo!
> You're welcome. Some people on the CentOS mailing list expressed concerns
> about the security of the RPMS. Quite normal: as I'm anonymous and a recent
> user of MooseFS, I could have put anything in the source before building the
> repo (of course I did NOT do that :).
> Do you have a bit of workforce to verify* that they are OK, so you can
> « officially » back them (I'll maintain them happily if you want), or to
> host them so such doubts would not arise anymore?
>
> *
> rpm -Uvh http://centos.kodros.fr/5/SRPMS/mfs-1.6.15-2.src.rpm
> wget http://sourceforge.net/projects/moosefs/files/moosefs/1.6.15/mfs-1.6.15.tar.gz/download
> md5sum mfs-1.6.15.tar.gz /usr/src/redhat/SOURCES/mfs-1.6.15.tar.gz
> The result is 90749a0fb0e55c0013fae6e23a990bd9 for both here.
> And don't forget to take a look at /usr/src/redhat/SPECS/mfs.spec to check
> that nothing is applied to the sources :)

[MB] Our technical team will look into these files and will probably incorporate them in one of the next releases.

Regards
Michał |
From: Laurent W. <lw...@hy...> - 2010-06-17 12:00:13
|
On Thu, 17 Jun 2010 13:50:46 +0200
Michał Borychowski <mic...@ge...> wrote:

> [ ... snip ... ]
> > > [MB] We plan to make it more automatic, but I'm afraid it won't be in
> > > the next release.
> > Do you mean 1.6.16 or 1.7? :)
> [MB] For sure not 1.6.16; maybe in some 1.7.x version.
I guessed so, but at least it's clear now.

> [ ... snip ... ]
> > OK. I haven't been able to find a place explaining the code structure
> > (call graphs, etc.). Does it already exist?
> [MB] No, there are no call graphs and no code structure overview prepared
> by us. Possibly Doxygen could generate most of it?
I think so. It would greatly help maintainability, and help new coders dive into the beast. Is it on the roadmap?

> [MB] Our technical team will look into these files and will probably
> incorporate them in one of the next releases.
OK, I'll send the latest .spec file version in a couple of minutes.

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre |
From: Michał B. <mic...@ge...> - 2010-06-17 12:06:53
|
[ ... snip ... ]

> > > OK. I haven't been able to find a place explaining the code structure
> > > (call graphs, etc.). Does it already exist?
> > [MB] No, there are no call graphs and no code structure overview prepared
> > by us. Possibly Doxygen could generate most of it?
> I think so. It would greatly help maintainability, and help new coders dive
> into the beast. Is it on the roadmap?

[MB] Maybe someone from the community would like to take care of the code documentation? For a start, Doxygen would help to prepare the on-line HTML files.

[ ... snip ... ]

Kind regards
Michał |
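As a starting point, a bare-bones Doxygen run over the released tarball already produces browsable HTML, and call graphs if graphviz is installed. The settings below are only a suggested starting configuration, not an official setup:

  cd mfs-1.6.15
  doxygen -g Doxyfile            # generate a default configuration file
  cat >> Doxyfile <<'EOF'
  PROJECT_NAME  = MooseFS
  INPUT         = .
  RECURSIVE     = YES
  EXTRACT_ALL   = YES
  HAVE_DOT      = YES
  CALL_GRAPH    = YES
  CALLER_GRAPH  = YES
  EOF
  doxygen Doxyfile               # HTML output goes to ./html/
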
From: Laurent W. <lw...@hy...> - 2010-06-17 12:30:19
|
On Thu, 17 Jun 2010 14:06:33 +0200
Michał Borychowski <mic...@ge...> wrote:

> [ ... snip ... ]
> [MB] Maybe someone from the community would like to take care of the code
> documentation? For a start, Doxygen would help to prepare the on-line HTML
> files.

OK, I'll try to take care of it.

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre |