From: youngcow <you...@gm...> - 2011-02-24 16:14:06
|
Hi, I have built a MooseFS system and now have a new requirement. I have two networks (network A and network B) that cannot communicate with each other directly. There are MooseFS clients in both network A and network B, so I plan to install two NICs (one connected to network A, one to network B) in each chunk server and in the master server. But I don't know whether this solution will work correctly, because I don't know whether the master server can return the correct chunk server IP address to each client. Thanks. |
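A quick way to sanity-check a dual-network layout like this is to confirm, from a client on each network, that the chunkserver addresses the master hands back (the "copy N: ip:port" entries shown by mfsfileinfo elsewhere in this archive) are actually reachable. The sketch below is only an illustration: the addresses are hypothetical, and 9422 is the chunkserver port as it appears in the mfsfileinfo output quoted further down this page.

import socket

# Hypothetical chunkserver addresses as one client would see them; replace
# them with the "copy N: ip:port" entries reported by mfsfileinfo.
CHUNKSERVERS = [("10.0.0.91", 9422), ("10.0.0.92", 9422)]

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHUNKSERVERS:
    status = "reachable" if reachable(host, port) else "NOT reachable"
    print(f"{host}:{port} is {status} from this client")

If clients on one network cannot reach the addresses reported for chunkservers on the other network, the dual-NIC design will not work as-is, regardless of how the master chooses which address to return.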
From: Samuel H. O. N. <sam...@ol...> - 2011-02-24 09:34:11
|
Hi, For the moment we have only 4 disks. 4 will be added after the end of the synchronisation (with our old GlusterFS partition). Regards. Sam Le 24/02/2011 10:28, Laurent Wandrebeck a écrit : > On Thu, 24 Feb 2011 10:15:41 +0100 > "Samuel Hassine, Olympe Network"<sam...@ol...> > wrote: > >> Hi, >> >> The MFS Master crashes but all servers are now up but I still have >> >> unavailable chunks: 21467 >> unavailable trash files: 241 >> unavailable files: 287 >> >> The address: http://on-001.olympe-network.com:9425 > Look in disks tab. I see only 4 disks, not 8 as you said in a previous > mail ? > > > > ------------------------------------------------------------------------------ > Free Software Download: Index, Search& Analyze Logs and other IT data in > Real-Time with Splunk. Collect, index and harness all the fast moving IT data > generated by your applications, servers and devices whether physical, virtual > or in the cloud. Deliver compliance at lower cost and gain new business > insights. http://p.sf.net/sfu/splunk-dev2dev > > > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Samuel H. O. N. <sam...@ol...> - 2011-02-24 09:33:39
|
Hi! Exactly, operation on small files speed is a very important point for us :) Regards. Samuel Le 24/02/2011 10:10, Michal Borychowski a écrit : > Hi! > > Quotas are quite high in our roadmap but for the moment we focus on the > changes which would speed up operations on small files - I think it would > also be quite profitable for you. > > > Regards > Michal > > > -----Original Message----- > From: Samuel Hassine, Olympe Network > [mailto:sam...@ol...] > Sent: Wednesday, February 23, 2011 11:57 AM > To: Michal Borychowski > Cc: 'Yann Autissier, Another Service'; moo...@li... > Subject: Re: [Moosefs-users] MooseFS, our Savior! > > Hi Michal, > > Thanks for your message. We will share our configuration and description > in the page "Who is using MooseFS" asap. > > I have a first question, our system requires per-user quota (or > per-directory quota) but since we deploy MooseFS we did not find any > "lighty" solution to do this. > > Have you any advice for us? Because it is an important part of our service. > > Regards. > Samuel Hassine > > Le 23/02/2011 09:53, Michal Borychowski a écrit : >> Hi Samuel! >> >> We feel very very happy that MooseFS meets your needs and expectations > when >> talking about a reliable distributed filesystem! :) >> >> It would be nice if you could share your system configuration and >> description at our "Who is using MooseFS" webpage: >> http://www.moosefs.org/who-is-using-moosefs.html >> >> Also you may like to share your experiences with setting up the system > with >> other users. >> >> >> Kind regards >> Michał Borychowski >> MooseFS Support Manager >> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ >> Gemius S.A. >> ul. Wołoska 7, 02-672 Warszawa >> Budynek MARS, klatka D >> Tel.: +4822 874-41-00 >> Fax : +4822 874-41-01 >> >> -----Original Message----- >> From: Samuel Hassine, Olympe Network >> [mailto:sam...@ol...] >> Sent: Wednesday, February 23, 2011 2:14 AM >> To: moo...@li... >> Cc: Yann Autissier, Another Service >> Subject: [Moosefs-users] MooseFS, our Savior! >> >> Hi all, >> >> I just want to say to the MooseFS team that they are simply awesome. >> Since one year, we try to create a new concept of a full Open Source >> Cloud Architecture "over datacenters". >> >> We start with a public cloud service and private clouds for our >> customers but, since 5 years, we also own a free web hosting service, >> without any ads, just to defend some values we think important for the >> Internet. >> >> In november, after multiple months of work, we launched a new version >> (4.0) with our cloud solution and, of course, for free. If all the >> bricks we are using are very efficient (Proxmox with OpenVZ and KVM >> virtual engines, load balancers and routing policies avoiding SPOF, >> Zenoss and AI scripting, Gearman inter-apps worker and LDAP directories >> for cloud management and automatic self-heal), the filesystem was just >> not equal to the task. >> >> To not quote its name, GlusterFS, because we want a solution without any >> SPOF. But we encoutered errors during replication, files in bad state, >> undeletable data etc... And each day, our partition freezes without any >> trace in log files... >> >> However, since we deploy MooseFS in production, for approximately 20 000 >> websites and 10 000 databases (8TB of storage, with 10 billions of files >> and directories), there is not any problem at all. >> >> Thank you to the MooseFS team, we hope we will be able to help you when >> we could. 
And we are attending the directory quota feature in 1.7 version > :) >> >> Thanks again. >> >> Best regards. >> > |
From: Laurent W. <lw...@hy...> - 2011-02-24 09:28:29
|
On Thu, 24 Feb 2011 10:15:41 +0100 "Samuel Hassine, Olympe Network" <sam...@ol...> wrote: > Hi, > > The MFS Master crashes but all servers are now up but I still have > > unavailable chunks: 21467 > unavailable trash files: 241 > unavailable files: 287 > > The address: http://on-001.olympe-network.com:9425 Look in disks tab. I see only 4 disks, not 8 as you said in a previous mail ? -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Samuel H. O. N. <sam...@ol...> - 2011-02-24 09:25:08
|
Hi again, I need help about solving this issue because I think that files in errors hang and block the file operations. Best regards. Samuel Le 24/02/2011 10:15, Samuel Hassine, Olympe Network a écrit : > Hi, > > The MFS Master crashes but all servers are now up but I still have > > unavailable chunks: 21467 > unavailable trash files: 241 > unavailable files: 287 > > The address: http://on-001.olympe-network.com:9425 > > Thanks for your help. > > Regards. > Samuel Hassine > > Le 24/02/2011 09:16, Laurent Wandrebeck a écrit : >> On Thu, 24 Feb 2011 08:56:54 +0100 >> "Samuel Hassine, Olympe Network"<sam...@ol...> >> wrote: >> >>> Hi, >>> >>> Thank you for your answer. Another question, how can I deal with : >>> >>> unavailable chunks: 21467 >>> unavailable trash files: 241 >>> unavailable files: 287 >>> >>> ? >> You must have lost a server or a couple disks, IMHO. >> Check that first. >> Regards, >> >> >> >> ------------------------------------------------------------------------------ >> Free Software Download: Index, Search& Analyze Logs and other IT data in >> Real-Time with Splunk. Collect, index and harness all the fast moving IT data >> generated by your applications, servers and devices whether physical, virtual >> or in the cloud. Deliver compliance at lower cost and gain new business >> insights. http://p.sf.net/sfu/splunk-dev2dev >> >> >> >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > ------------------------------------------------------------------------------ > Free Software Download: Index, Search& Analyze Logs and other IT data in > Real-Time with Splunk. Collect, index and harness all the fast moving IT data > generated by your applications, servers and devices whether physical, virtual > or in the cloud. Deliver compliance at lower cost and gain new business > insights. http://p.sf.net/sfu/splunk-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
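When a system reports unavailable chunks like this, one practical step is to enumerate which files are affected, so they can be restored from backup or deleted once the lost hardware is confirmed gone. A rough sketch, assuming mfsfileinfo is available on a client mount (as demonstrated elsewhere in this archive) and that a chunk with no available copy simply has no "copy N:" line under it; the exact wording of the output may differ between versions, so treat the heuristic as an assumption.

import os
import subprocess
import sys

MOUNTPOINT = sys.argv[1] if len(sys.argv) > 1 else "/mnt/mfs"  # adjust to your mount

def has_missing_chunks(path):
    """Run mfsfileinfo and report True if any chunk lists no copies."""
    out = subprocess.run(["mfsfileinfo", path], capture_output=True, text=True).stdout
    lines = out.splitlines()
    for i, line in enumerate(lines):
        if line.strip().startswith("chunk "):
            nxt = lines[i + 1].strip() if i + 1 < len(lines) else ""
            if not nxt.startswith("copy "):
                return True
    return False

for root, dirs, files in os.walk(MOUNTPOINT):
    for name in files:
        path = os.path.join(root, name)
        if has_missing_chunks(path):
            print(path)

On a filesystem with millions of files this scan is slow, so it is best pointed at the specific directories whose operations are hanging.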
From: Samuel H. O. N. <sam...@ol...> - 2011-02-24 09:15:51
|
Hi, The MFS Master crashes but all servers are now up but I still have unavailable chunks: 21467 unavailable trash files: 241 unavailable files: 287 The address: http://on-001.olympe-network.com:9425 Thanks for your help. Regards. Samuel Hassine Le 24/02/2011 09:16, Laurent Wandrebeck a écrit : > On Thu, 24 Feb 2011 08:56:54 +0100 > "Samuel Hassine, Olympe Network"<sam...@ol...> > wrote: > >> Hi, >> >> Thank you for your answer. Another question, how can I deal with : >> >> unavailable chunks: 21467 >> unavailable trash files: 241 >> unavailable files: 287 >> >> ? > You must have lost a server or a couple disks, IMHO. > Check that first. > Regards, > > > > ------------------------------------------------------------------------------ > Free Software Download: Index, Search& Analyze Logs and other IT data in > Real-Time with Splunk. Collect, index and harness all the fast moving IT data > generated by your applications, servers and devices whether physical, virtual > or in the cloud. Deliver compliance at lower cost and gain new business > insights. http://p.sf.net/sfu/splunk-dev2dev > > > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michal B. <mic...@ge...> - 2011-02-24 09:11:08
|
Hi! Quotas are quite high in our roadmap but for the moment we focus on the changes which would speed up operations on small files - I think it would also be quite profitable for you. Regards Michal -----Original Message----- From: Samuel Hassine, Olympe Network [mailto:sam...@ol...] Sent: Wednesday, February 23, 2011 11:57 AM To: Michal Borychowski Cc: 'Yann Autissier, Another Service'; moo...@li... Subject: Re: [Moosefs-users] MooseFS, our Savior! Hi Michal, Thanks for your message. We will share our configuration and description in the page "Who is using MooseFS" asap. I have a first question, our system requires per-user quota (or per-directory quota) but since we deploy MooseFS we did not find any "lighty" solution to do this. Have you any advice for us? Because it is an important part of our service. Regards. Samuel Hassine Le 23/02/2011 09:53, Michal Borychowski a écrit : > Hi Samuel! > > We feel very very happy that MooseFS meets your needs and expectations when > talking about a reliable distributed filesystem! :) > > It would be nice if you could share your system configuration and > description at our "Who is using MooseFS" webpage: > http://www.moosefs.org/who-is-using-moosefs.html > > Also you may like to share your experiences with setting up the system with > other users. > > > Kind regards > Michał Borychowski > MooseFS Support Manager > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > Gemius S.A. > ul. Wołoska 7, 02-672 Warszawa > Budynek MARS, klatka D > Tel.: +4822 874-41-00 > Fax : +4822 874-41-01 > > -----Original Message----- > From: Samuel Hassine, Olympe Network > [mailto:sam...@ol...] > Sent: Wednesday, February 23, 2011 2:14 AM > To: moo...@li... > Cc: Yann Autissier, Another Service > Subject: [Moosefs-users] MooseFS, our Savior! > > Hi all, > > I just want to say to the MooseFS team that they are simply awesome. > Since one year, we try to create a new concept of a full Open Source > Cloud Architecture "over datacenters". > > We start with a public cloud service and private clouds for our > customers but, since 5 years, we also own a free web hosting service, > without any ads, just to defend some values we think important for the > Internet. > > In november, after multiple months of work, we launched a new version > (4.0) with our cloud solution and, of course, for free. If all the > bricks we are using are very efficient (Proxmox with OpenVZ and KVM > virtual engines, load balancers and routing policies avoiding SPOF, > Zenoss and AI scripting, Gearman inter-apps worker and LDAP directories > for cloud management and automatic self-heal), the filesystem was just > not equal to the task. > > To not quote its name, GlusterFS, because we want a solution without any > SPOF. But we encoutered errors during replication, files in bad state, > undeletable data etc... And each day, our partition freezes without any > trace in log files... > > However, since we deploy MooseFS in production, for approximately 20 000 > websites and 10 000 databases (8TB of storage, with 10 billions of files > and directories), there is not any problem at all. > > Thank you to the MooseFS team, we hope we will be able to help you when > we could. And we are attending the directory quota feature in 1.7 version :) > > Thanks again. > > Best regards. > |
From: Heiko S. <sch...@iu...> - 2011-02-24 09:08:59
|
On Thursday, 24 February 2011 at 09:54:14, Laurent Wandrebeck wrote: > On Thu, 24 Feb 2011 09:24:11 +0100 > Heiko Schröter <sch...@iu...> wrote: > > > Hello, > Hi, > > > > we are currently investigating moosefs as a successor of our 200TB storage cfs. > > mfs-1.6.20, 2.6.36-gentoo-r5, x86_64, fuse 2.8.5 > > Everything is working fine. > > > > We have a question about the way mfs handles chunks. > > Is it possible to keep the chunks on a single chunkserver, instead of "load balance" them to all chunkservers ? > Not that I know of. mfs is designed for reliability, and working with > goal=1 is as bad as raid 0 when it comes to it. Thanks for the reply, but that was not what I intended to ask. I am trying to keep the "pieces" (chunks) of a file on a single chunkserver. And yes, I am quite clear about the risks of setting goal=1, but as far as I understand it the goal only affects the number of copies of the whole file. That is what we have now with our Lustre system anyway ;-) Our chunkservers (RAIDs) generally run hardware RAID6 with 5 to 8 TB per partition. I wouldn't mind if all the chunks were spread across these partitions, as long as they stay on a single chunkserver. So, would it be possible to gather all chunks of a single file on a single chunkserver? Regards Heiko |
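The trade-off Heiko is weighing can be made concrete with a little arithmetic. Assuming the usual maximum chunk size of 64 MiB and (as a simplification) independent, uniform placement of chunks across N chunkservers, losing one server with goal=1 damages nearly every multi-chunk file, whereas keeping all chunks of a file on one server would destroy roughly 1/N of the files outright and leave the rest intact. A small simulation sketch of that comparison (the placement model is an assumption, not how the master actually balances load):

import random

CHUNK = 64 * 2**20          # assumed maximum chunk size (64 MiB)
N_SERVERS = 4               # number of chunkservers
N_FILES = 10000
FILE_SIZE = 512 * 2**20     # 512 MiB files -> 8 chunks each

def chunks_per_file(size):
    return max(1, -(-size // CHUNK))   # ceiling division

random.seed(0)
nchunks = chunks_per_file(FILE_SIZE)
lost_server = 0
damaged_spread = lost_per_file = 0
for _ in range(N_FILES):
    # Strategy A: each chunk placed on an independently chosen server (goal=1 today).
    placement_a = [random.randrange(N_SERVERS) for _ in range(nchunks)]
    # Strategy B: all chunks of the file kept on one server (what Heiko asks about).
    placement_b = random.randrange(N_SERVERS)
    if lost_server in placement_a:
        damaged_spread += 1     # at least one chunk gone -> file partly unreadable
    if placement_b == lost_server:
        lost_per_file += 1      # whole file gone, but other files untouched

print(f"spread placement:   {damaged_spread}/{N_FILES} files damaged")
print(f"per-file placement: {lost_per_file}/{N_FILES} files lost completely")

With 8-chunk files and 4 servers this comes out to roughly 90% of files damaged versus about 25% lost completely, which is exactly the trade-off in the question; neither option protects the data the way goal=2 does.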
From: p g <pg0...@gm...> - 2011-02-24 09:06:20
|
Dear Sir, We are currently using MooseFS. When users download a large file via nginx and that file has missing chunks due to a network problem, the nginx process hangs, stuck as a zombie thread waiting for the I/O read. Is there any way to prevent this situation, for example by displaying "404 Not Found" to the user when missing chunks are found? Regards Pong |
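Since nginx blocks inside the read() once the FUSE mount stalls, one possible workaround is to keep nginx from opening known-bad files at all: periodically build a list of files with unavailable chunks (for example with an mfsfileinfo-based scan like the one sketched elsewhere in this archive) and generate an nginx include fragment that answers 404 for them. This is only a rough sketch under stated assumptions — the file names and paths are illustrative, the fragment must be included inside the relevant server block, and nginx has to be reloaded after it is rewritten.

# Generate an nginx config fragment that short-circuits requests for files
# whose chunks are currently unavailable. Paths in damaged.txt are expected
# to be the URI paths nginx would serve (one per line).
DAMAGED_LIST = "damaged.txt"          # produced by a periodic mfsfileinfo scan (hypothetical name)
FRAGMENT = "/etc/nginx/conf.d/damaged_files.locations"  # hypothetical include path

with open(DAMAGED_LIST) as src, open(FRAGMENT, "w") as dst:
    for line in src:
        uri = line.strip()
        if not uri:
            continue
        # Exact-match location: nginx answers 404 without touching the filesystem.
        dst.write(f"location = {uri} {{ return 404; }}\n")

print(f"wrote fragment {FRAGMENT}; reload nginx to activate it")

This only papers over the symptom, and with very many damaged files a single generated fragment becomes unwieldy; the real fix is restoring the missing chunks.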
From: Anh K. H. <ky...@vi...> - 2011-02-24 09:01:17
|
On Thu, 24 Feb 2011 09:24:11 +0100 Heiko Schröter <sch...@iu...> wrote: > we are currently investigating moosefs as a successor of our 200TB > storage cfs. mfs-1.6.20, 2.6.36-gentoo-r5, x86_64, fuse 2.8.5 > Everything is working fine. > > We have a question about the way mfs handles chunks. > Is it possible to keep the chunks on a single chunkserver, instead > of "load balance" them to all chunkservers ? As far as I know, your requirement is not possible: there's no way to specify a chunk server for any files. The master does its job automatically. > Reason is that in case of a total unrecoverable loss of a single > chunkserver we would loose some files completly. But that would be > better to us than loosing some parts in all files. > Incrementing the goal is not an option since the storage capacity > is limited. How about your current goal (and your MFS setup)? IMHO you would get problems (as you described) when your goal is 1. If goal is 2, you can freely destroy (at most) 1 chunk server :P > ------------------------------------------------------------------------------ > Free Software Download: Index, Search & Analyze Logs and other IT > data in Real-Time with Splunk. Collect, index and harness all the > fast moving IT data generated by your applications, servers and > devices whether physical, virtual or in the cloud. Deliver > compliance at lower cost and gain new business insights. > http://p.sf.net/sfu/splunk-dev2dev > _______________________________________________ moosefs-users > mailing list moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- Anh Ky Huynh @ UTC+7 |
From: Laurent W. <lw...@hy...> - 2011-02-24 08:54:24
|
On Thu, 24 Feb 2011 09:24:11 +0100 Heiko Schröter <sch...@iu...> wrote: > Hello, Hi, > > we are currently investigating moosefs as a successor of our 200TB storage cfs. > mfs-1.6.20, 2.6.36-gentoo-r5, x86_64, fuse 2.8.5 > Everything is working fine. > > We have a question about the way mfs handles chunks. > Is it possible to keep the chunks on a single chunkserver, instead of "load balance" them to all chunkservers ? Not that I know of. mfs is designed for reliability, and working with goal=1 is as bad as raid 0 when it comes to it. I don't think such a feature is planned, but I'm not part of mfs dev team :) Regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Laurent W. <lw...@hy...> - 2011-02-24 08:43:38
|
On Thu, 24 Feb 2011 08:56:54 +0100 "Samuel Hassine, Olympe Network" <sam...@ol...> wrote: > Hi, > > Thank you for your answer. Another question, how can I deal with : > > unavailable chunks: 21467 > unavailable trash files: 241 > unavailable files: 287 > > ? You must have lost a server or a couple disks, IMHO. Check that first. Regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Heiko S. <sch...@iu...> - 2011-02-24 08:42:13
|
Hello, we are currently investigating MooseFS as a successor to our 200 TB storage cfs (mfs-1.6.20, 2.6.36-gentoo-r5, x86_64, fuse 2.8.5). Everything is working fine. We have a question about the way MFS handles chunks: is it possible to keep the chunks on a single chunkserver, instead of "load balancing" them across all chunkservers? The reason is that in case of a total, unrecoverable loss of a single chunkserver we would lose some files completely, but that would be better for us than losing some parts of all files. Increasing the goal is not an option since the storage capacity is limited. Thanks and Regards Heiko |
From: Samuel H. O. N. <sam...@ol...> - 2011-02-24 07:57:06
|
Hi, Thank you for your answer. Another question, how can I deal with : unavailable chunks: 21467 unavailable trash files: 241 unavailable files: 287 ? Thanks again ! Regards. Samuel Hassine Le 24/02/2011 01:35, Anh K. Huynh a écrit : > On Wed, 23 Feb 2011 18:55:59 +0100 > "Samuel Hassine, Olympe Network"<sam...@ol...> wrote: > >> Hi there, >> >> If I have 4 servers with 2 disks (so 8 disks) and I set a goal of 2 >> on all my files and directories. The goal regards to servers or >> disks? >> For example, could a file be replicated between 2 disks on the same >> server or MFS forces the replication to be done between 2 servers? > > To the chunk servers. Example: > > $ mfsfileinfo README > README: > chunk 0: 00000000000113E3_00000001 / (id:70627 ver:1) > copy 1: 10.0.0.92:9422 > copy 2: 10.0.0.91:9422 > >> I the first case, if I want a actual failover solution, I need a >> goal of 3. > > You would study failover solution for the master first :) Search some old topics on the mailing list. > |
From: Anh K. H. <ky...@vi...> - 2011-02-24 00:35:27
|
On Wed, 23 Feb 2011 18:55:59 +0100 "Samuel Hassine, Olympe Network" <sam...@ol...> wrote: > Hi there, > > If I have 4 servers with 2 disks (so 8 disks) and I set a goal of 2 > on all my files and directories. The goal regards to servers or > disks? > For example, could a file be replicated between 2 disks on the same > server or MFS forces the replication to be done between 2 servers? To the chunk servers. Example: $ mfsfileinfo README README: chunk 0: 00000000000113E3_00000001 / (id:70627 ver:1) copy 1: 10.0.0.92:9422 copy 2: 10.0.0.91:9422 > I the first case, if I want a actual failover solution, I need a > goal of 3. You would study failover solution for the master first :) Search some old topics on the mailing list. -- Anh Ky Huynh @ UTC+7 |
From: Samuel H. O. N. <sam...@ol...> - 2011-02-23 17:56:17
|
Hi there, Suppose I have 4 servers with 2 disks each (so 8 disks) and I set a goal of 2 on all my files and directories. Does the goal refer to servers or to disks? For example, could a file be replicated between 2 disks on the same server, or does MFS force the replication to be done between 2 servers? In the first case, if I want an actual failover solution, I need a goal of 3. Thanks for your answer. -- Samuel HASSINE Président Olympe Network - 31 avenue Sainte Victoire, 13100 Aix-en-Pce Tel. : +33(0)6.89.50.39.65 Site : www.olympe-network.com |
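One way to settle this on a live system is to look at where the copies of a file actually land. A small sketch, assuming mfsfileinfo is installed on a client and prints one "copy N: ip:port" line per replica, in the format quoted in the reply above:

import subprocess
import sys

path = sys.argv[1]   # a file on the MooseFS mount

out = subprocess.run(["mfsfileinfo", path], capture_output=True, text=True).stdout
servers = set()
for line in out.splitlines():
    line = line.strip()
    if line.startswith("copy "):
        # "copy 1: 10.0.0.92:9422" -> keep only the IP part
        servers.add(line.split()[2].split(":")[0])

print(f"{path}: copies on {len(servers)} distinct chunkserver(s): {sorted(servers)}")

In the example quoted in the reply, the two copies of a goal-2 file sit on two different chunkserver addresses, which is what you want for surviving the loss of a whole server.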
From: Samuel H. O. N. <sam...@ol...> - 2011-02-23 10:57:25
|
Hi Michal, Thanks for your message. We will share our configuration and description in the page "Who is using MooseFS" asap. I have a first question, our system requires per-user quota (or per-directory quota) but since we deploy MooseFS we did not find any "lighty" solution to do this. Have you any advice for us? Because it is an important part of our service. Regards. Samuel Hassine Le 23/02/2011 09:53, Michal Borychowski a écrit : > Hi Samuel! > > We feel very very happy that MooseFS meets your needs and expectations when > talking about a reliable distributed filesystem! :) > > It would be nice if you could share your system configuration and > description at our "Who is using MooseFS" webpage: > http://www.moosefs.org/who-is-using-moosefs.html > > Also you may like to share your experiences with setting up the system with > other users. > > > Kind regards > Michał Borychowski > MooseFS Support Manager > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > Gemius S.A. > ul. Wołoska 7, 02-672 Warszawa > Budynek MARS, klatka D > Tel.: +4822 874-41-00 > Fax : +4822 874-41-01 > > -----Original Message----- > From: Samuel Hassine, Olympe Network > [mailto:sam...@ol...] > Sent: Wednesday, February 23, 2011 2:14 AM > To: moo...@li... > Cc: Yann Autissier, Another Service > Subject: [Moosefs-users] MooseFS, our Savior! > > Hi all, > > I just want to say to the MooseFS team that they are simply awesome. > Since one year, we try to create a new concept of a full Open Source > Cloud Architecture "over datacenters". > > We start with a public cloud service and private clouds for our > customers but, since 5 years, we also own a free web hosting service, > without any ads, just to defend some values we think important for the > Internet. > > In november, after multiple months of work, we launched a new version > (4.0) with our cloud solution and, of course, for free. If all the > bricks we are using are very efficient (Proxmox with OpenVZ and KVM > virtual engines, load balancers and routing policies avoiding SPOF, > Zenoss and AI scripting, Gearman inter-apps worker and LDAP directories > for cloud management and automatic self-heal), the filesystem was just > not equal to the task. > > To not quote its name, GlusterFS, because we want a solution without any > SPOF. But we encoutered errors during replication, files in bad state, > undeletable data etc... And each day, our partition freezes without any > trace in log files... > > However, since we deploy MooseFS in production, for approximately 20 000 > websites and 10 000 databases (8TB of storage, with 10 billions of files > and directories), there is not any problem at all. > > Thank you to the MooseFS team, we hope we will be able to help you when > we could. And we are attending the directory quota feature in 1.7 version :) > > Thanks again. > > Best regards. > |
From: Michal B. <mic...@ge...> - 2011-02-23 08:53:22
|
Hi Samuel! We feel very very happy that MooseFS meets your needs and expectations when talking about a reliable distributed filesystem! :) It would be nice if you could share your system configuration and description at our "Who is using MooseFS" webpage: http://www.moosefs.org/who-is-using-moosefs.html Also you may like to share your experiences with setting up the system with other users. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Samuel Hassine, Olympe Network [mailto:sam...@ol...] Sent: Wednesday, February 23, 2011 2:14 AM To: moo...@li... Cc: Yann Autissier, Another Service Subject: [Moosefs-users] MooseFS, our Savior! Hi all, I just want to say to the MooseFS team that they are simply awesome. Since one year, we try to create a new concept of a full Open Source Cloud Architecture "over datacenters". We start with a public cloud service and private clouds for our customers but, since 5 years, we also own a free web hosting service, without any ads, just to defend some values we think important for the Internet. In november, after multiple months of work, we launched a new version (4.0) with our cloud solution and, of course, for free. If all the bricks we are using are very efficient (Proxmox with OpenVZ and KVM virtual engines, load balancers and routing policies avoiding SPOF, Zenoss and AI scripting, Gearman inter-apps worker and LDAP directories for cloud management and automatic self-heal), the filesystem was just not equal to the task. To not quote its name, GlusterFS, because we want a solution without any SPOF. But we encoutered errors during replication, files in bad state, undeletable data etc... And each day, our partition freezes without any trace in log files... However, since we deploy MooseFS in production, for approximately 20 000 websites and 10 000 databases (8TB of storage, with 10 billions of files and directories), there is not any problem at all. Thank you to the MooseFS team, we hope we will be able to help you when we could. And we are attending the directory quota feature in 1.7 version :) Thanks again. Best regards. -- Samuel HASSINE Président Olympe Network - 31 avenue Sainte Victoire, 13100 Aix-en-Pce Tel. : +33(0)6.89.50.39.65 Site : www.olympe-network.com ---------------------------------------------------------------------------- -- Free Software Download: Index, Search & Analyze Logs and other IT data in Real-Time with Splunk. Collect, index and harness all the fast moving IT data generated by your applications, servers and devices whether physical, virtual or in the cloud. Deliver compliance at lower cost and gain new business insights. http://p.sf.net/sfu/splunk-dev2dev _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Samuel H. O. N. <sam...@ol...> - 2011-02-23 01:30:43
|
Hi all, I just want to say to the MooseFS team that they are simply awesome. Since one year, we try to create a new concept of a full Open Source Cloud Architecture "over datacenters". We start with a public cloud service and private clouds for our customers but, since 5 years, we also own a free web hosting service, without any ads, just to defend some values we think important for the Internet. In november, after multiple months of work, we launched a new version (4.0) with our cloud solution and, of course, for free. If all the bricks we are using are very efficient (Proxmox with OpenVZ and KVM virtual engines, load balancers and routing policies avoiding SPOF, Zenoss and AI scripting, Gearman inter-apps worker and LDAP directories for cloud management and automatic self-heal), the filesystem was just not equal to the task. To not quote its name, GlusterFS, because we want a solution without any SPOF. But we encoutered errors during replication, files in bad state, undeletable data etc... And each day, our partition freezes without any trace in log files... However, since we deploy MooseFS in production, for approximately 20 000 websites and 10 000 databases (8TB of storage, with 10 billions of files and directories), there is not any problem at all. Thank you to the MooseFS team, we hope we will be able to help you when we could. And we are attending the directory quota feature in 1.7 version :) Thanks again. Best regards. -- Samuel HASSINE Président Olympe Network - 31 avenue Sainte Victoire, 13100 Aix-en-Pce Tel. : +33(0)6.89.50.39.65 Site : www.olympe-network.com |
From: kuer ku <ku...@gm...> - 2011-02-22 08:18:21
|
Hi all, There is a sessions_ml.mfs file on the metalogger that backs up metadata, but sessions.mfs is not mentioned in http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html. What is stored in sessions.mfs, and do I need to copy sessions_ml.mfs to sessions.mfs when restoring metadata from the metalogger data? Thanks -- kuerant |
From: Raymond J. <ray...@ca...> - 2011-02-21 19:52:19
|
Hello, We're currently running an MFS client on FreeBSD 8-STABLE, and we're experiencing severe issues with stability. For example, if somebody scp's to the MFS system locally, we have a repeatable crash dump[1]. At other times, Samba will lock up with a page fault; the problem seems deep enough that it sometime refuses to let the kernel dump core. It seems it might be a FUSE issue, since the crash is consistently in the fuse4bsd module, but I'm wondering if you guys know of any workarounds, or if we should simply switch our clients over to Linux. I've looked at several other mailing posts that seem to point towards instabilities[2,3] in fuse4bsd, but it seems either nobody is interested in fixing them, or they don't occur often enough to bother people. Thanks, Raymond Jimenez [1] Key portions of the crash dump are included below; more info available at http://lenin.caltech.edu/~raymondj/core.txt.4 [2] http://lists.freebsd.org/pipermail/freebsd-current/2009-September/011659.html [3] http://www.mail-archive.com/fre...@fr.../msg126051.html (IIRC, we did retry rebuilding our module; we think it may be the unmaintained code clashing with new things in 8.2) panic: vm_fault: fault on nofault entry, addr: ffffff843e281000 GNU gdb 6.1.1 [FreeBSD] Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "amd64-marcel-freebsd"... Unread portion of the kernel message buffer: panic: vm_fault: fault on nofault entry, addr: ffffff843e281000 cpuid = 6 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2a kdb_backtrace() at kdb_backtrace+0x37 panic() at panic+0x182 vm_fault() at vm_fault+0x1f38 trap_pfault() at trap_pfault+0x308 trap() at trap+0x32f calltrap() at calltrap+0x8 --- trap 0xc, rip = 0xffffffff8066995b, rsp = 0xffffff8488c8d890, rbp = 0xffffff8488c8d910 --- copyout() at copyout+0x3b fusedev_read() at fusedev_read+0x1b3 devfs_read_f() at devfs_read_f+0x81 dofileread() at dofileread+0x88 kern_readv() at kern_readv+0x52 read() at read+0x4e syscallenter() at syscallenter+0x1d2 syscall() at syscall+0x40 Xfast_syscall() at Xfast_syscall+0xe2 and Loaded symbols for /usr/local/modules/fuse.ko #0 doadump () at pcpu.h:224 224 pcpu.h: No such file or directory. in pcpu.h (kgdb) #0 doadump () at pcpu.h:224 #1 0xffffffff8041be83 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:419 #2 0xffffffff8041c2f2 in panic (fmt=Variable "fmt" is not available. 
) at /usr/src/sys/kern/kern_shutdown.c:592 #3 0xffffffff806330df in vm_fault (map=0xffffff0001000000, vaddr=18446743542176419840, fault_type=1 '\001', fault_flags=0) at /usr/src/sys/vm/vm_fault.c:283 #4 0xffffffff8066b7d7 in trap_pfault (frame=0xffffff8488c8d7e0, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:688 #5 0xffffffff8066bbcb in trap (frame=0xffffff8488c8d7e0) at /usr/src/sys/amd64/amd64/trap.c:449 #6 0xffffffff80654188 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:224 #7 0xffffffff8066995b in copyout () at /usr/src/sys/amd64/amd64/support.S:258 #8 0xffffffff8042377b in uiomove (cp=0xffffff843e27f000, n=16384, uio=0xffffff8488c8daa0) at /usr/src/sys/kern/kern_subr.c:168 #9 0xffffffff80e23c58 in fusedev_read () from /usr/local/modules/fuse.ko #10 0xffffffff803a1a43 in devfs_read_f (fp=0x1, uio=0xffffff00207da800, cred=Variable "cred" is not available. ) at /usr/src/sys/fs/devfs/devfs_vnops.c:1084 #11 0xffffffff8045e109 in dofileread (td=0xffffff01150e78c0, fd=5, fp=0xffffff011508b0f0, auio=0xffffff8488c8daa0, offset=Variable "offset" is not available. ) at file.h:227 #12 0xffffffff8045e401 in kern_readv (td=0xffffff01150e78c0, fd=5, auio=0xffffff8488c8daa0) at /usr/src/sys/kern/sys_generic.c:238 #13 0xffffffff8045e4d8 in read (td=Variable "td" is not available. ) at /usr/src/sys/kern/sys_generic.c:154 #14 0xffffffff80459b78 in syscallenter (td=0xffffff01150e78c0, sa=0xffffff8488c8dba0) at /usr/src/sys/kern/subr_trap.c:315 #15 0xffffffff8066b81f in syscall (frame=0xffffff8488c8dc40) at /usr/src/sys/amd64/amd64/trap.c:888 #16 0xffffffff80654462 in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:377 #17 0x0000000800b77e9c in ?? () Previous frame inner to this frame (corrupt stack?) (kgdb) -- Raymond Jimenez <ray...@ca...> http://shia.wsyntax.com <> http://fusion.wsyntax.com |
From: Flow J. <fl...@gm...> - 2011-02-21 16:03:56
|
Hi, Just found another issue. I cleared about 10000 reserved files with the script provided at http://sourceforge.net/tracker/?func=detail&aid=3104619&group_id=228631&atid=1075722 yesterday, and this morning I had 0 reserved file when started working. However, after one day development activity with 6 workstations, now we have 204 reserved files not deleted. I've noticed it's stated that "Each session after two hours is automatically closed and all the files are released." in above link, but seems it's not happening in our environment. We have CentOS 5.5 x86 servers and run mfsmount on Fedora 12 x64 workstations. Both servers and workstations run mfs 1.6.19. And mfs is serving as home with read / write access. Here are some example of the reserved files by reading the metadata: 00067856|UserHome|tompan|.mozilla|firefox|mk9e32d7.default|OfflineCache|index.sqlite-journal 00067857|UserHome|tompan|.mozilla|firefox|mk9e32d7.default|OfflineCache|index.sqlite-journal Most of the 204 reserved files look like temp / journal files. Any ideas about the cause of the issue? BTW, OpenOffice fails to start if MFS serves as home directory. It should be a FS bug as stated on: http://qa.openoffice.org/issues/show_bug.cgi?id=113207. Would it be related to the issue above? And can we fix this OOo issue? Many Thanks Flow |
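A quick way to see what kinds of files are stuck as reserved is to group the dumped entries by filename suffix. A small sketch, assuming the dump lines look like the two examples above (fields separated by "|", with the filename as the last field); the input file name is just an example.

import sys
from collections import Counter

# Feed this the reserved-file lines extracted from the metadata dump, e.g.
#   python3 reserved_summary.py < reserved.txt
suffixes = Counter()
for line in sys.stdin:
    fields = line.strip().split("|")
    if len(fields) < 2:
        continue
    name = fields[-1]
    # Use the extension (including compound ones like "sqlite-journal") as the key.
    suffix = name.rsplit(".", 1)[-1] if "." in name else name
    suffixes[suffix] += 1

for suffix, count in suffixes.most_common(20):
    print(f"{count:6d}  {suffix}")

If the bulk turns out to be browser cache and SQLite journal files, that supports the temp-file theory: clients delete those files while still holding them open, and the two-hour session cleanup quoted above is what should eventually release them.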
From: Lin Y. <id...@gm...> - 2011-02-19 17:25:35
|
in mfscommon/sockets.c, around line 381:

int32_t tcptoread(int sock,void *buff,uint32_t leng,uint32_t msecto) {
	uint32_t rcvd=0;
	int i;
	struct pollfd pfd;
	pfd.fd = sock;
	pfd.events = POLLIN;
	while (rcvd<leng) {
		pfd.revents = 0;
		if (poll(&pfd,1,msecto)<0) {
			return -1;
		}
		if (pfd.revents & POLLIN) {
			i = read(sock,((uint8_t*)buff)+rcvd,leng-rcvd);
			if (i<=0) { // this only works for long-lived connections that never see EOF
				return i;
			}
			rcvd+=i;
		} else {
			errno = ETIMEDOUT;
			return -1;
		}
	}
	return rcvd;
}

-------------------------------------------------
The if (i<=0) block should be replaced with:

			if (i==0) { // EOF: return the bytes read so far
				return rcvd;
			}
			if (i<0) {
				return i;
			}

-- 杨林 (Yang Lin), Institute of Computing Technology, Chinese Academy of Sciences, 15811038200, id...@gm..., http://idning.javaeye.com/ |
From: Flow J. <fl...@gm...> - 2011-02-19 15:25:58
|
Hi, We've been using MooseFS in a production environment and now have about 4 million files stored. Here is the information reported by the CGI script:

version: 1.6.19
total space: 974 GiB
avail space: 537 GiB
trash space: 0 B
trash files: 0
reserved space: 0 B
reserved files: 0
all fs objects: 4602256
directories: 331677
files: 4027193
chunks: 4002874
all chunk copies: 8005748
regular chunk copies: 8005748

Everything looks good except for the space taken on the 2 chunk servers, which is not equal:

mfschunkserver1  13.187.243.149  9422  1.6.19  4002874 chunks  257 GiB used  523 GiB total
mfschunkserver2  13.187.243.153  9422  1.6.19  4002874 chunks  180 GiB used  451 GiB total

We have set goal=2 for all the files, so chunks should have been mirrored on these 2 servers, right? But why do we still have different disk space taken on these servers? Here is the disk layout for these 2 servers:

1  13.187.243.149:9422:/mnt/mfs_chunk1/  3532242 chunks  no errors  ok  204 GiB used  229 GiB total
2  13.187.243.149:9422:/mnt/mfs_chunk3/   470632 chunks  no errors  ok   53 GiB used  293 GiB total
3  13.187.243.153:9422:/mnt/mfs_chunk1/  2002902 chunks  no errors  ok   90 GiB used  226 GiB total
4  13.187.243.153:9422:/mnt/mfs_chunk2/  1999972 chunks  no errors  ok   90 GiB used  226 GiB total

Most of the chunks stored on 13.187.243.149 were replicated from another server which has been removed from the cluster. The disk on that server was set to "marked for removal" and we waited until the replication finished. Can anyone help to explain whether this is normal, and can we make the space taken by the 2 servers equal? Thanks Flow |
From: Anh K. H. <ky...@vi...> - 2011-02-19 02:03:30
|
On Fri, 18 Feb 2011 15:24:47 -0800 Eric Kim <ek...@op...> wrote: > New mfs user here. You're welcome :) > Is it possible to mount mfs as readonly from client-side? I don't > see any such option from mfsmount. I tried passing "-o ro", but > that didn't work either. I know I could use mfsexports.cfg to > enforce acl via IP addresses, but I am just looking for alternative > ways to do this. readonly mount is supported by general mount(8). From the man page of mfsmount: General mount options (see mount(8) manual): -o rw|-o ro Mount file-system in read-write (default) or read-only mode respectivel Here's an example from my /etc/fstab: mfsmount /home/share/ fuse \ ro,mfsmaster=mfsmaster,_netdev,nosuid,nodev,mfssubfolder=/share 0 0 Hope this helps, Regards, -- Anh Ky Huynh @ UTC+7 |
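After mounting with -o ro it is worth confirming the option actually took effect, since a typo silently falls back to the read-write default. A minimal check, assuming a Linux client where the mount shows up in /proc/mounts; the mountpoint below is taken from the fstab example above and should be adjusted to your own.

MOUNTPOINT = "/home/share"   # the MooseFS mountpoint from the fstab example above

with open("/proc/mounts") as f:
    for line in f:
        dev, mnt, fstype, opts = line.split()[:4]
        if mnt == MOUNTPOINT:
            mode = "read-only" if "ro" in opts.split(",") else "read-write"
            print(f"{mnt} ({fstype}) is mounted {mode}")
            break
    else:
        print(f"{MOUNTPOINT} is not mounted")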