From: Laurent W. <lw...@hy...> - 2010-07-06 07:16:24
On Tue, 6 Jul 2010 01:04:02 +0300 Stas Oskin <sta...@gm...> wrote:
> Hi.

Hi,

> Are there any init.d files for RedHat MooseFS installations?

No. I'll try to cook some up for RedHat/CentOS and ask the dev team to include them in the main repository. Until then, I won't be able to provide an RPM with them. So, at best, you'll have to wait for the next stable version.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

From: Stas O. <sta...@gm...> - 2010-07-05 22:34:30
Hi.

Are there any init.d files for RedHat MooseFS installations?

Regards.

From: jose m. <let...@us...> - 2010-07-05 20:18:06
On Monday, 05 July 2010, Stas Oskin wrote:
> Hi.
>
> How well does MooseFS support XFS?
>
> Or is the recommended file system Ext3?
>
> Regards.

* I tried it on a secondary cluster with ext4 and btrfs; it ran fine for 3 months, then failed and I could not find the reason. XFS can surely be used if it is supported by the kernel.
* Safety is a matter of the stability of the file system: ext3 is stable and xfs is also stable; I prefer ext3.
* Regards.

From: Stas O. <sta...@gm...> - 2010-07-05 15:45:25
Hi.

How well does MooseFS support XFS? Or is the recommended file system Ext3?

Regards.

From: Michał B. <mic...@ge...> - 2010-07-05 13:05:21
No, it is not possible that MooseFS would create such files on its own. If you need any further assistance, please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

> -----Original Message-----
> From: Jonathan Carter [mailto:jc...@gl...]
> Sent: Monday, July 05, 2010 11:37 AM
> To: moo...@li...
> Subject: [Moosefs-users] new user question
>
> I have set up moosefs (master + chunkserver) on a Dell PE2650 / CentOS 5.3 server, and set up the client on a Dell PE850 / RedHat AS4.
>
> When I look in the folders I created on the mfs filesystem I get some strange files starting with an underscore. I list my Maildir as an example:
>
> -r-------- 1 jc jc 0 Jul 4 23:35 _3rC.a6PMMB.jon2.glimworm.com
> -r-------- 1 jc jc 0 Jul 4 23:44 _9VD,jCQMMB.jon2.glimworm.com
> -r-------- 1 jc jc 0 Jul 4 23:44 _9VD.jCQMMB.jon2.glimworm.com
> -rwxr-xr-x 1 root root 168 Aug 26 2007 clear.sh
> drwx------ 2 jc jc 0 Jul 5 11:34 cur
> -rw------- 1 jc jc 161136 Jul 5 11:34 dovecot-uidlist
> -r-------- 1 jc jc 0 Jul 4 23:36 _gtC,46PMMB.jon2.glimworm.com
> -r-------- 1 jc jc 0 Jul 4 23:36 _gtC.46PMMB.jon2.glimworm.com
> drwx------ 2 jc jc 0 Jul 5 11:34 new
> -rw------- 1 jc jc 177 Feb 9 2009 razor-agent.log
> -r-------- 1 jc jc 0 Jul 4 23:44 _RYD.PDQMMB.jon2.glimworm.com
> drwx------ 2 jc jc 0 Jul 5 11:33 tmp
> -r-------- 1 jc jc 0 Jul 4 23:44 _TWD,jCQMMB.jon2.glimworm.com
> -r-------- 1 jc jc 0 Jul 4 23:44 _TWD.jCQMMB.jon2.glimworm.com
> drwx------ 5 jc jc 0 Oct 24 2006 Virus
> -r-------- 1 jc jc 0 Jul 4 23:37 _-xC,K8PMMB.jon2.glimworm.com
> -r-------- 1 jc jc 0 Jul 4 23:37 _-xC.K8PMMB.jon2.glimworm.com
>
> Does anyone know if these have been created because of moosefs?
>
> Jonathan Carter

From: Jonathan C. <jc...@gl...> - 2010-07-05 10:06:34
I have set up moosefs (master + chunkserver) on a Dell PE2650 / CentOS 5.3 server, and set up the client on a Dell PE850 / RedHat AS4.

When I look in the folders I created on the mfs filesystem I get some strange files starting with an underscore. I list my Maildir as an example:

-r-------- 1 jc jc 0 Jul 4 23:35 _3rC.a6PMMB.jon2.glimworm.com
-r-------- 1 jc jc 0 Jul 4 23:44 _9VD,jCQMMB.jon2.glimworm.com
-r-------- 1 jc jc 0 Jul 4 23:44 _9VD.jCQMMB.jon2.glimworm.com
-rwxr-xr-x 1 root root 168 Aug 26 2007 clear.sh
drwx------ 2 jc jc 0 Jul 5 11:34 cur
-rw------- 1 jc jc 161136 Jul 5 11:34 dovecot-uidlist
-r-------- 1 jc jc 0 Jul 4 23:36 _gtC,46PMMB.jon2.glimworm.com
-r-------- 1 jc jc 0 Jul 4 23:36 _gtC.46PMMB.jon2.glimworm.com
drwx------ 2 jc jc 0 Jul 5 11:34 new
-rw------- 1 jc jc 177 Feb 9 2009 razor-agent.log
-r-------- 1 jc jc 0 Jul 4 23:44 _RYD.PDQMMB.jon2.glimworm.com
drwx------ 2 jc jc 0 Jul 5 11:33 tmp
-r-------- 1 jc jc 0 Jul 4 23:44 _TWD,jCQMMB.jon2.glimworm.com
-r-------- 1 jc jc 0 Jul 4 23:44 _TWD.jCQMMB.jon2.glimworm.com
drwx------ 5 jc jc 0 Oct 24 2006 Virus
-r-------- 1 jc jc 0 Jul 4 23:37 _-xC,K8PMMB.jon2.glimworm.com
-r-------- 1 jc jc 0 Jul 4 23:37 _-xC.K8PMMB.jon2.glimworm.com

Does anyone know if these have been created because of moosefs?

Jonathan Carter

From: Steve <st...@bo...> - 2010-07-01 14:16:26
With typing lessons ?

-------Original Message-------
From: Scoleri, Steven
Date: 01/07/2010 12:20:50
To: moo...@li...
Subject: [Moosefs-users] help

From: Scoleri, S. <Sco...@gs...> - 2010-07-01 11:20:24
From: Laurent W. <lw...@hy...> - 2010-06-29 13:04:03
On Mon, 21 Jun 2010 12:12:03 +0200 Michał Borychowski <mic...@ge...> wrote:
> Thanks for "doxygenizing" the project. I hope it will be the first step towards better documentation of the MooseFS source code. But I know a lot still depends on the time of our dev guys.

Hope so too. Anyway, I managed to make doxygen break with my attempts to « doxygenize » the code. Please find an updated version (created today) with no doxygen content at all, but at least you have the complete list of structures and all.

http://centos.kodros.fr/doxygen.tar.gz

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

From: Michał B. <mic...@ge...> - 2010-06-29 07:23:59
Hi!

Just a question: How much RAM do you use on the master, and on the slaves? In our case (11'800'000 chunks on 5 chunkservers):
* 'mfsmaster' process on the master: 4.9 GB (64-bit recompilation required: the 32-bit version of mfsmaster crashed without a message when it came to 4 GB)
* 'mfschunkserver' process on chunkservers: 580 MB

[MB] 32-bit machines are not capable of addressing more than 4 GB, so that was quite normal behaviour. Regarding memory, please have a look at this FAQ entry: http://www.moosefs.org/moosefs-faq.html#cpu – there is information about CPU load and RAM usage. Keep in mind that RAM usage depends on the total number of files and folders (not on their size), and the CPU load on the mfsmaster depends on the number of operations taking place in the filesystem.

[...] Maybe later, we'd like to use it for webhosting. But since LOCK is not supported, it's not yet possible.

[MB] What do you need LOCK for in webhosting?

Kind regards
Michal

From: Michał B. <mic...@ge...> - 2010-06-29 06:23:09
Hi!

Yes, web files can be stored on MooseFS. Just remember it is not a load-balancing solution. If you are preparing for heavy load, you'll still have to have several web servers, and MooseFS will be a very convenient place to store the files in just one place without having to care about backups or replication. Performance should not be affected (of course, it depends on how heavy a load you'll have). And the caching mechanism for clients (which we have been talking about for a while) should be available soon for beta testing.

You can also have a look at this FAQ entry, as web files (js, html, php) are usually small files:
http://www.moosefs.org/moosefs-faq.html#source_code

Regards
Michal

From: Stas Oskin [mailto:sta...@gm...]
Sent: Sunday, June 27, 2010 1:39 PM
To: moo...@li...
Subject: [Moosefs-users] Storing web server files on MFS

Hi.

Is it feasible to place web application files (JS, HTML, PHP and so on) on MFS? Or will the read speeds be too low?

Thanks.

From: Travis <tra...@tr...> - 2010-06-27 13:11:22
I have found running Apache document root folders over MFS to be no worse than running them on a standard Unix network file system (NFS). Typically my users connecting to the website experience more latency in the network connection that gets them to the site.

I guess it would depend on your definition of speed. If you have several dozens of thousands of requests per second, where even a classical file system would experience heavy load and you would not want to use an NFS mount point, you would probably have similar speed issues with MFS.

On 06/27/2010 07:39 AM, Stas Oskin wrote:
> Hi.
>
> Is it feasible to place web application files (JS, HTML, PHP and so on) on MFS?
>
> Or will the read speeds be too low?
>
> Thanks.

From: Stas O. <sta...@gm...> - 2010-06-27 11:39:48
Hi.

Is it feasible to place web application files (JS, HTML, PHP and so on) on MFS? Or will the read speeds be too low?

Thanks.

From: Laurent W. <lw...@hy...> - 2010-06-25 07:32:00
On Fri, 25 Jun 2010 01:47:52 +0300 Stas Oskin <sta...@gm...> wrote:
> Hi.
>
> Reading the High Availability section, there is a list of required scripts mentioned.
> http://www.moosefs.org/mini-howtos.html#redundant-master
>
> Is there a sample of these HA scripts to automate the fail-over process?

You can find a draft here:
http://sourceforge.net/mailarchive/message.php?msg_name=AANLkTimLpK_xBFVlW4vKG3dryn-BPOpFDNYpBJ95PVGh%40mail.gmail.com

For now, nothing has properly been written; it's on the TODO list of Fabien and myself. If you have to deploy such a thing quickly, then please write down the steps so we can get your howto :)

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

From: Stas O. <sta...@gm...> - 2010-06-24 22:48:23
Hi.

Reading the High Availability section, there is a list of required scripts mentioned:
http://www.moosefs.org/mini-howtos.html#redundant-master

Is there a sample of these HA scripts to automate the fail-over process?

Regards,
Stas.

From: Fabien G. <fab...@gm...> - 2010-06-24 12:45:15
Hi,

On Thu, Jun 24, 2010 at 12:53 PM, Laurent Wandrebeck <lw...@hy...> wrote:
> About the packet too long thread, I'd like to talk a bit about possible improvements before coding anything if needed.
> [...]

Just for our information, what do those values (MaxPacketSize, NODEHASHSIZE and MFSMAXFILES) represent?

Fabien

From: Laurent W. <lw...@hy...> - 2010-06-24 12:35:05
On Thu, 24 Jun 2010 14:15:10 +0200 Fabien Germain <fab...@gm...> wrote:
> Hi,
>
> On Thu, Jun 24, 2010 at 12:53 PM, Laurent Wandrebeck <lw...@hy...> wrote:
> > About the packet too long thread, I'd like to talk a bit about possible improvements before coding anything if needed.
> > [...]
>
> Just for our information, what do those values (MaxPacketSize, NODEHASHSIZE and MFSMAXFILES) represent?
>
> Fabien

The only one I know for sure is MFSMAXFILES, which represents the maximum number of files open at a time.

(reading a bit of code…)

MaxPacketSize seems to be part of the MooseFS protocol. It represents the maximum size of a packet ;) To be clearer, when a server talks « moosefs » to another, MaxPacketSize is the maximum length of information sent before the receiver tries to decode and understand it.

NODEHASHSIZE: a node seems to be recognised by a hash, and NODEHASHSIZE is the length of that hash.

Please remember I have barely read the code, so I may be wrong!

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

From: Laurent W. <lw...@hy...> - 2010-06-24 10:53:38
Hi,

About the packet too long thread, I'd like to talk a bit about possible improvements before coding anything, if needed.

MaxPacketSize could be dynamically adapted, as could the +900 in « if ((uint32_t)(main_time())<=starttime+900) », and the 14400 in « for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) { ». Borderline to this, MFSMAXFILES could also be dynamically allocated so it can grow.

First, please note I'm unsure that basing auto-adjustment on the number of chunks per server is the right way. If not, I guess you can trash almost everything down there :)

I've looked quickly at the code, and it seems quite easy to do. The only thing is to find good enough basic formulas so that users don't, or only seldom, meet these problems again. Some numbers to try to find the way:

MaxPacketSize 50000000 works fine with 800000 chunks/server.
MaxPacketSize 50000000 does not work with 4700000 chunks/server.
MaxPacketSize 500000000 does work with 4700000 chunks/server.
50000000/800000 = 62.5 works
500000000/4700000 ~= 106 works
50000000/4700000 ~= 10.6 fails

I don't know the inner workings of mfs well enough, but it may work if MaxPacketSize = number of chunks per server * 62.5, supposing the evolution of MaxPacketSize only needs to be linear. If not, maybe 50000000 + (number of chunks per server - 800000)/~115 would do the trick? It would be great if Fabien could try 62.5*4700000 = 293750000 for MaxPacketSize. Or if the MooseFS main devs could bring their knowledge into it.

BUT, there are several definitions of MaxPacketSize:

mfsmetalogger/masterconn.c   #define MaxPacketSize 1500000
mfsmaster/matocuserv.c       #define MaxPacketSize 1000000
mfsmaster/matomlserv.c       #define MaxPacketSize 1500000
mfsmaster/matocsserv.c       #define MaxPacketSize 50000000
mfschunkserver/masterconn.c  #define MaxPacketSize 10000
mfschunkserver/csserv.c      #define MaxPacketSize 100000

Of course, only mfsmaster/matocsserv.c is concerned here. I guess the others are not as sensitive to the size or number of chunks.

About starttime+x:
150 works fine with 800000 chunks/server.
900 is proposed with 4700000 chunks/server.
If linear is the way: 800k/150 = 5333.333… and 4.7M/900 = 5222.222…, so x = number of chunks per server / 5300 may do the trick? Or, taking a minimum of 150, we'd get x = 150 + (number of chunks per server - 800k)/6000, with of course (number of chunks - 800k) being ≥ 0.

About NODEHASHSIZE/x:
3600 works with 800k chunks per server.
14400 is proposed with 4.7M.
Linear: 800k/3600 = 222.222… and 4.7M/14400 = 326.388…, so x = number of chunks per server / 222? Or, with a 3600 minimum, x = 3600 + (number of chunks per server - 800k)/361, with of course (number of chunks - 800k) being ≥ 0.

About MFSMAXFILES: the default is 5000 in the code, and the Makefile pushes it to 10000. I've found no information about tuning it. Maybe we could simply use malloc for struct pollfd pdesc[MFSMAXFILES]; and realloc to MFSMAXFILES+something if « if (setrlimit(RLIMIT_NOFILE,&rls)<0) { » is true?

Thoughts?

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

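To make the proposed scaling concrete, here is a small standalone C sketch. It is illustrative only and not MooseFS source: the function names, the 63-bytes-per-chunk constant and the floor values are assumptions derived from the data points quoted in the message above, and it implements the simple "chunks divided by a constant" variants rather than the offset formulas.

/* Standalone sketch (not MooseFS source): derive the three tuning values
 * discussed above from the number of chunks per chunkserver, assuming the
 * scaling is roughly linear as the data points in this thread suggest.
 * All names and constants are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define BASE_MAX_PACKET   50000000ULL  /* stock MaxPacketSize in matocsserv.c    */
#define BYTES_PER_CHUNK   63ULL        /* ~62.5, i.e. 50000000 / 800000          */
#define BASE_START_WINDOW 150ULL       /* stock fs_test_files start-up delay (s) */
#define BASE_HASH_DIVISOR 3600ULL      /* stock NODEHASHSIZE divisor             */

static uint64_t suggested_max_packet(uint64_t chunks_per_cs) {
    uint64_t v = chunks_per_cs * BYTES_PER_CHUNK;
    return v > BASE_MAX_PACKET ? v : BASE_MAX_PACKET;    /* never go below the default */
}

static uint64_t suggested_start_window(uint64_t chunks_per_cs) {
    uint64_t v = chunks_per_cs / 5300;                   /* 800k/150 ~ 5333            */
    return v > BASE_START_WINDOW ? v : BASE_START_WINDOW;
}

static uint64_t suggested_hash_divisor(uint64_t chunks_per_cs) {
    uint64_t v = chunks_per_cs / 222;                    /* 800k/3600 ~ 222            */
    return v > BASE_HASH_DIVISOR ? v : BASE_HASH_DIVISOR;
}

int main(void) {
    uint64_t chunks = 4700000;  /* Fabien's ~4.7M chunks per chunkserver */
    printf("MaxPacketSize : %llu\n", (unsigned long long)suggested_max_packet(chunks));
    printf("start window  : %llu s\n", (unsigned long long)suggested_start_window(chunks));
    printf("hash divisor  : %llu\n", (unsigned long long)suggested_hash_divisor(chunks));
    return 0;
}

For 4.7 million chunks per chunkserver this yields a MaxPacketSize of about 296,100,000, in the same ballpark as the 293,750,000 value suggested above.
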
From: Ricardo J. B. <ric...@da...> - 2010-06-23 21:29:28
On Tuesday, 22 June 2010, kuer ku wrote:
> Hi, all,
>
> In the documentation, I found that mfs can be mounted via /etc/fstab:
> http://www.moosefs.org/reference-guide.html
>
> As of 1.6.x the mfsmount may be set up in the /etc/fstab to facilitate having MooseFS mounted on system startup:
> MFSMASTER_IP:9421 on /mnt/mfs type fuse.mfs (rw, nosuid, nodev, allow_other, default_permissions)
> And additionally, the system-dependent mount script would need to invoke something like mount -a -t fuse.mfs.
>
> But when I try that, it fails. I add the following to /etc/fstab:
>
> 192.168.0.23:9421 /var/mfs/tesa fuse.mfs rw,nosuid,nodev,allow_other 0 0
>
> then reboot the machine, and the fuse.mfs filesystem is not auto-mounted. So I try this on the command line:
>
> $ sudo mount /var/mfs/tesa
> mount: unknown filesystem type 'fuse.mfs'
>
> It does not work, either. Is there any other setting to tell the system that there is a fuse.mfs filesystem?
>
> thanks.
>
> -- kuer

I have MooseFS installed on CentOS 5 clients; it should work if you add this in /etc/fstab:

mfsmount /mnt/mfs fuse defaults,nosuid,nodev,_netdev,mfsmaster=MASTER_IP,mfssubfolder=/ 0 0

Then edit /etc/init.d/netfs to add the next line to the "stop" section [1], so MooseFS is cleanly unmounted on shutdown:

action $"Unmounting MFS filesystems: " umount -a -t fuse

Lastly, to have network FSs (marked with _netdev in fstab) auto-mounted on boot you have to execute:

# chkconfig netfs on

[1] I also have "/sbin/modprobe fuse" in the "start" section of netfs, just in case.

HTH. Regards,
--
Ricardo J. Barberis
Senior SysAdmin - I+D
Dattatec.com :: Soluciones de Web Hosting
Su Hosting hecho Simple..!

From: Fabien G. <fab...@gm...> - 2010-06-23 15:10:48
Hi Michal and moosefs-users@,

2010/6/23 Michał Borychowski <mic...@ge...>
> Probably important is the difference in the number of chunks per chunkserver. We have about 800,000 chunks per chunkserver (60 million chunks on 75 machines).

Thanks for your quick answer. Yes, you're right, that's what I thought too: 4.7M chunks per chunkserver is far bigger than 800K.

Just a question: How much RAM do you use on the master, and on the slaves? In our case (11'800'000 chunks on 5 chunkservers):
* 'mfsmaster' process on the master: 4.9 GB (64-bit recompilation required: the 32-bit version of mfsmaster crashed without a message when it came to 4 GB)
* 'mfschunkserver' process on chunkservers: 580 MB

> How many files do you have? What is the average size of a file? What goal do you have set?

Our current test cluster is used for backup storage. We have lots of files of all sizes (rsync of /etc, /home, ... files on several servers), and also a lot of big archive files (several GB each). We currently have 14.7 million inodes in use.

Maybe later we'd like to use it for webhosting. But since LOCK is not supported, it's not yet possible.

Fabien

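As a rough cross-check of the claim that master RAM scales with the number of objects rather than their size, the throwaway snippet below (illustrative arithmetic only, using nothing but the figures quoted in this message) works out the implied per-inode and per-chunk memory cost.

/* Throwaway arithmetic using only the figures quoted above
 * (not an official MooseFS sizing rule). */
#include <stdio.h>

int main(void) {
    double master_ram_bytes = 4.9e9;   /* reported 'mfsmaster' resident size     */
    double inodes_in_use    = 14.7e6;  /* reported inode count                   */
    double total_chunks     = 11.8e6;  /* reported chunk count (5 chunkservers)  */

    printf("~%.0f bytes of master RAM per inode\n", master_ram_bytes / inodes_in_use);
    printf("~%.0f bytes of master RAM per chunk\n", master_ram_bytes / total_chunks);
    return 0;
}

That comes out to a few hundred bytes of master RAM per object, consistent with Michał's point that RAM usage depends on the number of files and folders rather than on their size.
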
From: Ruan C. <rua...@gm...> - 2010-06-23 12:46:43
Thank you for your reply.

My fuse version: fusefs-kmod-0.3.9.p1.20080208_6

I think the fuse-kmod in the FreeBSD ports is too old. I will try to find some newer FUSE and do more tests.

And are there any client libs for programming languages? For example C, or a PHP extension?

On Tue, Jun 22, 2010 at 9:08 PM, Travis <tra...@tr...> wrote:
> Yes, I agree it is likely an incomplete FUSE, or perhaps not the most up to date FUSE implementation on FreeBSD.
>
> I am currently working on:
> moosefs 1.6.13
> FUSE library version 2.8.1
> Samba 3.4.7
> Linux 2.6.32-21 (part of Ubuntu Linux 10.4 64 bit edition)
>
> where I have Samba exposing folders within the moosefs mount for windows machines on my network. And this appears to work well for me; I have never observed a crash, that is.
>
> I am also working on amd64 hardware. So the only difference is FreeBSD vs Linux, and as Michał said it is probably the FreeBSD support for FUSE?
>
> I remember the FreeBSD ports sometimes being a little behind the latest version of a package. Would it be possible for you to try to compile the latest FUSE from sources instead of using a FreeBSD port? I am not even sure if this is possible; it might require hacking on the FUSE sources to even get them to compile. Most of the problems are likely to be different system header files needed, or a different location, and possibly the need to fetch other packages to provide supporting features. And even then there might be some kernel parameter tuning needed to make it work, though this should be done already, I would think, if the current FUSE mostly works for you.
>
> Linux, Fuse, moosefs, samba, 64 bit
>
> On 06/21/2010 12:05 PM, Ruan Chunping wrote:
>> Thank you for your reply!
>>
>> I agree with you.
>>
>> I tested on FreeBSD 8.0 and FreeBSD 8.0-p3, the same problem.
>>
>> My guess is: when I copy files, Samba makes some operations, but FUSE does not support them. I'm not sure :)
>>
>> 2010/6/21 Michał Borychowski <mic...@ge...>:
>>> Thank you for your submission. We will look into this situation, but when the kernel crashes it is more probably caused by FUSE for FreeBSD than by MooseFS itself.
>>>
>>> Kind regards
>>> Michał Borychowski
>>> MooseFS Support Manager
>>> Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
>>> Tel.: +4822 874-41-00  Fax : +4822 874-41-01
>>>
>>>> -----Original Message-----
>>>> From: Ruan Chunping [mailto:rua...@gm...]
>>>> Sent: Monday, June 21, 2010 6:53 AM
>>>> To: moo...@li...
>>>> Subject: Re: [Moosefs-users] BUG report, mfs+samba+freebsd8amd64 crash
>>>>
>>>> and the samba version:
>>>>
>>>> pkg_info | grep samba
>>>> samba-3.0.32_2,1          A free SMB and CIFS client and server for UNIX
>>>> samba-libsmbclient-3.0.37 Shared libs from the samba package
>>>>
>>>> On Mon, Jun 21, 2010 at 12:50 PM, Ruan Chunping <rua...@gm...> wrote:
>>>>> OS: FreeBSD dev.xxxx.com 8.0-RELEASE-p3 FreeBSD 8.0-RELEASE-p3 #0: Sun Jun 20 13:06:16 CST 2010
>>>>> ro...@de...:/usr/obj/usr/src/sys/GENERIC amd64
>>>>> MFS: mfs-1.6.15.tar.gz from FreeBSD ports
>>>>> FUSE: fusefs-kmod-0.3.9.p1.20080208_6, fusefs-libs-2.7.4
>>>>> dmesg | grep fuse
>>>>> fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8
>>>>>
>>>>> install and config MFS ( http://www.moosefs.org/reference-guide.html )
>>>>> ip: 192.168.1.77
>>>>> master, metalogger, chunkserver, client and cgiserv are installed on one machine.
>>>>>
>>>>> /etc/hosts
>>>>> 192.168.1.77 mfsmaster
>>>>>
>>>>> ...
>>>>>
>>>>> dd if=/dev/zero of=/chunk1 bs=4m count=1024
>>>>> dd if=/dev/zero of=/chunk2 bs=4m count=1024
>>>>> mdconfig -a -t vnode -f /chunk1 -u 0
>>>>> mdconfig -a -t vnode -f /chunk2 -u 1
>>>>> newfs -m0 -O2 /dev/md0
>>>>> newfs -m0 -O2 /dev/md1
>>>>> mount /dev/md0 /mnt/mfschunk1
>>>>> mount /dev/md2 /mnt/mfschunk2
>>>>> .. ok
>>>>>
>>>>> config and start mfschunkserver
>>>>> .. ok
>>>>>
>>>>> mfsmount /mnt/mfs -H mfsmaster
>>>>> .. ok
>>>>>
>>>>> cd /mnt/mfs/
>>>>> mkdir test
>>>>> .. ok
>>>>>
>>>>> echo "test" > test.txt
>>>>> .. ok
>>>>>
>>>>> cat test.txt
>>>>> .. ok (I noticed the read/readdir operation counters increasing, from mfscgiserver)
>>>>>
>>>>> mkdir MFS (/mnt/mfs/MFS)
>>>>> .. ok
>>>>>
>>>>> ln -s /mnt/mfs/MFS /mnt/SMB/
>>>>> ls -l /mnt/SMB/MFS
>>>>> .. ok
>>>>>
>>>>> now, on my work pc (OS: Win7, ip: 192.168.1.10), I can see the MFS folder ( \\192.168.1.77\SMB )
>>>>> 1. copy a file (<1M) to \\192.168.1.77\SMB
>>>>> .. ok
>>>>> 2. copy the same file to \\192.168.1.77\SMB\MFS\
>>>>> FreeBSD crashes and auto-reboots, no kernel dump :(
>>>>>
>>>>> * I can reproduce this bug
>>>
>>> --
>>> 米胖
>>> www.mipang.com

--
米胖
www.mipang.com

From: Michał B. <mic...@ge...> - 2010-06-23 09:16:19
Maybe you are missing a package called fuse-utils (this is the name used in Debian; the package can have other names in different distributions). You may also need to load the kernel "fuse" module beforehand (you can add it to /etc/modules).

Regards
Michał

From: kuer ku [mailto:ku...@gm...]
Sent: Wednesday, June 23, 2010 2:28 AM
To: moo...@li...
Subject: [Moosefs-users] how to mount fuse.mfs in /etc/fstab

Hi, all,

In the documentation, I found that mfs can be mounted via /etc/fstab:
http://www.moosefs.org/reference-guide.html

As of 1.6.x the mfsmount may be set up in the /etc/fstab to facilitate having MooseFS mounted on system startup:
MFSMASTER_IP:9421 on /mnt/mfs type fuse.mfs (rw, nosuid, nodev, allow_other, default_permissions)
And additionally, the system-dependent mount script would need to invoke something like mount -a -t fuse.mfs.

But when I try that, it fails. I add the following to /etc/fstab:

192.168.0.23:9421 /var/mfs/tesa fuse.mfs rw,nosuid,nodev,allow_other 0 0

then reboot the machine, and the fuse.mfs filesystem is not auto-mounted. So I try this on the command line:

$ sudo mount /var/mfs/tesa
mount: unknown filesystem type 'fuse.mfs'

It does not work, either. Is there any other setting to tell the system that there is a fuse.mfs filesystem?

thanks.

-- kuer

From: Michał B. <mic...@ge...> - 2010-06-23 09:09:34
Hi Fabien!

Probably important is the difference in the number of chunks per chunkserver. We have about 800,000 chunks per chunkserver (60 million chunks on 75 machines).

How many files do you have? What is the average size of a file? What goal do you have set?

Regards
Michał

From: Fabien Germain [mailto:fab...@gm...]
Sent: Monday, June 21, 2010 4:41 PM
To: moo...@li...
Subject: Re: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000)

Hello,

We had exactly the same issue as Marco this morning (while copying lots of files, it suddenly stopped working with the same error messages). The three modifications in the source code provided by Michal, plus recompilation of the mfsmaster binary, solved the problem; it's back to life :-)

Notice that we "only" have 11'480'000 chunks (whereas Gemius seems to run a 26'000'000-chunk MFS cluster). Do you have any clue why it can happen, given that our current cluster is quite small?

Our configuration: one master server (8 GB of RAM), one master backup server, 5 chunk servers (1 GB of RAM and 2 x 4 TB HDDs on each chunkserver, about 2'200'000 chunks on each HDD, which means about 4'500'000 chunks stored on each chunk server).

Regards,
Fabien

2010/6/21 Michał Borychowski <mic...@ge...>

Here are some quick patches you can apply to the master server to improve its performance for that number of files.

In matocsserv.c in mfsmaster you need to change this line:
#define MaxPacketSize 50000000
into this:
#define MaxPacketSize 500000000

We also suggest a change in filesystem.c in mfsmaster, in the "fs_test_files" function. Change this line:
if ((uint32_t)(main_time())<=starttime+150) {
into:
if ((uint32_t)(main_time())<=starttime+900) {

and also change this line:
for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) {
into this:
for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) {

You need to recompile the master server and start it again. The above changes should make the master server work more stably with a large number of files.

Another suggestion would be to create two MooseFS instances (e.g. 2 x 200 million files). One master server could also be the metalogger for the other system and vice versa.

Kind regards
Michał

From: marco lu [mailto:mar...@gm...]
Sent: Monday, June 21, 2010 6:04 AM
To: moo...@li...
Subject: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000)

hi, everyone

We intend to use moosefs in our production environment as the storage for our online photo service. We'll store about 400 million photo files, so the master server's memory is a big concern. I've built one master server (64 GB mem), one metalogger server, and three chunk servers (10 x 1 TB SATA).

When I copy photo files to the moosefs system, at the start everything is good. But when the master server exhausts its memory, I get many errors in the master server's syslog:

Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 00000000018140FF (inode: 26710547 ; index: 0)
Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 26710547: img.xxx.com/003/810/560/b.jpg
Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 000000000144B907 (inode: 22516243 ; index: 0)
Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 22516243: img.xxx.com/051/383/419/a.jpg

and some error messages like this:

Jun 21 11:49:31 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.11, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
Jun 21 11:50:03 mfs-master[4166]: CS(10.25.40.111) packet too long (226064141/50000000)
Jun 21 11:50:03 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.12, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
Jun 21 11:50:34 mfs-master[4166]: CS(10.25.40.113) packet too long (217185941/50000000)
Jun 21 11:50:34 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.13, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)

Is it a memory problem or a kernel tuning problem? Can anyone give me some information?

Thanks all.
Mumonitor

From: kuer ku <ku...@gm...> - 2010-06-23 00:35:27
Hi, all,

In the documentation, I found that mfs can be mounted via /etc/fstab:
http://www.moosefs.org/reference-guide.html

As of 1.6.x the mfsmount may be set up in the /etc/fstab to facilitate having MooseFS mounted on system startup:
MFSMASTER_IP:9421 on /mnt/mfs type fuse.mfs (rw, nosuid, nodev, allow_other, default_permissions)
And additionally, the system-dependent mount script would need to invoke something like mount -a -t fuse.mfs.

But when I try that, it fails. I add the following to /etc/fstab:

192.168.0.23:9421 /var/mfs/tesa fuse.mfs rw,nosuid,nodev,allow_other 0 0

then reboot the machine, and the fuse.mfs filesystem is not auto-mounted. So I try this on the command line:

$ sudo mount /var/mfs/tesa
mount: unknown filesystem type 'fuse.mfs'

It does not work, either. Is there any other setting to tell the system that there is a fuse.mfs filesystem?

thanks.

-- kuer

From: Travis <tra...@tr...> - 2010-06-22 13:24:32
Yes, I agree it is likely an incomplete FUSE, or perhaps not the most up to date FUSE implementation on FreeBSD.

I am currently working on:
moosefs 1.6.13
FUSE library version 2.8.1
Samba 3.4.7
Linux 2.6.32-21 (part of Ubuntu Linux 10.4 64 bit edition)

where I have Samba exposing folders within the moosefs mount for windows machines on my network. And this appears to work well for me; I have never observed a crash, that is.

I am also working on amd64 hardware. So the only difference is FreeBSD vs Linux, and as Michał said it is probably the FreeBSD support for FUSE?

I remember the FreeBSD ports sometimes being a little behind the latest version of a package. Would it be possible for you to try to compile the latest FUSE from sources instead of using a FreeBSD port? I am not even sure if this is possible; it might require hacking on the FUSE sources to even get them to compile. Most of the problems are likely to be different system header files needed, or a different location, and possibly the need to fetch other packages to provide supporting features. And even then there might be some kernel parameter tuning needed to make it work, though this should be done already, I would think, if the current FUSE mostly works for you.

Linux, Fuse, moosefs, samba, 64 bit

On 06/21/2010 12:05 PM, Ruan Chunping wrote:
> Thank you for your reply!
>
> I agree with you.
>
> I tested on FreeBSD 8.0 and FreeBSD 8.0-p3, the same problem.
>
> My guess is: when I copy files, Samba makes some operations, but FUSE does not support them. I'm not sure :)
>
> 2010/6/21 Michał Borychowski <mic...@ge...>:
>> Thank you for your submission. We will look into this situation, but when the kernel crashes it is more probably caused by FUSE for FreeBSD than by MooseFS itself.
>>
>> Kind regards
>> Michał Borychowski
>> MooseFS Support Manager
>> Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
>> Tel.: +4822 874-41-00  Fax : +4822 874-41-01
>>
>>> -----Original Message-----
>>> From: Ruan Chunping [mailto:rua...@gm...]
>>> Sent: Monday, June 21, 2010 6:53 AM
>>> To: moo...@li...
>>> Subject: Re: [Moosefs-users] BUG report, mfs+samba+freebsd8amd64 crash
>>>
>>> and the samba version:
>>>
>>> pkg_info | grep samba
>>> samba-3.0.32_2,1          A free SMB and CIFS client and server for UNIX
>>> samba-libsmbclient-3.0.37 Shared libs from the samba package
>>>
>>> On Mon, Jun 21, 2010 at 12:50 PM, Ruan Chunping <rua...@gm...> wrote:
>>>> OS: FreeBSD dev.xxxx.com 8.0-RELEASE-p3 FreeBSD 8.0-RELEASE-p3 #0: Sun Jun 20 13:06:16 CST 2010
>>>> ro...@de...:/usr/obj/usr/src/sys/GENERIC amd64
>>>> MFS: mfs-1.6.15.tar.gz from FreeBSD ports
>>>> FUSE: fusefs-kmod-0.3.9.p1.20080208_6, fusefs-libs-2.7.4
>>>> dmesg | grep fuse
>>>> fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8
>>>>
>>>> install and config MFS ( http://www.moosefs.org/reference-guide.html )
>>>> ip: 192.168.1.77
>>>> master, metalogger, chunkserver, client and cgiserv are installed on one machine.
>>>>
>>>> /etc/hosts
>>>> 192.168.1.77 mfsmaster
>>>>
>>>> ...
>>>>
>>>> dd if=/dev/zero of=/chunk1 bs=4m count=1024
>>>> dd if=/dev/zero of=/chunk2 bs=4m count=1024
>>>> mdconfig -a -t vnode -f /chunk1 -u 0
>>>> mdconfig -a -t vnode -f /chunk2 -u 1
>>>> newfs -m0 -O2 /dev/md0
>>>> newfs -m0 -O2 /dev/md1
>>>> mount /dev/md0 /mnt/mfschunk1
>>>> mount /dev/md2 /mnt/mfschunk2
>>>> .. ok
>>>>
>>>> config and start mfschunkserver
>>>> .. ok
>>>>
>>>> mfsmount /mnt/mfs -H mfsmaster
>>>> .. ok
>>>>
>>>> cd /mnt/mfs/
>>>> mkdir test
>>>> .. ok
>>>>
>>>> echo "test" > test.txt
>>>> .. ok
>>>>
>>>> cat test.txt
>>>> .. ok (I noticed the read/readdir operation counters increasing, from mfscgiserver)
>>>>
>>>> mkdir MFS (/mnt/mfs/MFS)
>>>> .. ok
>>>>
>>>> ln -s /mnt/mfs/MFS /mnt/SMB/
>>>> ls -l /mnt/SMB/MFS
>>>> .. ok
>>>>
>>>> now, on my work pc (OS: Win7, ip: 192.168.1.10), I can see the MFS folder ( \\192.168.1.77\SMB )
>>>> 1. copy a file (<1M) to \\192.168.1.77\SMB
>>>> .. ok
>>>> 2. copy the same file to \\192.168.1.77\SMB\MFS\
>>>> FreeBSD crashes and auto-reboots, no kernel dump :(
>>>>
>>>> * I can reproduce this bug
>>
>> --
>> 米胖
>> www.mipang.com