From: Steve T. <sm...@cb...> - 2012-02-10 14:40:56

MFS 1.6.20. I note that if I use something like this for mfsmaster.cfg:

    MATOCS_LISTEN_HOST = <hostname>

where <hostname> resolves to more than one IP address, a listener is established only for the first address returned. The same for MATOML_LISTEN_HOST and MATOCU_LISTEN_HOST. This sounds like a bug to me. Is there any technique available for establishing listeners on multiple IP addresses without using "*"?

Steve

--
----------------------------------------------------------------------------
Steve Thompson, Cornell School of Chemical and Biomolecular Engineering
smt AT cbe DOT cornell DOT edu
"186,282 miles per second: it's not just a good idea, it's the law"
----------------------------------------------------------------------------
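A quick way to see which addresses the master actually bound is to list its listening sockets; this is generic Linux tooling rather than an MFS feature, and 9419/9420/9421 are the default metalogger, chunkserver and client ports:

    # on the master: show which local addresses the mfsmaster listeners are bound to
    netstat -lnt | egrep ':(9419|9420|9421) '
    # or, with iproute2
    ss -lnt | egrep '9419|9420|9421'

If listeners on several specific addresses are really needed, binding to "*" and restricting access with a host firewall is one pragmatic workaround.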
From: JJ <jj...@ci...> - 2012-02-09 22:58:01

Well, I hate to update my message this way, but I reverted all /etc/hosts entries on all 3 mfs* hosts to use the public IP and restarted the mfsmaster/chunk daemons across all hosts. I also moved my entry UP in /etc/mfs/mfsexports.cfg:

    # Allow everything but "meta".
    24.101.150.38/32  /  rw,alldirs,maproot=0

instead of appending it to the bottom where it was previously. (Does placement in the file matter?) Then I unmounted and remounted my client. I tested successfully with

    echo "X" > /mnt/mfs/{x,y,z}

so I re-ran

    time dd if=/dev/zero of=/mnt/mfs/100Meg bs=1024 count=102400

A 100 MB dd operation previously took real 111m35.027s and came up 0 bytes after the "input/output" message. It now only takes real 8m9.896s, and it's nice and peppy.

Thank you for listening! Have a Great Day.

JJ / Habitual
Support Engineer
Cirrhus9.com
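For reference, the shape of mfsexports.cfg described above, with the specific client entry kept ahead of the catch-all (whether ordering actually matters is worth confirming against the mfsexports.cfg man page; treating it as first-match-wins is only the cautious assumption), plus a quicker way to re-run the same throughput test:

    # /etc/mfs/mfsexports.cfg
    24.101.150.38/32  /  rw,alldirs,maproot=0
    *                 /  rw,alldirs,maproot=0

    # same 100 MB write test, with a larger block size
    time dd if=/dev/zero of=/mnt/mfs/100Meg bs=1M count=100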
From: JJ <jj...@ci...> - 2012-02-09 19:18:37
|
What I've also noticed is that the file size is NOT always 0. It turns to 0 when the "Input/output error" message appears on my client. a mount command from my client shows: mfsmaster:9421 on /mnt/mfs type fuse.mfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other) my /etc/fstab entry is: mfsmount /mnt/mfs fuse defaults 0 0 the /etc/mfs/mfsexports.cfg on mfsmaster for my client is 24.101.150.38/32 / rw,alldirs,maproot=0 All other values in same file are commented out or appear default: # Allow everything but "meta". * / rw,alldirs,maproot=0 # Allow "meta". * ... I'm going to try another popular distro of the Ubu* variety and see what happens from there. Thank you for your time. JJ / Habitual Support Engineer Cirrhus9.com On 02/09/2012 12:40 PM, JJ wrote: > Travis and Company: > > Ås I have said "It all appears good" > Everything as far as I can tell is a green-light. > > /etc/hosts contents all ping with expected responses. > Telnet to the (3 mfs*) hosts:9420 all work via named host or > VIP:9420 as expected. > > I unREMarked BIND_HOST = * in mfschunkserver.cfg on mfsmaster and > bounced the service. > > My only issue now still seems to be the inability to store any files > there from my client reliably. > > time dd if=/dev/zero of=/mnt/mfs/100Meg bs=1024 count=102400 > dd: writing `/mnt/mfs/100Meg': Input/output error > dd: closing output file `/mnt/mfs/100Meg': Input/output error > > I thought CentOS by default used all available IPs when binding. > but it couldn't hurt to be specific. :) > > I am now re-reading both > http://www.moosefs.org/moosefs-faq.html and > http://www.moosefs.org/reference-guide.html#using-moosefs > > but I could still use a hand getting some non-zero byte file writes going. > > JJ / Habitual > Support Engineer > Cirrhus9.com > > On 02/08/2012 04:40 PM, Travis Hein wrote: >> to fix the networking so that things go over your internal private >> network segment instead of the public segment, >> for the chunk server configuratons (mfschunkserver.cfg) >> uncomment and edit the BIND_HOST >> >> # BIND_HOST = * >> >> by default it will listen on all interfaces. >> >> Also, when setting the chunk servers, and mfsmount clients, be sure to >> specify the internal IP address (or even better, create an internal DNS >> name) for the internal MFS master node IP address. >> >> >> Nautalus might do some kind of pre-crawl sub folders to cache file and >> content types, like a content indexer? I am not sure though, have not >> used it >> >> On 12-02-08 3:38 PM, JJ wrote: >>> Hello moosefs-users: >>> >>> I have managed to install our first moosefs test system. >>> I have 3 identical boxes. All >>> CentOS 5.7 /x86_64 >>> mfs 1.6.20 installed from source. >>> fuse-devel-2.7.4-8.el5 across all installed via yum. >>> >>> My client is >>> OpenSUSE 11.4 >>> mfs 1.6.20 installed from source and >>> fuse-devel-2.8.5-5.1.i586 install via zypper. >>> >>> All 3 moose servers have a dedicated 10G ext3 /dev/hdb however I did not >>> export anything on the mfsmaster, so I have 20G and >>> http://ipa.ddr.ess:9425/mfs.cgi shows the same. >>> >>> du -sh on my client shows: >>> mfsmaster:9421 19G 1001M 18G 6% /mnt/mfs >>> >>> It all appears good. >>> >>> The problem is that using nautilus to navigate (or from c-li) >>> "nautilus /mnt/mfs " results in a> 5 minutes to show content? >>> umounting and remounting doesn't seem to help or affect it. >>> >>> ls -al /mnt/mfs is very responsive however. 
>>> >>> I am following some hints I found at >>> http://contrib.meharwal.com/home/moosefs >>> >>> I have assigned a VirtualIP to my moose hosts and the networking seems fine. >>> I am accessing mfsmaster using the forward-facing/public IP from my client. >>> >>> Is there something else I need to investigate? >>> Thank you for your time. >>> >>> >>> >> >> >> ------------------------------------------------------------------------------ >> Keep Your Developer Skills Current with LearnDevNow! >> The most comprehensive online learning library for Microsoft developers >> is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3, >> Metro Style Apps, more. Free future releases when you subscribe now! >> http://p.sf.net/sfu/learndevnow-d2d >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > ------------------------------------------------------------------------------ > Virtualization& Cloud Management Using Capacity Planning > Cloud computing makes use of virtualization - but cloud computing > also focuses on allowing computing to be delivered as a service. > http://www.accelacomm.com/jaw/sfnl/114/51521223/ > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
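For reference, a variant of the fstab line quoted above that names the master host and port explicitly instead of relying on the default "mfsmaster" hostname; the host name is a placeholder and the options follow mfsmount's -o mfsmaster= / -o mfsport= syntax:

    # /etc/fstab
    mfsmount  /mnt/mfs  fuse  mfsmaster=mfsmaster.internal.lan,mfsport=9421,_netdev  0  0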
From: JJ <jj...@ci...> - 2012-02-09 17:43:44
|
Travis and Company: Ås I have said "It all appears good" Everything as far as I can tell is a green-light. /etc/hosts contents all ping with expected responses. Telnet to the (3 mfs*) hosts:9420 all work via named host or VIP:9420 as expected. I unREMarked BIND_HOST = * in mfschunkserver.cfg on mfsmaster and bounced the service. My only issue now still seems to be the inability to store any files there from my client reliably. time dd if=/dev/zero of=/mnt/mfs/100Meg bs=1024 count=102400 dd: writing `/mnt/mfs/100Meg': Input/output error dd: closing output file `/mnt/mfs/100Meg': Input/output error I thought CentOS by default used all available IPs when binding. but it couldn't hurt to be specific. :) I am now re-reading both http://www.moosefs.org/moosefs-faq.html and http://www.moosefs.org/reference-guide.html#using-moosefs but I could still use a hand getting some non-zero byte file writes going. JJ / Habitual Support Engineer Cirrhus9.com On 02/08/2012 04:40 PM, Travis Hein wrote: > to fix the networking so that things go over your internal private > network segment instead of the public segment, > for the chunk server configuratons (mfschunkserver.cfg) > uncomment and edit the BIND_HOST > > # BIND_HOST = * > > by default it will listen on all interfaces. > > Also, when setting the chunk servers, and mfsmount clients, be sure to > specify the internal IP address (or even better, create an internal DNS > name) for the internal MFS master node IP address. > > > Nautalus might do some kind of pre-crawl sub folders to cache file and > content types, like a content indexer? I am not sure though, have not > used it > > On 12-02-08 3:38 PM, JJ wrote: >> Hello moosefs-users: >> >> I have managed to install our first moosefs test system. >> I have 3 identical boxes. All >> CentOS 5.7 /x86_64 >> mfs 1.6.20 installed from source. >> fuse-devel-2.7.4-8.el5 across all installed via yum. >> >> My client is >> OpenSUSE 11.4 >> mfs 1.6.20 installed from source and >> fuse-devel-2.8.5-5.1.i586 install via zypper. >> >> All 3 moose servers have a dedicated 10G ext3 /dev/hdb however I did not >> export anything on the mfsmaster, so I have 20G and >> http://ipa.ddr.ess:9425/mfs.cgi shows the same. >> >> du -sh on my client shows: >> mfsmaster:9421 19G 1001M 18G 6% /mnt/mfs >> >> It all appears good. >> >> The problem is that using nautilus to navigate (or from c-li) >> "nautilus /mnt/mfs " results in a> 5 minutes to show content? >> umounting and remounting doesn't seem to help or affect it. >> >> ls -al /mnt/mfs is very responsive however. >> >> I am following some hints I found at >> http://contrib.meharwal.com/home/moosefs >> >> I have assigned a VirtualIP to my moose hosts and the networking seems fine. >> I am accessing mfsmaster using the forward-facing/public IP from my client. >> >> Is there something else I need to investigate? >> Thank you for your time. >> >> >> > > > ------------------------------------------------------------------------------ > Keep Your Developer Skills Current with LearnDevNow! > The most comprehensive online learning library for Microsoft developers > is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3, > Metro Style Apps, more. Free future releases when you subscribe now! > http://p.sf.net/sfu/learndevnow-d2d > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Volodymyr Y. <tau...@gm...> - 2012-02-09 14:10:08

Hello!

I have two servers:
1. freebsd1 - master and chunk server
2. freebsd2 - metalogger and chunk server

OS: FreeBSD freebsd2 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Fri Jan 27 11:13:24 EET 2012 root@freebsd2:/usr/obj/usr/src/sys/TAURUS amd64

On server freebsd1:
1. I wrote a file, "FreeBSD-7.3-RELEASE-i386-disc1.iso", to moosefs.
2. Killed the mfsmaster process.

On server freebsd2:
1. Changed directory to /var/mfs.
2. Ran the utility mfsmetarestore -a:

> [root@freebsd2] /var/mfs# mfsmetarestore -a
> file 'metadata.mfs.back' not found - will try 'metadata_ml.mfs.back' instead
> loading objects (files,directories,etc.) ... ok
> loading names ... ok
> loading deletion timestamps ... ok
> checking filesystem consistency ... ok
> loading chunks data ... ok
> connecting files and chunks ... ok
> store metadata into file: /var/mfs/metadata.mfs

3. Verified the new metadata.mfs:

> [root@freebsd2] /var/mfs# mfsmetadump metadata.mfs
> # header: MFSM 1.5 (4D46534D20312E35)
> # maxnodeid: 1 ; version: 0 ; nextsessionid: 1
> # -------------------------------------------------------------------
> D|i: 1|#:1|e:0|m:0777|u: 0|g: 0|a:1328786730,m:1328786730,c:1328786730|t: 86400
> # -------------------------------------------------------------------
> # -------------------------------------------------------------------
> # free nodes: 0
> # -------------------------------------------------------------------
> # nextchunkid: 0000000000000001
> *|i:0000000000000000|v:00000000|t: 0

Why does the information about the file not appear in metadata.mfs? The changelog has a record of it:

> [root@freebsd2] /var/mfs# cat changelog_ml.0.mfs
> 0: 1328786876|SESSION():1
> 1: 1328786886|SETGOAL(1,0,2,4):1,0,0
> 2: 1328786898|ACCESS(1)
> 3: 1328786913|ACCESS(1)
> 4: 1328786928|ACCESS(1)
> 5: 1328786928|CREATE(1,FreeBSD-7.3-RELEASE-i386-disc1.iso,f,420,0,0,0):2
> 6: 1328786928|AQUIRE(2,1)
> 7: 1328786928|WRITE(2,0,1):1
> .....

Regards,
Volodymyr
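For comparison, the usual metalogger-side recovery sequence looks roughly like this (data directory as in the session above); the explicit -m/-o form names the base metadata file and the changelogs to replay instead of letting mfsmetarestore pick them up on its own:

    cd /var/mfs
    # automatic mode: uses metadata(_ml).mfs.back plus changelog(_ml).*.mfs in the data dir
    mfsmetarestore -a -d /var/mfs
    # explicit mode
    mfsmetarestore -m metadata_ml.mfs.back -o metadata.mfs changelog_ml.*.mfs
    # then start the master from the restored metadata.mfs
    mfsmaster start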
From: Quenten G. <QG...@on...> - 2012-02-09 12:20:59

I set up an iSCSI target on our lab MFS / Ubuntu box earlier today; happy to send you a copy of the docs I used in the morning if it will help.

Thanks
Quenten

Sent from my iPhone

On 09/02/2012, at 10:05 PM, "Steve Thompson" <sm...@cb...> wrote:

> How does one NFS export an MFS file system on Linux? I add the requisite
> line to /etc/exports on the MFS master (and I _am_ using fsid), showmount
> displays the proper export, but I get this message in the log from mountd
> when a client attempts to mount the file system:
>
> possibly unsupported filesystem or fsid= required
>
> I have seen posts from others claiming to be able to do this.
>
> -Steve
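A rough sketch of one way to back an iSCSI LUN with a file that lives on the MooseFS mount, using scsi-target-utils (tgtd); the IQN, paths and sizes are made up for illustration:

    # create a sparse 100 GB backing file on the MooseFS mount
    dd if=/dev/zero of=/mnt/mfs/luns/vm01.img bs=1M count=0 seek=102400

    # define a target and attach the file as LUN 1 (tgtd must already be running)
    tgtadm --lld iscsi --mode target --op new --tid 1 \
           --targetname iqn.2012-02.lan.example:mfs.vm01
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
           --backing-store /mnt/mfs/luns/vm01.img
    tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL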
From: youngcow <you...@gm...> - 2012-02-09 12:10:38

We use unfs3 (http://unfs3.sourceforge.net/) to export an MFS filesystem over NFS, but unfs3 performance is not good.

> How does one NFS export an MFS file system on Linux? I add the requisite
> line to /etc/exports on the MFS master (and I _am_ using fsid), showmount
> displays the proper export, but I get this message in the log from mountd
> when a client attempts to mount the file system:
>
> possibly unsupported filesystem or fsid= required
>
> I have seen posts from others claiming to be able to do this.
>
> -Steve
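For anyone trying the same route, a minimal unfs3 setup looks roughly like this; because unfs3 runs entirely in user space it can re-export a FUSE mount, which kernel nfsd often refuses to do, but as noted above the throughput is modest:

    # /etc/exports.mfs - re-export the MooseFS mount point
    /mnt/mfs  192.168.0.0/24(rw,no_root_squash)

    # run the user-space NFS server against that exports file
    unfsd -e /etc/exports.mfs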
From: Steve T. <sm...@cb...> - 2012-02-09 12:04:15

How does one NFS export an MFS file system on Linux? I add the requisite line to /etc/exports on the MFS master (and I _am_ using fsid), showmount displays the proper export, but I get this message in the log from mountd when a client attempts to mount the file system:

    possibly unsupported filesystem or fsid= required

I have seen posts from others claiming to be able to do this.

-Steve
From: Steve T. <sm...@cb...> - 2012-02-09 11:53:38

On Wed, 8 Feb 2012, Steve Thompson wrote:

> (2) Does a chunkserver properly handle the case where a local file system is
> resized (eg, resize2fs) while it is in use (that is, without stopping the
> mfschunkserver process)?

To answer my own question: I had the opportunity arise to test this, and yes, it works just fine.

Steve
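For the record, the grow-while-mounted case looks like this when the chunk store sits on LVM (names invented); ext3/ext4 can be grown online with resize2fs, while shrinking still requires an unmount:

    # grow the volume backing a chunkserver directory, mfschunkserver still running
    lvextend -L +100G /dev/vg0/chunk1
    resize2fs /dev/vg0/chunk1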
From: Travis H. <tra...@tr...> - 2012-02-08 21:40:33

To fix the networking so that traffic goes over your internal private network segment instead of the public segment, uncomment and edit BIND_HOST in the chunk server configurations (mfschunkserver.cfg):

    # BIND_HOST = *

By default it will listen on all interfaces.

Also, when setting up the chunk servers and mfsmount clients, be sure to specify the internal IP address (or even better, create an internal DNS name) for the internal MFS master node.

Nautilus might do some kind of pre-crawl of sub-folders to cache file and content types, like a content indexer? I am not sure though; I have not used it.

On 12-02-08 3:38 PM, JJ wrote:
> Hello moosefs-users:
>
> I have managed to install our first moosefs test system.
> I have 3 identical boxes. All
> CentOS 5.7 /x86_64
> mfs 1.6.20 installed from source.
> fuse-devel-2.7.4-8.el5 across all, installed via yum.
>
> My client is
> OpenSUSE 11.4
> mfs 1.6.20 installed from source and
> fuse-devel-2.8.5-5.1.i586 installed via zypper.
>
> All 3 moose servers have a dedicated 10G ext3 /dev/hdb; however, I did not
> export anything on the mfsmaster, so I have 20G and
> http://ipa.ddr.ess:9425/mfs.cgi shows the same.
>
> du -sh on my client shows:
> mfsmaster:9421 19G 1001M 18G 6% /mnt/mfs
>
> It all appears good.
>
> The problem is that using nautilus to navigate (or from the CLI,
> "nautilus /mnt/mfs") results in > 5 minutes to show content.
> Unmounting and remounting doesn't seem to help or affect it.
>
> ls -al /mnt/mfs is very responsive, however.
>
> I am following some hints I found at
> http://contrib.meharwal.com/home/moosefs
>
> I have assigned a VirtualIP to my moose hosts and the networking seems fine.
> I am accessing mfsmaster using the forward-facing/public IP from my client.
>
> Is there something else I need to investigate?
> Thank you for your time.
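Putting that advice into config form, a chunkserver pinned to a private segment might look like the excerpt below; the addresses are placeholders, and MASTER_HOST/MASTER_PORT are the standard options for pointing the chunkserver at the master:

    # /etc/mfs/mfschunkserver.cfg (excerpt)
    # listen for replication/client traffic only on the internal interface
    BIND_HOST = 10.0.0.12
    # internal address (or internal DNS name) of the master
    MASTER_HOST = 10.0.0.10
    MASTER_PORT = 9420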
From: Steve T. <sm...@cb...> - 2012-02-08 20:51:58

Hello MooseFS users. I'm running 1.6.20 on my first installation (Linux servers and Linux/OSX clients), and am very much liking what I see so far. I have two questions/requests:

(1) I'd like to see netgroups support for specifying host lists in the mfsexports.cfg file. Any chance?

(2) Does a chunkserver properly handle the case where a local file system is resized (eg, resize2fs) while it is in use (that is, without stopping the mfschunkserver process)? I don't have the space available currently to test this.

TIA,
Steve

--
----------------------------------------------------------------------------
Steve Thompson, Cornell School of Chemical and Biomolecular Engineering
smt AT cbe DOT cornell DOT edu
"186,282 miles per second: it's not just a good idea, it's the law"
----------------------------------------------------------------------------
From: JJ <jj...@ci...> - 2012-02-08 20:41:24

Hello moosefs-users:

I have managed to install our first moosefs test system. I have 3 identical boxes, all:
CentOS 5.7 / x86_64
mfs 1.6.20 installed from source.
fuse-devel-2.7.4-8.el5 across all, installed via yum.

My client is:
OpenSUSE 11.4
mfs 1.6.20 installed from source and
fuse-devel-2.8.5-5.1.i586 installed via zypper.

All 3 moose servers have a dedicated 10G ext3 /dev/hdb; however, I did not export anything on the mfsmaster, so I have 20G and http://ipa.ddr.ess:9425/mfs.cgi shows the same.

du -sh on my client shows:
mfsmaster:9421 19G 1001M 18G 6% /mnt/mfs

It all appears good.

The problem is that using nautilus to navigate (or from the CLI, "nautilus /mnt/mfs") results in > 5 minutes to show content. Unmounting and remounting doesn't seem to help or affect it.

ls -al /mnt/mfs is very responsive, however.

I am following some hints I found at http://contrib.meharwal.com/home/moosefs

I have assigned a VirtualIP to my moose hosts and the networking seems fine. I am accessing mfsmaster using the forward-facing/public IP from my client.

Is there something else I need to investigate? Thank you for your time.

--
JJ aka Habitual
Support Engineer
Cirrhus9.com
From: Atom P. <ap...@di...> - 2012-02-08 18:30:15

I've just started using MooseFS for shared drives and web data. I chose not to use it for VMs because I couldn't be "five nines" confident in its stability, and VMs get very cranky if they lose the connection to their storage.

Based on my experience so far, I think your biggest concern will be disk I/O, which appears to be directly related to the amount of CPU you put in your metamaster. Also, be very, very careful that you never fill up the disk on the metamaster.

On 02/07/2012 05:36 PM, Quenten Grasso wrote:
> Hi Everyone,
>
> We are currently looking at using MooseFS in our environment to store
> virtual machines/iscsi targets on.
>
> So far I must say it looks excellent when compared to the other offers
> out there. I still haven't found anything that is easy to set up and seems
> to "just work" so far on Ubuntu 10 LTS.
>
> So I'm after some recommendations on the number of servers and number of
> drives per server. I've found a couple of benchmarks around the place
> and would like to hear from fellow users what kind of implementation you
> may be using and its use case, and whether you've come across any issues so far.
>
> We are thinking along the lines of a metadata server virtualised (HA or
> FT using VMware), a virtualised backup metadata server, and 4 physical
> chunk servers w/ 8 x 3TB 7200rpm SATA disks using ZFS on the individual
> disks for chunk integrity (not using raidz), each backed by 4 x 1GbE
> LACP-linked ports per physical server.
>
> Any information you may be able to share is greatly appreciated :)
>
> Regards,
> Quenten Grasso

--
Perfection is just a word I use occasionally with mustard.
--Atom Powers--
Director of IT
DigiPen Institute of Technology
+1 (425) 895-4443
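A tiny cron guard along these lines is one way to act on the "never fill the master's disk" warning; the threshold, path and mail command are arbitrary and should be adapted to wherever the master's DATA_PATH actually lives:

    #!/bin/sh
    # /etc/cron.hourly/check-mfsmaster-disk (sketch)
    # warn when the partition holding the master metadata is more than 80% full
    USE=$(df -P /var/lib/mfs | awk 'NR==2 {gsub("%","",$5); print $5}')
    [ "$USE" -gt 80 ] && echo "mfsmaster data partition at ${USE}%" | mail -s "mfsmaster disk warning" root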
From: Quenten G. <QG...@on...> - 2012-02-08 02:01:38

Hi Everyone,

We are currently looking at using MooseFS in our environment to store virtual machines/iscsi targets on.

So far I must say it looks excellent when compared to the other offers out there. I still haven't found anything that is easy to set up and seems to "just work" so far on Ubuntu 10 LTS.

So I'm after some recommendations on the number of servers and number of drives per server. I've found a couple of benchmarks around the place and would like to hear from fellow users what kind of implementation you may be using and its use case, and whether you've come across any issues so far.

We are thinking along the lines of a metadata server virtualised (HA or FT using VMware), a virtualised backup metadata server, and 4 physical chunk servers w/ 8 x 3TB 7200rpm SATA disks using ZFS on the individual disks for chunk integrity (not using raidz), each backed by 4 x 1GbE LACP-linked ports per physical server.

Any information you may be able to share is greatly appreciated :)

Regards,
Quenten Grasso
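On the chunkserver side, the one-ZFS-pool-per-disk idea above ends up as one dataset per drive listed in mfshdd.cfg, something like the sketch below (device names and mount points invented):

    # one single-disk pool per physical drive
    zpool create mfs1 /dev/sdb
    zpool create mfs2 /dev/sdc
    # ... and so on for the remaining drives

    # /etc/mfs/mfshdd.cfg - one storage path per line
    /mfs1
    /mfs2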
From: Steve <st...@bo...> - 2012-02-02 15:47:51
|
Oli, Look forward to that thanks. -------Original Message------- From: Ólafur Ósvaldsson Date: 06/01/2012 11:36:55 To: Steve Cc: moo...@li... Subject: Re: [Moosefs-users] moosefs distro Hi, We run a customized version of Ubuntu and to be honest I've not set it up in a distributable form. I'll check if I can gather it up next week and send the info to the list. /Oli On 6.1.2012, at 11:20, Steve wrote: > > That's really cool, are you able to share the scripts, image and/or how the > usb image was made ? > > > > Steve > > > > > > > > -------Original Message------- > > > > From: Ólafur Ósvaldsson > > Date: 06/01/2012 10:20:28 > > To: moo...@li... > > Subject: Re: [Moosefs-users] moosefs distro > > > > Hi, > > We run the chunkservers completely from memory, they boot from USB and > > only use a small partition of the USB drive for the graph data so that the > > history isn't lost between reboots. > > > > Every chunkserver has 6x1TB disks and 3GB of ram and there are startup > > scripts that initialize new disks on boot if required, so a new server can > just > > be put into the rack with a USB stick plugged in and it will clear all the > disks > > and setup for MFS if it is not like that already. > > > > /Oli > > > > On 5.1.2012, at 16:11, Travis Hein wrote: > > > >> The chunk server daemons are very low footprint for system resource > >> requirements. Enough so they are suitable to coexist with other system > >> services. Where if you have a cluster of physical machines each with a > >> local disk, just making every compute node also be a chunk server for > >> aggregated file system capacity. > >> > >> Most of the time lately though we put everything in virtual machines on > >> virtual machine hosting platforms now. Which I guess I kind of feel it > >> is to as efficient to have the chunk servers spread out everywhere, all > >> VMs are backed by the same SAN anyway, so the performance over spread > >> out disks goes away right. > >> > >> So lately I create a Virtual machine just for running the chunk server > >> process. We have a "standard" of using CentOS for our VMs. Which is > >> arguably kind of wasteful to just use it as a chunk server process, but > >> it is pretty much set and forget and appliance-ized. > >> > >> I have often thought about creating a moose fs stand alone appliance. An > >> embedded nano itx board in a 1U rackmount chassis, solid state boot, and > >> a minimal linux distribution, with a large sata drives. Both low power > >> and efficient, out of our virtualized platform. At least probably > >> cheaper to grow capacity than buying more iSCSI RAID SAN products :P > >> But this is still in my to do some day pile. > >> On 12-01-04 10:28 AM, Steve wrote: > >>> Do people use moose boxes for other roles ? > >>> > >>> > >>> > >>> Sometime ago I made a moosefs linux ISO cd (not a respin) however the > >>> installer wasn't insert cd and job done. Taking it any further was beyond > my > >>> capabilities. > >>> > >>> > >>> > >>> Is such a thing needed or desired ? Any collaborators > >>> > >>> > >>> > >>> Steve > >>> > >>> > ----------------------------------------------------------------------------- > > >>> Ridiculously easy VDI. With Citrix VDI-in-a-Box, you don't need a complex > > >>> infrastructure or vast IT resources to deliver seamless, secure access to > > >>> virtual desktops. With this all-in-one solution, easily deploy virtual > >>> desktops for less than the cost of PCs and save 60% on VDI infrastructure > > >>> costs. Try it free! 
http://p.sf.net/sfu/Citrix-VDIinabox > >>> _______________________________________________ > >>> moosefs-users mailing list > >>> moo...@li... > >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users > >> > >> > >> -- > >> Travis > >> > >> > >> > ----------------------------------------------------------------------------- > > >> Ridiculously easy VDI. With Citrix VDI-in-a-Box, you don't need a complex > >> infrastructure or vast IT resources to deliver seamless, secure access to > >> virtual desktops. With this all-in-one solution, easily deploy virtual > >> desktops for less than the cost of PCs and save 60% on VDI infrastructure > >> costs. Try it free! http://p.sf.net/sfu/Citrix-VDIinabox > >> _______________________________________________ > >> moosefs-users mailing list > >> moo...@li... > >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > -- > > Ólafur Osvaldsson > > System Administrator > > Nethonnun ehf. > > e-mail: osv...@ne... > > phone: +354 517 3400 > > > > > > ----------------------------------------------------------------------------- > > > Ridiculously easy VDI. With Citrix VDI-in-a-Box, you don't need a complex > > infrastructure or vast IT resources to deliver seamless, secure access to > > virtual desktops. With this all-in-one solution, easily deploy virtual > > desktops for less than the cost of PCs and save 60% on VDI infrastructure > > costs. Try it free! http://p.sf.net/sfu/Citrix-VDIinabox > > _______________________________________________ > > moosefs-users mailing list > > moo...@li... > > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > -- Ólafur Osvaldsson System Administrator Nethonnun ehf. e-mail: osv...@ne... phone: +354 517 3400 |
From: Sébastien M. <seb...@u-...> - 2012-01-30 13:09:10

Hi Michal,

I read your FAQ about memory usage, but how do I calculate the quantity of memory required from the following master information?

    all inodes:       16782386
    directory inodes:   577617
    file inodes:      16204746
    chunks:            2104855

I added a lot of swap but it changes nothing. Is that normal?

Sébastien

On 30/01/2012 11:14, Michał Borychowski wrote:
> Hi!
>
> Is it possible that you ran out of memory? A snapshot is quite a
> RAM-consuming operation in the master server.
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager

--
Sébastien Mertes
CNRS - ICB/Université de Bourgogne
Service Informatique
9 avenue Alain Savary
21078 Dijon Cedex - FRANCE
Tél: 03.80.39.59.98
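As a rough cross-check against those numbers, the often-quoted rule of thumb of about 300 bytes of resident RAM per filesystem object gives:

    16,782,386 inodes x ~300 bytes  ~=  5.0 GB of RAM

which is consistent with the ~5.4 GB resident size of the mfsmaster process in the top output from the original report. Treat the 300-byte figure as an approximation rather than a specification; and because the master touches these structures constantly, swap is not a real substitute for physical RAM (hence the "swread" state in that top output).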
From: Michał B. <mic...@ge...> - 2012-01-30 10:14:23

Hi!

Is it possible that you ran out of memory? A snapshot is quite a RAM-consuming operation in the master server.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Sébastien MERTES [mailto:seb...@u-...]
Sent: Friday, January 27, 2012 5:52 PM
To: moo...@li...
Subject: [Moosefs-users] bug ?

Hello,

I have a big problem with the MooseFS filesystem. The operating system is FreeBSD 8.2.

The command truss on the mfsmaster process shows the following error:

    ERR#35 'Resource temporarily unavailable'

The command rm -rf "snapshot name" disconnects the mfschunkservers and mfsmetaloggers because the mfsmaster process status is STOP. It's the same behavior with mfsmakesnapshot.

    chunk3 mfschunkserver[12415]: connecting ...
    chunk3 mfschunkserver[12415]: connected to Master

The following top output shows the mfsmaster status:

    4233 mfs   1  44  -19  5438M  3878M  swread  0  0:19  2.10% mfsmaster
    4187 mfs  24  44  -19 93288K 54816K  ucond   0  0:04  0.10% mfschunkserver
    4180 mfs   1  44  -19  5440M  2945M  STOP    0  1:49  0.00% mfsmaster

The mfsmaster process increases swap usage abnormally.

I also observed that the metadata.mfs file is very large when I stop the mfsmaster; it is double the size of the original.

Can you help me please?

Thanks
Sébastien
From: Sébastien M. <seb...@u-...> - 2012-01-27 16:51:51

Hello,

I have a big problem with the MooseFS filesystem. The operating system is FreeBSD 8.2.

The command truss on the mfsmaster process shows the following error:

    ERR#35 'Resource temporarily unavailable'

The command rm -rf "snapshot name" disconnects the mfschunkservers and mfsmetaloggers because the mfsmaster process status is STOP. It's the same behavior with mfsmakesnapshot.

    chunk3 mfschunkserver[12415]: connecting ...
    chunk3 mfschunkserver[12415]: connected to Master

The following top output shows the mfsmaster status:

    4233 mfs   1  44  -19  5438M  3878M  swread  0  0:19  2.10% mfsmaster
    4187 mfs  24  44  -19 93288K 54816K  ucond   0  0:04  0.10% mfschunkserver
    4180 mfs   1  44  -19  5440M  2945M  STOP    0  1:49  0.00% mfsmaster

The mfsmaster process increases swap usage abnormally.

I also observed that the metadata.mfs file is very large when I stop the mfsmaster; it is double the size of the original.

Can you help me please?

Thanks
Sébastien
From: Dr. M. J. C. <mj...@av...> - 2012-01-27 13:46:33

> Well, it is complaining in the CGI panel now. All the problem files are
> "reserved" files, like this:
>
> currently unavailable chunk 0000000000042F7D (inode: 297708 ; index: 0)
>  + currently unavailable reserved file 297708:
>    home/backdesk/.mozilla/firefox/fd9kuc06.default/cookies.sqlite-wal
>
> Is there a way to "flush" unavailable reserved files?

Just to follow up: these errors did eventually just go away automagically after a week or so, so everything is good. Thanks for the great software!

- Mike
From: Florent B. <fl...@co...> - 2012-01-26 16:22:19

Hi all,

Today I'm running some benchmark tests on a 2-chunkserver MFS installation, consisting of a lot of read/write operations. Both chunkservers use ext4 as the underlying file system.

I see that the "jbd2" process (Journaling Block Device, from the Linux kernel) is taking around 50% of CPU time, while the mfschunkserver process takes less than 10%.

So I wonder if we could just use a non-journaling file system for the chunkservers, assuming that data consistency is managed by the overlying FS (MFS and mfsmaster). If a problem occurs on a chunkserver and data is lost or corrupted, it will automatically be corrected by redistributing chunk copies over the servers.

Am I wrong? What do you think about it?

Flo
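If the journal really is the bottleneck, one low-risk experiment is to relax or drop it on the chunk store only, on the theory that chunk data is already replicated at the MFS level; the commands are stock mount/e2fsprogs usage and the device name is invented:

    # option 1: loosen journalling behaviour via mount options (set in /etc/fstab)
    /dev/sdb1  /mnt/chunk1  ext4  noatime,nodiratime,data=writeback  0  2

    # option 2: remove the journal entirely from the (unmounted) ext4 filesystem
    umount /mnt/chunk1
    tune2fs -O ^has_journal /dev/sdb1
    e2fsck -f /dev/sdb1
    mount /mnt/chunk1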
From: Samuel H. O. N. <sam...@ol...> - 2012-01-26 00:00:29

Hi there,

I have been using MooseFS for a year and have not encountered many metadata errors. MooseFS is a very stable solution and we want to improve our architecture.

When the mfsmaster encounters a problem, we launch a new one with the metadata, but some file information is missing and the files are marked as "0 valid copies" in the MFS interface. I am used to dealing with this error by manually deleting the files which are missing (usually a few, 100 max). But since the last problem we encountered, we have approximately 1000 files missing (not too bad, we have 33 million files...).

My question is pretty simple: how do we automatically delete these files? Because "no valid copies" causes the filesystem to hang when we try to access these files.

Thanks for your answers.

Best regards,
Sam
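For hunting the affected paths down from a client, something along these lines is a starting point; the exact text mfsfileinfo prints for a missing chunk varies between releases, so treat the grep pattern as a placeholder and review the list before deleting anything:

    # list files that have at least one chunk with no valid copy
    find /mnt/mfs -type f | while read -r f; do
        mfsfileinfo "$f" | grep -qi "no valid copies" && printf '%s\n' "$f"
    done > /tmp/broken-files.txt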
From: Dr. M. J. C. <mj...@av...> - 2012-01-16 14:31:40

On 01/14/2012 03:00 PM, wk...@bn... wrote:
> They should go away automatically as it completes its pass. (MooseFS is
> continuously rolling through the system doing cleanup).
>
> If there was an issue the CGI panel would be complaining about it in the
> info screen at the bottom of the page.

Well, it is complaining in the CGI panel now. All the problem files are "reserved" files, like this:

    currently unavailable chunk 0000000000042F7D (inode: 297708 ; index: 0)
     + currently unavailable reserved file 297708:
       home/backdesk/.mozilla/firefox/fd9kuc06.default/cookies.sqlite-wal

Is there a way to "flush" unavailable reserved files?

- Mike
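One way to at least inspect those reserved entries is the auxiliary "meta" mount, which exposes trash and reserved files as a separate tree; the mount point is arbitrary and this assumes mfsexports.cfg still contains the default "meta" entry:

    # mount the MooseFS meta filesystem
    mfsmount -m /mnt/mfsmeta
    ls /mnt/mfsmeta
    # expected: reserved/  trash/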
From: Ken <ken...@gm...> - 2012-01-16 01:06:10
|
Hi, Michał I push my scripts to github: https://github.com/pedia/moosefs/tree/master/failover-script It really work to me. Sorry for late reply, because weekend. Regards -Ken 2012/1/13 Michał Borychowski <mic...@ge...>: > Hi Ken! > > It would be great if you could also provide the group with your scripts for > auto recovery of the system. > > > Kind regards > Michał > > > -----Original Message----- > From: Ken [mailto:ken...@gm...] > Sent: Friday, January 13, 2012 8:59 AM > To: Davies Liu > Cc: moo...@li... > Subject: Re: [Moosefs-users] ChunkServer in different Level > > We also use ucarp, but changed a little from Thomas version. The auto > switching never failed. > > A friend of mine use DRBD and LVS. It also work fine, but I think it is much > smaller than the Douban's. > > -Ken > > > > On Fri, Jan 13, 2012 at 3:42 PM, Davies Liu <dav...@gm...> wrote: >> On Fri, Jan 13, 2012 at 3:25 PM, Ken <ken...@gm...> wrote: >>> I noticed the go-mfsclient in this mail-list before, we also write a >>> moose client in C++. ;-) It work find when mfsmaster failover. >> >> It's the most interesting part, how do you failover mfsmaster ? >> I have tried the method with ucarp, provided by Thomas S Hatch, It's >> seemed not stable enough, failed sometimes. >> >> Before a stable solution come up, we decide to do manually fail-over >> by ops, and do not deploy it in heavy online system. >> >>> We plan to build it as a preload dynamic library, and auto hook the file > API. >>> I think it is high availability enough. >>> >>> Thanks >>> >>> Regards >>> -Ken >>> >>> >>> >>> On Fri, Jan 13, 2012 at 2:52 PM, Davies Liu <dav...@gm...> wrote: >>>> On Fri, Jan 13, 2012 at 2:40 PM, Ken <ken...@gm...> wrote: >>>>>>> It's not good ideal to use moosefs as storage for huge amount of >>>>>>> small files >>>>> I agree. We combine some small files into one big file, and read >>>>> the small files with offset/length infomation. >>>> >>>> Is not safe to write to same file concurrently. >>>> We use this method to backup the original file user uploaded, with >>>> tar, when offline. >>>> Some times, some file will be broken. >>>> >>>> MFS is not good enough for online system, not high available, and >>>> some IO operations will be block when error in mfsmaster or > mfschunkserver. >>>> >>>> So we serve some video files (>10M) in MFS this way: >>>> Nginx -> nginx + FUSE -> MFS >>>> or >>>> Nginx -> go-mfsclient [1] -> MFS >>>> >>>> If there something wrong with MFS, it will not block the first Nginx >>>> and the whole site will not be affected. >>>> >>>> Davies >>>> >>>> [1] github.com/davies/go-mfsclient >>>> >>>>> Thanks. >>>>> >>>>> Regards >>>>> -Ken >>>>> >>>>> >>>>> >>>>> On Fri, Jan 13, 2012 at 2:32 PM, Davies Liu <dav...@gm...> > wrote: >>>>>> On Thu, Jan 12, 2012 at 5:28 PM, Ken <ken...@gm...> wrote: >>>>>>> hi, moosefs >>>>>>> >>>>>>> We plan to use moosefs as storage for huge amount photos uploaded by > users. >>>>>> >>>>>> It's not good ideal to use moosefs as storage for huge amount of >>>>>> small files, because the mfsmaster will be the bottle neck when >>>>>> you have more than 100M files. At that time, the whole size of >>>>>> files may be 1T (10k per file), can be stored by one local disk. >>>>>> >>>>>> Huge amount small files need other solutions, just like TFS [1] >>>>>> from taobao.com, or beansdb [2] from douban.com. 
>>>>>> >>>>>> [1] http://code.taobao.org/p/tfs/src/ [2] >>>>>> http://code.google.com/p/beansdb/ >>>>>> >>>>>>> Because of read operations of new files are very more than old >>>>>>> files, maybe write new files to SSD is a choice. >>>>>>> For strict safe reason, we must backup content to an other data > center. >>>>>>> And more features in maintain purpose are required. >>>>>>> >>>>>>> I don't think moosefs can work fine in these situation. We try to >>>>>>> implement these features several weeks ago. Till now, it's almost >>>>>>> done. >>>>>>> >>>>>>> Is there anyone interested in this? >>>>>>> >>>>>>> more detail: >>>>>>> # Add access_mode(none, read, write capability) to struct >>>>>>> matocserventry(matocserv.c). This value can be changed from >>>>>>> outside(maybe from the python cgi) # mfschunkserver.cfg add >>>>>>> 'LEVEL' config, if not, LEVEL=0 as normal. >>>>>>> ChunkServer report it to Master if need. >>>>>>> # Add uint32_t levelgoal into struct fsnode(filesystem.c). >>>>>>> # Add uint32_t levelgoal into sturct chunk(chunk.c). >>>>>>> As seen, uint32_t levelgoal = uint8_t levelgoal[4], implied >>>>>>> LEVEL should be 1,2,3 or 4. >>>>>>> [2,1,0,0] mean store 2 copies in level=1 ChunkServer, store 1 >>>>>>> copy in >>>>>>> level=2 ChunkServer. >>>>>>> # In chunk_do_jobs(chunk.c), send replicated command to ChunkServer. >>>>>>> This policy should be very complicated in future. >>>>>>> # Also, we add read/write levelgoal support in mfstools. >>>>>>> >>>>>>> We plan to put these trivial change into github or somewhere else. >>>>>>> >>>>>>> It's a very incipient prototype. We appreciate any advice from >>>>>>> the develop team and other users. >>>>>>> >>>>>>> Regards >>>>>>> -Ken >>>>>>> >>>>>>> ----------------------------------------------------------------- >>>>>>> ------------- >>>>>>> RSA(R) Conference 2012 >>>>>>> Mar 27 - Feb 2 >>>>>>> Save $400 by Jan. 27 >>>>>>> Register now! >>>>>>> http://p.sf.net/sfu/rsa-sfdev2dev2 >>>>>>> _______________________________________________ >>>>>>> moosefs-users mailing list >>>>>>> moo...@li... >>>>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> - Davies >>>> >>>> >>>> >>>> -- >>>> - Davies >> >> >> >> -- >> - Davies > > ---------------------------------------------------------------------------- > -- > RSA(R) Conference 2012 > Mar 27 - Feb 2 > Save $400 by Jan. 27 > Register now! > http://p.sf.net/sfu/rsa-sfdev2dev2 > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
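For context, the ucarp approach mentioned in this thread boils down to a floating service address plus up/down scripts that start or stop the master; a bare-bones invocation looks like this, with addresses, password and script paths as placeholders:

    # on each candidate master / metalogger host
    ucarp --interface=eth0 --srcip=10.0.0.11 --vhid=42 --pass=secret \
          --addr=10.0.0.10 \
          --upscript=/usr/local/bin/mfs-vip-up.sh \
          --downscript=/usr/local/bin/mfs-vip-down.sh \
          --shutdown -B
    # up script: add the VIP to the interface and start (or promote to) mfsmaster
    # down script: stop mfsmaster and remove the VIP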
From: <wk...@bn...> - 2012-01-14 19:55:59

On 1/13/12 12:47 AM, ro...@mm... wrote:
> Sorry for intervening and excuse a moosefs newbie question.
> Why are you concerned so much about mfsmaster failing? How often does this
> happen?
>
> I am considering moosefs for a small LAN of 15 users, mainly for
> aggregating unused storage space from various machines. Googling suggested
> moosefs is rather robust, but this thread suggests otherwise.
> Have I misunderstood something?

The master is a single point of failure. If it fails, your data is not available until you bring it back up.

The MooseFS software is very reliable; we run several clusters and have only seen failures due to human error or hardware (we started off testing with old, cast-off kit). The good news is that if you have metaloggers, recovering is very easy and very reliable. We have never seen data loss from a recovery (except for data "on the fly"), and we have seen some rather "inelegant" failures as we were playing around with the system.

So use good kit (like a server-class chassis, dual power supplies, UPS and ECC memory) and you dramatically reduce the chance of an outage. Make sure one of the metaloggers is capable of being a master, so you can promote it if needed. There are lots of reasons for an outage, and MooseFS is pretty minor on the list.

Because it is a rare issue (with good kit) AND our application can deal with some downtime, we elected not to have automated failover and instead have a human identify what the real issue is and handle it. If the master fails, a staff member just promotes a metalogger to take over the role until we can fix the real master and switch back at a convenient time. Downtime for that is 5-15 minutes once you figure in identifying the issue, recovering the metadata, moving over the IP, clearing ARPs and maybe restarting chunkservers, depending on what happened. When you do recover, there will be garbage files (for the on-the-fly files) in the control panel that eventually get cleaned out automatically.

As mentioned, there ARE better procedures, and we would love to have a more automated (reliable) failover, but it's not quite there yet.
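The manual promotion described above is essentially the following sequence on the metalogger, plus moving the service IP or DNS record; the data directory shown is just a common default:

    # on the metalogger chosen to take over
    cd /var/lib/mfs
    mfsmetarestore -m metadata_ml.mfs.back -o metadata.mfs changelog_ml.*.mfs
    mfsmaster start
    # then move the mfsmaster address to this host and, if needed, restart
    # chunkservers and clients so they reconnect to the new master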
From: Dr. M. J. C. <mj...@av...> - 2012-01-13 17:43:25

Hi,

I have one export on my moosefs system:

    [root@xena ~]# mfsgetgoal -r /fileserver2
    /fileserver2:
     files with goal        2 : 204781
     directories with goal  2 : 13023

As you can see, all files have a goal=2 setting. However, the CGI "info" page shows that 215 files exist with a goal=1 and have 0 valid copies.

I've been playing with different disks (adding and removing) to debug some speed issues, and perhaps something became botched along the way. (I probably should have prefixed the disks with the "*" marker and waited before pulling them out of the system.) I don't think I've lost any "real" files, however; probably metadata or trash.

How do I determine what these goal=1 files are, since they don't seem to be reported on my mount (by mfsgetgoal -r)? Should I worry about them? Will they be purged from the system over time?

Thanks for any advice...

- Mike