From: Adam O. <bl...@bl...> - 2012-05-14 16:29:04
|
On 12-05-14 17:29, Karol Pasternak wrote: > Thanks for the hint, > > but I still have a problem with mfsmaster and the metadata.mfs.back file, which > isn't being saved. > I'm afraid that if I stop mfsmaster I will still have the same old file. > Do you have any idea how to force mfsmaster (without stopping it) to save > a fresh metadata.mfs.back? Do you have free space on the mfsmaster? Which version of mfs do you use? Any hints in syslog? blink |
From: Stefan P. <s.p...@pr...> - 2012-05-14 16:17:15
|
Hi, we're thinking about replacing our SAN for KVM with a distributed FS like MooseFS. I'm running a small four-machine MooseFS cluster with a 180GB SSD in each node, and I'm not able to get more than 77Mbit/s. Is this the limit? Is there a way to achieve higher throughput? I'm using bonded 2x1Gbit/s network cards. Greets Stefan |
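On the throughput question above: with 2x1Gbit bonding, whether a single stream can exceed 1 Gbit/s depends on the bonding mode, so it is worth verifying the mode and measuring the raw network path separately from MooseFS. A sketch, where the interface name `bond0` and the host name `chunk1` are assumptions:

```shell
# Show the bonding mode: balance-rr can stripe one TCP flow across both
# links, while 802.3ad/balance-alb hash each flow onto a single link,
# capping any one stream at ~1 Gbit/s. ("bond0" is an assumed name.)
grep "Bonding Mode" /proc/net/bonding/bond0

# Measure raw TCP throughput to a chunkserver with iperf, independent
# of MooseFS ("chunk1" is a placeholder hostname):
iperf -c chunk1 -t 30 -P 4    # 4 parallel streams can exercise both links
```

If iperf already shows close to wire speed, the reported 77 Mbit/s points at the client or chunkserver I/O path rather than the network.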
From: Karol P. <kar...@ar...> - 2012-05-14 15:29:22
|
Thanks for the hint, but I still have a problem with mfsmaster and the metadata.mfs.back file, which isn't being saved. I'm afraid that if I stop mfsmaster I will still have the same old file. Do you have any idea how to force mfsmaster (without stopping it) to save a fresh metadata.mfs.back? br. Karol On 05/14/2012 04:46 PM, Adam Ochmanski wrote: > On 12-05-14 14:45, Karol Pasternak wrote: >> Hi all, >> >> I have a problem with restoring metadata.mfs from mfsmetalogger. >> >> # mfsmetarestore -m metadata_ml.mfs.back -o /tmp/metadata.mfs >> changelog_ml* >> loading objects (files,directories,etc.) ... ok >> loading names ... ok >> loading deletion timestamps ... ok >> loading chunks data ... ok >> checking filesystem consistency ... ok >> connecting files and chunks ... ok >> found garbage at the end of file: changelog_ml_back.0.mfs (last correct >> id: 66424694) >> hole in change files (entries from changelog_ml_back.1.mfs:66475886 to >> changelog_ml.0.mfs:67436656 are missing) - add more files >> >> Has anybody had a similar problem? How do I solve it? >> > > Hi, > use all the changelogs from the metalogger and the master, then do the restore. Should > help. > > blink > -- Pozdrawiam/Best Regards *Karol Pasternak **Linux Administrator | ARBOmedia* *_ka...@ar..._*** T +48 22 592 49 95 | M +48 785 555 184 * ARBOmedia Polska | *Altowa 2 | 02-386 Warszawa* www.arbomedia.pl* <http://www.arbomedia.pl/>** | T +48 22 592 45 00 | F 22 448 71 63 ARBOmedia Polska Sp. z o.o., zarejestrowana w Sądzie Rejonowym dla m.st. Warszawy w Warszawie, XII Wydział Gospodarczy Krajowego Rejestru Sądowego, pod numerem KRS: 0000181362, kapitał zakładowy wynosi 600 000 zł, NIP: 527-23-12-331 |
From: Adam O. <bl...@bl...> - 2012-05-14 15:05:59
|
On 12-05-14 14:45, Karol Pasternak wrote: > Hi all, > > I have a problem with restoring metadata.mfs from mfsmetalogger. > > # mfsmetarestore -m metadata_ml.mfs.back -o /tmp/metadata.mfs changelog_ml* > loading objects (files,directories,etc.) ... ok > loading names ... ok > loading deletion timestamps ... ok > loading chunks data ... ok > checking filesystem consistency ... ok > connecting files and chunks ... ok > found garbage at the end of file: changelog_ml_back.0.mfs (last correct > id: 66424694) > hole in change files (entries from changelog_ml_back.1.mfs:66475886 to > changelog_ml.0.mfs:67436656 are missing) - add more files > > Has anybody had a similar problem? How do I solve it? > Hi, use all the changelogs from the metalogger and the master, then do the restore. Should help. blink |
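The fix blink describes above — restoring with the changelogs from both the metalogger and the master — might look like this in practice. A sketch only: the data directory `/var/lib/mfs` and the host name `mfsmaster` are assumptions.

```shell
# The "hole in change files" error means the metalogger's changelogs are
# missing a range of entries; the master's own changelogs can fill it.
# Gather everything in one place, then restore. Paths are placeholders.
mkdir /tmp/restore && cd /tmp/restore
cp /var/lib/mfs/metadata_ml.mfs.back .          # metalogger's metadata backup
cp /var/lib/mfs/changelog_ml*.mfs .             # metalogger's changelogs
scp mfsmaster:/var/lib/mfs/changelog*.mfs .     # master's changelogs fill the hole
mfsmetarestore -m metadata_ml.mfs.back -o metadata.mfs changelog*.mfs
```

The resulting metadata.mfs is then placed in the master's data directory before it is started.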
From: Karol P. <kar...@ar...> - 2012-05-14 14:25:55
|
Based on the documentation, metadata.mfs.back should be saved every hour, but it isn't. Unlike metadata.mfs.back, the changelogs are saved correctly. On 05/14/2012 02:45 PM, Karol Pasternak wrote: > Hi all, > > I have a problem with restoring metadata.mfs from mfsmetalogger. > > # mfsmetarestore -m metadata_ml.mfs.back -o /tmp/metadata.mfs changelog_ml* > loading objects (files,directories,etc.) ... ok > loading names ... ok > loading deletion timestamps ... ok > loading chunks data ... ok > checking filesystem consistency ... ok > connecting files and chunks ... ok > found garbage at the end of file: changelog_ml_back.0.mfs (last correct > id: 66424694) > hole in change files (entries from changelog_ml_back.1.mfs:66475886 to > changelog_ml.0.mfs:67436656 are missing) - add more files > > Has anybody had a similar problem? How do I solve it? > > A few days ago I also observed that mfsmaster can't save metadata to the file > metadata.mfs.back, even when run as root. > > br. > Karol > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users -- Pozdrawiam/Best Regards *Karol Pasternak **Linux Administrator | ARBOmedia* *_ka...@ar..._*** T +48 22 592 49 95 | M +48 785 555 184 * ARBOmedia Polska | *Altowa 2 | 02-386 Warszawa* www.arbomedia.pl* <http://www.arbomedia.pl/>** | T +48 22 592 45 00 | F 22 448 71 63 |
From: Karol P. <kar...@ar...> - 2012-05-14 13:03:53
|
Hi all, I have a problem with restoring metadata.mfs from mfsmetalogger. # mfsmetarestore -m metadata_ml.mfs.back -o /tmp/metadata.mfs changelog_ml* loading objects (files,directories,etc.) ... ok loading names ... ok loading deletion timestamps ... ok loading chunks data ... ok checking filesystem consistency ... ok connecting files and chunks ... ok found garbage at the end of file: changelog_ml_back.0.mfs (last correct id: 66424694) hole in change files (entries from changelog_ml_back.1.mfs:66475886 to changelog_ml.0.mfs:67436656 are missing) - add more files Has anybody had a similar problem? How do I solve it? A few days ago I also observed that mfsmaster can't save metadata to the file metadata.mfs.back, even when run as root. br. Karol |
From: Boris E. <bor...@gm...> - 2012-05-11 19:53:53
|
Hello listmates, I've got a MooseFS disk space pool residing on a chunk server, in a file system that is about 22 TB. Yet MooseFS reports that file system as being only about 17 TB in size. Why would that be? Thanks. Boris. |
From: Scoleri, S. <Sco...@gs...> - 2012-05-11 17:26:28
|
The kernel nfs server never worked on moose for me. Something to do with direct I/O, if I remember correctly. -Scoleri -----Original Message----- From: Allen, Benjamin S [mailto:bs...@la...] Sent: Friday, May 11, 2012 1:06 PM To: Scoleri, Steven Cc: Boris Epstein; moo...@li... Subject: Re: [Moosefs-users] MooseFS to NFS gateway Curious why you decided to use user-space NFS over the typical kernel-space NFS. Ben On May 11, 2012, at 10:45 AM, Scoleri, Steven wrote: > I do this using UNFS3 - works fine - not exactly hyper fast, but it works. > > http://unfs3.sourceforge.net/ > > -Scoleri > > From: Boris Epstein [mailto:bor...@gm...] > Sent: Friday, May 11, 2012 10:33 AM > To: moo...@li... > Subject: [Moosefs-users] MooseFS to NFS gateway > > Hello all, > > Has anybody ever successfully set up a machine that would mount a MooseFS as a client and serve that mount point up to others via NFS? > > Cheers, > > Boris. |
From: Steve W. <st...@pu...> - 2012-05-11 17:11:26
|
On 05/11/2012 01:06 PM, Dr. Michael J. Chudobiak wrote: >> Additionally, gvfsd-metadata has problems using shared network storage >> (I think it creates a memory mapped file). I finally removed the execute >> bit from the gvfsd-metadata permissions to prevent it from running. This >> solved a problem I had with very high chunk deletion dramatically >> slowing down the MFS storage system. > Is there a bug report against gvfs for that? > > Maybe related to this: > http://bugzilla.redhat.com/show_bug.cgi?id=561904 > > - Mike > Yes, that bug report was one that helped me pinpoint my problem and implement a work-around. Steve |
From: Allen, B. S <bs...@la...> - 2012-05-11 17:06:37
|
Curious why you decided to use user-space NFS over the typical kernel-space NFS. Ben On May 11, 2012, at 10:45 AM, Scoleri, Steven wrote: > I do this using UNFS3 – works fine – not exactly hyper fast, but it works. > > http://unfs3.sourceforge.net/ > > -Scoleri > > From: Boris Epstein [mailto:bor...@gm...] > Sent: Friday, May 11, 2012 10:33 AM > To: moo...@li... > Subject: [Moosefs-users] MooseFS to NFS gateway > > Hello all, > > Has anybody ever successfully set up a machine that would mount a MooseFS as a client and serve that mount point up to others via NFS? > > Cheers, > > Boris. |
From: Dr. M. J. C. <mj...@av...> - 2012-05-11 17:06:12
|
> Additionally, gvfsd-metadata has problems using shared network storage > (I think it creates a memory mapped file). I finally removed the execute > bit from the gvfsd-metadata permissions to prevent it from running. This > solved a problem I had with very high chunk deletion dramatically > slowing down the MFS storage system. Is there a bug report against gvfs for that? Maybe related to this: http://bugzilla.redhat.com/show_bug.cgi?id=561904 - Mike |
From: Scoleri, S. <Sco...@gs...> - 2012-05-11 16:57:58
|
I do this using UNFS3 - works fine - not exactly hyper fast, but it works. http://unfs3.sourceforge.net/ -Scoleri From: Boris Epstein [mailto:bor...@gm...] Sent: Friday, May 11, 2012 10:33 AM To: moo...@li... Subject: [Moosefs-users] MooseFS to NFS gateway Hello all, Has anybody ever successfully set up a machine that would mount a MooseFS as a client and serve that mount point up to others via NFS? Cheers, Boris. |
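A minimal version of the UNFS3 gateway described in this thread could look like the following. This is a sketch only: the mount point, export network, and master host name are assumptions.

```shell
# Mount MooseFS on the gateway, then export the mount point with the
# user-space unfs3 server; the kernel NFS server cannot export FUSE
# mounts, which is why a user-space daemon is needed here.
mfsmount /mnt/mfs -H mfsmaster
echo '/mnt/mfs 192.168.0.0/24(rw,no_root_squash)' > /etc/exports.unfs
unfsd -e /etc/exports.unfs     # serves NFSv3; registers with portmap/rpcbind
```

Clients then mount it as ordinary NFS, e.g. `mount -t nfs gateway:/mnt/mfs /mnt/remote`.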
From: Atom P. <ap...@di...> - 2012-05-11 16:17:26
|
On 05/11/2012 09:00 AM, Boris Epstein wrote: > Now that we are on the topic - if I need to build an SMB/CIFS gateway to > MooseFS, what in your opinion would be the best way to do that? That will depend a lot on your environment. Do you have an Active Directory domain or do you use OpenLDAP/OpenDirectory? Are your clients Windows XP, Vista, 7, Samba? How many clients are you serving? What kind of data are they using (big files, high I/O, etc)? I use OpenLDAP with Windows XP and 7 clients. I use MSDFS roots and several identically configured SMB/CIFS samba servers. Access control is done at the share and file system level. All the data, including the MSDFS links, is in the moose cluster. These are Ubuntu VMs with about a gig of RAM each. The vast majority of my data is static. IIRC, from the MooseFS docs, this setup could cause problems if multiple people are trying to use the same file; I think MooseFS (in 1.6.20) doesn't do file locks, and with the MSDFS you can't rely on the client to do the locking either. -- -- Perfection is just a word I use occasionally with mustard. --Atom Powers-- Director of IT DigiPen Institute of Technology +1 (425) 895-4443 |
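The MSDFS-root arrangement Atom describes can be sketched as a small smb.conf fragment, with the DFS links stored as special symlinks inside the MooseFS tree. The share name and paths below are assumptions.

```shell
# Write a minimal Samba share definition that acts as an MSDFS root.
# Every identically configured gateway serves the same root; the msdfs
# links inside it redirect clients to the actual file servers.
cat > /tmp/smb-dfs.conf <<'EOF'
[dfsroot]
   path = /mnt/mfs/dfsroot
   msdfs root = yes
   read only = no
EOF

# An msdfs link is just a symlink with a special target, e.g.:
#   ln -s 'msdfs:server1\share1' /mnt/mfs/dfsroot/projects
grep 'msdfs root' /tmp/smb-dfs.conf    # prints "   msdfs root = yes"
```

Because the links live in the MooseFS tree itself, adding a gateway is just installing Samba with the same fragment and pointing it at the shared mount.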
From: Boris E. <bor...@gm...> - 2012-05-11 16:00:42
|
Atom, Thanks! Now that we are on the topic - if I need to build an SMB/CIFS gateway to MooseFS, what in your opinion would be the best way to do that? Boris. On Fri, May 11, 2012 at 11:55 AM, Atom Powers <ap...@di...> wrote: > Yes, I do this on one of my machines. We have some operating systems > (FreeBSD) that can't mount MooseFS directly. Fortunately there are few > enough of them that I only need one gateway, for now. (I'm using Ubuntu > 10 or 11 on the gateway, but any Linux should work.) > > I also have quite a number of systems that serve as SMB/Samba gateways > for MS Windows systems. > > On 05/11/2012 07:33 AM, Boris Epstein wrote: > > Hello all, > > > > Has anybody ever successfully set up a machine that would mount a > > MooseFS as a client and serve that mount point up to others via NFS? > > > -- > -- > Perfection is just a word I use occasionally with mustard. > --Atom Powers-- > Director of IT > DigiPen Institute of Technology > +1 (425) 895-4443 |
From: Steve W. <st...@pu...> - 2012-05-11 15:58:14
|
On 05/11/2012 05:36 AM, Dr. Michael J. Chudobiak wrote: > On 05/10/2012 05:28 PM, Steve Thompson wrote: >> On Thu, 29 Mar 2012, Steve Thompson wrote: >> >>> I have further found that Thunderbird works well, but Firefox is so >>> painfully slow (glacial) as to be unusable. For the time being, I have had >>> to relocate the .mozilla directories to a non-MFS file system and replaced >>> them by symbolic links. > ... >> Copying a large file into MFS gets me something like 80-85 MB/sec >> (physically twice that with goal=2) so I am at a loss to explain the >> dismal performance with firefox. I could really use some ideas, as I have >> no idea where to go next. > I would focus on the sqlite files that firefox uses. sqlite is notorious > for causing problems on remote filesystems (particularly NFS). > "urlclassifier3.sqlite" in particular grows to be very large (~64 MB). > > Are the fsync times reported by mfs.cgi (under "disks") OK? Some apps > call fsync much more frequently than others. > > - Mike I've been chasing similar problems for the past day or two and have found the sqlite files created by Firefox to be a real problem (e.g., cookies.sqlite-wal, cookies.sqlite-shm, urlclassifier3.sqlite, places.sqlite-wal). Also, I've had occasional problems with a user's ~/.xsession-errors causing a lot of I/O activity. Notice the number of messages sent (and dropped) to syslog: May 11 10:00:37 maverick mfsmount[4749]: file: 75074, index: 3 - fs_writechunk returns status 11 May 11 10:00:38 mfsmount[4749]: last message repeated 199 times May 11 10:00:38 maverick rsyslogd-2177: imuxsock begins to drop messages from pid 4749 due to rate-limiting May 11 10:00:39 maverick mfschunkserver[2437]: testing chunk: /mfs/01/C2/chunk_00000000000F4DC2_00000001.mfs May 11 10:00:44 maverick rsyslogd-2177: imuxsock lost 17465 messages from pid 4749 due to rate-limiting Additionally, gvfsd-metadata has problems using shared network storage (I think it creates a memory-mapped file). I finally removed the execute bit from the gvfsd-metadata permissions to prevent it from running. This solved a problem I had with very high chunk deletion dramatically slowing down the MFS storage system. In each of these cases, it's not the fault of MooseFS but of applications that don't properly handle network storage systems. Steve |
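The two work-arounds from this thread — keeping the Firefox profile off MFS, and disabling gvfsd-metadata — reduce to a few commands. A sketch only: the local scratch path and the gvfsd-metadata location are assumptions and vary by distribution.

```shell
# Move the sqlite-heavy Firefox profile to local disk, leaving a
# symlink so the home directory on MFS still resolves.
mv ~/.mozilla /local/scratch/"$USER"/.mozilla    # "/local/scratch" is a placeholder
ln -s /local/scratch/"$USER"/.mozilla ~/.mozilla

# Keep gvfsd-metadata from running: it memory-maps its store, which
# misbehaves on network file systems (see the RH bug cited above).
chmod a-x /usr/libexec/gvfsd-metadata            # path varies by distribution
```

The chmod survives until the gvfs package is updated, so it may need to be reapplied after upgrades.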
From: Atom P. <ap...@di...> - 2012-05-11 15:55:33
|
Yes, I do this on one of my machines. We have some operating systems (FreeBSD) that can't mount MooseFS directly. Fortunately there are few enough of them that I only need one gateway, for now. (I'm using Ubuntu 10 or 11 on the gateway, but any Linux should work.) I also have quite a number of systems that serve as SMB/Samba gateways for MS Windows systems. On 05/11/2012 07:33 AM, Boris Epstein wrote: > Hello all, > > Has anybody ever successfully set up a machine that would mount a > MooseFS as a client and serve that mount point up to others via NFS? > -- -- Perfection is just a word I use occasionally with mustard. --Atom Powers-- Director of IT DigiPen Institute of Technology +1 (425) 895-4443 |
From: yishi c. <hol...@gm...> - 2012-05-11 15:13:20
|
hi mfs-dev-team: I've found a small bug in MooseFS: when a chunkserver with a lot of chunks (for instance, about one million) connects to a master that has no information about those chunks, the master will assign a new chunk id for each chunk and call syslog() for each one at the same time. That is quite a big cost: on my server (12-core CPU) it can print fewer than about 150 syslog messages per second, so for more than one million chunks it takes about two hours to handle the "new chunkserver". During those two hours the *master can't handle the normal requests from mfsmount*. If we have more than one MFS cluster, or a chunkserver has been disconnected from the master long enough that the master has already deleted all the information about its chunks, then when the chunkserver connects to a different master, or reconnects after a long time, *this bug will make the MFS system unable to serve normal requests for hours*. The same kind of problem may occur in other circumstances, such as manually deleting a lot of chunks on a chunkserver, or *anything similar that goes wrong with the chunks*: the master will print information about all of these chunks with syslog(). I fixed these bugs and added a configuration option controlling by how much the log volume is reduced; if you set it to 1000, the master will print 1/1000 of the normal log. The modified code is in the mail's attachment. In my view this bug is critical for an online service, and I hope you can let users know about the risk. best wishes and good luck. yours tony cheng |
From: youngcow <you...@gm...> - 2012-05-11 14:36:35
|
You can use unfs3 (http://unfs3.sourceforge.net/) as an NFS gateway for MooseFS; the kernel NFS server doesn't support FUSE file systems. > Hello all, > > Has anybody ever successfully set up a machine that would mount a > MooseFS as a client and serve that mount point up to others via NFS? > > Cheers, > > Boris. |
From: Boris E. <bor...@gm...> - 2012-05-11 14:33:27
|
Hello all,

Has anybody ever successfully set up a machine that would mount a
MooseFS as a client and serve that mount point up to others via NFS?

Cheers,

Boris.
|
From: Ken <ken...@gm...> - 2012-05-11 10:39:51
|
We updated the git repository and added a license. It's GPLv2.

Thanks
-Ken

On Thu, May 10, 2012 at 9:39 PM, Dennis Jacobfeuerborn
<den...@co...> wrote:
> On 05/10/2012 01:17 PM, Ken wrote:
>> hi, all
>>
>> As mentioned in a previous mail
>> (http://sf.net/mailarchive/message.php?msg_id=29171206),
>> we have now open sourced it - bundle
>>
>> https://github.com/xiaonei/bundle
>>
>> The source is well tested and documented.
>>
>> Demo:
>> http://60.29.242.206/demo.html
>>
>
> Looks really interesting, thanks!
> You should probably add a license file to make it clear what the
> conditions for its use are.
>
> Regards,
> Dennis
|
From: Dr. M. J. C. <mj...@av...> - 2012-05-11 10:01:22
|
On 05/10/2012 05:28 PM, Steve Thompson wrote:
> On Thu, 29 Mar 2012, Steve Thompson wrote:
>
>> I have further found that Thunderbird works well, but Firefox is so
>> painfully slow (glacial) as to be unusable. For the time being, I have had
>> to relocate the .mozilla directories to a non-MFS file system and replaced
>> them by symbolic links.
...
> Copying a large file into MFS gets me something like 80-85 MB/sec
> (physically twice that with goal=2) so I am at a loss to explain the
> dismal performance with firefox. I could really use some ideas, as I have
> no idea where to go next.

I would focus on the sqlite files that firefox uses. sqlite is notorious
for causing problems on remote filesystems (particularly NFS).
"urlclassifier3.sqlite" in particular grows to be very large (~64 MB).

Are the fsync times reported by mfs.cgi (under "disks") OK? Some apps
call fsync much more frequently than others.

- Mike
|
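Along the lines Mike suggests, a quick way to see which profile databases have ballooned is to scan the profile for oversized sqlite files. $HOME/.mozilla/firefox is the stock profile location; point PROFILE at the MFS-hosted copy if yours lives there:

```shell
# List any sqlite files over 10 MB in the Firefox profile directory.
# The path default is the usual location; override PROFILE as needed.
PROFILE="${PROFILE:-$HOME/.mozilla/firefox}"
find "$PROFILE" -name '*.sqlite' -size +10M -exec ls -lh {} +
```

If urlclassifier3.sqlite dominates the listing, that is consistent with the ~64 MB growth Mike describes, and its frequent small synchronous writes are a plausible match for the slow fsync behavior on a remote filesystem.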
From: Anh K. H. <anh...@gm...> - 2012-05-11 06:44:45
|
Hello,

Though MFS has released some new versions, I don't see any updates in
the git repository:

http://moosefs.git.sourceforge.net/git/gitweb.cgi?p=moosefs/moosefs;a=summary

Is the git repository located at another place and/or is there anything
wrong here?

Thanks & Regards,

--
Anh K. Huynh
('&%:9]!~}|z2Vxwv-,POqponl$Hjig%eB@@>}=<M:9wv6WsU2T|nm-,jcL(I&%$#"
`CB]V?Tx<uVtT`Rpo3NlF.Jh++FdbCBA@?]!~|4XzyTT43Qsqq(Lnmkj"Fhg${z@>
|
From: Steve T. <sm...@cb...> - 2012-05-10 21:28:57
|
On Thu, 29 Mar 2012, Steve Thompson wrote:

> I have further found that Thunderbird works well, but Firefox is so
> painfully slow (glacial) as to be unusable. For the time being, I have had
> to relocate the .mozilla directories to a non-MFS file system and replaced
> them by symbolic links.

Now I have upgraded to 1.6.25 and have emptied MFS completely apart from
my .mozilla directory. There are now four dedicated chunkservers with a
total of 20TB of SATA RAID-5 file systems formatted with ext4, and all
four are connected to a common HP Procurve switch using dual bonded
balance-alb gigabit links, dedicated to MFS, with MTU=1500. The master is
running on one of the chunkservers. "hdparm -t" gives me about 400 MB/sec.

Firefox is still so painfully slow as to be unusable. It takes something
like 30-45 minutes to start firefox, and several minutes to click on a
link. With .mozilla in an NFS-mounted file system from the same disks,
firefox starts immediately, so it doesn't look like hardware.

Copying a large file into MFS gets me something like 80-85 MB/sec
(physically twice that with goal=2) so I am at a loss to explain the
dismal performance with firefox. I could really use some ideas, as I have
no idea where to go next.

Steve
|
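For what it's worth, the 80-85 MB/sec sequential figure Steve quotes can be reproduced with a simple dd that forces an fsync at the end, which also exercises the sync path that sqlite-heavy apps like Firefox hit constantly. The /mnt/mfs default below is an assumed mount point, not one named in the thread:

```shell
# Rough sequential-write throughput test into a mounted file system.
# Point TARGET at the MFS mount, then at a local disk, and compare.
# conv=fsync makes dd sync the data before reporting, so the number
# reflects what actually reached the chunkservers rather than cache.
TARGET="${TARGET:-/mnt/mfs}"
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=1024 conv=fsync
rm -f "$TARGET/ddtest.bin"
```

A large gap between this number and Firefox's behavior would point at small synchronous I/O (fsync latency) rather than raw bandwidth, which is where the mfs.cgi fsync statistics Mike mentioned come in.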
From: Dennis J. <den...@co...> - 2012-05-10 13:40:06
|
On 05/10/2012 01:17 PM, Ken wrote:
> hi, all
>
> As mentioned in a previous mail
> (http://sf.net/mailarchive/message.php?msg_id=29171206),
> we have now open sourced it - bundle
>
> https://github.com/xiaonei/bundle
>
> The source is well tested and documented.
>
> Demo:
> http://60.29.242.206/demo.html
>

Looks really interesting, thanks!
You should probably add a license file to make it clear what the
conditions for its use are.

Regards,
Dennis
|