From: mARK b. <mb...@gm...> - 2012-06-06 19:22:46
|
http://www.moosefs.org/reference-guide.html says "The master server should have approximately 300 MiB of RAM allocated to handle 1 million files on chunkservers."

Are those files as seen by clients, or do all the copies count separately? Or perhaps it is the number of chunk files that counts? I have nearly a billion (and growing) files to store, mostly just a few kilobytes in size, with a goal of at least 2, so is my RAM requirement 300 GiB or 600 GiB? (Either way, yikes!)

-- mARK bLOORE <mb...@gm...> |
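Taking the quoted 300 MiB per million files rule at face value, both readings of the question can be put into numbers. A quick sketch (whether replicas count is exactly what the thread is asking, so both cases are computed):

```python
# Rough master-RAM estimate from the rule of thumb quoted above:
# ~300 MiB of RAM per 1 million files on chunkservers.
def master_ram_gib(n_files, mib_per_million=300):
    return n_files / 1_000_000 * mib_per_million / 1024

files = 1_000_000_000            # ~1 billion files
goal = 2                         # two copies of each file

if_files_only = master_ram_gib(files)           # replicas don't count
if_copies_count = master_ram_gib(files * goal)  # every replica counts

print(round(if_files_only), round(if_copies_count))  # 293 586
```

So the two readings give roughly 293 GiB versus 586 GiB, matching the "300 GiB or 600 GiB" framing in the question.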
From: jyc <mai...@gm...> - 2012-06-06 16:43:27
|
yes I agree with you. but above 64GB of RAM, you need a server that can handle more... and it costs much more...

for example: OCZ RevoDrive 3 PCI-Express 240 GB in France => 500 euros. 8GB of RAM = from 50 to 120 euros (don't know if ECC). for 500 euros, I can have 10x 8GB => 80GB of el cheapo memory...

I think SSD is one of the futures of distributed filesystems... and I think MooseFS has to think about it.

Wang Jian wrote:
> Under normal or ideal workload, most of metadata operation is 'append write' (for new file), some 'overwrite' (for file update, like enlarge, delete, and relocate). This is ok for SSD.
>
> However, under workload that modify files a lot, it may be problematic.
>
> BTW, 64GB RAM alone is cheap. 8GB ~= 100 - 150 USD. It's expensive when you need 64GB x N and N is big.
>
> On 2012/6/6 22:44, Boris Epstein wrote:
> [...] |
From: Wang J. <jia...@re...> - 2012-06-06 16:39:34
|
Under normal or ideal workload, most metadata operation is 'append write' (for new files), some 'overwrite' (for file updates, like enlarge, delete, and relocate). This is OK for SSD.

However, under workloads that modify files a lot, it may be problematic.

BTW, 64GB RAM alone is cheap. 8GB ~= 100 - 150 USD. It's expensive when you need 64GB x N and N is big.

On 2012/6/6 22:44, Boris Epstein wrote:
> On Wed, Jun 6, 2012 at 10:06 AM, jyc <mai...@gm...> wrote:
> [...]
>
> I believe this may not be a bad idea. One thing that would trouble me is that SSD drives apparently don't like to do lots of writes - that causes their life expectancy to go down significantly. That's what I've heard.
>
> Boris. |
From: jyc <mai...@gm...> - 2012-06-06 16:20:15
|
yes, but the problem I get is when the mfsmaster forks... I have a 64 GB RAM server. I'm almost at 35 GB of RAM used by the process. of course, memory is faster.

I'm working on a web platform. this is more a WORM-like platform: Write Once, Read Many. I don't see the write endurance problem as a great one for me. I can mirror the SSD, and there is mfsmetalogger... but if I could say "the metadata goes to the SSD now", I would say: yeah, it's fine!

for instance, I have for now: a filer of 109 TB, 34 GB for the mfsmaster process, 58 TB of available space, 162,972,123 chunk copies, 81,406,307 files... many many very small files.

I say it would be really great if MooseFS worked as well with small files as it does with big ones. small files => a lot of memory consumed by the mfsmaster... SSD might be a solution for me.

I don't know if it's possible, but it would be cool to pack an array of 8 blocks of 8 KB into a minimal chunk of 64 KB, to store 8 files under 8 KB. (I know what it means in dev time... :-( )

Allen, Benjamin S wrote:
> Ideally you'd use SLC based SSD which has much more write endurance than MLC. However they're more expensive (near or just under the cost of memory). Also SSD performance degrades fairly quickly; see http://www.ddrdrive.com/zil_accelerator.pdf, ZFS's ZIL is a fairly similar use case.
>
> I'd say SSD metadata storage would be useful; however you'd want to use it in addition to whatever memory is available in a system.
>
> Ben
> [...] |
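jyc's figures allow a back-of-the-envelope check of the master's per-file metadata cost. A sketch (the split between file records and chunk records, and the goal-2 assumption for converting copies to chunks, are my assumptions, not measurements):

```python
# jyc reports ~34 GB of mfsmaster RAM for 81,406,307 files and
# 162,972,123 chunk copies. Treating "34 GB" as GiB:
ram_bytes = 34 * 1024**3
files = 81_406_307
chunk_copies = 162_972_123

# If all RAM is attributed to file entries alone:
bytes_per_file = ram_bytes / files

# The master tracks chunks, not copies; assuming goal 2,
# copies / 2 approximates the number of distinct chunks.
objects = files + chunk_copies / 2
bytes_per_object = ram_bytes / objects

print(round(bytes_per_file))    # ~448 bytes per file
print(round(bytes_per_object))  # ~224 bytes per file-or-chunk record
```

Either way, the cost is a few hundred bytes per object, which is why many very small files inflate master RAM so quickly.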
From: Allen, B. S <bs...@la...> - 2012-06-06 15:23:47
|
Ideally you'd use SLC-based SSD, which has much more write endurance than MLC. However they're more expensive (near or just under the cost of memory). Also SSD performance degrades fairly quickly; see http://www.ddrdrive.com/zil_accelerator.pdf, ZFS's ZIL is a fairly similar use case.

I'd say SSD metadata storage would be useful; however you'd want to use it in addition to whatever memory is available in a system. Although this would bring up the complexity of the metadata server, you'd want to cache recently read/written metadata entries in memory to limit the accesses to block IO.

Ben

On Jun 6, 2012, at 8:44 AM, Boris Epstein wrote:
> On Wed, Jun 6, 2012 at 10:06 AM, jyc <mai...@gm...> wrote:
> [...]
>
> I believe this may not be a bad idea. One thing that would trouble me is that SSD drives apparently don't like to do lots of writes - that causes their life expectancy to go down significantly. That's what I've heard.
>
> Boris.

------------------------------------------------------------------------------
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and threat landscape has changed and how IT managers can respond. Discussions will include endpoint security, mobile security and the latest in malware threats.
http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
_______________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users |
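Ben's "cache recent metadata in memory, fall back to SSD" idea can be sketched as a small LRU layer in front of slow block storage. All names here are hypothetical; MooseFS exposes no such API, this is only an illustration of the caching scheme:

```python
from collections import OrderedDict

# Minimal LRU cache in front of a slow SSD-backed metadata store,
# the scheme suggested above. Structure and names are illustrative.
class MetaCache:
    def __init__(self, capacity, load_from_ssd):
        self.capacity = capacity
        self.load_from_ssd = load_from_ssd   # slow block-IO fallback
        self.entries = OrderedDict()

    def get(self, inode):
        if inode in self.entries:
            self.entries.move_to_end(inode)  # mark as recently used
            return self.entries[inode]
        value = self.load_from_ssd(inode)    # miss: hit the SSD
        self.entries[inode] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False) # evict least recently used
        return value

ssd_reads = []
cache = MetaCache(2, lambda ino: ssd_reads.append(ino) or {"inode": ino})
cache.get(1); cache.get(1); cache.get(2)
print(len(ssd_reads))  # 2 -> the repeated lookup was served from RAM
```

The point is that hot metadata stays at RAM speed while only cold entries pay the block-IO cost Ben warns about.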
From: Boris E. <bor...@gm...> - 2012-06-06 14:45:06
|
On Wed, Jun 6, 2012 at 10:06 AM, jyc <mai...@gm...> wrote:
> Hi everyone!
>
> Hi Michal!
>
> will there be in the near future a way to store the metadata on a device like an SSD, instead of putting it into memory? having 64GB of RAM or more isn't easily available (even nowadays), but 120 GB of SSD (mirrored) on a PCIe bus is much larger, and costs a lot less. but maybe slower...
>
> anyone with an opinion is welcome :-)

I believe this may not be a bad idea. One thing that would trouble me is that SSD drives apparently don't like to do lots of writes - that causes their life expectancy to go down significantly. That's what I've heard.

Boris. |
From: jyc <mai...@gm...> - 2012-06-06 14:32:45
|
Hi everyone!

Hi Michal!

will there be in the near future a way to store the metadata on a device like an SSD, instead of putting it into memory? having 64 GB of RAM or more isn't easily available (even nowadays), but 120 GB of SSD (mirrored) on a PCIe bus is much larger, and costs a lot less. but maybe slower...

anyone with an opinion is welcome :-) |
From: Michal B. <mic...@co...> - 2012-06-06 10:38:06
|
Of the data chunk. So "mfsfilerepair" doesn't help??

Regards
Michal

-----Original Message-----
From: Davies Liu [mailto:dav...@gm...]
Sent: Wednesday, June 06, 2012 12:12 PM
To: Michal Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] There are only invalid copyies in mfsmaster

On Wed, Jun 6, 2012 at 5:59 PM, Michal Borychowski <mic...@co...> wrote:
> Hi
>
> You have "incompatibility" between chunk versions among master and chunkservers. "mfsfilerepair" should help

The version of the data chunk, or of moosefs? mfsfilerepair has never helped; the files are broken, and any IO operations will hang.

> Regards
> Michal
> [...]

--
- Davies |
From: Davies L. <dav...@gm...> - 2012-06-06 10:12:28
|
On Wed, Jun 6, 2012 at 5:59 PM, Michal Borychowski <mic...@co...> wrote:
> Hi
>
> You have "incompatibility" between chunk versions among master and chunkservers. "mfsfilerepair" should help

The version of the data chunk, or of moosefs? mfsfilerepair has never helped; the files are broken, and any IO operations will hang.

> Regards
> Michal
>
> -----Original Message-----
> From: Davies Liu [mailto:dav...@gm...]
> Sent: Tuesday, June 05, 2012 11:48 AM
> To: moo...@li...
> Subject: [Moosefs-users] There are only invalid copyies in mfsmaster
> [...]

--
- Davies |
From: Michal B. <mic...@co...> - 2012-06-06 09:59:20
|
Hi!

It will be introduced in the upcoming 1.6.26.

Regards
Michal

From: Karol Pasternak [mailto:kar...@ar...]
Sent: Tuesday, June 05, 2012 3:51 PM
To: moo...@li...
Subject: [Moosefs-users] Reload mfsmaster.cfg without interrupts?

> Hi, is there any possibility to reload mfsmaster.cfg without affecting active connections?
> [...] |
From: Michal B. <mic...@co...> - 2012-06-06 09:59:20
|
Hi

You have "incompatibility" between chunk versions among master and chunkservers. "mfsfilerepair" should help

Regards
Michal

-----Original Message-----
From: Davies Liu [mailto:dav...@gm...]
Sent: Tuesday, June 05, 2012 11:48 AM
To: moo...@li...
Subject: [Moosefs-users] There are only invalid copyies in mfsmaster

Hi,

There are so many error messages in syslog, like:

May 29 22:13:30 sam mfsmaster[19988]: chunk 000000000AE51350 has only invalid copies (2) - please repair it manually
May 29 22:13:30 sam mfsmaster[19988]: chunk 000000000AE51350_00000001 - invalid copy on (192.168.1.78 - ver:00000000)
May 29 22:13:30 sam mfsmaster[19988]: chunk 000000000AE51350_00000001 - invalid copy on (192.168.1.74 - ver:00000000)

When trying to read the file, I get the error:

[Errno 6] No such device or address

--
- Davies |
From: Karol P. <kar...@ar...> - 2012-06-05 13:51:37
|
Hi,

is there any possibility to reload mfsmaster.cfg without affecting active connections?

from the man page: SIGHUP forces mfsmaster to reload the mfsexports.cfg and mfstopology.cfg files. It doesn't affect already active connections.

But what about mfsmaster.cfg? is it also reloaded, even though it's not mentioned in the man page?

--
Pozdrawiam/Best Regards
Karol Pasternak |
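The SIGHUP mechanism the man page describes is the standard Unix reload pattern: the daemon catches the signal and re-reads its configuration without dropping connections. A generic, self-contained illustration of that pattern (mfsmaster's own internals are not shown here):

```python
import os
import signal

# Generic SIGHUP-reload demonstration: a process installs a handler,
# and `kill -HUP <pid>` triggers a config re-read instead of an exit.
reloads = {"count": 0}

def on_hup(signum, frame):
    # A real daemon would re-read its config files (e.g. mfsexports.cfg)
    # here, leaving established client connections untouched.
    reloads["count"] += 1

signal.signal(signal.SIGHUP, on_hup)
os.kill(os.getpid(), signal.SIGHUP)   # what `kill -HUP <pid>` does
print(reloads["count"])  # 1
```

Whether mfsmaster.cfg itself participates in this reload is exactly the open question Michal answers in the thread (introduced in 1.6.26).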
From: Davies L. <dav...@gm...> - 2012-06-05 09:48:13
|
Hi,

There are so many error messages in syslog, like:

May 29 22:13:30 sam mfsmaster[19988]: chunk 000000000AE51350 has only invalid copies (2) - please repair it manually
May 29 22:13:30 sam mfsmaster[19988]: chunk 000000000AE51350_00000001 - invalid copy on (192.168.1.78 - ver:00000000)
May 29 22:13:30 sam mfsmaster[19988]: chunk 000000000AE51350_00000001 - invalid copy on (192.168.1.74 - ver:00000000)

When trying to read the file, I get the error:

[Errno 6] No such device or address

--
- Davies |
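When syslog fills with messages like these, it helps to collect the affected chunk IDs in one place before attempting repairs. A small helper for that (my own sketch, not a MooseFS tool; the regex is keyed to the message format shown above):

```python
import re

# Extract chunk IDs from mfsmaster "has only invalid copies" syslog
# lines, so the matching files can be located and repaired.
PATTERN = re.compile(
    r"chunk ([0-9A-F]{16}) has only invalid copies \((\d+)\)"
)

def broken_chunks(lines):
    found = {}
    for line in lines:
        m = PATTERN.search(line)
        if m:
            found[m.group(1)] = int(m.group(2))  # chunk id -> copy count
    return found

log = [
    "May 29 22:13:30 sam mfsmaster[19988]: chunk 000000000AE51350 has only "
    "invalid copies (2) - please repair it manually",
    "May 29 22:13:30 sam mfsmaster[19988]: chunk 000000000AE51350_00000001 "
    "- invalid copy on (192.168.1.78 - ver:00000000)",
]
print(broken_chunks(log))  # {'000000000AE51350': 2}
```

Only the summary line matches; the per-copy "invalid copy on" lines are ignored since they repeat the same chunk ID.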
From: Michal B. <mic...@co...> - 2012-06-03 11:37:49
|
Hmm... You just install it from the tar.gz. Unless you use the "--disable-mfsmount" option in "./configure", you have "mfsmount".

Regards
Michal

From: Boris Epstein [mailto:bor...@gm...]
Sent: Sunday, June 03, 2012 1:23 PM
To: Michal Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] mfsmount for Mac OS X 10.6 - 10.7

> Michal,
>
> Thanks! It seems to me this is just the Mac OS X FUSE module, though. Where do I get the MooseFS client (mfsmount), though?
>
> Boris.
> [...] |
From: Boris E. <bor...@gm...> - 2012-06-03 11:25:31
|
Well, that could theoretically happen, but in my particular case I was doing test reads, and actively so, when I noticed it, so it must have been something else.

Boris.

On Sun, Jun 3, 2012 at 6:40 AM, Michal Borychowski <mic...@co...> wrote:
> Hi!
>
> Or you had I/O stats set to show the last minute and there really was no traffic?
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> [...] |
From: Boris E. <bor...@gm...> - 2012-06-03 11:22:50
|
Michal,

Thanks! It seems to me this is just the Mac OS X FUSE module, though. Where do I get the MooseFS client (mfsmount), though?

Boris.

On Sun, Jun 3, 2012 at 6:44 AM, Michal Borychowski <mic...@co...> wrote:
> You can find a link to OSXFUSE on http://www.moosefs.org/reference-guide.html:
> http://osxfuse.github.com/
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> [...] |
From: Michal B. <mic...@co...> - 2012-06-03 10:44:43
|
You can find a link to OSXFUSE on http://www.moosefs.org/reference-guide.html:
http://osxfuse.github.com/

Kind regards
Michał Borychowski
MooseFS Support Manager

From: Boris Epstein [mailto:bor...@gm...]
Sent: Saturday, June 02, 2012 10:54 PM
To: moo...@li...
Subject: [Moosefs-users] mfsmount for Mac OS X 10.6 - 10.7

> Hello again,
>
> Does anyone know if it exists? If so, where do I download it from? For some reason I don't seem to be able to find it.
>
> Thanks.
>
> Boris. |
From: Michal B. <mic...@co...> - 2012-06-03 10:40:19
|
Hi!

Or you had I/O stats set to show the last minute and there really was no traffic?

Kind regards
Michał Borychowski
MooseFS Support Manager

From: Boris Epstein [mailto:bor...@gm...]
Sent: Saturday, June 02, 2012 4:24 PM
To: moo...@li...
Subject: Re: [Moosefs-users] web-based report no longer shows I/O speeds

> OK, this problem seems to have fixed itself just as unexpectedly as it appeared. Oh, well, mystery and technology always go hand in hand.
>
> Boris.
> [...] |
From: Atom P. <ap...@di...> - 2012-06-03 05:29:21
|
It depends on your network hardware and how much data you will put in MooseFS.

On 6/2/2012 1:19 PM, Boris Epstein wrote:
> 1) Put it all on the same flat network.

Obviously the easiest and scales best. Your bottleneck will be the MFS master servers. But, depending on your client OS and network topology, you could lose some performance to network chatter from other services. Also, you wouldn't have a single point, the router, from which to manage security policies.

> 2) Put the MooseFS machines (master, meta servers, chunk servers) on a separate fully routable private network.

Easier to secure and potentially easier to manage. But your bottleneck could be the router. Consider that the MFS master must answer every file request but the chunk servers actually serve the data.

You might have some success using the topology feature. Put your master and a few chunk servers in a private network and some chunk servers in each of your client networks; then configure topologies to tell the clients to prefer chunk servers on the same network. Unfortunately you can't predict where the chunks will go, so to increase the chance that a chunk will be in the same network you either need to use a higher goal or fewer chunk servers.

--
Perfection is just a word I use occasionally with mustard.

--Atom Powers--
Director of IT
DigiPen Institute of Technology
+1 (425) 895-4443 |
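The "prefer chunk servers on the client's own network" selection Atom describes can be modeled in a few lines. MooseFS implements this through mfstopology.cfg distances; the standalone sketch below only illustrates the selection rule, and the addresses are made up:

```python
import ipaddress

# Prefer chunkservers inside the client's own network; fall back to
# all servers when none are local. Illustrative model only.
def prefer_local(client_net, chunkservers):
    network = ipaddress.ip_network(client_net)
    local = [cs for cs in chunkservers
             if ipaddress.ip_address(cs) in network]
    return local if local else chunkservers

servers = ["10.0.5.20", "192.168.1.74", "192.168.1.78"]
print(prefer_local("192.168.1.0/24", servers))
# ['192.168.1.74', '192.168.1.78']
```

This also shows why Atom's caveat matters: preference only helps if a copy of the chunk actually landed on a local server, which is what the higher goal or fewer chunk servers are meant to make likely.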
From: Boris E. <bor...@gm...> - 2012-06-02 20:53:37
|
Hello again,

Does anyone know if it exists? If so, where do I download it from?

For some reason I don't seem to be able to find it.

Thanks.

Boris. |
From: Boris E. <bor...@gm...> - 2012-06-02 20:19:29
|
Hello Moosies (is that the correct term? :)

OK, here's my problem. I have a flat private network housing the user workstations and various other potential client machines. I need to build some MooseFS storage installations. Given that a client cannot use the MooseFS installation via a single point of entry, as it needs to communicate with both the master and the chunk servers, there are two options here.

1) Put it all on the same flat network.
2) Put the MooseFS machines (master, meta servers, chunk servers) on a separate fully routable private network.

What are the advantages and disadvantages of the options above? Which one is, in your opinion, preferable?

Any input much appreciated. Thanks.

Boris. |
From: Boris E. <bor...@gm...> - 2012-06-02 14:24:12
|
On Sat, Jun 2, 2012 at 10:12 AM, Boris Epstein <bor...@gm...> wrote:
> Hello Mooseusers,
>
> For some reason my CGI-based reporter ( http://master:9425 ) stopped showing the I/O speed in the "Disks" section. Has anybody seen this? Why would this happen?
>
> Thanks.
>
> Boris.

OK, this problem seems to have fixed itself just as unexpectedly as it appeared. Oh, well, mystery and technology always go hand in hand.

Boris. |
From: Boris E. <bor...@gm...> - 2012-06-02 14:12:13
|
Hello Mooseusers,

For some reason my CGI-based reporter ( http://master:9425 ) stopped showing the I/O speed in the "Disks" section. Has anybody seen this? Why would this happen?

Thanks.

Boris. |
From: Boris E. <bor...@gm...> - 2012-06-02 03:53:08
|
Hello listmates,

I have a MooseFS installation which I suspect does not scale as well as I would like it to. It seems to bog down, with performance going down the tubes, when multiple users are trying to do I/O to it. Hence the following questions:

1) Where do I look for bottlenecks?
2) What are the tunable parameters I should be looking to adjust?

Thanks.

Boris. |
From: wkmail <wk...@bn...> - 2012-05-31 19:41:26
|
On 5/31/2012 9:32 AM, Elliot Finley wrote:
> when you take a chunkserver offline, your active VMs are modifying chunks within their VM image(s). But they can only modify the copies that are online. So when you bring your chunkserver back online, you have undergoal chunks because the offline copy hasn't been modified. Also, the copies that were offline are now an older (i.e. wrong) version, thus causing your version errors.

OK, that makes sense and is what I suspected.

However, are the version errors what is causing the VM to crash because it's trying to use stale chunks, or is it just the stress/confusion on the system as it works through those bad chunks while doing rebalancing and goal repair of the chunks that were moved while the chunkserver was offline? (Note: we've only seen issues when both conditions are present. Simple recovery from a bad drive or adding an additional chunkserver has not presented a problem, even when the cluster was rather busy.)

How does MFS determine which are good, current chunks and which are old/invalid and should be tossed out when you re-introduce old chunks back into the system by bringing back a chunkserver? Does the mount/master know which ones are current/valid, or does that only come up during a subsequent scan?

On 1.6.20 the chunkserver behaviour is to do a scan before joining the master. I suppose that would be where the system could identify and invalidate out-of-date chunks, but it appears that those checks are occurring on the fly during goal repair. And of course the chunkserver behaviour has changed since 1.6.24.

We are planning a migration of that MFS cluster to 1.6.25 this weekend; maybe that will resolve the issue, since the devs had indicated that a number of bug fixes have taken place. |
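The syslog messages earlier in the thread (the `_00000001` and `ver:00000000` suffixes) show that chunks carry version numbers. My reading of Elliot's explanation, expressed as a minimal model, is that a copy whose version lags the master's recorded version is stale. This is an assumption about the rule, not a description of the actual MooseFS code:

```python
# Minimal model of stale-copy detection by chunk version: the master
# records the current version; copies that match are valid, copies
# that lag (e.g. modified while their chunkserver was offline) are not.
def classify_copies(master_version, copies):
    valid = [(cs, v) for cs, v in copies if v == master_version]
    stale = [(cs, v) for cs, v in copies if v < master_version]
    return valid, stale

copies = [("192.168.1.74", 1), ("192.168.1.78", 3), ("192.168.1.80", 3)]
valid, stale = classify_copies(3, copies)
print(valid)  # [('192.168.1.78', 3), ('192.168.1.80', 3)]
print(stale)  # [('192.168.1.74', 1)]
```

Under this model, "only invalid copies" would mean every surviving copy lags the recorded version, which is why mfsfilerepair (which accepts an available copy as current) is the suggested escape hatch.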