Messages by month (archive index):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2009 | | | | | | | | | | | | 4 |
| 2010 | 20 | 11 | 11 | 9 | 22 | 85 | 94 | 80 | 72 | 64 | 69 | 89 |
| 2011 | 72 | 109 | 116 | 117 | 117 | 102 | 91 | 72 | 51 | 41 | 55 | 74 |
| 2012 | 45 | 77 | 99 | 113 | 132 | 75 | 70 | 58 | 58 | 37 | 51 | 15 |
| 2013 | 28 | 16 | 25 | 38 | 23 | 39 | 42 | 19 | 41 | 31 | 18 | 18 |
| 2014 | 17 | 19 | 39 | 16 | 10 | 13 | 17 | 13 | 8 | 53 | 23 | 7 |
| 2015 | 35 | 13 | 14 | 56 | 8 | 18 | 26 | 33 | 40 | 37 | 24 | 20 |
| 2016 | 38 | 20 | 25 | 14 | 6 | 36 | 27 | 19 | 36 | 24 | 15 | 16 |
| 2017 | 8 | 13 | 17 | 20 | 28 | 10 | 20 | 3 | 18 | 8 | | 5 |
| 2018 | 15 | 9 | 12 | 7 | 123 | 41 | | 14 | | 15 | | 7 |
| 2019 | 2 | 9 | 2 | 9 | | | 2 | | 6 | 1 | 12 | 2 |
| 2020 | 2 | | | 3 | | 4 | 4 | 1 | 18 | 2 | | |
| 2021 | | 3 | | | | | 6 | | 5 | 5 | 3 | |
| 2022 | | | 3 | | | | | | | | | |
From: Ken <ken...@gm...> - 2012-05-10 11:51:58
A fast demo: http://220.181.180.55/demo.html

-Ken

On Thu, May 10, 2012 at 7:17 PM, Ken <ken...@gm...> wrote:
> hi, all
>
> As mentioned in a previous mail
> (http://sf.net/mailarchive/message.php?msg_id=29171206),
> we have now open-sourced it - bundle
>
> https://github.com/xiaonei/bundle
>
> The source is well tested and documented.
>
> Demo:
> http://60.29.242.206/demo.html
>
> Any ideas are appreciated.
>
> -Ken
From: Ken <ken...@gm...> - 2012-05-10 11:17:31
hi, all

As mentioned in a previous mail
(http://sf.net/mailarchive/message.php?msg_id=29171206),
we have now open-sourced it - bundle

https://github.com/xiaonei/bundle

The source is well tested and documented.

Demo: http://60.29.242.206/demo.html

Any ideas are appreciated.

-Ken
From: wkmail <wk...@bn...> - 2012-05-09 19:12:24
We run several MFS clusters, mostly for data storage, but we have also been pleased with their use in email server clusters, where despite the storage penalty (the 64K chunks multiplying the storage size used) performance has been quite good and compares well with other solutions we have tried (replicated NFS, etc.), with much easier maintenance.

Our feeling is that hard drives are still cheap (despite the Asian flood) and we have lots of older kit/drives floating around in the DC.

We currently have a 4-chunkserver setup that, due to growth, is beginning to slow down (7+ million files now). Each CS has a single 1TB SATA drive. Goal is set to 3.

Would we be better off adding additional chunkservers and thus spreading the reads/writes over more CS machines? Or would simply adding additional drives to the existing chunkservers achieve the same thing (or close to it), since the writes would be spread over more spindles?

On this list I recall previous recommendations that going beyond 4 spindles per CS was problematic due to limits in the software on the number of CS connections, but in this case we are starting with only 1 drive apiece, and we certainly have a lot of room to grow (and with the lower-end kit we use for chunkservers, they probably only have 2-4 SATA ports anyway).

Thank You.

-bill
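For readers weighing the second option: a chunkserver picks up its disks from mfshdd.cfg, one directory per line (each ideally a dedicated filesystem), and spreads chunks across all of them. A minimal sketch - the mount points below are hypothetical:

```
# /etc/mfshdd.cfg on one chunkserver: one dedicated filesystem per line
/mnt/chunks1
/mnt/chunks2
/mnt/chunks3
```

Either approach adds spindles; adding whole chunkservers also spreads network and CPU load, and with goal 3 the copies of each chunk are placed on different chunkservers, so more machines also means more placement choices.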
From: 欧阳晓华 <toa...@gm...> - 2012-05-09 01:45:31
MooseFS's delayed delete (trash time) can do what you want.

2012/4/24 舒彦博 <shu...@gm...>:
> hi,
>    MooseFS is an efficient distributed file system. From the Q&A on
> www.moosefs.org, I know MooseFS supports a snapshot feature.
>    I want to know whether MooseFS supports incremental backup for the data
> on the chunk servers. If not, will it be added in the future?
>
> Thanks
>
> Good luck
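For reference, trash time is set per file or directory with the standard client-side tools; a hedged sketch of the usual workflow (the mount paths are hypothetical, and exact option syntax may differ between versions - see mfssettrashtime(1)):

```
# keep deleted files recoverable for 7 days (604800 s) under this subtree
mfssettrashtime -r 604800 /mnt/mfs/backups

# verify the setting
mfsgettrashtime /mnt/mfs/backups

# deleted-but-not-yet-expired files remain visible (and restorable) through
# the meta mount, e.g.:  mfsmount -m /mnt/mfs-meta
```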
From: Michał B. <mic...@co...> - 2012-05-04 09:39:32
|
Unfortunately it is not coded yet, probably it would be in beta in July/August. We'll let you know. Kind regards Michał From: Ken [mailto:ken...@gm...] Sent: Friday, May 04, 2012 11:18 AM To: Michał Borychowski Cc: moo...@li... Subject: Re: [Moosefs-users] ChunkServer in different Level Would you please publish more detail about it? or publish the source repository in developing? We very much hope will help to do some testing. -Ken On Fri, May 4, 2012 at 5:05 PM, Michał Borychowski <mic...@co...> wrote: Hi Ken! The idea would be very similar but we need to make our implementation of this and do some tests. Kind regards Michał From: Ken [mailto:ken...@gm...] Sent: Friday, May 04, 2012 10:23 AM To: Michał Borychowski Subject: Re: [Moosefs-users] ChunkServer in different Level Michal, I notice the blog Rack Awareness <http://www.moosefs.org/news-reader/items/rack-awareness.html> : The behaviour of choosing where to create the new chunks will be introduced as "level goal" functionality in an upcoming release. Is this 'level goal' same as 'ChunkServer in different Level'? Thanks -Ken On Mon, Jan 30, 2012 at 6:38 PM, Ken <ken...@gm...> wrote: hi, I am very glad to hear that. Patch attached, and web version via github: https://github.com/pedia/moosefs/commit/74bc2f498fd218ea6aa51ccd0779d0e154da 09ed And hope to help more if you need. Best Regards -Ken 2012/1/30 Michał Borychowski <mic...@ge...>: > Hi Ken > > Our developers took a closer look at your solution and it is very interesting. We'd like to incorporate your code (after some little changes) into our main branch - don't you have anything against it? If not, please send your changes in a form of a patch. > > Thank you > > > Kind regards > Michał Borychowski > MooseFS Support Manager > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > Gemius S.A. > ul. Wołoska 7, 02-672 Warszawa > Budynek MARS, klatka D > Tel.: +4822 874-41-00 > Fax : +4822 874-41-01 > > > -----Original Message----- > From: Ken [mailto:ken...@gm...] > Sent: Monday, January 16, 2012 2:06 AM > To: Michał Borychowski > Cc: moo...@li... > Subject: Re: [Moosefs-users] ChunkServer in different Level > > Hi, Michał > > I push my scripts to github: > https://github.com/pedia/moosefs/tree/master/failover-script > > It really work to me. > > Sorry for late reply, because weekend. > > Regards > -Ken > > > > 2012/1/13 Michał Borychowski <mic...@ge...>: >> Hi Ken! >> >> It would be great if you could also provide the group with your >> scripts for auto recovery of the system. >> >> >> Kind regards >> Michał >> >> >> -----Original Message----- >> From: Ken [mailto:ken...@gm...] >> Sent: Friday, January 13, 2012 8:59 AM >> To: Davies Liu >> Cc: moo...@li... >> Subject: Re: [Moosefs-users] ChunkServer in different Level >> >> We also use ucarp, but changed a little from Thomas version. The auto >> switching never failed. >> >> A friend of mine use DRBD and LVS. It also work fine, but I think it >> is much smaller than the Douban's. >> >> -Ken >> >> >> >> On Fri, Jan 13, 2012 at 3:42 PM, Davies Liu <dav...@gm...> wrote: >>> On Fri, Jan 13, 2012 at 3:25 PM, Ken <ken...@gm...> wrote: >>>> I noticed the go-mfsclient in this mail-list before, we also write a >>>> moose client in C++. ;-) It work find when mfsmaster failover. >>> >>> It's the most interesting part, how do you failover mfsmaster ? >>> I have tried the method with ucarp, provided by Thomas S Hatch, It's >>> seemed not stable enough, failed sometimes. 
>>> >>> Before a stable solution come up, we decide to do manually fail-over >>> by ops, and do not deploy it in heavy online system. >>> >>>> We plan to build it as a preload dynamic library, and auto hook the >>>> file >> API. >>>> I think it is high availability enough. >>>> >>>> Thanks >>>> >>>> Regards >>>> -Ken >>>> >>>> >>>> >>>> On Fri, Jan 13, 2012 at 2:52 PM, Davies Liu <dav...@gm...> wrote: >>>>> On Fri, Jan 13, 2012 at 2:40 PM, Ken <ken...@gm...> wrote: >>>>>>>> It's not good ideal to use moosefs as storage for huge amount of >>>>>>>> small files >>>>>> I agree. We combine some small files into one big file, and read >>>>>> the small files with offset/length infomation. >>>>> >>>>> Is not safe to write to same file concurrently. >>>>> We use this method to backup the original file user uploaded, with >>>>> tar, when offline. >>>>> Some times, some file will be broken. >>>>> >>>>> MFS is not good enough for online system, not high available, and >>>>> some IO operations will be block when error in mfsmaster or >> mfschunkserver. >>>>> >>>>> So we serve some video files (>10M) in MFS this way: >>>>> Nginx -> nginx + FUSE -> MFS >>>>> or >>>>> Nginx -> go-mfsclient [1] -> MFS >>>>> >>>>> If there something wrong with MFS, it will not block the first >>>>> Nginx and the whole site will not be affected. >>>>> >>>>> Davies >>>>> >>>>> [1] github.com/davies/go-mfsclient >>>>> >>>>>> Thanks. >>>>>> >>>>>> Regards >>>>>> -Ken >>>>>> >>>>>> >>>>>> >>>>>> On Fri, Jan 13, 2012 at 2:32 PM, Davies Liu <dav...@gm...> >> wrote: >>>>>>> On Thu, Jan 12, 2012 at 5:28 PM, Ken <ken...@gm...> wrote: >>>>>>>> hi, moosefs >>>>>>>> >>>>>>>> We plan to use moosefs as storage for huge amount photos >>>>>>>> uploaded by >> users. >>>>>>> >>>>>>> It's not good ideal to use moosefs as storage for huge amount of >>>>>>> small files, because the mfsmaster will be the bottle neck when >>>>>>> you have more than 100M files. At that time, the whole size of >>>>>>> files may be 1T (10k per file), can be stored by one local disk. >>>>>>> >>>>>>> Huge amount small files need other solutions, just like TFS [1] >>>>>>> from taobao.com, or beansdb [2] from douban.com. >>>>>>> >>>>>>> [1] http://code.taobao.org/p/tfs/src/ [2] >>>>>>> http://code.google.com/p/beansdb/ >>>>>>> >>>>>>>> Because of read operations of new files are very more than old >>>>>>>> files, maybe write new files to SSD is a choice. >>>>>>>> For strict safe reason, we must backup content to an other data >> center. >>>>>>>> And more features in maintain purpose are required. >>>>>>>> >>>>>>>> I don't think moosefs can work fine in these situation. We try >>>>>>>> to implement these features several weeks ago. Till now, it's >>>>>>>> almost done. >>>>>>>> >>>>>>>> Is there anyone interested in this? >>>>>>>> >>>>>>>> more detail: >>>>>>>> # Add access_mode(none, read, write capability) to struct >>>>>>>> matocserventry(matocserv.c). This value can be changed from >>>>>>>> outside(maybe from the python cgi) # mfschunkserver.cfg add >>>>>>>> 'LEVEL' config, if not, LEVEL=0 as normal. >>>>>>>> ChunkServer report it to Master if need. >>>>>>>> # Add uint32_t levelgoal into struct fsnode(filesystem.c). >>>>>>>> # Add uint32_t levelgoal into sturct chunk(chunk.c). >>>>>>>> As seen, uint32_t levelgoal = uint8_t levelgoal[4], implied >>>>>>>> LEVEL should be 1,2,3 or 4. >>>>>>>> [2,1,0,0] mean store 2 copies in level=1 ChunkServer, store 1 >>>>>>>> copy in >>>>>>>> level=2 ChunkServer. 
>>>>>>>> # In chunk_do_jobs(chunk.c), send replicated command to ChunkServer. >>>>>>>> This policy should be very complicated in future. >>>>>>>> # Also, we add read/write levelgoal support in mfstools. >>>>>>>> >>>>>>>> We plan to put these trivial change into github or somewhere else. >>>>>>>> >>>>>>>> It's a very incipient prototype. We appreciate any advice from >>>>>>>> the develop team and other users. >>>>>>>> >>>>>>>> Regards >>>>>>>> -Ken >>>>>>>> >>>>>>>> ---------------------------------------------------------------- >>>>>>>> - >>>>>>>> ------------- >>>>>>>> RSA(R) Conference 2012 >>>>>>>> Mar 27 - Feb 2 >>>>>>>> Save $400 by Jan. 27 >>>>>>>> Register now! >>>>>>>> http://p.sf.net/sfu/rsa-sfdev2dev2 >>>>>>>> _______________________________________________ >>>>>>>> moosefs-users mailing list >>>>>>>> moo...@li... >>>>>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> - Davies >>>>> >>>>> >>>>> >>>>> -- >>>>> - Davies >>> >>> >>> >>> -- >>> - Davies >> >> ---------------------------------------------------------------------- >> ------ >> -- >> RSA(R) Conference 2012 >> Mar 27 - Feb 2 >> Save $400 by Jan. 27 >> Register now! >> http://p.sf.net/sfu/rsa-sfdev2dev2 >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > |
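The quoted thread sketches the per-level goal as a uint32_t holding four uint8_t copy counts, e.g. [2,1,0,0] meaning two copies on level-1 chunkservers and one on level-2. A small stand-alone illustration of that packing (an illustrative sketch only - not code from the patch or from MooseFS itself):

```c
#include <stdint.h>
#include <stdio.h>

/* Pack one uint8_t copy count per chunkserver level (levels 1..4)
 * into a single uint32_t, as described in the thread:
 * {2,1,0,0} -> 2 copies on level-1 chunkservers, 1 copy on level-2. */
static uint32_t levelgoal_pack(const uint8_t counts[4]) {
    return (uint32_t)counts[0]
         | (uint32_t)counts[1] << 8
         | (uint32_t)counts[2] << 16
         | (uint32_t)counts[3] << 24;
}

/* Extract the copy count for a given level (1..4). */
static uint8_t levelgoal_copies(uint32_t levelgoal, int level) {
    return (uint8_t)(levelgoal >> (8 * (level - 1)));
}

int main(void) {
    const uint8_t wanted[4] = {2, 1, 0, 0};
    uint32_t lg = levelgoal_pack(wanted);
    for (int level = 1; level <= 4; level++)
        printf("level %d: %u copies\n", level, (unsigned)levelgoal_copies(lg, level));
    return 0;
}
```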
From: Ken <ken...@gm...> - 2012-05-04 09:18:07
|
Would you please publish more detail about it? or publish the source repository in developing? We very much hope will help to do some testing. -Ken On Fri, May 4, 2012 at 5:05 PM, Michał Borychowski < mic...@co...> wrote: > Hi Ken!**** > > ** ** > > The idea would be very similar but we need to make our implementation of > this and do some tests.**** > > ** ** > > ** ** > > Kind regards**** > > Michał **** > > ** ** > > ** ** > > *From:* Ken [mailto:ken...@gm...] > *Sent:* Friday, May 04, 2012 10:23 AM > *To:* Michał Borychowski > > *Subject:* Re: [Moosefs-users] ChunkServer in different Level**** > > ** ** > > Michal, > > I notice the blog Rack Awareness<http://www.moosefs.org/news-reader/items/rack-awareness.html>: > **** > > The behaviour of choosing where to create the new chunks will be > introduced as "level goal" functionality in an upcoming release.**** > > Is this 'level goal' same as 'ChunkServer in different Level'? > > > Thanks > -Ken > > > **** > > On Mon, Jan 30, 2012 at 6:38 PM, Ken <ken...@gm...> wrote:**** > > hi, > > I am very glad to hear that. > > Patch attached, and web version via github: > > https://github.com/pedia/moosefs/commit/74bc2f498fd218ea6aa51ccd0779d0e154da09ed > > And hope to help more if you need. > > Best Regards > -Ken > > > > 2012/1/30 Michał Borychowski <mic...@ge...>:**** > > > Hi Ken > > > > Our developers took a closer look at your solution and it is very > interesting. We'd like to incorporate your code (after some little changes) > into our main branch - don't you have anything against it? If not, please > send your changes in a form of a patch. > > > > Thank you > > > > > > Kind regards > > Michał Borychowski > > MooseFS Support Manager > > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > > Gemius S.A. > > ul. Wołoska 7, 02-672 Warszawa > > Budynek MARS, klatka D > > Tel.: +4822 874-41-00 > > Fax : +4822 874-41-01 > > > > > > -----Original Message----- > > From: Ken [mailto:ken...@gm...] > > Sent: Monday, January 16, 2012 2:06 AM > > To: Michał Borychowski > > Cc: moo...@li... > > Subject: Re: [Moosefs-users] ChunkServer in different Level > > > > Hi, Michał > > > > I push my scripts to github: > > https://github.com/pedia/moosefs/tree/master/failover-script > > > > It really work to me. > > > > Sorry for late reply, because weekend. > > > > Regards > > -Ken > > > > > > > > 2012/1/13 Michał Borychowski <mic...@ge...>: > >> Hi Ken! > >> > >> It would be great if you could also provide the group with your > >> scripts for auto recovery of the system. > >> > >> > >> Kind regards > >> Michał > >> > >> > >> -----Original Message----- > >> From: Ken [mailto:ken...@gm...] > >> Sent: Friday, January 13, 2012 8:59 AM > >> To: Davies Liu > >> Cc: moo...@li... > >> Subject: Re: [Moosefs-users] ChunkServer in different Level > >> > >> We also use ucarp, but changed a little from Thomas version. The auto > >> switching never failed. > >> > >> A friend of mine use DRBD and LVS. It also work fine, but I think it > >> is much smaller than the Douban's. > >> > >> -Ken > >> > >> > >> > >> On Fri, Jan 13, 2012 at 3:42 PM, Davies Liu <dav...@gm...> > wrote: > >>> On Fri, Jan 13, 2012 at 3:25 PM, Ken <ken...@gm...> wrote: > >>>> I noticed the go-mfsclient in this mail-list before, we also write a > >>>> moose client in C++. ;-) It work find when mfsmaster failover. > >>> > >>> It's the most interesting part, how do you failover mfsmaster ? > >>> I have tried the method with ucarp, provided by Thomas S Hatch, It's > >>> seemed not stable enough, failed sometimes. 
> >>> > >>> Before a stable solution come up, we decide to do manually fail-over > >>> by ops, and do not deploy it in heavy online system. > >>> > >>>> We plan to build it as a preload dynamic library, and auto hook the > >>>> file > >> API. > >>>> I think it is high availability enough. > >>>> > >>>> Thanks > >>>> > >>>> Regards > >>>> -Ken > >>>> > >>>> > >>>> > >>>> On Fri, Jan 13, 2012 at 2:52 PM, Davies Liu <dav...@gm...> > wrote: > >>>>> On Fri, Jan 13, 2012 at 2:40 PM, Ken <ken...@gm...> wrote: > >>>>>>>> It's not good ideal to use moosefs as storage for huge amount of > >>>>>>>> small files > >>>>>> I agree. We combine some small files into one big file, and read > >>>>>> the small files with offset/length infomation. > >>>>> > >>>>> Is not safe to write to same file concurrently. > >>>>> We use this method to backup the original file user uploaded, with > >>>>> tar, when offline. > >>>>> Some times, some file will be broken. > >>>>> > >>>>> MFS is not good enough for online system, not high available, and > >>>>> some IO operations will be block when error in mfsmaster or > >> mfschunkserver. > >>>>> > >>>>> So we serve some video files (>10M) in MFS this way: > >>>>> Nginx -> nginx + FUSE -> MFS > >>>>> or > >>>>> Nginx -> go-mfsclient [1] -> MFS > >>>>> > >>>>> If there something wrong with MFS, it will not block the first > >>>>> Nginx and the whole site will not be affected. > >>>>> > >>>>> Davies > >>>>> > >>>>> [1] github.com/davies/go-mfsclient > >>>>> > >>>>>> Thanks. > >>>>>> > >>>>>> Regards > >>>>>> -Ken > >>>>>> > >>>>>> > >>>>>> > >>>>>> On Fri, Jan 13, 2012 at 2:32 PM, Davies Liu <dav...@gm...> > >> wrote: > >>>>>>> On Thu, Jan 12, 2012 at 5:28 PM, Ken <ken...@gm...> wrote: > >>>>>>>> hi, moosefs > >>>>>>>> > >>>>>>>> We plan to use moosefs as storage for huge amount photos > >>>>>>>> uploaded by > >> users. > >>>>>>> > >>>>>>> It's not good ideal to use moosefs as storage for huge amount of > >>>>>>> small files, because the mfsmaster will be the bottle neck when > >>>>>>> you have more than 100M files. At that time, the whole size of > >>>>>>> files may be 1T (10k per file), can be stored by one local disk. > >>>>>>> > >>>>>>> Huge amount small files need other solutions, just like TFS [1] > >>>>>>> from taobao.com, or beansdb [2] from douban.com. > >>>>>>> > >>>>>>> [1] http://code.taobao.org/p/tfs/src/ [2] > >>>>>>> http://code.google.com/p/beansdb/ > >>>>>>> > >>>>>>>> Because of read operations of new files are very more than old > >>>>>>>> files, maybe write new files to SSD is a choice. > >>>>>>>> For strict safe reason, we must backup content to an other data > >> center. > >>>>>>>> And more features in maintain purpose are required. > >>>>>>>> > >>>>>>>> I don't think moosefs can work fine in these situation. We try > >>>>>>>> to implement these features several weeks ago. Till now, it's > >>>>>>>> almost done. > >>>>>>>> > >>>>>>>> Is there anyone interested in this? > >>>>>>>> > >>>>>>>> more detail: > >>>>>>>> # Add access_mode(none, read, write capability) to struct > >>>>>>>> matocserventry(matocserv.c). This value can be changed from > >>>>>>>> outside(maybe from the python cgi) # mfschunkserver.cfg add > >>>>>>>> 'LEVEL' config, if not, LEVEL=0 as normal. > >>>>>>>> ChunkServer report it to Master if need. > >>>>>>>> # Add uint32_t levelgoal into struct fsnode(filesystem.c). > >>>>>>>> # Add uint32_t levelgoal into sturct chunk(chunk.c). > >>>>>>>> As seen, uint32_t levelgoal = uint8_t levelgoal[4], implied > >>>>>>>> LEVEL should be 1,2,3 or 4. 
> >>>>>>>> [2,1,0,0] mean store 2 copies in level=1 ChunkServer, store 1 > >>>>>>>> copy in > >>>>>>>> level=2 ChunkServer. > >>>>>>>> # In chunk_do_jobs(chunk.c), send replicated command to > ChunkServer. > >>>>>>>> This policy should be very complicated in future. > >>>>>>>> # Also, we add read/write levelgoal support in mfstools. > >>>>>>>> > >>>>>>>> We plan to put these trivial change into github or somewhere else. > >>>>>>>> > >>>>>>>> It's a very incipient prototype. We appreciate any advice from > >>>>>>>> the develop team and other users. > >>>>>>>> > >>>>>>>> Regards > >>>>>>>> -Ken > >>>>>>>> > >>>>>>>> ---------------------------------------------------------------- > >>>>>>>> - > >>>>>>>> ------------- > >>>>>>>> RSA(R) Conference 2012 > >>>>>>>> Mar 27 - Feb 2 > >>>>>>>> Save $400 by Jan. 27 > >>>>>>>> Register now! > >>>>>>>> http://p.sf.net/sfu/rsa-sfdev2dev2 > >>>>>>>> _______________________________________________ > >>>>>>>> moosefs-users mailing list > >>>>>>>> moo...@li... > >>>>>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> -- > >>>>>>> - Davies > >>>>> > >>>>> > >>>>> > >>>>> -- > >>>>> - Davies > >>> > >>> > >>> > >>> -- > >>> - Davies > >> > >> ---------------------------------------------------------------------- > >> ------ > >> -- > >> RSA(R) Conference 2012 > >> Mar 27 - Feb 2 > >> Save $400 by Jan. 27 > >> Register now! > >> http://p.sf.net/sfu/rsa-sfdev2dev2 > >> _______________________________________________ > >> moosefs-users mailing list > >> moo...@li... > >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > >> > >**** > > ** ** > |
From: Michał B. <mic...@co...> - 2012-05-04 09:04:01
|
Hi Ken! The idea would be very similar but we need to make our implementation of this and do some tests. Kind regards Michał From: Ken [mailto:ken...@gm...] Sent: Friday, May 04, 2012 10:23 AM To: Michał Borychowski Subject: Re: [Moosefs-users] ChunkServer in different Level Michal, I notice the blog Rack Awareness <http://www.moosefs.org/news-reader/items/rack-awareness.html> : The behaviour of choosing where to create the new chunks will be introduced as "level goal" functionality in an upcoming release. Is this 'level goal' same as 'ChunkServer in different Level'? Thanks -Ken On Mon, Jan 30, 2012 at 6:38 PM, Ken <ken...@gm...> wrote: hi, I am very glad to hear that. Patch attached, and web version via github: https://github.com/pedia/moosefs/commit/74bc2f498fd218ea6aa51ccd0779d0e154da 09ed And hope to help more if you need. Best Regards -Ken 2012/1/30 Michał Borychowski <mic...@ge...>: > Hi Ken > > Our developers took a closer look at your solution and it is very interesting. We'd like to incorporate your code (after some little changes) into our main branch - don't you have anything against it? If not, please send your changes in a form of a patch. > > Thank you > > > Kind regards > Michał Borychowski > MooseFS Support Manager > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > Gemius S.A. > ul. Wołoska 7, 02-672 Warszawa > Budynek MARS, klatka D > Tel.: +4822 874-41-00 > Fax : +4822 874-41-01 > > > -----Original Message----- > From: Ken [mailto:ken...@gm...] > Sent: Monday, January 16, 2012 2:06 AM > To: Michał Borychowski > Cc: moo...@li... > Subject: Re: [Moosefs-users] ChunkServer in different Level > > Hi, Michał > > I push my scripts to github: > https://github.com/pedia/moosefs/tree/master/failover-script > > It really work to me. > > Sorry for late reply, because weekend. > > Regards > -Ken > > > > 2012/1/13 Michał Borychowski <mic...@ge...>: >> Hi Ken! >> >> It would be great if you could also provide the group with your >> scripts for auto recovery of the system. >> >> >> Kind regards >> Michał >> >> >> -----Original Message----- >> From: Ken [mailto:ken...@gm...] >> Sent: Friday, January 13, 2012 8:59 AM >> To: Davies Liu >> Cc: moo...@li... >> Subject: Re: [Moosefs-users] ChunkServer in different Level >> >> We also use ucarp, but changed a little from Thomas version. The auto >> switching never failed. >> >> A friend of mine use DRBD and LVS. It also work fine, but I think it >> is much smaller than the Douban's. >> >> -Ken >> >> >> >> On Fri, Jan 13, 2012 at 3:42 PM, Davies Liu <dav...@gm...> wrote: >>> On Fri, Jan 13, 2012 at 3:25 PM, Ken <ken...@gm...> wrote: >>>> I noticed the go-mfsclient in this mail-list before, we also write a >>>> moose client in C++. ;-) It work find when mfsmaster failover. >>> >>> It's the most interesting part, how do you failover mfsmaster ? >>> I have tried the method with ucarp, provided by Thomas S Hatch, It's >>> seemed not stable enough, failed sometimes. >>> >>> Before a stable solution come up, we decide to do manually fail-over >>> by ops, and do not deploy it in heavy online system. >>> >>>> We plan to build it as a preload dynamic library, and auto hook the >>>> file >> API. >>>> I think it is high availability enough. >>>> >>>> Thanks >>>> >>>> Regards >>>> -Ken >>>> >>>> >>>> >>>> On Fri, Jan 13, 2012 at 2:52 PM, Davies Liu <dav...@gm...> wrote: >>>>> On Fri, Jan 13, 2012 at 2:40 PM, Ken <ken...@gm...> wrote: >>>>>>>> It's not good ideal to use moosefs as storage for huge amount of >>>>>>>> small files >>>>>> I agree. 
We combine some small files into one big file, and read >>>>>> the small files with offset/length infomation. >>>>> >>>>> Is not safe to write to same file concurrently. >>>>> We use this method to backup the original file user uploaded, with >>>>> tar, when offline. >>>>> Some times, some file will be broken. >>>>> >>>>> MFS is not good enough for online system, not high available, and >>>>> some IO operations will be block when error in mfsmaster or >> mfschunkserver. >>>>> >>>>> So we serve some video files (>10M) in MFS this way: >>>>> Nginx -> nginx + FUSE -> MFS >>>>> or >>>>> Nginx -> go-mfsclient [1] -> MFS >>>>> >>>>> If there something wrong with MFS, it will not block the first >>>>> Nginx and the whole site will not be affected. >>>>> >>>>> Davies >>>>> >>>>> [1] github.com/davies/go-mfsclient >>>>> >>>>>> Thanks. >>>>>> >>>>>> Regards >>>>>> -Ken >>>>>> >>>>>> >>>>>> >>>>>> On Fri, Jan 13, 2012 at 2:32 PM, Davies Liu <dav...@gm...> >> wrote: >>>>>>> On Thu, Jan 12, 2012 at 5:28 PM, Ken <ken...@gm...> wrote: >>>>>>>> hi, moosefs >>>>>>>> >>>>>>>> We plan to use moosefs as storage for huge amount photos >>>>>>>> uploaded by >> users. >>>>>>> >>>>>>> It's not good ideal to use moosefs as storage for huge amount of >>>>>>> small files, because the mfsmaster will be the bottle neck when >>>>>>> you have more than 100M files. At that time, the whole size of >>>>>>> files may be 1T (10k per file), can be stored by one local disk. >>>>>>> >>>>>>> Huge amount small files need other solutions, just like TFS [1] >>>>>>> from taobao.com, or beansdb [2] from douban.com. >>>>>>> >>>>>>> [1] http://code.taobao.org/p/tfs/src/ [2] >>>>>>> http://code.google.com/p/beansdb/ >>>>>>> >>>>>>>> Because of read operations of new files are very more than old >>>>>>>> files, maybe write new files to SSD is a choice. >>>>>>>> For strict safe reason, we must backup content to an other data >> center. >>>>>>>> And more features in maintain purpose are required. >>>>>>>> >>>>>>>> I don't think moosefs can work fine in these situation. We try >>>>>>>> to implement these features several weeks ago. Till now, it's >>>>>>>> almost done. >>>>>>>> >>>>>>>> Is there anyone interested in this? >>>>>>>> >>>>>>>> more detail: >>>>>>>> # Add access_mode(none, read, write capability) to struct >>>>>>>> matocserventry(matocserv.c). This value can be changed from >>>>>>>> outside(maybe from the python cgi) # mfschunkserver.cfg add >>>>>>>> 'LEVEL' config, if not, LEVEL=0 as normal. >>>>>>>> ChunkServer report it to Master if need. >>>>>>>> # Add uint32_t levelgoal into struct fsnode(filesystem.c). >>>>>>>> # Add uint32_t levelgoal into sturct chunk(chunk.c). >>>>>>>> As seen, uint32_t levelgoal = uint8_t levelgoal[4], implied >>>>>>>> LEVEL should be 1,2,3 or 4. >>>>>>>> [2,1,0,0] mean store 2 copies in level=1 ChunkServer, store 1 >>>>>>>> copy in >>>>>>>> level=2 ChunkServer. >>>>>>>> # In chunk_do_jobs(chunk.c), send replicated command to ChunkServer. >>>>>>>> This policy should be very complicated in future. >>>>>>>> # Also, we add read/write levelgoal support in mfstools. >>>>>>>> >>>>>>>> We plan to put these trivial change into github or somewhere else. >>>>>>>> >>>>>>>> It's a very incipient prototype. We appreciate any advice from >>>>>>>> the develop team and other users. >>>>>>>> >>>>>>>> Regards >>>>>>>> -Ken >>>>>>>> >>>>>>>> ---------------------------------------------------------------- >>>>>>>> - >>>>>>>> ------------- >>>>>>>> RSA(R) Conference 2012 >>>>>>>> Mar 27 - Feb 2 >>>>>>>> Save $400 by Jan. 
27 >>>>>>>> Register now! >>>>>>>> http://p.sf.net/sfu/rsa-sfdev2dev2 >>>>>>>> _______________________________________________ >>>>>>>> moosefs-users mailing list >>>>>>>> moo...@li... >>>>>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> - Davies >>>>> >>>>> >>>>> >>>>> -- >>>>> - Davies >>> >>> >>> >>> -- >>> - Davies >> >> ---------------------------------------------------------------------- >> ------ >> -- >> RSA(R) Conference 2012 >> Mar 27 - Feb 2 >> Save $400 by Jan. 27 >> Register now! >> http://p.sf.net/sfu/rsa-sfdev2dev2 >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > |
From: Peter M. (aNeutrino) <pio...@co...> - 2012-05-03 23:15:50
Hi Samuel :)

Please send us more information directly (su...@mo...), rather than broadcasting it to the group:
- your configuration files (/etc/mfs*)
- OS version
- number of nodes in your storage, etc.
- the version of MooseFS you get the error with, and the version you used before the error
- how did you get the binary?
  -- from an external repo? (please send it to us), or
  -- from make install, or
  -- from dpkg-buildpackage?
  If possible, please send us your binary in any case.
- your time zone (we are CET)
- if possible, please send us your metadata.mfs file (if it is huge, maybe you can give us a URL so we can download it?)

Thanks for the strace from mfsmaster - it is quite useful, but not enough for us to help you. What options did you use for strace?

Can you please send me the output.strace.mfsmaster.txt file from this command:

strace -o output.strace.mfsmaster.txt -v -s 1024 -f mfsmaster start

If you have Skype, please give me your username so we can talk live, or your phone number and I will try to call you in my free time.

I can see that you do not have our technical support :( but we will try our best to help you :) We do it in our free time, and we have holidays in Poland now, so please be patient.

regards
aNeutrino :)

On Thu, May 3, 2012 at 5:34 PM, Samuel Hassine <sam...@an...> wrote:
> Hi there,
>
> I am sorry to use this kind of mail subject, but it is really urgent. We
> have been using MooseFS for 2 years without any problems, with a huge datastore.
>
> Today, we took down mfsmaster as usual (mfsmaster stop). The metadata is
> clean and not corrupted. But, for the first time, we get an error when
> starting mfsmaster:
>
> root@mfs:/var/lib/mfs# mfsmaster start
> working directory: /var/lib/mfs
> lockfile created and locked
> initializing mfsmaster modules ...
> loading sessions ... ok
> sessions file has been loaded
> exports file has been loaded
> mfstopology configuration file (/etc/mfstopology.cfg) not found - using defaults
> loading metadata ...
> loading objects (files,directories,etc.) ...
> loading node: read error: ENOENT (No such file or directory)
> init: file system manager failed !!!
> error occured during initialization - exiting
>
> Here is the strace: http://pastebin.com/XaxhD9BL
>
> I am sure I shut down mfsmaster correctly and I have never seen this type of
> issue before (we have done many restarts). I tried "mfsmetarestore -a" and it
> finished with everything clean, but I still have this error.
>
> How can I start my mfsmaster again?
>
> Thanks for your help.
>
> Best regards.
> Sam
>
> PS: sorry for the double mails, the previous address was on the mfsmaster :)
From: Samuel H. <sam...@an...> - 2012-05-03 21:32:16
Hi there,

I am sorry to use this kind of mail subject, but it is really urgent. We
have been using MooseFS for 2 years without any problems, with a huge datastore.

Today, we took down mfsmaster as usual (mfsmaster stop). The metadata is
clean and not corrupted. But, for the first time, we get an error when
starting mfsmaster:

root@mfs:/var/lib/mfs# mfsmaster start
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology configuration file (/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
loading objects (files,directories,etc.) ...
loading node: read error: ENOENT (No such file or directory)
init: file system manager failed !!!
error occured during initialization - exiting

Here is the strace: http://pastebin.com/XaxhD9BL

I am sure I shut down mfsmaster correctly and I have never seen this type of
issue before (we have done many restarts). I tried "mfsmetarestore -a" and it
finished with everything clean, but I still have this error.

How can I start my mfsmaster again?

Thanks for your help.

Best regards.
Sam

PS: sorry for the double mails, the previous address was on the mfsmaster :)
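For anyone hitting a similar "loading node: read error" at startup: the usual recovery path in the 1.6.x series is to rebuild the metadata from the last saved image plus the changelogs with mfsmetarestore. A hedged sketch using the default data path (option behaviour can differ slightly between versions):

```
# rebuild metadata.mfs automatically from metadata.mfs.back + changelog.*.mfs
mfsmetarestore -a -d /var/lib/mfs

# or name the inputs and the output explicitly
mfsmetarestore -m /var/lib/mfs/metadata.mfs.back \
               -o /var/lib/mfs/metadata.mfs \
               /var/lib/mfs/changelog.*.mfs

mfsmaster start
```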
From: Boris E. <bor...@gm...> - 2012-05-03 21:07:27
Florent,

Yes, absolutely - it should have gone to the whole list, sorry about that.

I have specified the chunk data directories in mfshdd.cfg and I think I did it
correctly, as I got the content folders in there (00 through FF).

So it sounds like the data that goes into DATA_PATH is negligibly small - and
that is nice to know.

Boris.

On Thu, May 3, 2012 at 5:02 PM, Florent Bautista <fl...@co...> wrote:
> Reply on the list, it could help other people!
>
> You specify where the chunks are stored in the mfshdd.cfg file, one line per
> directory.
>
> The DATA_PATH does not require a lot of space (some KB).
>
> On 2012-05-03 22:56, Boris Epstein wrote:
>> Florent, thanks!
>> So you are saying that even for a multi-terabyte chunk of data I should
>> be OK placing my data in a 40 GB /var partition?
>> Boris.
>>
>> On Thu, May 3, 2012 at 4:51 PM, Florent Bautista <fl...@co...> wrote:
>>> Hi,
>>>
>>> Chunk servers use DATA_PATH to put some files needed for running, such as
>>> lock and statistics files.
>>>
>>> It does not need a lot of space, and it should be on a different file
>>> system than the chunk disks (security).
>>>
>>> You can use the default value, which is /var/lib/mfs/
>>>
>>> Flo.
>>>
>>> On 2012-05-03 20:53, Boris Epstein wrote:
>>>> Hello listmates,
>>>> How is this different from the path to the hard drive directory where the
>>>> data (files) are actually stored? I am sorry, this may be a dumb question,
>>>> but this is the first MooseFS installation I am going through.
>>>> Thanks.
>>>> Boris.
From: Florent B. <fl...@co...> - 2012-05-03 20:56:13
You may want to have a look at the moosefs.org website and read how MooseFS works ;-)

Metadata is stored on the master server. In my chunkserver's DATA_PATH I only
have a lock file and a statistics file, as I said in the previous mail.

On 2012-05-03 22:52, Boris Epstein wrote:
> Hello listmates,
> What is the data that the chunk server keeps track of? I know it puts the
> lock files in its "data" directory. Does it also record metadata there? If
> so, do I need to allocate space for the metadata?
> Thanks.
> Boris.
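By contrast, the master's DATA_PATH is where the filesystem metadata actually lives. On a 1.6.x master it typically looks something like the sketch below (file names are from memory and can vary slightly between versions):

```
# ls /var/lib/mfs          # on the mfsmaster
metadata.mfs.back          # last saved full metadata image
changelog.0.mfs            # recent metadata changelogs (0 = newest)
changelog.1.mfs
sessions.mfs               # client session data
stats.mfs                  # counters used by the CGI charts
```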
From: Scott D. <sd...@cl...> - 2012-05-03 20:53:26
Unfortunately your strace doesn't capture system calls from child processes.
What is the output of "strace -f mfsmaster start"?

On Thu, May 3, 2012 at 11:32 AM, Samuel Hassine, Olympe Network <sam...@ol...> wrote:
> Hi there,
>
> I am sorry to use this kind of mail subject, but it is really urgent. We
> are using MooseFS from 2 years without any problems, with a huge datastore.
>
> Today, we just take down mfsmaster as usual (mfsmaster stop). Metadatas
> are clean and not corrupted. But, for the first time, we have an error
> when starting mfsmaster:
>
> root@mfs:/var/lib/mfs# mfsmaster start
> working directory: /var/lib/mfs
> lockfile created and locked
> initializing mfsmaster modules ...
> loading sessions ... ok
> sessions file has been loaded
> exports file has been loaded
> mfstopology configuration file (/etc/mfstopology.cfg) not found - using defaults
> loading metadata ...
> loading objects (files,directories,etc.) ...
> loading node: read error: ENOENT (No such file or directory)
> init: file system manager failed !!!
> error occured during initialization - exiting
>
> Here the strace: http://pastebin.com/XaxhD9BL
>
> I am sure I correctly shutdown mfsmaster and I never see this type of
> issue (we did many restarts). I tried to do "mfsmetarestore -a" and it
> finished with all clean, but I still have this error.
>
> How can I start my mfsmaster again?
>
> Thanks for tour help.
>
> Best regards.
> Sam
From: Boris E. <bor...@gm...> - 2012-05-03 20:52:23
Hello listmates,

What is the data that the chunk server keeps track of? I know it puts the lock
files in its "data" directory. Does it also record metadata there? If so, do I
need to allocate the space for the metadata?

Thanks.

Boris.
From: Florent B. <fl...@co...> - 2012-05-03 20:51:24
Hi,

Chunk servers use DATA_PATH to put some files needed for running, such as lock
and statistics files.

It does not need a lot of space, and it should be on a different file system
than the chunk disks (security).

You can use the default value, which is /var/lib/mfs/

Flo.

On 2012-05-03 20:53, Boris Epstein wrote:
> Hello listmates,
> How is this different from the path to the hard drive directory where the
> data (files) are actually stored? I am sorry, this may be a dumb question
> but this is the first MooseFS installation I am going through.
> Thanks.
> Boris.
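Putting the two settings side by side may help; a hedged sketch of the relevant chunkserver files (values shown are the usual defaults or made-up examples):

```
# /etc/mfschunkserver.cfg (excerpt)
MASTER_HOST = mfsmaster              # hostname of the master
DATA_PATH = /var/lib/mfs             # small: lock file, statistics
HDD_CONF_FILENAME = /etc/mfshdd.cfg  # points at the list of chunk disks

# /etc/mfshdd.cfg -- the directories that actually hold chunk data
/mnt/chunks1
/mnt/chunks2
```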
From: Steve T. <sm...@cb...> - 2012-05-03 20:42:30
On Thu, 3 May 2012, wkmail wrote:

> That is supposed to be fixed in 1.6.25. You can now set the deletion
> limits in the chunkserver config file.

Cool; thanks.

> BTW, if you think big deletes on MooseFS are problematic (and they no
> longer are), you should try an Active/Active DRBD setup. That just
> grinds to a halt period.

Heh. We have Active/Passive DRBD here, and I am going to pass on the
Active/Active in favor of MFS :)

Steve
From: Boris E. <bor...@gm...> - 2012-05-03 20:35:21
Hello listmates,

How is this different from the path to the hard drive directory where the data
(files) are actually stored? I am sorry, this may be a dumb question but this
is the first MooseFS installation I am going through.

Thanks.

Boris.
From: wkmail <wk...@bn...> - 2012-05-03 20:32:51
That is supposed to be fixed in 1.6.25. You can now set the deletion limits in
the chunkserver config file.

In 1.6.20 you can only modify the mfsmaster source code. I wrote a patch for
that earlier in the year when I ran into that issue, but I think it's best
that you just upgrade to 1.6.25.

That being said, we still use a slow_del.pl program around here that does an
unlink once every second whenever we have to kill big folders. You just
background it, let it go for a few days, and everybody is much happier.

BTW, if you think big deletes on MooseFS are problematic (and they no longer
are), you should try an Active/Active DRBD setup. That just grinds to a halt,
period.

-bill

On 5/3/2012 1:19 PM, Steve Thompson wrote:
> MooseFS 1.6.20, CentOS 5.7.
>
> File deletion in MFS is very fast as we all know. A couple of days ago, we
> had a mass deletion of just over 11 million files occupying about 5TB. After
> the elapse of the trash time of 24 hours, MFS set about removing all of the
> deleted chunks. And with a fervour; it devoted itself entirely to this task,
> whereupon the speed of normal file system operations dropped to much less
> than 1% of its former performance; it became unusable until the last of the
> 11 million was gone, whereupon performance returned to normal. Are there any
> tunables or best practices (I know, don't delete 11 million files at once)
> that can influence this?
>
> Steve
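The slow_del.pl script mentioned above is not published in this thread; a rough shell equivalent of the idea (pace the deletions so the master never sees a huge burst of chunks to remove) might look like this - an assumption-laden sketch, not the original script:

```sh
#!/bin/sh
# Delete the contents of a large MooseFS directory one file per second.
find /mnt/mfs/big-folder -type f | while read -r f; do
    rm -f -- "$f"
    sleep 1
done
```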
From: Steve T. <sm...@cb...> - 2012-05-03 20:19:33
MooseFS 1.6.20, CentOS 5.7.

File deletion in MFS is very fast, as we all know. A couple of days ago, we
had a mass deletion of just over 11 million files occupying about 5TB. After
the elapse of the trash time of 24 hours, MFS set about removing all of the
deleted chunks. And with a fervour; it devoted itself entirely to this task,
whereupon the speed of normal file system operations dropped to much less than
1% of its former performance; it became unusable until the last of the 11
million files was gone, whereupon performance returned to normal.

Are there any tunables or best practices (I know, don't delete 11 million
files at once) that can influence this?

Steve
From: Samuel H. O. N. <sam...@ol...> - 2012-05-03 16:52:35
Hi there,

I am sorry to use this kind of mail subject, but it is really urgent. We
have been using MooseFS for 2 years without any problems, with a huge datastore.

Today, we took down mfsmaster as usual (mfsmaster stop). The metadata is
clean and not corrupted. But, for the first time, we get an error when
starting mfsmaster:

root@mfs:/var/lib/mfs# mfsmaster start
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology configuration file (/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
loading objects (files,directories,etc.) ...
loading node: read error: ENOENT (No such file or directory)
init: file system manager failed !!!
error occured during initialization - exiting

Here is the strace: http://pastebin.com/XaxhD9BL

I am sure I shut down mfsmaster correctly and I have never seen this type of
issue before (we have done many restarts). I tried "mfsmetarestore -a" and it
finished with everything clean, but I still have this error.

How can I start my mfsmaster again?

Thanks for your help.

Best regards.
Sam
From: Florent B. <fl...@co...> - 2012-05-03 15:14:24
Hi,

See http://www.moosefs.org/news-reader/items/rack-awareness.html

That is the only mechanism (together with goal) implemented in MooseFS that
can help you.

Of course, you can still use rsync to synchronize 2 folders (1 active
chunkserver to 1 inactive chunkserver)...

On 03/05/2012 14:05, Lorenzo J. Cubero wrote:
> Hi everybody,
>
> I would like to force half of the chunks to be replicas of the other half.
>
> Is there any way to define zones or groups of chunks?
>
> Thanks in advance.
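Rack awareness is driven by mfstopology.cfg on the master, which maps client and chunkserver IP ranges to numeric rack (switch) IDs so the master knows which machines are "close" to each other; per the blog post above, placing new chunks by rack ("level goal") was still planned at the time of this thread. A hedged sketch with made-up addresses:

```
# /etc/mfstopology.cfg on the mfsmaster
# <ip-or-network>    <rack id>
192.168.1.0/24       1    # machines in rack/zone 1
192.168.2.0/24       2    # machines in rack/zone 2
```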
From: Florent B. <fl...@co...> - 2012-05-03 15:08:05
Nice, thank you. That's what I understood from the mfstopology.cfg file!

(By the way, are the warnings about empty lines at master start fixed in 1.6.25?)

Another question: "same machine" => mfsclient & mfschunkserver have the same
IP? (Because we can only specify a rack_id in the mfstopology file.)

On 03/05/2012 16:54, Michał Borychowski wrote:
> Hi!
>
> We wrote a short blog entry about "rack awareness" functionality. It is
> available here:
> http://www.moosefs.org/news-reader/items/rack-awareness.html
>
> I hope it will answer most of your questions or doubts.
>
> Kind regards
> Michal
From: Michał B. <mic...@co...> - 2012-05-03 14:52:42
Hi!

We wrote a short blog entry about "rack awareness" functionality. It is
available here:
http://www.moosefs.org/news-reader/items/rack-awareness.html

I hope it will answer most of your questions or doubts.

Kind regards
Michal
From: Lorenzo J. C. <ljc...@ce...> - 2012-05-03 14:41:05
Hi everybody,

I would like to force half of the chunks to be replicas of the other half.

Is there any way to define zones or groups of chunks?

Thanks in advance.

--
.........................................................................
Lorenzo J. Cubero
CESCA - Centre de Serveis Científics i Acadèmics de Catalunya
Unitat d'Operacions i Seguretat
Gran Capità, 2-4 (Edifici Nexus) · 08034 Barcelona
T. 93 551 6214 · F. 93 205 6979 · ljc...@ce...
Facebook (http://on.fb.me/vPv3oN) · Twitter @CE5CA · LinkedIn
Subscribe to the newsletter (www.cesca.cat/butlleti)
.........................................................................
From: Travis H. <tra...@tr...> - 2012-05-03 13:52:44
On 12-05-03 8:49 AM, Boris Epstein wrote:
> Hello there,
>
> Various MooseFS manuals seem to emphasize the necessity of having
> MooseFS chunks in a file system of their own. Why is that? Is there
> any way to just specify the desired maximum space allocation?
>
> Thanks.
>
> Boris.

There is currently no way to tell MooseFS "please only use this % of the host
disk space".

The larger issue is that other system processes and activities (such as log
files, data files, user home folders, downloads, etc.) could quite possibly
grow to fill the disk space at a rate that is unanticipated by MooseFS, and
these kinds of external influences might affect MooseFS's capacity planning
for rebalancing chunks among the other chunkservers in a non-optimal way.

Travis
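A hedged sketch of the "file system of its own" setup Travis describes (the device name, mount point, and the mfs user/group are assumptions that depend on your distribution and packaging):

```
# give the chunkserver a dedicated filesystem, then register it in mfshdd.cfg
mkfs.xfs /dev/sdb1
mkdir -p /mnt/chunks1
mount /dev/sdb1 /mnt/chunks1          # plus a matching /etc/fstab entry
chown mfs:mfs /mnt/chunks1
echo "/mnt/chunks1" >> /etc/mfshdd.cfg
```

Because that filesystem holds nothing but chunks, the free-space figures the chunkserver reports to the master stay meaningful and cannot be eaten unexpectedly by logs or user data.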
From: Boris E. <bor...@gm...> - 2012-05-03 13:42:48
Hello there,

Various MooseFS manuals seem to emphasize the necessity of having MooseFS
chunks in a file system of their own. Why is that? Is there any way to just
specify the desired maximum space allocation?

Thanks.

Boris.