From: Laurent W. <lw...@hy...> - 2012-01-13 14:53:04
On Fri, 13 Jan 2012 10:47:21 +0200 ro...@mm... wrote:
> Why are you concerned so much about mfsmaster failing? How often does this
> happen?
>
> I am considering moosefs for a small LAN of 15 users, mainly for
> aggregating unused storage space from various machines. Googling suggested
> moosefs is rather robust, but this thread suggests otherwise.
> Have I misunderstood something?

I have had mfs running for something like a year and a half, and mfsmaster has never failed a single time. The only times the master was restarted were to switch to a newer version.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Davies L. <dav...@gm...> - 2012-01-13 11:02:39
2012/1/13 <ro...@mm...>:
> Sorry for intervening and excuse a moosefs newbie question.
> Why are you concerned so much about mfsmaster failing? How often does this
> happen?

The mfsmaster is very stable; it has never crashed in our deployments. But sometimes we need to restart the machine where mfsmaster is located, or restart mfsmaster itself to change its configuration.

> I am considering moosefs for a small LAN of 15 users, mainly for
> aggregating unused storage space from various machines. Googling suggested
> moosefs is rather robust, but this thread suggests otherwise.
> Have I misunderstood something?

Yes, it's robust and stable enough for offline storage.

--
- Davies
From: Sébastien M. <seb...@u-...> - 2012-01-13 09:25:04
Hello,

I have a big problem with snapshots. When I create a snapshot of a directory, I get the following errors on the MFS client:

mfsmount[1240]: master: tcp recv error: ETIMEDOUT (Operation timed out) (1)
mfsmount[1240]: master: register error (read header: ETIMEDOUT (Operation timed out))

These errors reset the connection with the mfs master and the mfs chunkservers, and connecting to the master and chunk servers is no longer possible. Running umount on the client resolves the problem. Without snapshots the MFS system works very well.

Thank you for your help
From: Michał B. <mic...@ge...> - 2012-01-13 09:20:01
Hi Ken!

It would be great if you could also provide the group with your scripts for auto recovery of the system.

Kind regards
Michał

-----Original Message-----
From: Ken [mailto:ken...@gm...]
Sent: Friday, January 13, 2012 8:59 AM
To: Davies Liu
Cc: moo...@li...
Subject: Re: [Moosefs-users] ChunkServer in different Level

We also use ucarp, but changed it a little from Thomas's version. The auto switching has never failed.

A friend of mine uses DRBD and LVS. It also works fine, but I think his deployment is much smaller than Douban's.

-Ken
From: <ro...@mm...> - 2012-01-13 08:47:37
Davies Liu wrote:
> It's the most interesting part, how do you fail over mfsmaster?
> I have tried the method with ucarp, provided by Thomas S Hatch,
> but it seemed not stable enough and failed sometimes.
>
> Before a stable solution comes up, we have decided to do manual fail-over
> by ops, and do not deploy it in a heavy online system.

Sorry for intervening, and excuse a moosefs newbie question.
Why are you concerned so much about mfsmaster failing? How often does this happen?

I am considering moosefs for a small LAN of 15 users, mainly for aggregating unused storage space from various machines. Googling suggested moosefs is rather robust, but this thread suggests otherwise. Have I misunderstood something?

-Stathis
From: Ken <ken...@gm...> - 2012-01-13 07:59:51
We also use ucarp, but changed it a little from Thomas's version. The auto switching has never failed.

A friend of mine uses DRBD and LVS. It also works fine, but I think his deployment is much smaller than Douban's.

-Ken

On Fri, Jan 13, 2012 at 3:42 PM, Davies Liu <dav...@gm...> wrote:
> It's the most interesting part, how do you fail over mfsmaster?
> I have tried the method with ucarp, provided by Thomas S Hatch,
> but it seemed not stable enough and failed sometimes.
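The ucarp approach discussed above amounts to floating a virtual IP in front of the active mfsmaster and moving it when the host dies. A minimal sketch of such an invocation follows; the interface, addresses, password, and script paths are assumptions for illustration, not the configuration used by the posters.

# Run on both candidate master hosts (hypothetical addresses/paths).
# The up/down scripts would add or remove the virtual IP 192.168.1.100
# and start or stop mfsmaster on failover.
ucarp --interface=eth0 --srcip=192.168.1.11 --vhid=1 --pass=secret \
      --addr=192.168.1.100 \
      --upscript=/usr/local/sbin/mfsmaster-vip-up.sh \
      --downscript=/usr/local/sbin/mfsmaster-vip-down.sh

As the thread notes, the fragile part is not ucarp itself but making sure the standby master has current metadata before it takes over.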
From: Davies L. <dav...@gm...> - 2012-01-13 07:42:47
On Fri, Jan 13, 2012 at 3:25 PM, Ken <ken...@gm...> wrote:
> I noticed the go-mfsclient in this mailing list before; we also wrote a
> moose client in C++. ;-)
> It works fine when mfsmaster fails over.

That's the most interesting part: how do you fail over mfsmaster?
I have tried the method with ucarp, provided by Thomas S Hatch, but it seemed not stable enough and failed sometimes.

Until a stable solution comes up, we have decided to do manual fail-over by ops, and not to deploy it in a heavy online system.

> We plan to build it as a preload dynamic library, and auto hook the file API.
> I think it is highly available enough.

--
- Davies
From: Ken <ken...@gm...> - 2012-01-13 07:25:39
I noticed the go-mfsclient in this mailing list before; we also wrote a moose client in C++. ;-)
It works fine when mfsmaster fails over.

We plan to build it as a preload dynamic library and automatically hook the file API. I think that is highly available enough.

Thanks

Regards
-Ken

On Fri, Jan 13, 2012 at 2:52 PM, Davies Liu <dav...@gm...> wrote:
> So we serve some video files (>10M) from MFS this way:
> Nginx -> nginx + FUSE -> MFS
> or
> Nginx -> go-mfsclient [1] -> MFS
>
> If there is something wrong with MFS, it will not block the first Nginx,
> and the whole site will not be affected.
>
> [1] github.com/davies/go-mfsclient
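The "preload dynamic library that hooks the file API" idea above is the standard LD_PRELOAD interposition technique. The sketch below shows the general mechanism for hooking open(); it is illustrative only and is not Ken's actual C++ client. The "/mfs/" path prefix is a made-up placeholder.

/* preload_hook.c - minimal LD_PRELOAD interposer sketch (illustrative only).
 * Build: gcc -shared -fPIC -o preload_hook.so preload_hook.c -ldl
 * Use:   LD_PRELOAD=./preload_hook.so some_program
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

typedef int (*open_fn)(const char *, int, ...);

int open(const char *path, int flags, ...)
{
    static open_fn real_open = NULL;
    if (!real_open)
        real_open = (open_fn)dlsym(RTLD_NEXT, "open");

    mode_t mode = 0;
    if (flags & O_CREAT) {            /* the mode argument is only present with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    /* A real client library would redirect paths under its mount prefix
     * (hypothetical "/mfs/" here) to its own network client instead. */
    if (strncmp(path, "/mfs/", 5) == 0)
        fprintf(stderr, "intercepted open(%s)\n", path);

    return real_open(path, flags, mode);
}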
From: Davies L. <dav...@gm...> - 2012-01-13 06:52:41
On Fri, Jan 13, 2012 at 2:40 PM, Ken <ken...@gm...> wrote:
>>> It's not a good idea to use moosefs as storage for a huge amount of small files
> I agree. We combine some small files into one big file, and read the
> small files with offset/length information.

It is not safe to write to the same file concurrently.
We use this method to back up the original files users uploaded, with tar, offline.
Sometimes a file will be broken.

MFS is not good enough for an online system: it is not highly available, and some IO operations will block when there is an error in mfsmaster or mfschunkserver.

So we serve some video files (>10M) from MFS this way:

Nginx -> nginx + FUSE -> MFS
or
Nginx -> go-mfsclient [1] -> MFS

If there is something wrong with MFS, it will not block the first Nginx, and the whole site will not be affected.

Davies

[1] github.com/davies/go-mfsclient

--
- Davies
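The tiering Davies describes puts a plain Nginx in front of a separate backend tier that actually touches MFS, so a stuck MFS mount cannot hang the public tier. A rough sketch of the front-tier configuration is below; the addresses, ports, and path are assumptions for illustration, not a configuration taken from this thread.

# Hypothetical front-tier nginx: proxy /video/ to backends that read from MFS
# (via a FUSE mount or go-mfsclient). Short timeouts keep a stuck MFS from
# blocking this tier.
upstream mfs_backend {
    server 10.0.0.21:8080;
    server 10.0.0.22:8080;
}

server {
    listen 80;

    location /video/ {
        proxy_pass http://mfs_backend;
        proxy_connect_timeout 2s;
        proxy_read_timeout    10s;
        proxy_next_upstream   error timeout;
    }
}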
From: Ken <ken...@gm...> - 2012-01-13 06:40:31
>> It's not a good idea to use moosefs as storage for a huge amount of small files

I agree. We combine some small files into one big file, and read the small files back using offset/length information.

Thanks.

Regards
-Ken

On Fri, Jan 13, 2012 at 2:32 PM, Davies Liu <dav...@gm...> wrote:
> It's not a good idea to use moosefs as storage for a huge amount of small files,
> because the mfsmaster will be the bottleneck when you have more than 100M
> files. At that point the total size of the files may be about 1T (10k per file),
> which can be stored on one local disk.
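The pack-small-files-into-one-big-file scheme Ken mentions boils down to appending each blob to a pack file and remembering its (offset, length) in an index. A minimal sketch of the append side is shown below, assuming a single writer (as the next message notes, concurrent writes to the same MFS file are unsafe); the function name and layout are hypothetical.

/* pack_append.c - sketch of packing small files into one large file
 * (hypothetical layout; assumes a single writer). */
#include <stdio.h>

struct pack_entry {
    long   offset;   /* where the blob starts in the pack file */
    size_t length;   /* blob size in bytes */
};

/* Append `len` bytes to the pack file and report offset/length for the index. */
int pack_append(FILE *pack, const void *data, size_t len, struct pack_entry *out)
{
    if (fseek(pack, 0, SEEK_END) != 0)
        return -1;
    out->offset = ftell(pack);
    out->length = len;
    if (fwrite(data, 1, len, pack) != len)
        return -1;
    return fflush(pack);
}

/* Reading a small file back is then fseek(pack, entry.offset, SEEK_SET)
 * followed by fread() of entry.length bytes. */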
From: Davies L. <dav...@gm...> - 2012-01-13 06:33:05
On Thu, Jan 12, 2012 at 5:28 PM, Ken <ken...@gm...> wrote:
> hi, moosefs
>
> We plan to use moosefs as storage for a huge amount of photos uploaded by users.

It's not a good idea to use moosefs as storage for a huge amount of small files, because the mfsmaster will be the bottleneck once you have more than 100M files. At that point the total size of the files may be about 1T (10k per file), which can be stored on one local disk.

A huge number of small files needs other solutions, such as TFS [1] from taobao.com, or beansdb [2] from douban.com.

[1] http://code.taobao.org/p/tfs/src/
[2] http://code.google.com/p/beansdb/

--
- Davies
From: Ken <ken...@gm...> - 2012-01-13 06:12:36
Code: https://github.com/pedia/moosefs

A normal build is the same as 1.6.20. To use the feature, please build with the cflag -DLEVELGOAL.

Regards
-Ken
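Assuming the fork keeps the stock 1.6.20 autotools build (as the message says the normal build is the same), enabling the flag would look roughly like the following; the exact configure options are an assumption, not taken from the repository.

git clone https://github.com/pedia/moosefs
cd moosefs
./configure CFLAGS="-O2 -DLEVELGOAL"   # assumed: standard autotools build plus the extra define
make && sudo make install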
From: Max <ma...@ag...> - 2012-01-12 18:54:18
Thank you guys!

It works perfectly now! This system is absolutely great!

I have another question: is it possible to set a default goal for the whole server (actual files plus any future files)?

Thank you!

On Tue, 2012-01-10 at 15:27 -0700, Benjamin Allen wrote:
> In your mfsmaster.cfg set DATA_PATH to somewhere that is writable by WORKING_USER.
> Otherwise create and chown the directory it is attempting to use (typically /var/mfs).
From: Allen, B. S <bs...@la...> - 2012-01-12 18:37:29
Max,

Just set the root of the MFS from a client to whichever goal you want using the "mfssetgoal" command. Any newly created directories or files will inherit from their parent.

To illustrate the behavior:

$ mkdir test
$ mfssetgoal 1 test
test: 1
$ cd test
$ touch test_file1
$ mfsgetgoal test_file1
test_file1: 1
$ cd ..
$ mfssetgoal 3 test
test: 3
$ cd test
$ mfsgetgoal test_file1
test_file1: 1
$ touch test_file3
$ mfsgetgoal test_file3
test_file3: 3

Ben

On Jan 12, 2012, at 11:27 AM, Max wrote:
> I have another question: is it possible to set a default goal for the
> whole server (actual files plus any future files)?
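Max's question also covers files that already exist; mfssetgoal has a recursive mode for that. A hedged example, assuming the client mount point is /mnt/mfs:

# assumed mount point /mnt/mfs
mfssetgoal 2 /mnt/mfs        # new files/directories created under the root inherit goal 2
mfssetgoal -r 2 /mnt/mfs     # also change the goal of everything that already exists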
From: Michał B. <mic...@ge...> - 2012-01-12 10:05:12
Hi Davies!

Sorry for the late reply, but only now have we had time to investigate your patch. The change will be implemented in one of the upcoming releases. We didn't use your patch in the exact shape you sent it, but we were highly inspired by it :)

So again - thank you very much for your commitment!

Kind regards
Michał

-----Original Message-----
From: Davies Liu [mailto:dav...@gm...]
Sent: Wednesday, November 09, 2011 6:32 AM
To: Nolan Eakins
Cc: moo...@li...
Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk

new patch, with bug fixed

On Tue, Nov 8, 2011 at 6:14 PM, Davies Liu <dav...@gm...> wrote:
> I have made a patch to solve this, attached.
>
> On Tue, Nov 8, 2011 at 5:42 PM, Davies Liu <dav...@gm...> wrote:
>> In the mfsmaster, when dumping data to disk, it renames
>> metadata.mfs.back to metadata.mfs.back.tmp first, then writes to
>> metadata.mfs.back, and finally it unlinks metadata.mfs.back.tmp and metadata.mfs.
>>
>> If writing to metadata.mfs.back fails, we get a partial or damaged copy of the
>> metadata, and the previous metadata is lost along with it.
>>
>> Why not write to metadata.mfs.back.tmp first and, only if that succeeds,
>> rename it to metadata.mfs.back and delete metadata.mfs? I think this way is much safer.
>>
>> In the metalogger, the downloaded metadata is not checked; if it
>> downloads a damaged copy, then all the metadata will be lost. Letting the
>> metalogger keep several recent metadata copies may be safer.
>>
>> I will try to fix the above in my deployments; losing metadata is TOO
>> UNACCEPTABLE.
>>
>> On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins <no...@ea...> wrote:
>>> I have a little server set up to give MooseFS a try. This server ran
>>> out of disk space on its root partition a day or two ago. This
>>> partition is where mfsmaster stored all of its data. Needless to
>>> say, mfsmaster did something dumb and zeroed out all the metadata
>>> files under /var/lib/mfs. It even looks like the metadata logger on
>>> another machine was affected, since it didn't store anything recoverable.
>>>
>>> So my entire moosefs file tree is completely hosed now, all because
>>> mfsmaster can't handle a full disk.
>>>
>>> Other than this, I've been happy with MooseFS.
>>>
>>> Regards,
>>> Nolan
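The safer dump order Davies proposes is the classic write-to-temp-then-rename pattern: the new file replaces the old one only after it has been written and flushed completely. A minimal sketch in C, with the file names from the thread used purely for illustration (this is not the actual mfsmaster code or Davies' patch):

/* atomic_dump.c - sketch of the write-temp-then-rename pattern discussed above. */
#include <stdio.h>
#include <unistd.h>

int dump_metadata_atomically(const void *buf, size_t len)
{
    const char *tmp_name   = "metadata.mfs.back.tmp";
    const char *final_name = "metadata.mfs.back";

    FILE *f = fopen(tmp_name, "wb");
    if (f == NULL)
        return -1;

    if (fwrite(buf, 1, len, f) != len || fflush(f) != 0 || fsync(fileno(f)) != 0) {
        fclose(f);
        unlink(tmp_name);   /* the previous good copy stays untouched */
        return -1;
    }
    fclose(f);

    /* rename() is atomic on POSIX filesystems: either the old complete file
     * or the new complete file exists, never a partially written one. */
    return rename(tmp_name, final_name);
}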
From: Michał B. <mic...@ge...> - 2012-01-12 09:57:38
Hi Jakub!

Sparse files work a little bit differently on MooseFS and are not properly reported by the 'du' command. You don't need to worry about this - it is how it works here.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Jakub Mroziński [mailto:ja...@31...]
Sent: Friday, January 06, 2012 1:10 PM
To: moo...@li...
Subject: [Moosefs-users] Creating sparse file

Hello,

I'm trying to create sparse files on mfs but it isn't working.

On MFS:
local:/mfs# dd if=/dev/zero of=jakub.test bs=4096 count=0 seek=1000
0 bytes (0 B) copied, 0.000205242 s, 0.0 kB/s
local:/mfs# du -h jakub.test
4.0M    jakub.test

On a local disk:
local:~# dd if=/dev/zero of=jakub.test bs=4096 count=0 seek=1000
0 bytes (0 B) copied, 1.0481e-05 s, 0.0 kB/s
local:~# du -h jakub.test
0       jakub.test

As you can see, on my local hdd everything is fine: the file size is 0. On MFS the file size is 4MB.

What about cp?

local:~# cp --sparse=always jakub.test /mfs/
local:/mfs# du -h /mfs/jakub.test
4.0M    jakub.test
local:/mfs# cp --sparse=always jakub.test /root/jakub.test2
local:~# du -h /root/jakub.test2
0       jakub.test2

After cp from MFS to the local hdd the sparse file has the right size.

What am I doing wrong? Why are all sparse file sizes on MFS greater than 0?

--
Jakub Mroziński
From: Michał B. <mic...@ge...> - 2012-01-12 09:50:51
Hi Max!

The user running the mfsmaster process needs write privileges to its working folder. What do you have in your mfsmaster.cfg for the DATA_PATH and WORKING_USER options? If your user is 'mfs' you should have 'WORKING_USER = mfs'. By default DATA_PATH points to '/usr/local/var/mfs' - you need to check whether the 'mfs' user has write privileges to this folder (ideally the 'mfs' user would be the owner of this folder).

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Max [mailto:ma...@ag...]
Sent: Tuesday, January 10, 2012 8:01 PM
To: moo...@li...
Subject: [Moosefs-users] Trying to install MooseFS but EACCES error

> Every step worked fine until I try to run
> #/usr/sbin/mfsmaster start
> for the first time. I get this error:
> "can't create lockfile in working directory: EACCES (Permission denied)"
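Concretely, the fix Michał describes usually comes down to something like the following; the data path depends on how MooseFS was configured at build time, so treat the paths here as assumptions.

# mfsmaster.cfg (relevant lines)
WORKING_USER = mfs
WORKING_GROUP = mfs
DATA_PATH = /usr/local/var/mfs

# make sure the data path exists and is owned by the working user
mkdir -p /usr/local/var/mfs
chown -R mfs:mfs /usr/local/var/mfs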
From: Ken <ken...@gm...> - 2012-01-12 09:29:20
hi, moosefs

We plan to use moosefs as storage for a huge amount of photos uploaded by users.
Because read operations on new files are far more frequent than on old files, maybe writing new files to SSD is a choice.
For strict safety reasons, we must back the content up to another data center.
And more features for maintenance purposes are required.

I don't think moosefs can work well in these situations. We started implementing these features several weeks ago. By now, it's almost done.

Is there anyone interested in this?

More detail:
# Add access_mode (none, read, write capability) to struct matocserventry (matocserv.c). This value can be changed from outside (maybe from the python cgi).
# Add a 'LEVEL' option to mfschunkserver.cfg; if absent, LEVEL=0 as normal. The ChunkServer reports it to the Master when needed.
# Add uint32_t levelgoal to struct fsnode (filesystem.c).
# Add uint32_t levelgoal to struct chunk (chunk.c).
  As seen, uint32_t levelgoal = uint8_t levelgoal[4], which implies LEVEL should be 1, 2, 3 or 4.
  [2,1,0,0] means: store 2 copies on level=1 ChunkServers and 1 copy on level=2 ChunkServers.
# In chunk_do_jobs (chunk.c), send the replicate command to the ChunkServer. This policy may become very complicated in the future.
# Also, we add read/write levelgoal support to mfstools.

We plan to put these trivial changes on github or somewhere else.

It's a very early prototype. We appreciate any advice from the development team and other users.

Regards
-Ken
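To make the packing concrete: the proposal stores one copy count per chunkserver level in a single 32-bit field, one byte per level. The small sketch below shows one way [2,1,0,0] could be encoded and decoded; it is illustrative only, not Ken's actual patch, and the helper names are hypothetical.

/* levelgoal_demo.c - illustrative only, not the actual patch.
 * One byte per chunkserver level: byte 0 = level 1, byte 1 = level 2, ... */
#include <stdint.h>
#include <stdio.h>

static uint32_t levelgoal_pack(const uint8_t per_level[4])
{
    return (uint32_t)per_level[0]
         | ((uint32_t)per_level[1] << 8)
         | ((uint32_t)per_level[2] << 16)
         | ((uint32_t)per_level[3] << 24);
}

static uint8_t levelgoal_copies(uint32_t levelgoal, int level /* 1..4 */)
{
    return (uint8_t)(levelgoal >> (8 * (level - 1)));
}

int main(void)
{
    const uint8_t goal[4] = {2, 1, 0, 0};   /* 2 copies on level-1, 1 copy on level-2 */
    uint32_t packed = levelgoal_pack(goal);

    for (int level = 1; level <= 4; level++)
        printf("level %d -> %u copies\n", level, levelgoal_copies(packed, level));
    return 0;
}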
From: Michał B. <mic...@ge...> - 2012-01-11 12:15:33
Hi!

We recently found that the FUSE for FreeBSD project has been resumed by Gleb Kurtsou at https://github.com/glk/fuse-freebsd - FreeBSD users should give it a try! If the kernel panic problems still exist, we believe that with a new developer on the project it should now be possible to eliminate them quickly.

Kind regards
Michal
From: Benjamin A. <bs...@la...> - 2012-01-10 22:27:39
Max,

In your mfsmaster.cfg, set DATA_PATH to somewhere that is writable by WORKING_USER, or create and chown the directory it is attempting to use (typically /var/mfs). mfschunkserver and mfsmetalogger have the same configuration variable and the same requirement of a writable data directory.

Ben

On Jan 10, 2012, at 12:00 PM, Max wrote:

> Hello!
>
> I am trying to install MooseFS with your step-by-step guide at
> http://www.moosefs.org/tl_files/manpageszip/moosefs-step-by-step-tutorial-v.1.1.pdf
>
> Every step worked fine until I try to run
> #/usr/sbin/mfsmaster start
> for the first time. I get this error:
> "can't create lockfile in working directory: EACCES (Permission denied)"
>
> User mfs is part of the mfs group. I tried as root, as mfs and as myself.
>
> Machine is:
> Distributor ID: Ubuntu
> Description:    Ubuntu 8.04.4 LTS
> Release:        8.04
> Codename:       hardy
>
> Chunkservers are going to be Fedora 15 and ext3.
>
> Any idea? Could it be a missing directory or something like that?
>
> Thank you!
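A quick, hypothetical way to check the requirement Ben describes - that each daemon's data directory is writable by the service user. The directory list and the user name are assumptions; substitute whatever your configs actually set:

    for d in /var/mfs /usr/local/var/mfs; do
        if [ -d "$d" ] && sudo -u mfs test -w "$d"; then
            echo "$d: writable by mfs"
        else
            echo "$d: missing or not writable by mfs"
        fi
    done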
From: Max <ma...@ag...> - 2012-01-10 19:27:53
Hello!

I am trying to install MooseFS with your step-by-step guide at
http://www.moosefs.org/tl_files/manpageszip/moosefs-step-by-step-tutorial-v.1.1.pdf

Every step worked fine until I try to run
#/usr/sbin/mfsmaster start
for the first time. I get this error:
"can't create lockfile in working directory: EACCES (Permission denied)"

User mfs is part of the mfs group. I tried as root, as mfs and as myself.

Machine is:
Distributor ID: Ubuntu
Description:    Ubuntu 8.04.4 LTS
Release:        8.04
Codename:       hardy

Chunkservers are going to be Fedora 15 and ext3.

Any idea? Could it be a missing directory or something like that?

Thank you!
From: Giovanni T. <me...@gi...> - 2012-01-06 14:15:49
2012/1/6 JV <ja...@da...>:
> We actually use NFS root, allowing us to have a single read-only base FS
> for every chunkserver. We can upgrade, if needed, by chrooting into the
> base on the master and then restarting all chunkservers. This setup also
> allows using a different mfshdd.cfg for each chunkserver, so we can
> remove/add disks if they develop hardware errors.

NFS is a good choice, but it may be a single point of failure if the server is not redundant. With PXE, each node is independent after the first boot, since its entire rootfs is kept compressed in RAM (or can be written to disk after download, thanks to live-boot script support). In addition, the rootfs auto-enables aufs support, so it is possible to change files manually or via a custom post-boot script; the changes are either lost on reboot or can be saved to a custom partition/USB/flash device.

--
Giovanni Toraldo
http://gionn.net/
From: JV <ja...@da...> - 2012-01-06 14:04:20
We actually use NFS root, allowing us to have a single read-only base FS for every chunkserver. We can upgrade, if needed, by chrooting into the base on the master and then restarting all chunkservers. This setup also allows using a different mfshdd.cfg for each chunkserver, so we can remove/add disks if they develop hardware errors.

On 06.01.2012 15:46, Giovanni Toraldo wrote:
> Hi,
>
> 2012/1/6 <jan...@da...>:
>> On a related note - we were using USB sticks to boot chunkservers, but
>> now we are using network boot for that. On the chunkservers themselves
>> there is only data. This allows us to add new servers very quickly -
>> just add a new node in the configs and plug the server in.
>> In our experience network boot is faster and safer than USB sticks
>> (and also cheaper).
>
> PXE netboot is a great choice when managing a bunch of identical
> machines, especially MooseFS chunkservers that don't need any particular
> variation in their config files, and you can use the entire disks for
> the volume!
>
> I had a good experience with Debian Live
> http://live.debian.net/devel/live-boot/ (unfortunately this isn't in
> production, but not for technical reasons): every machine bootstraps via
> PXE, downloads the generic kernel and initrd with live-boot scripts from
> a TFTP server, then downloads a squashed rootfs via HTTP and mounts it.
> The last init.d script mounts the available disks at known locations and
> mfs-chunkserver is started.
>
> When a software upgrade is required, we simply generate a new squashed
> rootfs with the updated software, test it in a virtual machine and, once
> we are sure it works, reboot every node in sequence.
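For context, mfshdd.cfg is essentially a list of directories the chunkserver stores chunks in, which is why keeping a per-host copy makes pulling a bad disk easy. A minimal sketch with illustrative mount points (the file's location depends on your packaging):

    # mfshdd.cfg - one chunk-storage directory per line
    /mnt/hd1
    /mnt/hd2
    # to retire a failing disk, drop its line here and restart/reload the chunkserver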
From: Giovanni T. <me...@gi...> - 2012-01-06 13:55:11
Hi,

2012/1/6 <jan...@da...>:
> On a related note - we were using USB sticks to boot chunkservers, but
> now we are using network boot for that. On the chunkservers themselves
> there is only data. This allows us to add new servers very quickly -
> just add a new node in the configs and plug the server in.
> In our experience network boot is faster and safer than USB sticks
> (and also cheaper).

PXE netboot is a great choice when managing a bunch of identical machines, especially MooseFS chunkservers that don't need any particular variation in their config files, and you can use the entire disks for the volume!

I had a good experience with Debian Live http://live.debian.net/devel/live-boot/ (unfortunately this isn't in production, but not for technical reasons): every machine bootstraps via PXE, downloads the generic kernel and initrd with live-boot scripts from a TFTP server, then downloads a squashed rootfs via HTTP and mounts it. The last init.d script mounts the available disks at known locations and mfs-chunkserver is started.

When a software upgrade is required, we simply generate a new squashed rootfs with the updated software, test it in a virtual machine and, once we are sure it works, reboot every node in sequence.

--
Giovanni Toraldo
http://gionn.net/
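A rough sketch of the PXE side of such a setup, assuming PXELINUX plus live-boot's HTTP fetch option; the host name, file names and paths are purely illustrative:

    # pxelinux.cfg/default (fragment)
    DEFAULT chunkserver
    LABEL chunkserver
      KERNEL vmlinuz
      APPEND initrd=initrd.img boot=live fetch=http://bootserver/images/chunkserver.squashfs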
From: <jan...@da...> - 2012-01-06 12:36:28
On a related note - we were using USB sticks to boot chunkservers, but now we are using network boot for that. On the chunkservers themselves there is only data. This allows us to add new servers very quickly - just add a new node in the configs and plug the server in.

In our experience network boot is faster and safer than USB sticks (and also cheaper).

On 06.01.2012 13:36, Ólafur Ósvaldsson wrote:
> Hi,
> We run a customized version of Ubuntu and, to be honest, I've not set it
> up in a distributable form.
>
> I'll check if I can gather it up next week and send the info to the list.
>
> /Oli
>
> On 6.1.2012, at 11:20, Steve wrote:
>
>> That's really cool, are you able to share the scripts, the image and/or
>> how the USB image was made?
>>
>> Steve
>>
>> -------Original Message-------
>> From: Ólafur Ósvaldsson
>> Date: 06/01/2012 10:20:28
>> To: moo...@li...
>> Subject: Re: [Moosefs-users] moosefs distro
>>
>> Hi,
>> We run the chunkservers completely from memory; they boot from USB and
>> only use a small partition of the USB drive for the graph data, so that
>> the history isn't lost between reboots.
>>
>> Every chunkserver has 6x1TB disks and 3GB of RAM, and there are startup
>> scripts that initialize new disks on boot if required, so a new server
>> can just be put into the rack with a USB stick plugged in and it will
>> clear all the disks and set them up for MFS if they are not like that
>> already.
>>
>> /Oli
>>
>> On 5.1.2012, at 16:11, Travis Hein wrote:
>>
>>> The chunk server daemons have a very low footprint in terms of system
>>> resource requirements - enough so that they are suitable to coexist
>>> with other system services. If you have a cluster of physical machines,
>>> each with a local disk, you can just make every compute node also a
>>> chunk server for aggregated file system capacity.
>>>
>>> Most of the time lately, though, we put everything in virtual machines
>>> on virtual machine hosting platforms. There I feel it isn't as
>>> efficient to have the chunk servers spread out everywhere: all VMs are
>>> backed by the same SAN anyway, so the performance benefit of
>>> spread-out disks goes away.
>>>
>>> So lately I create a virtual machine just for running the chunk server
>>> process. We have a "standard" of using CentOS for our VMs, which is
>>> arguably kind of wasteful for just a chunk server process, but it is
>>> pretty much set-and-forget and appliance-ized.
>>>
>>> I have often thought about creating a stand-alone MooseFS appliance: an
>>> embedded nano-ITX board in a 1U rackmount chassis, solid state boot, a
>>> minimal Linux distribution, and large SATA drives. Both low power and
>>> efficient, outside our virtualized platform - and probably cheaper to
>>> grow capacity than buying more iSCSI RAID SAN products :P But this is
>>> still in my to-do-some-day pile.
>>>
>>> On 12-01-04 10:28 AM, Steve wrote:
>>>> Do people use moose boxes for other roles?
>>>>
>>>> Some time ago I made a MooseFS Linux ISO CD (not a respin), however
>>>> the installer wasn't "insert CD and job done". Taking it any further
>>>> was beyond my capabilities.
>>>>
>>>> Is such a thing needed or desired? Any collaborators?
>>>>
>>>> Steve
>
> --
> Ólafur Osvaldsson
> System Administrator
> Nethonnun ehf.
> e-mail: osv...@ne...
> phone: +354 517 3400
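Purely as a sketch of the kind of boot-time disk initialization Ólafur mentions - nothing here is taken from his actual scripts, and the device labels, mount points and config path are all assumptions:

    #!/bin/sh
    # Mount every partition labelled mfsdata*, record it in mfshdd.cfg,
    # then start the chunkserver. Adjust paths to your packaging.
    CFG=/etc/mfs/mfshdd.cfg
    : > "$CFG"
    for dev in /dev/disk/by-label/mfsdata*; do
        [ -e "$dev" ] || continue
        mnt="/mnt/$(basename "$dev")"
        mkdir -p "$mnt"
        mount "$dev" "$mnt" && echo "$mnt" >> "$CFG"
    done
    mfschunkserver start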