From: Ken <ken...@gm...> - 2012-01-13 07:25:39
I noticed go-mfsclient on this mailing list before; we have also written a MooseFS client in C++. ;-) It works fine when the mfsmaster fails over. We plan to build it as a preloaded dynamic library that hooks the file API automatically (a minimal sketch of such a hook is appended at the end of this message, together with sketches of the packed small-file read and the levelgoal/access_mode ideas from the thread below). I think that provides enough high availability.

Thanks

Regards
-Ken

On Fri, Jan 13, 2012 at 2:52 PM, Davies Liu <dav...@gm...> wrote:
> On Fri, Jan 13, 2012 at 2:40 PM, Ken <ken...@gm...> wrote:
>>>> It's not a good idea to use MooseFS as storage for a huge amount of small files
>> I agree. We combine some small files into one big file, and read the
>> small files with offset/length information.
>
> It's not safe to write to the same file concurrently.
> We use this method to back up the original files users uploaded, with
> tar, offline.
> Sometimes some of the files end up broken.
>
> MFS is not good enough for an online system: it is not highly available, and
> some IO operations will block when there is an error in the mfsmaster or a
> mfschunkserver.
>
> So we serve some video files (>10 MB) from MFS this way:
> Nginx -> nginx + FUSE -> MFS
> or
> Nginx -> go-mfsclient [1] -> MFS
>
> If something goes wrong with MFS, it will not block the first Nginx, and the
> whole site will not be affected.
>
> Davies
>
> [1] github.com/davies/go-mfsclient
>
>> Thanks.
>>
>> Regards
>> -Ken
>>
>> On Fri, Jan 13, 2012 at 2:32 PM, Davies Liu <dav...@gm...> wrote:
>>> On Thu, Jan 12, 2012 at 5:28 PM, Ken <ken...@gm...> wrote:
>>>> Hi, moosefs
>>>>
>>>> We plan to use MooseFS as storage for a huge number of photos uploaded by users.
>>>
>>> It's not a good idea to use MooseFS as storage for a huge amount of small
>>> files, because the mfsmaster will be the bottleneck once you have more than
>>> 100M files. At that point the whole data set may only be about 1 TB (at
>>> 10 KB per file), which can be stored on a single local disk.
>>>
>>> Huge numbers of small files need other solutions, such as TFS [1] from
>>> taobao.com or beansdb [2] from douban.com.
>>>
>>> [1] http://code.taobao.org/p/tfs/src/
>>> [2] http://code.google.com/p/beansdb/
>>>
>>>> Because new files are read much more often than old files, writing new
>>>> files to SSD may be a good choice.
>>>> For safety, we must also back up the content to another data center.
>>>> And more features for maintenance purposes are required.
>>>>
>>>> I don't think MooseFS works well in these situations, so we started
>>>> implementing these features several weeks ago. By now it is almost done.
>>>>
>>>> Is there anyone interested in this?
>>>>
>>>> More detail:
>>>> # Add access_mode (none, read, or write capability) to struct
>>>> matocserventry (matocserv.c). This value can be changed from the
>>>> outside (maybe from the Python CGI).
>>>> # Add a 'LEVEL' option to mfschunkserver.cfg; if it is absent, LEVEL=0
>>>> behaves as before. The ChunkServer reports it to the Master when needed.
>>>> # Add uint32_t levelgoal to struct fsnode (filesystem.c).
>>>> # Add uint32_t levelgoal to struct chunk (chunk.c).
>>>> As you can see, uint32_t levelgoal is treated as uint8_t levelgoal[4],
>>>> which implies LEVEL should be 1, 2, 3 or 4.
>>>> [2,1,0,0] means: store 2 copies on level=1 ChunkServers and 1 copy on a
>>>> level=2 ChunkServer.
>>>> # In chunk_do_jobs (chunk.c), send replicate commands to the ChunkServers.
>>>> This policy may become much more complicated in the future.
>>>> # Also, we add read/write levelgoal support to mfstools.
>>>>
>>>> We plan to put these small changes on GitHub or somewhere else.
>>>>
>>>> It's a very early prototype. We would appreciate any advice from the
>>>> development team and other users.
>>>>
>>>> Regards
>>>> -Ken
>>>
>>> --
>>> - Davies
>
> --
> - Davies
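
P.S. To make the preload/hook idea above concrete, here is a minimal sketch of
how a preloaded shared object can intercept open() on Linux. This is not our
actual client, just the general mechanism; the fprintf() stands in for the
point where a real client would take over paths under the MFS mount and apply
its own failover logic.

#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>

/* Build: gcc -shared -fPIC -o hook.so hook.c -ldl
   Use:   LD_PRELOAD=./hook.so some_program */

typedef int (*open_fn)(const char *, int, ...);

int open(const char *path, int flags, ...)
{
    mode_t mode = 0;
    if (flags & O_CREAT) {          /* the mode argument only exists with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);  /* on glibc, mode_t is unsigned int */
        va_end(ap);
    }

    /* A real client would branch here: paths under the MFS mount point
       would be handled by the client library, everything else falls
       through to the real libc open(). */
    fprintf(stderr, "open(%s) intercepted\n", path);

    open_fn real_open = (open_fn)dlsym(RTLD_NEXT, "open");
    return real_open(path, flags, mode);
}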
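
For the "combine small files into one big file" approach mentioned in the
quoted mail, here is a minimal sketch of the read path: each small file is
located by an (offset, length) pair kept in some external index (the
packed_entry record below is invented for illustration), and a read is a
single pread() on the big file. As Davies points out, the append side must
have a single writer per big file.

#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

struct packed_entry {      /* hypothetical index record */
    off_t  offset;         /* where the small file starts in the big file */
    size_t length;         /* how many bytes it occupies */
};

/* Read one packed small file; the caller frees the returned buffer. */
char *read_packed(const char *bigfile, struct packed_entry e)
{
    int fd = open(bigfile, O_RDONLY);
    if (fd < 0)
        return NULL;

    char *buf = malloc(e.length);
    ssize_t n = buf ? pread(fd, buf, e.length, e.offset) : -1;
    close(fd);

    if (n != (ssize_t)e.length) {   /* short read or error */
        free(buf);
        return NULL;
    }
    return buf;
}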
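
The levelgoal encoding from the quoted proposal, sketched as code: one
uint32_t viewed as uint8_t[4], where byte i holds the number of copies wanted
on ChunkServers with LEVEL == i+1, so [2,1,0,0] packs into 0x00000102. The
byte order (low byte = level 1) and the helper names are choices made for
this example only, not necessarily what the patch does.

#include <stdint.h>
#include <stdio.h>

#define MAX_LEVELS 4

/* copies wanted on ChunkServers of the given level (1..4) */
static uint8_t levelgoal_get(uint32_t lg, int level)
{
    return (uint8_t)((lg >> (8 * (level - 1))) & 0xFF);
}

/* return lg with the goal for `level` replaced by `copies` */
static uint32_t levelgoal_set(uint32_t lg, int level, uint8_t copies)
{
    int shift = 8 * (level - 1);
    return (lg & ~((uint32_t)0xFF << shift)) | ((uint32_t)copies << shift);
}

int main(void)
{
    uint32_t lg = 0;
    lg = levelgoal_set(lg, 1, 2);   /* 2 copies on level-1 ChunkServers */
    lg = levelgoal_set(lg, 2, 1);   /* 1 copy on a level-2 ChunkServer  */

    for (int level = 1; level <= MAX_LEVELS; level++)
        printf("level %d: goal %u\n", level, levelgoal_get(lg, level));
    return 0;
}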
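
Finally, a small sketch of how the access_mode and LEVEL fields could be used
when chunk_do_jobs() looks for a replication target. The struct and function
names are illustrative only; they are not the actual matocserv.c / chunk.c
definitions, just the intended policy in compact form.

#include <stddef.h>
#include <stdint.h>

typedef enum {
    ACCESS_NONE  = 0,   /* server excluded from reads and writes */
    ACCESS_READ  = 1,   /* may serve reads only (e.g. being drained) */
    ACCESS_WRITE = 2    /* may also accept new or replicated chunks */
} access_mode_t;

struct cs_entry {               /* stand-in for struct matocserventry */
    access_mode_t access_mode;  /* changed from outside, e.g. via the CGI */
    uint8_t level;              /* LEVEL from mfschunkserver.cfg, 1..4 */
};

/* Does this chunk still need a copy on servers of the given level?
   copies[] counts valid copies per level (index 0 = level 1). */
static int needs_copy(uint32_t levelgoal, const uint8_t copies[4], int level)
{
    uint8_t goal = (uint8_t)((levelgoal >> (8 * (level - 1))) & 0xFF);
    return copies[level - 1] < goal;
}

/* Pick the first server that is allowed to take one more copy at `level`. */
static struct cs_entry *pick_target(struct cs_entry *servers, size_t n, int level)
{
    for (size_t i = 0; i < n; i++)
        if (servers[i].access_mode == ACCESS_WRITE && servers[i].level == level)
            return &servers[i];
    return NULL;    /* no writable server at this level right now */
}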