From: Laurent W. <lw...@hy...> - 2010-06-15 06:31:13
On Tue, 15 Jun 2010 13:36:03 +0800 Roast <zha...@gm...> wrote:
> Hello, everyone.

Hi Zhang,

> We are planning to use MooseFS in our production environment as the
> storage for our online photo service.
>
> Since the master server stores all metadata in memory, and we will store
> about a hundred million photo files, I wonder how much memory we should
> prepare for the master server, and how to calculate this number.

According to the FAQ, http://www.moosefs.org/moosefs-faq.html#cpu , at Gemius
8 GB of RAM is used by the master for 25 million files. So, for a hundred
million, you'd need 32 GB.

> If the memory is not enough, what will happen to the master server?

I guess it'll swap.

> I also wonder about the performance of the master server when using
> MooseFS to store a hundred million photo files. Can anyone give me more
> information?

I've no experience with such a large setup. I guess the memory caching used
to prevent bottlenecks on the master will still do the trick.

regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
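The sizing rule of thumb above (8 GB of master RAM for 25 million files) can be turned into a quick back-of-the-envelope calculation. A minimal sketch, assuming metadata RAM scales roughly linearly with file count (the constants and function names here are illustrative, not part of MooseFS):

```python
# Rough MooseFS master RAM estimate, extrapolating linearly from the
# FAQ figure quoted above: 8 GiB of RAM for 25 million files at Gemius.
REFERENCE_FILES = 25_000_000
REFERENCE_RAM_GIB = 8

BYTES_PER_GIB = 1024 ** 3
# Implied per-file metadata cost: about 344 bytes per file.
bytes_per_file = REFERENCE_RAM_GIB * BYTES_PER_GIB / REFERENCE_FILES

def estimate_ram_gib(num_files: int) -> float:
    """Estimated master RAM in GiB for num_files stored files."""
    return num_files * bytes_per_file / BYTES_PER_GIB

print(f"~{bytes_per_file:.0f} bytes of master RAM per file")
print(f"100 million files -> ~{estimate_ram_gib(100_000_000):.0f} GiB")
```

Note this is only the steady-state figure; real deployments usually leave headroom on top of it, since the master also needs memory while dumping metadata to disk.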