From: Michal B. <mic...@ge...> - 2011-04-26 08:39:59
Hi!

Unfortunately, for the moment we do not plan to develop MooseFS so that metadata is divided between RAM and HDD. There are, however, already machines that support 96 GB of RAM. With metadata of this size you would also need fast SSD disk(s) for the dumps made from RAM every hour. And perhaps you can simply reduce the number of files in the system, by creating "containers" much like .tar archives and keeping the files inside them?

Best regards
Michal

From: ha...@si... [mailto:ha...@si...]
Sent: Friday, April 22, 2011 11:15 AM
To: moo...@li...
Subject: [Moosefs-users] hi, i ask one question now, look forward to your reply!

Hi,

Please continue to pay attention to us and help me. First, thanks for your earlier reply. Now I have some questions:

As we all know, our master server can currently support at most 64 GB of RAM, and you said, "For 300 million files we'll need 300 million * 300 B, about 87.8 GB of RAM at least." I want to ask:

Question one: If we change the management of the metadata namespace, for example from a hash to a B-tree, and read part of the metadata from HDD, do you think that is feasible and reasonable? Do you think it would let us reduce the amount of RAM needed?

Question two: To support 500 TB of storage, 300 million files, and 500 G operations a day while reducing the amount of RAM, do you have any good suggestions?

That's all, thanks a lot! Sincerely looking forward to your reply!

Best regards!
Hanyw
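A minimal sketch of the "container" approach Michal suggests: pack many small files into one .tar archive before writing it to the MooseFS mount, so the master tracks one metadata entry instead of thousands. The paths (/mnt/mfs/...) and archive layout below are illustrative assumptions, not part of MooseFS itself; only Python's standard tarfile module is used.

```python
import tarfile
from pathlib import Path


def pack(src_dir: str, archive_path: str) -> None:
    """Bundle every regular file under src_dir into a single tar archive."""
    with tarfile.open(archive_path, "w") as tar:
        for path in Path(src_dir).rglob("*"):
            if path.is_file():
                # Store paths relative to src_dir inside the container.
                tar.add(path, arcname=str(path.relative_to(src_dir)))


def read_member(archive_path: str, member: str) -> bytes:
    """Read one file back out of the container without unpacking everything."""
    with tarfile.open(archive_path, "r") as tar:
        extracted = tar.extractfile(member)
        return extracted.read() if extracted is not None else b""


if __name__ == "__main__":
    # Hypothetical layout: batches of small files become one archive each
    # on the MooseFS mount, cutting the number of objects the master holds.
    pack("incoming/batch-0001", "/mnt/mfs/containers/batch-0001.tar")
    data = read_member("/mnt/mfs/containers/batch-0001.tar", "sample.txt")
```

The trade-off is that individual files are no longer directly addressable through the filesystem namespace, so the application needs an index mapping file names to their containing archive.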
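To make the RAM figure quoted above easier to follow, here is the rough arithmetic as a small sketch. The ~300 bytes of master RAM per file is taken from the figure quoted in this thread; the exact per-object cost depends on path lengths, goal settings and MooseFS version, so treat the result as an order-of-magnitude estimate only.

```python
def master_ram_bytes(num_files: int, bytes_per_file: int = 300) -> int:
    """Estimate master-server RAM needed to hold all metadata in memory."""
    return num_files * bytes_per_file


if __name__ == "__main__":
    files = 300_000_000                      # 300 million files, as in the question
    ram = master_ram_bytes(files)
    print(f"{ram / 10**9:.1f} GB (~{ram / 2**30:.1f} GiB)")
    # ~90.0 GB (~83.8 GiB): the same ballpark as the ~88 GB quoted above,
    # and well beyond the 64 GB the poster's master server can hold.
```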