From: Robert S. <rsa...@ne...> - 2011-07-10 21:57:22
We have 80 million objects using 27 GB of RAM for mfsmaster. RAM usage does seem to scale linearly with the number of files. The limitation is really about the speed of metadata access, i.e. getting the location of a file to open it and determining a file's size and attributes. Performance would probably degrade significantly with insufficient memory. Using too much swap space on your master server could also introduce network timeouts, and I suspect that would hurt the reliability of the file system.

Robert

On 7/10/11 5:24 PM, Vineet Jain wrote:
> I have 16 GB of RAM on my metadata server. Is the max number of
> files that can be stored about 45-48 million? I got this number from
> the FAQ, where 25 million files took 8 GB of RAM. Is there any way to
> store more files other than to increase the RAM?
>
> Is there any planned effort to remove this limitation, or is this
> going to be around for some time?
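
[Editor's note: the following back-of-the-envelope sketch is not part of the original thread. It only assumes that master RAM scales linearly with object count, as Robert describes, and uses the two data points quoted above (27 GB for 80 million objects, 8 GB for 25 million files); actual mfsmaster memory use varies by version and workload.]

# Rough estimate of mfsmaster metadata RAM needs, assuming usage
# scales linearly with object count (per the thread above).

GIB = 1024 ** 3

def bytes_per_object(total_ram_bytes, num_objects):
    """Average master RAM consumed per stored object."""
    return total_ram_bytes / num_objects

def max_objects(available_ram_bytes, per_object_bytes):
    """Rough ceiling on object count for a given amount of master RAM."""
    return int(available_ram_bytes / per_object_bytes)

# Data points quoted in the thread.
robert = bytes_per_object(27 * GIB, 80_000_000)   # ~362 bytes/object
faq    = bytes_per_object(8 * GIB, 25_000_000)    # ~344 bytes/object

# Vineet's 16 GB master, using the FAQ ratio.
print(f"~{robert:.0f} B/object (Robert), ~{faq:.0f} B/object (FAQ)")
print(f"16 GB master holds roughly {max_objects(16 * GIB, faq):,} files")
# -> on the order of 45-50 million files, consistent with the
#    45-48 million figure mentioned in the question.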