From: Elliot F. <efi...@gm...> - 2011-07-19 17:56:27
|
>> 2. If I have 6 drives in a server. Is it better to have one chunk
>> server with 6 drives or 6 chunkservers with one drive each or
>> somewhere in the middle.
>
> 6 chunkservers will give you better performance as you are spreading out
> the reads and writes over those machines. You also have more high
> availability as the loss of a single chunkserver is not as significant
> to the cluster and its also faster to recover from when bring files
> back up to your goal setting (assuming goals > 1).

I thought the OP was asking: if I have one physical server with 6 drives in
it, would it be better to run a single chunkserver process and give it all
six drives, or run 6 chunkserver processes, giving them 1 drive each?

Even if the OP wasn't asking it this way, it'd still be nice to know the
answer with regards to performance. :)

Elliot
|
From: WK <wk...@bn...> - 2011-07-19 16:33:50
|
On 7/19/11 6:51 AM, Vineet Jain wrote:
> Couple of questions:
>
> 1. Is there an entry on how to add a chunkserver to a system? Offline
> or online is fine.

Online works fine. We take chunkservers offline and online all the time as
we swap in better equipment for our growing needs.

> 2. If I have 6 drives in a server, is it better to have one chunkserver
> with 6 drives, 6 chunkservers with one drive each, or somewhere in the
> middle?

6 chunkservers will give you better performance, as you are spreading out
the reads and writes over those machines. You also have more high
availability, as the loss of a single chunkserver is not as significant to
the cluster, and it's also faster to recover from when bringing files back
up to your goal setting (assuming goals > 1).

However, 6 chunkservers will consume more space and electricity and will
involve some maintenance. So it's a question of your priorities. We tend to
use at least $goal+1 for the minimum number of chunkservers.

One thought: keep your chunkservers somewhat uniform in terms of RAM and
storage for ease of maintenance. MFS is so flexible it's easy to just start
throwing spare boxes into the mix for extra storage, but if you aren't
paying attention, you can create a weak chunkserver that tends to impact
things at a bad time.

-bill
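Whichever way you split it, the per-process disk layout is controlled by
mfshdd.cfg: a single chunkserver serving six drives simply lists one mount
point per line. The paths below are hypothetical examples, not taken from
this thread:

# /etc/mfs/mfshdd.cfg on the chunkserver (example paths - adjust to your mounts)
/mnt/disk1
/mnt/disk2
/mnt/disk3
/mnt/disk4
/mnt/disk5
/mnt/disk6

Running six chunkserver processes on one box would instead mean six config
sets with distinct DATA_PATH, HDD_CONF_FILENAME and CSSERV_LISTEN_PORT
values (option names as in the 1.6.x sample configs - double-check against
your version), which accounts for much of the extra maintenance mentioned
above.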
|
From: Laurent W. <lw...@hy...> - 2011-07-19 15:34:18
|
On Tue, 19 Jul 2011 09:51:35 -0400 Vineet Jain <vin...@gm...> wrote:

Hi,

> Couple of questions:
>
> 1. Is there an entry on how to add a chunkserver to a system? Offline
> or online is fine.

You can hot-add a chunkserver. Configure the box, start the service, and the
master will see it automatically; that's it. Space and I/O load balancing
will be taken care of. :)

> 2. If I have 6 drives in a server, is it better to have one chunkserver
> with 6 drives, 6 chunkservers with one drive each, or somewhere in the
> middle?

I'm afraid I don't clearly understand the question. If you're talking about
virtual machines, I'd say: one (real) box, one chunkserver. Use disks as
JBOD for data.

HTH,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
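A hot-add along these lines is roughly the following on the new box. The
master hostname and mount path are placeholders, and the config file
locations assume a standard 1.6.x install:

# Sketch of hot-adding a chunkserver (placeholder hostname/paths)
echo "MASTER_HOST = mfsmaster.example.lan" >> /etc/mfs/mfschunkserver.cfg
echo "/mnt/disk1" >> /etc/mfs/mfshdd.cfg    # one data mount point per line
chown -R mfs:mfs /mnt/disk1                 # daemon user must own the data dirs
mfschunkserver start                        # the master registers it automatically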
|
From: WK <wk...@bn...> - 2011-07-19 15:14:06
|
4 chunk servers, all using CentOS 5.x/EXT3. The cluster in question is being
used for IMAP, so they are mostly small files with a goal of 3.

That cluster had been in production for a few months and we never saw more
than 400-500 deletions per minute, and even that was rare, so it was really
weird to see it go to 10,000+. We had been playing with increasing the trash
time a few weeks ago, so that may have had something to do with it, and of
course with IMAP there is a huge number of deleted files as people clean out
folders and move things around.

We have CHUNKS_DEL_LIMIT set to 20 now and are back to 400-500 deletions a
minute. The whole thing was very bizarre.

-bill

On 7/19/2011 6:09 AM, Mike wrote:
> What filesystem are the chunk servers using?
>
> > We had a small 7 million file cluster that had chunkservers with only
> > 1GB of RAM, all of a sudden start doing 10K+ deletions a minute. That
> > bogged down the entire cluster, making it unusable, and even sent two
> > chunkservers into swap.
|
From: Vineet J. <vin...@gm...> - 2011-07-19 13:51:43
|
Couple of questions:

1. Is there an entry on how to add a chunkserver to a system? Offline or
online is fine.

2. If I have 6 drives in a server, is it better to have one chunkserver with
6 drives, 6 chunkservers with one drive each, or somewhere in the middle?
|
From: WK <wk...@bn...> - 2011-07-19 00:52:45
|
On 6/28/2011 4:14 AM, Ólafur Ósvaldsson wrote:
> When reading the man page for mfsmaster.cfg I see the comments for
> CHUNKS_LOOP_TIME and CHUNKS_DEL_LIMIT, and my understanding is that
> with the default values the maximum number of chunks to delete in one
> loop (300 sec) is 100. It does not say if that is per chunkserver or
> for the whole system, but each server here was at around and over 5000
> chunk deletions per minute, and with 10 servers that's over 50k chunk
> deletions per minute for the whole system.
>
> CHUNKS_LOOP_TIME
>     Chunks loop frequency in seconds (default is 300)
>
> CHUNKS_DEL_LIMIT
>     Maximum number of chunks to delete in one loop (default is 100)

We just got hit by this. We had a small 7 million file cluster, with
chunkservers that have only 1GB of RAM, all of a sudden start doing 10K+
deletions a minute. That bogged down the entire cluster, making it unusable,
and even sent two chunkservers into swap.

We were able to get the chunkservers shut down and restarted the mfsmaster
with CHUNKS_DEL_LIMIT set to 50. After a settling-down time, the deletions
started again, but this time at half the rate (around 5K deletions a
minute), which was still excessive given what we have.

So I can verify that changing CHUNKS_DEL_LIMIT does have an effect (if you
restart the master), but the default is way too high unless you are careful.
In our case, we weren't paying attention and didn't realize the number of
files was increasing past a reasonable point for those resources. We have
since set it to 20 and have increased the RAM in the chunkservers.

BTW, the sizing data in the FAQ for chunkservers should be more explicit. It
should say that you need about 150MB of RAM per chunkserver for every
million chunks on that chunkserver (which is about what we are seeing), so
the more chunkservers you have, the less RAM each one needs. Maybe the CGI
could be used to query resources and warn about potential RAM resource
issues.

-bill
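For reference, both throttles discussed here live in mfsmaster.cfg and take
effect after a master restart; the values below are only the ones mentioned
in this thread, not a recommendation:

# /etc/mfs/mfsmaster.cfg (excerpt)
CHUNKS_LOOP_TIME = 300   # how often the chunk maintenance loop runs (seconds)
CHUNKS_DEL_LIMIT = 20    # max chunk deletions scheduled per loop (default 100)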
|
From: Robert S. <rsa...@ne...> - 2011-07-18 18:37:54
|
I have had it running without a crash for more than 12 hours, which is a new
record here. I changed one setting: MASTER_TIMEOUT = 120 in
mfschunkserver.cfg.

My guess at the moment is that on the hour the master blocks connections and
dumps the metadata to disk and to the mfsmetalogger servers. Due to existing
load and the number of files/objects/chunks in our system, this takes longer
than the chunkserver timeout. This then leads to a process where the
chunkserver goes into a disconnect/reconnect loop until the master gets
confused. What also seems to contribute is that once mfsmaster starts
blocking connections, mfsmount and mfschunkserver may start using more CPU,
which tends to aggravate the situation.

It may help the situation to move mfsmaster to an unloaded and dedicated
machine, but I can't help but think that this behavior limits scalability.
Given enough files/folders/chunks, any timeout will be exceeded even if the
master machine is completely unloaded.

Robert

On 7/18/11 10:55 AM, Mike wrote:
> > Every time it gets into this state one or two chunks get damaged
> > and I have to manually repair them. Sometimes losing a file.
>
> > At this stage I can't even get to repairing the chunks as mfsmaster
> > does not stay up for long enough to show me which files to repair.
>
> > What is also strange is how predictable it is. It always happens on
> > the hour. Not 2 minutes past the hour, but precisely on the hour. It is
> > as if there is some job/process/thread that does something every
> > hour that causes it to go into this state.
>
> I can reproduce this on our install fairly easily (well, I could last
> time I looked!) Given that I'm running a completely stock config with
> 2 chunkservers, it shouldn't be TOO hard to figure out what's going
> on. I can recompile/reinstall/change values as needed, someone just
> needs to point me in the right direction.
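The workaround described above is a one-line change on each chunkserver; the
file path assumes a standard install, and the chunkserver needs a restart
for it to take effect:

# /etc/mfs/mfschunkserver.cfg (excerpt)
# Give the master more time to respond during its hourly metadata dump
# instead of falling into the disconnect/reconnect loop described above.
MASTER_TIMEOUT = 120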
|
From: Mike <isp...@gm...> - 2011-07-18 14:55:55
|
> Every time it gets into this state one or two chunks get damaged and I
> have to manually repair them. Sometimes losing a file.
>
> At this stage I can't even get to repairing the chunks as mfsmaster does
> not stay up for long enough to show me which files to repair.
>
> What is also strange is how predictable it is. It always happens on the
> hour. Not 2 minutes past the hour, but precisely on the hour. It is as if
> there is some job/process/thread that does something every hour that
> causes it to go into this state.

I can reproduce this on our install fairly easily (well, I could last time I
looked!) Given that I'm running a completely stock config with 2
chunkservers, it shouldn't be TOO hard to figure out what's going on. I can
recompile/reinstall/change values as needed, someone just needs to point me
in the right direction.
|
From: Robert S. <rsa...@ne...> - 2011-07-17 22:34:10
|
This is starting to annoy me to no end.
I now have this happening every few hours and I am very close to
abandoning MooseFS. The only reasons I don't are
1. I have spent a month moving my data to MooseFS and will have to redo
this.
2. I don't really see any alternatives which fill me with much confidence.
Every time it gets into this state one or two chunks get damaged and I
have to manually repair them. Sometimes losing a file. At this stage I
can't even get to repairing the chunks as mfsmaster does not stay up for
long enough to show me which files to repair. What is also strange is
how predictable it is. It always happens on the hour. Not 2 minutes past
the hour, but precisely on the hour. It is as if there is some
job/process/thread that does something every hour that causes it to go
into this state.
It always seems to be the same chunkserver that is disconnected and
restarting the chunkserver has no effect. The chunkserver and mfsmaster
is running on the same machine. The other chunkserver does not seem to
ever drop out. I would have been able to add a 3rd chunkserver on Monday
but I will probably not do that until I can get the existing setup stable.
On Monday I will try to move mfsmaster to a different machine and see if
I can get it to stay up for longer than 8 hours. At this stage 6 hours
is about the longest it stays up without going into this state. If this
fails and I have no other feedback then I am back to square one and
probably will have to abandon MooseFS.
I have eliminated everything else that could be causing problems. At
this stage it can only be mfsmaster.
The following Swatch script is helping me keep my system online as much
as is possible:
watchfor /mfsmaster mfsmaster.*: chunkserver disconnected - ip:
xxx.xxx.xxx.xxx, port: 9422, usedspace: 0 \(0.00 GiB\), totalspace: 0
\(0.00 GiB\)/
threshold track_by=xxx.xxx.xxx.xxx,type=both,count=6,seconds=1200
mail=robert,subject="MFSMaster crashed yet again"
exec /usr/sbin/mfsmaster -c /etc/mfs/mfsmaster.cfg restart
watchfor /mfsmaster mfsmaster.*: about 60 seconds passed and lockfile is
still locked - giving up/
mail=robert,subject="MFSMaster crashed yet again and restart timed
out yet again"
exec /usr/sbin/mfsmaster -c /etc/mfs/mfsmaster.cfg restart
Robert
On 7/13/11 9:26 AM, Robert Sandilands wrote:
> Do you see the message "mfsmaster[pid]: chunkserver disconnected - ip:
> xxx.xxx.xxx.xxx, port: 9422" around the time the CPU jumps to 100%?
>
> Robert
>
> On 7/12/11 10:13 AM, Mike wrote:
>> I have a fairly small MFS installation - 14T of storage across 2
>> servers, a master node and a metalogger. I'm seeing the mfsmaster
>> jump to 100% cpu and just sit there... rendering the filesystem dead.
>> strace shows its not doing any IO.
>>
>> Any thoughts or ideas where to look next?
|
|
From: Anh K. H. <ky...@vi...> - 2011-07-14 09:04:47
|
On Thu, 14 Jul 2011 10:46:29 +0200 Kris Buytaert <ml...@in...> wrote:

> ....
>
> Apparently not when you build the debian package .. somehow it does
> a chuid to nobody and expects the files to be owned by
> nobody:nogroup rather than mfs ..
>
> Therefore I modified the rc scripts from the package to do a different
> chown of the three.
>
> So is this a bug in the "package", or what is this?

Yes, you're right. In the source file "configure.ac" the DEFAULT_USER is
"nobody", while the Debian init scripts and settings in the "debian"
directory use the default user "mfs".

--
Anh Ky Huynh @ ICT
Registered Linux User #392115
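If you are building your own package, one way to keep the daemon user
consistent is to set it at configure time. The flags below are an assumption
about how configure.ac exposes DEFAULT_USER/DEFAULT_GROUP, so check
./configure --help in your tarball before relying on them:

# Build with the same daemon user the Debian init scripts expect
./configure --with-default-user=mfs --with-default-group=mfs
make
make install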
|
From: Anh K. H. <ky...@vi...> - 2011-07-14 08:39:50
|
On Thu, 14 Jul 2011 09:44:27 +0200 Kris Buytaert <ml...@in...> wrote:

> First of all I had to build my own packages from the sf.net tarball,
> so I`m wondering if there are any official moosefs debian packages
> out there.

In the source tree you will see a directory named "debian", and you can
simply build your package with the tool "dpkg-buildpackage"
(cf. http://www.debian.org/doc/manuals/maint-guide/build.en.html).

> Secondly I`m running into the following issue when trying to start
> mfs-master
> .......
> Starting mfs-master: working directory: /var/lib/mfs
> can't create lockfile in working directory: EACCES
> (Permission denied)

The directory /var/lib/mfs and its files should belong to the user "mfs"
and the group "mfs".

Hope this helps,

Regards,
--
Anh Ky Huynh @ ICT
Registered Linux User #392115
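Concretely, the fix for the EACCES error quoted above amounts to the
following, assuming the stock /var/lib/mfs data directory and an existing
mfs user as in the Debian packaging:

chown -R mfs:mfs /var/lib/mfs    # lockfile, metadata.mfs.back, sessions.mfs
/etc/init.d/mfs-master start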
|
From: Kris B. <ml...@in...> - 2011-07-14 08:22:05
|
I`m having troubles getting moosefs to run on debian.

First of all I had to build my own packages from the sf.net tarball, so I`m
wondering if there are any official moosefs debian packages out there.

Secondly I`m running into the following issue when trying to start
mfs-master. After I stopped it first .. it runs once .. thereafter it
doesn't want to start anymore.

mogile01:/var/lib/mfs# /etc/init.d/mfs-master start
Starting mfs-master:
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... file not found
if it is not fresh installation then you have to restart all active mounts !!!
exports file has been loaded
loading metadata ...
create new empty filesystem
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
mfsmaster.

mogile01:/var/lib/mfs# ls -al
total 16
drwxrwxrwx  2 mfs    mfs     4096 Jul 13 16:06 .
drwxr-xr-x 30 root   root    4096 Jul 13 12:54 ..
-rw-r-----  1 nobody nogroup   95 Jul 13 16:06 metadata.mfs.back
-rw-r-----  1 nobody nogroup    0 Jul 13 16:06 .mfsmaster.lock
-rw-r-----  1 nobody nogroup    8 Jul 13 16:06 sessions.mfs

mogile01:/var/lib/mfs# /etc/init.d/mfs-master stop
Stopping mfs-master: mfsmaster.

mogile01:/var/lib/mfs# /etc/init.d/mfs-master start
Starting mfs-master: working directory: /var/lib/mfs
can't create lockfile in working directory: EACCES (Permission denied)

greetings

Kris

PS. I have it working perfectly on CentOS ...
|
From: Robert S. <rsa...@ne...> - 2011-07-13 13:26:17
|
Do you see the message "mfsmaster[pid]: chunkserver disconnected - ip:
xxx.xxx.xxx.xxx, port: 9422" around the time the CPU jumps to 100%?

Robert

On 7/12/11 10:13 AM, Mike wrote:
> I have a fairly small MFS installation - 14T of storage across 2
> servers, a master node and a metalogger. I'm seeing the mfsmaster jump
> to 100% cpu and just sit there... rendering the filesystem dead.
> strace shows its not doing any IO.
>
> Any thoughts or ideas where to look next?
|
From: Mike <isp...@gm...> - 2011-07-13 12:43:34
|
Configs are all defaults, with the hostname for the master set to the
correct IP :-) Master server is Slackware 13, chunk servers are Debian,
IIRC.

From the info page:

    version:              1.6.20
    total space:          15 TiB
    avail space:          13 TiB
    trash space:          0 B
    trash files:          0
    reserved space:       325 KiB
    reserved files:       8
    all fs objects:       1030374
    directories:          13040
    files:                1017333
    chunks:               1005532
    all chunk copies:     2012227
    regular chunk copies: 2012227

On Tue, Jul 12, 2011 at 11:23 AM, Thomas S Hatch <tha...@gm...> wrote:
> What is the version of moosefs you are using? and what do your configs
> look like?
>
> Also, what OS are you using?
|
From: Thomas S H. <tha...@gm...> - 2011-07-12 14:23:13
|
What is the version of moosefs you are using? And what do your configs look
like?

Also, what OS are you using?

On Tue, Jul 12, 2011 at 8:13 AM, Mike <isp...@gm...> wrote:
> I have a fairly small MFS installation - 14T of storage across 2 servers,
> a master node and a metalogger. I'm seeing the mfsmaster jump to 100% cpu
> and just sit there... rendering the filesystem dead. strace shows its not
> doing any IO.
>
> Any thoughts or ideas where to look next?
|
From: Mike <isp...@gm...> - 2011-07-12 14:13:50
|
I have a fairly small MFS installation - 14T of storage across 2 servers, a
master node and a metalogger. I'm seeing the mfsmaster jump to 100% cpu and
just sit there... rendering the filesystem dead. strace shows it's not doing
any IO.

Any thoughts or ideas where to look next?
|
From: Robert S. <rsa...@ne...> - 2011-07-11 22:18:15
|
Hi Michal, Caching has not been a very successful path for us. With a 2 TB Squid cache we got < 35% cache hit rates. One of the things we need to do is to re-organize some of our processes to distribute the load a bit better. Our load do come in bursts and it would lead to other improvements if we can distribute the load better through time. But do I understand correctly that if I could distribute the opens to more mounts on more machines that I could expect to open more files simultaneously? That assumes the same mfsmaster and the same number of chunk servers. Is the limit/slowdown in mfsmount or mfsmaster? I know there is a limit on the disk speed and the number of spindles etc. But our disk utilization is currently < 50% according to "iostat -x". Even if I could distribute the load better, the total load will just increase with time and if there is a hard limit on the scalability of MooseFS then it may become a bit problematic in a year or two. Robert On 7/11/11 9:29 AM, Michal Borychowski wrote: > Hi Robert! > > We are not sure that switching to 10Gb/s network can help much. Still, all open files will be opened network connections in mfsmount... > > I would recommend to check the need of opening so many files at one moment... Is it really necessary? Or maybe you can use some mechanisms like memcache, etc.? > > We don't see any other way to improve the situation of opening so many files. > > > Regards > -Michał > > > -----Original Message----- > From: Robert Sandilands [mailto:rsa...@ne...] > Sent: Tuesday, July 05, 2011 2:32 PM > To: Michal Borychowski > Cc: moo...@li... > Subject: Re: [Moosefs-users] Write starvation > > Hi Michal, > > I need this to see if there is a way I can optimize the system to open > more files per minute. At this stage our systems can open a few hundred > files in parallel. I am not yet at the point where I can do thousands. > What I think I am seeing is that the writes are starved because there > are too many pending opens and most of them are for reading files. There > seems to be a limit of around 2,400 opens per minute on the hardware I > have and I am looking at what needs to be done to improve that. Based on > your answer it sounds like the network traffic from the machine running > mfsmount() to the master may be the biggest delay? Short of converting > to 10 GB/s or trying to get all the servers on the same switch I don't > know if there is much to be done about it? > > Robert > > On 7/5/11 3:15 AM, Michal Borychowski wrote: >> Hi Robert! >> >> Ad. 1. There is no limit in mfsmount itself, but there are some limits in the operating system. Generally speaking it is wise not to open more than several thousands files in parallel. >> >> Ad. 2. Fopen invokes open, and open invokes (through kernel and FUSE) functions mfs_lookup and mfs_open. Mfs_lookup function changes consequtive path elements into i-node number. While mfs_open makes the target file opening. It sends a packet to the master in order to receive information about possibility to keep the file in the cache. It also marks the file in the master as opened - in cases it is deleted, it is sustained to the moment of closing. >> >> BTW. Why do you need this? >> >> >> Kind regards >> Michał Borychowski >> MooseFS Support Manager >> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ >> Gemius S.A. >> ul. Wołoska 7, 02-672 Warszawa >> Budynek MARS, klatka D >> Tel.: +4822 874-41-00 >> Fax : +4822 874-41-01 >> >> >> >> -----Original Message----- >> From: Robert Sandilands [mailto:rsa...@ne...] 
>> Sent: Saturday, July 02, 2011 2:54 AM >> To: moo...@li... >> Subject: Re: [Moosefs-users] Write starvation >> >> Based on some tests I think the limit in this case is the number of >> opens per minute. I think I need to understand what happens with an open >> before I can make guesses on what can be done to get the number higher. >> >> But then it still does not quite explain the write starvation except if >> the number of pending reads are just so much higher than the number of >> pending writes that it seems to starve the writes. Maybe this will >> resolve itself as I add more chunk servers. >> >> Some questions: >> >> 1. Is there a limit to the number of handles that client applications >> can open per mount, per chunk server, per disk? >> 2. What happens when an application does fopen() on a mount? Can >> somebody give a quick overview or do I have to read some code? >> >> Robert >> >> On 6/30/11 11:32 AM, Ricardo J. Barberis wrote: >>> El Miércoles 29 Junio 2011, Robert escribió: >>>> Yes, we use Centos, but installing and using the ktune package generally >>>> resolves most of the performance issues and differences I have seen with >>>> Ubuntu/Debian. >>> Nice to know about ktune and thank you for bringing it up, I'll take a look a >>> it. >>> >>>> I don't understand the comment on hitting metadata a lot? What is a lot? >>> A lot = reading / (re)writing / ls -l'ing / stat'ing too often. >>> >>> If the client can't cache the metadata but uses it often, that means it has to >>> query the master every time. >>> >>> Network latencies might also play a role in the performance degradation. >>> >>>> Why would it make a difference? All the metadata is in RAM anyway? The >>>> biggest limit to speed seems to be the number of IOPS that you can get out >>>> of your disks you have available to you. Looking up the metadata from RAM >>>> should be several orders of magnitude faster than that. >>> Yep, and you have plenty of RAM, so that shouldn't be an issue in your case. >>> >>>> The activity reported through the CGI interface on the master is around >>>> 2,400 opens per minute average. Reads and writes are also around 2400 per >>>> minute alternating with each other. mknod has some peaks around 2,800 per >>>> minute but is generally much lower. Lookup's are around 8,000 per minute >>>> and getattr is around 700 per minute. Chunk replication and deletion is >>>> around 50 per minute. The other numbers are generally very low. >>> Mmm, maybe 2 chunkservers are just too litle to handle that activity but I >>> would also check the network latencies. >>> >>> I'm also not really confident about having master and cunkserver on the same >>> server but I don't have any hard evidence to support my feelings ;) >>> >>>> Is there a guide/hints specific to MooseFS on what IO/Net/Process >>>> parameters would be good to investigate for mfsmaster? >>> I'd like to know that too! >>> >>> Cheers, >> ------------------------------------------------------------------------------ >> All of the data generated in your IT infrastructure is seriously valuable. >> Why? It contains a definitive record of application performance, security >> threats, fraudulent activity, and more. Splunk takes this data and makes >> sense of it. IT sense. And common sense. >> http://p.sf.net/sfu/splunk-d2d-c2 >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... 
>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > > ------------------------------------------------------------------------------ > All of the data generated in your IT infrastructure is seriously valuable. > Why? It contains a definitive record of application performance, security > threats, fraudulent activity, and more. Splunk takes this data and makes > sense of it. IT sense. And common sense. > http://p.sf.net/sfu/splunk-d2d-c2 > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
|
From: Vineet J. <vin...@gm...> - 2011-07-11 15:31:51
|
Thanks. Couple of follow-on questions:

1. Does the replication count affect this limitation, or is it just unique
file/folder combinations? So if 25 million files take 8GB of RAM and I set
the replication count to 2, will it now take 16GB of RAM?

2. Is there any way to support more files without increasing memory size? I
would have a large number of files which would not be used very frequently,
and it would be fine if access to their attributes were slow. Currently my
options are:
a. set up another metadata server and cluster
b. increase RAM

On Sun, Jul 10, 2011 at 5:41 PM, Robert Sandilands <rsa...@ne...> wrote:
> We have 80 million objects using 27 GB of RAM for mfsmaster. RAM usage
> does seem to scale linearly with the number of files.
>
> The limitation is really all about the speed of the meta-data access.
> I.e. getting the location of a file to open it, determining the size and
> attributes of a file.
>
> Performance would probably degrade significantly with insufficient
> memory. It could also introduce network timeouts if you utilize too much
> swap space on your master server, and I have a suspicion this could have
> a negative effect on the reliability of the file system.
>
> Robert
>
> On 7/10/11 5:24 PM, Vineet Jain wrote:
>> I have 16gigs of ram on my meta data server. Is the max number of
>> files that can be stored about 45-48 million? I got this number from
>> the faq where 25 million files took 8gigs of ram. Is there any way to
>> store more number of files other than to increase the ram?
>>
>> Is there any planned effort to remove this limitation or is this going
>> to be around for some time?
|
From: Reinis R. <r...@ro...> - 2011-07-11 14:51:23
|
Hello,

While deploying a new moosefs instance (with the latest 1.6.20, running
mongodb on it) I have stumbled upon a situation where mfsmount hangs and all
file operations halt.

Both the mfsmount debug output (-o debug) and the normal log contain an
endless loop of:

Jul 11 16:43:21 proc78 mfsmount[2066]: file: 44, index: 0, chunk: 359, version: 1 - writeworker: connection with (D5AF4BB6:9422) was timed out (unfinished writes: 2; try counter: 1)
Jul 11 16:43:23 proc78 mfsmount[2066]: file: 44, index: 0, chunk: 359, version: 1 - writeworker: connection with (D5AF4BB6:9422) was timed out (unfinished writes: 2; try counter: 1)
Jul 11 16:43:25 proc78 mfsmount[2066]: file: 44, index: 0, chunk: 359, version: 1 - writeworker: connection with (D5AF4BB4:9422) was timed out (unfinished writes: 2; try counter: 1)
Jul 11 16:43:27 proc78 mfsmount[2066]: file: 44, index: 0, chunk: 359, version: 1 - writeworker: connection with (D5AF4BB6:9422) was timed out (unfinished writes: 2; try counter: 1)

And the kernel log:

Jul 11 15:33:16 proc78 kernel: [428520.580157] INFO: task mongod:8847 blocked for more than 120 seconds.
Jul 11 15:33:16 proc78 kernel: [428520.580159] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 15:33:16 proc78 kernel: [428520.580160] mongod D 0000000000000002 0 8847 1 0x00000000
Jul 11 15:33:16 proc78 kernel: [428520.580164] ffff8802223ffe38 0000000000000082 0000000000000007 ffffffff81074bd8
Jul 11 15:33:16 proc78 kernel: [428520.580168] ffff8802223fffd8 ffff8802223fffd8 ffff8802223fffd8 0000000000012b00
Jul 11 15:33:16 proc78 kernel: [428520.580172] ffff88011207a2c0 ffff880221fbe0c0 0000000000000246 ffff880221005c80
Jul 11 15:33:16 proc78 kernel: [428520.580176] Call Trace:
Jul 11 15:33:16 proc78 kernel: [428520.580183] [<ffffffffa0239d95>] fuse_set_nowrite+0x95/0xd0 [fuse]
Jul 11 15:33:16 proc78 kernel: [428520.580200] [<ffffffffa023d1fa>] fuse_fsync_common+0xca/0x1a0 [fuse]
Jul 11 15:33:16 proc78 kernel: [428520.580217] [<ffffffff8116ee0a>] vfs_fsync_range+0x5a/0xa0
Jul 11 15:33:16 proc78 kernel: [428520.580222] [<ffffffff8111a913>] sys_msync+0x153/0x1e0
Jul 11 15:33:16 proc78 kernel: [428520.580227] [<ffffffff81512292>] system_call_fastpath+0x16/0x1b
Jul 11 15:33:16 proc78 kernel: [428520.580231] [<00007f415f730a7d>] 0x7f415f730a7c

The CGI interface/mfsmaster doesn't report any problems with any servers or
chunks. Checking all of the chunkserver logs (5 in total, with goal 2)
doesn't reveal anything besides them happily testing the local chunks.

I have seen some past threads, and also the FAQ entry saying that this is a
harmless message, but the problem in my case is that the mountpoint really
hangs, making file system commands (like df / ls) freeze up, and the only
way out is to forcibly kill the mfsmount process and remount.

Is there anything else I can do to solve the hang-up, or any extra steps to
identify/debug the cause?

wbr
rr
|
From: Michal B. <mic...@ge...> - 2011-07-11 13:47:43
|
Hi Florent!

When you do "ls -al /mnt/Ahng2u", do you normally see files in your MooseFS
system? Maybe you dismounted the resource in a "lazy" way (option -l under
Linux) and processes which had the files opened may still read and write to
them? Or mfsmount stopped working because of something else - but if you do
not have ".masterinfo", it means that MooseFS is not mounted there...

Kind regards
-Michał

From: Florent Bautista [mailto:fl...@co...]
Sent: Thursday, July 07, 2011 10:13 AM
To: moo...@li...
Subject: Re: [Moosefs-users] register to master: Permission denied

Of course I have it! Fully mounted using mfsmount and used by a client (many
KVM images are in this share and running). Just this file seems missing...

On 07/07/2011 10:08, Michal Borychowski wrote:

So probably you don't even have the folder /mfs/Ahng2u/ itself?

Regards
Michał

From: Florent Bautista [mailto:fl...@co...]
Sent: Thursday, July 07, 2011 9:38 AM
To: moo...@li...
Subject: Re: [Moosefs-users] register to master: Permission denied

Hi Michal,

I still have the problem since I haven't rebooted the master or the client
(I will migrate the master soon, so ...).

The result of that command is just...
-bash: /mnt/Ahng2u/.masterinfo: No such file or directory

What is this file and why is it missing?

Thank you

On 07/07/2011 08:39, Michal Borychowski wrote:

Hi Florent!

Do you still have this problem? Can you run this command and send us the
results?
"hexdump -C /mnt/Ahng2u/.masterinfo" or "xxd -g1 < /mnt/Ahng2u/.masterinfo"

Regards
-Michał

From: Florent Bautista [mailto:fl...@co...]
Sent: Thursday, June 23, 2011 9:46 AM
To: moo...@li...
Subject: [Moosefs-users] register to master: Permission denied

Hi all,

I have a problem with an installation of MooseFS. MFS is successfully
mounted by a client, all files are readable and writable, but every mfs*
command fails and returns:

register to master: Permission denied

For example:
test1:~# mfsdirinfo /mfs/Ahng2u/
register to master: Permission denied
/mfs/Ahng2u/: can't register to master (.masterinfo)

But I confirm that this client is using the files without problem (some KVM
machines are stored on it and running!). What can be the problem? This is
the first time I'm having this error. I do not see anything in syslog, but
maybe I'm not looking in the right place!

--
Florent Bautista
|
From: Michal B. <mic...@ge...> - 2011-07-11 13:42:18
|
Hi!

The whole system keeps testing the chunks. By default it reads one chunk
every 10 seconds and checks the checksums. Such transfer usually doesn't
influence other operations, and it is not recommended to run these tests
less often. If you really need to change this, you can change HDD_TEST_FREQ
from 10 seconds to 60 or 100. (In our environment the whole testing process
takes 50 days.)

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: i- [mailto:it...@it...]
Sent: Wednesday, July 06, 2011 1:52 PM
To: moo...@li...
Subject: [Moosefs-users] CGI interface charts + performance

Hi all,

I'm running a very simple cluster with 4 machines (1 master+meta+chunk, 2
chunk-only, 1 client, all on the same gigabit network, all servers using
SSDs) and I have 2 questions:

1/ In the server charts, my chunkservers all show a lot of bytes read
(7M/s) even though they are idle and no client is doing anything - how can
this be possible? I also noticed there are 2 colors in the charts: light
green and dark green. The chart shows both dark and light green when I use
the cluster and only light green when I don't. What are the colors for?

2/ I'm using bonnie++ to do some basic performance testing and it shows the
following performance:
- Write  : ~40MB/s
- Rewrite: ~4MB/s
- Read   : ~50MB/s

I guess network latency is the bottleneck here, because "iostat -m -x" shows
small CPU load and small SSD usage on all machines. How can I verify that?

Thank you very much!
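If you do decide to slow the background checksum scan down, it is a single
chunkserver-side setting; 60 below is just the example value from the reply
above, and the file path assumes a standard install:

# /etc/mfs/mfschunkserver.cfg (excerpt; restart the chunkserver to apply)
HDD_TEST_FREQ = 60   # test one chunk every 60 seconds instead of every 10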
|
From: Michal B. <mic...@ge...> - 2011-07-11 13:29:49
|
Hi Robert! We are not sure that switching to 10Gb/s network can help much. Still, all open files will be opened network connections in mfsmount... I would recommend to check the need of opening so many files at one moment... Is it really necessary? Or maybe you can use some mechanisms like memcache, etc.? We don't see any other way to improve the situation of opening so many files. Regards -Michał -----Original Message----- From: Robert Sandilands [mailto:rsa...@ne...] Sent: Tuesday, July 05, 2011 2:32 PM To: Michal Borychowski Cc: moo...@li... Subject: Re: [Moosefs-users] Write starvation Hi Michal, I need this to see if there is a way I can optimize the system to open more files per minute. At this stage our systems can open a few hundred files in parallel. I am not yet at the point where I can do thousands. What I think I am seeing is that the writes are starved because there are too many pending opens and most of them are for reading files. There seems to be a limit of around 2,400 opens per minute on the hardware I have and I am looking at what needs to be done to improve that. Based on your answer it sounds like the network traffic from the machine running mfsmount() to the master may be the biggest delay? Short of converting to 10 GB/s or trying to get all the servers on the same switch I don't know if there is much to be done about it? Robert On 7/5/11 3:15 AM, Michal Borychowski wrote: > Hi Robert! > > Ad. 1. There is no limit in mfsmount itself, but there are some limits in the operating system. Generally speaking it is wise not to open more than several thousands files in parallel. > > Ad. 2. Fopen invokes open, and open invokes (through kernel and FUSE) functions mfs_lookup and mfs_open. Mfs_lookup function changes consequtive path elements into i-node number. While mfs_open makes the target file opening. It sends a packet to the master in order to receive information about possibility to keep the file in the cache. It also marks the file in the master as opened - in cases it is deleted, it is sustained to the moment of closing. > > BTW. Why do you need this? > > > Kind regards > Michał Borychowski > MooseFS Support Manager > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > Gemius S.A. > ul. Wołoska 7, 02-672 Warszawa > Budynek MARS, klatka D > Tel.: +4822 874-41-00 > Fax : +4822 874-41-01 > > > > -----Original Message----- > From: Robert Sandilands [mailto:rsa...@ne...] > Sent: Saturday, July 02, 2011 2:54 AM > To: moo...@li... > Subject: Re: [Moosefs-users] Write starvation > > Based on some tests I think the limit in this case is the number of > opens per minute. I think I need to understand what happens with an open > before I can make guesses on what can be done to get the number higher. > > But then it still does not quite explain the write starvation except if > the number of pending reads are just so much higher than the number of > pending writes that it seems to starve the writes. Maybe this will > resolve itself as I add more chunk servers. > > Some questions: > > 1. Is there a limit to the number of handles that client applications > can open per mount, per chunk server, per disk? > 2. What happens when an application does fopen() on a mount? Can > somebody give a quick overview or do I have to read some code? > > Robert > > On 6/30/11 11:32 AM, Ricardo J. 
Barberis wrote: >> El Miércoles 29 Junio 2011, Robert escribió: >>> Yes, we use Centos, but installing and using the ktune package generally >>> resolves most of the performance issues and differences I have seen with >>> Ubuntu/Debian. >> Nice to know about ktune and thank you for bringing it up, I'll take a look a >> it. >> >>> I don't understand the comment on hitting metadata a lot? What is a lot? >> A lot = reading / (re)writing / ls -l'ing / stat'ing too often. >> >> If the client can't cache the metadata but uses it often, that means it has to >> query the master every time. >> >> Network latencies might also play a role in the performance degradation. >> >>> Why would it make a difference? All the metadata is in RAM anyway? The >>> biggest limit to speed seems to be the number of IOPS that you can get out >>> of your disks you have available to you. Looking up the metadata from RAM >>> should be several orders of magnitude faster than that. >> Yep, and you have plenty of RAM, so that shouldn't be an issue in your case. >> >>> The activity reported through the CGI interface on the master is around >>> 2,400 opens per minute average. Reads and writes are also around 2400 per >>> minute alternating with each other. mknod has some peaks around 2,800 per >>> minute but is generally much lower. Lookup's are around 8,000 per minute >>> and getattr is around 700 per minute. Chunk replication and deletion is >>> around 50 per minute. The other numbers are generally very low. >> Mmm, maybe 2 chunkservers are just too litle to handle that activity but I >> would also check the network latencies. >> >> I'm also not really confident about having master and cunkserver on the same >> server but I don't have any hard evidence to support my feelings ;) >> >>> Is there a guide/hints specific to MooseFS on what IO/Net/Process >>> parameters would be good to investigate for mfsmaster? >> I'd like to know that too! >> >> Cheers, > > ------------------------------------------------------------------------------ > All of the data generated in your IT infrastructure is seriously valuable. > Why? It contains a definitive record of application performance, security > threats, fraudulent activity, and more. Splunk takes this data and makes > sense of it. IT sense. And common sense. > http://p.sf.net/sfu/splunk-d2d-c2 > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > ------------------------------------------------------------------------------ All of the data generated in your IT infrastructure is seriously valuable. Why? It contains a definitive record of application performance, security threats, fraudulent activity, and more. Splunk takes this data and makes sense of it. IT sense. And common sense. http://p.sf.net/sfu/splunk-d2d-c2 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
|
From: Michal B. <mic...@ge...> - 2011-07-11 13:19:59
|
Hi!

These are not trash but "reserved" files. The file is kept open by one of
the mfsmounts. You need to find the process and kill it.

You need to run grep over "metadata.mfs.back", looking for the i-node and
"R", something like:

mfsmetadump metadata.mfs.back | grep ^R | grep 298219

The outcome would be similar to:

R|i: 298219|#:2|e:0|m:0644|u: 536|g: 100|a:1310025540,m:1310025550,c:1310025551|t: 0|l: 12345678|c:(00000000XXXXXXXX,.....)|r:(NNNNNNN)

NNNNNNN is the number of the session (visible in the CGI monitor as
session_id in the Mounts tab) which keeps the file in the system.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Rajeev K Meharwal [mailto:raj...@gm...]
Sent: Wednesday, July 06, 2011 3:53 PM
To: moo...@li...
Subject: [Moosefs-users] under goal chunks and reserved file in meta data.

Hi All,

Can someone help me with the following situation? The CGI interface is
showing 1 undergoal chunk (red), and the GUI is reporting this file under
the 'reserved' metadata area. This has been showing for more than 2 weeks
now.

currently unavailable chunk 00000000003791F5 (inode: 298219 ; index: 2)
+ currently unavailable reserved file 298219: [File_Path_snipped...]/mytestfile
unavailable chunks: 1
unavailable reserved files: 1

I have the trash quarantine set up for 2 days. Per the documentation, files
in the 'reserved' metadata area are deleted files that are still open by
some process, and MFS will remove them once the file is closed.

I have restarted the master and all chunk servers, and rebooted the client
from which this file was actually written in the first place. I mounted the
(META) data area and can see the corresponding file under the 'reserved'
directory, but cannot remove it from there.

Any other way to clean this up?

rxknhe
|
From: Robert S. <rsa...@ne...> - 2011-07-10 21:57:22
|
We have 80 million objects using 27 GB of RAM for mfsmaster. RAM usage does
seem to scale linearly with the number of files.

The limitation is really all about the speed of the metadata access, i.e.
getting the location of a file to open it, and determining the size and
attributes of a file.

Performance would probably degrade significantly with insufficient memory.
It could also introduce network timeouts if you utilize too much swap space
on your master server, and I have a suspicion this could have a negative
effect on the reliability of the file system.

Robert

On 7/10/11 5:24 PM, Vineet Jain wrote:
> I have 16gigs of ram on my meta data server. Is the max number of
> files that can be stored about 45-48 million? I got this number from
> the faq where 25 million files took 8gigs of ram. Is there any way to
> store more number of files other than to increase the ram?
>
> Is there any planned effort to remove this limitation or is this going
> to be around for some time?
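Taken together with the FAQ figure mentioned above (25 million files in 8
GB), these numbers suggest a rough rule of thumb of about 330-350 bytes of
master RAM per filesystem object. The sketch below only restates that
back-of-the-envelope estimate; the target object count is hypothetical and
real usage will vary by workload and version:

# Rough master RAM estimate from the numbers in this thread:
#   27 GB / 80M objects ~= 340 bytes per object
#    8 GB / 25M objects ~= 330 bytes per object (FAQ figure)
objects=48000000   # hypothetical target
echo "approx $((objects * 350 / 1024 / 1024 / 1024)) GB of RAM at ~350 bytes/object"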
|
From: Vineet J. <vin...@gm...> - 2011-07-10 21:24:52
|
I have 16 GB of RAM on my metadata server. Is the max number of files that
can be stored about 45-48 million? I got this number from the FAQ, where 25
million files took 8 GB of RAM. Is there any way to store more files other
than to increase the RAM?

Is there any planned effort to remove this limitation, or is this going to
be around for some time?