From: Wilson, S. M <st...@pu...> - 2015-11-05 14:53:17
Dear Krzysztof,

Thanks for looking into it and locating the cause!

Regards,
Steve

On Thu, 2015-11-05 at 09:07 +0100, Krzysztof Kielak wrote:

Dear Steve,

Thank you for pointing this out. We have checked the issue, and the problem you reported is connected with one of mfsmount's threads, which is waking up too often (when looking at strace, you'll see a lot of nanosleep syscalls). We will tweak the timing configuration for this thread in the next release of MooseFS.

Best Regards,
Krzysztof Kielak
moosefs.com | Director of Operations and Customer Support
Mobile: +48 601 476 440
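A minimal way to confirm this kind of frequent wakeup on an affected client is to count syscalls with strace. This is only a sketch, not something from the thread; <mfsmount-pid> is a placeholder for the PID of an idle mfsmount process, and it assumes strace is installed on the client:

    # Attach to all threads of the idle mfsmount process and count syscalls;
    # press Ctrl-C after ~10 seconds to print the summary table. A client
    # showing the issue should report an unusually large nanosleep count.
    strace -c -f -p <mfsmount-pid>

    # Alternatively, watch the wakeups live with microsecond timestamps,
    # filtered to nanosleep only:
    strace -f -tt -e trace=nanosleep -p <mfsmount-pid>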
On 04 Nov 2015, at 19:03, Wilson, Steven M <st...@pu...> wrote:

Hi,

I've noticed that recent versions of mfsmount (after about version 2.73) always consume CPU cycles even when there is no activity on the mounted MooseFS file system. On one client, for example, top shows between 1.0% and 2.0% CPU utilization for six mfsmount processes and fairly uniform accumulated run times for each of them, yet the user consistently accesses only the same two mounted file systems; the other four aren't accessed. Here's the output of ps on that system, filtered for mfsmount processes:

root@puma:~# ps axu | grep mfsmount | grep -v grep
root      2854  1.4  0.2  529200 140036 ?  S<sl  Oct28  152:15  /usr/bin/mfsmount /net/em -o mfsbind=10.163.216.126 -Hem-data.bio.purdue.edu -S/data
root      2877  1.5  0.2  786140 155568 ?  S<sl  Oct28  154:19  /usr/bin/mfsmount /net/apps -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/apps
root      2893  1.4  0.2  529200 140016 ?  S<sl  Oct28  152:12  /usr/bin/mfsmount /net/cosit -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/cosit
root      2909  1.4  0.2  529200 140012 ?  S<sl  Oct28  152:09  /usr/bin/mfsmount /net/cosit-scratch -o mfsbind=10.163.216.126 -Hsb-data.bio.purdue.edu -S/cosit-scratch
root      4870  1.5  0.0  759432  35536 ?  S<sl  Oct28  158:53  /usr/bin/mfsmount /net/kihara -o mfsbind=10.163.216.126 -Hkihara-data.bio.purdue.edu -S/home
root      4887  1.5  0.0  463664  18976 ?  S<sl  Oct28  152:33  /usr/bin/mfsmount /net/kihara-scratch -o mfsbind=10.163.216.126 -Hkihara-data.bio.purdue.edu -S/scratch

I have about a dozen clients using mfsmount version 2.76 or higher, and they all show this behavior. The remaining ~75 clients use mfsmount version 2.73 or lower and show the expected behavior, i.e., mfsmount CPU utilization correlates with user access of the mounted file systems. Here's the typical ps output for a system using one of the older clients (note the highly variable accumulated run times, which correlate with usage):

root@noro:~# ps axu | grep mfsmount | grep -v grep
root      2283  0.0  0.0  536556   4392 ?  S<sl  Sep09    8:11  /usr/bin/mfsmount /net/em -o mfsbind=10.163.216.90 -Hem-data.bio.purdue.edu -S/data
root      2298  0.3  0.3  825344  38308 ?  S<sl  Sep09  250:02  /usr/bin/mfsmount /net/jiang -o mfsbind=10.163.216.90 -Hjiang-data.bio.purdue.edu -S/home
root      2316  0.0  0.0  774608   1800 ?  S<sl  Sep09   16:13  /usr/bin/mfsmount /net/jiang-scratch -o mfsbind=10.163.216.90 -Hjiang-data.bio.purdue.edu -S/scratch
root      2333  0.0  0.3  815896  40220 ?  S<sl  Sep09   21:07  /usr/bin/mfsmount /net/apps -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/apps
root      2348  0.0  0.0  462620   3052 ?  S<sl  Sep09   11:27  /usr/bin/mfsmount /net/cosit -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/cosit
root      2363  0.0  0.0  462620   3172 ?  S<sl  Sep09    9:56  /usr/bin/mfsmount /net/cosit-scratch -o mfsbind=10.163.216.90 -Hsb-data.bio.purdue.edu -S/cosit-scratch

What is mfsmount doing that would consume CPU cycles when there's no activity on the file system it has mounted? I didn't notice anything in the change log that would account for this.

Thanks!
Steve

------------------------------------------------------------------------------
_________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users
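Since the reply above attributes the idle CPU time to a single mfsmount thread, per-thread CPU accounting from the standard procps tools is usually enough to spot it. Again a sketch rather than anything from the thread, with <mfsmount-pid> standing in for one of the PIDs shown above:

    # One row per thread of the chosen mfsmount process, with per-thread CPU
    # usage; on an otherwise idle mount, the frequently waking thread stands out.
    ps -L -o tid,pcpu,stat,comm -p <mfsmount-pid>

    # The same view, updated interactively:
    top -H -p <mfsmount-pid>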