autofsng-users Mailing List for Autofsng
Brought to you by:
mikewaychison
|
From: Dana W. <Dan...@on...> - 2005-07-27 16:20:18
|
Sorry, meant to post this to the list, not just Mike directly.

Mike,
I am trying to apply the patch to kernel 2.6.11.4-21.7, but when I
patch I get these errors:

Thanks,
Dana

patching file Documentation/filesystems/expire_semantics.txt
patching file Documentation/ioctl-number.txt
Hunk #1 FAILED at 145.
1 out of 1 hunk FAILED -- saving rejects to file Documentation/ioctl-number.txt.rej
patching file arch/i386/kernel/entry.S
Hunk #1 FAILED at 901.
1 out of 1 hunk FAILED -- saving rejects to file arch/i386/kernel/entry.S.rej
patching file fs/Kconfig
Hunk #1 succeeded at 440 (offset -41 lines).
patching file fs/Makefile
Hunk #1 FAILED at 10.
Hunk #2 succeeded at 89 (offset 5 lines).
1 out of 2 hunks FAILED -- saving rejects to file fs/Makefile.rej
patching file fs/afs/cmservice.c
patching file fs/afs/mntpt.c
patching file fs/afs/super.c
patching file fs/autofsng/Makefile
patching file fs/autofsng/autofs.h
patching file fs/autofsng/cachetree.c
patching file fs/autofsng/direct.c
patching file fs/autofsng/indirect.c
patching file fs/autofsng/init.c
patching file fs/autofsng/mapcache.c
patching file fs/autofsng/request.c
patching file fs/autofsng/super.c
patching file fs/libfs.c
Hunk #1 succeeded at 236 (offset 1 line).
patching file fs/mountfd.c
patching file fs/namei.c
Hunk #1 succeeded at 293 (offset 18 lines).
Hunk #2 succeeded at 312 (offset 18 lines).
Hunk #3 succeeded at 782 (offset 27 lines).
Hunk #4 succeeded at 837 (offset 27 lines).
patching file fs/namespace.c
Hunk #1 FAILED at 34.
Hunk #2 succeeded at 73 (offset 1 line).
Hunk #3 succeeded at 127 (offset 1 line).
Hunk #4 succeeded at 338 (offset 1 line).
Hunk #5 succeeded at 467 (offset 1 line).
Hunk #6 succeeded at 576 (offset 1 line).
Hunk #7 succeeded at 624 (offset 1 line).
Hunk #8 succeeded at 668 (offset 2 lines).
Hunk #9 succeeded at 682 (offset 2 lines).
Hunk #10 succeeded at 712 (offset 2 lines).
Hunk #11 succeeded at 775 (offset 2 lines).
Hunk #12 succeeded at 801 (offset 2 lines).
Hunk #13 succeeded at 839 (offset 2 lines).
Hunk #14 succeeded at 884 (offset 2 lines).
Hunk #15 succeeded at 926 (offset 2 lines).
Hunk #16 succeeded at 956 (offset 2 lines).
Hunk #17 succeeded at 986 (offset 2 lines).
Hunk #18 succeeded at 1003 (offset 2 lines).
Hunk #19 succeeded at 1020 (offset 2 lines).
Hunk #20 succeeded at 1038 (offset 2 lines).
Hunk #21 succeeded at 1049 (offset 2 lines).
Hunk #22 succeeded at 1208 (offset 2 lines).
Hunk #23 succeeded at 1233 (offset 2 lines).
Hunk #24 succeeded at 1249 (offset 2 lines).
Hunk #25 succeeded at 1410 (offset 2 lines).
Hunk #26 succeeded at 1448 (offset 2 lines).
Hunk #27 succeeded at 1639 (offset -2 lines).
Hunk #28 succeeded at 1656 (offset -2 lines).
Hunk #29 succeeded at 1693 (offset -2 lines).
Hunk #30 succeeded at 1798 (offset -2 lines).
Hunk #31 succeeded at 1806 (offset -2 lines).
1 out of 31 hunks FAILED -- saving rejects to file fs/namespace.c.rej
patching file fs/nfs/nfs3proc.c
Hunk #1 FAILED at 79.
1 out of 1 hunk FAILED -- saving rejects to file fs/nfs/nfs3proc.c.rej
patching file fs/nfs/proc.c
Hunk #1 FAILED at 62.
1 out of 1 hunk FAILED -- saving rejects to file fs/nfs/proc.c.rej
patching file fs/open.c
Hunk #1 succeeded at 122 (offset 1 line).
Hunk #2 succeeded at 141 (offset 1 line).
patching file fs/super.c
Hunk #1 succeeded at 833 (offset 18 lines).
patching file include/asm-i386/unistd.h
Hunk #1 FAILED at 290.
1 out of 1 hunk FAILED -- saving rejects to file include/asm-i386/unistd.h.rej
patching file include/linux/completion.h
Hunk #1 FAILED at 28.
1 out of 1 hunk FAILED -- saving rejects to file include/linux/completion.h.rej
patching file include/linux/fs.h
Hunk #1 succeeded at 199 with fuzz 1 (offset -16 lines).
Hunk #2 succeeded at 765 (offset 20 lines).
patching file include/linux/kmod.h
patching file include/linux/mount.h
Hunk #1 succeeded at 16 with fuzz 2 (offset 2 lines).
Hunk #2 succeeded at 44 (offset 2 lines).
Hunk #3 succeeded at 151 (offset 2 lines).
patching file include/linux/namespace.h
patching file include/linux/sunrpc/sched.h
Hunk #1 succeeded at 127 (offset 13 lines).
Hunk #2 succeeded at 138 with fuzz 2 (offset 13 lines).
patching file include/linux/sunrpc/xprt.h
patching file kernel/kmod.c
Hunk #1 succeeded at 117 (offset -23 lines).
Hunk #2 succeeded at 150 (offset -23 lines).
Hunk #3 succeeded at 172 (offset -23 lines).
Hunk #4 succeeded at 193 (offset -23 lines).
Hunk #5 succeeded at 205 (offset -23 lines).
Hunk #6 succeeded at 216 (offset -23 lines).
Hunk #7 succeeded at 233 (offset -23 lines).
patching file kernel/sched.c
Hunk #1 succeeded at 3167 with fuzz 2 (offset 210 lines).
patching file net/sunrpc/xprt.c
Hunk #1 succeeded at 77 with fuzz 1 (offset -1 lines).
Hunk #2 succeeded at 1424 with fuzz 1 (offset 23 lines).
Hunk #3 FAILED at 1528.
Hunk #4 succeeded at 1676 (offset 24 lines).
Hunk #5 succeeded at 1690 with fuzz 1 (offset 24 lines).
Hunk #6 succeeded at 1758 (offset 28 lines).
Hunk #7 succeeded at 1769 (offset 28 lines).
1 out of 7 hunks FAILED -- saving rejects to file net/sunrpc/xprt.c.rej

Mike Waychison wrote:
> Dana Wellen wrote:
>> I have been watching the development of the autofsng for a while and
>> would like to try it out. Are there any quick setup guides in order to
>> get started using this automounter? I have a mix of Unix and Linux
>> systems and the Linux automounter has always been a weak spot.
>
> There is a small README in the autofsng tarball
> (http://autofsng.bkbits.net:8080/autofsng/anno/autofsng/README@1.5?nav=index.html|src/|src/autofsng),
> but better documentation still needs to be written.
>
> You will also want to go over the patches in the ./patches directory to
> fix up any userspace utilities.
>
> Of course, you will need the autofsng kernel patch, which unfortunately
> hasn't been updated since 2.6.9. It wouldn't be too hard to update to
> the current tree, except for a couple of changes in fs/*.c that happened
> recently.
>
> One thing to note is that autofsng uses the automount: line in
> /etc/nsswitch.conf, so you will want to make sure it says "automount:
> files nis" if you are in a NIS environment.
>
> Sorry for not having any better docs at this point. Feel free to ask
> any other questions here.
>
> Mike Waychison
|
|
From: Mike W. <mi...@wa...> - 2005-07-25 19:39:20
|
Dana Wellen wrote:
> I have been watching the development of the autofsng for a while and
> would like to try it out. Are there any quick setup guides in order to
> get started using this automounter? I have a mix of Unix and Linux
> systems and the Linux automounter has always been a weak spot.

There is a small README in the autofsng tarball
(http://autofsng.bkbits.net:8080/autofsng/anno/autofsng/README@1.5?nav=index.html|src/|src/autofsng),
but better documentation still needs to be written.

You will also want to go over the patches in the ./patches directory to
fix up any userspace utilities.

Of course, you will need the autofsng kernel patch, which unfortunately
hasn't been updated since 2.6.9. It wouldn't be too hard to update to the
current tree, except for a couple of changes in fs/*.c that happened
recently.

One thing to note is that autofsng uses the automount: line in
/etc/nsswitch.conf, so you will want to make sure it says "automount:
files nis" if you are in a NIS environment.

Sorry for not having any better docs at this point. Feel free to ask any
other questions here.

Mike Waychison
|
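The nsswitch.conf change Mike describes amounts to one line; a sketch (adjust the source order to your site — "files nis" means local map files are consulted before NIS):

```
# /etc/nsswitch.conf -- automount map lookup order for autofsng
automount: files nis
```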
|
From: Dana W. <Dan...@on...> - 2005-07-25 19:30:13
|
I have been watching the development of the autofsng for a while and
would like to try it out. Are there any quick setup guides in order to
get started using this automounter? I have a mix of Unix and Linux
systems and the Linux automounter has always been a weak spot.

Thanks,

Dana Wellen
|
|
From: Brent A N. <br...@ph...> - 2005-06-08 21:56:32
|
On Wed, 8 Jun 2005, Mike Waychison wrote:
> For kicks, I'm curious if you can work around the behaviour by setting
> the gconf setting:
> /apps/nautilus/preferences/show_directory_item_counts = "never".

Nope. Nautilus initiates a huge mount storm as soon as it starts, just
like before (the icons don't even show on the desktop for maybe 10
minutes or so).

Thanks,

Brent
|
|
From: Brent A N. <br...@ph...> - 2005-06-08 21:22:55
|
On Wed, 8 Jun 2005, Mike Waychison wrote:
> For kicks, I'm curious if you can work around the behaviour by setting the
> gconf setting: /apps/nautilus/preferences/show_directory_item_counts =
> "never".
>
I'll see what happens...
> Hmm. I'll have to dig into this further, though having huge inode and dentry
> caches isn't that abnormal, especially in larger memory configs.
>
On that machine, almost the entire RAM was occupied. I didn't think my
ssh would succeed, but it eventually did. That's with no one logged in on
a 512MB machine, and very few processes (nothing of consequence; the OOM
Killer has apparently killed off a bunch of things, including the X
server).
free
                  total       used       free     shared    buffers     cached
Mem:             506748     503120       3628          0       5304       3832
-/+ buffers/cache:          493984      12764
Swap:            524280      17940     506340
Thanks,
Brent
|
|
From: Mike W. <mi...@wa...> - 2005-06-08 20:38:46
|
Brent A Nelson wrote:
> On Wed, 8 Jun 2005, Mike Waychison wrote:
>> Brent A Nelson wrote:
>>> I have some experimental Sarge machines running AutofsNG, that don't
>>> have patches for, in particular Nautilus, so they generate a lot of
>>> mount storms. That's to be expected (I think I'm going to just change
>>> the filesystem name to autofs in the kernel, to avoid patching so
>>> many different things).
>>
>> I don't think changing the name to autofs will solve anything. None of
>> the tools, including nautilus, currently do anything special for
>> 'autofsng' or 'autofs'. In fact, the changes would be required even if
>> autofs4 ghosting were used.
>
> On machines that use Autofs4 ghosting (with direct mounts), nautilus
> doesn't produce mount storms on our sarge installs. Our old RedHat 9
> machines (autofs4, not direct mounted, not ghosted, but with symlinks
> in the root directory to let us fake direct mounts) did have the issue.
> I don't know if this was fixed in more recent versions or if Debian put
> in its own patches, but it does seem to know better than to randomly
> hunt through autofs filesystems (but not, of course, autofsng).

Ah yes, you're right. Autofs4 ghosting isn't vulnerable because of the
difference in the way it is implemented.

For kicks, I'm curious if you can work around the behaviour by setting
the gconf setting:
/apps/nautilus/preferences/show_directory_item_counts = "never".

>>> However, these machines (which would often go through thousands of
>>> mount attempts per day), after a span of a week to several weeks, end
>>> up swapping ridiculously and eventually have too little memory to
>>> accept logins. The memory consumption is not associated with any
>>> processes, so it appears to be a kernel memory leak. Could this be a
>>> slow leak somewhere in AutofsNG?
>>
>> Quite possibly. What does /proc/slabinfo say?
>
> The huge difference seems to be in inode_cache and dentry_cache.
>
> Old autofs4:
> inode_cache    2715   2730  384 10 1 : tunables  54 27 0 : slabdata   273   273 0
> dentry_cache   7238  13132  140 28 1 : tunables 120 60 0 : slabdata   469   469 0
>
> AutofsNG:
> inode_cache  181582 181590  384 10 1 : tunables  54 27 8 : slabdata 18159 18159 0
> dentry_cache 182813 183492  144 27 1 : tunables 120 60 8 : slabdata  6796  6796 0

Hmm. I'll have to dig into this further, though having huge inode and
dentry caches isn't that abnormal, especially in larger memory configs.

>>> Thanks,
>>>
>>> Brent
>>>
>>> PS I have a small wishlist. Are any of the following "coming soon"?
>>> 1) Ability to change/remove registered autofsng mountpoints
>>
>> Hmm. You should be able to pluck them off if you patched mount(8)
>> using the new '-L' option.
>
> This will deregister the namespace for the mountpoint (not just allow
> you to forcibly unmount the NFS mount if it's currently mounted)? Where
> do I find this patch? It's not as easy as running automount again to
> remove things that are no longer mapped or to update modified mappings,
> but it would be a reasonable workaround...
>
>>> 2) The ability to update /etc/mtab rather than having to symlink
>>> to /proc/mounts (which shows some "residue" from the initrd
>>> boot, due to pivot_root, etc.)
>>
>> This is a toughy. The reason for doing the symlink,
>
> This got cut off, but I assume that it is difficult to work with the
> /etc/mtab file from kernel space?

Oops. That should teach me to go for lunch half-way through an email :)

The reason is that autofsng attempts to work well with Linux namespaces,
which are likely to become way more important in the coming years.
Namespaces and /etc/mtab just don't work nicely together, and mtab is
never guaranteed to give you accurate information using anything that
will automount (next-gen cifs, nfs4, autofs). A better fix would be for
the kernel to a) accurately display mount options used for mounts, and
b) figure out a way for losetup to keep working.

Mike Waychison
|
|
From: Brent A N. <br...@ph...> - 2005-06-08 20:03:42
|
On Wed, 8 Jun 2005, Mike Waychison wrote:
> Brent A Nelson wrote:
>> I have some experimental Sarge machines running AutofsNG, that don't
>> have patches for, in particular Nautilus, so they generate a lot of
>> mount storms. That's to be expected (I think I'm going to just change
>> the filesystem name to autofs in the kernel, to avoid patching so many
>> different things).
>
> I don't think changing the name to autofs will solve anything. None of
> the tools, including nautilus, currently do anything special for
> 'autofsng' or 'autofs'. In fact, the changes would be required even if
> autofs4 ghosting were used.

On machines that use Autofs4 ghosting (with direct mounts), nautilus
doesn't produce mount storms on our sarge installs. Our old RedHat 9
machines (autofs4, not direct mounted, not ghosted, but with symlinks in
the root directory to let us fake direct mounts) did have the issue. I
don't know if this was fixed in more recent versions or if Debian put in
its own patches, but it does seem to know better than to randomly hunt
through autofs filesystems (but not, of course, autofsng).

>> However, these machines (which would often go through thousands of
>> mount attempts per day), after a span of a week to several weeks, end
>> up swapping ridiculously and eventually have too little memory to
>> accept logins. The memory consumption is not associated with any
>> processes, so it appears to be a kernel memory leak. Could this be a
>> slow leak somewhere in AutofsNG?
>
> Quite possibly. What does /proc/slabinfo say?

The huge difference seems to be in inode_cache and dentry_cache.

Old autofs4:
inode_cache    2715   2730  384 10 1 : tunables  54 27 0 : slabdata   273   273 0
dentry_cache   7238  13132  140 28 1 : tunables 120 60 0 : slabdata   469   469 0

AutofsNG:
inode_cache  181582 181590  384 10 1 : tunables  54 27 8 : slabdata 18159 18159 0
dentry_cache 182813 183492  144 27 1 : tunables 120 60 8 : slabdata  6796  6796 0

>> Thanks,
>>
>> Brent
>>
>> PS I have a small wishlist. Are any of the following "coming soon"?
>> 1) Ability to change/remove registered autofsng mountpoints
>
> Hmm. You should be able to pluck them off if you patched mount(8) using
> the new '-L' option.

This will deregister the namespace for the mountpoint (not just allow you
to forcibly unmount the NFS mount if it's currently mounted)? Where do I
find this patch? It's not as easy as running automount again to remove
things that are no longer mapped or to update modified mappings, but it
would be a reasonable workaround...

>> 2) The ability to update /etc/mtab rather than having to symlink
>> to /proc/mounts (which shows some "residue" from the initrd
>> boot, due to pivot_root, etc.)
>
> This is a toughy. The reason for doing the symlink,

This got cut off, but I assume that it is difficult to work with the
/etc/mtab file from kernel space?

Thanks,

Brent
|
|
From: Mike W. <mi...@wa...> - 2005-06-08 19:27:32
|
Brent A Nelson wrote:
> I have some experimental Sarge machines running AutofsNG, that don't
> have patches for, in particular Nautilus, so they generate a lot of
> mount storms. That's to be expected (I think I'm going to just change
> the filesystem name to autofs in the kernel, to avoid patching so many
> different things).

I don't think changing the name to autofs will solve anything. None of
the tools, including nautilus, currently do anything special for
'autofsng' or 'autofs'. In fact, the changes would be required even if
autofs4 ghosting were used.

> However, these machines (which would often go through thousands of
> mount attempts per day), after a span of a week to several weeks, end
> up swapping ridiculously and eventually have too little memory to
> accept logins. The memory consumption is not associated with any
> processes, so it appears to be a kernel memory leak. Could this be a
> slow leak somewhere in AutofsNG?

Quite possibly. What does /proc/slabinfo say?

> Thanks,
>
> Brent
>
> PS I have a small wishlist. Are any of the following "coming soon"?
> 1) Ability to change/remove registered autofsng mountpoints

Hmm. You should be able to pluck them off if you patched mount(8) using
the new '-L' option.

> 2) The ability to update /etc/mtab rather than having to symlink
> to /proc/mounts (which shows some "residue" from the initrd
> boot, due to pivot_root, etc.)

This is a toughy. The reason for doing the symlink,

> 3) See, I told you it was a small list. ;-)
|
|
From: Brent A N. <br...@ph...> - 2005-06-08 18:09:34
|
I have some experimental Sarge machines running AutofsNG, that don't have
patches for, in particular Nautilus, so they generate a lot of mount
storms. That's to be expected (I think I'm going to just change the
filesystem name to autofs in the kernel, to avoid patching so many
different things).

However, these machines (which would often go through thousands of mount
attempts per day), after a span of a week to several weeks, end up
swapping ridiculously and eventually have too little memory to accept
logins. The memory consumption is not associated with any processes, so
it appears to be a kernel memory leak. Could this be a slow leak
somewhere in AutofsNG?

Thanks,

Brent

PS I have a small wishlist. Are any of the following "coming soon"?
1) Ability to change/remove registered autofsng mountpoints
2) The ability to update /etc/mtab rather than having to symlink
   to /proc/mounts (which shows some "residue" from the initrd
   boot, due to pivot_root, etc.)
3) See, I told you it was a small list. ;-)
|
|
From: Mike W. <mi...@wa...> - 2005-05-11 23:17:48
|
Brent A Nelson wrote:
> The -null map trick does indeed seem to work fine. It would be nice if
> it automatically handled the situation (such as in the commercial UNIX
> automounts), but short of that, the -null option is great.
>
> Does that work with autofsv4, too?

That's a pretty distribution-specific question. It would be handled in
the initscript, and adding -null checks in auto.master maps could/should
be added there. Then again, I'm not up to speed with autofs4 specifics.

HTH,

Mike Waychison

> Many thanks,
>
> Brent
>
> On Fri, 6 May 2005, Mike Waychison wrote:
>
>> Brent A Nelson wrote:
>>> In an environment which does direct mounts, where you might have a
>>> map entry of, say:
>>> /blah/tmp1 blah:/blah/tmp1
>>>
>>> autofsng (and the direct mount support of autofsv4) will go ahead
>>> and attempt to NFS mount /blah/tmp1, even on the machine "blah".
>>> This prevents access to the actual partition locally and also
>>> prevents the possibility of exporting the filesystem to anyone else.
>>>
>>> I'm guessing there needs to be some code to only perform the
>>> automount if the machine is a remote machine?
>>
>> No, usually if the nfs server is the localhost in question, then the
>> mount is 'copied' on automount. In Linux, this is usually done using
>> 'mount --bind /from /to'. That said, Autofsng is currently lacking
>> the smarts to do this and will indeed try to mount over NFS.
>>
>> This isn't usually a problem unless you configure automount in the
>> way described above. I don't think anybody handles the case you
>> mention.
>>
>> The quick and dirty solution that should work fine in autofsng is to
>> add the following line to your local auto.master map:
>>
>> /blah/tmp1 -null
>>
>> This will essentially override the later occurrence of the direct
>> mount at automountng time.
>>
>> Let me know if this works.
>>
>>> Is that how the Sun automount implements this, or does it try to
>>> avoid mounting over an existing mountpoint, or...?
>>
>> Not that I know of.
>>
>> HTH,
>>
>> Mike Waychison
|
|
From: Brent A N. <br...@ph...> - 2005-05-11 22:55:15
|
The -null map trick does indeed seem to work fine. It would be nice if it
automatically handled the situation (such as in the commercial UNIX
automounts), but short of that, the -null option is great.

Does that work with autofsv4, too?

Many thanks,

Brent

On Fri, 6 May 2005, Mike Waychison wrote:
> Brent A Nelson wrote:
>> In an environment which does direct mounts, where you might have a map
>> entry of, say:
>> /blah/tmp1 blah:/blah/tmp1
>>
>> autofsng (and the direct mount support of autofsv4) will go ahead and
>> attempt to NFS mount /blah/tmp1, even on the machine "blah". This
>> prevents access to the actual partition locally and also prevents the
>> possibility of exporting the filesystem to anyone else.
>>
>> I'm guessing there needs to be some code to only perform the automount
>> if the machine is a remote machine?
>
> No, usually if the nfs server is the localhost in question, then the
> mount is 'copied' on automount. In Linux, this is usually done using
> 'mount --bind /from /to'. That said, Autofsng is currently lacking the
> smarts to do this and will indeed try to mount over NFS.
>
> This isn't usually a problem unless you configure automount in the way
> described above. I don't think anybody handles the case you mention.
>
> The quick and dirty solution that should work fine in autofsng is to
> add the following line to your local auto.master map:
>
> /blah/tmp1 -null
>
> This will essentially override the later occurrence of the direct mount
> at automountng time.
>
> Let me know if this works.
>
>> Is that how the Sun automount implements this, or does it try to avoid
>> mounting over an existing mountpoint, or...?
>
> Not that I know of.
>
> HTH,
>
> Mike Waychison
|
|
From: Mike W. <mi...@wa...> - 2005-05-10 17:33:36
|
Byron Servies wrote:
> Hi,
>
> I am using
>
> 2.6.9autofsng_0.4 #3 SMP Mon Jan 10 10:53:04 PST 2005 i686 i686 i386 GNU/Linux
>
> and experience frequent minute-long delays (which seem like an
> eternity). These occur when I am running normal commands with a path
> that has automounted entries in it.
>
> How do I tell if this is a network problem or autofsng doing its
> periodic mount cleanup scan (or whatever that was, Mike)?
>
> Byron

Well, it could be one of several things going on.

- In older patches, upon expiry of a mountpoint (10 seconds default at
the moment), you'll see system time spin up to 100% for a couple of
seconds, up to a couple of minutes on very large memory configs. This is
due to an inefficiency in invalidating inodes at unmount time. I thought
this was merged into the current autofsng BK, but it appears not,
probably because it has already found its way into later 2.6 upstream
kernels:

http://linux.bkbits.net:8080/linux-2.5/gnupatch@41db80b4zxJOpMXdVUzDVE1mkKcA8Q

The above will probably apply with fuzz.

- Another reason is that your app is validly attempting to walk into an
automounted nfs mount; in that case, any slowdown issues would be nfs
related.

- Also possible is the chance that your shell or commands are
inappropriately trampling on browsed automount points, or direct mount
points. Clearly this isn't the case if you aren't using either of these.

If you suspect that it may be one of the commands issued or your shell,
please check to see if there is a patch in the patches directory that
already fixes the issue. I'd be interested in knowing what other
applications break.

HTH,

Mike Waychison
|
|
From: Byron S. <Byr...@Su...> - 2005-05-10 15:47:47
|
Hi,

I am using

2.6.9autofsng_0.4 #3 SMP Mon Jan 10 10:53:04 PST 2005 i686 i686 i386 GNU/Linux

and experience frequent minute-long delays (which seem like an eternity).
These occur when I am running normal commands with a path that has
automounted entries in it.

How do I tell if this is a network problem or autofsng doing its periodic
mount cleanup scan (or whatever that was, Mike)?

Byron

--
Byron Servies
Sun Microsystems, Inc.
1 (831) 621-9807 voice
1 (831) 621-9807 fax
mail to: byr...@su...
http://www.sun.com

NOTICE: This email message is for the sole use of the intended
recipient(s) and may contain confidential and privileged information.
Any unauthorized review, use, disclosure or distribution is prohibited.
If you are not the intended recipient, please contact the sender by
reply email and destroy all copies of the original message.
|
|
From: Mike W. <mi...@wa...> - 2005-05-06 23:09:54
|
Brent A Nelson wrote:
> In an environment which does direct mounts, where you might have a map
> entry of, say:
>
> /blah/tmp1 blah:/blah/tmp1
>
> autofsng (and the direct mount support of autofsv4) will go ahead and
> attempt to NFS mount /blah/tmp1, even on the machine "blah". This
> prevents access to the actual partition locally and also prevents the
> possibility of exporting the filesystem to anyone else.
>
> I'm guessing there needs to be some code to only perform the automount
> if the machine is a remote machine?

No, usually if the nfs server is the localhost in question, then the
mount is 'copied' on automount. In Linux, this is usually done using
'mount --bind /from /to'. That said, Autofsng is currently lacking the
smarts to do this and will indeed try to mount over NFS.

This isn't usually a problem unless you configure automount in the way
described above. I don't think anybody handles the case you mention.

The quick and dirty solution that should work fine in autofsng is to add
the following line to your local auto.master map:

/blah/tmp1 -null

This will essentially override the later occurrence of the direct mount
at automountng time.

Let me know if this works.

> Is that how the Sun automount implements this, or does it try to avoid
> mounting over an existing mountpoint, or...?

Not that I know of.

HTH,

Mike Waychison
|
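In map form, the override Mike suggests looks like this (paths taken from this thread; a sketch of a local auto.master, with the -null entry masking the shared direct-mount entry on the server itself):

```
# local auto.master on host "blah" (sketch)
/blah/tmp1    -null          # mask the NIS direct-mount entry for this path
/-            auto.direct    # shared direct-mount map, consulted afterwards
```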
|
From: Brent A N. <br...@ph...> - 2005-05-06 22:56:30
|
In an environment which does direct mounts, where you might have a map
entry of, say:

/blah/tmp1 blah:/blah/tmp1

autofsng (and the direct mount support of autofsv4) will go ahead and
attempt to NFS mount /blah/tmp1, even on the machine "blah". This
prevents access to the actual partition locally and also prevents the
possibility of exporting the filesystem to anyone else.

I'm guessing there needs to be some code to only perform the automount if
the machine is a remote machine? Is that how the Sun automount implements
this, or does it try to avoid mounting over an existing mountpoint,
or...?

Thanks,

Brent
|
|
From: Brent A N. <br...@ph...> - 2005-03-25 20:27:45
|
On Thu, 24 Mar 2005, Mike Waychison wrote: > Brent A Nelson wrote: > > 1) We use ld.so.preload to load a security library (libsafe). As it > > loads, it checks for the existence of an optional config file. Somehow > > (non-initialized or too early initialation of your status code?), > > automountng seems to be picking up on this and spits out an error "Problem > > while reading /proc/mounts: No such file or directory", even though an > > strace shows no trouble reading /proc/mounts. It has no trouble when > > libsafe is not preloaded. Let me know if you'd like an strace. > > > > Hmm. That's interesting. An strace would certainly help. Is this the > libsafe package in debian? I can try it with 2.0-16-6 if so. > Yes, straight out of Debian, allowing it to install /etc/ld.so.preload. > > 2) The mount timeout seems to be around 5 seconds, not 5 minutes. The > > '-t' option doesn't seem to have any effect. > > Ah yes. The userspace bits are currently hardcoded to unmount after 10 > seconds. This is done for development/testing purposes. One of the > current missing features is that timeout specifications in the master > map are also not passed down. Similarly, the -t option to automountng > is currently ignored. > > If you wish to increase the timeout value, simply change the > > #define TIMEOUT 10 > > in daemon/autofsng.c to something more reasonable like: > > #define TIMEOUT (10*60) > Okay, will do. > > > > 3) A file-based auto.master doesn't seem to call an NIS-based map. Using > > an NIS-based auto.master seems to work fine (it reads an NIS map that is > > listed in the auto.master). I believe Autofs checks for the existence of > > slashes in the map name to tell whether or not a map is file-based. > > Are you referring to NIS indirect maps listed in your auto.master? > No. 
I first tried a file-based auto.master with just (not counting comments): /- auto.direct -rw,nosuid,bg,intr From an strace, I could see that it read auto.master, but it didn't go to NIS for auto.direct, with or without a line in nsswitch.conf. Switching to "automount: nis", the NIS-based auto.master is read (which has the same line for auto.direct), and it happily calls the NIS-based auto.direct. The automount of Autofs would see the "auto.direct" and, because it doesn't start with a slash, assumes it isn't a file-based map and employs other lookup methods. At least, I believe that's how it works. > Distributions don't understand things like direct mounts, and will try > to unmount them on shutdown, which currently will cause another > automount attempt. I'm contemplating on a proper solution for > 'disabling' automounts system wide so initscripts don't trip up the > system on shutdown. > I'll have to see to what extent this trips up Sarge, but yes, there would need to be a way to shutoff the automounting. Ideally, a plain old umount should not trigger AutofsNG. However, looking at your util-linux patch, your '-N' option (no-follow) may be quite reasonable. Distros could modify their scripts to employ it or umount could be modified to employ it intelligently (really, why wouldn't "no-follow" be the default policy for umount?). > > > > My test setup is Debian Sarge (let me know if you'd like me to create a > > patchset for 2.6.8; this did require a few changes). Any chance of > > creating some Debian packages and getting them into Experimental? > > I haven't yet considered this, however it would still require kernel > support. I'm not quite familiar with debian's policies on packages that > are kernel patch dependent. Going forward I'd love to see it there > though ;) > I'm not familiar either. There are plenty of kernel-patch packages in Sarge (for building your own Kernels). 
There are also packages which do require additional kernel modules to be
installed in order to function. Openafs-client would be an example, which
sets as a dependency either the modules source or the modules themselves...

> > How about other projects that need to be updated to recognize
> > autofsng (it looks like the 'df' patch will be pretty important to us
> > if we go ahead and deploy autofsng; not sure about the other ones)?
>
> Depending on what you need, the patches do become critical. Eg:
> core-utils + ACL patches will trigger automounts for all browsed
> directories. I have some nautilus patches kicking around that I'll
> upload that keep nautilus from automounting filesystems when navigating
> the directory view. As well, the util-linux patches are quite useful as
> they allow you to manually remove a direct mount using the new '-N'
> command line option.

Nautilus used to have that behavior with Autofs, too. It was quite
unpleasant; browsing to the root directory would trigger all of our
mountpoints and the system would be unusable for 10 minutes or so...

It's a pity that you can't also use the "autofs" moniker, or that the
programs don't search for auto* rather than autofs, autofsv4, autofsng,
etc. Maybe if there was some general way to register such mounts (and if
df simply ignored mounts in /proc/mounts that it didn't understand,
rather than stopping at the first one it runs across). Oh, well...

Many thanks,

Brent
From: Mike W. <mi...@wa...> - 2005-03-24 20:21:21
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi Brent,

Brent A Nelson wrote:
> First of all, thanks for AutofsNG! I thought I'd let you know, in case
> you didn't already, that there really is interest in your project. We
> run an environment with a lot of direct mounts, and we quickly ran into
> the limits of AutofsV4's new direct mount support. For one thing,
> Autofs starts up an automount process for every direct mount point (we
> get about 100 automounts running), needlessly sucking up a good chunk
> of memory. Okay, better than nothing (prior to the direct mount
> support, we had to set up a large number of symlinks into the mount
> area for every system and keep them maintained), but then we found
> Autofs mounting on top of filesystems we were trying to export! We had
> to kill the corresponding automount process for that particular
> filesystem to see our filesystem.
>
> Your namespace-based approach seems ideal. Memory usage must be near
> zero, there are no erroneous overmounts, and submounts are fine. Plus
> there's never an automount process to hang up whenever a machine goes
> down (Autofs has often done this over the years), requiring manual
> intervention to get the affected mounts working again (often requiring
> killing off user processes).
>
> I suspect that the kernel module is rather solid at this point,
> although the automountng utility still needs polish and implementation
> of missing options, is that correct? I'd love to see AutofsNG take the
> place of the Linux AutofsV4, so, to help get the ball rolling, here are
> some bugs/missing features I've run across in the automountng command:
>
> 1) We use ld.so.preload to load a security library (libsafe). As it
> loads, it checks for the existence of an optional config file.
> Somehow (non-initialized or too-early initialization of your status
> code?), automountng seems to be picking up on this and spits out an
> error "Problem while reading /proc/mounts: No such file or directory",
> even though an strace shows no trouble reading /proc/mounts. It has no
> trouble when libsafe is not preloaded. Let me know if you'd like an
> strace.

Hmm. That's interesting. An strace would certainly help. Is this the
libsafe package in debian? I can try it with 2.0-16-6 if so.

> 2) The mount timeout seems to be around 5 seconds, not 5 minutes. The
> '-t' option doesn't seem to have any effect.

Ah yes. The userspace bits are currently hardcoded to unmount after 10
seconds. This is done for development/testing purposes. One of the
current missing features is that timeout specifications in the master
map are also not passed down. Similarly, the -t option to automountng
is currently ignored.

If you wish to increase the timeout value, simply change the

#define TIMEOUT 10

in daemon/autofsng.c to something more reasonable like:

#define TIMEOUT (10*60)

> 3) A file-based auto.master doesn't seem to call an NIS-based map.
> Using an NIS-based auto.master seems to work fine (it reads an NIS map
> that is listed in the auto.master). I believe Autofs checks for the
> existence of slashes in the map name to tell whether or not a map is
> file-based.

Are you referring to NIS indirect maps listed in your auto.master?

One thing that greatly differs from autofs4 is that we don't really
support mapsource prefixes, and instead rely heavily on
/etc/nsswitch.conf having a correct entry. If the entry is misparsed or
misunderstood, it should default to using 'automount: files nis'.

If you could post your auto.master file, that would help me better
understand the specific problem.
Also, the following outputs could help better understand where the
breakage is happening:

 - dump of /proc/mounts
 - output of /sbin/dumpmaster

> Despite any issues we've had with the utility, the kernel module itself
> really seems quite slick, and we're considering deploying it on all our
> Linux machines. Note that we haven't tested it when a server goes down,
> but I suspect your design is too simple and elegant for this to be much
> of a problem beyond what we'd expect from NFS (worst case would
> probably be having to manually unmount a filesystem, then AutofsNG
> would recover).

Distributions don't understand things like direct mounts, and will try
to unmount them on shutdown, which currently will cause another
automount attempt. I'm contemplating a proper solution for 'disabling'
automounts system-wide so initscripts don't trip up the system on
shutdown.

> My test setup is Debian Sarge (let me know if you'd like me to create a
> patchset for 2.6.8; this did require a few changes). Any chance of
> creating some Debian packages and getting them into Experimental?

I haven't yet considered this, however it would still require kernel
support. I'm not quite familiar with debian's policies on packages that
are kernel-patch dependent. Going forward I'd love to see it there
though ;)

> Have you already submitted to the Linux kernel maintainers or do you
> still consider it too early?

I've only submitted for code review, but not for inclusion. The
kernelspace calling out to /sbin/autofsng will likely go away as it
doesn't work nicely when you are working with chroots. Fear not though,
as I hope to have a system in place where killing off the 'daemon' is
not a system-breaker. Eg: the system will be able to heal itself by
simply running automountng again.

> How about other projects that need to be updated to recognize autofsng
> (it looks like the 'df' patch will be pretty important to us if we go
> ahead and deploy autofsng; not sure about the other ones)?
Depending on what you need, the patches do become critical. Eg:
core-utils + ACL patches will trigger automounts for all browsed
directories. I have some nautilus patches kicking around that I'll
upload that keep nautilus from automounting filesystems when navigating
the directory view. As well, the util-linux patches are quite useful as
they allow you to manually remove a direct mount using the new '-N'
command line option.

It's a shame that so many packages have to be touched.

I almost forgot: about a month ago I spun RPMs for RHEL4, which are now
available at:

http://waychison.com/autofsng/files/unsupported/rhel4-packages/current/

I'm glad autofsng is working out for you so far.

Thanks for the feedback,

Mike Waychison

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.5 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFCQyDzdQs4kOxk3/MRAlmiAKCUKmdYDMIyUmwnxmBaSooYRAjagQCfYXxb
rlEvm0wcdNSiDANa1EMNX38=
=B/vp
-----END PGP SIGNATURE-----
From: Brent A N. <br...@ph...> - 2005-03-24 19:44:41
First of all, thanks for AutofsNG! I thought I'd let you know, in case
you didn't already, that there really is interest in your project. We run
an environment with a lot of direct mounts, and we quickly ran into the
limits of AutofsV4's new direct mount support. For one thing, Autofs
starts up an automount process for every direct mount point (we get about
100 automounts running), needlessly sucking up a good chunk of memory.
Okay, better than nothing (prior to the direct mount support, we had to
set up a large number of symlinks into the mount area for every system
and keep them maintained), but then we found Autofs mounting on top of
filesystems we were trying to export! We had to kill the corresponding
automount process for that particular filesystem to see our filesystem.

Your namespace-based approach seems ideal. Memory usage must be near
zero, there are no erroneous overmounts, and submounts are fine. Plus
there's never an automount process to hang up whenever a machine goes
down (Autofs has often done this over the years), requiring manual
intervention to get the affected mounts working again (often requiring
killing off user processes).

I suspect that the kernel module is rather solid at this point, although
the automountng utility still needs polish and implementation of missing
options, is that correct? I'd love to see AutofsNG take the place of the
Linux AutofsV4, so, to help get the ball rolling, here are some
bugs/missing features I've run across in the automountng command:

1) We use ld.so.preload to load a security library (libsafe). As it
loads, it checks for the existence of an optional config file. Somehow
(non-initialized or too-early initialization of your status code?),
automountng seems to be picking up on this and spits out an error
"Problem while reading /proc/mounts: No such file or directory", even
though an strace shows no trouble reading /proc/mounts. It has no trouble
when libsafe is not preloaded. Let me know if you'd like an strace.
2) The mount timeout seems to be around 5 seconds, not 5 minutes. The
'-t' option doesn't seem to have any effect.

3) A file-based auto.master doesn't seem to call an NIS-based map. Using
an NIS-based auto.master seems to work fine (it reads an NIS map that is
listed in the auto.master). I believe Autofs checks for the existence of
slashes in the map name to tell whether or not a map is file-based.

Despite any issues we've had with the utility, the kernel module itself
really seems quite slick, and we're considering deploying it on all our
Linux machines. Note that we haven't tested it when a server goes down,
but I suspect your design is too simple and elegant for this to be much
of a problem beyond what we'd expect from NFS (worst case would probably
be having to manually unmount a filesystem, then AutofsNG would recover).

My test setup is Debian Sarge (let me know if you'd like me to create a
patchset for 2.6.8; this did require a few changes). Any chance of
creating some Debian packages and getting them into Experimental? Have
you already submitted to the Linux kernel maintainers or do you still
consider it too early? How about other projects that need to be updated
to recognize autofsng (it looks like the 'df' patch will be pretty
important to us if we go ahead and deploy autofsng; not sure about the
other ones)?

Many thanks!

Brent Nelson
Director of Computing
UF Physics
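On point 3 above: Mike's reply elsewhere in this thread indicates that
AutofsNG does not use slashes or map-source prefixes at all, and instead
drives map lookup from /etc/nsswitch.conf. An illustrative fragment of
the entry involved (the default fallback stated in the thread is
'files nis'):

```
# /etc/nsswitch.conf fragment (illustrative)
# AutofsNG consults this entry to decide where maps like auto.direct
# come from; if it is missing or unparsable, 'files nis' is the default.
automount: files nis
```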
From: Mike W. <mi...@wa...> - 2005-03-21 21:51:09
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Test

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.5 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFCP0GJdQs4kOxk3/MRAv2hAJ0cEir5DHVZCVJNIdyMbG85AeAWNACfTbc3
qPeGSj4r4IwBqIghE+7Ya1M=
=T0iA
-----END PGP SIGNATURE-----