From: Boris P. <bpr...@ho...> - 2011-03-17 19:23:26
Ok... never mind, experimental error. Sorry for the noise, Boris.

Hi guys,

I was hoping someone on the list could help explain the results of an experiment I set up earlier today.

I have a storage backend that uses the libfuse high-level interface with the use_ino option (but without noforget). Inode numbers supplied by the backend are persistent and unique. I read the README.NFS file in the libfuse distribution, and it kinda made sense. However, I still tried the following experiment to see what happens:

- started the backend server process, mounted the fuse filesystem, and exported the fuse mount point over NFS
- mounted the share from another machine
- created some files and dirs from the NFS client; did ls, cat, etc.
- stopped the NFS server on the storage server machine, unmounted the fuse filesystem, and terminated the backend storage server process

At this point, if I am not mistaken, since the filesystem is unmounted and the NFS server is stopped (and also because of the default attribute/dentry timeout values of 1 second), anything that was cached on the server must have been gone from the kernel caches. Additionally, since the fuse process (the backend storage server process) is terminated, all the state maintained in libfuse is gone as well.

So I restart the backend server process, mount the fuse filesystem, start the NFS server again, and go back to the client to see what's up. To my surprise, after a short delay, I get my NFS-mounted directories and files back, without any "stale handle" errors. This seems to contradict the info in README.NFS, and I am not sure how to explain this behavior based on my understanding of how fuse works.

Where does libfuse get the translations from inode numbers in NFS handles to paths? How can it look up a file three subdirs below the fuse fs root dir with no prior state in libfuse?

Thanks in advance,
Boris.
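[A toy sketch, for readers following the question above. This is not libfuse code and not how the kernel fuse module actually works; all names here (PersistentBackend, disk_table, resolve) are hypothetical. It only illustrates the property the experiment hinges on: if an NFS file handle effectively encodes an inode number, and the backend can map that number back to an object from durable metadata alone, then lookups can succeed even after the server process is restarted and all in-memory state is lost.]

```python
class PersistentBackend:
    """Hypothetical backend whose inode -> path mapping lives in
    durable storage rather than in process memory."""

    def __init__(self, table):
        # 'table' stands in for on-disk metadata that survives restarts.
        self.table = dict(table)

    def resolve(self, ino):
        # Resolve purely from persistent data; no in-memory cache
        # built up by earlier lookups is required.
        return self.table.get(ino)


# Durable metadata (outlives any single server process).
disk_table = {1: "/", 2: "/dir", 3: "/dir/file.txt"}

server = PersistentBackend(disk_table)
handle_ino = 3  # the NFS handle held by the client encodes this inode

# Simulate a full restart: discard the process state, rebuild from disk.
del server
server = PersistentBackend(disk_table)

# The old handle still resolves, with no prior in-process state.
print(server.resolve(handle_ino))  # -> /dir/file.txt
```

Under this (assumed) model, a "stale handle" error would only appear if the inode-to-object mapping itself were lost or changed across the restart.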