Re: [Netnice-kernels] NNFS was: Re: Sourceforge Account
From: Scott B. <sco...@ve...> - 2004-09-29 15:20:38
Hi Takashi,

> my point is this: please use the CDROM system, to learn how it works. it
> is a working specification, you need to use for your linux-NNFS. but, to
> study how it is implemented on FreeBSD, please use the FreeBSD5/NNFS
> implementation, by checking out the source from our CVS repository.
> otherwise, you'll have a hard time to understand the messy implementation.
> (i think you're on the right track about this, right? since you're mentioning
> nnfs_vncache_alloc()... just in case.)
>
> http://cvs.sourceforge.net/viewcvs.py/netnice/FreeBSD5/?only_with_tag=netnice53
>
> thanks!!

Thanks, I was working from the FreeBSD5 branch with the netnice521 tag. I have since run cvs update -r netnice53 and will reference those files in the future.

I have been using the Boot CD and experimenting with the fundamentals of the nnfs filesystem: reading /proc/network/lo0/vif1, setting a symlink in /proc/curproc/sockets, and using telnet to localhost. I am getting more comfortable with the internals.

The VFS interface differs between Linux and FreeBSD in that the Linux VFS is more object-oriented. In the FreeBSD NNFS there are many functions with switch statements branching on the type of NNFS node (root vif, vif, virtual file, etc.). In the Linux VFS, each inode (vnode on BSD) carries function pointers that differ depending on the inode type. I think this will make the code smaller in the end.

I think I should proceed with the mkdir/rmdir operations first; they are relatively straightforward and may form a small piece of the control file implementation. Next should come the implementation of the <pid>/sockets directory; the code here will come from the existing procfs code. After that would be the symlink operation between the socket and the vif, which I suspect may be the most difficult.

I will be going out of town tomorrow and will return Tuesday. Looking forward to a conference call next week when I return.

Thanks,

Scott Brumbaugh