enfs-devel Mailing List for enfs - A user level VFS layer
Brought to you by: tramm
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov  | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|------|-----|
| 2001 |     |     |     |     |     |     |     |     |     | (8) | (20) |     |
| 2002 | (2) | (6) |     |     |     |     |     |     |     |     |      |     |
| 2016 |     |     |     | (1) |     |     |     |     |     |     |      |     |
From: Tanvi S. <ta...@ds...> - 2016-04-28 11:53:06
Hi,

I tried understanding your target market to see if we can help your organization generate leads via email, whether international or domestic. Below is a quick snapshot of our lead-generation strategy.

All we need from you is to identify your target audience. Based on your target market we will scrub our master files and let you know the number of records we have. The minimum volume of emails we send is 5,000/day, scaling up to 20,000 emails per day.

1. We will create a text-based email message and subject line, based on your products/services.
2. Provide a standalone domain to receive replies.
3. Deploy email campaigns for 22 working days.

Your sales team will follow up on leads that come directly to the inbox. We have been catering to companies across the United States, India and European countries. Our services have helped companies save up to 30% of the cost of custom marketing projects.

Let us know your target audience/market and suggest a time to get on a call, to see if we can replicate the success we achieved for our clients.

Regards,
Tanvi Shah
Marketing Executive
91-855-088-9722

If you do not wish to hear from us again, please respond with "REMOVE" and we will honor your request. Disclaimer: if you have received this email in error, or you are not the right person to speak with in this regard, I'd appreciate it if you would forward it to the right person or the person in charge.
From: Johan R. <ch...@ed...> - 2002-02-26 16:58:47
Hello!

On Fri, 22 Feb 2002, Lee Ward wrote:
> I think your biggest problem is going to be that NFS is stateless while
> FTP is stateful. Whether you choose ENFS or something else you'll be
> faced with this problem I think. If the NFS server goes down, all the
> state you will need to keep will disappear. For instance, say the NFS
> client is reading file data and the server reboots or somebody blasts
> the nfsd and restarts it, then the client will persist in trying the
> request. When enfsd returns to service all it has to go on is the file
> handle. For NFSv2, that's 20 bytes (in enfsd) of information. For the
> in-progress v3 that will be 42 bytes (in enfsd again). You are going to
> have to keep full pathnames for active files as well as the server and
> any leading path components for the sub-export external to the NFS
> server. An external flat file or (g)dbm instance?

Okay. I was thinking of mounting the filesystem over the loopback interface, so that each client acts as its own server... I attended a tutorial on Unix network programming at last year's USENIX conference in Boston and got the idea that it might be a good idea to use parts of NFS for this (ftpfs) kind of project.

I'm still in the process of formulating what it is that I want and the best way of going about it. I apologize for being somewhat sluggish in my response, but please bear with me. Currently I haven't got the time that I'd like to spend on this project (work+studies...); I will however keep in touch and will let you know when I have given the idea more thought.

Cheers,
// Johan

--
Johan Rudholm <ch...@ed...>  Systems Administrator
Tel: +46 31 772 58 63, Call: SM6XLV  Chalmers University of Technology
PGP fingerprint: 4D47 9C3F F1D8 8AF5 10B3 66BF 8876 17BB EBC2 9708
Visit me! N57° 41'30'' O11° 58'65''
From: Lee W. <le...@sa...> - 2002-02-26 13:12:09
Johan,

Haven't heard back from you. I was hoping you would keep in touch. I am interested to know what your decision will be regarding an FTP filesystem. I could make good use of such a thing here at Sandia. We use HPSS as our central archive solution and the best (only, for us, right now) interface to that is FTP. A re-export through our ENFS solution would be a "good thing", especially since I've been promising some sort of linkage to HPSS for more than a year :-)

Anyway, keep in touch won't you? Let us know what you're thinking. Thanks.

--Lee

Johan Rudholm wrote:
> Hi!
>
> I have been thinking about what a great idea it would be to have an FTP
> filesystem (similar to what Alex was), which would provide a local
> filesystem view of FTP servers similar to the /net filesystem provided by
> (Solaris only?) automount.
>
> I was thinking of using NFS for this. My primary platform would be
> FreeBSD.
>
> Re-inventing the wheel seemed unnecessary, so I have been looking around
> for similar projects and came upon, amongst others, yours.
>
> I haven't studied any of your code, so at the risk of sounding
> uninitiated: do you think enfs could serve as a wheel for my project?
>
> // Johan
>
> _______________________________________________
> Enfs-devel mailing list
> Enf...@li...
> https://lists.sourceforge.net/lists/listinfo/enfs-devel
From: Lee W. <le...@sa...> - 2002-02-22 16:57:05
Johan Rudholm wrote:
> Hi!
>
> I have been thinking about what a great idea it would be to have an FTP
> filesystem (similar to what Alex was), which would provide a local
> filesystem view of FTP servers similar to the /net filesystem provided by
> (Solaris only?) automount.

I don't know if it's Solaris-only, but it's certainly not there under FreeBSD or Linux.

> I was thinking of using NFS for this. My primary platform would be
> FreeBSD.
>
> Re-inventing the wheel seemed unnecessary, so I have been looking around
> for similar projects and came upon, amongst others, yours.
>
> I haven't studied any of your code, so at the risk of sounding
> uninitiated: do you think enfs could serve as a wheel for my project?

Probably... One piece of core infrastructure is missing that I think you will need. It's also an open argument between Tramm and myself. You will need the full path to the file, for each file. We do not now have anything that will implicitly support that. Under Linux, one has the dinode tree, which will deliver this trivially. In the *BSD variants one could lock entries into the system dcache, I suppose, though that would be dangerous. In this, ENFS, project, so far the file system is expected to keep that information itself if needed. For instance, if you look at the skelfs and procfs code, they do it by maintaining a tree. The nodes in that tree become the "private" info field in the vnode record. Tramm and I do have a kind of tentative agreement that this implicit maintenance of full-path support will be added when needed. If you are serious about pursuing enfs, then your project, together with the existing procfs/skelfs code, would be enough to sway me.

I think your biggest problem is going to be that NFS is stateless while FTP is stateful. Whether you choose ENFS or something else, you'll be faced with this problem I think. If the NFS server goes down, all the state you will need to keep will disappear. For instance, say the NFS client is reading file data and the server reboots, or somebody blasts the nfsd and restarts it; then the client will persist in retrying the request. When enfsd returns to service, all it has to go on is the file handle. For NFSv2, that's 20 bytes (in enfsd) of information. For the in-progress v3, that will be 42 bytes (in enfsd again). You are going to have to keep full pathnames for active files, as well as the server and any leading path components for the sub-export, external to the NFS server. An external flat file or (g)dbm instance?

BTW, I was at a party with Richard Stallman a few years back and he mentioned that someone had done a VFS implementation supporting an FTP file system such as you describe. I don't know if it was exportable via NFS, though. I don't have any more information than that; it was a two-minute conversation and we switched topics to something more appropriate to a birthday party. You might look for that project (maybe in the HURD?) or drop Thomas Bushnell (or whomever maintains the HURD now) and/or Stallman a note asking about it. Maybe it'll save you some effort.

--Lee
From: Johan R. <ch...@ed...> - 2002-02-22 14:38:35
Hi!

I have been thinking about what a great idea it would be to have an FTP filesystem (similar to what Alex was), which would provide a local filesystem view of FTP servers similar to the /net filesystem provided by (Solaris only?) automount.

I was thinking of using NFS for this. My primary platform would be FreeBSD.

Re-inventing the wheel seemed unnecessary, so I have been looking around for similar projects and came upon, amongst others, yours.

I haven't studied any of your code, so at the risk of sounding uninitiated: do you think enfs could serve as a wheel for my project?

// Johan

--
Johan Rudholm <ch...@ed...>  Systems Administrator
Tel: +46 31 772 58 63, Call: SM6XLV  Chalmers University of Technology
PGP fingerprint: 4D47 9C3F F1D8 8AF5 10B3 66BF 8876 17BB EBC2 9708
Visit me! N57° 41'30'' O11° 58'65''
From: Trammell H. <Tra...@ce...> - 2002-02-14 16:22:51
Lee,

I realize that you're hard at work on NFS v3, so I thought you might want to try out the NFS v4 client for Linux:

http://www.citi.umich.edu/projects/nfsv4/

They've put out a press release announcing the GPLed release of the code:

http://lwn.net/2002/0214/a/nfsv4.php3

Trammell
--
-----|-----  hu...@sw...       H 240-476-1373
*>=====[]L\  Tra...@ce...      W 240-453-3317
'  -'-`-     http://www.swcp.com/~hudson/  KC5RNF
From: Hudson, T. B. <Tra...@ce...> - 2002-02-05 20:28:56
Lee and I discussed stacking file systems this afternoon and decided to avoid the complexity of allowing multiple paths to the underlying vnodes. This allows us to hold onto vnodes from the underlying file system without worrying about them getting reclaimed via another path into the filesystem.

The implementation for mounting is also straightforward -- we use the argc+argv that is passed into the mount routine to encode the arguments for this mount, followed by the arguments for the underlying mount. The top-level filesystem calls the underlying mount routine on the current vnode and then covers the vnode with its own filesystem. This way namei (and v_lookup) traverse the cover list to track down .. from a top-level mount.

The interface for the mount command might look like this:

    mount /foo/bar/mountpoint -t layer1 -- -t layer2 -o ro -- -t nfs server:/export/enfs

This does change the semantics of the mount command, but hopefully in a manner that makes it more flexible.

Trammell
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Tramm H. <hu...@sw...> - 2002-01-24 15:35:51
Readme is here:

http://www.sciencething.org/geekthings/UVFS_README.html

Yet another one. The author acknowledges userfs, PODFUK and fuse as prior art, but was dissatisfied with them. Requires a kernel module to communicate with the user-space driver. The code has a simplistic array of request slots (currently four entries), with no continuations or other fancy implementation features.

The README brings up a possible deadlock in the Linux VFS layer, relating to one VFS calling rmdir on another VFS, and has a workaround. The README also complains about hard links. I think they are a good thing, but he seems quite vehemently against them.

Trammell
--
-----|-----  hu...@sw...       H 240-476-1373
*>=====[]L\  Tra...@ce...      W 240-453-3317
'  -'-`-     http://www.swcp.com/~hudson/  KC5RNF
From: Hudson, T. B. <Tra...@ce...> - 2002-01-23 15:15:25
Lee,

The programs in question are fsx (File System Exerciser), which is a standard BSD program, and another from Apple named xnu. I'm working on downloading it; the license may not be compatible with the GPL for inclusion in our test suite.

Trammell
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Lee W. <le...@sa...> - 2001-11-21 17:30:20
All,

I'm having some trouble deciding just how to do the NFS V3 implementation. The problem is, I've started down a path that is proving untenable.

What I've done so far is to reorganize the NFS V2 code into client and server directories, with subdirectories by version, like this:

    nfs -> clnt -> v2
                -> v3
        -> srvr -> v2
                -> v3

Then I lifted the infrastructure code (cached attributes, nfs_findi, etc.) out of the V2 code and dropped it into the "clnt" and "srvr" directories. Finally, I altered the nfs_inode to try to account for supporting the different file handles and attribute records. What I've *really* done is to make a base-class NFS inode with two sub-classes, one for each version. I cheated, though, by just using a V3 file handle (it'll record both V2 and V3 handles with equal ease) and replacing the V2 attributes with a union of the two kinds of attributes. The idea is that the VOP_ routines know which kind of inode they are dealing with and can choose the appropriate union member correctly, without further knowledge. I.e., the "base-class" record definition wasn't really, itself, sub-classed.

This all seemed like a good idea at the time. I've made good progress on the V3 implementation, but now I'm trying to implement VOP_READDIR. The V3 READDIR3 and READDIRPLUS3 calls need a verifier cookie when using a non-zero directory offset. I.e., if the entire directory can't be enumerated in one call, the continuation requires a verifier so that the server can make a determination about the validity of the given offset. These cookies are returned by the readdir and readdir+ calls. I need to store them in the inode record so that I can use them in these potential continuations. The natural thing, based on what I've already done, is to just change the union in the nfs-inode from:

    union {
        fattr2  nfsi_attrs2;
        fattr3  nfsi_attrs3;
    } nfsi_attrs;

to:

    union {
        fattr2  nfsi_attrs2;
        struct {
            fattr3  nfsi_attrs3;
            char    nfsi_dircookie[NFS3_COOKIEVERFSIZE];
        } nfsi_attrs3;
    } nfsi_attrs;

While the V2 code would be unaffected by this change, the V3 code is not so lucky. Everything in the V3 code that needs to set the attributes now needs to know whether it's working with a directory or some other kind of object, since we need to invalidate these cookies independently of the attributes -- for instance, when we do a rename (target parent != source), create, mkdir, or unlink. This would be OK, I suppose, but the V3 implementation balloons and is cumbersome.

It occurs to me that I've made a fundamental mistake in the implementation design. I've shoved all the subclass-only info up into the base record. However, to do otherwise, I would have to either alter (or provide anew for each version) the new-inode function to account for different sizes/kinds of info, or pass the generic, existing new-inode function a pointer to subclass data. Then there's still the problem of whether I'm dealing with a directory or not. How does the super find its base record info? It goes on...

Worse, all this is going to be worse with NFS V4. In V4, there is now state kept. This state will have to be stored in the sub-class record. To just provide the super with indirect function pointers is attractive, but could be more and more problematic.

In short, I don't like anything I've come up with much. Some helpful thoughts would be appreciated.
From: Hudson, T. B. <Tra...@ce...> - 2001-11-20 17:14:42
I see that there are some rpcgen'ed files that have been hand-hacked and checked into the repository. The one that caused some recent problems was "nexus/include/rpcsvc/enfs_prot.h". It appears to be hacked to alias most of the enfs calls to the nfs calls. Is this a good idea?

Additionally, there appears to be a problem with the "include/cmn.h" definition of ALIAS_TO(). If the compiler is not gcc, the macro is defined as a NOP. This means that if we tried to compile on a non-gcc platform, we would have neither the functions nor the aliases, and the link (or module load) would fail with undefined symbols.

Trammell
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Hudson, T. B. <Tra...@ce...> - 2001-11-20 15:14:42
Folks,

Last night's mega-checkin appears to still work. Yeah! I haven't run extensive tests on it, but the preliminary cthon99 tests seem to be passing.

Among the many formatting changes that were made, I fixed almost every gcc warning. The ones in our code were fairly easy. The Sun code, however, was a little harder, so I just turned off warnings when we compile their code and don't worry about it.

The other major change that I am sweeping through the code right now is Makefile cleaning. We have lots of similar tasks in many of the Makefiles, all of which could be written into rules. This does mean some amount of name changing for consistency:

    foo.x -+--> foo_clnt.c
           +--> foo_xdr.c
           +--> foo_svc.c
           +--> foo.h

Additionally, the compilation of xdr files will be handled by the following rule, to avoid the warnings caused by rpcgen:

    %_xdr.o: %_xdr.c
        $(CC) $(CFLAGS) -Wno-unused -c -o $@ $<

Any thoughts on the naming scheme or changes?

Trammell
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Hudson, T. B. <Tra...@ce...> - 2001-11-19 20:54:36
Lee wrote:
> "Hudson, Trammell B." wrote:
> > If the file is to be licensed (i.e., not in the public
> > domain), then it should state that. If it is in the public
> > domain (i.e., not licensed) then it should state so.
>
> I vote we avoid the issue and deprecate the file. We're using
> malloc now anyway.

Ok. That's just as easy. I'll also take out the real-clean target in tonight's mega checkin.

The ssh situation at Celera has not been resolved, although I have talked to the "security" guys about it.

Trammell
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Hudson, T. B. <Tra...@ce...> - 2001-11-19 20:13:39
Ruth,

> You're right, it pretty much rm's everything, and it's like that in
> fs/nfs also. Must be I *really* haven't used it much ;)

I think that you only get to use it once...

> According to the log it was added on purpose, but it predates my
> presence, Lee might remember why it is like that.

Hopefully he'll chime in here any minute now. Perhaps it is the panic button.

Trammell
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Ruth K. <rk...@sa...> - 2001-11-19 20:05:40
You're right, it pretty much rm's everything, and it's like that in fs/nfs also. Must be I *really* haven't used it much ;) According to the log it was added on purpose, but it predates my presence, Lee might remember why it is like that.

Ruth

"Hudson, Trammell B." wrote:
> Ruth,
>
> > The source code removed should only be rpcgen generated code, those c
> > files don't get checked in. It's the corresponding .x files
> > that are in the repos. I've never had any trouble with it - but I haven't
> > used it that much either.
>
> The nexus/kern/Makefile 'real-clean' target removes $(SRC), which includes
> lots of files that are not rpcgen output. It also removes the Makefile
> itself to make it harder to find out what happened...
>
> It appears that the regular 'clean' target removes the rpcgen output in most
> directories.
>
> I don't have access to historical CVS history; is it like that in the Sandia
> tree as well?
>
> Trammell
From: Hudson, T. B. <Tra...@ce...> - 2001-11-19 19:53:00
Ruth,

> The source code removed should only be rpcgen generated code, those c
> files don't get checked in. It's the corresponding .x files
> that are in the repos. I've never had any trouble with it - but I haven't
> used it that much either.

The nexus/kern/Makefile 'real-clean' target removes $(SRC), which includes lots of files that are not rpcgen output. It also removes the Makefile itself, to make it harder to find out what happened...

It appears that the regular 'clean' target removes the rpcgen output in most directories.

I don't have access to historical CVS history; is it like that in the Sandia tree as well?

Trammell
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Ruth K. <rk...@sa...> - 2001-11-19 19:44:26
The source code removed should only be rpcgen-generated code; those .c files don't get checked in. It's the corresponding .x files that are in the repos. I've never had any trouble with it - but I haven't used it that much either.

Could rename it to something else, like clean-rpc, to make it explicit. It is probably needed when you are messing about with the .x files, though, or you might not get a clean re-build.

Ruth

"Hudson, Trammell B." wrote:
> I'm worried about the "real-clean" Makefile target. Unlike most clean
> targets, it actually removes source code. Perhaps I'm worried that one day
> I'll type it in by accident and lose my uncommitted changes.
>
> Does anyone else have this concern? Has anyone ever used the real-clean
> target?
>
> Trammell "real-clean" Hudson
From: Hudson, T. B. <Tra...@ce...> - 2001-11-19 19:42:47
In nexus/cmn/heap.c, the license states that the file has been released into the public domain, but is subject to certain restrictions. I'm not sure how the legal team managed to justify that one; "public domain" has a specific meaning and does not allow any extra restrictions. If the file is to be licensed (i.e., not in the public domain), then it should state that. If it is in the public domain (i.e., not licensed), then it should state so.

Reference: http://www.templetons.com/brad/copymyths.html

Trammell
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Hudson, T. B. <Tra...@ce...> - 2001-11-19 19:11:52
I'm worried about the "real-clean" Makefile target. Unlike most clean targets, it actually removes source code. Perhaps I'm worried that one day I'll type it in by accident and lose my uncommitted changes.

Does anyone else have this concern? Has anyone ever used the real-clean target?

Trammell "real-clean" Hudson
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Hudson, T. B. <Tra...@ce...> - 2001-11-19 14:58:21
[ Apologies for the formatting. Outlook is braindead ]

The Sun RPC library files that we are using have the following license:

    * Sun RPC is a product of Sun Microsystems, Inc. and is provided for
    * unrestricted use provided that this legend is included on all tape
    * media and as a part of the software program in whole or part. Users
    * may copy or modify Sun RPC without charge, but are not authorized
    * to license or distribute it to anyone else except as part of a product or
    * program developed by the user.

Although the license appears to be internally inconsistent, it sounds to me as if we do not have the rights to re-license the code in the RPC library, nor is it compatible with the GPL. It is not inconsistent because "unrestricted use" does not address distribution, while the second sentence addresses redistribution rights. It does not address GPL-style distribution, in which the code is being licensed as part of a product with the source code included.

The "tape media" comment is rather quaint. I suppose tar files, RPMs and floppies need not carry the copyright notice.

Trammell
--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Hudson, T. B. <Tra...@ce...> - 2001-11-19 14:43:11
Folks,

At lunch a few weeks ago Lee and I discussed non-blocking RPC libraries. I was concerned with the cost and overhead of threads, as well as the complexity for filesystem writers who were unaccustomed to threads. We agreed that without an actual coder who complained, the issue was moot.

On a whim this morning I decided to check what other RPC libraries did, and found a paper from USENIX 98 in which the authors used deferred continuations:

http://www.usenix.org/publications/library/proceedings/usenix98/full_papers/anderson/anderson_html/node11.html

They have zero-copy RPC in their operating system, among other neat features. The continuations are generated in an interrupt service routine, which is an interesting way to schedule a "bottom half" handler to run.

Trammell
From: Hudson, T. B. <Tra...@ce...> - 2001-11-15 15:42:51
This time it's doing a loadable module and no NFS layer:

http://www.inf.bmu.hu/~mszeredi/avfs/

Or:

http://sourceforge.net/projects/avf

I'm still reading the docs.

--
H: hu...@sw... 240 476 1373
W: Tra...@ce... 240 453 3317
From: Lee W. <le...@sa...> - 2001-11-09 22:41:10
Tramm Hudson wrote:
> What is the difference between unlink and rmdir? I know that the
> POSIX semantics allow for a distinction, but they do not mandate
> it.

I know of nothing. However, I suspect this is something I just don't know.

> And how does it affect filesystems that allow directories to contain
> data other than files? This is most common in "registry" style file
> systems that map tables or data into the file namespace.

Can't do part 2 without having an answer for part 1, sorry.

> Trammell
From: Tramm H. <hu...@sw...> - 2001-11-09 16:52:14
What is the difference between unlink and rmdir? I know that the POSIX semantics allow for a distinction, but they do not mandate it.

And how does it affect filesystems that allow directories to contain data other than files? This is most common in "registry" style file systems that map tables or data into the file namespace.

Trammell
--
 o  hu...@sw...                               O___|
/|\ http://www.swcp.com/~hudson/  M 240.476.1373  /\ \_
<<  KC5RNF                        H 505.315.5133  \ \/\_\
 0                                             U    \_ |
From: Tramm H. <hu...@sw...> - 2001-11-09 15:46:35
Do the NFS semantics allow permissions on extents of files? I see that read and write both take credentials, so it could be implemented in the server.

Trammell
--
 o  hu...@sw...                               O___|
/|\ http://www.swcp.com/~hudson/  M 240.476.1373  /\ \_
<<  KC5RNF                        H 505.315.5133  \ \/\_\
 0                                             U    \_ |