From: Rogelio M. S. Jr. <ro...@ev...> - 2000-03-01 02:35:42
> If capabilities are to reside in - unprotected - user space we'll need
> to use encryption, but AFAIK revocation isn't easier with encrypted
> capabilities than with clists in kernel space (IMHO it would be easier
> with the kernel controlled clists).
>
> /Kasper

We can just replace the random number in the protected object can't we?
Encryption is also reliable. I'm afraid I don't see how in-kernel clists
would be easier to manage than otherwise.

From: Rogelio M. S. Jr. <ro...@ev...> - 2000-03-01 02:18:44
I just read through the paper. I think it is a cool idea. We should adopt
it, not only for interrupt handling but also for the block drivers. We
should explore it more.

From: Kasper V. L. <ve...@da...> - 2000-02-29 16:51:50
In this mail I'll try to explain a way of implementing 'small address
spaces', which rely on segmentation for protection rather than paging.
The motivation for such address spaces is that switching between them
doesn't require context switches.

1. Memory Layout

   0x00000000 - 0x0fffffff : Physical memory and kernel
   0x10000000 - 0x3fffffff : Small address space region
   0x40000000 - 0xffbfffff : Application virtual memory
   0xffc00000 - 0xffffffff : Page directory and page tables mapping

2. Small Address Space Region

   The small address space region consists of 192 small address spaces -
   each 4Mb in size. A process can allocate a slot and map 4Mb (one page
   directory entry) of its virtual memory into the small address space
   slot. All small address spaces (SASes) are shared among all address
   spaces in the system, and segmentation is employed to ensure proper
   protection:

   * the code and data segments of applications have a base of 0x40000000.
   * each SAS has a separate code and data segment restricting access to
     the SAS itself.

3. Protected Control Transfer

   When registering the PCT entry a process can request that the PCT go
   to a specified SAS. This applies to uncommanded PCTs (like IRQ
   handling) as well. As a consequence, handling the firing of an IRQ is
   reduced to (assuming the handler is in a SAS):

   1) trap to the kernel
   2) upcall to the SAS
   3) handle the IRQ
   4) yield to the kernel
   5) return to the interrupted process

   The gain is that we avoid two context switches.

4. Scheduling

   It should be possible to schedule SASes. When allocating a CPU quantum
   a process can specify that the prologue/epilogue upcalls should go to
   a specified SAS - instead of a full application address space.

5. Considerations

   * is 192 SASes enough, and is 4Mb a reasonable limit?
   * can a process have multiple SASes?

What do you think? It should make quite a difference performance-wise,
without sacrificing protection and modularity.

/Kasper

-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

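[A minimal sketch of how the 192-slot SAS region described above could be
tracked. This is not taken from the Elysium sources; all names (sas_slot,
sas_alloc, ...) are hypothetical, and only the constants follow Kasper's
layout.]

    /* Sketch of the small-address-space (SAS) region: 192 slots of 4Mb
     * each, starting at 0x10000000. One slot holds one page directory
     * entry that is mapped into every address space in the system. */

    #include <stdint.h>
    #include <stddef.h>

    #define SAS_BASE        0x10000000u
    #define SAS_SLOT_SIZE   0x00400000u   /* 4Mb = one page directory entry */
    #define SAS_SLOT_COUNT  192

    struct sas_slot {
        int      owner_pid;   /* 0 = slot is free */
        uint32_t pde;         /* owner's 4Mb mapping, shared by all spaces */
    };

    static struct sas_slot sas_slots[SAS_SLOT_COUNT];

    /* Allocate a free slot and record the owner's 4Mb mapping. Returns
     * the slot's linear base address, or 0 if no slot is free. */
    uint32_t sas_alloc(int pid, uint32_t pde)
    {
        for (size_t i = 0; i < SAS_SLOT_COUNT; i++) {
            if (sas_slots[i].owner_pid == 0) {
                sas_slots[i].owner_pid = pid;
                sas_slots[i].pde = pde;
                return SAS_BASE + (uint32_t)i * SAS_SLOT_SIZE;
            }
        }
        return 0;
    }
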
From: Kasper V. L. <ve...@da...> - 2000-02-29 11:49:25
[snip: using ASHs for interrupt handling]

> Yes, we do. But I'm not sure ASHs is the right solution. Personally I
> would rather go for the

Maybe I should elaborate a bit :-)

We can avoid the context switches associated with interrupt handling by
mapping a set of 'small address spaces' into all address spaces, and
protecting them using segmentation. To the affected processes (the process
running when the interrupt handler is invoked, and the interrupt handler
process) there will be no difference. They'll still run in a virtual
address space of their own, although they are now protected by means of
segmentation instead of paging.

I refer to "Improved Address Space Switching on Pentium Processors by
Transparently Multiplexing User Address Spaces" (J. Liedtke), which is
available from

  http://i30www.ira.uka.de/publications/pubcat/As-pent.ps

/Kasper
-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

From: Kasper V. L. <ve...@da...> - 2000-02-29 11:34:01
"Rogelio M. Serrano Jr." wrote:
>
> > > Why don't we use ASH's (Application Specific Handlers)?
> > > Isn't that the way it is done on XOK?
> >
> > As the solution to what?
>
> Interrupt handling. But then we still have to see how to run the
> interrupt handler. In its own task or within the context of the
> interrupted task. We currently use the former scheme right?

Yes, we do. But I'm not sure ASHs is the right solution. Personally I
would rather go for the

> > > Can we use the I/O permission map to control access to io ports?
> >
> > That's the plan. Each address space will have its own I/O permissions.
>
> I see. That also means a TSS for each address space.

Not necessarily. I'm planning to use only one TSS. To make it work the
I/O permission bitmap will simply be mapped differently in different
address spaces. That's why each address space - and not each process -
has its own I/O permissions. The other parts of the TSS will be shared
among all processes.

> > > Can we do capabilities the way it was done in Amoeba?
> >
> > What do you mean? Securing them by means of encryption?
>
> Yes and more. Revocation would simply involve changing a random number.
>
> Like:
>   struct cap {
>       int own_pid;
>       int obj_id;
>       int rights;
>       /* random number which is a copy of the random number stored in
>          the protected object */
>       long check;
>   };
>
> Revocation would be simpler this way.

If capabilities are to reside in - unprotected - user space we'll need
to use encryption, but AFAIK revocation isn't easier with encrypted
capabilities than with clists in kernel space (IMHO it would be easier
with the kernel controlled clists).

/Kasper
-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

From: Rogelio M. S. Jr. <ro...@ev...> - 2000-02-29 11:07:22
> > Why don't we use ASH's (Application Specific Handlers)?
> > Isn't that the way it is done on XOK?
>
> As the solution to what?

Interrupt handling. But then we still have to see how to run the interrupt
handler. In its own task or within the context of the interrupted task.
We currently use the former scheme right?

> > Can we use the I/O permission map to control access to io ports?
>
> That's the plan. Each address space will have its own I/O permissions.

I see. That also means a TSS for each address space.

> > Can we do capabilities the way it was done in Amoeba?
>
> What do you mean? Securing them by means of encryption?

Yes and more. Revocation would simply involve changing a random number.
Like:

  struct cap {
      int own_pid;
      int obj_id;
      int rights;
      long check; /* random number which is a copy of the random number
                     stored in the protected object */
  };

Revocation would be simpler this way.

> I can't see why it shouldn't be 'exo-ish'. The primary exo principle is
> that the system shouldn't enforce any abstractions that aren't strictly
> necessary for protection. If the kernel can make sure that processes are
> still protected from each other without using multiple address spaces I
> see no reason not to. On the other hand I can't see the benefits of
> mapping all *physical* memory into one address space.

I agree.

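[A minimal sketch of the Amoeba-style check-field scheme being discussed
here. It only illustrates the validate/revoke idea; obj_lookup(), the
protected_obj type and the random-number choice are made-up placeholders,
not an actual Elysium interface.]

    /* Validation and one-step revocation with a per-object random check
     * field, as in the struct cap above. */

    #include <stdbool.h>
    #include <stdlib.h>

    struct protected_obj {
        int  obj_id;
        long check;          /* random number copied into every capability */
    };

    struct cap {
        int  own_pid;
        int  obj_id;
        int  rights;
        long check;          /* must match the object's current check value */
    };

    extern struct protected_obj *obj_lookup(int obj_id);   /* hypothetical */

    bool cap_valid(const struct cap *c, int required_rights)
    {
        struct protected_obj *o = obj_lookup(c->obj_id);
        return o != NULL
            && o->check == c->check
            && (c->rights & required_rights) == required_rights;
    }

    /* Revoke every outstanding capability for an object in one step by
     * picking a new random check value. */
    void cap_revoke_all(struct protected_obj *o)
    {
        o->check = ((long)rand() << 16) ^ rand();
    }
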
From: Kasper V. L. <ve...@da...> - 2000-02-28 11:30:55
Claus Matthiesen wrote:
[snip]
> As far as I see, mapping all physical memory in one address space seems
> a very viable abstraction, even though it can be argued that it's not
> the most exo-ish way of doing it (*I* think it is. I just said it could
> be argued that it wasn't).

I can't see why it shouldn't be 'exo-ish'. The primary exo principle is
that the system shouldn't enforce any abstractions that aren't strictly
necessary for protection. If the kernel can make sure that processes are
still protected from each other without using multiple address spaces I
see no reason not to. On the other hand I can't see the benefits of
mapping all *physical* memory into one address space.

/Kasper
-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

From: Kasper V. L. <ve...@da...> - 2000-02-28 11:23:57
Adam Langley wrote:
> I know it's against the principles of exokernels... but why put the disk
> server in a separate process? It's going to have to be a secure server
> (which applications can't overwrite) so why not put it in the kernel?
> Putting it in a process only gives us more context switches. Are there
> any reasons for a server? I guess it's easier to replace, but we could
> have kernel modules.

It's not necessarily true that we'll have more context switches by using
a separate process for the disk block server. Jochen Liedtke has proposed
mapping any commonly used servers into all address spaces (just as the
kernel is) and protecting them by means of segmentation. Imagine having
the following layout:

  0      - 256 Mb : Kernel and kernel accessible physical memory
  256 Mb - 3 Gb   : Application virtual memory
  3 Gb   - 4 Gb   : Shared mapping of servers

The data segment of the applications should have a base of 0 and a limit
of 3 Gb. It would be impossible to access memory in the shared servers.
The benefits of this approach are that we can dynamically - and quite
easily - modify the set of processes that are mapped into the shared
section based on communication patterns, and that the processes are still
100% separated by the hardware MMU. This means that any unfortunate
mishaps in the processes will never (at least not directly) affect other
processes.

It's probably easier to allow access to the disk block server from other
machines (thereby allowing some kind of distribution) if it's implemented
as a separate process. Debugging kernel code is also much harder than
debugging traditional application code. Let's not put the disk block
server in the kernel! :-)

/Kasper
-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

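[A rough sketch of the segment limit that enforces the layout above: an
application data segment with base 0 and a 3 Gb limit, so the 3-4 Gb
shared-server region faults on direct access. The descriptor packing
follows the IA-32 format; the helper names are invented for illustration.]

    #include <stdint.h>

    /* Encode an IA-32 segment descriptor with 4 Kb granularity. */
    static uint64_t make_desc(uint32_t base, uint32_t limit_4k, uint8_t access)
    {
        uint64_t d = 0;
        d |= (uint64_t)(limit_4k & 0xFFFF);              /* limit 15:0   */
        d |= (uint64_t)(base & 0xFFFFFF) << 16;          /* base 23:0    */
        d |= (uint64_t)access << 40;                     /* type/DPL/P   */
        d |= (uint64_t)((limit_4k >> 16) & 0xF) << 48;   /* limit 19:16  */
        d |= (uint64_t)0xC << 52;                        /* G=1, D/B=1   */
        d |= (uint64_t)(base >> 24) << 56;               /* base 31:24   */
        return d;
    }

    /* Application data segment: base 0, limit 3 Gb (0xBFFFF pages of
     * 4 Kb), ring 3 read/write data (access byte 0xF2). */
    uint64_t app_data_desc(void)
    {
        return make_desc(0x00000000, 0x000BFFFF, 0xF2);
    }
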
From: Kasper V. L. <ve...@da...> - 2000-02-28 09:11:28
> Can we do capabilities the way it was done in Amoeba?

What do you mean? Securing them by means of encryption?

/Kasper
-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

From: Kasper V. L. <ve...@da...> - 2000-02-28 09:10:52
> Can we use the I/O permission map to control access to io ports?

That's the plan. Each address space will have its own I/O permissions.

/Kasper
-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

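[A sketch of the per-address-space I/O permission bitmap mentioned here
(and elaborated in the later mail about sharing one TSS): the bitmap pages
are mapped differently in each address space. IOMAP_VADDR and the helper
are made-up names; only the bitmap semantics (bit clear = port accessible)
follow the IA-32 rules.]

    #include <stdint.h>

    #define IOMAP_VADDR 0x000fe000u   /* hypothetical: where this address
                                         space's view of the shared TSS's
                                         I/O bitmap is mapped */

    /* Grant or revoke a port for the current address space by flipping
     * its bit in the bitmap page mapped at IOMAP_VADDR. */
    void ioperm_set(uint16_t port, int allow)
    {
        volatile uint8_t *iomap = (volatile uint8_t *)IOMAP_VADDR;

        if (allow)
            iomap[port / 8] &= (uint8_t)~(1u << (port % 8));
        else
            iomap[port / 8] |= (uint8_t)(1u << (port % 8));
    }
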
From: Kasper V. L. <ve...@da...> - 2000-02-28 09:09:10
> Why don't we use ASH's (Application Specific Handlers)?
> Isn't that the way it is done on XOK?

As the solution to what?

/Kasper
-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

From: Rogelio M. S. Jr. <ro...@ev...> - 2000-02-28 02:45:34
Why don't we use ASH's (Application Specific Handlers)? Isn't that the way
it is done on XOK?

From: Rogelio M. S. Jr. <ro...@ev...> - 2000-02-28 02:01:31
Can we use the I/O permission map to control access to io ports?

From: Rogelio M. S. Jr. <ro...@ev...> - 2000-02-28 00:25:15
Can we do capabilities the way it was done in Amoeba?

From: Rogelio M. S. Jr. <ro...@ev...> - 2000-02-27 23:52:55
I am having the same failures that Mr. Jekov reported. And I don't see
characters on the console when I type.

From: Claus M. <cla...@ma...> - 2000-02-27 21:20:48
I agree that given the facts in hand, doing any kind of "simulation" of a
64-bit processor is not efficient. I also concur that we should try to
achieve cross-platform independence through libraries. This does present a
problem, though, since we could quickly run into the libraries becoming
the de facto standard for calling the kernel - which is *not* what we
want, since people should want to write things directly for the kernel to
avoid unfortunate abstractions. I don't know if we can design a library
broad enough in scope to make it interesting in both 32 and 64 bit, but in
all circumstances this is somewhat against the exo spirit.

You seem to feel that the IA-64 is a poor standard. I have never had the
chance to work with it (for obvious reasons), but it sure does seem to be
a heck of a lot easier writing efficient compilers for it than for IA-32.
But, technical arguments aside, we cannot ignore IA-64 for one reason:
market dominance. Whether it's a good standard or not, in a few years
(2-3, perhaps 4) it will probably be Intel's only standard, certainly so
for the high-end machines that really *need* something like Elysium.
Ignoring it would be like when Steve Jobs refused to put network hardware
in Apple computers some years back, because he felt that having computers
in a network was, and I quote, "wrong". This, of course, cost Apple
hundreds of thousands, not to say millions.

There is of course still a chance that Intel loses market dominance to,
for example, AMD or the Alpha, but I sincerely doubt that. AMD in
particular seems unlikely to make anything not Intel-compatible. They are
famous for implementing the IA-32 better than Intel, but they are *not*
famous for inventing new architectures.

How should a cross-platform, nonrestricting library for the lowest levels
in Elysium look? And should we have one at all? Suggestions are welcome.

- xmentor

From: Adam L. <ag...@li...> - 2000-02-27 20:45:50
On Sun, Feb 27, 2000 at 07:43:46PM +0100, Claus Matthiesen wrote:

I am *not* a great h/w person...

> > We could setup non-present pages and map those. Actually we could have
> > the whole disk mapped in memory this way (Damn 32-bit addresses) with
> > page-faults on access etc. I don't really know the issues here, so feel
> > free to simply say "Nope." ;) All those page-faults could kill the
> > speed though.
>
> On the subject of 32 versus 64 bit CPUs: What *are* we going to do?
> Since the Elysium kernel's behaviour is heavily dependent on the machine
> on which it runs (or is it? Are there so few abstractions, they mainly
> will be identical on most computers? I'm not the most hardware-minded
> here, so I'd like to know how challenging porting Elysium will be. What,
> if any, will the changes in the specification for the kernel be on a
> 64-bit system?),

The kernel will (I think) be an almost total rewrite. But the kernel is
small.

> at least when it comes to 32 versus 64 bits, what will we do when
> Itanium hits the shelves later this year?

Itanium is going to be crap. If Intel are switching arch then I'm moving
to something better (say an AS/390 ;). But Alpha is way better.

> As far as I see, mapping all physical memory in one address space seems
> a very viable abstraction, even though it can be argued that it's not
> the most exo-ish way of doing it (*I* think it is. I just said it could
> be argued that it wasn't).
>
> As far as I see, there are two ways about it: Either we choose to
> implement a 64-bit-style block server mapping the hard drive into
> physical memory and say "up yo*rs" to those old-fashioned 32-bit
> machines

I don't want to drop all 32-bit boxes. There are too many - and we would
get *no* support for years.

> (like mine. Just bought a dual-celery, overclocked and everything and
> it's already old-fashioned. There's computers for you. Why, when you get
> right down to it, really bother?). The other alternative is of course to
> write a 32-bit server first, wait until the 64-bit market matures and
> then launch a 64-bit version.

Sounds about right.

> The first choice presents the obvious problem that none of us has (or
> for some time will have) an Itanium. The kernel would also have to be
> re-written and/or recompiled for the Itanium. This option is of course
> less severe if the changes in the specification for the Elysium kernel
> are minuscule, but I know too little about those things. In all
> circumstances we would need a temporary 32-bit server for use with our
> own machines.
>
> The other option rather ties us to the Pentium-style processors (or at
> least 32-bit processors) for some time. We could of course easily
> replace the servers that are affected when the 64-bit processors come,
> but how many will they be? As we get higher and higher up in the system
> hierarchy, the abstractions should make the higher system servers immune
> to such changes. But I rather expect the hierarchy on our system to be
> rather flat, resulting in the possibility that nearly all servers and
> libraries have to be rewritten or altered in some way.

I think that most code will be independent within one level of library.
Look at something like the Debian sources. The kernel, gcc, glibc and
binutils need re-writing for a new arch, but hundreds of megs of code are
fine across archs. Of course bleeding-edge custom processes (that
exokernels allow) will need work.

> Neither of these options is very appealing, IMHO. I have a suggestion,
> which might make me very unpopular: Could we just use 64-bit addresses
> now on the 32-bit processors, perhaps by making addresses lower than
> 0x000000100000000 memory addresses and addresses larger than that
> block-device addresses?

You just can't stuff that into IA32:
 * segments can't handle it
 * neither can paging
 * nor can segment regs
Intel have some very nasty hacks to allow 64GB (36-bit) addresses, but it
is *a nasty hack*. And 64GB is nothing.

> It would be introducing an abstraction (sort of. What we really do is
> expand the address space and pretend it's larger than it is, but what
> the hell), but if it proves an effective, non-restricting and above all
> visionary abstraction, I feel that it at least should be considered.

The thing is it gets complicated because of the page size (4KB, 8KB on
Alpha). It is, thank god, a multiple of 512 (the block size of drives),
but say:

  | 1 | 2 | 3 | 4 | 5 |     |   - each | x | is 512 bytes; blocks 1-5 make a file
  |          1 page         |

If we want to allow read access to blocks 1-5, but not after 5, we have a
problem. The page is too big, so we have to make the CPU page fault *on
every access* and check *every* access. Too slow (and your TLB is
*utterly buggered*).

We could make the smallest disk block the page size (as in FAT clusters).
That solves the page problem (but we still can't pack the disk in the
address space); it wastes space on small files - but that's not too much.

The thing is we *don't* want processes to have to copy disk blocks into
local memory when the same pages are in the buffer cache anyway. If we
allow processes to map pages in the buffer cache (BC) then that works
quite nicely. Since files are allocated in blocks that are the same size
as the native CPU page size (which means the disk block size changes per
arch) we can kill duplicate data. But what do we do when pages are dropped
from the BC? We have to walk the process list and kill page table entries
to that page. Ouch!

AGL
-- 
Smoking is one of the leading causes of statistics.

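[A small sketch of the cost pointed out at the end of the mail above: when
a buffer cache page is dropped, every process that mapped it has to be
visited. All types and names here are hypothetical, and it assumes, for
simplicity, that the BC region is mapped at the same virtual address in
every process.]

    #include <stdint.h>
    #include <stddef.h>

    struct process {
        struct process *next;
        uint32_t       *page_dir;     /* per-process page directory */
    };

    extern struct process *process_list;                        /* hypothetical */
    extern uint32_t *pte_for(uint32_t *page_dir, uint32_t va);  /* hypothetical */
    extern void tlb_shootdown(uint32_t va);                     /* hypothetical */

    /* Drop the BC page at physical address 'phys', mapped at virtual
     * address 'va': the O(processes) walk - the "Ouch!" in the mail. */
    void bc_drop_page(uint32_t phys, uint32_t va)
    {
        for (struct process *p = process_list; p != NULL; p = p->next) {
            uint32_t *pte = pte_for(p->page_dir, va);
            if (pte != NULL && (*pte & ~0xFFFu) == phys) {
                *pte = 0;             /* mark not-present */
                tlb_shootdown(va);
            }
        }
    }
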
From: Claus M. <cla...@ma...> - 2000-02-27 18:47:52
> We could setup non-present pages and map those. Actually we could have
> the whole disk mapped in memory this way (Damn 32-bit addresses) with
> page-faults on access etc. I don't really know the issues here, so feel
> free to simply say "Nope." ;) All those page-faults could kill the speed
> though.

On the subject of 32 versus 64 bit CPUs: What *are* we going to do? Since
the Elysium kernel's behaviour is heavily dependent on the machine on
which it runs (or is it? Are there so few abstractions, they mainly will
be identical on most computers? I'm not the most hardware-minded here, so
I'd like to know how challenging porting Elysium will be. What, if any,
will the changes in the specification for the kernel be on a 64-bit
system?), at least when it comes to 32 versus 64 bits, what will we do
when Itanium hits the shelves later this year? As far as I see, mapping
all physical memory in one address space seems a very viable abstraction,
even though it can be argued that it's not the most exo-ish way of doing
it (*I* think it is. I just said it could be argued that it wasn't).

As far as I see, there are two ways about it: Either we choose to
implement a 64-bit-style block server mapping the hard drive into physical
memory and say "up yo*rs" to those old-fashioned 32-bit machines (like
mine. Just bought a dual-celery, overclocked and everything and it's
already old-fashioned. There's computers for you. Why, when you get right
down to it, really bother?). The other alternative is of course to write a
32-bit server first, wait until the 64-bit market matures and then launch
a 64-bit version.

The first choice presents the obvious problem that none of us has (or for
some time will have) an Itanium. The kernel would also have to be
re-written and/or recompiled for the Itanium. This option is of course
less severe if the changes in the specification for the Elysium kernel are
minuscule, but I know too little about those things. In all circumstances
we would need a temporary 32-bit server for use with our own machines.

The other option rather ties us to the Pentium-style processors (or at
least 32-bit processors) for some time. We could of course easily replace
the servers that are affected when the 64-bit processors come, but how
many will they be? As we get higher and higher up in the system hierarchy,
the abstractions should make the higher system servers immune to such
changes. But I rather expect the hierarchy on our system to be rather
flat, resulting in the possibility that nearly all servers and libraries
have to be rewritten or altered in some way.

Neither of these options is very appealing, IMHO. I have a suggestion,
which might make me very unpopular: Could we just use 64-bit addresses now
on the 32-bit processors, perhaps by making addresses lower than
0x000000100000000 memory addresses and addresses larger than that
block-device addresses? It would be introducing an abstraction (sort of.
What we really do is expand the address space and pretend it's larger than
it is, but what the hell), but if it proves an effective, non-restricting
and above all visionary abstraction, I feel that it at least should be
considered.

Looking very much forward to your comments,

- xmentor

PS. The first drafts of the graphics window system layout will be posted
in about a week. Designing higher-level systems under Elysium is fun!

From: Adam L. <ag...@li...> - 2000-02-27 17:18:33
On Sun, Feb 27, 2000 at 03:44:23PM +0100, Kasper Verdich Lund wrote:
> > Well the kernel could let processes create capabilities of type PRIVATE
> > with the process's pid as the obj_id, totally freely. This means a
> > server can churn out capabilities and the kernel can check very simply
> > if they are valid. As for things like giving out read-only memory
> > capabilities - the process would have to present a capability that
> > would grant *at least* that right when creating it.
>
> How would a server go about protecting the individual resources it
> manages by means of capabilities? If the obj_id refers to the server
> process, and the obj_type is PRIVATE how does the server know which
> (server internal) object it refers to? Do we need yet another capability
> field (obj_pid) and let the obj_id be reserved for object identities?

Well I suggested 3 fields obj_{type|obj|meth}. I guess the server could
freely use obj_meth.

> Yeah, but if you put some IDE code together we can start with an
> unprotected version of the disk block server, which is quite easy. The
> FAT16 code should probably go into a standard process library that
> communicates with the disk block server. In the end the disk block
> server should handle disk block caching, and that could be implemented
> without protection as well. The way protection should work is by
> uploading functions to the disk block server that, given a meta-data
> disk block, return the set of disk blocks the meta-data block refers to:
>
>   block_set_t *owns(meta_data_t md);
>
> To get access to a disk block it must be in the disk block cache of the
> disk block server, and the process requesting access must have access to
> the parent disk block (the meta-data disk block that refers to the disk
> block in question). When updating a meta-data disk block you pass the
> new disk block and a description of your changes (in terms of changes to
> the set of referred disk blocks) to the disk block server. The server
> checks your modification by running the owns() function before and after
> the modification and verifying that the resulting sets are updated
> exactly as you have described.
>
> I'm working on a kernel specification, but once that's done I'll try
> describing this particular server in more detail.

Well, it would be quite nice if processes could map disk blocks into their
memory space. But that would give the disk server hell when it wanted to
free pages etc (and 4K isn't fine enough). Oh well.

We could setup non-present pages and map those. Actually we could have
the whole disk mapped in memory this way (Damn 32-bit addresses) with
page-faults on access etc. I don't really know the issues here, so feel
free to simply say "Nope." ;) All those page-faults could kill the speed
though.

I know it's against the principles of exokernels... but why put the disk
server in a separate process? It's going to have to be a secure server
(which applications can't overwrite) so why not put it in the kernel?
Putting it in a process only gives us more context switches. Are there any
reasons for a server? I guess it's easier to replace, but we could have
kernel modules.

> > Another thought. If the kernel can generate capabilities with the
> > process's PID, how do we invalidate them? A process can die and
> > another spawn in its PID. The capability is now valid for a different
> > object (namely the new process). Hmm.
>
> As it is now the PID is unique in time - at least quite unique. The PID
> is 32-bit, and the kernel tries not to reuse PIDs. We probably need
> something better than this, but it's almost there.

Ah, ok.

AGL
-- 
Smoking is one of the leading causes of statistics.

From: Kasper V. L. <ve...@da...> - 2000-02-27 14:47:51
Adam Langley wrote:
> > I agree that 'real' capabilities would be a better way of doing it.
> > There are some issues about how to create a capability, though. I
> > think the kernel should create the capabilities, but that requires
> > that the kernel is able to check whether or not a process has access
> > to some resource. If the resource is managed by a server it's not
> > trivial to figure out the best way of doing it. It would be great if
> > servers (drivers), such as the keyboard server, could use capability
> > based protection of the resources it manages. I'll elaborate in a
> > later posting.
>
> Well the kernel could let processes create capabilities of type PRIVATE
> with the process's pid as the obj_id, totally freely. This means a
> server can churn out capabilities and the kernel can check very simply
> if they are valid. As for things like giving out read-only memory
> capabilities - the process would have to present a capability that would
> grant *at least* that right when creating it.

How would a server go about protecting the individual resources it
manages by means of capabilities? If the obj_id refers to the server
process, and the obj_type is PRIVATE how does the server know which
(server internal) object it refers to? Do we need yet another capability
field (obj_pid) and let the obj_id be reserved for object identities?

> As a side note. I think there should be some wide ranging capabilities
> that allow lots of simple things to be done. Security sensitive
> applications could make a lot of more-targeted capabilities and discard
> the generic one.

Agreed.

> > > Where will the IDE code go in the end? I don't think it can securely
> > > go in process code - it would be too easy for a rogue process to do
> > > damage. I doubt it will go in the kernel, so will there be a 'Drive
> > > Server'?
> >
> > I expect we'll implement a 'disk block server'. The way it multiplexes
> > the disk blocks will be explained later (it's rather complex -
> > probably the most complex driver of all), but interested people should
> > read 'The Exokernel Operating Systems Architecture' by D. R. Engler.
> > Especially the chapter(s) about XN.
>
> I would start with the most complex wouldn't I! ;)

Yeah, but if you put some IDE code together we can start with an
unprotected version of the disk block server, which is quite easy. The
FAT16 code should probably go into a standard process library that
communicates with the disk block server. In the end the disk block server
should handle disk block caching, and that could be implemented without
protection as well. The way protection should work is by uploading
functions to the disk block server that, given a meta-data disk block,
return the set of disk blocks the meta-data block refers to:

  block_set_t *owns(meta_data_t md);

To get access to a disk block it must be in the disk block cache of the
disk block server, and the process requesting access must have access to
the parent disk block (the meta-data disk block that refers to the disk
block in question). When updating a meta-data disk block you pass the new
disk block and a description of your changes (in terms of changes to the
set of referred disk blocks) to the disk block server. The server checks
your modification by running the owns() function before and after the
modification and verifying that the resulting sets are updated exactly as
you have described.

I'm working on a kernel specification, but once that's done I'll try
describing this particular server in more detail.

> Another thought. If the kernel can generate capabilities with the
> process's PID, how do we invalidate them? A process can die and another
> spawn in its PID. The capability is now valid for a different object
> (namely the new process). Hmm.

As it is now the PID is unique in time - at least quite unique. The PID
is 32-bit, and the kernel tries not to reuse PIDs. We probably need
something better than this, but it's almost there.

/Kasper
-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

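[A minimal sketch of the owns()-based update check described above. Only
the before/after comparison idea is from the mail; block_set_t, the set
helpers and the "claimed change" arguments are hypothetical placeholders
for whatever interface the real server would use.]

    #include <stdbool.h>

    typedef struct block_set block_set_t;
    typedef struct meta_data meta_data_t;

    /* uploaded by the library file system */
    extern block_set_t *owns(meta_data_t md);

    /* hypothetical set helpers */
    extern block_set_t *set_minus(const block_set_t *a, const block_set_t *b);
    extern bool         set_equal(const block_set_t *a, const block_set_t *b);

    /* 'claimed_added' and 'claimed_removed' are the caller's description
     * of the change; the update is accepted only if owns() confirms that
     * the referred block set changed exactly as claimed. */
    bool check_meta_update(meta_data_t old_md, meta_data_t new_md,
                           const block_set_t *claimed_added,
                           const block_set_t *claimed_removed)
    {
        block_set_t *before = owns(old_md);
        block_set_t *after  = owns(new_md);

        return set_equal(set_minus(after, before), claimed_added)
            && set_equal(set_minus(before, after), claimed_removed);
    }
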
From: Kasper V. L. <ve...@da...> - 2000-02-27 10:21:38
> After booting, when the console appears, the computer freezes. The
> character in the upper left corner starts to blink.

The character in the upper left corner should blink - it's simply a dummy
process that does nothing but increment the upper left character. The only
purpose of that is to show that Elysium really does multitask. Have you
tried typing anything on the keyboard? If it doesn't work I'll be happy to
look into it.

/Kasper
-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

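[A sketch of a dummy process like the one described: it does nothing but
increment the character in the upper-left corner of the text console. It
assumes the VGA text buffer is visible to the process at its usual
physical address 0xB8000; this is an illustration, not the actual Elysium
dummy process.]

    #include <stdint.h>

    void dummy_process(void)
    {
        volatile uint16_t *vga = (volatile uint16_t *)0xB8000;

        for (;;) {
            uint8_t ch   = (uint8_t)(*vga & 0xFF);   /* character byte */
            uint8_t attr = (uint8_t)(*vga >> 8);     /* attribute byte */
            *vga = (uint16_t)((attr << 8) | (uint8_t)(ch + 1));
        }
    }
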
From: Stoyan J. <st...@li...> - 2000-02-27 04:45:57
System: Compaq Presario 2295
CPU: AMD-K6 3D processor
RAM: 64M
Elysium image: 15.02.2000

After booting, when the console appears, the computer freezes. The
character in the upper left corner starts to blink. The previous image
(7.02.2000) starts to write messages about memory pages and also freezes
with a blinking upper left character.

Reason? How to fix?

Regards,
Stoyan

From: Adam L. <ag...@li...> - 2000-02-26 19:40:17
On Sat, Feb 26, 2000 at 11:28:27AM +0100, Kasper Verdich Lund wrote:
> I agree that 'real' capabilities would be a better way of doing it.
> There are some issues about how to create a capability, though. I think
> the kernel should create the capabilities, but that requires that the
> kernel is able to check whether or not a process has access to some
> resource. If the resource is managed by a server it's not trivial to
> figure out the best way of doing it. It would be great if servers
> (drivers), such as the keyboard server, could use capability based
> protection of the resources it manages. I'll elaborate in a later
> posting.

Well the kernel could let processes create capabilities of type PRIVATE
with the process's pid as the obj_id, totally freely. This means a server
can churn out capabilities and the kernel can check very simply if they
are valid. As for things like giving out read-only memory capabilities -
the process would have to present a capability that would grant *at least*
that right when creating it.

As a side note, I think there should be some wide ranging capabilities
that allow lots of simple things to be done. Security sensitive
applications could make a lot of more-targeted capabilities and discard
the generic one.

I think the main problem with capabilities is knowing where they are.
*NIX processes start with fds 0, 1 and 2. But how does a process know
that index 1 is a capability to the keyboard server and so on? We could
use well-known indexes, but that possibly means *lots* of well-known
indexes - and a big in-kernel table. That table would be wasted if those
capabilities were not granted.

Maybe processes should start up with a page of memory (4096/4 = 1024
indexes) that holds the indexes into the kernel table. The process could
record the data it needs and free the rest. E.g.

  Kernel cap table:  STDOUT
  Process cap table: -1, 0, 0, -1, -1, -1, ....

The process startup could look like this:

  out = cap_table[CAP_TABLE_STDOUT];
  in  = cap_table[CAP_TABLE_STDIN];
  if (in == -1)
      /* This program is interactive and must have a STDIN capability! */
      exit(1);

CAP_TABLE_* would be defined in a common header file. This saves on
memory.

> > Where will the IDE code go in the end? I don't think it can securely
> > go in process code - it would be too easy for a rogue process to do
> > damage. I doubt it will go in the kernel, so will there be a 'Drive
> > Server'?
>
> I expect we'll implement a 'disk block server'. The way it multiplexes
> the disk blocks will be explained later (it's rather complex - probably
> the most complex driver of all), but interested people should read 'The
> Exokernel Operating Systems Architecture' by D. R. Engler. Especially
> the chapter(s) about XN.

I would start with the most complex wouldn't I! ;)

Another thought. If the kernel can generate capabilities with the
process's PID, how do we invalidate them? A process can die and another
spawn in its PID. The capability is now valid for a different object
(namely the new process). Hmm.

AGL
-- 
Smoking is one of the leading causes of statistics.

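[A self-contained sketch of the startup capability page proposed above: a
page of 1024 slots handed to every new process, with well-known indexes
naming well-known capabilities. The index names and the cap_table symbol
are hypothetical; only the idea is from the mail.]

    #include <stdlib.h>

    #define CAP_TABLE_STDIN    0
    #define CAP_TABLE_STDOUT   1
    #define CAP_TABLE_KEYBOARD 2
    /* ... one well-known index per commonly granted capability ... */

    #define CAP_NONE (-1)       /* capability not granted to this process */

    extern int cap_table[1024]; /* the 4096-byte page mapped at startup */

    static int cap_in, cap_out;

    void startup(void)
    {
        cap_out = cap_table[CAP_TABLE_STDOUT];
        cap_in  = cap_table[CAP_TABLE_STDIN];

        if (cap_in == CAP_NONE)
            /* this program is interactive and must have a STDIN capability */
            exit(1);

        /* record what we need; the rest of the page can then be freed */
    }
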
From: Kasper V. L. <ve...@da...> - 2000-02-26 10:31:50
> Well, I've decided that I hate floppy drives, so I've gone onto hard
> drives. The code currently:
>  * Scans and autodetects IDE drives
>  * The FAT16 code is good enough to list the root directory without
>    hard-coded sector numbers

This is good news :-)

> I'm going to brush it up before a posting. Still no word on
> capabilities, Kasper?

I agree that 'real' capabilities would be a better way of doing it. There
are some issues about how to create a capability, though. I think the
kernel should create the capabilities, but that requires that the kernel
is able to check whether or not a process has access to some resource. If
the resource is managed by a server it's not trivial to figure out the
best way of doing it. It would be great if servers (drivers), such as the
keyboard server, could use capability based protection of the resources
it manages. I'll elaborate in a later posting.

> Where will the IDE code go in the end? I don't think it can securely go
> in process code - it would be too easy for a rogue process to do damage.
> I doubt it will go in the kernel, so will there be a 'Drive Server'?

I expect we'll implement a 'disk block server'. The way it multiplexes the
disk blocks will be explained later (it's rather complex - probably the
most complex driver of all), but interested people should read 'The
Exokernel Operating Systems Architecture' by D. R. Engler. Especially the
chapter(s) about XN.

-- 
-------------------------------------------------------------------
Kasper Verdich Lund, Computer Science Department, Aarhus University
Office: 34P.218 | Phone: (+45) 8942 5680
Email: ve...@da... | WWW: http://www.daimi.au.dk/~verdich

From: Adam L. <ag...@li...> - 2000-02-25 16:18:14
Well, I've decided that I hate floppy drives, so I've gone onto hard
drives. The code currently:

 * Scans and autodetects IDE drives
 * The FAT16 code is good enough to list the root directory without
   hard-coded sector numbers

I'm going to brush it up before a posting. Still no word on capabilities,
Kasper?

Where will the IDE code go in the end? I don't think it can securely go in
process code - it would be too easy for a rogue process to do damage. I
doubt it will go in the kernel, so will there be a 'Drive Server'?

AGL
-- 
Smoking is one of the leading causes of statistics.
