From: Adam L. <ag...@li...> - 2000-02-27 20:45:50
On Sun, Feb 27, 2000 at 07:43:46PM +0100, Claus Matthiesen wrote:

I am *not* a great h/w person...

> > We could set up non-present pages and map those. Actually we could
> > have the whole disk mapped in memory this way (damn 32-bit
> > addresses) with page faults on access etc. I don't really know the
> > issues here, so feel free to simply say "Nope." ;) All those page
> > faults could kill the speed though.
>
> On the subject of 32 versus 64 bit CPUs: what *are* we going to do?
> Since the Elysium kernel's behaviour is heavily dependent on the
> machine on which it runs (or is it? Are there so few abstractions
> that they will mainly be identical on most computers? I'm not the
> most hardware-minded here, so I'd like to know how challenging
> porting Elysium will be. What, if any, will the changes in the
> specification for the kernel be on a 64-bit system?),

The kernel will (I think) be an almost total rewrite. But the kernel
is small.

> at least when it comes to 32 versus 64 bits, what will we do when
> Itanium hits the shelves later this year?

Itanium is going to be crap. If Intel are switching arch then I'm
moving to something better (say an AS/390 ;). But Alpha is way better.

> As far as I see, mapping all physical memory in one address space
> seems a very viable abstraction, even though it can be argued that
> it's not the most exo-ish way of doing it (*I* think it is. I just
> said it could be argued that it wasn't).
>
> As far as I see, there are two ways about it: either we choose to
> implement a 64-bit-style block server mapping the hard drive into
> physical memory and say "up yo*rs" to those old-fashioned 32-bit
> machines

I don't want to drop all 32-bit boxes. There are too many - and we
would get *no* support for years.

> (like mine. Just bought a dual-celery, overclocked and everything,
> and it's already old-fashioned. There's computers for you. Why, when
> you get right down to it, really bother?).
> The other alternative is of course to write a 32-bit server first,
> wait until the 64-bit market matures and then launch a 64-bit
> version.

Sounds about right.

> The first choice presents the obvious problem that none of us has
> (or for some time will have) an Itanium. The kernel would also have
> to be rewritten and/or recompiled for the Itanium. This option is of
> course less severe if the changes in the specification for the
> Elysium kernel are minuscule, but I know too little about those
> things. In all circumstances we would need a temporary 32-bit server
> for use with our own machines.
>
> The other option rather ties us to the Pentium-style processors (or
> at least 32-bit processors) for some time. We could of course easily
> replace the affected servers when the 64-bit processors come, but
> how many will they be? As we get higher and higher up in the system
> hierarchy, the abstractions should make the higher system servers
> immune to such changes. But I rather expect the hierarchy in our
> system to be rather flat, resulting in the possibility that nearly
> all servers and libraries have to be rewritten or altered in some
> way.

I think that most code will be independent within one level of
library. Look at something like Debian src: the kernel, gcc, glibc and
binutils need rewriting for a new arch, but 100s of megs of code are
fine across arches. Of course bleeding-edge custom processes (which
exokernels allow) will need work.

> Neither of these options is very appealing, IMHO.
> I have a suggestion, which might make me very unpopular: could we
> just use 64-bit addresses now on the 32-bit processors, perhaps by
> making addresses lower than 0x100000000 memory addresses and
> addresses larger than that block-device addresses?

You just can't stuff that into IA32:

* segments can't handle it
* neither can paging
* nor can the segment regs

Intel have some very nasty hacks (PAE) to allow 64GB (36-bit)
physical addresses, but it is *a nasty hack*. And 64GB is nothing.

> It would be introducing an abstraction (sort of. What we really do
> is expand the address space and pretend it's larger than it is, but
> what the hell), but if it proves an effective, non-restricting and
> above all visionary abstraction, I feel that it at least should be
> considered.

Thing is, it gets complicated because of the page size (4KB, or 8KB
on Alpha). It is, thank god, a multiple of 512 (the block size of
drives), but say:

  | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |   - each | n | is 512 bytes;
  |           1 page (4KB)        |     blocks 1-5 make a file

If we want to allow read access to blocks 1-5, but nothing past 5, we
have a problem. The page is too big, so we have to make the CPU page
fault *on every access* and check *every* access. Too slow (and your
TLB is *utterly buggered*).

We could make the smallest disk block the page size (as with FAT
clusters). That solves the page problem (though we still can't pack
the disk into the address space); it wastes space on small files, but
that's not too much.

The thing is, we *don't* want processes to have to copy disk blocks
into local memory when the same pages are in the buffer cache anyway.
If we allow processes to map pages in the buffer cache (BC) then that
works quite nicely. Since files are allocated in blocks that are the
same size as the native CPU page size (which means the disk block
size changes per arch), we can kill duplicate data.

But what do we do when pages are dropped from the BC? We have to walk
the process list and kill the page table entries pointing to that
page. Ouch!

AGL

--
Smoking is one of the leading causes of statistics.