From: <sou...@fu...> - 2002-07-26 18:04:52
Re: [Challengeos-developers] more ideas
From: Philipp Gühring <p.g...@fu...>
To: cha...@li...
Date: Fri, 26 Jul 2002 19:03:55 +0200
Message contains signature from Philipp Michael Gühring (Sourcerer) <pg...@pa...>

Hi,

> This file system will be solely for data storage.

Applications could be another idea, too. Look at www.entity.cx. I think
that could also be the basis for your componentized, modularized
architecture.

> It will serve as some sort of hierarchical database. It'll be designed
> in such a way that an XML file can go through an import/export cycle
> on this file system and will come out unaltered (perhaps some
> whitespace vanished or was added, but that has not changed the
> contents of the file).

Ok.

> How is this file system accessed? Among the many ways thinkable I
> prefer to map the hierarchical structure onto persistent data
> structures which are accessed from memory. However, this can only be
> achieved in a reasonable way when an object-oriented language is used.
> Actually, pure procedural languages like C will have a hard time in
> ChallengeOS anyway (see below). Evidently, you could design a plain
> procedural C-compatible interface, but that'll be a lot more
> complicated to use.

Yes. DOM, and all the necessary functions around it, have to be provided
by the system kernel: for every kernel driver, for every programming
language, and for every application. I see two necessary functions based
on DOM that should be made available: XPath and XSLT.

For the programming languages, DOM should be a datatype:

    DOM mytree;

XPath should be as integrated into the programming languages as regular
expressions are integrated into Perl:

    foreach (mytree =~ x/DOCUMENT/NODE/SUBNODE[@attribute='value']/Text ) {
        print;
    }

And the kernel should make the following possible:

    DOM persistentTree("/dev/hdd4");

which makes the DOMfs on the partition /dev/hdd4 available as a DOM tree.
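For comparison, present-day user space can already approximate this idea.
Here is a minimal sketch, assuming Python's standard xml.etree.ElementTree
stands in for a kernel-provided DOM; the x/.../ match syntax above remains
hypothetical, so its selection is written as the XPath subset that
findall() understands:

```python
import xml.etree.ElementTree as ET

# A stand-in for the kernel-provided DOM tree (hypothetical in the proposal)
mytree = ET.fromstring(
    "<DOCUMENT><NODE>"
    "<SUBNODE attribute='value'>Text A</SUBNODE>"
    "<SUBNODE attribute='other'>Text B</SUBNODE>"
    "</NODE></DOCUMENT>"
)

# The same selection the imagined x/.../ syntax would perform,
# expressed as an XPath path with an attribute predicate
for subnode in mytree.findall("NODE/SUBNODE[@attribute='value']"):
    print(subnode.text)  # prints: Text A
```

A persistent tree backed by a partition would need the kernel support the
mail describes; this only shows the query side of the interface.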
> Note that there must still be a conventional file system available,
> because at least the software must be stored on this file system.

Hmmm. Have a look at Entity. I think Entity shows the way in which
applications can be developed in the future.

> The requirements of both file system types absolutely exclude one
> another. So there is no way to combine them. ;-)

I am not sure yet. I don't think that it's impossible, it just isn't
straightforward. Did you take a deep enough look at IVI::DB yet? IVI::DB
is my own native XML database, which is available under the GPL from
http://www.livingxml.net/ (->Plattform ->Database ->bottom of the page)
With IVI::DB, I somehow succeeded in building an XML database on top of
a normal hierarchical filesystem.

> 2. Enhanced execution environment

(The name somehow reminds me of Palladium ... but forget that)

> [Note: This is in my opinion the most important feature. I'll not give
> up this one. No way!]

Did I say anything against it? Something like that has been stumbling
through my mind for some years now, but I am not yet sure how I really
want it, and I think I want it a bit differently than you do. Still, I
think the overall direction is not that wrong ...

> This is more than a feature. It's a computing concept unmatched by
> anything I've seen so far or that is to come in the near future. In
> other words: it's unique!

Marketing.

> It's hard to describe all of this exactly. Imagine your software
> consisting of a whole wagonload of small, specialized modules (or
> libraries) which are running in the same address space. Every module
> could use functions provided by every other module that is installed.
> Extending such an environment would be easy: just write the missing
> module using features from the modules that are already available.

Sounds like the shared library concept.

At the moment, we have several different execution environments:

* Kernelspace (device drivers, ...)
* Daemonspace (all those servers ...)
* GUIspace (KDE, Gnome, Windows, ... "rich clients")
* Webservicespace (everything running in a browser; "thin clients")

All those execution environments have very different needs, and should
be thought through on their own, I guess.

> However, this has two rather obvious drawbacks: If only one module has
> a bug, the whole set would be forced down in a big crash. Second, every
> module involved must be loaded into memory at startup. That's a waste
> of memory. But these problems can be overcome.

Have a look at the Perl module concept, and CPAN. It is nearly so
automated that it could automatically fetch and install the needed
modules from the Internet as soon as you call the first one.

> The way to achieve this is quite easy - in theory. Enforcing controls
> and checks on the environment will give the modules the ability to
> gracefully handle crashes without pulling down every other module.
> Furthermore, the same mechanism can be used to perform a sort of lazy
> linking: a module will only be loaded when it is referenced (called
> into) for the first time, and will be unloaded as soon as the last
> active reference has gone.
>
> This is possible on an 80386 and upwards (and pretty likely on other
> processor architectures as well): this processor has a pretty nifty
> page-based access control. You can only set page table entries to
> readable, writable and/or non-existent, but that's sufficient, because
> the rest is handled in software.
>
> Each module gets its own private address space which contains
> "windows" into which the referenced modules are mapped. This mapping
> might not even be real page-level mapping, because of access
> limitations which need to be enforced.

Again, this sounds like shared libraries.

> Each time a window is accessed, the context of the running process
> changes to the address space of the referenced module, after
> appropriate mapping has been done.
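As an aside, the lazy-linking half of this can be sketched in user space
today. A minimal illustration, assuming Python and its importlib; the
LazyWindow name and the window analogy are mine, not part of the proposal,
and real page-trap-based loading would of course happen below this level:

```python
import importlib

class LazyWindow:
    """A 'window' onto another module: the real module is loaded
    only on the first reference through the window."""
    def __init__(self, name):
        self._name = name
        self._module = None              # nothing mapped yet

    def __getattr__(self, attr):
        if self._module is None:         # first reference: load now
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

json_window = LazyWindow("json")         # no import has happened yet
print(json_window.dumps({"lazy": True})) # first call triggers the load
```

Unloading on last reference, which the proposal also wants, would need
reference counting on top of this; Python itself never unloads here.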
> Accesses through windows are trapped (except when non-pointer data is
> read and reading is allowed, or other rather trivial cases) and the
> instruction that has invoked the trap is examined, and then - if it is
> valid - emulated by the trap handler. In case the instruction is
> invalid, the module having caused this violation will get the chance
> to handle the error and exit gracefully.
>
> Of course, access rights must be defined. This is done on a per-symbol
> basis (symbol in this context means function/procedure entry point,
> object, variable or data structure). Therefore each module must
> consist of a binary and an access definition file. In this file there
> is an entry for each symbol which grants or denies read, write and
> execute rights for the owner of the module, his/her group and others
> (note that making an extra file out of this has two benefits: first,
> there's no new file format needed, and second, this file could
> possibly be edited by a user or admin). In this file, prototypes of
> each exported function and/or variable must reside, as well as
> definitions of exported data structures, because during the address
> space switch pointer addresses might have to be tweaked so that they
> point into the right window (imagine that the process is tunneling
> back and forth between two windows which map address spaces that have
> different real offsets). This might not be necessary when the windows
> are at the same addresses as the modules that are referenced within
> their own address spaces.
>
> This mechanism can be extended even further: windows can map modules
> running on remote machines. This only needs a small extension in the
> form of a network protocol stack which is able to serialize and
> reassemble such requests automatically. Furthermore, this mechanism
> can be exploited to map contents of the DOMfs into persistent objects,
> thereby providing a decent interface.
>
> All of this sounds like a lot of overhead. But I assume that it isn't.
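The per-symbol rights check could be modelled in software before any
page-trap machinery exists. A hedged sketch in Python; the rights format
(a set drawn from 'r'/'w'/'x' per symbol) and the SymbolGuard name are my
invention, since the proposal leaves the access definition file's format
open:

```python
class SymbolGuard:
    """Expose a module's symbols only through checked accessors.
    rights maps symbol name -> subset of {'r', 'w', 'x'}."""
    def __init__(self, symbols, rights):
        self._symbols = symbols
        self._rights = rights

    def read(self, name):
        if "r" not in self._rights.get(name, set()):
            raise PermissionError("no read access to " + name)
        return self._symbols[name]

    def call(self, name, *args):
        if "x" not in self._rights.get(name, set()):
            raise PermissionError("no execute access to " + name)
        return self._symbols[name](*args)

# A toy module: one readable variable, one executable function
module = SymbolGuard(
    {"version": "1.0", "add": lambda a, b: a + b},
    {"version": {"r"}, "add": {"x"}},
)
print(module.read("version"))    # allowed: prints 1.0
print(module.call("add", 2, 3))  # allowed: prints 5
```

Reading "add" or calling "version" would raise PermissionError, which is
the software analogue of the trap-and-refuse behaviour described above.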
> I expect that code controlled this way is no more than two to three
> times slower than a usual executable under otherwise identical
> conditions. And that performance hit isn't noticeable for desktop
> users, given the performance of current PC hardware.
>
> 3. An interpreted language to automate the enhanced execution
> environment: ObjectBasic
>
> This is an optional thing and can be described as the "shell" of the
> enhanced execution environment I just described. The language should
> be easy to use, yet powerful enough to write small applications and
> automation scripts for everyday use.
> Have you ever thought of remote-controlling your word processor from a
> shell and writing a letter this way?

Yes. I have. But afterwards, I found no answer to the question: "And why
should I?"

> With this interpreter, it'll be possible. I promise. ;)
>
> That's all! This is just a short description of the most important
> points in my design proposal. Of course it is much longer: an
> integrated installer with an online software update facility (the
> installer might get things optimized inside the enhanced execution
> environment quite a lot),

That one sounds good. CPAN could be a good example of how the technical
side could work. What I am missing at CPAN is Quality Control.

> a new and hopefully superior graphical user interface, etc.

Have a look at "Berlin", which is somehow connected to GGI, and have a
look at Entity.

> What else? An office suite? An ERP system? A web server? I just don't
> know. ;-)

Do you plan to run SAP on it?

Many greetings,
--
~ Philipp Gühring              p.g...@fu...
~ http://www.livingxml.net/    ICQ UIN: 6588261
~ <xsl:value-of select="file:/home/philipp/.sig"/>

End of PGP message