From: <sou...@fu...> - 2002-07-29 01:01:01
Hi,

> I'd only do this for interpreted code like Perl or Python fragments
> (pretty much the same thing entity is doing). The reason is that I can't
> think about a way to make this behave correctly for binary code
> fragments. When should which piece of code be executed?

Do we need binary code that much?

> I can think of optional features in DOMfs that can't be mapped directly
> into XML files that way like revision management or access rights. The
> former is a pure convenience function, which might turn out quite
> useful, but the latter is a must-have in the multi-user environment
> ChallengeOS will have to provide.

Hmmm ... revision control ... I don't think that it should be only a part
of DOMfs. And on the other hand, I am not sure whether it shouldn't be
(mustn't be) application-specific anyway ... I haven't had enough
applications needing revision control yet to have a clear view of what I
want in that domain.

Access control ... yes, that sounds interesting. While we are at access
control: shouldn't we try to invent a better user-role-authorisation
system? On the other hand, too much systematisation might get too complex
for users, too ...

> Currently I'm also pondering whether there should be the possibility to
> define a format for XML subtrees (like it is done with DTDs or similar
> files) and have DOMfs block any future changes that violate these
> definitions. This is strictly optional, even if it should become
> implemented. But is it worth implementing?

Interesting idea. If we do that, it should be based upon XML Schemas
instead of DTDs. I think it would make sense to do it, although I am not
sure what performance hit that would cause ... I will have to investigate
XML Schemas ...

Another thing that comes to my mind is the binding of DOM against classes
from object-oriented languages. It started when Java learned to serialize
and deserialize its objects into XML instead of Java's native formats. So
the serialize and deserialize functions simply output all the member
variables, and recursively all included classes, into XML. But what if the
classes are bound against a persistent DOM? (Think of blessing hashes in
Perl.)

  > mount /usr/bin/adress.class /dom/adress

That generates a new instance of adress.class, which gets mounted as a
persistent DOM on /dom/adress.

  > ls -la /dom/adress
  -rw-r--r--   1 philipp  users   1168 Feb  3  1998 name
  -rw-r--r--   1 philipp  users   1168 Feb  3  1998 street
  -rwxr-xr-x   1 philipp  users     16 Sep 18  1999 validate
  > echo "Philipp" >/dom/adress/name
  > /dom/adress/validate
  Validating Adress ... Street is missing!
  > ...

Just an idea at the moment ... By the way, there is already an XML shell
available.
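To make the class-binding idea above a bit more concrete, a minimal C++
sketch. The DOM handle class is invented on the spot and backed by an
in-memory map so the example actually runs; a real implementation would
talk to DOMfs instead:

  #include <iostream>
  #include <map>
  #include <string>

  // Hypothetical stand-in for a persistent DOM handle. A real one would
  // talk to DOMfs; this stub keeps the nodes in memory so the sketch runs.
  struct DOM {
      explicit DOM(const std::string& path) : root(path) {}
      std::string get(const std::string& child) { return nodes[child]; }
      void set(const std::string& child, const std::string& v) { nodes[child] = v; }
  private:
      std::string root;
      std::map<std::string, std::string> nodes;
  };

  // An address object whose members map to DOM nodes instead of plain RAM,
  // mirroring the "mount /usr/bin/adress.class /dom/adress" session above.
  class Adress {
      DOM tree;
  public:
      Adress() : tree("/dom/adress") {}
      void setName(const std::string& n) { tree.set("name", n); }
      bool validate() {
          if (tree.get("street").empty()) {
              std::cout << "Validating Adress ... Street is missing!" << std::endl;
              return false;
          }
          return true;
      }
  };

  int main() {
      Adress a;
      a.setName("Philipp");
      a.validate();   // complains: Street is missing!
  }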
> I agree except for the kernel and device drivers. It's senseless to do
> some XML tricks in this area of an OS.

I don't think so.

> Remember that device drivers are about providing unified interfaces to
> hardware.

Correct.

> And for the sake of speed they must be as simple, flexible and
> straight-forward as possible. Device drivers will most likely be
> interfaced via device files, i.e. the good old /dev/ dir. This method is
> fast, clean and can be implemented by providing a small number of
> syscalls (open, read, write, close, ioctrl).

Ah. ;-) What about ioctrl?

> You surely wouldn't want to "cat randomcode.xml > /dev/xxx" and expect
> it to do something useful with your input, would you?

No. Binary is binary. I would not change anything about open, read, write
and close for storage devices. But everything else ... Just think about
more complex hardware devices like TV tuner cards. Being able to
communicate with them in XML would make sense, I think.

> > I see two necessary functions based on DOM that should be made
> > available: XPath and XSLT.
> A question about XPath: Isn't there a supposedly more sophisticated
> successor in preparation?

XPath? I haven't heard about a successor yet. Perhaps you mean XPath 2.0
... But the base functionality of XPath is so good that I don't think
there can be something far better.

> For XSLT: I think that it would be a good thing to leave the
> implementation of this in user space. It should be available.

Interesting question. I will have to think about it a bit more.

> > For the programming languages: DOM should be a datatype:
> > DOM mytree;
> Yes. Something like that. Anything else wouldn't make sense in a C++
> program. Maybe there should be a parameter passed to the constructor
> telling it what node to take as a root node.

But that would only be an optional parameter, yes.

> Although that syntax is fine with Perl, it should be covered by
> functions and classes in C++.

Hmmm.

> Reason is that it would otherwise heavily break the already surprisingly
> complex C++ syntax (C++ syntax looks simple, but it certainly isn't).

In my opinion, C++ has so many design flaws that it should not be used
anymore. But there arises a much deeper question: Do we want to develop a
compatible operating system, or a challenging operating system? I followed
the start of several new operating systems over the last years, and I saw
that they all struggle with the question of compatibility: POSIX, C/C++,
Win32, Linux, device drivers, ... If you try to be compatible, you will
gain a lot more applications for the platform. But as soon as you commit
yourself to one of those standards, you are bound to it and cannot easily
innovate beyond it. And I think that is the real challenge of developing a
new operating system.

> > And the kernel should make the following possible:
> > DOM persistentTree("/dev/hdd4");
> > Which makes the DOMfs from the partition /dev/hdd4 available as DOM tree.
> Taking the device file name makes the node definition dependent on the
> physical disk layout in the machine, which is a bad thing to do.

Agreed. Let's do it differently:

  DOM persistentTree("/dom/existing/tree");

  /etc/fdisk:
  /dev/hdd4   /dom/existing/tree   domfs

> Instead I think there should be a way to mount existing DOMfs partitions
> into a virtual root node. This way data can be spread across many disks
> and partitions without requiring the application to care about this. So
> it would be similar to the directory tree on UNIX.

Yes, ok.

> As I already pointed out, I doubt that this could work for compiled
> code. I agree that it's a neat concept they have, though. However, I
> need some hints about how binary code could be included in this concept.
> >> The requirements of both file system types absolutely exclude one
> >> another. So there is no way to combine them.
> > With IVI::DB, I somehow succeeded in building up an XML database on
> > top of a normal hierarchical filesystem.
> And what is so special about this? I don't see the point here.

It shows the integration of XML and normal filesystems, and somehow the
needs of big XML databases.

> Please suggest a better name.

I find it awful too, because it's so long. Some suggestions:

Component Execution
Component Environment
Enhanced Components
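Coming back to XPath for a moment: in user space this already works today,
e.g. with libxml2, and a kernel DOMfs would have to offer something
equivalent. A minimal example, assuming a file adress.xml with an <adress>
root element:

  #include <libxml/parser.h>
  #include <libxml/xpath.h>
  #include <stdio.h>

  int main() {
      xmlDocPtr doc = xmlParseFile("adress.xml");
      if (!doc) return 1;

      xmlXPathContextPtr ctx = xmlXPathNewContext(doc);
      /* Select every <name> child of the root <adress> element. */
      xmlXPathObjectPtr res =
          xmlXPathEvalExpression((const xmlChar *)"/adress/name", ctx);

      if (res && res->nodesetval) {
          for (int i = 0; i < res->nodesetval->nodeNr; i++) {
              xmlChar *text = xmlNodeGetContent(res->nodesetval->nodeTab[i]);
              printf("%s\n", text);
              xmlFree(text);
          }
      }

      xmlXPathFreeObject(res);
      xmlXPathFreeContext(ctx);
      xmlFreeDoc(doc);
      return 0;
  }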
> And I'm definitely against the development of Palladium.

Then you do not know enough about it yet. ;-) But we will be able to add
Palladium support to ChallengeOS afterwards too; it isn't a priority at
the moment.

> > Have a look at "Berlin", which is somehow connected to GGI, and have a
> > look at Entity.
> Berlin's design is arguably one of the best, I agree. However I'd like
> to direct the focus more on new ways of user interaction and usability
> in general than on implementational details (the latter will possibly be
> determined by the former).

Ok! (You do not know what you said there ;-)))))

> Task 1 running module A accesses module B, which is not yet loaded.

Ok.

If we really want to talk about usability, then we have to dump a big pile
of the systematics we are used to. Do not see the operating system as a
system that has to drive the hardware, run applications, and abstract the
hardware for the applications. Do not see applications as processes which
are instances of programs. Completely forget all the crap about the
current systems we have. Have a look at what the user really wants, what
the user has to get, and what the user will see.

The user wants several different things:

* Applications: Email, Web, Amazon.com, Heise.de, Würstelstand, scanning
  images, watching TV, ...
* An easy interface
* A fast-responding interface
* Control over the computer
* Stable applications
* ...

Let's begin with the applications: The user sees websites like Amazon
(this is where I can buy my books), Heise (this is where I get my
information), ... as applications. The concept of a browser as an
application does not make sense. By the way, the concept of applications
as programs you have to start does not make sense either. The desktop
should not be just some icons to start programs (from a system design
view). The desktop is where you work.

The X server concept is an ancient relic from the times when a computer
had no graphical monitor directly attached. The concept that a process
manages several windows is a nice internal concept, but you can see the
flaw as soon as several graphical windows vanish at the same time because
the process behind them died unexpectedly.

Now let's take a look at what happens in the workflow of a window-based
user interface, to optimize the user experience:

1. A window pops up on the screen.
2. The window gets filled with widgets.
3. The user reads the contents of the widgets, and decides to click on a
   button.
4. The user moves the mouse to a location over the button.
5. Time passes.
6. The user clicks the mouse button.
7. The click goes through the wires, to the operating system, to the
   windowing system, to the application, and starts to initiate a
   procedure.
8. The procedure fetches data from disk, does calculations, ...
9. The procedure changes data, writes data to disk, ...
10. The procedure is done.
11. A new window pops up on the screen, asking the user for the next
    interaction.

In my opinion, we can guess what the user will do at point 4. As soon as
the mouse pointer gets over a button (and has stopped on it), it is
possible that the user might want to click on it. As soon as we know that
the user might click on the button, we could fire off a thread that does
all the stuff that does not change anything and does not hurt the rest of
the system. So we can start a thread that does only the actions of point 8
already at point 4. Instead of point 8, we then just have to wait for the
already started thread to finish. In the best case, we can do all the
necessary calculations even before the user clicks a button. But to
achieve that, we will have to redesign nearly everything. (Isn't that a
challenge?) Of course, those threads will have to run at nice level 10
until the user really clicks the button, ... A sketch of this idea follows
below.
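As announced above, a minimal sketch of the hover-speculation idea. The
widget callbacks onHover/onClick are invented for illustration, and the
priority handling (the nice level) is omitted:

  #include <future>
  #include <iostream>
  #include <string>

  // The expensive but side-effect-free part of the button's action
  // (point 8 above: fetch data, do calculations - change nothing yet).
  std::string precompute() {
      return "result of the read-only work";
  }

  struct Button {
      std::future<std::string> speculative;

      // Point 4: the pointer stops over the button - start the read-only
      // work in the background before the user has clicked.
      void onHover() {
          speculative = std::async(std::launch::async, precompute);
      }

      // Point 6: the user actually clicks - wait for the speculative
      // thread instead of starting the work only now.
      void onClick() {
          std::string data = speculative.get();
          // Point 9: only now commit any changes, based on the result.
          std::cout << "committing: " << data << std::endl;
      }
  };

  int main() {
      Button b;
      b.onHover();   // the user hovers ...
      b.onClick();   // ... and clicks a moment later
  }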
> The OS traps this and fires up module B. Module A then forces module B
> to load some data - say a data file like "/tmp/somedata.tmp" from disk.
> Then another task - task 2 - which just got started up also references
> module B. This access is of course trapped. But it does not result in
> loading another instance of module B. Instead task 2 shares module B
> with task 1 from this moment on. This also means that it sees the
> current state of module B, in this case that it has loaded a data file.

Why do you force singleton objects? I don't think it would make much sense
if there could be only one window object on the graphical interface ...
(See the sketch below for the sharing semantics I am objecting to.)

> In a nutshell, this is what the enhanced execution environment is about.
> Of course it wouldn't be suitable for a multiuser environment if it
> can't perform access checks to ensure data and system security.

(Perhaps you should read the biography of Richard Stallman ;-)

> With a few tweaks to this model it would even be possible that module B
> runs on a remote machine but neither task would ever have to care about
> that.

Network-wide singletons? Cool ... Just one window for the whole network ...

> A final note: Accesses to module B in that example are not performed
> using any special interfaces in the style of CORBA or COM which require
> wrapper code to be created. The accesses are formulated as
> straight-forward, non-wrapped code and no wrapper generator or IDL
> compiler is needed.

IDL is bad, yes. CORBA is bad too. But COM has some nice ideas; it just
isn't object-oriented and serializable enough.

> > At the moment, we have several different execution environments:
> > * Kernelspace (device drivers, ...)
> This one is quite special and has nothing in common with any of the
> other environments / spaces.

Correct.

> > * Daemonspace (all those servers ...)
> > * GUIspace (KDE, Gnome, Windows, ... "rich clients")
> These two can be unified into a "classic" POSIX-compatible environment.

No. I would only do the Daemonspace as a POSIX-compatible environment. For
the GUIspace, I want something better.

> The question I'm facing is whether this should be isolated from the
> "enhanced environment" I've been writing about here or whether the
> latter can be implemented purely as an extension to the former one,
> which would be a great thing.

I don't think that we need the Component Environment that much for the
Daemonspace. I think components are much more needed in the GUIspace.

> > * Webservicespace (everything running in a Browser, "thin clients")
> I'd call that "interpreter space" because every interpreted programming
> language can have a set of abstractions to the OS they are running on.
> The interpreter that is necessary in this case typically runs in another
> environment - normally POSIX-compatible user space. So this does not
> have to be considered as special.

What if not?

> > All those execution environments have very different needs, and should
> > be thought through on their own, I guess.
> Right, though I don't agree with all the border lines you've been
> drawing above.

Why not?
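To pin down my singleton objection from above, a tiny sketch of the
sharing semantics as I understand them (module and function names are
invented here):

  #include <iostream>
  #include <string>

  // Hypothetical module B: it holds state (the loaded data file).
  struct ModuleB {
      std::string loadedFile;
      void load(const std::string& path) { loadedFile = path; }
  };

  // Stand-in for the OS trap: every task that references module B gets
  // the same instance, created on first access.
  ModuleB& trapModuleB() {
      static ModuleB instance;   // exactly one per system in this model
      return instance;
  }

  int main() {
      // Task 1 forces module B to load a data file.
      trapModuleB().load("/tmp/somedata.tmp");

      // Task 2 starts later, references module B - and sees task 1's state.
      std::cout << "task 2 sees: " << trapModuleB().loadedFile << std::endl;
  }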
> > Have a look at the Perl module concept, and CPAN. It is nearly so
> > automated that it would automatically fetch and install the needed
> > modules from the Internet, as soon as you call the first one.
> This proves that it is possible, especially that it can all be totally
> anonymous (it's something people really want to have).

Yes. What I think is important is that the system demands open source by
default, like Perl does. COM really suffers from closed-source components:
everyone has to reinvent the wheel, because everyone just needs some
enhancements to the standard wheels. It is important that we have a
unified and administrated namespace for all modules. I don't like the Java
approach of having domain names in every variable and class name. The
other big problem with Java is that there is no central repository of all
the Java classes/modules that are available.

I think your component environment is the concept to build up all the
widgets for the user interface.

> >> The way to achieve this is quite easy - in theory. Enforcing controls
> >> and checks on the environment will give the modules the ability to
> >> gracefully handle crashes without pulling down every other module.

Just-in-Time-Compiling? Or Just-in-Time-Error-Locating? Regression tests,
like the Perl modules do them?

> >> object, variable or data structure). Therefore each module must
> >> consist of a binary and an access definition file.

Why don't you do the access definition in XML, and contain the binary in
the XML? (A rough sketch of what I mean follows below.)

> >> In this file there is an entry for each symbol which grants or denies
> >> read, write and execute rights for the owner of the module, his/her
> >> group and others (note that making an extra file out of this has two
> >> benefits: first, there's no new file format needed, and second this
> >> file could possibly be edited by a user or admin).

Do you expect them to do it? Do you expect an admin to audit/define all
the file permissions of the whole filesystem?

> >> In this file prototypes of each exported function and/or variable
> >> must reside as well as definitions of exported data structures,

Reminds me of IDL files ...

> >> because during the address space switch pointer addresses might have
> >> to be tweaked so that they point into the right window (imagine that
> >> the process is tunneling back and forth between two windows which map
> >> address spaces that have different real offsets). This might not be
> >> necessary when the windows are at the same addresses as the modules
> >> that are referenced within their own address space.
> >> This mechanism can be extended even further: Windows can map modules
> >> running on remote machines. This only needs a small extension in the
> >> form of a network protocol stack which is able to serialize and
> >> reassemble such requests automatically. Furthermore this mechanism
> >> can be exploited to map contents of the DOMfs into persistent objects
> >> and thereby provide a decent interface.

"Furthermore this mechanism can be exploited ..." ;-)

> >> All of this sounds like a lot of overhead. But I assume that it
> >> isn't. I expect that code controlled this way is no more than two to
> >> three times slower than a usual executable under otherwise identical
> >> conditions. And that performance hit isn't recognizable for desktop
> >> users given the performance of current PC hardware.

Yes, I think so too.

> >> 3. An interpreted language to automate the enhanced execution
> >> environment: ObjectBasic

Why not ObjectPerl? Basic does not have a powerful syntax ...
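And the access definition sketch I promised above. The element and
attribute names are invented on the spot, just to show the direction:

  <module name="adress" owner="philipp" group="users">
    <symbol name="name"     type="variable" owner="rw-" group="r--" others="r--"/>
    <symbol name="street"   type="variable" owner="rw-" group="r--" others="r--"/>
    <symbol name="validate" type="function" owner="rwx" group="r-x" others="r-x">
      <prototype>bool validate()</prototype>
    </symbol>
    <binary encoding="base64"><!-- the compiled module goes here --></binary>
  </module>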
> >> Have you ever thought of remote-controlling your word processor from
> >> a shell and writing a letter this way?
> > Yes. I have. But afterwards, I found no answer to the question:
> > "And why should I?"
> This example should point out the ease of component reuse and automation
> that is possible.

I have heard that story too often. They told me that story with CORBA,
CORSO, COM, OLE, DCOP, ...

> So both software integration and software development will hopefully
> become easier.

Sounds too good to be true ...

> > That one sounds good. CPAN could be a good example of how the
> > technical side could work. What I am missing at CPAN is quality
> > control.
> The problem can only be overcome by manpower. And manpower is by default
> rare among volunteer efforts.

Wise words ...

> But the direction is certainly right.

Yes.

> No. Having spent a whole week trying to install CA's Manufacturing
> Knowledge (MK for short, which is a direct competitor to SAP R/3) at
> work and having spent three months trying to get the faintest idea of
> what it can do I think we could easily build our own ERP system on top
> of ChallengeOS :).

I told you not to try that stuff. Büroknecht/LivingXML is much better ... ;-)

> PS: That MK installation still isn't complete. I've given it up by now :).

:-(

Many greetings,
-- 
~ Philipp Gühring              p.g...@fu...
~ http://www.livingxml.net/    ICQ UIN: 6588261
~ <xsl:value-of select="file:/home/philipp/.sig"/>