From: <Gre...@gm...> - 2002-10-20 20:20:01
Kyle Lahnakoski wrote:
> I was interested to see "an interpreter for an object oriented BASIC
> dialect" on Freshmeat this evening. I had just recently been working on
> a language, and it appears to fit your description quite well. I am
> hoping this means that we may have similar ideas about what languages
> should look like.
>
> Please give me an update on the status of ChallengeOS, and any other
> recent tid-bits of information. Maybe I can help.
>
> Thanks

Hi!

I'm quite surprised that somebody still shows interest in ChallengeOS. Honestly, I haven't cared for the project as much as I should have during the last few months because I kept myself busy with other things, not all of which were necessary.

So the status update is very scarce: the project doesn't have much more than the docs that are online right now. The code we have is a boot loader, a boot-time ELF linker and the start of a memory manager. None of that has been written by me, though.

There is no code related to the object oriented BASIC dialect I mentioned multiple times in the docs. I've laid out the syntax in a very general way. Some ChallengeOS-specific syntax elements will find their way into the language, because it is planned to be a script language in this OS. And I expect the implementation of the interpreter to be very non-portable for the same reason: it'll probably be deeply embedded in the operating system itself, at the very heart of its component model. The EBNF syntax for the BASIC dialect should be available online together with some examples. I believe that this is still available at our old homepage at http://challangeos.sourceforge.net in the docs section.

As much as I hate to tell you that ChallengeOS currently has the smell of vaporware on it, I'd love to have people helping me to get rid of it. I need help because I do not have the skills to implement all the low level stuff we need.

If you know anybody who might want to help on this project, please spread the word that we appreciate any contributions we can get.

Thanks!
Gregor
From: Kyle L. <ky...@ar...> - 2002-10-19 07:26:33
I was interested to see "an interpreter for an object oriented BASIC dialect" on Freshmeat this evening. I had just recently been working on a language, and it appears to fit your description quite well. I am hoping this means that we may have similar ideas about what languages should look like.

Please give me an update on the status of ChallengeOS, and any other recent tid-bits of information. Maybe I can help.

Thanks

--
---------------------------------------------------------------------
Kyle Lahnakoski
Arcavia Software Ltd.
(416) 892-7784
http://www.arcavia.com
From: <Gre...@gm...> - 2002-09-11 18:13:53
Hi, everybody!

I think this is interesting news, so I am simply forwarding this mail to you.

Bye,
Gregor
From: <Gre...@gm...> - 2002-08-15 18:49:30
Hi!

Sorry for the unusually long delay. I had a hard disk crash last weekend that carried a lot of data away, and I'm still struggling to recover from it. I'll not be able to recover everything, though :(.

wael oraiby wrote:
> This is free software, so do with it what you want ;), the tasking and
> so on. About the memory manager: I will try to do some work on it when
> I can get some free time, because currently I'm under a lot of
> pressure. The game engine is growing fast, and as it grows it requires
> more time. Until I can make use of a good algorithm (not sure about BSP
> yet, since it takes a lot of compile time and requires lots of work),
> I'm thinking of a different approach. Anyway, I hope I can make it! For
> the operating system, I surely agree with your structure; it's
> revolutionary. However, we still have a long road to travel to get
> there, and maybe we need more coders. Currently we must focus on the
> memory manager, the interrupt/signaling system, event handler services
> and of course scheduling; those are the basic keys for any operating
> system. The file system will surely require a lot of time later, but we
> must get those things working first!

True. But I believe that when we know the requirements for the software, we may be able to fulfill them better on the first attempt. So the discussion we recently had here hopefully pays off.

The memory manager and interrupt management have highest priority right now. The low-level side of interrupt management is almost done, I assume. We should be fine for a while if we provide an assembler wrapper for interrupt handlers that saves the registers to a data structure, calls C code in a safe way, and restores the registers from that data structure afterwards.

The event handling stuff is lowest priority now. I'm not sure whether it should be in the kernel at all. Maybe it's better to leave it at application/component level and build an ordinary syscall interface instead.

The file system work can start only after some device drivers have been written. I'll try to put up a framework that keeps device drivers separate from the main kernel at source level as far as possible. The reason is that the kernel should be linked at boot time in the future, not at compile time.

> Another thing: where is your game engine going now? You were using SDL
> and OpenGL the last time we talked about it, and you were coding for
> Moonlight as well. So what happened to your engine? And what about
> university? Work :) and ...

I've lost a lot of work on the engine because of the disk crash. But I'll recover from this, I think. Before the crash I had been able to convert Blender files to levels (only texture support was missing). Loading these files worked, as did basic view frustum culling. That was all lost in that stupid hard disk crash (the drive is less than a year old).

> well cya soon, i hope :)

I hope you understand the delay. It has been a quite stressful time recovering from this accident.

> -kintaro

Gregor
From: wael o. <wae...@ya...> - 2002-08-07 02:17:23
This is free software, so do with it what you want ;), the tasking and so on. About the memory manager: I will try to do some work on it when I can get some free time, because currently I'm under a lot of pressure. The game engine is growing fast, and as it grows it requires more time. Until I can make use of a good algorithm (not sure about BSP yet, since it takes a lot of compile time and requires lots of work), I'm thinking of a different approach. Anyway, I hope I can make it!

For the operating system, I surely agree with your structure; it's revolutionary. However, we still have a long road to travel to get there, and maybe we need more coders. Currently we must focus on the memory manager, the interrupt/signaling system, event handler services and of course scheduling; those are the basic keys for any operating system. The file system will surely require a lot of time later, but we must get those things working first!

Another thing: where is your game engine going now? You were using SDL and OpenGL the last time we talked about it, and you were coding for Moonlight as well. So what happened to your engine? And what about university? Work :) and ...

well cya soon, i hope :)

-kintaro
From: Gregor M. <Gre...@gm...> - 2002-08-04 12:21:04
wael oraiby wrote:
> hi,
>
> I dunno if you have this code; it should compile fine and run tasking.
> it isn't much tho...
>
> hope it's the right file this time ;)
>
> kintaro

It is the right file this time :). The code is newer than anything I've seen so far. The multitasking demo behaves fine in bochs and on real hardware, although the output is a little bit different. I suspect that it's because of different simulated performance. This code is actually newer than the CVS version.

However, I think that some pieces of code should be reorganized:

- The kfree() function currently takes two parameters, a pointer and the size of that memory block. This second parameter shouldn't be there, as it requires the other kernel code to keep track of the size of all allocations it made, which is quite a bad thing. The block's size could be stored in a header for each block that is invisible to code outside of the memory manager. However, such a header could easily be corrupted by misbehaving code.

- The multitasking code should be moved out of main.c. task.c could be a new home for all this code. I can do this on my own, and if I do so, I'll send you the new versions of these files.

This brings me to another topic: do you have trouble accessing the CVS archive at SourceForge? If so, we will need to find another solution to keep our code changes synchronized with less effort. I'm sure that we can move the CVS archive to a server without SSL if that would help you.

Gregor

--
*****************************************************
* Gregor Mueckl                       Gre...@gm... *
*                                                   *
* The ChallengeOS project:                          *
* http://challengeos.sourceforge.net                *
*****************************************************
* Math problems?                                    *
* Call 1-800-[(10x)(13i)^2]-[sin(xy)/2.362x].       *
*****************************************************
From: wael o. <wae...@ya...> - 2002-08-02 00:40:14
hi,

I dunno if you have this code; it should compile fine and run tasking. it isn't much tho...

hope it's the right file this time ;)

kintaro
From: Gregor M. <Gre...@gm...> - 2002-08-01 22:53:51
sou...@fu... wrote:
> Hi,
>
>> I'd only do this for interpreted code like Perl or Python fragments
>> (pretty much the same thing Entity is doing). The reason is that I
>> can't think of a way to make this behave correctly for binary code
>> fragments. When should which piece of code be executed?
>
> Do we need binary code that much?

You wouldn't want an OS that lets you run only interpreted bytecode, would you? That way you are much slower than with binary code. Besides, don't forget that GCC already has backends for a lot of processors out there and can output a lot of binary formats. If you wanted a bytecode interpreter, you'd have to build a compiler first, which I believe to be *additional* and actually unnecessary programming effort. Source code for compiled programming languages like C and C++ can be extremely portable, too, so this argument doesn't count. Actually, I don't believe that it's that easy to write a Perl script that runs out of the box on both Windows and UNIX. And last, if you write an interpreter you can simulate any environment you want for the interpreted code, without the need for a special operating system to do it.

>> I can think of optional features in DOMfs that can't be mapped
>> directly into XML files that way, like revision management or access
>> rights. The former is a pure convenience function, which might turn
>> out quite useful, but the latter is a must-have in the multi-user
>> environment ChallengeOS will have to provide.
>
> Hmmm ... revision control ... I don't think that it should be only a
> part of DOMfs.
> And on the other hand, I am not sure whether it shouldn't be (mustn't
> be) application specific anyway ...
> I haven't had enough revision-control-needing applications yet to have
> a clear view of what I want in that domain.

Well, if you sit next to one of the managers of a small company (150 employees), you'll soon get a feeling for what is really useful for business applications. And if you're programming company-specific solutions (literally) at the same time, it becomes quite obvious that revision control should be a major feature of a document management scheme, even if it's just an instruction telling the employees where to put their files. And DOMfs is a good candidate for implementing document management, in my opinion.

About the implementation of revision control: I suggest that it should be off by default and only be activated explicitly when needed. The different revisions of a DOM tree put under revision control will then get state and date tags as well as an application-defined free-form tag string, which the application can use to store additional information about that specific revision.

> Access control ... yes, that sounds interesting.
> While we come to access control ...
> Shouldn't we try to invent a better user-role-authorisation system?
> On the other hand, too much systematics might get too complex for
> users too ...

I'm currently used to two different authorisation systems: the Windows NT user/group ACL system (quite complex, but flexible) and the UNIX user/group system (easy to grasp, but not too flexible). On the performance side, the UNIX authorisation system can be expected to be a lot faster than the Windows NT one, though. I expect an authorisation system that we design on our own to drift off in the direction of either one of those systems - presumably the Windows one. These two systems are the extreme poles which attract every other system you can think of.

>> Currently I'm also pondering whether there should be the possibility
>> to define a format for XML subtrees (like it is done with DTDs or
>> similar files) and have DOMfs block any future changes that violate
>> these definitions. This is strictly optional, even if it should
>> become implemented. But is it worth implementing?
>
> Interesting idea. If we should do that, it should be based upon XML
> Schemas instead of DTDs.
> I think it would make sense to do it.
> Although I am not sure what performance hit that would give ...
> I will have to investigate upon XML Schemas ...

Please do that. I was only referring to DTDs because I don't know any other formats that are out there. I am aware, though, that DTDs are actually obsolete by now. And if you find some information on XML Schemas, could you please give us a (short) summary of what can be described with them?

> Another thing that comes to my mind is the binding from DOM against
> classes from object-oriented languages.
> It started when Java learned to serialize and deserialize its objects
> into XML instead of Java's native formats.
> So the serialize and deserialize functions simply output all the
> member variables, and recursively all included classes, into XML.
> But what if the classes are bound against persistent DOM?
> (Think of blessing hashes from Perl)
>
> > mount /usr/bin/adress.class /dom/adress
>
> That generates a new instance of adress.class, which gets mounted as
> persistent DOM on /dom/adress
>
> > ls -la /dom/adress
>
> -rw-r--r--  1 philipp users 1168 Feb  3  1998 name
> -rw-r--r--  1 philipp users 1168 Feb  3  1998 street
> -rwxr-xr-x  1 philipp users   16 Sep 18  1999 validate
>
> > echo "Philipp" >/dom/adress/name
>
> > /dom/adress/validate
> Validating Adress ...
> Street is missing!
>
> ...
>
> Just an idea at the moment ...
> By the way, there is already an XML shell available

Hmm... I don't think that DOMfs could be accessed easily via a shell. You'll probably have to use helper tools to query the DOM tree to get information out of that fs.

>> I agree except for the kernel and device drivers. It's senseless to
>> do some XML tricks in this area of an OS.
>
> I don't think so.

Why?

>> Remember that device drivers are about providing unified interfaces
>> to hardware.
>
> Correct.
>
>> And for the sake of speed they must be as simple, flexible and
>> straight-forward as possible. Device drivers will most likely be
>> interfaced via device files, i.e. the good old /dev/ dir. This method
>> is fast, clean and can be implemented by providing a small number of
>> syscalls (open, read, write, close, ioctrl).
>
> Ah. ;-) What about ioctrl?
> KISS - keep it simple, stupid

>> You surely wouldn't want to "cat randomcode.xml > /dev/xxx" and
>> expect it to do something useful with your input, would you?
>
> No. Binary is binary. I would not change anything on
> open, read, write, close for storage devices.
> But everything else ...
> Just think about more complex hardware devices like TV tuner cards.
> Being able to communicate with them via XML would make sense, I think.

Why have a device driver parse bloated XML data? That unnecessarily kills processor power, bloats the kernel because it has to have its own XML parser in memory and - worst of all - forces programs to generate XML segments by doing processor-intensive string concatenations when they could output binary data with just a few instructions. If you are using device files directly, chances are that you want or need to go for speed. So there is no need to make the job more difficult for these people.

>> > I see two necessary functions based on DOM, that should be made
>> > available: XPath and XSLT.
>> A question about XPath: Isn't there a supposedly more sophisticated
>> successor in preparation?
>
> XPath? I haven't heard about a successor yet. Perhaps you mean XPath
> 2.0 ...
> But the base functionality of XPath is so good that I don't think that
> there can be something far better.

If it's done right, it can be implemented in user space anyway. The interface to DOMfs only has to be powerful enough. It just makes no sense if a simple query needs dozens of syscalls to get finished. I think that it should be done that way. This also means that you are not bound to XPath as a query language if you don't like it.
>>> For the programming languages: DOM should be a datatype:
>>> DOM mytree;
>> Yes. Something like that. Anything else wouldn't make sense in a C++
>> program. Maybe there should be a parameter passed to the constructor
>> telling it what node to take as a root node.
>
> But that would be only an optional parameter, yes.

Not providing that parameter doesn't make much sense, either. Remember that different applications may share the same root node, because the data is stored on an extra partition and partitions normally are very rare.

>> Although that syntax is fine with Perl, it should be covered by
>> functions and classes in C++.
>
> Hmmm.

Does that mean that you agree? :)

>> The reason is that it would otherwise heavily break the already
>> surprisingly complex C++ syntax (C++ syntax looks simple, but it
>> certainly isn't).
>
> In my opinion, C++ has so many design flaws that it should not be used
> anymore.

C++ is a good language that has saved me countless hours of silly typing exercises. I don't see any flaws in it (yes, it's my favorite language :).

> But there arises a much deeper question:
> Do we want to develop a compatible operating system, or a challenging
> operating system?
> I followed the starting of several new operating systems in the last
> years, and I saw that they all struggle with the question of
> compatibility.
> POSIX, C/C++, Win32, Linux, device drivers, ...
> If you try to be compatible, you will gain a lot more applications for
> the platform.
> But as soon as you commit yourself to one of those standards, you are
> bound to it, and cannot easily innovate beyond it.
> And I think that is the real challenge of developing a new operating
> system.

Isn't it possible to provide a POSIX-compatible system with our own extensions, placed just so that they don't interfere with the POSIX standard but are there and usable at the same time? It is possible. There are many extensions to POSIX and the original UNIX on many UNIX-compatible systems, including Linux. System-V-like IPC, BSD-like process accounting, etc. started out as extensions to the original UNIX sources. But now they are widely available. Another example: BeOS has a POSIX-compatible base system and a GUI system built around it. Even MacOS X is built that way. So was NeXTStep. Any questions? ;)

>>> And the kernel should make the following possible:
>>> DOM persistentTree("/dev/hdd4");
>>> Which makes the DOMfs from the partition /dev/hdd4 available as a
>>> DOM tree.
>> Taking the device file name makes the node definition dependent on
>> the physical disk layout in the machine, which is a bad thing to do.
>
> Agreed. Lets do it differently:
> DOM persistentTree("/dom/existing/tree");
> /etc/fdisk:
> /dev/hdd4 /dom/existing/tree domfs

I still don't see a reason why DOMfs partitions should be mapped into the normal UNIX fs tree. This only limits the possibilities of DOMfs. An ordinary file system requires that every file name in a directory is unique. Many file systems can't assign user-defined attributes to directory entries. XML, and therefore DOMfs, doesn't have these restrictions. Remember that you must be able to import any well-formed XML file into DOMfs without data loss, according to the DOMfs requirements we agreed on earlier. So something doesn't work out here.

>> Instead I think there should be a way to mount existing DOMfs
>> partitions into a virtual root node. This way data can be spread
>> across many disks and partitions without requiring the application to
>> care about this. So it would be similar to the directory tree on
>> UNIX.
>
> Yes, ok.

See above, though.

>> As I already pointed out, I doubt that this could work for compiled
>> code. I agree that it's a neat concept they have, though. However, I
>> need some hints about how binary code could be included in this
>> concept.
>> >> The requirements of both file system types absolutely exclude one
>> >> another. So there is no way to combine them.
>> > With IVI::DB, I somehow succeed to build up a XML database on top
>> > of a normal hierarchical filesystem.
>> And what is so special about this? I don't see the point here.
>
> It shows the integration of XML and normal filesystems, and somehow
> the needs of big XML databases.

But that way it shouldn't find its way into an OS kernel, should it?

>> Please suggest a better name. I find it awful too, because it's so
>> long.
>
> Component Execution
> Component Environment
> Enhanced Components

Hmm... my favorite is "Component Environment". Maybe the official name should be "Challenge Component Environment", or "ChallengeCE" for short (although that sounds much like the name of another OS). Any other ideas, anyone?

>> And I'm definitely against the development of Palladium.
>
> Then you do not know enough about it yet. ;-)
> But we will be able to add Palladium support to ChallengeOS afterwards
> too; it isn't a priority at the moment.

Why add Palladium support? In my opinion it's a major hassle for the user and a danger for data security. Remember that Palladium requires "secure" programs to be signed by the OS vendor - Microsoft in this case. This severely limits the openness of an OS platform.

>> > Have a look at "Berlin", which is somehow connected to GGI, and
>> > have a look at Entity.
>> Berlin's design is arguably one of the best, I agree. However, I'd
>> like to direct the focus more on new ways of user interaction and
>> usability in general than on implementational details (the latter
>> will possibly be determined by the former).
>
> Ok! (You do not know, what you said there ;-)))))

Oh, I think I do. :)

>> Task 1 running module A accesses module B, which is not yet loaded.
>
> Ok. If we really want to talk about usability, then we have to dump a
> big pile of systematics which we are used to.

(Almost) done.

> Do not see the operating system as a system that has to drive the
> hardware, has to run applications, and has to abstract the hardware
> for the applications.

No. You are wrong: the OS has to abstract the hardware. You don't want to recompile/redownload/reinstall all your software because you've just upgraded your PC's hard drive from IDE to SCSI. Two months later you discover that you should have chosen another host adapter, and you do the same procedure again. Do you really want to do this? And you certainly won't want to miss preemptive multitasking, would you? So you *have* to see this aspect of a desktop OS. It would be different for an embedded OS, but ChallengeOS will probably never find its way onto a single chip.

> Do not see applications as processes, which are instances of programs.
> Completely forget all the crap about the current systems we have.
> Have a look at what the user really wants, what the user has to get,
> and what the user will see.
> The user wants several different things:
> * Applications: Email, Web, Amazon.com, Heise.de, Würstelstand,
>   scanning images, watching TV, ...

Are websites applications? I know that we could discuss this until eternity. But let's assume for the scope of this discussion that they are not and therefore shouldn't be considered here. We are discussing binary executables running on the host the OS is running on.

> * Easy interface
> * Fast responding interface
> * Control over the computer
> * Stable applications
> * ...

These things have been on developers' TODO lists for about 10 years now. And sadly, they haven't changed (much).

> Lets begin with the applications:
> The user sees websites like Amazon (this is where I can buy my books),
> Heise (this is where I get my informations), ... as applications.
> The concept of a browser as an application does not make sense.

It does. For a very basic reason, indeed: how would you otherwise want to blend those HTML pages they deliver into the desktop? What component should interpret JavaScript, if not the browser?

> By the way, the concept of applications as programs you have to start
> does not make sense either.

Right. It does not make sense to start the word-processing program and fiddle through hordes of dialogs until you can write a letter. Can't you just tell the computer that you want to write a letter and be presented with your own, individual letter template, ready and waiting to be filled in?

On the other hand, as a hobby web designer pointed out to me, commercial software manufacturers want to produce user interfaces that are recognisable and unique. Adobe is such an example. Of course this is done to keep users from migrating to competing products. But this also results in a steeper learning curve for new users. I hope that ChallengeOS will not suffer from this as much as Windows software often does.

> The desktop should not be just some icons to start programs (from a
> system design view).
> The desktop is where you work.

Correct. What is a desktop in the real world? Right, a desktop - with paper, pens and a computer on top of it :).

> The X server concept is an ancient relict from the times where a
> computer had no graphical monitor directly attached.
> The concept that a process manages several windows is a nice internal
> concept, but you can see the flaw as soon as several graphical windows
> vanish at the same time, because the process behind them died
> unexpectedly.

Well, in a way you are right. But windows have to be owned by something. I've been thinking about a model in which a window's contents are created only because several individual modules are interacting. None of those components could do this job without the others (i.e. no unnecessary component is involved). And yet those components are very specific to the tasks they do. This makes them reusable and flexible. Rearrange them and glue them together the way you need them, and you'll get a custom application much faster than you'd probably expect. If one of those components dies, the others must be able to handle this gracefully. But you will never be able to totally avoid errors. If they are not in the code, they are in the design. That's a fact. So there must always be a way to handle an error, even if it means removing all related windows from the desktop.

> Now lets take a look at what happens in the workflow of a window-based
> user interface, to optimize the user experience:
> 1. A window pops up on the screen.
> 2. The window gets filled with widgets.
> 3. The user reads the contents of the widgets, and decides to click
>    on a button.
> 4. The user moves the mouse to a location over the button.
> 5. Time passes.
> 6. The user clicks on the mouse button.
> 7. The click goes through the wires, to the operating system, to the
>    windowing system, to the application, and starts to initiate a
>    procedure.
> 8. The procedure fetches data from disk, does calculations, ...
> 9. The procedure changes data, writes data to disk, ...
> 10. The procedure is done.
> 11. A new window pops up on the screen, asking the user for the next
>     interaction.
> In my opinion, we can guess what the user will do at point 4.
> As soon as the mouse pointer gets over a button (and has stopped on
> it), it is possible that the user might want to click on it.
> As soon as we know that the user might click on the button, we could
> fire off a thread that does all the stuff that does not change
> anything, and that does not hurt the rest of the system.
> So we can start a thread that only does the actions of point 8 at
> point 4.
> Instead of point 8, we will have to wait for the finishing of the
> already started thread.
> In the best case, we can do all the necessary calculations even before
> the user clicks on a button.

Well... this actually sounds pretty bad to me, for a reason. On the one hand you suggest wasting performance by letting the kernel parse XML data and by using interpreted languages instead of compiled languages. On the other hand you suggest hacks like this one to compensate for the processor power previously wasted. Additionally, it's pretty hard to kill a thread in mid-execution and let another one continue without corrupting internal data structures or causing memory leaks. So this is a hack forcing programmers to hacks inside the applications, which is even worse (that reminds me of Windows in a way).

> But to achieve that, we will have to redesign nearly everything.
> (Isn't that a challenge?)
> Of course, those threads will have to run with nice level 10 until the
> user really clicks on the button, ...

One last word: either you can cut down the response time by conventional means, or it's unavoidable to let the user wait anyway. This is only a solution if the user is slow in handling the UI, which an experienced user surely is not. An experienced user is blindingly fast, so the time you win by that way of guessing will essentially be worthless, or will even be eaten up by the time needed to roll back the actions performed in the other threads while the main thread could have done full-power calculations.

>> The OS traps this and fires up module B. Module A then forces module
>> B to load some data - say a data file like "/tmp/somedata.tmp" - from
>> disk. Then another task - task 2 - which just got started up, also
>> references module B. This access is of course trapped. But it does
>> not result in loading another instance of module B. Instead, task 2
>> shares module B with task 1 from this moment on. This also means that
>> it sees the current state of module B, in this case that it has
>> loaded a data file.
>
> Why do you force singleton objects?
> I don't think that it would make much sense if there could be only one
> window object on the graphical interface ...
Aem... there's certainly a misunderstanding here. Consider the following code in a module:

namespace B {
    int senseless;

    class B1 {
    public:
        void foo() { senseless++; }
    };

    B1 *bar() {
        B1 *tmp = new B1;
        tmp->foo();
        return tmp;
    }
}

Of B::senseless and B::bar there would be only one single, shared instance. That is correct. However, there can be an unlimited number of objects of class B::B1. So there are effectively no singletons.

>> In a nutshell, this is what the enhanced execution environment is
>> about. Of course it wouldn't be suitable for a multiuser environment
>> if it can't perform access checks to ensure data and system security.
>
> (Perhaps you should read the biography about Richard Stallman ;-)

I think that I once read it, but can't remember it now.

>> With a few tweaks to this model it would even be possible that module
>> B runs on a remote machine, but neither task would ever have to care
>> about that.
>
> Network-wide singletons? Cool ...
> Just one window for the whole network ...

See above.

>> > * Daemonspace (all those servers ...)
>> > * GUIspace (KDE, Gnome, Windows, ... "rich clients")
>> These two can be unified into a "classic" POSIX-compatible
>> environment.
>
> No. I would only do the Daemonspace as a POSIX-compatible environment.
> For the GUIspace, I want something better.

This can be combined. And it can be a good thing to have components available for daemons/servers. This is especially true since I think that it is a good idea to implement the DOMfs persistence feature using the same mechanisms that components use. So you won't get access to DOMfs partitions if you don't access the component environment. A possible scenario is an Apache instance running with a ChallengeOS-specific module that allows the daemon to access and deliver documents stored in the DOMfs.
With a little bit of hands-on work you could attach other components to any daemons in the system and make custom services available to "normal" network clients this way. You wouldn't want to lose those possibilities, would you?

>> The question I'm facing is whether this should be isolated from the
>> "enhanced environment" I've been writing about here or whether the
>> latter can be implemented purely as an extension to the former one,
>> which would be a great thing.

> I don't think that we need the Component Environment that much for the
> Daemonspace. I think Components are much more needed in the GUIspace.

See above. In my opinion they are equally suited for both purposes.

>> > * Webservicespace (everything running in a Browser "thin client")
>> I'd call that "interpreter space" because every interpreted programming
>> language can have a set of abstractions to the OS they are running on. The
>> interpreter that is necessary in this case typically runs in another
>> environment - normally POSIX-compatible user space. So this need not be
>> considered as special.

> What if not?

I don't even see what this possibility could look like.

>> > All those execution environments have very different needs, and should be
>> > thought through on their own, I guess.
>> Right, though I don't agree with all the border lines you've been
>> drawing above.

> Why not?

Any process on any system can communicate either with other processes on the same system, or with processes on other systems that are connected via network, or with the user. Still it's a process. Even more, these methods can be combined in any way you like. So the point is that the GUI is nothing more than a part of the standard environment, just like a POSIX-compatible core API is. You can make all this available at the same time and without performance loss. However, you don't have to use it, but it's nice to have it available when it is needed.
Essentially all this becomes a single environment with many different facets.

>> > Have a look at the Perl module concept, and CPAN.
>> > It is nearly so automated, that it would automatically fetch and install
>> > the needed modules from the Internet, as soon as you call the first.
>> This proves that it is possible, especially that it can all be totally
>> anonymous (it's something people really want to have).

> Yes.
> What I think is important, is that the system demands OpenSource by
> default, like Perl does. COM really suffers from closed source
> components. So everyone has to reinvent the wheel, because everyone just
> needs some enhancements to the standard wheels.
> It is important, that we have a unified and administered namespace for
> all modules.

Yes. I definitely agree.

> I don't like the Java approach of having domain names in every variable
> and class name.
> The other big problem with Java is that there is no central repository
> of all the java classes/modules that are available.

I can see the repository problem. And Java also suffers from poor manageability of installed class files. This will only become apparent in large and complex Java-based software environments.

> I think your component environment is the concept to build up all the
> widgets for the user interface.

And hopefully not only that.

>> >> The way to achieve this is quite easy - in theory. Enforcing controls
>> >> and checks on the environment will give the modules the ability to
>> >> gracefully handle crashes without pulling down every other module.

> Just-in-Time-Compiling or Just-in-Time-Error-locating
> Regression tests like Perl modules do them

I don't think that this is the way to go. C++-style exceptions are more like it, I think (and yes, I think I know the answer :).

>> >> object, variable or data structure). Therefore each module must consist
>> >> of a binary and an access definition file.
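As a purely hypothetical illustration, a per-symbol access definition file as described in the quote might look like this (the format, the module name and all symbol names are invented here; nothing like this exists yet):

```text
# Access definition for module "mathlib" (hypothetical format)
# rights: owner / group / others  (r = read, w = write, x = execute)

int counter;                       rw- r-- ---
double sqrt2(double x);            r-x r-x r-x
struct vec3 { double x, y, z; };
vec3 cross(vec3 a, vec3 b);        r-x r-x ---
```

The file carries both the prototypes the runtime needs and the rwx grants per symbol, so no new binary format is required and an admin could edit it with a plain text editor.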
> Why don't you do the access definition in XML, and contain the binary in
> the XML?

No. Leave the binary format unchanged, please. And is XML well enough suited for large binary chunks (it's safe to assume chunk sizes of 10MB and more)?

>> In this file there is an
>> entry for each symbol which grants or denies read, write and execute
>> rights for the owner of the module, his/her group and others (note that
>> making an extra file out of this has two benefits: first, there's no new
>> file format needed, and second this file could possibly be edited by a
>> user or admin).

> Do you expect them to do it?
> Do you expect an admin to audit/define all the file permissions of the
> whole filesystem?

Only if he/she is paranoid. But there's a chance that he or she will look over certain definitions if they don't fit properly. You are adjusting file access rights from time to time, too. But you don't check file permissions of every file regularly, if you aren't a masochist.

>> In this file prototypes of each exported function and/or
>> variable must reside as well as definitions of exported data
>> structures,

> Reminds me of IDL files ...

Sort of, but there is no code generated from them. They are needed to understand what is actually going on and to make the Component Environment work. The data is therefore needed at runtime and not at compile time like in Corba or COM.

>> because during the address space switch pointer addresses
>> might have to be tweaked so that they point into the right window
>> (imagine that the process is tunneling back and forth between two
>> windows which map address spaces that have different real offsets). This
>> might not be necessary when the windows are at the same addresses as
>> the modules that are referenced within their own address space.
>> This mechanism can be extended even further: Windows can map modules
>> running on remote machines.
>> This only needs a small extension in the
>> form of a network protocol stack which is able to serialize and
>> reassemble such requests automatically. Furthermore this mechanism can
>> be exploited to map contents of the DOMfs into persistent objects and
>> thereby provide a decent interface.

> "Furthermore this mechanism can be exploited ..." ;-)

How? :)

>> >> 3. An interpreted language to automate the enhanced execution
>> >> environment: ObjectBasic

> Why not ObjectPerl?
> Basic does not have a powerful syntax ...

>> >> Have you ever thought of
>> >> remote-controlling your word processor from a shell and writing a letter
>> >> this way?
>> >
>> > Yes. I have. But afterwards, I found no answer to the question:
>> > "And why should I?"
>> This example should point out the ease of component reuse and
>> automation that is possible.

> I heard that story too often. They told me that story at CORBA, CORSO,
> COM, OLE, DCOP, ...

Well, all those systems have a lot of internal overhead. That is hopefully decreased to a *convenient* minimum on ChallengeOS. It must be easy to use for the programmer. Otherwise no one will adopt it.

>> So both software integration and software
>> development will hopefully become easier.

> Sounds too good to be true ...

Emphasis was on "hopefully" :)

>> No. Having spent a whole week trying to install CA's Manufacturing
>> Knowledge (MK for short, which is a direct competitor to SAP R/3) at
>> work and having spent three months trying to get the faintest idea of
>> what it can do I think we could easily build our own ERP system on top
>> of ChallengeOS :).

> I told you not to try that stuff. Büroknecht/LivingXML is much better ...
> ;-)

Marketing. :)

Sincerely,
Gregor

-- 
*****************************************************
* Gregor Mueckl                    Gre...@gm...     *
*                                                   *
* The ChallengeOS project:                          *
* http://challengeos.sourceforge.net                *
*****************************************************
* Math problems?
* Call 1-800-[(10x)(13i)^2]-[sin(xy)/2.362x].       *
*****************************************************
From: Gregor M. <Gre...@gm...> - 2002-08-01 19:05:40
wael oraiby wrote: > well this is an old code I was hacking , it isn't > anything yet , just a multitasking test ! yes it did > multitasking :) , but needs lot of work , it is done > via task gate ! > Unfortunately this task switching code is commented out as far as I can tell. I haven't yet tried to get it to work. It seems to have some DJGPP calls in it. Are those still needed? > I must say that this code is 2-3 months old , i dunno > if the file is the right one , anyway if it didn't > work tell me ;) > I think you already sent me this file. If so, these sources should be the base for the CVS version I sent you a copy of with my last mail (if I didn't forget to attach the archive, that is :). > kintaro... > > __________________________________________________ > Do You Yahoo!? > Yahoo! Health - Feel better, live better > http://health.yahoo.com Bye, Gregor -- ***************************************************** * Gregor Mueckl Gre...@gm... * * * * The ChallengeOS project: * * http://challengeos.sourceforge.net * ***************************************************** * Math problems? * * Call 1-800-[(10x)(13i)^2]-[sin(xy)/2.362x]. * ***************************************************** |
From: wael o. <wae...@ya...> - 2002-07-29 22:23:10
well this is an old code I was hacking , it isn't anything yet , just a multitasking test ! yes it did multitasking :) , but needs lot of work , it is done via task gate ! I must say that this code is 2-3 months old , i dunno if the file is the right one , anyway if it didn't work tell me ;) kintaro...
From: <sou...@fu...> - 2002-07-29 01:01:01
Hi,

> I'd only do this for interpreted code like Perl or Python fragments
> (pretty much the same thing entity is doing). The reason is that I can't
> think of a way to make this behave correctly for binary code
> fragments. When should which piece of code be executed?

Do we need binary code that much?

> I can think of optional features in DOMfs that can't be mapped directly
> into XML files that way, like revision management or access rights. The
> former is a pure convenience function, which might turn out quite
> useful, but the latter is a must-have in the multi-user environment
> ChallengeOS will have to provide.

Hmmm ... revision control ... I don't think that it should be only a part of DOMfs. And on the other hand, I am not sure whether it shouldn't be (mustn't be) application specific anyway ... I haven't had enough revision-control-needing applications yet to have a clear view of what I want in that domain.

Access control ... yes, that sounds interesting. While we are at access control ... Shouldn't we try to invent a better user-role-authorisation system? On the other hand, too much systematics might get too complex for users too ...

> Currently I'm also pondering whether there should be the possibility to
> define a format for XML subtrees (like it is done with DTDs or similar
> files) and have DOMfs block any future changes that violate these
> definitions. This is strictly optional, even if it should become
> implemented. But is it worth implementing?

Interesting idea. If we should do that, it should be based upon XML Schemas instead of DTDs. I think it would make sense to do it. Although I am not sure what performance hit that would give ... I will have to investigate XML Schemas ...

Another thing that comes to my mind is the binding from DOM against classes from object-oriented languages. It started when Java learned to serialize and deserialize its objects into XML, instead of Java's native formats.
So the serialize and deserialize functions simply output all the member variables and recursively all included classes into XML. But what if the classes are bound against a persistent DOM? (Think of blessing hashes from Perl)

> mount /usr/bin/adress.class /dom/adress

That generates a new instance of adress.class, which gets mounted as a persistent DOM on /dom/adress.

> ls -la /dom/adress
-rw-r--r--   1 philipp  users  1168 Feb  3  1998 name
-rw-r--r--   1 philipp  users  1168 Feb  3  1998 street
-rwxr-xr-x   1 philipp  users    16 Sep 18  1999 validate
> echo "Philipp" >/dom/adress/name
> /dom/adress/validate
Validating Adress ...
Street is missing!
> ...

Just an idea at the moment ... By the way, there is already an XML shell available.

> I agree except for the kernel and device drivers. It's senseless to do
> some XML tricks in this area of an OS.

I don't think so.

> Remember that device drivers are
> about providing unified interfaces to hardware.

Correct.

> And for the sake of
> speed they must be as simple, flexible and straight-forward as possible.
> Device drivers will most likely be interfaced via device files, i.e. the
> good old /dev/ dir. This method is fast, clean and can be implemented by
> providing a small number of syscalls (open, read, write, close, ioctl).

Ah. ;-) What about ioctl?

> You surely wouldn't want to "cat randomcode.xml > /dev/xxx" and expect
> it to do something useful with your input, would you?

No. Binary is binary. I would not change anything on open, read, write, close for storage devices. But everything else ... Just think about more complex hardware devices like TV tuner cards. Being able to communicate with them via XML would make sense, I think.

> > I see two necessary functions based on DOM, that should be made
> > available: XPath and XSLT.
> A question about XPath: Isn't there a supposedly more sophisticated
> successor in preparation?

XPath? I haven't heard about a successor yet. Perhaps you mean XPath 2.0 ...
But the base functionality of XPath is so good that I don't think there can be something far better.

> For XSLT: I think that it would be a good thing to leave the
> implementation of this in user space. It should be available.

Interesting question. I will have to think about it a bit more.

> > For the programming languages: DOM should be a datatype:
> > DOM mytree;
> Yes. Something like that. Anything else wouldn't make sense in a C++
> program. Maybe there should be a parameter passed to the constructor
> telling it what node to take as a root node.

But that would be only an optional parameter, yes.

> Although that syntax is fine with Perl, it should be covered by
> functions and classes in C++.

Hmmm.

> Reason is that it would otherwise heavily break the already surprisingly
> complex C++ syntax (C++ syntax looks simple, but it certainly isn't).

In my opinion, C++ has so many design flaws that it should not be used anymore.

But there arises a much deeper question: Do we want to develop a compatible operating system, or a challenging operating system? I followed the starting of several new operating systems in the last years, and I saw that they all struggle with the question of compatibility. POSIX, C/C++, Win32, Linux, device drivers, ... If you try to be compatible, you will gain a lot more applications for the platform. But as soon as you commit yourself to one of those standards, you are bound to it, and cannot easily innovate beyond it. And I think that is the real challenge of developing a new operating system.

> > And the kernel should make the following possible:
> > DOM persistentTree("/dev/hdd4");
> > Which makes the DOMfs from the partition /dev/hdd4 available as DOM tree.
> Taking the device file name makes the node definition dependent on the
> physical disk layout in the machine, which is a bad thing to do.

Agreed.
Let's do it differently:

DOM persistentTree("/dom/existing/tree");

/etc/fdisk:
/dev/hdd4    /dom/existing/tree    domfs

> Instead
> I think there should be a way to mount existing DOMfs partitions into a
> virtual root node. This way data can be spread across many disks and
> partitions without requiring the application to care about this.
> So it would be similar to the directory tree on UNIX.

Yes, ok.

> As I already pointed out, I doubt that this could work for compiled
> code. I agree that it's a neat concept they have, though. However, I
> need some hints about how binary code could be included in this concept.

> >> The
> >> requirements of both file system types absolutely exclude one another.
> >> So there is no way to combine them.
> > With IVI::DB, I somehow succeed to build up an XML database on top of a
> > normal hierarchical filesystem.
> And what is so special about this? I don't see the point here.

It shows the integration of XML and normal filesystems, and somehow the needs of big XML databases.

> Please suggest a better name. I find it awful too, because it's so long.

Component Execution
Component Environment
Enhanced Components

> And I'm definitely against the development of Palladium.

Then you do not know enough about it yet. ;-) But we will be able to add Palladium support to ChallengeOS afterwards too, it isn't a priority at the moment.

> > Have a look at "Berlin", which is somehow connected to GGI, and have a
> > look at Entity.
> Berlin's design is arguably one of the best, I agree. However I'd like
> to direct the focus more on new ways of user interaction and usability
> in general than on implementational details (the latter will possibly be
> determined by the former).

Ok! (You do not know what you said there ;-)))))

> Task 1 running module A accesses module B, which is not yet loaded.

Ok. If we really want to talk about usability, then we have to dump a big pile of systematics which we are used to.
Do not see the operating system as a system that has to drive the hardware, has to run applications, and has to abstract the hardware for the applications. Do not see applications as processes which are instances of programs. Completely forget all the crap about the current systems we have. Have a look at what the user really wants, what the user has to get, and what the user will see.

The user wants several different things:
* Applications: Email, Web, Amazon.com, Heise.de, Würstelstand, scanning images, watching TV, ...
* Easy interface
* Fast responding interface
* Control over the computer
* Stable applications
* ...

Let's begin with the applications: The user sees websites like Amazon (this is where I can buy my books), Heise (this is where I get my information), ... as applications. The concept of a browser as an application does not make sense. By the way, the concept of applications as programs you have to start does not make sense either. The desktop should not be just some icons to start programs (from a system design view). The desktop is where you work. The X server concept is an ancient relict from the times when a computer had no graphical monitor directly attached. The concept that a process manages several windows is a nice internal concept, but you can see the flaw as soon as several graphical windows vanish at the same time, because the process behind them died unexpectedly.

Now let's take a look at what happens in the workflow of a window based user interface, to optimize the user experience:

1. A window pops up on the screen.
2. The window gets filled with widgets.
3. The user reads the contents of the widgets, and decides to click on a button.
4. The user moves the mouse to a location over the button.
5. Time passes.
6. The user clicks on the mouse button.
7. The click goes through the wires, to the operating system, to the windowing system, to the application, and starts to initiate a procedure.
8. The procedure fetches data from disk, does calculations, ...
9. The procedure changes data, writes data to disk, ...
10. The procedure is done.
11. A new window pops up on the screen, asking the user for the next interaction.

In my opinion, we can guess what the user will do at point 4. As soon as the mouse pointer gets over a button (and has stopped on it), it is possible that the user might want to click on it. As soon as we know that the user might click on the button, we could fire off a thread that does all the stuff that does not change anything, and that does not hurt the rest of the system. So we can start a thread that only does the actions of point 8, already at point 4. Instead of point 8, we will then have to wait for the finishing of the already started thread. In the best case, we can do all the necessary calculations even before the user clicks on a button. But to achieve that, we will have to redesign nearly everything. (Isn't that a challenge?) Of course, those threads will have to run with nice level 10 until the user really clicks on the button, ...

> The
> OS traps this and fires up module B. Module A then forces module B to
> load some data - say a data file like "/tmp/somedata.tmp" from disk.
> Then another task - task 2 - which just got started up also references
> module B. This access is of course trapped. But it does not result in
> loading another instance of module B. Instead task 2 shares module B
> with task 1 from this moment on. This also means that it sees the
> current state of module B, in this case that it has loaded a data file.

Why do you force singleton objects? I don't think that it would make much sense if there could be only one window object on the graphical interface ...

> In a nutshell, this is what the enhanced execution environment is about.
> Of course it wouldn't be suitable for a multiuser environment if it can't
> perform access checks to ensure data and system security.
(Perhaps you should read the biography about Richard Stallman ;-)

> With a few tweaks to this model it would even be possible that module B
> runs on a remote machine but neither task would ever have to care about
> that.

Network-wide singletons? Cool ...
Just one window for the whole network ...

> A final note: Accesses to module B in that example are not performed
> using any special interfaces in the style of CORBA or COM which require
> wrapper code to be created. The accesses are formulated as
> straight-forward, non-wrapped code and no wrapper generator or IDL
> compiler is needed.

IDL is bad, yes. CORBA is bad too. But COM has some nice ideas, it just isn't object oriented and serializable enough.

> > At the moment, we have several different execution environments:
> > * Kernelspace (device drivers, ...)
> This one is quite special and has nothing in common with any of the
> other environments / spaces.

Correct.

> > * Daemonspace (all those servers ...)
> > * GUIspace (KDE, Gnome, Windows, ... "rich clients")
> These two can be unified into a "classic" POSIX-compatible environment.

No. I would only do the Daemonspace as a POSIX-compatible environment. For the GUIspace, I want something better.

> The question I'm facing is whether this should be isolated from the
> "enhanced environment" I've been writing about here or whether the
> latter can be implemented purely as an extension to the former one,
> which would be a great thing.

I don't think that we need the Component Environment that much for the Daemonspace. I think Components are much more needed in the GUIspace.

> > * Webservicespace (everything running in a Browser "thin client")
> I'd call that "interpreter space" because every interpreted programming
> language can have a set of abstractions to the OS they are running on. The
> interpreter that is necessary in this case typically runs in another
> environment - normally POSIX-compatible user space. So this need not be
> considered as special.
What if not?

> > All those execution environments have very different needs, and should be
> > thought through on their own, I guess.
> Right, though I don't agree with all the border lines you've been
> drawing above.

Why not?

> > Have a look at the Perl module concept, and CPAN.
> > It is nearly so automated, that it would automatically fetch and install
> > the needed modules from the Internet, as soon as you call the first.
> This proves that it is possible, especially that it can all be totally
> anonymous (it's something people really want to have).

Yes. What I think is important is that the system demands OpenSource by default, like Perl does. COM really suffers from closed source components. So everyone has to reinvent the wheel, because everyone just needs some enhancements to the standard wheels. It is important that we have a unified and administered namespace for all modules.

I don't like the Java approach of having domain names in every variable and class name. The other big problem with Java is that there is no central repository of all the Java classes/modules that are available.

I think your component environment is the concept to build up all the widgets for the user interface.

> >> The way to achieve this is quite easy - in theory. Enforcing controls
> >> and checks on the environment will give the modules the ability to
> >> gracefully handle crashes without pulling down every other module.

Just-in-Time-Compiling or Just-in-Time-Error-locating
Regression tests like Perl modules do them

> >> object, variable or data structure). Therefore each module must consist
> >> of a binary and an access definition file.

Why don't you do the access definition in XML, and contain the binary in the XML?
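An XML form of such an access definition, as suggested here, could conceivably look like this (a purely invented sketch; the element names and the module are hypothetical, and it leaves the binary outside the XML):

```xml
<!-- Hypothetical XML access definition for a module "mathlib" -->
<module name="mathlib" binary="mathlib.bin">
  <symbol name="counter" type="int">
    <rights owner="rw" group="r" others=""/>
  </symbol>
  <symbol name="sqrt2" prototype="double sqrt2(double x)">
    <rights owner="rx" group="rx" others="rx"/>
  </symbol>
</module>
```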
> >> In this file there is an
> >> entry for each symbol which grants or denies read, write and execute
> >> rights for the owner of the module, his/her group and others (note that
> >> making an extra file out of this has two benefits: first, there's no new
> >> file format needed, and second this file could possibly be edited by a
> >> user or admin).

Do you expect them to do it? Do you expect an admin to audit/define all the file permissions of the whole filesystem?

> >> In this file prototypes of each exported function and/or
> >> variable must reside as well as definitions of exported data
> >> structures,

Reminds me of IDL files ...

> >> because during the address space switch pointer addresses
> >> might have to be tweaked so that they point into the right window
> >> (imagine that the process is tunneling back and forth between two
> >> windows which map address spaces that have different real offsets). This
> >> might not be necessary when the windows are at the same addresses as
> >> the modules that are referenced within their own address space.
> >> This mechanism can be extended even further: Windows can map modules
> >> running on remote machines. This only needs a small extension in the
> >> form of a network protocol stack which is able to serialize and
> >> reassemble such requests automatically. Furthermore this mechanism can
> >> be exploited to map contents of the DOMfs into persistent objects and
> >> thereby provide a decent interface.

"Furthermore this mechanism can be exploited ..." ;-)

> >> All of this sounds like a lot of overhead. But I assume that it isn't. I
> >> expect that code controlled this way is no more than two to three times
> >> slower than a usual executable under otherwise identical conditions. And
> >> that performance hit isn't recognizable for desktop users given the
> >> performance of current PC hardware.

Yes, I think so too.

> >> 3.
> >> An interpreted language to automate the enhanced execution
> >> environment: ObjectBasic

Why not ObjectPerl? Basic does not have a powerful syntax ...

> >> Have you ever thought of
> >> remote-controlling your word processor from a shell and writing a letter
> >> this way?
> >
> > Yes. I have. But afterwards, I found no answer to the question:
> > "And why should I?"
> This example should point out the ease of component reuse and
> automation that is possible.

I heard that story too often. They told me that story at CORBA, CORSO, COM, OLE, DCOP, ...

> So both software integration and software
> development will hopefully become easier.

Sounds too good to be true ...

> > That one sounds good. CPAN could be a good example of how the technical
> > side could work. What I am missing at CPAN is Quality Control.
> The problem can only be overcome by manpower. And manpower is by default
> rare among volunteer efforts.

Wise words ...

> But the direction is certainly right.

Yes.

> No. Having spent a whole week trying to install CA's Manufacturing
> Knowledge (MK for short, which is a direct competitor to SAP R/3) at
> work and having spent three months trying to get the faintest idea of
> what it can do I think we could easily build our own ERP system on top
> of ChallengeOS :).

I told you not to try that stuff. Büroknecht/LivingXML is much better ... ;-)

> PS: That MK installation still isn't complete. I've given it up by now :).

:-(

Many greetings,
-- 
~ Philipp Gühring              p.g...@fu...
~ http://www.livingxml.net/    ICQ UIN: 6588261
~ <xsl:value-of select="file:/home/philipp/.sig"/>
From: Gregor M. <Gre...@gm...> - 2002-07-28 16:03:28
sou...@fu... wrote:
> Re: [Challengeos-developers] more ideas
> From: Philipp Gühring <p.g...@fu...>
> To: cha...@li...
> Date: Fri, 26 Jul 2002 19:03:55 +0200
>
> Message contains signature from Philipp Michael Gühring (Sourcerer)
> <pg...@pa...>
> Hi,
>
>> This file system will be solely for data storage.
>
> Applications could be another idea too. Look at www.entity.cx
> I think that could be the basis for your componentized, modularized
> architecture too.

I'd only do this for interpreted code like Perl or Python fragments (pretty much the same thing Entity is doing). The reason is that I can't think of a way to make this behave correctly for binary code fragments. When should which piece of code be executed?

>> It will serve as some
>> sort of hierarchical database. It'll be designed in such a way that an
>> XML file can go through an import/export cycle on this file system and
>> will come out unaltered (perhaps some whitespace vanished or was
>> added, but that has not changed the contents of the file).

> Ok.

I can think of optional features in DOMfs that can't be mapped directly into XML files that way, like revision management or access rights. The former is a pure convenience function, which might turn out quite useful, but the latter is a must-have in the multi-user environment ChallengeOS will have to provide.

Currently I'm also pondering whether there should be the possibility to define a format for XML subtrees (like it is done with DTDs or similar files) and have DOMfs block any future changes that violate these definitions. This is strictly optional, even if it should become implemented. But is it worth implementing?

>> How is this file system accessed? Among the many ways thinkable I prefer
>> to map the hierarchical structure onto persistent data structures which
>> are accessed from memory. However, this can only be achieved in a
>> reasonable way when an object-oriented language is used.
>> Actually, pure
>> procedural languages like C will have a hard time in ChallengeOS anyway
>> (see below). Evidently, you could design a plain procedural C-compatible
>> interface, but that'll be a lot more complicated to use.

> Yes.
> DOM, and all the necessary functions around it, have to be provided by the
> system kernel, for every kernel driver, for every programming language, and
> for every application.

I agree except for the kernel and device drivers. It's senseless to do some XML tricks in this area of an OS. Remember that device drivers are about providing unified interfaces to hardware. And for the sake of speed they must be as simple, flexible and straightforward as possible. Device drivers will most likely be interfaced via device files, i.e. the good old /dev/ dir. This method is fast, clean and can be implemented by providing a small number of syscalls (open, read, write, close, ioctl). You surely wouldn't want to "cat randomcode.xml > /dev/xxx" and expect it to do something useful with your input, would you?

> I see two necessary functions based on DOM, that should be made available:
> XPath and XSLT.

A question about XPath: Isn't there a supposedly more sophisticated successor in preparation?

For XSLT: I think that it would be a good thing to leave the implementation of this in user space. It should be available.

> For the programming languages: DOM should be a datatype:
> DOM mytree;

Yes. Something like that. Anything else wouldn't make sense in a C++ program. Maybe there should be a parameter passed to the constructor telling it what node to take as a root node.

> XPath should be as integrated into the programming languages as regular
> expressions are integrated in Perl:
> foreach (mytree =~ x/DOCUMENT/NODE/SUBNODE[@attribute='value']/Text )
> {
>     print;
> }

Although that syntax is fine with Perl, it should be covered by functions and classes in C++.
The reason is that it would otherwise heavily break the already surprisingly complex C++ syntax (C++ syntax looks simple, but it certainly isn't).

> And the kernel should make the following possible:
> DOM persistentTree("/dev/hdd4");
> which makes the DOMfs on the partition /dev/hdd4 available as a DOM
> tree.

Taking the device file name makes the node definition dependent on the physical disk layout of the machine, which is a bad thing to do. Instead I think there should be a way to mount existing DOMfs partitions into a virtual root node. This way data can be spread across many disks and partitions without requiring the application to care about it. So it would be similar to the directory tree on UNIX.

>> Note that there must still be a conventional file system available
>> because at least the software must be stored on this file system.
>
> Hmmm. Have a look at Entity.
> I think Entity shows the way how applications can be developed in the
> future.

As I already pointed out, I doubt that this could work for compiled code. I agree that it's a neat concept they have, though. However, I need some hints about how binary code could be included in this concept.

>> The requirements of both file system types absolutely exclude one
>> another. So there is no way to combine them.
>
> ;-)
> I am not sure yet. I don't think that it's impossible, it just isn't
> straightforward.
> Did you take a deep enough look at IVI::DB yet?

No.

> IVI::DB is my own native XML database, which is available under the GPL
> from http://www.livingxml.net/ (->Plattform ->Database ->bottom of the
> page)
> With IVI::DB, I somehow succeeded in building an XML database on top of
> a normal hierarchical filesystem.

And what is so special about this? I don't see the point here.

>> 2. Enhanced execution environment
>
> (The name somehow reminds me of Palladium ... but forget that)

Please suggest a better name. I find it awful too, because it's so long.
And I'm definitely against the development of Palladium.

>> [Note: This is in my opinion the most important feature. I'll not give
>> up this one. No way!]
>
> Did I say anything against it?
> Something like that has been stumbling through my mind for some years
> now, but I am not yet sure how I really want it, and I think I want it
> a bit differently than you, but I think the overall direction is not
> that wrong ...

Well, let's fight it out... ;)

>> This is more than a feature. It's a computing concept unmatched by
>> anything I've seen so far or that is to come in the near future. In
>> other words: it's unique!
>
> Marketing.

Perhaps. But it's something every project does :).

>> It's hard to describe all of this exactly. Imagine your software
>> consisting of a whole wagonload of small, specialized modules (or
>> libraries) which are running in the same address space. Every module
>> could use functions provided by every other module that is installed.
>> Extending such an environment would be easy: just write the missing
>> module using features from the modules that are already available.
>
> Sounds like the shared library concept.

It's more than that. It's rather a component model. No module will ever have more than one instance (including code *and* data). Consider the following example: task 1, running module A, accesses module B, which is not yet loaded. The OS traps this and fires up module B. Module A then makes module B load some data - say a file like "/tmp/somedata.tmp" - from disk. Then another task - task 2 - which has just started up also references module B. This access is of course trapped, too. But it does not result in loading another instance of module B. Instead, task 2 shares module B with task 1 from this moment on. This also means that it sees the current state of module B, in this case that it has loaded a data file. In a nutshell, this is what the enhanced execution environment is about.
Of course it wouldn't be suitable for a multiuser environment if it can't perform access checks to ensure data and system security. With a few tweaks to this model it would even be possible for module B to run on a remote machine without either task ever having to care about that. A final note: accesses to module B in that example are not performed through special interfaces in the style of CORBA or COM, which require wrapper code to be generated. The accesses are formulated as straightforward, non-wrapped code, and no wrapper generator or IDL compiler is needed.

> At the moment, we have several different execution environments:
> * Kernelspace (device drivers, ...)

This one is quite special and has nothing in common with any of the other environments/spaces.

> * Daemonspace (all those servers ...)
> * GUIspace (KDE, Gnome, Windows, ... "rich clients")

These two can be unified into a "classic" POSIX-compatible environment. The question I'm facing is whether this should be isolated from the "enhanced environment" I've been writing about here or whether the latter can be implemented purely as an extension to the former, which would be a great thing.

> * Webservicespace (everything running in a browser, "thin client")

I'd call that "interpreter space" because every interpreted programming language can have a set of abstractions over the OS it is running on. The interpreter that is necessary in this case typically runs in another environment - normally POSIX-compatible user space. So this need not be considered special.

> All those execution environments have very different needs, and should
> be thought through on their own, I guess.

Right, though I don't agree with all the border lines you've been drawing above.

>> However, this has two rather obvious drawbacks: if only one module has
>> a bug, the whole set would be forced down in a big crash. Second, every
>> module involved must be loaded into memory at startup. That's a waste
>> of memory.
>> But these problems can be overcome.
>
> Have a look at the Perl module concept, and CPAN.
> It is nearly so automated that it would automatically fetch and install
> the needed modules from the Internet as soon as you call the first one.

This proves that it is possible, especially that it can all be totally autonomous (it's something people really want to have).

>> The way to achieve this is quite easy - in theory. Enforcing controls
>> and checks on the environment will give the modules the ability to
>> gracefully handle crashes without pulling down every other module.
>> Furthermore, the same mechanism can be used to perform a sort of lazy
>> linking: a module will only be loaded when it is referenced (called
>> into) for the first time and will be unloaded as soon as the last
>> active reference has gone.
>> This is possible on an 80386 and upwards (and pretty likely on other
>> processor architectures as well): this processor has a pretty nifty
>> page-based access control. You can only set page table entries to
>> readable, writeable and/or non-existent, but that's sufficient, because
>> the rest is handled in software.
>> Each module gets its own private address space which contains "windows"
>> into which the referenced modules are mapped. This mapping might not
>> even be real page-level mapping because of access limitations which
>> need to be enforced.
>
> Again, this sounds like shared libraries.

I want it to be more than that. See the passage above where I've drawn the line between this "enhanced environment" (I'm still looking for a better name) and already available component models. I hope that the above has made it clearer, too.

>> Each time a window is accessed, the context of the running process
>> changes to the address space of the referenced module, after
>> appropriate mapping has been done.
>> Accesses through windows are trapped (except when non-pointer data is
>> read and reading is allowed, or in other rather trivial cases) and the
>> instruction that has invoked the trap is examined and then - if it is
>> valid - emulated by the trap handler. In case the instruction is
>> invalid, the module having caused this violation will get the chance
>> to handle the error and exit gracefully.
>> Of course, access rights must be defined. This is done on a per-symbol
>> basis (symbol in this context means function/procedure entry point,
>> object, variable or data structure). Therefore each module must
>> consist of a binary and an access definition file. In this file there
>> is an entry for each symbol which grants or denies read, write and
>> execute rights for the owner of the module, his/her group and others
>> (note that making an extra file out of this has two benefits: first,
>> there's no new file format needed, and second, this file could
>> possibly be edited by a user or admin). In this file the prototypes of
>> each exported function and/or variable must reside as well as
>> definitions of exported data structures, because during the address
>> space switch pointer addresses might have to be tweaked so that they
>> point into the right window (imagine that the process is tunneling
>> back and forth between two windows which map address spaces that have
>> different real offsets). This might not be necessary when the windows
>> are at the same addresses as the modules that are referenced within
>> their own address space.
>> This mechanism can be extended even further: windows can map modules
>> running on remote machines. This only needs a small extension in the
>> form of a network protocol stack which is able to serialize and
>> reassemble such requests automatically. Furthermore, this mechanism
>> can be exploited to map contents of the DOMfs into persistent objects,
>> thereby providing a decent interface.
>>
>> All of this sounds like a lot of overhead.
>> But I assume that it isn't. I expect that code controlled this way is
>> no more than two to three times slower than a usual executable under
>> otherwise identical conditions. And that performance hit isn't
>> noticeable for desktop users given the performance of current PC
>> hardware.
>>
>> 3. An interpreted language to automate the enhanced execution
>> environment: ObjectBasic
>>
>> This is an optional thing and can be described as the "shell" of the
>> enhanced execution environment I just described. The language should
>> be easy to use, yet powerful enough to write small applications and
>> automation scripts for everyday use.
>
>> Have you ever thought of remote-controlling your word processor from a
>> shell and writing a letter this way?
>
> Yes. I have. But afterwards, I found no answer to the question:
> "And why should I?"

This example should point out the ease of component reuse and automation that is possible. So both software integration and software development will hopefully become easier.

>> With this interpreter, it'll be possible. I promise. ;)
>
>> That's all! This is just a short description of the most important
>> points in my design proposal. Of course it is much longer: an
>> integrated installer with online software update facility (the
>> installer might get things optimized inside the enhanced execution
>> environment quite a lot),
>
> That one sounds good. CPAN could be a good example of how the technical
> side could work. What I am missing at CPAN is quality control.

That problem can only be overcome by manpower. And manpower is by default rare among volunteer efforts. But the direction is certainly right.

>> a new and hopefully superior graphical user interface, etc.
>
> Have a look at "Berlin", which is somehow connected to GGI, and have a
> look at Entity.

Berlin's design is arguably one of the best, I agree.
However, I'd like to direct the focus more on new ways of user interaction and usability in general than on implementation details (the latter will possibly be determined by the former).

>> What else?
>> An office suite? An ERP system? A web server? I just don't know.
>
> ;-)
> Do you plan to run SAP on it?

No. Having spent a whole week trying to install CA's Manufacturing Knowledge (MK for short, a direct competitor to SAP R/3) at work and having spent three months trying to get the faintest idea of what it can do, I think we could easily build our own ERP system on top of ChallengeOS :).

Gregor

PS: That MK installation still isn't complete. I've given up on it by now :).

--
*****************************************************
* Gregor Mueckl Gre...@gm... *
* *
* The ChallengeOS project: *
* http://challengeos.sourceforge.net *
*****************************************************
* Math problems? *
* Call 1-800-[(10x)(13i)^2]-[sin(xy)/2.362x]. *
*****************************************************
|
From: <sou...@fu...> - 2002-07-26 18:04:52
|
Re: [Challengeos-developers] more ideas
From: Philipp Gühring <p.g...@fu...>
To: cha...@li...
Date: Fri, 26 Jul 2002 19:03:55 +0200

Message contains signature from Philipp Michael Gühring (Sourcerer) <pg...@pa...>

Hi,

> This file system will be solely for data storage.

Applications could be another idea too. Look at www.entity.cx
I think that could be the basis for your componentized, modularized architecture too.

> It will serve as some sort of hierarchical database. It'll be designed
> in such a way that an XML file can go through an import/export cycle on
> this file system and will come out unaltered (perhaps some whitespace
> vanished or was added, but that doesn't change the contents of the
> file).

Ok.

> How is this file system accessed? Among the many conceivable ways I
> prefer to map the hierarchical structure onto persistent data
> structures which are accessed from memory. However, this can only be
> achieved in a reasonable way when an object-oriented language is used.
> Actually, pure procedural languages like C will have a hard time in
> ChallengeOS anyway (see below). Evidently, you could design a plain
> procedural C-compatible interface, but that'll be a lot more
> complicated to use.

Yes.
DOM, and all the necessary functions around it, have to be provided by the system kernel, for every kernel driver, for every programming language, and for every application.

I see two necessary functions based on DOM that should be made available: XPath and XSLT.

For the programming languages: DOM should be a datatype:

DOM mytree;

XPath should be as integrated into the programming languages as regular expressions are integrated in Perl:

foreach (mytree =~ x/DOCUMENT/NODE/SUBNODE[@attribute='value']/Text )
{
    print;
}

And the kernel should make the following possible:

DOM persistentTree("/dev/hdd4");

which makes the DOMfs on the partition /dev/hdd4 available as a DOM tree.
> Note that there must still be a conventional file system available
> because at least the software must be stored on this file system.

Hmmm. Have a look at Entity.
I think Entity shows the way how applications can be developed in the future.

> The requirements of both file system types absolutely exclude one
> another. So there is no way to combine them.

;-)
I am not sure yet. I don't think that it's impossible, it just isn't straightforward.
Did you take a deep enough look at IVI::DB yet?
IVI::DB is my own native XML database, which is available under the GPL from http://www.livingxml.net/ (->Plattform ->Database ->bottom of the page)
With IVI::DB, I somehow succeeded in building an XML database on top of a normal hierarchical filesystem.

> 2. Enhanced execution environment

(The name somehow reminds me of Palladium ... but forget that)

> [Note: This is in my opinion the most important feature. I'll not give
> up this one. No way!]

Did I say anything against it?
Something like that has been stumbling through my mind for some years now, but I am not yet sure how I really want it, and I think I want it a bit differently than you, but I think the overall direction is not that wrong ...

> This is more than a feature. It's a computing concept unmatched by
> anything I've seen so far or that is to come in the near future. In
> other words: it's unique!

Marketing.

> It's hard to describe all of this exactly. Imagine your software
> consisting of a whole wagonload of small, specialized modules (or
> libraries) which are running in the same address space. Every module
> could use functions provided by every other module that is installed.
> Extending such an environment would be easy: just write the missing
> module using features from the modules that are already available.

Sounds like the shared library concept.

At the moment, we have several different execution environments:
* Kernelspace (device drivers, ...)
* Daemonspace (all those servers ...)
* GUIspace (KDE, Gnome, Windows, ... "rich clients")
* Webservicespace (everything running in a browser, "thin client")

All those execution environments have very different needs, and should be thought through on their own, I guess.

> However, this has two rather obvious drawbacks: if only one module has
> a bug, the whole set would be forced down in a big crash. Second, every
> module involved must be loaded into memory at startup. That's a waste
> of memory. But these problems can be overcome.

Have a look at the Perl module concept, and CPAN.
It is nearly so automated that it would automatically fetch and install the needed modules from the Internet as soon as you call the first one.

> The way to achieve this is quite easy - in theory. Enforcing controls
> and checks on the environment will give the modules the ability to
> gracefully handle crashes without pulling down every other module.
> Furthermore, the same mechanism can be used to perform a sort of lazy
> linking: a module will only be loaded when it is referenced (called
> into) for the first time and will be unloaded as soon as the last
> active reference has gone.
>
> This is possible on an 80386 and upwards (and pretty likely on other
> processor architectures as well): this processor has a pretty nifty
> page-based access control. You can only set page table entries to
> readable, writeable and/or non-existent, but that's sufficient, because
> the rest is handled in software.
>
> Each module gets its own private address space which contains "windows"
> into which the referenced modules are mapped. This mapping might not
> even be real page-level mapping because of access limitations which
> need to be enforced.

Again, this sounds like shared libraries.

> Each time a window is accessed, the context of the running process
> changes to the address space of the referenced module, after
> appropriate mapping has been done.
> Accesses through windows are trapped (except when non-pointer data is
> read and reading is allowed, or in other rather trivial cases) and the
> instruction that has invoked the trap is examined and then - if it is
> valid - emulated by the trap handler. In case the instruction is
> invalid, the module having caused this violation will get the chance to
> handle the error and exit gracefully.
>
> Of course, access rights must be defined. This is done on a per-symbol
> basis (symbol in this context means function/procedure entry point,
> object, variable or data structure). Therefore each module must consist
> of a binary and an access definition file. In this file there is an
> entry for each symbol which grants or denies read, write and execute
> rights for the owner of the module, his/her group and others (note that
> making an extra file out of this has two benefits: first, there's no
> new file format needed, and second, this file could possibly be edited
> by a user or admin). In this file the prototypes of each exported
> function and/or variable must reside as well as definitions of exported
> data structures, because during the address space switch pointer
> addresses might have to be tweaked so that they point into the right
> window (imagine that the process is tunneling back and forth between
> two windows which map address spaces that have different real offsets).
> This might not be necessary when the windows are at the same addresses
> as the modules that are referenced within their own address space.
>
> This mechanism can be extended even further: windows can map modules
> running on remote machines. This only needs a small extension in the
> form of a network protocol stack which is able to serialize and
> reassemble such requests automatically. Furthermore, this mechanism can
> be exploited to map contents of the DOMfs into persistent objects,
> thereby providing a decent interface.
>
> All of this sounds like a lot of overhead. But I assume that it isn't.
> I expect that code controlled this way is no more than two to three
> times slower than a usual executable under otherwise identical
> conditions. And that performance hit isn't noticeable for desktop users
> given the performance of current PC hardware.
>
> 3. An interpreted language to automate the enhanced execution
> environment: ObjectBasic
>
> This is an optional thing and can be described as the "shell" of the
> enhanced execution environment I just described. The language should be
> easy to use, yet powerful enough to write small applications and
> automation scripts for everyday use.
> Have you ever thought of remote-controlling your word processor from a
> shell and writing a letter this way?

Yes. I have. But afterwards, I found no answer to the question:
"And why should I?"

> With this interpreter, it'll be possible. I promise. ;)
>
> That's all! This is just a short description of the most important
> points in my design proposal. Of course it is much longer: an
> integrated installer with online software update facility (the
> installer might get things optimized inside the enhanced execution
> environment quite a lot),

That one sounds good. CPAN could be a good example of how the technical side could work. What I am missing at CPAN is quality control.

> a new and hopefully superior graphical user interface, etc.

Have a look at "Berlin", which is somehow connected to GGI, and have a look at Entity.

> What else?
> An office suite? An ERP system? A web server? I just don't know.

;-)
Do you plan to run SAP on it?

Many greetings,
--
~ Philipp Gühring p.g...@fu...
~ http://www.livingxml.net/ ICQ UIN: 6588261
~ <xsl:value-of select="file:/home/philipp/.sig"/>
End of PGP message
|
From: Gregor M. <Gre...@gm...> - 2002-07-26 12:30:16
|
Hi!

The latest idea I've been playing with is to develop a POSIX-compatible base system with the following extensions:

1. A DOM-based file system: DOMfs

[DOM: Document Object Model, a W3C standard for hierarchical document organisation, which is obeyed by XML and other markup languages]

This file system will be solely for data storage. It will serve as some sort of hierarchical database. It'll be designed in such a way that an XML file can go through an import/export cycle on this file system and will come out unaltered (perhaps some whitespace vanished or was added, but that doesn't change the contents of the file).

How is this file system accessed? Among the many conceivable ways I prefer to map the hierarchical structure onto persistent data structures which are accessed from memory. However, this can only be achieved in a reasonable way when an object-oriented language is used. Actually, pure procedural languages like C will have a hard time in ChallengeOS anyway (see below). Evidently, you could design a plain procedural C-compatible interface, but that'll be a lot more complicated to use.

Note that there must still be a conventional file system available because at least the software must be stored on this file system. The requirements of both file system types absolutely exclude one another. So there is no way to combine them.

2. Enhanced execution environment

[Note: This is in my opinion the most important feature. I'll not give up this one. No way!]

This is more than a feature. It's a computing concept unmatched by anything I've seen so far or that is to come in the near future. In other words: it's unique!

It's hard to describe all of this exactly. Imagine your software consisting of a whole wagonload of small, specialized modules (or libraries) which are running in the same address space. Every module could use functions provided by every other module that is installed.
Extending such an environment would be easy: just write the missing module using features from the modules that are already available. However, this has two rather obvious drawbacks: if only one module has a bug, the whole set would be forced down in a big crash. Second, every module involved must be loaded into memory at startup. That's a waste of memory. But these problems can be overcome.

The way to achieve this is quite easy - in theory. Enforcing controls and checks on the environment will give the modules the ability to gracefully handle crashes without pulling down every other module. Furthermore, the same mechanism can be used to perform a sort of lazy linking: a module will only be loaded when it is referenced (called into) for the first time and will be unloaded as soon as the last active reference has gone.

This is possible on an 80386 and upwards (and pretty likely on other processor architectures as well): this processor has a pretty nifty page-based access control. You can only set page table entries to readable, writeable and/or non-existent, but that's sufficient, because the rest is handled in software.

Each module gets its own private address space which contains "windows" into which the referenced modules are mapped. This mapping might not even be real page-level mapping because of access limitations which need to be enforced.

Each time a window is accessed, the context of the running process changes to the address space of the referenced module, after appropriate mapping has been done. Accesses through windows are trapped (except when non-pointer data is read and reading is allowed, or in other rather trivial cases) and the instruction that has invoked the trap is examined and then - if it is valid - emulated by the trap handler. In case the instruction is invalid, the module having caused this violation will get the chance to handle the error and exit gracefully.

Of course, access rights must be defined.
This is done on a per-symbol basis (symbol in this context means function/procedure entry point, object, variable or data structure). Therefore each module must consist of a binary and an access definition file. In this file there is an entry for each symbol which grants or denies read, write and execute rights for the owner of the module, his/her group and others (note that making an extra file out of this has two benefits: first, there's no new file format needed, and second, this file could possibly be edited by a user or admin). In this file the prototypes of each exported function and/or variable must reside as well as definitions of exported data structures, because during the address space switch pointer addresses might have to be tweaked so that they point into the right window (imagine that the process is tunneling back and forth between two windows which map address spaces that have different real offsets). This might not be necessary when the windows are at the same addresses as the modules that are referenced within their own address space.

This mechanism can be extended even further: windows can map modules running on remote machines. This only needs a small extension in the form of a network protocol stack which is able to serialize and reassemble such requests automatically. Furthermore, this mechanism can be exploited to map contents of the DOMfs into persistent objects, thereby providing a decent interface.

All of this sounds like a lot of overhead. But I assume that it isn't. I expect that code controlled this way is no more than two to three times slower than a usual executable under otherwise identical conditions. And that performance hit isn't noticeable for desktop users given the performance of current PC hardware.

3. An interpreted language to automate the enhanced execution environment: ObjectBasic

This is an optional thing and can be described as the "shell" of the enhanced execution environment I just described.
The language should be easy to use, yet powerful enough to write small applications and automation scripts for everyday use. Have you ever thought of remote-controlling your word processor from a shell and writing a letter this way? With this interpreter, it'll be possible. I promise. ;)

--

That's all! This is just a short description of the most important points in my design proposal. Of course it is much longer: an integrated installer with online software update facility (the installer might get things optimized inside the enhanced execution environment quite a lot), a new and hopefully superior graphical user interface, etc.

What else?
An office suite? An ERP system? A web server? I just don't know.

Gregor
|
From: Gregor M. <Gre...@gm...> - 2002-07-18 19:48:18
|
Hello, everybody!

First of all I welcome all of you to this mailing list (you might already have noticed that this is practically a brand new list). If you wonder what this is all about: I'm trying to revive this project the hard way. Consider this imaginary interview (sorry, I didn't have a better idea for presenting this):

What is "this project" all about?

It's about creating an operating system.

Right. But it should not become an ordinary OS, though. It should be different.

Different? In what way?

The goal should be to create an operating system that defines a new way to create applications and to program and use a computer. Application programming and usage should be dramatically simplified by a new and unique environment the OS provides.

How is that possible? Today's computers have nice, user-friendly and intuitive graphical user interfaces.

Are today's GUIs the best in terms of usability? Considering intuitiveness, I just have to think of how to shut down Windows: click on "Start" (!) and then on "Shutdown...". I think there can indeed be improvements, not to details, but to the whole concept.

So it's about a GUI? Nothing else?

No! There is much more to the project: a new kind of file system and support for a totally new way of application programming.

And what could that be?

(only silence follows)

--

Actually, I've gathered some ideas. But please make up your own minds and don't hesitate to post your own thoughts. No ideas could be worse than those in my collection ;). I'll not go into detail on the user interface now because it's a long way until we can even think of that. But the rest is in my opinion still a complex matter. I'll cut it down to just a few points now (the rest is up to your own imagination :).

- No programs in the usual sense. Only components which are free to access each other virtually transparently. Nothing like CallFunction("myfunc") should be in the way. Instead, write myfunc() and you're fine, even across networks.
- Apply access rights control to each individual function/member function/object/class/variable that is referenced from outside the owning component, thus creating a controlled and safe environment for them.

- Try to avoid generating stubs or using traditional IPC mechanisms for the component model. Instead try to create a common controlled memory segment for all components.

- Use those components from within your automation scripts with exactly the same comfort.

- DOM access through persistent objects to a new file system that can represent information exactly like an XML file (but it's a file system). Organizing this as a file system is suitable because the amount of data stored in such a structure can become quite big (just imagine document management). [Thanks to Philipp Gühring for this idea]

- Supply a document-view application framework that allows an unlimited number of users on an unlimited number of computers to edit exactly the same file simultaneously without causing strange conflicts. Ideally, every user would see the changes the others make in real time.

- Make the components behave like a single, big, fully-featured-for-every-purpose application (i.e. display full - and working! - integration to the user and try every possibility to present the same to the programmer).

- Eventually implement a Linux/X-compatible subsystem.

That's it. I'd ask you to comment on it and/or present any alternatives that come to your mind.

Gregor
|
From: Chris W. <ch...@wa...> - 2002-07-17 16:21:37
|
Hello,

I'm a new guy. I know C/C++, and I'm learning a little ASM.

Rk
|
From: Chris W. <chr...@ho...> - 2002-07-16 02:46:02
|
Hello,

I'm just a new guy. I know C and C++, and am currently learning ASM for fun.

Rk
|