From: Claus M. <cla...@ma...> - 2000-02-21 18:00:30
This is a snippet of an earlier post on which I'd very much like to comment:

> * <vage> How will the kernel demultiplex the console and keyboard?
> Currently all the processes outputting can get a little strange. They
> mustn't be allowed to write to the screen unless they have 'focus',
> and maybe a keyboard handling process that handles the keyboard IRQ
> and sends it on (via PCT) to the right process?

First I think I'd better introduce myself: my name is Claus Matthiesen, and I study architecture at the AAA, the Aarhus School of Architecture. The reasons why I am involved in this project are many, but my primary interest is the interface between the computer and the user. The philosophy behind Exo kernels can easily be regarded as more than a technical one, and thus Kasper V. Lund and I have spent some time evolving a general *principle* behind Exo systems. It is my concern that this principle is extended to include at least the graphical system. For a more elaborate description of the general principles, I refer to our new web page, which (hopefully, God willing, etc.) will be up shortly.

Anyway, on to the question at hand. The thought that only one process at any time may be allowed to write to a screen - not "the screen", as we might expect there to be more than one - has also occurred to us. One might make a distinction between text interfaces and graphical interfaces and allow - much like Windows or X does - more than one process to have the screen, or at least a window or view within the screen to write in. This approach is of course rather useful seen from a desktop application's view, and rather less appealing from the viewpoint of a game or other graphically heavy application with an interface significantly different from the common desktop.
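To make the one-writer-at-a-time rule concrete, here is a minimal C sketch of the kernel-side check. All names here are hypothetical - this is just a model of the idea, not a proposed implementation:

```c
#include <assert.h>

/* Hypothetical per-screen record: at most one process may hold
 * the Focus of a given screen at any time. */
struct screen {
    int focus_owner;   /* pid holding the Focus; 0 = nobody */
};

/* The kernel-side check: a write to the graphics hardware is only
 * honoured when the caller currently holds this screen's Focus. */
int may_write(const struct screen *s, int caller)
{
    return s->focus_owner != 0 && s->focus_owner == caller;
}
```

A process without the Focus simply fails this check, which is where the "ignored or killed" policy discussed below would hook in.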
As the Exo principle certainly dictates that: a) no (or at least very few) abstractions must be enforced by the system, though it might offer them as a possibility; and b) any process should have the possibility to access the hardware it uses directly - this approach is entirely unsatisfactory. We must remember, however, that it must be *possible* to write a normal desktop application along the lines of Microsoft Word (to take a much-hated example).

The idea we have now looks something like this: there is a process owning what we might call the screen Focus. Ideally one process might own more than one Focus, allowing the two screens on a Matrox G400 dualhead or Max, for instance, to be controlled by a single process. It should also be possible to have several Focus-owning processes running at once, but of course any one screen always has one and only one Focus. Now such a Focus-owning process might decide to lend its Focus out to another process upon request - Quake II, for instance. The Focus-owner does not have any primitives for writing to the screen, as this would be enforcing an unnecessary abstraction, so these must be supplied by the borrower itself - in most cases probably via an independent graphics driver library. The process that owns the Focus might later decide to recall it and lend it to someone else, effectively making another process take over the screen. The processes that are shifted in and out should of course be notified via RPCs or something along those lines. Processes trying to access the ports on the graphics card without having the Focus should be ignored or killed.

Now what about desktop applications? Desktop applications will run under some desktop manager or window manager. The window manager will hold the Focus (or Focuses) to allow it to write to the screen(s) it wants to (or is allowed to) write to. The window manager can then deliver the primitives needed by the normal desktop applications in views within the window manager.
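The lend/recall protocol described above could look roughly like this in C. Again, everything here is a hypothetical sketch: the notification callback stands in for whatever RPC mechanism would really be used, and the names are made up:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical Focus record: the owner may lend the Focus to a
 * borrower and recall it later; processes shifted in and out are
 * notified (modelled as a plain callback instead of an RPC). */
struct focus {
    int owner;    /* process that ultimately owns the Focus */
    int holder;   /* process currently allowed to use it    */
    void (*notify)(int pid, int gained);   /* gained=1: shifted in */
};

/* Owner lends the Focus to `borrower`; only the owner may lend,
 * and only while it still holds the Focus itself. */
int focus_lend(struct focus *f, int caller, int borrower)
{
    if (caller != f->owner || f->holder != f->owner)
        return -1;                  /* not yours to lend, or already lent */
    f->holder = borrower;
    if (f->notify) {
        f->notify(f->owner, 0);     /* owner shifted out   */
        f->notify(borrower, 1);     /* borrower shifted in */
    }
    return 0;
}

/* Owner recalls the Focus, effectively taking the screen back. */
int focus_recall(struct focus *f, int caller)
{
    if (caller != f->owner)
        return -1;
    if (f->holder != f->owner) {
        if (f->notify)
            f->notify(f->holder, 0);
        f->holder = f->owner;
        if (f->notify)
            f->notify(f->owner, 1);
    }
    return 0;
}
```

A window manager would be just another Focus-owner in this model: it holds the Focus itself and hands out views rather than lending the whole screen away.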
Keyboard handling has not been discussed that elaborately, but it seems we can safely distribute keyboard commands to all processes that request them. They know if they're focused, so if they have no need for keyboard input when they are not displayed, they can safely ignore it.

A variation on this theme could be to simply not schedule processes that are registered as graphical processes while they are unfocused. This means, however, that we would need to distinguish between graphical and non-graphical processes, though we could simply say that processes that requested the screen would not be scheduled unless they got it. It would also require that the kernel be told by the graphics system which processes not to schedule - a dangerous proposition, because potentially destructive processes, such as viruses, might try to do the same thing. This method would, however, remove the administrative costs of giving and taking the Focus. Overall the disadvantages seem to outweigh the advantages of this solution right now, but comments are very welcome indeed.

In text-only modes the ideas discussed so far are just as applicable, and would allow for a rather nifty implementation of Borland's old TurboVision text-driven window library, for instance. It is clear that as time progresses, some sort of window manager is required, since none of us are interested in having only full-screen graphics-mode programs. I am currently sketching the first outlines of what such an interface would have to offer, and many new approaches - marking menus, toolglasses, (semi-)3D interfaces and document-oriented interfaces - are being considered. This will all be available for comment, cooperative development and flaming once our new site is up.

- xmentor (Claus)

Lemon curry?
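PS: the broadcast idea for keyboard input could be sketched like this - every registered process gets every key event and filters for itself. Purely a model with made-up names, not a proposal for the actual interface:

```c
#include <assert.h>

/* Hypothetical broadcast model: every process that registered for
 * keyboard input receives every key event; each process knows
 * whether it is focused and decides for itself whether to act. */
struct listener {
    int pid;
    int focused;        /* the process's own view of its focus state */
    int keys_handled;   /* how many events it chose to act on        */
};

void broadcast_key(struct listener *ls, int n, int scancode)
{
    (void)scancode;     /* the event payload is irrelevant to the model */
    for (int i = 0; i < n; i++)
        if (ls[i].focused)          /* unfocused processes ignore it */
            ls[i].keys_handled++;
}
```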