From: Gregor M. <Gre...@gm...> - 2002-08-01 22:53:51
sou...@fu... wrote:
> Hi,
>
>> I'd only do this for interpreted code like Perl or Python fragments
>> (pretty much the same thing Entity is doing). The reason is that I
>> can't think of a way to make this behave correctly for binary code
>> fragments. When should which piece of code be executed?
>
>
> Do we need binary code that much?
>
You wouldn't want an OS that only lets you run interpreted bytecode,
would you? That way you are much slower than with binary code.
Besides, don't forget that GCC already has backends for a lot of
processors out there and can output a lot of binary formats. If you
wanted a bytecode interpreter, you'd have to write a compiler first,
which I believe to be *additional* and actually unnecessary programming
effort.
Source code for compiled programming languages like C and C++ can be
extremely portable, too, so this argument doesn't count. Actually, I
don't believe that it's that easy to write a Perl script that runs out
of the box on both Windows and UNIX.
And last, if you write an interpreter you can simulate any environment
you want for the interpreted code, without needing a special operating
system to do it.
>> I can think of optional features in DOMfs that can't be mapped directly
>> into XML files that way like revision management or access rights. The
>> former is a pure convenience function, which might turn out quite
>> useful, but the latter is a must-have in the multi-user environment
>> ChallengeOS will have to provide.
>
>
> Hmmm ... revision control ... I don't think that it should be only a
> part of DOMfs.
> And on the other hand, I am not sure, whether it shouldn't be (mustn't
> be) application specific anyway ...
> I haven't had enough revision-control-needing applications yet to
> have a clear view of what I want in that domain.
Well, if you sit next to one of the managers of a small company (150
employees) you'll soon get a feeling for what is really useful for
business applications. And if you're programming company-specific
solutions (literally) at the same time, it becomes quite obvious that
revision control should be a major feature of a document management
scheme, even if it's just an instruction telling the employees where to
put their files. And DOMfs is a good candidate for implementing
document management, in my opinion.
About the implementation of revision control: I suggest that it should
be off by default and only be activated explicitly when needed. The
different revisions of a DOM tree put under revision control will then
get state and date tags as well as an application-defined free-form tag
string, which the application can use to store additional information
about that specific revision.
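To make that concrete, here is a rough sketch of such per-revision
metadata in C++ - all names are made up for illustration, nothing here
is a fixed interface:

#include <ctime>
#include <string>

// Hypothetical per-revision metadata: a state tag, a date tag and the
// application-defined free-form string mentioned above.
struct RevisionTag {
    enum State { Draft, Released, Obsolete };

    State state;        // state tag
    std::time_t date;   // date tag
    std::string appTag; // free-form, application-defined information
};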
> Access control ... yes, that sounds interesting.
> While we come to access control ...
> Shouldn't we try to invent a better user-role-authorisation system?
> On the other hand, too much systematisation might get too complex
> for users, too ...
>
I'm currently used to two different authorisation systems: the Windows
NT user/group ACL system (quite complex, but flexible) and the UNIX
user/group system (easy to grasp, but not too flexible). On the
performance side, the UNIX authorisation system can be expected to be a
lot faster than the Windows NT one, though.
I expect an authorisation system that we design on our own to drift off
in the direction of one of those two systems - presumably the Windows
one. These two systems are the extreme poles which attract every other
system you can think of.
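To illustrate the two poles (the file and the entries below are
invented):

-rw-r----- 1 gregor staff 4711 Aug  1 2002 report.txt
    (UNIX: one owner, one group, a catch-all "others")

report.txt (NT-style: an arbitrary list of ACL entries)
    Allow gregor            Full Control
    Allow group "staff"     Read
    Deny  group "interns"   Read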
>> Currently I'm also pondering whether there should be the possibility to
>> define a format for XML subtrees (like it is done with DTDs or similar
>> files) and have DOMfs block any future changes that violate these
>> definitions. This is strictly optional, even if it should become
>> implemented. But is it worth implementing?
>
>
> Interesting idea. If we should do that, it should be based upon XML
> Schemas instead of DTDs. I think it would make sense to do it.
> Although I am not sure what performance hit that would give ...
> I will have to look into XML Schemas ...
>
Please do that. I was only referring to DTDs because I don't know of
any other formats out there. I am aware, though, that DTDs are
effectively obsolete by now. And if you find some information on XML
Schemas, could you please give us a (short) summary of what can be
described with them?
> Another thing that comes to my mind is the binding of DOM against
> classes from object-oriented languages.
> It started when Java learned to serialize and deserialize its objects
> into XML instead of Java's native formats.
> So the serialize and deserialize functions simply output all the
> member variables, and recursively all included classes, into XML.
> But what if the classes are bound against persistent DOM?
> (Think of blessing hashes in Perl)
>
>> mount /usr/bin/adress.class /dom/adress
>
>
> That generates a new instance of adress.class, which gets mounted as
> persistent DOM on /dom/adress
>
>> ls -la /dom/adress
>
> -rw-r--r-- 1 philipp users 1168 Feb 3 1998 name
> -rw-r--r-- 1 philipp users 1168 Feb 3 1998 street
> -rwxr-xr-x 1 philipp users 16 Sep 18 1999 validate
>
>> echo "Philipp" >/dom/adress/name
>
>
>> /dom/adress/validate
>
> Validating Adress ...
> Street is missing!
>
>>
> ...
>
> Just an idea at the moment ...
> By the way, there is already an XML shell available
>
Hmm... I don't think that DOMfs could be accessed easily via shell.
You'll probably have to use helper tools to query the DOM tree to get
information out of that fs.
>> I agree except for the kernel and device drivers. It's senseless to do
>> some XML tricks in this area of an OS.
>
>
> I don't think so.
>
Why?
>> Remember that device drivers are
>> about providing unified interfaces to hardware.
>
>
> Correct.
>
>> And for the sake of
>> speed they must be as simple, flexible and straight-forward as possible.
>> Device drivers will most likely be interfaced via device files, i.e. the
>> good old /dev/ dir. This method is fast, clean and can be implemented by
>> providing a small number of syscalls (open, read, write, close, ioctl).
>
>
> Ah. ;-) What about ioctl?
>
KISS - keep it simple, stupid
>> You surely wouldn't want to "cat randomcode.xml > /dev/xxx" and expect
>> it to do something useful with your input, would you?
>
>
> No. Binary is binary. I would not change anything on
> open,read,write,close for storage devices.
> But everything else ...
> Just think about more complex hardware devices like TV Tuner cards.
> Being able to communicate with them via XML would make sense, I think.
>
Why have a device driver parse bloated XML data? That unnecessarily
kills processor power, bloats the kernel because it has to keep its own
XML parser in memory and - worst of all - forces programs to generate
XML segments through processor-intensive string concatenation when they
could output binary data with just a few instructions.
If you are using device files directly, chances are that you want or
need to go for speed. So there is no need to make the job more
difficult for these people.
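To make the cost argument concrete, compare the two forms of a
hypothetical tuner command (the struct and the tags are invented for
illustration):

#include <cstdio>
#include <string>

// Binary form: two integer stores, ready to be written to the device.
struct TunerCmd {
    int channel;
    int volume;
};

int main() {
    TunerCmd cmd = { 5, 70 };

    // XML form: string building in the program - and parsing in the
    // driver afterwards.
    std::string xml = "<tuner><channel>" + std::to_string(cmd.channel) +
                      "</channel><volume>" + std::to_string(cmd.volume) +
                      "</volume></tuner>";

    std::printf("%zu bytes of XML vs %zu bytes binary\n",
                xml.size(), sizeof cmd);
    return 0;
}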
>> > I see two necessary functions based on DOM, that should be made
>> > available: XPath and XSLT.
>> A question about XPath: Isn't there a supposedly more sophisticated
>> successor in preparation?
>
>
> XPath? I haven't heard about a successor yet. Perhaps you mean XPath 2.0
> ...
> But the base functionality of XPath is so good, that I don't think that
> there can be something far better.
>
If it's done right it can be implemented in user space anyway. The
interface to DOMfs only has to be powerful enough. It just makes no
sense if a simple query needs dozens of syscalls to complete. I think
that it should be done that way. This also means that you are not bound
to XPath as a query language if you don't like it.
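A toy example of what I mean: the whole query is handed to one entry
point in user space, so a lookup is one call and the tree walk happens
inside it. The Node type and the match rule are invented, not the real
DOMfs interface:

#include <iostream>
#include <string>
#include <vector>

struct Node {
    std::string name;
    std::vector<Node> children;
};

// Collect all descendants with a matching name - roughly XPath "//name".
void findAll(const Node &n, const std::string &name,
             std::vector<const Node *> &out) {
    if (n.name == name)
        out.push_back(&n);
    for (const Node &c : n.children)
        findAll(c, name, out);
}

int main() {
    Node root = { "adressbook",
                  { { "adress", { { "name", {} } } },
                    { "adress", { { "street", {} } } } } };
    std::vector<const Node *> hits;
    findAll(root, "adress", hits); // one call for the whole query
    std::cout << hits.size() << " match(es)\n";
    return 0;
}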
>> > For the programming languages: DOM should be a datatype:
>> > DOM mytree;
>> Yes. Something like that. Anything else wouldn't make sense in a C++
>> program. Maybe there should be a parameter passed to the constructor
>> telling it what node to take as a root node.
>
>
> But that would be only an optional parameter, yes.
>
Not providing that parameter doesn't make much sense, either. Remember
that different applications may share the same root node because the
data is stored on an extra partition and partitions normally are very rare.
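A minimal sketch of the two constructor shapes under discussion - the
class body and the path are placeholders, not a proposal for the real
interface:

#include <string>

class DOM {
public:
    DOM() : root_("/") {}                     // the shared root node
    explicit DOM(const std::string &rootNode) // optional: own subtree
        : root_(rootNode) {}

private:
    std::string root_;
};

int main() {
    DOM wholeTree;                   // what most applications would use
    DOM letters("/letters/private"); // hypothetical subtree as root
    return 0;
}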
>> Although that syntax is fine with Perl, it should be covered by
>> functions and classes in C++.
>
>
> Hmmm.
>
Does that mean that you agree? :)
>> Reason is that it would otherwise heavily
>> break the already surprisingly complex C++ syntax (C++ syntax looks
>> simple, but it certainly isn't).
>
>
> In my opinion, C++ has so many design flaws, that it should not be used
> anymore.
C++ is a good language that has saved me countless hours of silly typing
exercises. I don't see any flaws in it (yes, it's my favorite language :).
> But there arises a much deeper question:
> Do we want to develop a compatible operating system, or a challenging
> operating system?
> I followed the starting of several new operating systems in the last
> few years, and I saw that they all struggle with the question of
> compatibility.
> POSIX, C/C++, Win32, Linux, Device-Drivers, ...
> If you try to be compatible, you will gain a lot more applications for
> the platform.
> But as soon as you even commit yourself to one of those standards, you
> are bound to it, and cannot easily innovate beyond it.
> And I think that is the real challenge of developing a new operating
> system.
>
Isn't it possible to provide a POSIX-compatible system with our own
extensions placed just so that they don't interfere with the POSIX
standard, but are there and usable at the same time?
It is possible. There are many extensions to POSIX and the original
UNIX on many UNIX-compatible systems, including Linux. System-V-style
IPC, BSD-style process accounting, etc. started out as extensions to
the original UNIX sources. But now they are widely available.
Another example: BeOS has a POSIX-compatible base system and a GUI
system built around it. Even MacOS X is built that way. So was NeXTStep.
Any questions? ;)
>> > And the kernel should make the following possible:
>> > DOM persistentTree("/dev/hdd4");
>> > Which makes the DOMfs from the partition /dev/hdd4 available as DOM
>> tree.
>> Taking the device file name makes the node definition dependent on the
>> physical disk layout in the machine, which is a bad thing to do.
>
>
> Agreed. Lets do it differently:
> DOM persistentTree("/dom/existing/tree");
> /etc/fstab:
> /dev/hdd4 /dom/existing/tree domfs
>
I still don't see a reason why DOMfs partitions should be mapped into
the normal UNIX fs tree. This only limits the possibilities of DOMfs.
An ordinary file system requires that every file name in a directory is
unique. Many file systems can't assign user-defined attributes to
directory entries. XML, and therefore DOMfs, doesn't have these
restrictions. Remember that you must be able to import any well-formed
XML file into DOMfs without data loss, according to the DOMfs
requirements we agreed on earlier. So something doesn't work out here.
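For instance, a well-formed fragment like this one (invented for
illustration) already breaks the mapping, because no ordinary directory
could hold the two equally named siblings as separate entries:

<adressbook>
    <adress>Philipp</adress>
    <adress>Gregor</adress>
</adressbook>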
>> Instead
>> I think there should be a way to mount existing DOMfs partitions into a
>> virtual root node. This way data can be spread across many disks and
>> partitions without requiring the application to care about this. So it
>> would be similar to the directory tree on UNIX.
>
>
> Yes, ok.
>
See above, though.
>> As I already pointed out, I doubt that this could work for compiled
>> code. I agree that it's a neat concept they have, though. However, I
>> need some hints about how binary code could be included in this concept.
>
>
>
>
>
>> >> The
>> >> requirements of both file system types absolutely exclude one another.
>> >> So there is no way to combine them.
>
>
>> > With IVI::DB, I somehow succeeded in building up an XML database on
>> > top of a normal hierarchical filesystem.
>> And what is so special about this? I don't see the point here.
>
>
> It shows the integration of XML and normal Filesystems, and somehow the
> needs of big XML databases.
>
But then it shouldn't find its way into an OS kernel, should it?
>> Please suggest a better name. I find it awful too, because it's so long.
>
>
> Component Execution
> Component Environment
> Enhanced Components
>
Hmm... my favorite is "Component Environment". Maybe the official name
should be "Challenge Component Environment" and "ChallengeCE" for short
(although that sounds much like the name of another OS). Any other
ideas, anyone?
>> And I'm definitely against the development of Palladium.
>
>
> Then you do not know enough about it yet. ;-)
> But we will be able to add Palladium support to ChallengeOS afterwards
> too, it isn't a priority at the moment.
>
Why add Palladium support? In my opinion it's a major hassle for the
user and a danger for data security. Remember that Palladium requires
"secure" programs to be signed by the OS vendor - Microsoft in this
case. This severely limits the openness of an OS platform.
>> > Have a look at "Berlin", which is somehow connected to GGI, and have a
>> > look at Entity.
>> Berlin's design is arguably one of the best, I agree. However I'd like
>> to direct the focus more on new ways of user interaction and usability
>> in general than on implementational details (the latter will possibly be
>> determined by the former).
>
>
> Ok! (You do not know, what you said there ;-)))))
>
Oh, I think I do. :)
>> Task 1 running module A accesses module B, which is not yet loaded.
>
>
> Ok. If we really want to talk about usability, then we have to dump a
> big pile of systematics, which we are used to.
(Almost) done.
> Do not see the operating system as a system, that has to drive the
> hardware, has to run applications, and has to abstract the hardware for
> the applications.
No. You are wrong: the OS has to abstract the hardware. You don't want
to recompile/redownload/reinstall all your software because you've just
upgraded your PC's hard drive from IDE to SCSI. Two months later you
discover that you should have chosen another host adapter and have to
go through the same procedure again. Do you really want that?
And you certainly wouldn't want to miss preemptive multitasking, would
you? So you *have* to see this aspect of a desktop OS. It would be
different for an embedded OS, but ChallengeOS will probably never find
its way onto a single chip.
> Do not see applications as processes, which are instances of programs.
> Completely forget all the crap about the current systems we have.
> Have a look at what the user really wants, what the user has to get,
> and what the user will see.
> The user wants several different things:
> * Applications: Email, Web, Amazon.com, Heise.de, Würstelstand, Scanning
> images, watching TV, ...
Are websites applications? I know that we could discuss this until
eternity. But let's assume for the scope of this discussion that they
are not and therefore shouldn't be considered here. We are discussing
binary executables running on the host the OS is running on.
> * Easy interface
> * Fast responding interface
> * Control over the computer
> * Stable applications
> * ...
These things have been on developers' TODO lists for about 10 years
now. And sadly, they haven't changed (much).
> Lets begin with the applications:
> The user sees websites like Amazon (this is where I can buy my books),
> Heise (this is where I get my information), ... as applications.
> The concept of a browser as an application does not make sense.
It does. For a very basic reason, indeed: how else would you blend the
HTML pages they deliver into the desktop? What component should
interpret JavaScript if not the browser?
> By the way, the concept of applications as programs you have to start,
> does not make sense too.
Right. It does not make sense to start the word-processing program and
fiddle through hordes of dialogs until you can write a letter. Can't
you just tell the computer that you want to write a letter and be
presented with your own, individual letter template, ready and waiting
to be filled in?
On the other hand, as a hobby web designer pointed out to me,
commercial software manufacturers want to produce user interfaces that
are recognisable and unique. Adobe is such an example. Of course this
is done to keep users from migrating to competing products. But it also
results in a steeper learning curve for new users. I hope that
ChallengeOS will not suffer from this as much as Windows software often
does.
> The desktop should not be just some icons to start programs (from system
> design view).
> The desktop is where you work.
Correct. What is a desktop in the real world? Right, a desktop - with
paper, pens and a computer on top of it :).
> The X server concept is an ancient relic from the times when a
> computer had no graphical monitor directly attached.
> The concept that a process manages several windows is a nice internal
> concept, but you can see the flaw as soon as several graphical
> windows vanish at the same time because the process behind them died
> unexpectedly.
>
Well, in a way you are right. But windows have to be owned by
something. I've been thinking about a model in which a window's
contents are created only through the interaction of several individual
modules. None of those components could do the job without the others
(i.e. no unnecessary component is involved). And yet those components
are very specific to the tasks they do. This makes them reusable and
flexible. Rearrange them and glue them together the way you need them
and you'll get a custom application much faster than you'd probably
expect.
If one of those components dies, the others must be able to handle this
gracefully. But you will never be able to totally avoid errors. If they
are not in the code, they are in the design. That's a fact. So there
must always be a way to handle an error, even if it means removing all
related windows from the desktop.
> Now lets take a look at what happens in the workflow of a window based
> user interface, to optimize the user experience:
> 1. A window pops up on the screen.
> 2. The window gets filled with widgets.
> 3. The user reads the contents of the widgets, and decides to click on a
> button.
> 4. The user moves the mouse to a location over the button.
> 5. Time passes.
> 6. The user clicks on the mouse button.
> 7. The click goes through the wires, to the operating system, to the
> windowing system, to the application, and starts to initiate a procedure.
> 8. The procedure fetches data from disk, does calculations, ...
> 9. The procedure changes data, writes data to disk, ...
> 10. The procedure is done
> 11. A new window pops up on the screen, asking the user for the next
> interaction
> In my opinion, we can guess what the user will do at point 4.
> As soon as the mouse pointer gets over a button, (and has stopped on
> it), it is possible, that the user might want to click on it.
> As soon as we know that the user might click on the button, we could
> fire off a thread, that does all the stuff, that does not change
> anything, and that does not hurt the rest of the system.
> So we can start a thread, that only does the actions of point 8 at point 4.
> Instead of point 8, we will have to wait for the finishing of the
> already started thread.
> In the best case, we can do all the necessary calculations even before
> the user clicks on a button.
>
Well... this actually sounds pretty bad to me - for a reason. On the
one hand you suggest wasting performance by letting the kernel parse
XML data and by using interpreted languages instead of compiled ones.
On the other hand you suggest hacks like this one to compensate for the
processor power previously wasted. Additionally, it's pretty hard to
kill a thread in mid-execution and let another one continue without
corrupting internal data structures or causing memory leaks. So this is
a hack forcing programmers into hacks inside their applications, which
is even worse (that reminds me of Windows in a way).
> But to achieve that, we will have to redesign nearly everything.
> (Isn't that a challenge?)
> Of course, those threads will have to run with nice level 10 until the
> user really clicks on the button, ...
>
One last word: either you can cut down the response time by
conventional means or it's unavoidable to let the user wait anyway.
This is only a solution if the user is slow in handling the UI, which
an experienced user surely is not. An experienced user is blindingly
fast, so the time you win by that kind of guessing will essentially be
worthless, or even be eaten up by the time needed to roll back the
actions performed in the other threads while the main thread could have
been doing full-power calculations.
>> The
>> OS traps this and fires up module B. Module A then forces module B to
>> load some data - say a data file like "/tmp/somedata.tmp" from disk.
>> Then another task - task 2 - which just got started up also references
>> module B. This access is of course trapped. But it does not result in
>> loading another instance of module B. Instead task 2 shares module B
>> with task 1 from this moment on. This also means that it sees the
>> current state of module B, in this case that it has loaded a data file.
>
>
> Why do you force singleton-objects?
> I don't think that it would make much sense, if there could be only one
> window object on the graphical interface ...
>
Ahem... there's certainly a misunderstanding here. Consider the
following code in a module:

namespace B {
    int senseless; // one shared instance per module

    class B1 {
    public:
        void foo() { senseless++; }
    };

    B1 *bar()
    {
        B1 *tmp = new B1; // any number of B1 objects can exist
        tmp->foo();
        return tmp;
    }
}
Of B::senseless and B::bar there would only be one single, shared
instance. That is correct. However, there can be an unlimited number of
objects of class B::B1. So there are effectively no singletons.
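A short usage example, assuming the little namespace B module above is
in scope: every call to B::bar() creates a new B1 object, while both
increments land in the one shared B::senseless:

#include <iostream>

int main() {
    B::B1 *a = B::bar(); // senseless == 1 afterwards
    B::B1 *b = B::bar(); // senseless == 2 afterwards

    std::cout << B::senseless << "\n"; // prints 2

    delete a;
    delete b;
    return 0;
}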
>> In a nutshell, this is what the enhanced execution environment is about.
>> Of course it wouldn't be suitable for a multiuser environment if it can't
>> perform access checks to ensure data and system security.
>
>
> (Perhaps you should read the biography about Richard Stallman ;-)
>
I think that I once read it, but can't remember it now.
>> With a few tweaks to this model it would even be possible that module B
>> runs on a remote machine but neither task would ever have to care about
>> that.
>
>
> Network-wide singletons? Cool ...
> Just one window for the whole network ...
>
See above.
>> > * Daemonspace (all those servers ...)
>> > * GUIspace (KDE, Gnome, Windows, ... "rich clients")
>> These two can be unified into a "classic" POSIX-compatible environment.
>
>
> No. I would only do the Daemonspace as a POSIX-compatible environment.
> For the GUIspace, I want something better.
>
These can be combined. And it can be a good thing to have components
available for daemons/servers. This is especially true since I think
that it is a good idea to implement the DOMfs' persistence feature
using the same mechanisms that components use. So you won't get access
to DOMfs partitions if you don't use the component environment. A
possible scenario is an Apache instance running with a
ChallengeOS-specific module that allows the daemon to access and
deliver documents stored in the DOMfs. With a little bit of hands-on
work you could attach other components to any daemon in the system and
make custom services available to "normal" network clients this way.
You wouldn't want to lose those possibilities, would you?
>> The question I'm facing is whether this should be isolated from the
>> "enhanced environment" I've been writing about here or whether the
>> latter can be implemented purely as an extension to the former one,
>> which would be a great thing.
>
>
> I don't think that we need the Component Environment that much for the
> Daemonspace. I think Components are much more needed in the GUIspace.
>
See above. In my opinion they are equally suited for both purposes.
>> > * Webservicespace (everything running in a browser; "thin client")
>> I'd call that "interpreter space" because every interpreted
>> programming language can have a set of abstractions to the OS it is
>> running on. The interpreter that is necessary in this case typically
>> runs in another environment - normally POSIX-compatible user space.
>> So this does not have to be considered as special.
>
>
> What if not?
>
I don't even see what this possibility could look like.
>> > All those execution environments have very different needs, and
>> should be
>> > thought trough on their own, I guess.
>> Right, though I don't agree with all the border lines you've been
>> drawing above.
>
>
> Why not?
>
Any process on any system can communicate with other processes on the
same system, with processes on other systems connected via the network,
or with the user. Still it's a process. Even more, these methods can be
combined in any way you like.
So the point is that the GUI is nothing more than a part of the
standard environment, just like a POSIX-compatible core API is. You can
make all of this available at the same time and without performance
loss. You don't have to use it, but it's nice to have it available when
it is needed. Essentially all this becomes a single environment with
many different facets.
>> > Have a look at the Perl module concept, and CPAN.
>> > It is nearly so automated that it would automatically fetch and
>> > install the needed modules from the Internet, as soon as you call
>> > the first.
>> This proves that it is possible, especially that it can all be totally
>> anonymous (it's something people really want to have).
>
>
> Yes.
> What I think is important is that the system demands open source by
> default, like Perl does. COM really suffers from closed-source
> components. So everyone has to reinvent the wheel, because everyone
> just needs some enhancements to the standard wheels.
> It is important that we have a unified and administrated namespace
> for all modules.
Yes. I definitely agree.
> I don't like the Java approach of having domain names in every variable
> and class name.
> The other big problem with Java is that there is no central repository
> of all the java classes/modules that are available.
>
I can see the repository problem. And Java also suffers from poor
manageability of installed class files. This will only become apparent
in large and complex Java-based software environments.
> I think your component environment is the concept to build up all the
> widgets for the user interface.
>
And hopefully not only that.
>> >> The way to achieve this is quite easy - in theory. Enforcing controls
>> >> and checks on the environment will give the modules the ability to
>> >> gracefully handle crashes without pulling down every other module.
>
>
> Just-in-Time-Compiling or Just-in-Time-Error-locating
> Regression tests like Perl modules do them
>
I don't think that this is the way to go. C++-style exceptions are more
like it, I think (and yes, I think I know the answer :).
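A minimal sketch of that style, with invented names: the failing
component throws, the caller unwinds, cleans up the related windows and
the rest of the environment survives:

#include <iostream>
#include <stdexcept>

struct ComponentError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// A component that fails without taking the whole environment down.
void renderWidget() {
    throw ComponentError("widget component died");
}

int main() {
    try {
        renderWidget();
    } catch (const ComponentError &e) {
        // remove the related windows, keep everything else running
        std::cerr << "component failed: " << e.what() << "\n";
    }
    return 0;
}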
>> >> object, variable or data structure). Therefore each module must
>> >> consist of a binary and an access definition file.
>
>
> Why don't you do the access definition in XML, and embed the binary
> in the XML?
>
No. Leave the binary format unchanged, please. And is XML really suited
to large binary chunks (it's safe to assume chunk sizes of 10 MB and
more)?
>> >> In this file there is an
>> >> entry for each symbol which grants or denies read, write and
>> >> execute rights for the owner of the module, his/her group and
>> >> others (note that making an extra file out of this has two
>> >> benefits: first, there's no new file format needed, and second
>> >> this file could possibly be edited by a user or admin).
>
>
> Do you expect them to do it?
> Do you expect an admin to audit/define all the file permissions of the
> whole filesystem?
>
Only if he/she is paranoid. But there's a chance that he or she will
look over certain definitions if they don't fit properly. You are
adjusting file access rights from time to time, too. But you don't
check the permissions of every file regularly, unless you are a
masochist.
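For illustration, one possible layout for such an access definition
file - the syntax is invented, only the contents (per-symbol rwx rights
for owner, group and others, plus the prototypes that come up just
below) follow what we discussed:

# access definitions for module B (hypothetical syntax)
int senseless;                  rw- r-- ---
B1 *bar();                      r-x r-x ---
class B1 { void foo(); };       r-x r-x r--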
>> >> In this file the prototypes of each exported function and/or
>> >> variable must reside, as well as definitions of exported data
>> >> structures,
>
>
> Reminds me of IDL files ...
>
Sort of, but there is no code generated from them. They are needed to
understand what is actually going on and to make the Component
Environment work. The data is therefore needed at runtime and not at
compile time like in CORBA or COM.
>> >> because during the address space switch pointer addresses
>> >> might have to be tweaked so that they point into the right window
>> >> (imagine that the process is tunneling back and forth between two
>> >> windows which map address spaces that have different real
>> >> offsets). This might not be necessary when the windows are at the
>> >> same addresses as the modules that are referenced within their own
>> >> address space.
>> >> This mechanism can be extended even further: windows can map
>> >> modules running on remote machines. This only needs a small
>> >> extension in the form of a network protocol stack which is able to
>> >> serialize and reassemble such requests automatically. Furthermore
>> >> this mechanism can be exploited to map contents of the DOMfs into
>> >> persistent objects and thereby provide a decent interface.
>
>
> "Furthermore this mechanism can be exploited ..." ;-)
>
How? :)
>> >> 3. An interpreted language to automate the enhanced execution
>> >> environment: ObjectBasic
>
>
> Why not ObjectPerl?
> Basic does not have a powerful syntax ...
>
>> >> Have you ever thought of remote-controlling your word processor
>> >> from a shell and writing a letter this way?
>> >
>> > Yes. I have. But afterwards, I found no answer to the question:
>> > "And why should I?"
>> This example should point out the ease of component reuse and
>> automation that is possible.
>
>
> I heard that story too often. They told me that story at CORBA, CORSO,
> COM, OLE, DCOP, ...
>
Well, all those systems have a lot of internal overhead. That is
hopefully decreased to a *convenient* minimum on ChallengeOS. It must
be easy to use for the programmer. Otherwise no one will adopt it.
>> So both software integration and software
>> development will hopefully become easier.
>
>
> Sounds too good to be true ...
>
Emphasis was on "hopefully" :)
>> No. Having spent a whole week trying to install CA's Manufacturing
>> Knowledge (MK for short, which is a direct competitor to SAP R/3) at
>> work and having spent three months trying to get the faintest idea of
>> what it can do I think we could easily build our own ERP system on top
>> of ChallengeOS :).
>
>
> I told you not to try that stuff. Büroknecht/LivingXML is much better ...
> ;-)
>
Marketing. :)
Sincerely,
Gregor
--
*****************************************************
* Gregor Mueckl Gre...@gm... *
* *
* The ChallengeOS project: *
* http://challengeos.sourceforge.net *
*****************************************************
* Math problems? *
* Call 1-800-[(10x)(13i)^2]-[sin(xy)/2.362x]. *
*****************************************************