From: Peter C. <pet...@ne...> - 2002-01-14 16:56:59
> From: J.P. King [mailto:jp...@he...]
>
> _10000_ ? I find that hard to swallow

By the time you've done all the data structure marshalling and
demarshalling, it's about right --- especially in a client-server
environment where you have thread locks and the like. You can get
c. 100M memory accesses per second, but 10k IPC calls per second is
pretty good. I can point you at some interesting Web sites if you
want, notably the TAO site (which is the ORB I'm using for work
right now).

> No we aren't up to 10,000 times, but we are well into the hundreds.
> Once you get up to a multiple GHz CPU we should more or less
> be there. :-)

And? This leaves us where we were ten years ago: we can't have a
large system, because it's too slow.

> The main delay with multiple processes is context switching, but if
> you only have two active processes this shouldn't be much of an issue.
> If the delay were _quite_ as bad as you are suggesting then why don't
> all this big database people build webservers, and the like, into their
> SQL servers - I am sure that people would pay for a factor of
> 10 increase, let alone a factor of 10000.

Increasingly, they do. In-process data servers have a long and
distinguished history.

> The object store will be able to bounce back faster than the mud with
> the object store built in by virtue of the fact that there will be
> less code.

Er... what? Half a second, maybe, for loading the extra code into RAM.

- Peter
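[Editor's note: the in-process-vs-IPC gap Peter describes is easy to sanity-check. The sketch below is not from the original thread; it is a rough, hypothetical benchmark that times plain in-process function calls against single-byte request/reply round-trips to a forked child over a pair of pipes (POSIX only). Real ORB calls like TAO's add marshalling on top of this, so the pipe numbers are an optimistic lower bound on IPC cost.]

```python
# Rough sketch: in-process calls vs. pipe-based IPC round-trips.
# Hypothetical example; assumes a POSIX system (uses os.fork).
import os
import time

N = 20000  # round trips / calls to time

def noop(x):
    return x

# --- In-process: N plain function calls ---
t0 = time.perf_counter()
for i in range(N):
    noop(i)
inproc_rate = N / (time.perf_counter() - t0)

# --- IPC: fork a child that echoes one byte back per request ---
p2c_r, p2c_w = os.pipe()  # parent -> child
c2p_r, c2p_w = os.pipe()  # child -> parent
pid = os.fork()
if pid == 0:
    # Child: echo loop; exits when the parent closes its write end.
    os.close(p2c_w)
    os.close(c2p_r)
    while True:
        b = os.read(p2c_r, 1)
        if not b:
            os._exit(0)
        os.write(c2p_w, b)
else:
    os.close(p2c_r)
    os.close(c2p_w)
    t0 = time.perf_counter()
    for _ in range(N):
        os.write(p2c_w, b"x")  # request
        os.read(c2p_r, 1)      # reply (blocks: context switch each way)
    ipc_rate = N / (time.perf_counter() - t0)
    os.close(p2c_w)
    os.waitpid(pid, 0)
    print(f"in-process: {inproc_rate:,.0f} calls/s  "
          f"IPC round-trips: {ipc_rate:,.0f}/s")
```

On a typical machine the in-process rate comes out orders of magnitude higher than the IPC rate, which is the shape of the argument above, even before any marshalling or thread locking is added.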