From: <rea...@gm...> - 2002-06-02 15:56:03
[Thorsten has already got this. I have to pay more attention to such
details as addresses ;) ]

> biological systems are massively parallel while computers are serial.

I was not implying a simulation of a real biological system. It is just
convenient for eventual AI-like programmed robots to have a simulation
structure that mirrors the biological order of processing (sensors
first, motorics last).

> Once you start pushing in a computer you'll go all the way to the
> motors before another sensor event is taken into account.

What about (pseudo-)asynchronous threads? One main loop in the sim core
would send events to appropriate loops in the bots, which in turn invoke
threads to deal with them. (Yes, that is definitely not a concept that
would run on my 350 MHz CPU.) The threads could then do everything else
themselves, up to sending data to the motorics (or cognitive routines?).

> Thus I think, we should implement both.

Good idea.

> The CPU load will presumably be almost the same in both approaches.

If I understand correctly, every motorics node in pull mode would
constantly request new data, wouldn't it? In push mode, by contrast,
there would be only one such loop per bot, and it would simply wait for
new sensor events. *Slightly* faster? Doesn't matter ...

> The interface of the base class for nodes could contain five things:
> a pull method, a push method (both private), an input data member
> object, an output data member object, and an abstract method where
> client programmers put their implementation of the module (the latter
> three protected).

Seems a good start. But how should pushing work if it's private?
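The asynchronous idea above could be sketched roughly as follows. This is only an illustration of the concept, not project code: all names (`Bot`, `SensorEvent`, `postEvent`, the doubling "processing") are made up for the example. The sim core's main loop just enqueues sensor events and returns immediately; a per-bot worker thread consumes them on its own, so the core never blocks waiting for motor output.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical sensor event; in a real sim this would carry more data.
struct SensorEvent { int value; };

class Bot {
public:
    Bot() : done_(false), worker_(&Bot::run, this) {}

    ~Bot() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();  // worker drains remaining events before exiting
    }

    // Called by the sim core's main loop; returns immediately.
    void postEvent(SensorEvent e) {
        { std::lock_guard<std::mutex> lk(m_); events_.push(e); }
        cv_.notify_one();
    }

    // Snapshot of what has reached the "motorics" so far.
    std::vector<int> motorOutputs() {
        std::lock_guard<std::mutex> lk(m_);
        return outputs_;
    }

private:
    // The per-bot loop: waits for sensor events and processes them
    // "up to the motorics" independently of the sim core.
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [this] { return done_ || !events_.empty(); });
            if (events_.empty() && done_) return;
            SensorEvent e = events_.front();
            events_.pop();
            outputs_.push_back(e.value * 2);  // placeholder processing
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<SensorEvent> events_;
    std::vector<int> outputs_;
    bool done_;
    std::thread worker_;  // declared last so it starts fully initialized
};
```

With one such worker per bot, the core's loop stays short; whether that actually wins over pull mode on real hardware is exactly the open question above.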
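As for the five-part node interface quoted above, here is one possible C++ sketch of it (all identifiers are my own assumptions, and the int-valued data members are placeholders for whatever member objects the design ends up using). It also suggests one answer to the "private push" question: in C++, access control is per-class, not per-object, so one Node may legally call another Node's private push() or pull(), which is enough to drive a chain as long as the chain's entry points are public.

```cpp
#include <cassert>

class Node {
public:
    virtual ~Node() {}

    // Link this node's output to the next node's input.
    void connectTo(Node* next) {
        next_ = next;
        if (next) next->prev_ = this;
    }

    // Public entry points, e.g. for the sim core: push starts at the
    // sensor end of a chain, pull is requested at the motor end.
    void pushFrom(int sensorData) { push(sensorData); }
    int  pullResult()             { return pull(); }

    int lastOutput() const { return output_; }  // for inspection only

protected:
    int input_  = 0;  // placeholder for the input data member object
    int output_ = 0;  // placeholder for the output data member object

    // The abstract method where client programmers implement the module.
    virtual void process() = 0;

private:
    Node* prev_ = nullptr;
    Node* next_ = nullptr;

    // Push mode: data arrives, is processed, and flows downstream.
    void push(int data) {
        input_ = data;
        process();
        if (next_) next_->push(output_);  // legal: same-class private access
    }

    // Pull mode: fetch from upstream first, then process and return.
    int pull() {
        if (prev_) input_ = prev_->pull();
        process();
        return output_;
    }
};

// Example client modules.
class Sensor : public Node {          // fixed "sensor reading"
public:
    explicit Sensor(int reading) { input_ = reading; }
protected:
    void process() override { output_ = input_; }
};

class Doubler : public Node {         // trivial processing stage
protected:
    void process() override { output_ = input_ * 2; }
};
```

For example, with `Sensor s(5); Doubler m; s.connectTo(&m);`, calling `m.pullResult()` walks upstream and yields 10, while `s.pushFrom(7)` drives the chain the other way and leaves 14 in the Doubler. Whether the real design should rely on the same-class access trick or expose protected forwarding hooks instead is, of course, up for discussion.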