From: Thorsten R. <sch...@un...> - 2002-05-27 01:22:32
Hi,

Sorry in advance ... for being rude.

Concerning the physics: Writing a physics engine is no trivial task ... not nearly. We might be able to get something running if we forgot about everything else - though I doubt we would be able to make a significant contribution to the open source community. You seem to underestimate physics simulations ... my guess is that if we tackle physics ourselves, the project will get nowhere fast and stay there. The best we could hope for would be a kinematic sim, and even that would be one great waste of time. It's all there; the question is which tools we use. Don't make the purpose of this project to reinvent the wheel yet again. As I said in the forum, I know somebody who thoroughly explored the options available regarding dynamics simulations. I'll ask him ASAP. I don't know if it's any good, but here's a teaser: http://dynamechs.sourceforge.net/ I do BTW know one of the projects mentioned on that page - it's a respectable scientific project with some merits, so the library is probably not completely crappy.

As I also already said: The problem we should tackle first is the bones of BorgAsm - the beams and nuts and bolts that will one day hold everything together. I already elaborated on scripts as glue, but well, forget it ... it is not essential what the glue is. We need to understand what the parts of BorgAsm will be and how they interact. When we know that, we have to make some very basic decisions on the technologies to use (threads, pipes, CORBA, RPC ... I don't know). Then we'll have to provide a basic hierarchy and framework for the app. It would be a good idea to use UML for that. When the UML diagrams are available, we'll have to decide on some minor details like which languages and libraries to use. And then we can start coding.

All that does not have to take long. We don't have to come up with a fine-grained specification in advance. We "just" need enough detail to describe tasks for the developers - tasks that won't result in frustration for everybody. "Go, do some hacking on a physics engine, we'll think about interfacing it later" is a task, but it would be a waste of time even if there were no physics libs already available. The interfaces must come first. We can fake all the data flows to test the things we build on top of these basics. We can even start wrapping the chosen physics lib before the basics are ready. But the specification must be there before anything can really start.

To make a start, I'll describe the problem as I see it. There'll be a simulation, a visualization (which might come with the physics lib) and a controller. The robot geometry (including the number of limbs and actuators) should be configurable, as should the controller. I know most about controllers, so I'll start with that.

A controller can and should be implemented as a system of linked cybernetic or other processing modules. It could be one monolithic module that gets all the sensory input and produces all the motor output, but for most applications that won't do, because it's not flexible enough. Sensors and motors were mentioned in the previous paragraph. Sensors get their input from outside the graph-like cybernetic circuitry and produce output that is fed into cybernetic (or other) modules. Motors read their input from the controller circuit but send their output somewhere else. Thus sensors are where information enters the controller circuit, and motors are where it leaves it.
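Just to have something concrete to point at, here are the two roles sketched in C++ (assuming we end up with C++; the names are mine, and the plain double is only a stand-in for whatever payload type we finally agree on):

  // Minimal sketch of the two basic roles.
  class Sensor {
  public:
      virtual ~Sensor() {}
      // Fetches fresh data from outside the circuit (hardware,
      // simulation, ...) and makes it available as output.
      virtual double read() = 0;
  };

  class Motor {
  public:
      virtual ~Motor() {}
      // Takes a value from inside the circuit and sends it
      // somewhere else (hardware, simulation, ...).
      virtual void write(double value) = 0;
  };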
All other modules in the controller do both - they read their input from inside the controller circuit and send their output to the circuit. That makes three basic modules: sensors, motors and nodes. The nodes might just be something that inherits from both sensor and motor.

Now the order in which the modules are called has to be determined. The most flexible and simple yet efficient method is imho to start with the motors. They are called and then call whatever they are linked to in order to get their input. Before they can do any processing, they have to wait for that input. The modules that are called by the motors do the same, until these chains reach the sensors. The sensors have no further modules to call and can thus provide the input. Then everything works backwards, lots of processing happens, until the motors are reached and one iteration is complete. That way the correct calling order of the modules is ensured. That will even work if one plugs in additional modules at runtime. That's the way I did it in my controller, and that's the way the overflow guys do it. (There's a rough sketch of this scheme at the end of this mail.)

Now the simulation can also be something that's derived from both sensor and motor, but it's not the same as a node. That way arbitrary simulations can be plugged in (I have not tested that approach, but I think it's feasible - in my program the simulation is completely separate from the controller graph). If the visualization is separate from the simulation, it's a "motor" in terms of the base hierarchy (again, I do it differently, but for no good reason). If one does it like that, the simulation could even be left out of the program completely, and the sensors and motors of a real robot could be used if researchers feel like it.

Open questions are: How are the modules linked? What is the information that flows through the controller? In simple cases it's just a couple of doubles that may describe joint angles. But what about images from robot eyes, or from the virtual eyes of the simulation? What about speech/sound data? With speech the whole approach might not work (or maybe it would). Anyway, how do we achieve the flexibility to tunnel everything through the controller graph ... or is that not necessary? If joint angles are enough, something like an array of doubles can transport anything. With pictures that approach would also work, but it would become very, very ugly. Maybe that calls for templates. Or some cunning hierarchy.

Well now, that said, how independent are the modules? It should at least be possible to run modules in separate threads. Because of the overhead that should not be the default, but considering that certain sensors (e.g. cameras) can be extremely slow in terms of modern processors, one should not hang up the whole controller only to retrieve some data. Threads will probably do, but we should even consider completely separate programs that communicate somehow - what if somebody wants to run only the essential sensor readers and motor writers on the robot's processor and use a huge cluster for the actual controller? We don't have to start by implementing something like that, but we should arrange things so that we can get there some day if we have to or want to ...

Ok, sorry again for being rude ...

Thorsten
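PS: The promised sketch of the calling order, plus two more on the open questions. All of them assume C++, and all the names are my placeholders, not proposals. First the pull scheme: every module can be asked for its current output, and asking it triggers the evaluation of everything it depends on, so the call chain runs backwards from the motors to the sensors.

  #include <vector>

  // Everything in the circuit that can be asked for an output value.
  class Module {
  public:
      virtual ~Module() {}
      virtual double pull() = 0;  // compute and return this module's output
  };

  // A sensor has no modules to call; it just delivers fresh outside data.
  class JointAngleSensor : public Module {
  public:
      double pull() { return readHardware(); }
  private:
      double readHardware() { return 0.0; /* placeholder */ }
  };

  // An inner node pulls all of its inputs first, then processes them.
  class SumNode : public Module {
  public:
      void link(Module* m) { inputs.push_back(m); }
      double pull() {
          double sum = 0.0;
          for (unsigned i = 0; i < inputs.size(); ++i)
              sum += inputs[i]->pull();  // recurse towards the sensors
          return sum;
      }
  private:
      std::vector<Module*> inputs;
  };

  // One iteration of the controller: call each motor; it pulls its input
  // chain down to the sensors and writes the result out of the circuit.
  class Motor {
  public:
      Motor(Module* in) : input(in) {}
      void step() { writeHardware(input->pull()); }
  private:
      void writeHardware(double) { /* placeholder */ }
      Module* input;
  };

In a real graph one would additionally cache each module's output per iteration; otherwise a module that feeds several consumers gets evaluated once per consumer.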
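On the data-type question: if templates turn out to be the answer, the links could carry their payload type, so wiring a camera into a port that expects a joint angle becomes a compile-time error instead of a runtime mess. Roughly:

  // Sketch of typed links: a module declares what it produces.
  template <typename T>
  class Source {
  public:
      virtual ~Source() {}
      virtual T pull() = 0;
  };

  struct Image { /* pixels, width, height, ... */ };

  // A camera is a Source<Image>, a joint sensor is a Source<double>;
  // the two cannot accidentally be wired to each other.
  class Camera : public Source<Image> {
  public:
      Image pull() { Image img; /* grab a frame */ return img; }
  };

  class JointSensor : public Source<double> {
  public:
      double pull() { return 0.0; /* read the encoder */ }
  };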
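And on the slow-sensor problem, the simplest arrangement I can think of: the slow device is polled in its own thread, and the circuit only ever reads the latest cached value, so pulling it never stalls an iteration. With pthreads that could look like this (error handling omitted):

  #include <pthread.h>
  #include <unistd.h>

  // A slow device (e.g. a camera) polled in its own thread.
  class ThreadedSensor {
  public:
      ThreadedSensor() : running(true), latest(0.0) {
          pthread_mutex_init(&mtx, 0);
          pthread_create(&worker, 0, &ThreadedSensor::entry, this);
      }
      ~ThreadedSensor() {
          running = false;
          pthread_join(worker, 0);
          pthread_mutex_destroy(&mtx);
      }
      // Circuit side: returns the most recent value immediately.
      double pull() {
          pthread_mutex_lock(&mtx);
          double v = latest;
          pthread_mutex_unlock(&mtx);
          return v;
      }
  private:
      static void* entry(void* self) {
          static_cast<ThreadedSensor*>(self)->loop();
          return 0;
      }
      void loop() {
          while (running) {
              double v = slowRead();  // may take many milliseconds
              pthread_mutex_lock(&mtx);
              latest = v;
              pthread_mutex_unlock(&mtx);
          }
      }
      double slowRead() { usleep(40000); return 0.0; /* placeholder */ }
      volatile bool running;
      double latest;
      pthread_mutex_t mtx;
      pthread_t worker;
  };

The same pull() interface would keep working if the "thread" one day became a separate process on the robot talking to a cluster - which is exactly the kind of door we should leave open.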