From: <rea...@gm...> - 2002-06-02 15:59:15
> Modules call the push methods of other modules - all that is implemented
> in the same base class so it can be made private.

Thx. Someday, I'll maybe read a book about OOP :)

--
GMX - Die Kommunikationsplattform im Internet. http://www.gmx.net
From: <rea...@gm...> - 2002-06-02 15:56:03
[Thorsten has already got this. I have to pay more attention to such details
as addresses ;) ]

> biological systems are massively parallel while computers are serial.

I was not implying a simulation of a real biological system. It's just
convenient for eventual AI-like programmed robots to have a simulation
structure that is similar in the order of processing (sensors first,
motorics last).

> Once you start pushing in a computer you'll go all the way to the motors
> before another sensor event is taken into account.

What about (pseudo-) asynchronous threads? One main loop in the sim core
sending events to appropriate loops in the bots which invoke threads to deal
with events. (Yes, that is definitely not a concept that would run on my
350 MHz CPU.) The threads could do everything else themselves, up to sending
data to motorics (or cognitive routines?).

> Thus I think, we should implement both.

Good idea.

> The CPU load will presumably be almost the same in both approaches.

If I understand correctly, every motorics node in pull mode would constantly
require new data, wouldn't it? Opposed to that, in push mode, there would be
only one such loop per bot, which would wait for new sensor events.
*Slightly* faster? Doesn't matter ...

> The interface of the base class for nodes could contain five things:
> A pull method, a push method (both private), an input data member
> object, an output data member object and an abstract method where
> client programmers put their implementation of the module (the latter
> three protected).

Seems a good start. But how should pushing work if it's private?
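The asynchronous idea above (a main loop in the sim core handing sensor events to per-bot loops that deal with them on their own threads) could be sketched roughly like this in modern C++. All names here are illustrative, not part of any agreed interface:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// Hypothetical sketch: the sim core's main loop enqueues sensor events;
// one worker thread per bot drains the queue and runs the handler (which
// would eventually drive the motorics or cognitive routines).
class EventLoop {
public:
    using Event = std::function<void()>;

    EventLoop() : worker_([this] { run(); }) {}

    ~EventLoop() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();  // remaining queued events are drained before exit
    }

    // Called by the sim core when a sensor produces new data.
    void post(Event e) {
        {
            std::lock_guard<std::mutex> lock(m_);
            events_.push(std::move(e));
        }
        cv_.notify_one();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lock(m_);
        for (;;) {
            cv_.wait(lock, [this] { return done_ || !events_.empty(); });
            if (events_.empty() && done_) return;
            Event e = std::move(events_.front());
            events_.pop();
            lock.unlock();
            e();  // handle the sensor event outside the lock
            lock.lock();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Event> events_;
    bool done_ = false;
    std::thread worker_;  // declared last so the queue exists before it starts
};
```

The one-queue-per-bot shape matches the "only one such loop per bot which would wait for new sensor events" point: the thread sleeps on the condition variable and costs nothing while no sensor fires.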
From: Thorsten R. <sch...@un...> - 2002-06-02 15:26:09
Hi,

Sorry, I made the same error as real nowhereman - I sent something that was
intended to go to the list to realnowhereman only. Here are the mails:

Hi,

> > I think it's best to do the sensor input using portals.
>
> I agree. This should be a good way of ensuring that the simulation tasks
> are adequately distributed between the engine (world geometry, environment
> data, collision control etc.) and the bots (movement, actions, reactions
> to environment).

Sorry, I don't understand the notion of portals. Can somebody provide a link
or a short explanation please?

I also don't understand why some 3D data should be used as the universal
datatype. I think, if we use C++, we should definitely use template methods
for data transport. The data can be handled by a specialized class that
stores data using void pointers, has some internal data typing mechanism to
handle invalid module linking gracefully, and allows access to the data
through template methods. This has the advantage that it can be completely
encapsulated in that data handling entity, it can be made safe and it can be
made efficient. But the real advantage is that this way you can
transparently pass objects or any built-in types between modules. If you
pass images you can just use the image format of your favorite image
processing library; same for speech; and if you just want to pass a bool or
a double, great, pass that, no problem. Client programmers only have to make
sure that each pair of communicating modules agrees on what is passed
between them.

***

I completely agree with realnowhereman on the machine code issue. If anybody
needs something like that, an interpreter should be used and plugged in.
Anything else would presumably break the project in more than one way ...

***

In my last mail I wrote about a concept to control program flow (start with
the motors, work through the links to the sensors and back ...). I thought a
lot about that.

I'll call the approach I described in my last mail the pull technique,
because you start with the motors and they "pull" the data all the way
through the module graph. The two alternatives that come to my mind are
layered execution and the push technique. With the latter you start at the
sensors and they push the data to the next order of modules and so on until
the motors are reached.

The push technique allows for a completely different approach to
controllers. One big advantage is that you only need to process stuff that
really gets new inputs, i.e. sensors only push data when they actually have
new data. This could also be used asynchronously with event-driven sensors.
And it much more closely resembles what's happening inside our heads. We
should not completely ignore the push approach, and I think push and pull
can be simultaneously implemented in the base hierarchy without much effort
and without adding too much confusion for client programmers.

Layered execution means that modules are ordered into layers. Sensors would
be in the top layer, motors in the bottom layer, all other modules in
between. Execution starts with the top layer and works serially through all
layers to the bottom. This would be easy for us to implement, and we can
think about keeping it in mind for later versions of BorgAsm, but it makes
designing the controller awkward for client programmers, because it adds
complexity to their task.

Thorsten

Hi,

> The push model seems the most sane to me for two reasons - it's close to
> biology (reaction to stimulus) and it would (probably) create less CPU
> load than the pull model. A layered hierarchy is implementable, but I
> agree with Thorsten's critique - it's too awkward.

The push model indeed looks more biology-like. But biological systems are
massively parallel while computers are serial. Once you start pushing in a
computer you'll go all the way to the motors before another sensor event is
taken into account. To avoid that, one would have to frequently check for
new events, creating much overhead. And still it's unlikely that you'd get
anywhere with this approach, because computers are so fast. Brains are very,
very slow and massively parallel. The dynamics of asynchronous events
probably play a crucial role. But these dynamics are not at all understood,
and we can't hope to change that with BorgAsm. Thus the push approach can
only be used for simple reflexes, and it might be very useful there. With
the pull approach you can handle problems where you need information from
many sensors simultaneously (e.g. higher coordination problems in
multi-limbed robots). Thus I think we should implement both.

The CPU load will presumably be almost the same in both approaches. In push
mode each module has to make one function call for each output it generates.
In pull mode each module has to make one function call for each input it
requires. Inputs and outputs will obviously match in number. In push mode a
push call by a sensor won't return until the motors are reached and the
stack has worked back to the sensor. Vice versa for pull mode.

The interface of the base class for nodes could contain five things: a pull
method, a push method (both private), an input data member object, an output
data member object, and an abstract method where client programmers put
their implementation of the module (the latter three protected). Pull will
call pull methods in upstream modules, write the result to the input member
once the upstream calls have returned, then call the implementation and
finally return the output data member. Push will write the output to the
downstream input data object, call the downstream implementation and finally
call push.

An example of how I imagine this to work in C++:

    class Node : public ...
    {
    protected:
        InputDataContainer in;   // smart container of InputData objects
        OutputDataContainer out; // same for output

        // Client programmers use in to get their input and out to write
        // their output. Their implementation is thus not affected by
        // whether the module operates in push or in pull mode.
        virtual void implementation() = 0;

    private:
        Time t;
        InputLinks inLinks;
        OutputLinks outLinks;

        template<class Cont>
        void push(Cont inData)
        {
            in.add(inData);
            implementation();
            for(unsigned int i = 0; i < outLinks.size(); i++)
                outLinks.link(i).push(out[outLinks.outputDataIdentifier(i)]);
        }

        OutputDataObject pull(OutputDataIdentifier outId)
        {
            if(t.upToDate())
                return out[outId];
            for(unsigned int i = 0; i < inLinks.size(); i++)
                in.add(inLinks.link(i).pull(inLinks.inputDataIdentifier(i)));
            implementation();
            return out[outId];
        }
    };

To support arbitrary datatypes, InputDataContainer and OutputDataContainer
have template methods. Sensors and motors have modified implementations of
push and pull. One is public in sensors, the other in motors. These public
functions can then be called by the main loop of the program. Client
programmers can choose whether they want push or pull mode; they can even
use both simultaneously for different tasks.

This is just a coarse first draft that is meant to serve as an illustration
of the lines I think along. Don't take it too seriously; the implementation
might look completely different.

Cheers,
Thorsten
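The transport class described above (type-erased storage behind void-like pointers, with template accessors that check an internal type tag so that mis-linked modules fail gracefully instead of reinterpreting raw memory) could look roughly like this. The class and method names are made up for illustration and are not part of the draft interface:

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>
#include <typeindex>
#include <typeinfo>
#include <utility>

// Illustrative sketch of the proposed data-transport container. Values are
// stored type-erased (std::shared_ptr<void> remembers how to delete them),
// together with a type tag checked by the template accessors.
class DataContainer {
public:
    template <class T>
    void set(const std::string& key, T value) {
        slots_.erase(key);
        slots_.emplace(key, Slot{std::make_shared<T>(std::move(value)),
                                 std::type_index(typeid(T))});
    }

    template <class T>
    const T& get(const std::string& key) const {
        auto it = slots_.find(key);
        if (it == slots_.end())
            throw std::runtime_error("no data linked under '" + key + "'");
        if (it->second.type != std::type_index(typeid(T)))
            throw std::runtime_error("type mismatch on '" + key + "'");
        return *static_cast<const T*>(it->second.data.get());
    }

private:
    struct Slot {
        std::shared_ptr<void> data;  // type-erased value
        std::type_index type;        // tag for graceful mismatch handling
    };
    std::map<std::string, Slot> slots_;
};
```

This gives exactly the trade Thorsten describes: modules can pass doubles, bools, or whole image objects through the same container, and a wrongly linked pair of modules gets a clean error rather than corrupted data.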
From: Thorsten R. <sch...@un...> - 2002-06-02 15:19:12
Hi,

> Seems a good start. But how should pushing work if it's private?

Modules call the push methods of other modules - all that is implemented in
the same base class so it can be made private. I think everything that can
be made private should be made private. Motors and sensors need public
interfaces to pull/push because they are the points where everything starts.

Cheers,
Thorsten
From: <rea...@gm...> - 2002-06-02 11:42:17
> > > I think it's best to do the sensor input using portals.
>
> Sorry, I don't understand the notion of portals. Can somebody provide a
> link or a short explanation please?

I think that what was meant is this: a bot's AI (I'm not implying that it is
indeed something in the AI category, it can be just usual programming as in
"go round in circles") may (and probably will) request information about the
bot's environment. This information would include all simulated parameters
that may be perceived by the bot, the limitations being the bot's simulated
sensors' FOV and data filters that would have to be set by either the
hardware module or the AI.

> I also don't understand why some 3D data should be used as the universal
> datatype.

It shouldn't, except when 3D data is sent. Btw, a universal typing library
would be cool.

[Pull vs. Push vs. Layered]

The push model seems the most sane to me for two reasons - it's close to
biology (reaction to stimulus) and it would (probably) create less CPU load
than the pull model. A layered hierarchy is implementable, but I agree with
Thorsten's critique - it's too awkward.
From: <rea...@gm...> - 2002-06-01 21:10:04
> As a follow-up to Thorsten's message -

This is meant as a request to everyone. If your post is a follow-up to
someone else's, please hit the reply button in your mail client instead of
writing a totally new email. The reason why you should do so is that some
email clients and the WWW archives (attempt to) display the posts in
threads. They can't do that if every follow-up has a new subject. So please
*reply* when you mean it, and write a *new* email if - and only if - you
start a new thread. Thank you in advance.

> I think it's best to do the sensor input using portals.

I agree. This should be a good way of ensuring that the simulation tasks are
adequately distributed between the engine (world geometry, environment data,
collision control etc.) and the bots (movement, actions, reactions to
environment).

> But if we wish the engine to run faster, we can have some built-in sensor
> types (eg: IR, UV, Encoders..)

I think it would be sufficient if the (generic) sensor API offered ways to
limit its output to certain bots/sensors. E.g.,

    sensorAPI -> setLimits( 5 /* meters */, SENSOR_CONTOUR /* view */ );

(where sensorAPI is an instance of the generic sensor and interacts directly
with the sim core).

> What I mean is that the commands to the robot will be given
> using a machine-code set (depends on which processor).

Wouldn't this be rather uncomfortable for those who want to build a virtual
robot? While real robots may be programmed in assembler or machine code, it
would, on the PC, require a simulation of the processor the code is targeted
at, plus an interface to the simulation. Additionally, the user would have
to learn yet another language - neither assembler nor machine code are
really hard to learn, but they're not the most comfortable languages or the
fastest to write software in either.

> 1) More realistic

That would depend on how far we want the sim to become hardware-specific.

> 2) Faster than scripting of any kind
> 3) Safer than changing the source-code

What about offering an API for writing robots in, then letting the sim
dlopen() (or whatever) the robots' DLLs? That would let the robot developers
write their code in any language (e.g. C(++)) for which a compiler is
available that can build DLLs for the target OS. (OK, it's not that easy.
But you know what I mean ;) )

If required, a way to integrate robot machine code into this concept would
be to create an interpreter for this code, then make the interpreter behave
like a robot towards the sim. The interpreter would then effectively be a
macro-programmed robot, steered by the machine code.

> We can also support several processors with different machine-code set for
> each.

If the project will be as modular as has been asked for, that should be no
problem.

Erez: sorry, I meant to send this to the list right away. The webmail
interface was too dumb :(
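The dlopen() idea above could be sketched as follows. Only the generic symbol-loading helper is concrete code; the robot entry-point name (`create_robot`) and the `Robot` type are hypothetical placeholders for whatever interface the robot API would actually define:

```cpp
#include <dlfcn.h>

#include <stdexcept>
#include <string>

// Sketch of the proposed plugin mechanism: the sim dlopen()s a robot's
// shared object and resolves an agreed-upon entry point by name.
void* load_entry_point(const std::string& library, const std::string& symbol) {
    void* handle = dlopen(library.c_str(), RTLD_NOW);
    if (!handle)
        throw std::runtime_error(std::string("dlopen failed: ") + dlerror());
    void* entry = dlsym(handle, symbol.c_str());
    if (!entry)
        throw std::runtime_error(std::string("dlsym failed: ") + dlerror());
    return entry;  // the library handle is intentionally kept open here
}

// A robot plugin would then export something like (hypothetical interface):
//   extern "C" Robot* create_robot();
// and the sim would resolve and call it:
//   auto create = reinterpret_cast<Robot* (*)()>(
//       load_entry_point("./libmybot.so", "create_robot"));
```

On POSIX systems this needs linking against the dl library (`-ldl` on older glibc; it is part of libc on glibc 2.34 and later). The same pattern maps to LoadLibrary/GetProcAddress on Windows.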
From: Erez S. <er...@pr...> - 2002-06-01 15:49:43
As a follow-up to Thorsten's message - I think it's best to do the sensor
input using portals. Each sensor will receive the info from the portal as a
3D space (limited in the view range) and will respond to the data using
what's written in its module. This will provide the most realistic results,
and allow us to add new types of sensors easily.

But if we wish the engine to run faster, we can have some built-in sensor
types (eg: IR, UV, Encoders..) that will return to the modules only the
value they require (whether it's a distance or boolean values), and the
required processing will be done inside the engine. It will run faster and
shouldn't lower the quality of the result if we do it right. In any case,
the users can always choose to build their own module that will receive the
full data (the 3D space).

As for programming the virtual robots - I think that for a real simulation,
we should not only simulate the physics of the robot, but also simulate the
processor. What I mean is that the commands to the robot will be given using
a machine-code set (depends on which processor). Machine code is:
1) More realistic
2) Faster than scripting of any kind
3) Safer than changing the source-code (especially for people who aren't in
   this project).
We can also support several processors with a different machine-code set for
each.

Example: This year I built a robot with an HC12 processor. I still have its
full source code (in asm). If, when this project is done (or at least
partially done), I can create a copy of that robot in our engine, put in its
HC12 code (whether in asm or compiled), and it runs like it ran in reality,
then I will be able to say proudly - "We did excellent work!!"
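To illustrate the "simulate the processor" route in its simplest possible form: a toy fetch-decode-execute loop over an invented four-opcode instruction set. Nothing below corresponds to the real HC12 encoding - a genuine processor simulation would be far larger - but the loop shape is the same:

```cpp
#include <cstdint>
#include <vector>

// Invented opcodes, purely for illustration (not HC12).
enum Opcode : std::uint8_t {
    OP_LOAD = 0,  // LOAD imm : acc = imm
    OP_ADD  = 1,  // ADD  imm : acc += imm
    OP_OUT  = 2,  // OUT      : append acc to the motor command stream
    OP_HALT = 3,
};

// Runs a byte program and returns the values it wrote to a hypothetical
// motor port, i.e. the commands the simulated robot would receive.
std::vector<int> run_program(const std::vector<std::uint8_t>& code) {
    std::vector<int> motor_commands;
    int acc = 0;
    std::size_t pc = 0;  // program counter
    while (pc < code.size()) {
        switch (code[pc++]) {
            case OP_LOAD: acc = code[pc++]; break;
            case OP_ADD:  acc += code[pc++]; break;
            case OP_OUT:  motor_commands.push_back(acc); break;
            case OP_HALT: return motor_commands;
            default:      return motor_commands;  // unknown opcode: stop
        }
    }
    return motor_commands;
}
```

Such an interpreter, wrapped to look like a robot towards the sim (as realnowhereman suggests in the reply above), is one way to get machine-code-driven bots without baking processor details into the engine.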
From: Anthony C R. <pha...@li...> - 2002-05-30 19:31:13
Thorsten, you haven't been rude at all. I think that your plans are solid,
and that they're the best way forward. Of course, I'm a coder through and
through. I like to code, so planning isn't really my strong point. I'd also
like to point out that since I'm going to be moving next week, I need to
take rather more of a back seat on this project.

Also, I strongly believe that the person most qualified to make a decision
should make that decision. In this spirit, I would like to state, for
posterity (and for the readers of the archives), that I believe Thorsten
should lead the planning of this project. I may object on one or two points,
purely in the interests of debate, but that's all. In summary, planning this
software project is something I feel I can contribute to, but for the sake
of the project I will defer to anyone with greater knowledge. I hope that
this sets a precedent for anyone else who might join the project.

Now I've got that out of the way, I can do some research into linking these
modules :)

--
Anthony C Reid [www.acreid.com]
From: Thorsten R. <sch...@un...> - 2002-05-27 01:22:32
Hi,

Sorry in advance ... for being rude.

Concerning the physics: Writing a physics engine is no trivial task ... not
nearly. We might be able to get something running if we forgot about
everything else - though I doubt we would be able to make a significant
contribution to the open source community. You seem to underestimate physics
simulations ... my guess is the project will get nowhere fast and stay there
if we tackle physics ourselves. The best we could hope for would be a
kinematic sim, but even that would be one great waste of time. It's all
there; the question is what tools we use. Don't make the purpose of this
project to reinvent the wheel yet again. As I said in the forum, I know
somebody who thoroughly explored the options available regarding dynamics
simulations. I'll ask him ASAP. I don't know if it's any good, but here's a
teaser: http://dynamechs.sourceforge.net/ I do BTW know one of the projects
mentioned on that page - it's a respectable scientific project with some
merits, thus the library is probably not completely crappy.

As I also already said: The problem we should tackle first is the bones of
BorgAsm - the beams and nuts and bolts that are once going to hold
everything together. I already elaborated on scripts as glue, but well,
forget it ... it is not essential what the glue is. We need to understand
what the parts of BorgAsm will be and how they interact. When we know that,
we have to come up with very basic decisions on technologies to use
(threads, pipes, CORBA, RPC ... I don't know). Then we'll have to provide a
basic hierarchy and framework for the app. It would be a good idea to use
UML for that. When the UML diagrams are available, we'll have to decide on
some minor details like what languages and libraries to use. And then we can
start coding.

All that does not have to take long. We don't have to come up with a finely
grained specification in advance. We "just" need enough details to describe
tasks for the developers. Tasks that won't result in frustration for
everybody. "Go, do some hacking on a physics engine, we'll think about
interfacing it later" is a task, but it would be a waste of time even if
there were no physics libs already available. The interfaces must come
first. We can fake all the data flows to test things we build on top of
these basics. We can even start wrapping the chosen physics lib before the
basics are ready. But the specification must be there before anything can
really start.

To make a start, I'll describe the problem as I see it. There'll be a
simulation, a visualization (which might come with the physics lib) and a
controller. The robot geometry (including the number of limbs and actuators)
should be configurable, as should the controller. I know most about
controllers, thus I'll start with that.

A controller can and should be implemented as a system of linked cybernetic
or other processing modules. It could be one monolithic module that gets all
the sensory input and produces all the motor output, but for most
applications that'll not do, because it's not flexible enough. Sensors and
motors were mentioned in the above paragraph. Sensors get input from outside
the cybernetic graph-like circuitry and produce output that is fed into
cybernetic (or other) modules. Motors read their input from the controller
circuit, but send their output somewhere else. Thus sensors are something
information is read from, motors are something that read information. All
other modules in the controller do both - they read their input from inside
the controller circuit and send their output to the circuit. That makes
three basic modules: sensors, motors and nodes. The nodes might just be
something that inherits from both sensor and motor.

Now the order in which the modules are to be called is to be determined. The
most flexible and simple yet efficient method is imho to start with the
motors. They are called and then call whatever they are linked to to get
their input. Before they can do any processing they have to wait for the
input. The modules that are called by the motors do the same until these
chains reach the sensors. The sensors have no further modules to call and
can thus provide the input. Then everything works backwards, lots of
processing happens until the motors are reached and one iteration is
complete. That way the correct order of calling the modules is ensured. That
will even work if one plugs in additional modules during runtime. That's the
way I did it in my controller, and that's the way the overflow guys do it.

Now the simulation can also be something that's derived from both sensor and
motor, but it's not the same as a node. That way arbitrary simulations can
be plugged in (I have not tested that approach, but I think it's feasible -
in my program the simulation is completely separate from the controller
graph). If the visualization is separate from the simulation, it's a "motor"
in terms of the base hierarchy (again, I do it differently, but for no good
reason). If one does it like that, the simulation could even be left out of
the program completely, and the sensors and motors of a real robot could be
used if researchers feel like it.

Open questions are: How are the modules linked? What is the information that
flows through the controller? In simple cases it's just a couple of doubles
that may describe joint angles. But what about images from robot eyes or
from the virtual eyes of the simulation? What about speech/sound data? With
speech the whole approach might not work (or maybe it would). Anyway, how is
the flexibility achieved to tunnel everything through the controller graph
... or is that not necessary? If joint angles are enough, something like an
array of doubles can transport anything. With pictures that approach would
also work, but it would become very, very ugly. Maybe that calls for
templates. Or some cunning hierarchy.

Well now, that said, how independent are the modules? It should at least be
possible to run modules in separate threads. Because of the overhead that
should not be the default, but considering that certain sensors (e.g.
cameras) can be extremely slow in terms of modern processors, one should not
hang up the whole controller only for retrieving some data. Threads will
probably do, but we should even consider completely separate programs that
communicate somehow - what if somebody wants to run only essential sensor
readers and motor writers on the robot's processor and use a huge cluster
for the actual controller? We don't have to start with implementing
something like that, but we should arrange things to allow getting there
some day if we have to or want to ...

Ok, sorry again for being rude ...

Thorsten
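The pull order described above (motors call their upstream links, the recursion reaches the sensors, results flow back, and a per-iteration check stops a module shared by several consumers from being evaluated twice) can be sketched as follows. All class and member names are illustrative, not taken from any agreed design:

```cpp
#include <vector>

// Sketch of pull-mode evaluation over a module graph. A per-iteration
// tick stamp plays the role of the "upToDate" check: a module shared by
// several downstream consumers is computed only once per iteration.
class Module {
public:
    virtual ~Module() = default;

    void link_input(Module* upstream) { inputs_.push_back(upstream); }

    // Called by downstream modules (or, for motors, by the main loop).
    double pull(long tick) {
        if (tick == last_tick_) return output_;  // already computed this tick
        last_tick_ = tick;
        std::vector<double> in;
        for (Module* m : inputs_) in.push_back(m->pull(tick));  // recurse
        output_ = implementation(in);
        return output_;
    }

protected:
    // Client programmers override only this, unaware of the call order.
    virtual double implementation(const std::vector<double>& in) = 0;

private:
    std::vector<Module*> inputs_;
    long last_tick_ = -1;
    double output_ = 0.0;
};

// A "sensor" (no upstream links) and a summing node, as minimal examples.
class ConstantSensor : public Module {
public:
    explicit ConstantSensor(double v) : v_(v) {}
    int evaluations = 0;  // exposed to show the once-per-tick guarantee
protected:
    double implementation(const std::vector<double>&) override {
        ++evaluations;
        return v_;
    }
private:
    double v_;
};

class SumNode : public Module {
protected:
    double implementation(const std::vector<double>& in) override {
        double s = 0.0;
        for (double x : in) s += x;
        return s;
    }
};
```

With a diamond-shaped graph (one sensor feeding two nodes that both feed a motor), the sensor's implementation still runs exactly once per iteration, which is the property that makes this ordering correct even for the multi-sensor coordination problems mentioned above.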
From: Anthony C R. <pha...@li...> - 2002-05-26 13:36:31
http://iregt1.iai.fzk.de/

Of course, the BorgASM (we need a better name...) project's goals aren't
quite the same, but we'll be working along similar lines to these guys. When
you think of BorgASM, think of KISMET, but completely free and designed for
cheaper hardware.

--
Anthony C Reid [www.acreid.com]
From: Anthony C R. <pha...@li...> - 2002-05-26 13:25:35
http://robotics.sourceforge.net/

--
Anthony C Reid [www.acreid.com]
From: Anthony C R. <pha...@li...> - 2002-05-26 12:28:48
Okay, it's time to discuss definite plans for the physics sim. I'd like to
see something simple sometime soon - some blocks falling down some stairs,
and the like. IMO, decisions need to be made. I don't think we should get
bogged down in debate; let's just sort out a plan by next week and start
cranking out code.

--
Anthony C Reid [www.acreid.com]