From: Oliver O. <Oli...@ne...> - 2007-02-05 03:31:18
Hi,

On 04/02/2007, at 3:50 PM, Yuan Xu wrote:

> 2007/2/3, Oliver Obst <Oli...@ne...>:
>> sounds all quite good.
>> I've also found these hints for timing the physics loop (in a computer
>> game) here: [1], it's an interesting read, at least. From this page,
>> there are also more interesting articles.
[...]
>> [1] http://www.gaffer.org/game-physics/fix-your-timestep/

> The article is great!
> There is a similar implementation in simspark (see void
> SimulationServer::Step()).
> Set mSimStep to 0.01, then the physics will run in 10ms steps.

A smaller step should make the simulation more stable, but for this we need
to run some experiments (maybe later on).

> I face a problem when trying to introduce the threads:
> there is only one copy of the scene information in the simulator,
> so different threads cannot operate on the scene information at
> the same time.
> For example, while the agent-management thread encodes the
> perception messages,
> the scene information should not be changed,
> so the physics thread has to be suspended.
> As a result, this kind of multi-threading is meaningless, I think.

I understand the problem. But it's not entirely meaningless, please see below.

> Maybe we need a buffer, like the frame buffer in OpenGL:
> the scene information that will be handled by the
> agent-management thread gets saved to the buffer.
> But it is not straightforward work, because the scene information is
> stored in different objects.

Well, one direction to approach that would be (1) to start using threads
anyway:

1.a) We decide to care about inconsistencies in sensor readings. We could
start this by using no buffer, and for the beginning we could use locks so
that no two threads operate on the scene graph at the same time. Later on,
once the threads are working, we can think about something to get rid of the
big locks (like the proposed buffer).

1.b) We decide not to care about inconsistencies in sensor readings. We start
using threads and no buffers (like 1.a), but care only about multiple writes
(which are obviously a bad thing, and shouldn't happen anyway). At the moment
I think the worst thing that can happen is an inconsistent sensor reading...
the physics should be fine. But maybe I'm missing something: what would be
the reason for the physics thread to be suspended? Inconsistent readings can
happen with real sensors as well (e.g. a laser range finder and moving
objects, etc.). Later on, if that's working, we resolve the inconsistency
issue with small locks or buffering.

The alternative approach (2) would be to not use threads in the simulator for
now, but to think of a smart single-threaded solution along the lines of the
article [1] above. The single-threaded solution would be more interesting for
the development phase, because it's easier to debug (and maybe also easier to
implement).

At the end of the day I'd prefer a multi-threaded solution, because it might
help with the current performance issues (and all machines are becoming
multi-core anyway). But if a single-threaded solution is more stable, let's
stick with one thread and think about more threads later.
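Just to make the "big lock" in 1.a a bit more concrete, here is a rough
sketch in boost.thread style; simulationRunning(), runPhysicsStep() and
encodePerceptions() are only placeholders, not the actual simspark
interfaces:

    // sketch only: one big lock around the scene graph, two threads
    #include <boost/thread/thread.hpp>
    #include <boost/thread/mutex.hpp>

    bool simulationRunning() { return true; }   // placeholder
    void runPhysicsStep(float dt) { /* one ODE step of length dt */ }
    void encodePerceptions() { /* build the agents' sensor messages */ }

    boost::mutex sceneMutex;   // the one "big lock" around the scene graph

    void physicsThreadLoop()
    {
        while (simulationRunning())
        {
            boost::mutex::scoped_lock lock(sceneMutex);
            runPhysicsStep(0.01f);   // advance the physics by mSimStep
        }                            // lock released, agent thread can run
    }

    void agentThreadLoop()
    {
        while (simulationRunning())
        {
            {
                boost::mutex::scoped_lock lock(sceneMutex);
                encodePerceptions(); // read-only pass over the scene graph
            }
            // network I/O and command parsing happen outside the lock
        }
    }

    int main()
    {
        boost::thread physics(&physicsThreadLoop);
        boost::thread agents(&agentThreadLoop);
        physics.join();
        agents.join();
        return 0;
    }

The important point is only that each thread holds the scene-graph lock for
as short a time as possible, and that everything which doesn't touch the
scene graph (parsing, networking) stays outside the lock.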
> If there is a class or struct representing the scene information, the
> work will be easier.
> But the plug-in mechanism changes the scene, so building such a
> scene class is not simple.

The original idea of the whole simulator framework was to represent all kinds
of information about an object in an extended scene graph (tree); this
includes visual aspects like textures etc., if an object possesses them.

I still think that this tree representation is very good, but for a few
reasons we have to think over the concept again. Firstly, if the monitor is
external (a separate program), there is no point in representing (and
updating) visual information together with the physical properties of an
object. Secondly, even if the monitor is part of the simulator process (which
would be nice to avoid serialization), the visual information doesn't need to
be updated at the same rate as the physical information.

My feeling is that some of the steps towards a solution look like this
(comments more than welcome):

- initially (or whenever new objects are spawned), the simulator needs to
  tell the monitor _how_ everything wants to be displayed (in terms of
  textures, meshes etc.). The simulator has to store this information
  somewhere (but not necessarily in the part of the tree that gets updated by
  the physics).

- if we want an extra thread for the visualization, we should check whether
  it's better to copy the scene tree or to let the visualization plugin
  access the scene tree directly (using a lock or so). Copying takes extra
  time, but can have advantages too.

- the same holds for the agents' sensors, with the difference that these have
  to be coupled to the actual simulation speed, whereas the visualization
  just displays the situation as it currently is (i.e. the visualization can
  run at a constant framerate even if the simulator step rate changes).

- the actions of an agent should then not modify the world directly, but be
  placed in a queue, and the physics loop should take care of them (so that
  there is only a single thread actually changing the world). I think that's
  not fundamentally different from the current approach, but still worth
  looking over; there is a small sketch of this in the P.S. below.

- if we want the agents to run as fast as possible, we should use something
  like the syncmode of the 2D simulator; but to give the agents enough time
  to think, the simulator should wait before the next physics step if it is
  faster than real time in competition settings.

cheers
Oliver

--
Oliver Obst  ES208  form follows function (Louis Sullivan).
Fon: +61 2 492 16175  http://oliver.obst.eu/
Uni Newcastle  School of Electrical Engineering and Computer Science
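P.S. To make the action-queue idea from the list above a bit more concrete,
a rough sketch; AgentAction, queueAction() and applyQueuedActions() are
made-up names, not existing simspark code:

    // sketch only: agents enqueue actions, the physics loop applies them
    #include <boost/thread/mutex.hpp>
    #include <deque>

    struct AgentAction
    {
        int agentId;
        // ... parsed effector command ...
    };

    std::deque<AgentAction> actionQueue;
    boost::mutex            queueMutex;

    // called from the agent threads: only enqueue, never touch the scene
    void queueAction(const AgentAction& action)
    {
        boost::mutex::scoped_lock lock(queueMutex);
        actionQueue.push_back(action);
    }

    // called by the physics loop right before the next step, so the
    // physics thread stays the only thread that modifies the world
    void applyQueuedActions()
    {
        std::deque<AgentAction> pending;
        {
            boost::mutex::scoped_lock lock(queueMutex);
            pending.swap(actionQueue);   // take all actions, lock held briefly
        }
        for (std::deque<AgentAction>::iterator it = pending.begin();
             it != pending.end(); ++it)
        {
            // applyToScene(*it);        // placeholder for the effector code
        }
    }

With the swap, the queue lock is only held for a moment, and the physics loop
remains the single place where the world is actually changed.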