From: Joschka B. <jos...@am...> - 2008-04-10 11:25:33
|
Hi all,

I've been thinking about how to implement a pixel camera perceptor for Simspark using hardware-accelerated OpenGL render to texture via the Frame Buffer Object extension (see, e.g., [1]). I'm not proposing we should adopt this in the 3D league, but it will be handy for doing research.

One problem I see is that this is a sensor, but it also relies on rendering. In the multi-threading implementation, these two are separated as far as I remember, right? Maybe it is not a problem, since the rendering only needs to produce the image, which can be copied in the thread for the perceptors, but I'm not sure about this, so I thought I'd ask for ideas here.

Another thing I've been wondering about is whether it would be worth it to compress the images before sending them to the agent. Of course this will depend on the color depth and size of the image, as well as network delay, but encoding (and decoding on the agent side) would also take time. For small images and a fast local network without much delay, it might be overkill, I guess. Again, any ideas are welcome.

Cheers,
Joschka

[1] http://www.gamedev.net/reference/articles/article2331.asp |
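For reference, the approach in [1] boils down to attaching a texture as the color target of an offscreen framebuffer. A minimal sketch using the EXT_framebuffer_object entry points — this is not existing Simspark code, and all variable names are illustrative:

    // create the target texture for the camera image
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);

    // create the FBO and attach the texture as its color buffer
    GLuint fbo;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    // a depth renderbuffer is needed so occlusion works in the camera image
    GLuint depth;
    glGenRenderbuffersEXT(1, &depth);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                             width, height);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depth);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
        != GL_FRAMEBUFFER_COMPLETE_EXT)
    {
        // extension unsupported or misconfigured; fall back to normal path
    }

    // ... render the scene from the perceptor's viewpoint here; the pixels
    // can then be read back with glReadPixels() for the perceptor thread ...

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window

On the compression question, one option would be to run the raw RGB buffer through zlib before sending; whether the CPU time is worth it over a fast LAN is exactly the tradeoff raised above. A sketch (the zlib calls are real, the buffer names are made up):

    #include <vector>
    #include <zlib.h>

    // image holds width*height*3 bytes of RGB pixels read back from the FBO
    uLongf compressedSize = compressBound(imageSize);
    std::vector<Bytef> compressed(compressedSize);
    int rc = compress2(&compressed[0], &compressedSize,
                       image, imageSize, Z_BEST_SPEED);
    if (rc == Z_OK)
    {
        // send compressedSize bytes instead of imageSize
    }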
From: Yuan X. <xuy...@gm...> - 2008-04-10 13:07:42
|
Hi Joschka,

I think we could implement a 'pixel camera perceptor server' which receives scene messages from Simspark (just like the external monitor) and renders images for the connected robots. Then we can run a 'pixel camera perceptor server' on the machines that run the robots. In this case, the simulation runs on multiple machines. What do you think about it?

2008/4/10, Joschka Boedecker <jos...@am...>:
> Hi all,
>
> I've been thinking about how to implement a pixel camera perceptor for
> Simspark using hardware-accelerated OpenGL render to texture via the
> Frame Buffer Object extension (see, e.g., [1]). I'm not proposing we
> should adopt this in the 3D league, but it will be handy for doing
> research.
>
> One problem I see is that this is a sensor, but it also relies on
> rendering. In the multi-threading implementation, these two are
> separated as far as I remember, right? Maybe it is not a problem, since
> the rendering only needs to produce the image, which can be copied in
> the thread for the perceptors, but I'm not sure about this, so I
> thought I'd ask for ideas here.
>
> Another thing I've been wondering about is whether it would be worth it
> to compress the images before sending them to the agent. Of course this
> will depend on the color depth and size of the image, as well as
> network delay, but encoding (and decoding on the agent side) would also
> take time. For small images and a fast local network without much
> delay, it might be overkill, I guess. Again, any ideas are welcome.
>
> Cheers,
> Joschka
>
> [1] http://www.gamedev.net/reference/articles/article2331.asp

--
Best wishes!
Xu Yuan
School of Automation
Southeast University, Nanjing, China
mail: xuy...@gm...
      xy...@ya...
web:  http://xuyuan.cn.googlepages.com |
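To make the idea concrete, such a server's main loop might look roughly like this; everything here is hypothetical — the connection and rendering interfaces are made up for illustration and are not part of Simspark:

    // hypothetical main loop of a 'pixel camera perceptor server'
    while (serverRunning)
    {
        // receive the next scene update, just like the external monitor does
        std::string msg = monitorConnection.ReadSceneMessage();
        sceneImporter.ApplyUpdate(msg); // rebuild/patch the local scene graph

        // render one image per robot connected to this machine
        for (std::size_t i = 0; i < robots.size(); ++i)
        {
            BindCameraOf(robots[i]);      // camera pose from the robot's head
            RenderSceneOffscreen(pixels); // e.g. via an FBO, see [1] above
            robots[i].SendImage(pixels);  // reply on the robot's socket
        }
    }

The appeal of this design is that the expensive rendering moves off the simulation server and onto the machines that run the agents.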
From: Joschka B. <jos...@am...> - 2008-04-14 05:04:46
|
Hi Yuan,

On Apr 10, 2008, at 10:07 PM, Yuan Xu wrote:
>
> I think we could implement a 'pixel camera perceptor server' which
> receives scene messages from Simspark (just like the external
> monitor) and renders images for the connected robots.
> Then we can run a 'pixel camera perceptor server' on the machines
> that run the robots.
> In this case, the simulation runs on multiple machines.
> What do you think about it?

Yes, that's a nice idea :-) I think they are using a similar setup in the RoboCup Rescue virtual robots competition (using USARSim). But first I'll have to get started on the camera implementation anyway, which will probably take some time (I can't spare much time for this right now) ;-)

Cheers,
Joschka |
From: Sander v. D. <sgv...@gm...> - 2008-04-16 11:58:30
|
Hey,

Joschka, have you already started working on this? If not, I'd like to try and give it a go :-) As Markus mentioned, at least the first person view in the monitor shouldn't be too hard to do. I think there are three main ways to do this:

- Fix the single Camera in the scene to an agent's vision perceptor
- Add multiple Camera objects, one for each agent, and change the RenderServer so we can switch between cameras and select to render to texture
- Create a new extension of BaseRenderServer that can render from a VisionPerceptor's (or maybe any other object's in the scene graph) point of view and is specialized in rendering to texture.

The first would probably be the easiest. But I think the latter two would be more useful when we want to extend this to the pixel camera perceptor for the agents. However, the single camera can just be cycled through all agents to render the view of each agent. To me the last idea seems the most appealing, since the rendering of agent views is clearly separated from the normal view. What do you think?

Sander

On Mon, Apr 14, 2008 at 7:04 AM, Joschka Boedecker <jos...@am...> wrote:
> Hi Yuan,
>
> On Apr 10, 2008, at 10:07 PM, Yuan Xu wrote:
>
> > I think we could implement a 'pixel camera perceptor server' which
> > receives scene messages from Simspark (just like the external
> > monitor) and renders images for the connected robots.
> > Then we can run a 'pixel camera perceptor server' on the machines
> > that run the robots.
> > In this case, the simulation runs on multiple machines.
> > What do you think about it?
>
> Yes, that's a nice idea :-) I think they are using a similar setup in
> the RoboCup Rescue virtual robots competition (using USARSim). But
> first I'll have to get started on the camera implementation anyway,
> which will probably take some time (I can't spare much time for this
> right now) ;-)
>
> Cheers,
> Joschka |
From: Oliver O. <oli...@cs...> - 2008-04-16 12:12:28
|
Hey Sander,

On 16/04/2008, at 8:52 PM, Sander van Dijk wrote:
> Joschka, have you already started working on this? If not, I'd like
> to try and give it a go :-) As Markus mentioned, at least the first
> person view in the monitor shouldn't be too hard to do. I think there
> are three main ways to do this:

please go for it. I didn't give Joschka permission to work on the simulator this week ;-) -- he has to work on his publications. :-)

cheers
Oliver |
From: Markus R. <rol...@un...> - 2008-04-16 18:55:48
|
Hi,

Sander van Dijk wrote:
> - Create a new extension of BaseRenderServer that can render from a
> VisionPerceptor's (or maybe any other object's in the scene graph)
> point of view and is specialized in rendering to texture.
[...]
> To me the last idea seems the most appealing, since the rendering of
> agent views is clearly separated from the normal view. What do you
> think?
[...]

I think you're right that we need to distinguish between cameras with different purposes. But this is a second step. The first thing should be to support different cameras at all.

Making this distinction based on the 'VisionPerceptor' class requires us to put this class into one of the base libraries (oxygen, kerosin...) in order to be able to test for it (using the C++ RTTI you cannot test for a base class declared in a plugin, as it is not known at compile time). Currently the VisionPerceptor is soccer specific and should stay that way, imo. I think labeling cameras using the default node name (i.e. the rsg SetName function) is sufficient. The user can then either cycle through all cameras or choose an individual camera based on the node name or other context information.

If a camera should serve as a viewpoint for a perceptor node (e.g. the VisionPerceptor), it is the responsibility of this perceptor node to register/enable the camera for 'render to texture' and 'serialization' to the agent (directly or indirectly via a render proxy as proposed by Yuan). This could be done in its OnLink() method if it has a direct camera child. In this way we can avoid any hard/special dependencies between the base libraries and perceptor nodes.

cheers,
Markus |
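To make the proposed responsibility split concrete, here is a sketch of what such a perceptor's OnLink() could look like. OnLink() is the scene graph hook mentioned above; the other class and method names — in particular the render server registration call — are assumptions for illustration, not the actual Simspark API:

    // hypothetical perceptor node enabling its camera child on link
    void PixelCameraPerceptor::OnLink()
    {
        // look for a Camera node directly below this perceptor
        boost::shared_ptr<Camera> camera =
            FindChildSupportingClass<Camera>(false);

        if (camera.get() == 0)
        {
            // no camera child; nothing to register
            return;
        }

        // register the camera for 'render to texture' and serialization
        // to the agent (hypothetical render server interface)
        boost::shared_ptr<RenderServer> renderServer = GetRenderServer();
        if (renderServer.get() != 0)
        {
            renderServer->RegisterOffscreenCamera(camera);
        }
    }

Because the perceptor only talks to the render server through a generic registration call, neither oxygen nor kerosin needs to know about the perceptor class — which is exactly the decoupling described above.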
From: Markus R. <rol...@un...> - 2008-04-16 17:55:28
|
Hi,

Sander van Dijk wrote:
> I think there are three main ways to do this:
>
> - Fix the single Camera in the scene to an agent's vision perceptor
> - Add multiple Camera objects, one for each agent, and change the
> RenderServer so we can switch between cameras and select to render to
> texture
> - Create a new extension of BaseRenderServer that can render from a
> VisionPerceptor's (or maybe any other object's in the scene graph)
> point of view and is specialized in rendering to texture.
>
> The first would probably be the easiest. But I think the latter two
> would be more useful when we want to extend this to the pixel camera
> perceptor for the agents. However, the single camera can just be
> cycled through all agents to render the view of each agent. To me the
> last idea seems the most appealing, since the rendering of agent views
> is clearly separated from the normal view. What do you think?

I prefer your second proposal. The library already supports the necessary abstractions for multiple viewports, i.e. Camera objects and the concept of an active camera.

We should explicitly put a Camera object in the agent .rsg and allow the user to select and/or cycle between the available cameras in the scene.

Currently the topmost camera below usr/scene is implicitly the default camera (see kerosin/renderserver/renderserver.cpp, RenderServer::Render()). This camera is passed to RenderServer::BindCamera() to set up the necessary OpenGL transformations.

So implementing an explicit mechanism to select an active camera would be the first step. This allows changing the camera for the whole rendered window. Together with a key binding to cycle the view, this makes for a usable first step in the monitor.

The second step is to support multiple parallel viewports in the monitor. This is possible either with multiple passes in one OpenGL context (each time setting a different viewport size and position) or with the help of multiple OpenGL contexts (in different windows). I'd prefer the second version, as it allows for an easy modification later on to support 'render to texture'.

cheers,
Markus |
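A minimal sketch of what this first step could look like, building on the existing RenderServer::Render()/BindCamera() pair mentioned above; the member names and the CycleCamera() method are assumptions, not existing code:

    // sketch: explicit active camera selection in the render server
    class RenderServer : public BaseRenderServer
    {
    public:
        // bound to a key in the monitor; advances to the next camera
        void CycleCamera()
        {
            if (! mCameras.empty())
            {
                mActiveCamera = (mActiveCamera + 1) % mCameras.size();
            }
        }

        void Render()
        {
            if (mCameras.empty()) return;
            BindCamera(mCameras[mActiveCamera]); // sets up the GL transforms
            RenderScene();                       // existing scene pass
        }

    private:
        // all Camera nodes collected from the scene graph
        std::vector<boost::shared_ptr<Camera> > mCameras;
        std::size_t mActiveCamera;
    };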
From: Joschka B. <jos...@am...> - 2008-04-17 06:58:10
|
Hi,

On Apr 17, 2008, at 2:54 AM, Markus Rollmann wrote:
> Hi,
>
> Sander van Dijk wrote:
>> I think there are three main ways to do this:
>>
>> - Fix the single Camera in the scene to an agent's vision perceptor
>> - Add multiple Camera objects, one for each agent, and change the
>> RenderServer so we can switch between cameras and select to render to
>> texture
>> - Create a new extension of BaseRenderServer that can render from a
>> VisionPerceptor's (or maybe any other object's in the scene graph)
>> point of view and is specialized in rendering to texture.
>>
>> The first would probably be the easiest. But I think the latter two
>> would be more useful when we want to extend this to the pixel camera
>> perceptor for the agents. However, the single camera can just be
>> cycled through all agents to render the view of each agent. To me the
>> last idea seems the most appealing, since the rendering of agent
>> views is clearly separated from the normal view. What do you think?
>
> I prefer your second proposal. The library already supports the
> necessary abstractions for multiple viewports, i.e. Camera objects
> and the concept of an active camera.
>
> We should explicitly put a Camera object in the agent .rsg and allow
> the user to select and/or cycle between the available cameras in the
> scene.

I think I also prefer the second proposal (that was what I had in mind), but maybe it's because I don't understand the advantages of the third possibility yet :-/

> Currently the topmost camera below usr/scene is implicitly the default
> camera (see kerosin/renderserver/renderserver.cpp,
> RenderServer::Render()). This camera is passed to
> RenderServer::BindCamera() to set up the necessary OpenGL
> transformations.
>
> So implementing an explicit mechanism to select an active camera would
> be the first step. This allows changing the camera for the whole
> rendered window. Together with a key binding to cycle the view, this
> makes for a usable first step in the monitor.

Right.

> The second step is to support multiple parallel viewports in the
> monitor. This is possible either with multiple passes in one OpenGL
> context (each time setting a different viewport size and position) or
> with the help of multiple OpenGL contexts (in different windows).
>
> I'd prefer the second version, as it allows for an easy modification
> later on to support 'render to texture'.

Once render to texture is implemented, you don't need multiple viewports in the monitor, since you can simply display the textures on quads in front of the camera (kind of like a 2D overlay). So maybe we could even skip the second step and move directly to rendering to texture? It doesn't look so difficult, and you can get some sample code, e.g., at [1] (code for chapter 18 or 19). The code uses the framebuffer object (FBO) extension to render to texture, which seems to be the preferred way right now (you can render directly to a texture without having to copy any pixels from offscreen buffers into an OpenGL texture). There are some subtleties, however, for instance whether to use several FBOs or just one with several textures attached, switching between them (better performance, I believe). We have a new graphics programming guru (a former PS3 developer at Sony) in the lab now; I'll ask him for directions.

@Sander: thanks for taking over this task :-) I can't ignore Oliver's instructions about not working on it since he's currently sitting next to me here in Osaka ;-)

Cheers,
Joschka

[1] http://www.opengl.org/sdk/docs/books/SuperBible/ |
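On the one-FBO-with-several-textures variant mentioned above: with EXT_framebuffer_object you can attach each agent's texture to a different color attachment point once, and then switch between them with glDrawBuffer(), which avoids re-attaching textures every frame. A sketch — the agent arrays are made up, the GL calls are real:

    // attach one texture per agent, up to GL_MAX_COLOR_ATTACHMENTS_EXT
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    for (int i = 0; i < numAgents; ++i)
    {
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                                  GL_COLOR_ATTACHMENT0_EXT + i,
                                  GL_TEXTURE_2D, agentTexture[i], 0);
    }

    // per frame: select the target texture instead of re-attaching it
    for (int i = 0; i < numAgents; ++i)
    {
        glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT + i);
        BindCamera(agentCamera[i]); // as in RenderServer::Render()
        RenderScene();              // draws into agentTexture[i]
    }
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

Note that GL_MAX_COLOR_ATTACHMENTS_EXT is typically small (4 to 8), so with many agents you would still have to cycle through a pool of FBOs.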