From: Joschka B. <jos...@am...> - 2008-04-17 06:58:10
Hi,

On Apr 17, 2008, at 2:54 AM, Markus Rollmann wrote:

> Hi,
>
> Sander van Dijk wrote:
>> I think there are three main ways to do this:
>>
>> - Fix the single Camera in the scene to an agent's vision perceptor
>> - Add multiple Camera objects, one for each agent, and change the
>>   RenderServer so we can switch between cameras and select to render
>>   to texture
>> - Create a new extension of BaseRenderServer that can render from the
>>   point of view of a VisionPerceptor (or maybe any other object in
>>   the scene graph) and is specialized in rendering to texture.
>>
>> The first would probably be the easiest. But I think the latter two
>> would be more useful when we want to extend this to the pixel camera
>> perceptor for the agents. However, the single camera can just be
>> cycled through all agents to render the view of each agent. To me the
>> last idea seems the most appealing, since the rendering of agent
>> views is clearly separated from the normal view. What do you think?
>
> I prefer your second proposal. The library already supports the
> necessary abstractions for multiple viewports, i.e. Camera objects and
> the concept of an active camera.
>
> We should explicitly put a Camera object in the agent .rsg and allow
> the user to select and/or cycle between the available cameras in the
> scene.

I think I also prefer the second proposal (that was what I had in mind),
but maybe that's because I don't understand the advantages of the third
possibility yet :-/

> Currently the topmost camera below usr/scene is implicitly the default
> camera (see kerosin/renderserver/renderserver.cpp,
> RenderServer::Render()). This camera is passed to
> RenderServer::BindCamera() to set up the necessary OpenGL
> transformations.
>
> So implementing an explicit mechanism to select an active camera would
> be the first step. This allows changing the camera for the whole
> rendered window. Together with a key binding to cycle the view, this
> makes for a usable first step in the monitor.

Right.

> The second step is to support multiple parallel viewports in the
> monitor. This is possible either with multiple passes in one OpenGL
> context (setting a different viewport size and position each time) or
> with the help of multiple OpenGL contexts (in different windows).
>
> I'd prefer the second version, as it allows for an easy modification
> later on to support 'render to texture'.

Once render to texture is implemented, you don't need multiple viewports
in the monitor, since you can simply display the textures on quads in
front of the camera (kind of like a 2D overlay). So maybe we could even
skip the second step and move directly to rendering to texture? It
doesn't look too difficult, and you can find sample code e.g. at [1]
(the code for chapter 18 or 19). That code uses the framebuffer object
(FBO) extension to render to texture, which seems to be the preferred
way right now: you can render directly into a texture without having to
copy any pixels from offscreen buffers into an OpenGL texture. There are
some subtleties, however, for instance whether to use several FBOs or
just one with several textures attached, switching between them (better
performance, I believe). We have a new graphics programming guru (a
former PS3 developer at Sony) in the lab now; I'll ask him for
directions.
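To make the FBO part a bit more concrete, here is a rough, untested
sketch of the setup from memory. The function name, the 256x256 view
size, and the use of GLEW to load the EXT entry points are just my
assumptions, so please double-check it against the SuperBible code
before relying on it:

    #include <GL/glew.h>  // assumption: GLEW provides the EXT entry points

    enum { AGENT_VIEW_W = 256, AGENT_VIEW_H = 256 };

    // Create an FBO with a color texture and a depth renderbuffer.
    // Returns the FBO id and stores the texture id in *outColorTex;
    // returns 0 if the framebuffer is incomplete.
    static GLuint CreateAgentViewFBO(GLuint* outColorTex)
    {
        GLuint fbo, colorTex, depthRb;

        // color texture that will receive the rendered agent view
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, AGENT_VIEW_W,
                     AGENT_VIEW_H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        // depth renderbuffer so the offscreen pass is depth tested
        glGenRenderbuffersEXT(1, &depthRb);
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
        glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                                 AGENT_VIEW_W, AGENT_VIEW_H);

        // tie both to a framebuffer object
        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                                  GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, colorTex, 0);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
                                     GL_DEPTH_ATTACHMENT_EXT,
                                     GL_RENDERBUFFER_EXT, depthRb);

        if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
            != GL_FRAMEBUFFER_COMPLETE_EXT)
        {
            // incomplete framebuffer: unbind and bail out
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
            return 0;
        }

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        *outColorTex = colorTex;
        return fbo;
    }

Per frame you'd then bind the FBO, set glViewport(0, 0, 256, 256),
render the scene from the agent's camera (BindCamera() with that
camera), bind framebuffer 0 again, and draw the texture on a quad in
front of the monitor camera. With a single FBO, switching to another
agent's texture should just be another glFramebufferTexture2DEXT() call,
which is the one-FBO variant I mentioned above.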
@Sander: thanks for taking over this task :-) I can't ignore Oliver's
instructions not to work on it, since he's currently sitting next to me
here in Osaka ;-)

Cheers,
Joschka

[1] http://www.opengl.org/sdk/docs/books/SuperBible/