From: Tim M. <ti...@re...> - 2008-06-26 10:52:50
|
Hi,

I'd like your comments on some changes to views and OSG cameras in FlightGear before I start working on them. In particular, my understanding of how FGViewer works might not be complete.

What we have now

FGViewer calculates the parameters of a view. This includes camera-independent parameters such as the view origin and direction. These parameters are calculated differently based on, among other things, whether the viewpoint is tied to a model and whether the view direction is fixed, relative to a model, or always tracking another model. FGViewer filters and dampens some of these parameters. FGViewer also controls some basic camera parameters such as the field of view (fov) and a modification of the camera aspect ratio called the "aspect ratio multiplier."

There is only one active FGViewer object at a time. This is managed by FGViewMgr and is accessed using get_current_view(). The origin of the current view is used by the tile manager to make sure that visible scenery is loaded. Each frame, code in Main/renderer.cxx uses the current view to update the Open Scene Graph cameras.

The arrangement of the OSG cameras is dictated by the osgViewer::Viewer class that we use to render the view of the scene. There is one master camera with several attached slave cameras. The master camera sets the principal view (position, orientation) and projection (the frustum) parameters. The slaves can either specify additional transforms to those parameters, in both model-view and projection space, or they can be absolute in their own right. Currently the "out the window" views are rendered in relative slave cameras, while the GUI and HUD are drawn in an absolute slave.

Slave cameras are created using properties in preferences.xml. One slave is always created that is aligned with the viewing parameters of the master camera. Others can be opened in different graphics windows, possibly on other displays and screens. A slave camera is presently created in its own window.
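[Schematically -- this is an editor's summary of the relative/absolute slave distinction described above, not code from the tree -- a relative slave's matrices are the master's composed with per-slave offset matrices, while an absolute slave supplies its own:]

```latex
V_{\mathrm{slave}} = V_{\mathrm{master}}\, O_{\mathrm{view}}, \qquad
P_{\mathrm{slave}} = P_{\mathrm{master}}\, O_{\mathrm{proj}}
```

[Here $O_{\mathrm{view}}$ and $O_{\mathrm{proj}}$ are the slave's model-view and projection offsets; for an absolute slave (GUI, HUD) both $V$ and $P$ are set independently of the master.]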
Parameters for the slave are currently pretty limited; they include the dimensions and position of the window, "shear" values in projection space, and "heading-deg," a heading offset that I suspect was added specifically for LinuxTag :) The shear-x and shear-y values are really only useful for setting up a "video wall" type display where monitors arranged around the "master view" show a view in an offset frustum with the same aspect ratio and fov as the master.

Problems With the Current Approach

Many features are currently not possible using only a single running instance of FlightGear. There can't be more than one view at a time. It would be nice to keep the principal "out the window" view around -- in order to fly the aircraft -- while having inset model views, tower views, missile-cam views, an A340 tail-strike view, etc.

Our OSG camera creation procedure is completely insufficient for many things that people want to do with FlightGear. The requirement that slave cameras be opened in different graphics windows is a poor match for the most common multi-head graphics hardware. Most people use a setup that drives several monitors with one graphics card, such as the Nvidia TwinView or Matrox 2Go products. These configurations work best with a single graphics window that spans all the monitors; the graphics context switches needed to render to different windows on the same graphics card are expensive. The camera parameters we support are not sufficient to specify monitors arranged around a cockpit for a real out-the-window view, to say nothing of views projected onto a screen or dome. Furthermore, for those configurations the FGViewer should never be able to change the field of view or other camera parameters.

Proposal

Define a CameraGroup object that is the bridge between an FGViewer and the OSG cameras that render the view. An FGViewer points to one CameraGroup, and only one active view can drive a CameraGroup at a time.
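[To make the proposed interface concrete: every class, method, and member name below is hypothetical -- an editor's sketch of the proposal, not FlightGear code. The key idea it illustrates is that a subclass representing calibrated hardware can refuse parameter changes coming from FGViewer.]

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Stand-in for osg::Camera; in FlightGear this would be an
// osg::ref_ptr<osg::Camera> attached to a graphics window.
struct Camera {
    double fov_deg;
    Camera() : fov_deg(55.0) {}
};

// Hypothetical CameraGroup: owns the slave cameras for one view.
class CameraGroup {
public:
    explicit CameraGroup(const std::string& name) : name_(name) {}
    virtual ~CameraGroup() {}

    const std::string& name() const { return name_; }
    void addCamera(const Camera& c) { cameras_.push_back(c); }
    std::size_t numCameras() const { return cameras_.size(); }

    // FGViewer asks the group to change the field of view; returns
    // whether the request was honored.
    virtual bool setFov(double fov_deg) {
        for (std::size_t i = 0; i < cameras_.size(); ++i)
            cameras_[i].fov_deg = fov_deg;
        return true;
    }

protected:
    std::string name_;
    std::vector<Camera> cameras_;
};

// A group whose frusta are fixed by the physical monitor or projector
// layout: it simply ignores fov-change requests, as the proposal allows.
class FixedLayoutCameraGroup : public CameraGroup {
public:
    explicit FixedLayoutCameraGroup(const std::string& name)
        : CameraGroup(name) {}
    virtual bool setFov(double) { return false; } // request refused
};
```

[Usage would mirror the proposal: an FGViewer holds a pointer to one such group, e.g. the "default-camera-group", and calls setFov() and friends without knowing whether the group will comply.]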
The CameraGroup manipulates osg::Camera objects as necessary. Subclasses of CameraGroup might not respond to FGViewer requests to change camera parameters.

Extend the camera creation options in preferences.xml to specify named CameraGroup objects. Allow the specification of graphics windows to which slave cameras in CameraGroup objects are assigned. Allow the full specification of viewing parameters -- position, orientation -- either as relative to a master camera or independent. Allow the camera parameters to be specified relative to the master, as they are now, or independently. The camera parameters can be specified using the Clotho / glFrustum scheme (top, bottom, left, right) or a syntax used by ProjectionDesigner (http://orihalcon.jp/projdesigner/) that uses field of view, aspect ratio, and offset. A full 4x4 matrix can also be specified.

Camera groups can be created and destroyed on the fly; the CameraGroup will create OSG cameras as necessary and attach them to the proper graphics window.

A camera group named "default-camera-group" will be used by FGViewer objects by default. This group will be created based on the command-line arguments if it isn't specified in preferences.xml.

FGViewer objects can either use named camera groups or can create new ones on the fly. I don't know if the creation of new graphics windows on the fly will be supported.

Eliminate get_current_view(). There will be a list of active views. Try to eliminate code that depends on the current view. There still needs to be a "current location" for the terrain pager, but more on that later.

This proposal is a little vague; the specifics need to be worked out when the CameraGroup is implemented and FGViewer is changed to use it.

Future Possibilities

The cameras in a camera group don't need to render directly to the screen.
They can render to a texture which can be used either in the scene, as in a video screen in the instrument panel, or for distortion correction in a projected or dome environment.

Open Scene Graph supports a CompositeViewer object that supports rendering from several widely separated viewpoints, complete with support for multiple terrain pager threads. We could move to CompositeViewer and support simultaneous views from, e.g., the tower, AI models, drones, etc. |
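[To illustrate the preferences.xml extension sketched in the proposal, a camera-group specification might look like the fragment below. Every element name is hypothetical -- only "default-camera-group", "heading-deg", and the two frustum syntaxes come from the proposal itself; the rest is an editor's guess at how the options could be spelled.]

```xml
<camera-group>
  <name>default-camera-group</name>
  <!-- one window spanning a TwinView/2Go multi-monitor desktop -->
  <window>
    <display>0</display>
    <width>3840</width>
    <height>1024</height>
  </window>
  <camera>
    <relative type="bool">true</relative>
    <heading-deg>-60</heading-deg>
    <!-- Clotho / glFrustum scheme: clip planes at the near distance -->
    <frustum>
      <left>-0.4</left>
      <right>0.2</right>
      <bottom>-0.3</bottom>
      <top>0.3</top>
    </frustum>
  </camera>
  <camera>
    <relative type="bool">true</relative>
    <heading-deg>0</heading-deg>
    <!-- ProjectionDesigner-style alternative -->
    <perspective>
      <fov-deg>55</fov-deg>
      <aspect-ratio>1.25</aspect-ratio>
      <offset-x>0.0</offset-x>
      <offset-y>0.0</offset-y>
    </perspective>
  </camera>
</camera-group>
```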
From: Vivian M. <viv...@li...> - 2008-06-26 13:23:28
|
Tim Moore wrote
> Sent: 26 June 2008 11:53
> To: FlightGear developers discussions
> Subject: [Flightgear-devel] RFC: changes to views and cameras
>
> Hi, I'd like your comments on some changes to views and OSG cameras in
> FlightGear before I start working on them. In particular, my
> understanding of how FGViewer works might not be complete.
>
> [... full proposal snipped; see above ...]

It all looks well thought through and comprehensive ... BUT what's the likely/possible impact on framerate? There's little enough to spare on all but the most modern and capable machines already.

Of course if there is significant negative impact on the frame rate that would be Comprehensive Rational And Purposeful ...

Vivian |
From: Tim M. <ti...@re...> - 2008-06-26 15:38:26
|
On Thu, 26 Jun 2008 14:23:23 +0100 "Vivian Meazza" <viv...@li...> wrote:
> It all looks well thought through and comprehensive ... BUT what's
> the likely/possible impact on framerate? There's little enough to
> spare on all but the most modern and capable machines already.
>
> Of course if there is significant negative impact on the frame rate
> that would be Comprehensive Rational And Purposeful ...

For the configurations supported today, there will be no change in frame rate. Some multihead configurations, i.e., multiple monitors on one card, will be faster. If you load up your display with different views, the frame rate will probably be lower :)

Tim |
From: Heiko S. <aei...@ya...> - 2008-06-26 16:59:49
|
Hi,

At least from my point of view it sounds good and reasonable. These are features that are really missing, and they will not only keep FlightGear up to date against the commercial sims - they will make it better.

> This proposal is a little vague; the specifics need to be worked out
> when the CameraGroup is implemented and FGViewer is changed to use it.

But I wonder how long you need for all this, and what happens to all the other things you announced? Shadows, shader library... I know you are the only one at the moment, beside Till Busch and Stuart(?), who is working on the OSG part of FlightGear. Don't understand me wrong: these changes you want to do are important, so we can be very lucky having someone like you working on that.

> Future Possibilities.
>
> The cameras in a camera group don't need to render directly to the
> screen. They can render to a texture which can be used either in the
> scene, like in a video screen in the instrument panel, or for distortion
> correction in a projected or dome environment.
>
> Open Scene Graph supports a CompositeViewer object that supports
> rendering from several widely separated viewpoints, complete with
> support for multiple terrain pager threads. We could move to
> CompositeViewer and support simultaneous views from e.g., the tower, AI
> models, drones, etc.

Yes, those are very important things. So from my point of view, as mostly a user and 3D modeller, there is nothing against it as long as the performance doesn't decrease too much.

Regards
HHS |
From: Tim M. <ti...@re...> - 2008-06-26 19:48:29
|
On Thu, 26 Jun 2008 16:23:03 +0000 (GMT) Heiko Schulz <aei...@ya...> wrote:
> But I wonder how long you need for all this, and what happens to all
> the other things you announced? Shadows, shader library...

True enough, but I have a specific short-term need for the camera work. My work on the shader support for shadows is proceeding, albeit slowly. If anyone else feels like they want to dive into it, I'm happy to shift gears into mentor mode on that.

Tim |
From: Mathias F. <Mat...@gm...> - 2008-06-28 07:05:05
|
Hi Tim,

On Thursday 26 June 2008, Tim Moore wrote:
> Problems With the Current Approach
>
> [... single-window slave cameras vs. multi-head hardware snipped ...]

This is true in general. Mostly I agree. But I would like to still be able to use displays and screens for some parts of the viewer. So while this would be very good to have, and probably better for the end performance you are heading for, we should have it as an addition to the way we can now redirect views to different displays and screens.

Just think of a 2-GPU machine. You get the best performance with two screens, each on one GPU. Then have exactly one graphics context per GPU. When you have two monitors on each GPU, subdivide that single graphics context among two cameras, like you are probably heading to ...

> Proposal
>
> [... CameraGroup definition and preferences.xml extensions snipped ...]
> The camera parameters can be specified using the Clotho / glFrustum
> scheme (top, bottom, left, right) or a syntax used by ProjectionDesigner
> (http://orihalcon.jp/projdesigner/) that uses field of view, aspect
> ratio, and offset. A full 4x4 matrix can also be specified.

Ok, in principle yes. I do not know ProjectionDesigner. But note that you have to be careful with OSG. You can only have a sheared frustum in OSG as a perspective projection matrix. If you specify arbitrary projection matrices, OSG bails out when culling ...

> Eliminate get_current_view(). There will be a list of active views.
> Try to eliminate code that depends on the current view. There still
> needs to be a "current location" for the terrain pager, but more on
> that later.
>
> This proposal is a little vague; the specifics need to be worked out
> when the CameraGroup is implemented and FGViewer is changed to use it.

Sounds good in general. What we just need is the ability to still redirect some windows to another display/screen.

What would also be good is to be able to specify a completely different scenegraph in some subcameras. I think of having panel-like instruments on an additional screen/display, for example.

> Future Possibilities.
>
> The cameras in a camera group don't need to render directly to the
> screen. They can render to a texture which can be used either in the
> scene, like in a video screen in the instrument panel, or for distortion
> correction in a projected or dome environment.

Well, I have an animation that I call rendertexture, where you can replace a texture on a subobject with such an RTT camera. Then specify a usual scenegraph to render to that texture, and voila. I believe that I could finish that in a few days - depending on the weather here :) The idea is to make MFD instruments with usual scenegraphs and pin them on an object ...

> Open Scene Graph supports a CompositeViewer object that supports
> rendering from several widely separated viewpoints, complete with
> support for multiple terrain pager threads. We could move to
> CompositeViewer and support simultaneous views from e.g., the tower, AI
> models, drones, etc.

Good thing to have!!! Just still support graphics contexts on different screens/displays too ...

Greetings and thanks!!!

Mathias |
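[The sheared-frustum warning above can be made concrete. OSG's cull traversal assumes it can read the clip planes back out of the projection matrix, i.e. that the matrix has the standard glFrustum form -- reproduced here from the OpenGL specification, not from the FlightGear tree:]

```latex
P =
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
```

[A general 4x4 matrix does not factor into this shape, which is why handing OSG an arbitrary projection matrix makes culling fail.]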
From: Tim M. <ti...@re...> - 2008-06-28 12:20:19
|
On Sat, 28 Jun 2008 09:04:59 +0200 Mathias Fröhlich <Mat...@gm...> wrote:
> But I would like to still be able to use displays and screens for some
> parts of the viewer. [...]
>
> Just think of a 2-GPU machine. You get the best performance with two
> screens, each on one GPU. Then have exactly one graphics context per
> GPU. When you have two monitors on each GPU, subdivide that single
> graphics context among two cameras, like you are probably heading
> to ...

I received a private comment about this point too. Mapping cameras to different windows, which can be opened on arbitrary screens, will absolutely still be supported. I know that multi-GPU setups are important for professional users and our demos. The setup you just described would be an important test case for my proposed system: 4 monitors attached to two GPUs. As I understand it, you will get the best performance with each GPU treated as a large virtual screen, so here you want two graphics windows -- one opened on each display (GPU) -- with two cameras mapped to each window.

> Ok, in principle yes. I do not know ProjectionDesigner. But note that
> you have to be careful with OSG. You can only have a sheared frustum
> in OSG as a perspective projection matrix. If you specify arbitrary
> projection matrices, OSG bails out when culling ...

Interesting to know. OK, perhaps no matrix syntax.

> What would also be good is to be able to specify a completely different
> scenegraph in some subcameras. I think of having panel-like instruments
> on an additional screen/display, for example.

Yup. We can think about how to specify that.

> Well, I have an animation that I call rendertexture, where you can
> replace a texture on a subobject with such an RTT camera. Then specify
> a usual scenegraph to render to that texture, and voila. [...] The idea
> is to make MFD instruments with usual scenegraphs and pin them on an
> object ...

It sounds like the arrangement I described could be easily hooked up to your animation.

> > Open Scene Graph supports a CompositeViewer object that supports
> > rendering from several widely separated viewpoints [...]
> Good thing to have!!! Just still support graphics contexts on
> different screens/displays too ...

You bet.

Tim |
From: Mathias F. <Mat...@gm...> - 2008-07-01 05:46:20
|
Tim,

On Saturday 28 June 2008, Tim Moore wrote:
> The setup you just described would be an important test case for my
> proposed system: 4 monitors attached to two GPUs. As I understand it,
> you will get the best performance with each GPU treated as a large
> virtual screen, so here you want two graphics windows -- one opened on
> each display (GPU) -- with two cameras mapped to each window.

Yep. The reason is more or less clear: imagine one GPU. You can feed that single GPU with exactly one command stream. If two applications work on the same GPU, they must be serialized by locks in the driver/kernel anyway. If two applications/threads race for that GPU, you will observe that the kernel decides at some random points when to reschedule. When the other application starts running, the GPU's state must be set up for the application in question. That is in effect a graphics context switch, which is one of the more expensive operations for a GPU.

In contrast to that, if you have one single window per GPU, OSG will coordinate when to switch graphics contexts. You will change that context n times for n windows, without racing for the same GPU from within the same application.

Ok, if you have more than one GPU, you have independent command streams into the GPUs that can be used in parallel without context changes. For those it is best if you feed them in parallel, so one thread per GPU will be best ...

So far the theory. Robert told me that this is what is observed with real setups. Matthias Börner has told me that they measured Nvidia OpenGL performance for a customer and that they found the best performance in a slightly different test scenario with that multi-screen setup. So, in the end, what is best will be somewhat driver and maybe version dependent. That is, the configuration mechanism should be flexible enough to support both variants ...

> Interesting to know. OK, perhaps no matrix syntax.

Well, there is the osg::Matrix::getFrustum() method. This one returns the clipping planes for the cull visitor. Make sure this method does not give an error. Or extend the implementation of that method (if it is sensibly extensible) and feed that to Robert.

> It sounds like the arrangement I described could be easily hooked up to
> your animation.

Hmm, I am not sure if we need some application code in the animation code. I believe that we need to distinguish between different render-to-texture cameras: the ones that will end up in MFD displays or a HUD or whatever is pinned into models, and the ones that are real application windows like what you describe - an additional fly-by view, and so on. And I believe that we should keep those separate and not mix the code required for application-level stuff with the building of 3D models, which does not need any application-level code to animate the models ...

I think some kind of separation like that will also be good if we do HLA between a viewer and an application computing physical models, or controlling an additional view hooking into a federate ...

Looking forward to that stuff!

Greetings

Mathias |
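[The getFrustum() check mentioned above amounts to factoring the projection matrix back into clip planes. The sketch below mirrors that logic in plain, self-contained C++ -- OSG itself would use osg::Matrixd::getFrustum() -- building a glFrustum-style matrix and recovering the planes from it, and refusing matrices that are not of that perspective form (the case where OSG's culling bails out).]

```cpp
#include <cassert>
#include <cmath>

struct Frustum { double l, r, b, t, n, f; };

// Column-major 4x4 as produced by glFrustum(l, r, b, t, n, f).
void makeFrustum(const Frustum& fr, double m[16]) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0;
    m[0]  = 2.0 * fr.n / (fr.r - fr.l);          // x scale
    m[5]  = 2.0 * fr.n / (fr.t - fr.b);          // y scale
    m[8]  = (fr.r + fr.l) / (fr.r - fr.l);       // x shear ("offset")
    m[9]  = (fr.t + fr.b) / (fr.t - fr.b);       // y shear
    m[10] = -(fr.f + fr.n) / (fr.f - fr.n);
    m[11] = -1.0;
    m[14] = -2.0 * fr.f * fr.n / (fr.f - fr.n);
}

// Recover the clip planes; returns false when the matrix is not a
// perspective frustum of this form, which is what a getFrustum-style
// check must detect before the cull visitor can use the matrix.
bool getFrustum(const double m[16], Frustum& fr) {
    if (m[11] != -1.0 || m[15] != 0.0) return false;
    fr.n = m[14] / (m[10] - 1.0);
    fr.f = m[14] / (m[10] + 1.0);
    fr.l = fr.n * (m[8] - 1.0) / m[0];
    fr.r = fr.n * (m[8] + 1.0) / m[0];
    fr.b = fr.n * (m[9] - 1.0) / m[5];
    fr.t = fr.n * (m[9] + 1.0) / m[5];
    return true;
}
```

[A round trip through makeFrustum/getFrustum reproduces the input planes; a matrix with a nonzero bottom-row x/y term, as produced by an arbitrary 4x4 specification, fails the check.]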