From: Steve B. <sjb...@ai...> - 2004-02-21 00:33:47
Richard Rauch wrote:

>> b) Within glutDisplayFunc itself?
>>
>> If so, then it absolutely is necessary because you've said that
>> off-screen rendering areas are treated just like regular windows - so
>> they have to have the same behaviour, or all the messy little
>> exceptions like this one will confuse the application writers no end.
>> If you are going to pretend that these rendering areas are actually
>> windows then they'd better behave EXACTLY like them.
>
> The point is, this is wrong for all windows. Including onscreen.
> At least, I believe this to be so.

I absolutely agree - but we have to be compatible with GLUT - so we are
stuck with it the way it is.

> Onscreen windows, at least under X, are not created synchronously.
> To my understanding, they cannot be, as X is a network protocol.

Yep. And offscreen buffers absolutely must be created when the
application demands them. This ends the debate as far as I'm concerned.

> Suppose as part of a game (games seem to be frequently cited) there
> is a room with a 4x4 grid of video monitors. The monitors have images
> from other places in the game. By design of the game, we know in
> advance that at most 2 monitors will have activity at any given time,
> but when activity comes, it tends to come in high-frequency bursts.
> Resolving the activity is easy, but rendering the scene to be put on
> the monitor as a texture is potentially expensive. Looking at our
> target hardware, and the kinds of things that we want to support, we
> juggle complexities up to the point that we can generate 2 video
> monitor images per game display frame without adversely impacting the
> frame rate when you're in the video monitor room.
>
> Rendering all 16 monitor textures is wrong. Rendering the monitor
> textures as soon as there is any changed data is wrong. Both could
> blow the "budget" for rendering the video frames.
>
> An easy and natural choice, with the current API, is to make the
> monitors into offscreen windows and use glutPostRedisplay() to
> throttle their rates back, and at the same time to select just the
> right monitor.

Speaking as someone who has quite literally done this exact thing (look
here: http://plib.sourceforge.net/gallery/evilo_thumbnail.png ), I can
tell you that you wouldn't leave it up to GLUT to do this for you,
because you have little or no control over the order in which windows
get rendered. Hence, it would be very difficult to schedule that
rendering. Instead, what would happen is this (sketched in code below):

 * The main window redraw event happens.
 * We decide (based on whatever timing criteria) which (if any) monitor
   windows we have time to render this frame - and which ones have been
   least frequently rendered.
 * We switch OpenGL's rendering context to the pbuffer - we render a
   monitor view, and we copy that into a texture map (or we render
   directly into the texture map if the hardware supports it).
 * We switch back to the main window.
 * We render the scene using appropriate textures for the monitor
   screens.
 * We glutPostRedisplay().
 * We return control to GLUT to let it handle events and hopefully call
   the redraw event again ASAP.

Having two or more GLUT windows open at the same time is just asking
for trouble, because you have NO control over the rendering timing. As
I have said (FREQUENTLY), pbuffers are NOT LIKE WINDOWS - you don't use
them like that.
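For concreteness, here's a minimal sketch of that loop in C. It assumes
the offscreen areas were created through some freeglut call (the name
doesn't matter for the sketch), that mainWin, monitorWin[] and the
256x256 monitorTex[] textures were all set up at init time (the
textures via glTexImage2D), and that the two extern functions are the
game's own code - the scheduling policy and names are mine, not
anyone's real API. Everything else is stock GLUT/OpenGL 1.1:

  #include <GL/glut.h>

  #define NUM_MONITORS       16
  #define MONITORS_PER_FRAME  2

  static int    mainWin ;                      /* onscreen window     */
  static int    monitorWin [ NUM_MONITORS ] ;  /* offscreen 'windows' */
  static GLuint monitorTex [ NUM_MONITORS ] ;
  static int    lastDrawn  [ NUM_MONITORS ] ;  /* frame last rendered */
  static int    frameNo = 0 ;

  /* The game's own code - policy and scene rendering. */
  extern int  pickStalestActiveMonitor ( int lastDrawn[], int frameNo ) ;
  extern void drawMonitorView ( int m ) ;
  extern void drawRoom ( void ) ;

  static void mainDisplay ( void )
  {
    int i ;

    frameNo++ ;

    for ( i = 0 ; i < MONITORS_PER_FRAME ; i++ )
    {
      int m = pickStalestActiveMonitor ( lastDrawn, frameNo ) ;

      glutSetWindow ( monitorWin [ m ] ) ;    /* context -> pbuffer   */
      drawMonitorView ( m ) ;                 /* render its scene     */
      glBindTexture ( GL_TEXTURE_2D, monitorTex [ m ] ) ;
      glCopyTexSubImage2D ( GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256 ) ;
      lastDrawn [ m ] = frameNo ;
    }

    glutSetWindow ( mainWin ) ;       /* back to the onscreen window  */
    drawRoom () ;    /* textures each screen with its monitorTex [m]  */
    glutSwapBuffers () ;
    glutPostRedisplay () ;            /* ...and go round again ASAP.  */
  }

The point of the sketch is that the scheduling decision, the pbuffer
rendering and the main render all happen inside ONE display callback -
GLUT never gets a chance to reorder them.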
> If you are *really* picky, you might care about
> the redrawing order of the textures and the monitors to which
> they are applied, though in practice that probably won't matter to
> anyone, ever.

You couldn't be more wrong. Latency between the monitors and the main
view has to be tightly controlled - if you let them get too far out of
sync with each other, or with the main view, then the lag between them
gets very noticeable.

Another place I use pbuffers is in rendering an infra-red camera view
in an F16 flight simulator. The camera's image is projected into the
heads-up display (HUD) and lines up with the real world. If the pbuffer
isn't rendered in sync with the view out of the cockpit window, then
everything you see in the HUD is double-imaged.

Yet another example is for things like simulating heat haze, where you
render a small part of the scene into a pbuffer, then render that as a
texture over the top of the corresponding part of the 'real' image -
with the polygons holding the pbuffer/texture wobbling slightly. If the
two are not rendered precisely in sync, the result looks ridiculous.

I can come up with dozens of practical examples. In every useful case I
can come up with, the pbuffer will be rendered at precisely the same
time as the main window...and that means rendering it inside the main
window's render callback. If we let freeglut do it by issuing a render
callback on the pbuffer window, then it will either be one frame ahead
of the main window or one frame behind it - and because there are no
guarantees about window rendering order, you'll have no way to control
that. No - in reality, you'll want to render to the pbuffer when you
want to do it - and not when GLUT feels like making the callback.
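To make the heat-haze example concrete, here's a rough sketch under the
same assumptions as before - a hypothetical offscreen window hazeWin, a
256x256 hazeTex already allocated with glTexImage2D, placeholder screen
coordinates, and the game's own drawing code as extern functions. The
wobble is just a time-varying texture-coordinate offset:

  #include <GL/glut.h>
  #include <math.h>

  static int    mainWin, hazeWin ;    /* hazeWin: offscreen 'window'  */
  static GLuint hazeTex ;             /* allocated with glTexImage2D  */
  static float  x0p, y0p, x1p, y1p ;  /* screen rectangle of the haze */

  extern void drawHazeRegion ( void ) ;  /* the game's own rendering  */
  extern void drawScene      ( void ) ;

  static void display ( void )
  {
    float t      = glutGet ( GLUT_ELAPSED_TIME ) / 1000.0f ;
    float wobble = 0.01f * (float) sin ( t * 20.0 ) ;

    glutSetWindow ( hazeWin ) ;      /* render the patch of scene...  */
    drawHazeRegion () ;
    glBindTexture ( GL_TEXTURE_2D, hazeTex ) ;
    glCopyTexSubImage2D ( GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256 ) ;

    glutSetWindow ( mainWin ) ;      /* ...then, in the SAME frame... */
    drawScene () ;

    glEnable ( GL_TEXTURE_2D ) ;     /* ...wobble it over the top.    */
    glBindTexture ( GL_TEXTURE_2D, hazeTex ) ;
    glBegin ( GL_QUADS ) ;
      glTexCoord2f ( 0.0f + wobble, 0.0f ) ; glVertex2f ( x0p, y0p ) ;
      glTexCoord2f ( 1.0f + wobble, 0.0f ) ; glVertex2f ( x1p, y0p ) ;
      glTexCoord2f ( 1.0f - wobble, 1.0f ) ; glVertex2f ( x1p, y1p ) ;
      glTexCoord2f ( 0.0f - wobble, 1.0f ) ; glVertex2f ( x0p, y1p ) ;
    glEnd () ;

    glutSwapBuffers () ;
    glutPostRedisplay () ;
  }

Because the pbuffer pass and the overlay happen in the same callback,
the haze texture can never lag the scene underneath it by a frame.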
>>> Rendering into vertex arrays? I'm not sure that I understand this
>>> correctly.
>>
>> Oh well, never mind. Call us back when you have a clue what you are
>> talking about.
>
> Sorry for not keeping abreast of things without which life itself
> would fail. How foolish of me. I will make a note that I must obtain
> a clue.

Well, it's just that if you are arguing the point of what these buffers
are like and what they'll be used for, it's really rather important to
understand what's just around the corner - or else we'll be rewriting
this in July this year, when the OpenGL 2.0 spec is finally official.

> Though it does beg the question of when OpenGL 2.0 will actually
> be in everyone's hands.

If you run Windows, you can get an OpenGL 2.0 beta from ATI or from
3Dlabs. Since the OpenGL 2.0 specification is not yet finalised (and
won't be until SIGGRAPH 2004, where OpenGL announcements are
traditionally made), you should treat these as tentative. However,
since 3Dlabs and ATI have provided *all* of the staff for the OpenGL
2.0 initiative, we can imagine that their present implementations are
fairly authoritative. nVidia have promised to release an OpenGL 2.0 for
their hardware - but have not committed to a date.

A Linux version of the ATI OpenGL 2.0 drivers is not generally
available. The 3Dlabs drivers for Linux come from a 3rd party and
aren't derived from their Windoze drivers...so who knows what'll happen
there.

>> An OpenGL 'buffer' would be a rendering region that can be attached
>> to a vertex array or turned magically into a texture...both things
>> that a 'window' can't do.
>
> And probably not something that freeglut should be getting into,
> either.
>
> Freeglut does windows pretty well, and offscreen rendering can fit
> quite well into that perspective. In the near term, it works now, and
> if it proves crufty, it can be deprecated and allowed to die after
> everyone and their pet rock has OpenGL 2.0 with pbuffers. Basing the
> freeglut feature on the assumption of pbuffers seems to make undue
> complication today, and will almost certainly be worthless tomorrow
> when OpenGL 2.0 is in-hand.

I think that if we make as few assumptions as possible about what a
'buffer' is (beyond that OpenGL will render into it), then our code
should be extensible as we learn more.

---------------------------- Steve Baker -------------------------
HomeEmail: <sjb...@ai...>  WorkEmail: <sj...@li...>
HomePage : http://www.sjbaker.org
Projects : http://plib.sf.net  http://tuxaqfh.sf.net
           http://tuxkart.sf.net  http://prettypoly.sf.net
-----BEGIN GEEK CODE BLOCK-----
GCS d-- s:+ a+ C++++$ UL+++$ P--- L++++$ E--- W+++ N o+ K? w--- !O M-
V-- PS++ PE- Y-- PGP-- t+ 5 X R+++ tv b++ DI++ D G+ e++ h--(-) r+++
y++++
-----END GEEK CODE BLOCK-----