From: Keith P. <ke...@ke...> - 2004-05-17 17:48:13
|
Around 11 o'clock on May 17, Andy Ritger wrote: > How should a direct rendering client interact with Damage/Composite? > There seem to be two pieces to this: damage notification, and > synchronization. Thanks for getting this topic started. > When a direct rendering client damages the X screen, it needs to > communicate that information to the X server so that the X server can > notify Damage clients of the damage. We can easily send damage data over the wire if you like; that would require the active participation of the direct-rendering client. You can do that today easily enough -- use XClearArea after setting the window background to None (and perhaps back again when you're done). Oh wait, that doesn't actually work right now -- I've got a kludge which ignores background None painting to windows. I need to fix that anyway to handle mapping of background None windows cleanly. Alternatively, we can use the existing DamageDamageRegion function which is already exposed within the server to mark regions from the direct rendering clients as they modify the window pixmap. > 1) client kicks off rendering, notifies X server of damage, > X server sends Damage event to composite manager, composite > manager sends compositing request back to server, server > performs composite. There needs to be some synchronization to > guarantee that the composite is not performed until the client > rendering is completed by the hardware. Given that most applications double buffer their output, this seems like a pretty well constrained problem. The only request which can affect the front buffer is a buffer swap, and that modifies the entire window contents. So, the X server must be able to find out when the buffer swap actually occurs, and either be signalled or block until that point. 
> 2) some damage occurs, composite manager sends composite request, > additional rendering is performed, part of which the composite > operation picks up, but the rest of the rendering is not > composited until the next "frame" of the composite manager, > and we see visible tearing. Applications which wish to avoid tearing must double buffer their output, just as they do today. Once that is true, then there is no 'partial' rendering, the buffer swap damages the entire window and replaces all of the contents. A more efficient implementation could actually perform this buffer swap without copying any data around -- just flip the off-screen storage for front/back buffers. That's probably easier with GL than 2D apps which tend to create window-sized pixmaps for 'back buffers', leaving the semantic mismatch between copy and swap. > Perhaps the best solution is to introduce two new requests to the > Composite extension: a "BeginComposite" and an "EndComposite" that > composite managers would call, bracketing their compositing requests. I don't think this is necessary -- the X server receives the damage information related to a specific drawable. Any future requests for contents from that drawable must delay until that damage has actually occurred. > 1) Truly double buffer the compositing system. Keith's sample > xcompmgr double buffers the compositing by creating a pixmap the > size of the root window, compositing into that, and then after > each frame of compositing is complete, copying from the pixmap > to the visible X screen (is that accurate, Keith?) I don't think we can avoid doing this; one of the primary goals of the system is to provide a clean tear-free user experience, so all screen updates must be performed under double buffering. 
> I can't make a strong argument for it, but if instead a back > buffer for the root window were automatically allocated when a > composite manager started redirecting windows, and compositing > was done into that buffer, then this might allow for various > minor optimizations: A GL-based compositing manager could easily do this. And, an X-based compositing manager could use the double buffering extension if it wanted to. My tiny kdrive based server doesn't happen to include that extension. > 2) An actual fullscreen mode. This is admittedly orthogonal > to compositing, but the overhead of compositing suggests that > we should have a mode of operation that clients can request > where they are given exclusive access to the hardware, > bypassing the compositing system. The compositing manager could recognise this case automatically if it were coupled with the window manager a bit more. > - It is important that X.org maintain a binary compatible driver > interface, so that vendors are not required to provide multiple > driver binaries (how to determine which binary to install? etc...) Absolutely. The Composite extension is being integrated in a completely binary compatible fashion. If any changes are required in the future, we'll have long lead times and cross-version compatibility to deal with at that point. > - An X driver should be able to wrap the redirection of windows to > offscreen storage: It already can -- per-window pixmaps are created and the driver notified before any rendering occurs; a clever driver could migrate those pixmaps to special offscreen storage if it wanted to. > - An X driver should be able to call into the core X server to > notify X of damage done by direct rendering clients. See DamageDamageRegion > - A Video Overlay Xv Adaptor is obviously fundamentally incompatible > with Damage/Composite. Should X drivers no longer advertise > Video Overlay Xv adaptors if they are running in an X server that > includes Composite support? 
Actually, as long as the windows are aligned on the screen with their nominal position and are opaque, this works just fine. However, when the windows are not so carefully positioned, the system will need to use a YUV texture to paint the video contents into the window pixmap and damage the region so the compositing manager can update the screen as appropriate. > - As window managers and desktop environments start folding composite > manager functionality into their projects, it would be nice > for them to provide a way to dynamically disable/enable > compositing. Yeah, I often turn off the compositing manager when doing 'odd' things. -keith |
From: Keith P. <ke...@ke...> - 2004-05-17 17:50:18
|
Around 9 o'clock on May 17, Alex Deucher wrote: > Many video overlays support alpha blending with the graphics layer, > it's just that support was never implemented since xfree86 never > supported it. Composite doesn't really expose things in a way that would make this hardware capability usable. Instead, it expects the video to be painted into the window pixmap so that those pixels can be composed to form the screen image. -keith |
From: Alex D. <ag...@ya...> - 2004-05-17 18:25:35
|
--- Keith Packard <ke...@ke...> wrote: > > Around 9 o'clock on May 17, Alex Deucher wrote: > > > Many video overlays support alpha blending with the graphics layer, > > it's just that support was never implemented since xfree86 never > > supported it. > > Composite doesn't really expose things in a way that would make this > hardware capability usable. > > Instead, it expects the video to be painted into the window pixmap so > that > those pixels can be composed to form the screen image. Sorry for my composite ignorance. Couldn't we just have Xv ignore composite and just use the video engine's native blending abilities to blend video with the graphics layer? or add composite "support" to Xv by just passing it the required gamma value and letting the hardware take care of the rest? > > -keith > > > __________________________________ Do you Yahoo!? SBC Yahoo! - Internet access at a great low price. http://promo.yahoo.com/sbc/ |
From: Eric A. <et...@lc...> - 2004-05-17 18:56:31
|
On Mon, 2004-05-17 at 11:02, Alex Deucher wrote: > --- Keith Packard <ke...@ke...> wrote: > > > > Around 9 o'clock on May 17, Alex Deucher wrote: > > > > > Many video overlays support alpha blending with the graphics layer, > > > it's just that support was never implemented since xfree86 never > > > supported it. > > > > Composite doesn't really expose things in a way that would make this > > hardware capability usable. > > > > Instead, it expects the video to be painted into the window pixmap so > > that > > those pixels can be composed to form the screen image. > > Sorry for my composite ignorance. Couldn't we just have Xv ignore > composite and just use the video engine's native blending abilities to > blend video with the graphics layer? or add composite "support" to Xv > by just passing it the required gamma value and letting the hardware > take care of the rest? With composite you really want to be able to get at the pixels to be displayed so that transformations can be applied to them before displaying them, rather than just putting them up as the last transformation to be applied to the screen before display, as the overlay scaler would do. I've found that the 3d hardware solves the XV problem pretty well in Xati (and gives you as many XV ports as you want), though it lacks the controls typically associated with YUV conversion using the overlay scaler, like brightness/saturation. -- Eric Anholt et...@lc... http://people.freebsd.org/~anholt/ anholt@FreeBSD.org |
From: Alex D. <ag...@ya...> - 2004-05-17 20:22:54
|
--- Eric Anholt <et...@lc...> wrote: > On Mon, 2004-05-17 at 11:02, Alex Deucher wrote: > > --- Keith Packard <ke...@ke...> wrote: > > > > > > Around 9 o'clock on May 17, Alex Deucher wrote: > > > > > > > Many video overlays support alpha blending with the graphics > layer, > > > > it's just that support was never implemented since xfree86 > never > > > > supported it. > > > > > > Composite doesn't really expose things in a way that would make > this > > > hardware capability usable. > > > > > > Instead, it expects the video to be painted into the window > pixmap so > > > that > > > those pixels can be composed to form the screen image. > > > > Sorry for my composite ignorance. Couldn't we just have Xv ignore > > composite and just use the video engine's native blending abilities > to > > blend video with the graphics layer? or add composite "support" to > Xv > > by just passing it the required gamma value and letting the > hardware > > take care of the rest? > > With composite you really want to be able to get at the pixels to be > displayed so that transformations can be applied to them before > displaying them, rather than just putting them up as the last > transformation to be applied to the screen before display, as the > overlay scaler would do. I've found that the 3d hardware solves the > XV > problem pretty well in Xati (and gives you as many XV ports as you > want), though it lacks the controls typically associated with YUV > conversion using the overlay scaler, like brightness/saturation. > That's probably the way to go since I suspect graphics HW may eventually do away with the overlay altogether in favor of YUV textures. If it doesn't already, I'd imagine the 3d hardware will eventually allow you to adjust the brightness or gamma of the textures. Alex > -- > Eric Anholt et...@lc... > http://people.freebsd.org/~anholt/ anholt@FreeBSD.org |
From: Alan C. <al...@lx...> - 2004-05-17 20:49:54
|
On Llu, 2004-05-17 at 18:49, Keith Packard wrote: > Composite doesn't really expose things in a way that would make this > hardware capability usable. > > Instead, it expects the video to be painted into the window pixmap so that > those pixels can be composed to form the screen image. For Xv that seems to involve working server-side GL and using GL to take Xv data (as a texture) and putting it to the video visible buffer. I've been looking at exactly this for Voodoo2 although I'm still trying to get the 3D init code right (it's nasty stuff) For DRI it does sound like the backbuffer->frontbuffer draw becomes a compositor task. |
From: Keith P. <ke...@ke...> - 2004-05-18 18:34:30
|
Around 1 o'clock on May 18, Andy Ritger wrote: > I'm debating whether it is better for the X server to not even know > of the damage until it has completed in hardware, or if it is > better to tell the X server as soon as the rendering has kicked off, > and then require X to wait for completion only when it needs to > use the drawable as a source. I don't think we'll be able to know which is best without giving them both a try. I was quite surprised at what method turned out best for the compositing manager -- one has been taught to avoid round trips at all cost, but the lowest latency was accomplished with an XSync call in the middle of the drawing loop. Just goes to show that intuition and reality are often in conflict... > > I just thought of another case here -- we want to allow for direct > > rendering compositing managers as well. That will require inter-client > > synchronization along the same lines... > > This introduces the problem of how to get the pixmap data to the > client efficiently. That's a whole separate thread. If the X server is drawing with GL, then the target GL drawing objects should be reachable by other GL applications. If the X server is drawing through another mechanism, we'll need to create a way to label X pixmaps with GL names. There are already two groups working on GL-based compositing managers, so we'll want to have this sooner, not later... > Yes, if the composite manager grabs the server while updating the > screen, then everything will be fine. Your sample xcompmgr doesn't > grab the server when updating the screen, and I expect many future > composite managers will use xcompmgr as a starting point. Fortunately, it's easy to add the grabs. And, it might fix some other problems I've seen... The existing compositing manager code needs to be replaced; it served as a test bed for many different ideas, some of which negatively affected the overall structure. > That seems possible. 
However, that seems like a lot to ask of all > window managers. Would common functionality like that be better > contained within an X server extension? Not an extension (there's no need), but surely a library would be useful. I've briefly looked into creating a library to help build compositing managers and composite-aware applications. > > It could pretend the overlay port was busy for new apps and silently > > translate an existing overlay application to textures. > > Interesting; this will require some more thought. Yeah, it would be nice to just say "overlays are dead, use textures", but overlays remain an important option in many environments (better color, more features, higher performance). So, I think we need to permit them, but find a way to cut over to textures where necessary. -keith |
From: Andy R. <ar...@nv...> - 2004-05-18 19:41:54
|
On Tue, 18 May 2004, Keith Packard wrote: > > Around 1 o'clock on May 18, Andy Ritger wrote: > > > I'm debating whether it is better for the X server to not even know > > of the damage until it has completed in hardware, or if it is > > better to tell the X server as soon as the rendering has kicked off, > > and then require X to wait for completion only when it needs to > > use the drawable as a source. > > I don't think we'll be able to know which is best without giving them both > a try. I was quite surprised at what method turned out best for the > compositing manager -- one has been taught to avoid round trips at all > cost, but the lowest latency was accomplished with an XSync call in the > middle of the drawing loop. > > Just goes to show that intuition and reality are often in conflict... Yup; this will require some experimentation to get right, but at least there are several options. Thanks, - Andy |
From: Keith P. <ke...@ke...> - 2004-05-18 19:03:44
|
Around 14 o'clock on May 18, Soeren Sandmann wrote: > What if another client has already grabbed the server for whatever > reason? Is screen updating then turned off? Currently, yes. We need to fix this... -keith |
From: Keith P. <ke...@ke...> - 2004-05-17 20:17:30
|
Around 15 o'clock on May 17, Andy Ritger wrote: > Even for front buffered flushes I would be inclined to just say that it > damages the whole drawable, rather than try to compute a smaller bounding > region. That's certainly fine for now; if people really get excited about optimizing this case, they can fix it themselves. I don't know of too many applications which draw to the front buffer... > The tricky part here is that the damage event shouldn't be sent to > Damage clients until the hardware has completed the damage, but > that is the vendor's problem... I'm just trying to make sure > everything that is needed is in place so that vendors can solve that. It can't even be seen by the X server until the rendering is complete. When using 'automatic' update mode, there isn't an external application waiting for the event; the X server updates the screen directly. > One solution would be for the direct rendering client to send private > protocol to the X server as soon as the rendering is sent to the hw. Sure; just as long as the X server could then block awaiting completion. > BeginComposite/EndComposite bracketing would facilitate that (it > would be BeginComposite's job to make sure the hw had completed). There's no need for these extra requests -- the X server just needs to block when using the indicated source window buffer. This way, the X server can actually pend lots of other parts of the compositing operation and only when the affected window finally comes into play will the X server block. I just thought of another case here -- we want to allow for direct rendering compositing managers as well. That will require inter-client synchronization along the same lines... > glxgears is then redrawn (and swapped) before the compositing > is performed. When the compositing is performed, the xterm > and the portion of the glxgears window beneath the xterm are > recomposited into the backing pixmap, which is then blitted to > the visible screen. 
At this point, we have a tear between the > portion of the glxgears window that is not beneath the xterm > and the part that is (the part that is beneath the xterm is > from glxgear's new frame, while the part not beneath the xterm > is from the old frame). The window of vulnerability isn't as long as you fear -- the compositing manager can always use the damaged region of each window precisely at the time of the compositing operation, without reference to any events it has received. That's because the damage accumulates inside X server regions where it can be used to compute correct updates. As long as the compositing manager holds the server grabbed (which presumably locks out direct clients as well) while it updates the screen, there shouldn't be any tearing. No need to drain the event queue or anything else so dramatic. > > information related to a specific drawable. Any future requests for > > contents from that drawable must delay until that damage has actually > > occurred. > > Right, but how is that enforced? Who delays until the damage has > actually occurred? The X server would have to stall waiting for the swap to complete. It would "know" to do this because the direct client would have indicated that the swap was queued to the hardware. > True, but window managers can't cause video memory to be freed, > which would be really nice to do when you are transitioning into a > fullscreen application. They can free the extra buffers used for Composite, and the X server can migrate less used pixmaps from the video card. > Even the RandR implementation naively leaves the video memory allocated for > the largest possible root window size. Not in kdrive. > OK; how does a driver differentiate the per-window pixmaps from > regular pixmaps? The driver can see them associated with windows by wrapping SetWindowPixmap. > So if the X server might start compositing, then the driver can't advertise > the overlay port; is that correct? 
It could pretend the overlay port was busy for new apps and silently translate an existing overlay application to textures. I don't quite know; I use overlay video with composite now and it works as long as the windows are aligned on the screen correctly. I'd like to make that possible in the future as well, but I'm not quite sure how to do that. -keith |
From: Andy R. <ar...@nv...> - 2004-05-18 05:48:54
|
On Mon, 17 May 2004, Keith Packard wrote: > > Around 15 o'clock on May 17, Andy Ritger wrote: [snip] > > The tricky part here is that the damage event shouldn't be sent to > > Damage clients until the hardware has completed the damage, but > > that is the vendor's problem... I'm just trying to make sure > > everything that is needed is in place so that vendors can solve that. > > It can't even be seen by the X server until the rendering is complete. > When using 'automatic' update mode, there isn't an external application > waiting for the event; the X server updates the screen directly. Ah right, good point. > > One solution would be for the direct rendering client to send private > > protocol to the X server as soon as the rendering is sent to the hw. > > Sure; just as long as the X server could then block awaiting completion. > > > BeginComposite/EndComposite bracketing would facilitate that (it > > would be BeginComposite's job to make sure the hw had completed). > > There's no need for these extra requests -- the X server just needs to > block when using the indicated source window buffer. This way, the X > server can actually pend lots of other parts of the compositing operation > and only when the affected window finally comes into play will the X > server block. I'm debating whether it is better for the X server to not even know of the damage until it has completed in hardware, or if it is better to tell the X server as soon as the rendering has kicked off, and then require X to wait for completion only when it needs to use the drawable as a source. The former will avoid blocking in the server, while the latter may reduce latencies... that will require some experimentation. > I just thought of another case here -- we want to allow for direct > rendering compositing managers as well. That will require inter-client > synchronization along the same lines... This introduces the problem of how to get the pixmap data to the client efficiently. 
That's a whole separate thread. > > glxgears is then redrawn (and swapped) before the compositing > > is performed. When the compositing is performed, the xterm > > and the portion of the glxgears window beneath the xterm are > > recomposited into the backing pixmap, which is then blitted to > > the visible screen. At this point, we have a tear between the > > portion of the glxgears window that is not beneath the xterm > > and the part that is (the part that is beneath the xterm is > > from glxgear's new frame, while the part not beneath the xterm > > is from the old frame). > > The window of vulnerability isn't as long as you fear -- the compositing > manager can always use the damaged region of each window precisely at the > time of the compositing operation, without reference to any events it has > received. That's because the damage accumulates inside X server regions > where it can be used to compute correct updates. OK, I think that makes sense. > As long as the compositing manager holds the server grabbed (which > presumably locks out direct clients as well) while it updates the screen, > there shouldn't be any tearing. No need to drain the event queue or > anything else so dramatic. Yes, if the composite manager grabs the server while updating the screen, then everything will be fine. Your sample xcompmgr doesn't grab the server when updating the screen, and I expect many future composite managers will use xcompmgr as a starting point. > > > information related to a specific drawable. Any future requests for > > > contents from that drawable must delay until that damage has actually > > > occurred. > > > > Right, but how is that enforced? Who delays until the damage has > > actually occurred? > > The X server would have to stall waiting for the swap to complete. It > would "know" to do this because the direct client would have indicated > that the swap was queued to the hardware. OK, so X drivers would have to hook into this and stall when appropriate. 
> > True, but window managers can't cause video memory to be freed, > > which would be really nice to do when you are transitioning into a > > fullscreen application. > > They can free the extra buffers used for Composite, and the X server can > migrate less used pixmaps from the video card. That seems possible. However, that seems like a lot to ask of all window managers. Would common functionality like that be better contained within an X server extension? > > Even the RandR implementation naively leaves the video memory allocated for > > the largest possible root window size. > > Not in kdrive. OK, that's something I'd like to fix in the monolithic server. > > OK; how does a driver differentiate the per-window pixmaps from > > regular pixmaps? > > The driver can see them associated with windows by wrapping > SetWindowPixmap. OK. > > So if the X server might start compositing, then the driver can't advertise > > the overlay port; is that correct? > > It could pretend the overlay port was busy for new apps and silently > translate an existing overlay application to textures. I don't quite > know; I use overlay video with composite now and it works as long as the > windows are aligned on the screen correctly. I'd like to make that > possible in the future as well, but I'm not quite sure how to do that. Interesting; this will require some more thought. Thanks, - Andy > -keith > > > |
From: Soeren S. <san...@da...> - 2004-05-18 12:47:35
|
Keith Packard <ke...@ke...> writes: > As long as the compositing manager holds the server grabbed (which > presumably locks out direct clients as well) while it updates the screen, > there shouldn't be any tearing. No need to drain the event queue or > anything else so dramatic. What if another client has already grabbed the server for whatever reason? Is screen updating then turned off? Søren |
From: Andy R. <ar...@nv...> - 2004-05-18 13:31:52
|
On Tue, 18 May 2004, Soeren Sandmann wrote: > Keith Packard <ke...@ke...> writes: > > > As long as the compositing manager holds the server grabbed (which > > presumably locks out direct clients as well) while it updates the > screen, > > there shouldn't be any tearing. No need to drain the event queue or > > anything else so dramatic. > > What if another client has already grabbed the server for whatever > reason? Is screen updating then turned off? If a client has grabbed the server, then requests from all other clients (including the XGrabServer request) are not processed until that client has ungrabbed the server. The composite manager would block until the other client had ungrabbed. - Andy > Søren > |
From: Soeren S. <san...@da...> - 2004-05-18 15:54:03
|
Andy Ritger <ar...@nv...> writes: > > What if another client has already grabbed the server for whatever > > reason? Is screen updating then turned off? > > If a client has grabbed the server, then requests from all other > clients (including the XGrabServer request) are not processed until > that client has ungrabbed the server. The composite manager would > block until the other client had ungrabbed. But if the compositing manager is blocked, nothing appears on the screen, right? This means screen updating is effectively turned off when an application is grabbing the server. Søren |
From: Jim G. <Jim...@hp...> - 2004-05-18 16:02:31
|
On Tue, 2004-05-18 at 11:53, Soeren Sandmann wrote: > Andy Ritger <ar...@nv...> writes: > > > > What if another client has already grabbed the server for whatever > > > reason? Is screen updating then turned off? > > > > If a client has grabbed the server, then requests from all other > > clients (including the XGrabServer request) are not processed until > > that client has ungrabbed the server. The composite manager would > > block until the other client had ungrabbed. > > But if the compositing manager is blocked, nothing appears on the > screen, right? This means screen updating is effectively turned off > when an application is grabbing the server. Which is why avoiding server grabs is important, as much as possible. It takes out a global lock on the X server and needs to be used with great care. - Jim -- Jim Gettys <Jim...@hp...> HP Labs, Cambridge Research Laboratory |
From: Egbert E. <ei...@pd...> - 2004-05-18 16:46:38
|
Jim Gettys writes: > > Which is why avoiding server grabs is imporant, as much > as possible. It takes a global lock out on the X server and > needs to be used with great care. But you cannot rule out that some legacy client apps use server grabs for strange purposes. It may in fact be necessary to make some 'privileged' clients like the composition manager immune to server grabs. Cheers, Egbert. |
From: Keith P. <ke...@ke...> - 2004-05-17 21:02:16
|
Around 18 o'clock on May 17, Alan Cox wrote: > For Xv that seems to involve working server side GL and using GL to > take Xv data (as a texture) and putting it to the video visible buffer. > I've been looking at exactly this for Voodoo2 although I'm still trying > to get the 3D init code right (its nasty stuff) That's certainly one way of doing it. The Matrox driver has direct support for 'textured video', accessing the appropriate mechanisms from the X server without going through the GL library. -keith |
From: Alex D. <ag...@ya...> - 2004-05-17 21:41:40
|
--- Keith Packard <ke...@ke...> wrote: > > Around 18 o'clock on May 17, Alan Cox wrote: > > > For Xv that seems to involve working server side GL and using GL to > > take Xv data (as a texture) and putting it to the video visible > buffer. > > I've been looking at exactly this for Voodoo2 although I'm still > trying > > to get the 3D init code right (its nasty stuff) > > That's certainly one way of doing it. The Matrox driver has direct > support for 'textured video', accessing the appropriate mechanisms > from > the X server without going through the GL library. unfortunately it doesn't keep state with the 3d driver so you get one or the other: 3d or textured video. Alex > > -keith > > |
From: Keith P. <ke...@ke...> - 2004-05-18 19:53:39
|
Around 18 o'clock on May 18, Egbert Eich wrote: > It may in fact be necessary to make some 'priviledged' clients like > the composition manager immune to server grabs. Yup. Then we'll need some kind of 'super grab' to keep multiple ones of those from stepping on each other. And recurse. -keith |
From: Andy R. <ar...@nv...> - 2004-05-17 19:55:36
|
On Mon, 17 May 2004, Keith Packard wrote: > > Around 11 o'clock on May 17, Andy Ritger wrote: > > > How should a direct rendering client interact with Damage/Composite? > > There seem to be two pieces to this: damage notification, and > > synchronization. > > Thanks for getting this topic started. > > > When a direct rendering client damages the X screen, it needs to > > communicate that information to the X server so that the X server can > > notify Damage clients of the damage. > > We can easily send damage data over the wire if you like; that would > require the active participation of the direct-rendering client. > > You can do that today easily enough -- use XClearArea after setting the > window background to None (and perhaps back again when you're done). Oh > wait, that doesn't actually work right now -- I've got a kludge which > ignores background None painting to windows. I need to fix that anyway to > handle mapping of background None windows cleanly. > > Alternatively, we can use the existing DamageDamageRegion function which > is already exposed within the server to mark regions from the direct > rendering clients as they modify the window pixmap. OK, I've honestly not looked at the implementation in X.org, yet. DamageDamageRegion sounds like exactly what we would need. > > 1) client kicks off rendering, notifies X server of damage, > > X server sends Damage event to composite manager, composite > > manager sends compositing request back to server, server > > performs composite. There needs to be some synchronization to > > guarantee that the composite is not performed until the client > > rendering is completed by the hardware. > > Given that most applications double buffer their output, this seems like a > pretty well constrainted problem. The only request which can affect the > front buffer is a buffer swap, and that modifies the entire window > contents. 
Right: swaps and front-buffered flushes are the only GLX operations
that should trigger a damage event. Even for front-buffered flushes I
would be inclined to just say that the flush damages the whole
drawable, rather than try to compute a smaller bounding region.

> So, the X server must be able to find out when the buffer swap
> actually occurs, and either be signalled or block until that point.

The tricky part here is that the damage event shouldn't be sent to
Damage clients until the hardware has completed the damage, but that
is the vendor's problem... I'm just trying to make sure everything
that is needed is in place so that vendors can solve it.

One solution would be for the direct rendering client to send private
protocol to the X server as soon as the rendering is sent to the hw.
The X server then sends a damage event to the Damage clients, and the
composite manager starts performing a composite. Ideally, you would
defer waiting for the hw to complete the direct rendering operation
until the composite manager wants to perform the composite.
BeginComposite/EndComposite bracketing would facilitate that (it would
be BeginComposite's job to make sure the hw had completed).

> > 2) some damage occurs, composite manager sends composite request,
> >    additional rendering is performed, part of which the composite
> >    operation picks up, but the rest of the rendering is not
> >    composited until the next "frame" of the composite manager,
> >    and we see visible tearing.
>
> Applications which wish to avoid tearing must double buffer their
> output, just as they do today. Once that is true, then there is no
> 'partial' rendering; the buffer swap damages the entire window and
> replaces all of the contents.

Sorry, I wasn't clear here. Allow me to clarify with an example:
glxgears is partially overlapped by a translucent xterm:

     _____________
    |             |____________....
    |             |            .
    |  glxgears   |   xterm    .
    |             |            .
    |_____________|....        .
                  |____________|

The xterm updates (say, it scrolls) and a damage event is sent to the
composite manager. The composite manager drains the event queue and
builds the list of damaged regions. As far as the composite manager
knows, glxgears has not been damaged. The composite manager then
sends protocol to recomposite the xterm; presumably this operation
would also use as a source the portion of the glxgears window beneath
the xterm.

glxgears is then redrawn (and swapped) before the compositing is
performed. When the compositing is performed, the xterm and the
portion of the glxgears window beneath the xterm are recomposited
into the backing pixmap, which is then blitted to the visible screen.
At this point, we have a tear between the portion of the glxgears
window that is not beneath the xterm and the part that is (the part
beneath the xterm is from glxgears's new frame, while the part not
beneath the xterm is from the old frame). The composite manager then
returns to its event loop, receives notification that glxgears was
damaged, and eventually updates the screen with the change. In the
period between these two composite "frames", glxgears is torn
vertically along the xterm boundary.

Again, this is not specific to direct rendering: it could just as
easily happen with an animating 2d app. The race is that rendering
(either direct or indirect) can occur between when the composite
manager builds its damage list and when its compositing requests are
processed by the server.

The only surefire solution that I can think of is for the composite
manager to grab the server, drain its event queue, perform its
compositing, and then ungrab the server. That seems very heavyweight,
though, so I'm curious what other solutions people might have. One
compromise would be to introduce new BeginComposite/EndComposite
requests, and get composite managers into the habit of using them now.
This would give vendors the flexibility to synchronize this in
whatever way makes the most sense for their architecture.

> A more efficient implementation could actually perform this buffer
> swap without copying any data around -- just flip the off-screen
> storage for front/back buffers. That's probably easier with GL than
> 2D apps, which tend to create window-sized pixmaps for 'back
> buffers', leaving the semantic mismatch between copy and swap.

Yes, that is a good idea, though it would mean a round trip (the
client would need to wait for the server to update its state of
front/back before the client could start rendering to the new back).

> > Perhaps the best solution is to introduce two new requests to the
> > Composite extension: a "BeginComposite" and an "EndComposite" that
> > composite managers would call, bracketing their compositing
> > requests.
>
> I don't think this is necessary -- the X server receives the damage
> information related to a specific drawable. Any future requests for
> contents from that drawable must delay until that damage has
> actually occurred.

Right, but how is that enforced? Who delays until the damage has
actually occurred?

> > 1) Truly double buffer the compositing system. Keith's sample
> >    xcompmgr double buffers the compositing by creating a pixmap
> >    the size of the root window, compositing into that, and then
> >    after each frame of compositing is complete, copying from the
> >    pixmap to the visible X screen (is that accurate, Keith?)
>
> I don't think we can avoid doing this; one of the primary goals of
> the system is to provide a clean tear-free user experience, so all
> screen updates must be performed under double buffering.

Agreed.
> > I can't make a strong argument for it, but if instead a back
> > buffer for the root window were automatically allocated when a
> > composite manager started redirecting windows, and compositing
> > was done into that buffer, then this might allow for various
> > minor optimizations:
>
> A GL-based compositing manager could easily do this. And, an X-based
> compositing manager could use the double buffering extension if it
> wanted to. My tiny kdrive based server doesn't happen to include
> that extension.

OK, I'll need to learn more about DBE before I can comment on that.

> > 2) An actual fullscreen mode. This is admittedly orthogonal to
> >    compositing, but the overhead of compositing suggests that we
> >    should have a mode of operation that clients can request where
> >    they are given exclusive access to the hardware, bypassing the
> >    compositing system.
>
> The compositing manager could recognise this case automatically if
> it were coupled with the window manager a bit more.

True, but window managers can't cause video memory to be freed, which
would be really nice to do when you are transitioning into a
fullscreen application. Even the RandR implementation naively leaves
the video memory allocated for the largest possible root window size.

> > - It is important that X.org maintain a binary compatible driver
> >   interface, so that vendors are not required to provide multiple
> >   driver binaries (how to determine which binary to install?
> >   etc...)
>
> Absolutely. The Composite extension is being integrated in a
> completely binary compatible fashion. If any changes are required in
> the future, we'll have long lead times and cross-version
> compatibility to deal with at that point.

Excellent; I just wanted to reinforce the importance of this from an
IHV point of view.
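[For reference, the double buffering extension (DBE) mentioned above
is driven from the client side roughly as follows. This is an
untested sketch: `dpy` and `win` are assumed to already exist, and
error checking is omitted.]

```
#include <X11/Xlib.h>
#include <X11/extensions/Xdbe.h>

/* Ask DBE for a back-buffer drawable for the window; rendering aimed
 * at 'back' stays invisible until swapped. */
XdbeBackBuffer back = XdbeAllocateBackBufferName(dpy, win,
                                                 XdbeBackground);

/* ... draw a frame into 'back' with ordinary X requests ... */

/* Atomically present the back buffer; XdbeBackground says the new
 * back buffer may be cleared to the window background afterwards. */
XdbeSwapInfo swap = { win, XdbeBackground };
XdbeSwapBuffers(dpy, &swap, 1);
```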
> > - An X driver should be able to wrap the redirection of windows
> >   to offscreen storage:
>
> It already can -- per-window pixmaps are created and the driver
> notified before any rendering occurs; a clever driver could migrate
> those pixmaps to special offscreen storage if it wanted to.

OK; how does a driver differentiate the per-window pixmaps from
regular pixmaps?

> > - An X driver should be able to call into the core X server to
> >   notify X of damage done by direct rendering clients.
>
> See DamageDamageRegion

Very good.

> > - A Video Overlay Xv Adaptor is obviously fundamentally
> >   incompatible with Damage/Composite. Should X drivers no longer
> >   advertise Video Overlay Xv adaptors if they are running in an X
> >   server that includes Composite support?
>
> Actually, as long as the windows are aligned on the screen with
> their nominal position and are opaque, this works just fine.
>
> However, when the windows are not so carefully positioned, the
> system will need to use a YUV texture to paint the video contents
> into the window pixmap and damage the region so the compositing
> manager can update the screen as appropriate.

The problem is that Xv works in terms of "ports" -- a driver
advertises an overlay port, a blitter port, a texture port, etc. The
ports are advertised for the life of the X server (like visuals); my
understanding is that you can't dynamically add/remove Xv ports or
migrate one into another while in use. So if the X server might start
compositing, then the driver can't advertise the overlay port; is
that correct?

> > - As window managers and desktop environments start folding
> >   composite manager functionality into their projects, it would
> >   be nice for them to provide a way to dynamically disable/enable
> >   compositing.
>
> Yeah, I often turn off the compositing manager when doing 'odd'
> things.

Sure.

Thanks,
- Andy

> -keith