Keith Whitwell wrote:
> Kristian Høgsberg wrote:
>> On 10/4/07, Keith Packard <keithp@...> wrote:
>>> On Thu, 2007-10-04 at 01:27 -0400, Kristian Høgsberg wrote:
>>>> There is an issue with the design, though, related to how and when the
>>>> DRI driver discovers that the front buffer has changed (typically
>>> Why would the rendering application even need to know the size of the
>>> front buffer? The swap should effectively be under the control of the
>>> front buffer owner, not the rendering application.
>> Ok, I phrased that wrong: what the DRI driver needs to look out for is
>> when size of the rendering buffers change. For a redirected window,
>> this does involve resizing the front buffer, but that's not the case
>> for a non-redirected window. The important part, though, is that the
>> drawable size changes and before submitting rendering, the DRI driver
>> has to allocate new private backbuffers that are big enough to hold
>> the contents.
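The reallocation step being described could be sketched roughly as follows. All of the struct and function names here are invented for illustration; they are not the actual DRI driver interfaces:

```c
#include <stdlib.h>

/* Hypothetical stand-ins for the driver's per-drawable state. */
struct backbuffer {
    int width, height;
    /* ... GPU memory handle would live here ... */
};

struct drawable {
    int width, height;          /* size we last allocated for */
    struct backbuffer *back;    /* private backbuffer */
};

/* Reallocate the private backbuffer if the drawable size changed.
 * Returns 1 if a reallocation happened, 0 otherwise. */
static int update_backbuffer(struct drawable *d, int new_w, int new_h)
{
    if (d->back && d->back->width == new_w && d->back->height == new_h)
        return 0;               /* still the right size, nothing to do */

    free(d->back);
    d->back = malloc(sizeof *d->back);
    d->back->width = new_w;
    d->back->height = new_h;
    d->width = new_w;
    d->height = new_h;
    return 1;
}
```

The key point of the thread is not this allocation itself (which is straightforward) but *when* the driver gets to run it relative to submitted rendering.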
>>> As far as figuring out how big to make the rendering buffers, that's
>>> outside the scope of DRM in my book. The GLX interface can watch for
>>> ConfigureNotify events on the associated window and resize the back
>>> buffers as appropriate.
>> I guess you're proposing libGL should transparently listen for
>> ConfigureNotify events? I don't see how that can work, there is no
>> guarantee that an OpenGL application handles events. For example,
>> glxgears without an event loop, just rendering. If the rendering
>> extends outside the window bounds and you increase the window size,
>> the next frame should include those parts that were clipped by the
>> window in previous frames. X events aren't reliable for this kind of
>> notification.
>>
>> And regardless, the issue isn't so much how to get the resize
>> notification from the X server to the direct rendering client, but
>> rather that the Gallium design doesn't expect these kinds of
>> interruptions while rendering a frame. So while libGL (or AIGLX) may
>> be able to notice that the window size changed, what I'm missing is a
>> mechanism to ask the DRI driver to reallocate its back buffers.
> I think we just need a tweak to what we're already doing for
> private backbuffers to cope with the periodic rendering case you've
> highlighted: checking before the first draw and again
> before swapbuffers, rather than just before swapbuffers.
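The tweak being proposed, checking the drawable size at two points per frame instead of one, might look something like this. The hook names and state tracking are invented for illustration only:

```c
#include <stdbool.h>

/* Invented state for illustration: tracks whether the drawable size
 * has been validated since the current frame started. */
struct frame_state {
    bool validated_this_frame;
    int checks;                 /* how many times the size was queried */
};

/* Stand-in for asking the window system for the current drawable size
 * and reallocating backbuffers if it changed. */
static void check_drawable_size(struct frame_state *fs)
{
    fs->checks++;
    fs->validated_this_frame = true;
}

/* Called on the first draw of a frame: catches resizes that happened
 * between frames, e.g. for an app that renders without an event loop. */
static void first_draw(struct frame_state *fs)
{
    if (!fs->validated_this_frame)
        check_drawable_size(fs);
    /* ... emit rendering commands ... */
}

/* Called at SwapBuffers: catches resizes that happened mid-frame. */
static void swap_buffers(struct frame_state *fs)
{
    check_drawable_size(fs);
    fs->validated_this_frame = false;   /* next draw starts a new frame */
    /* ... present the backbuffer ... */
}
```

This gives one check before rendering starts and one before presentation, which covers the glxgears-style "just rendering, no event loop" case raised above.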
> This doesn't address the question about contexts in potentially
> different processes sharing a backbuffer, but I'm not 100% convinced it's
> possible, and if it is possible under GLX, I'm not 100% convinced that
> it's a sensible thing to support anyway...
Basically what I'm saying above is that 1) I haven't had a chance to dig
into the shared-context issue, 2) in my experience the GL and GLX specs
provide a good amount of wiggle room to allow for a variety of
implementation strategies, and 3) we should be careful not to jump to an
unfavourable interpretation of the spec that ties us into a non-optimal
design.
I don't think we're looking at a particularly unique or unusual strategy
- quite a few GL stacks end up with private backbuffers, it seems, so
these are problems that have all been faced and solved before.