From: Michel D. <mi...@da...> - 2009-08-28 09:20:12
On Thu, 2009-08-27 at 17:33 -0400, Kristian Høgsberg wrote:
> 2009/8/27 Thomas Hellström <th...@sh...>:
> >
> > b) It requires the master to act as a scheduler, and circumvents the DRM
> > command submission mechanism through the delayed unpin callback. If this
> > is to work around any inability of GEM to serve a command submission
> > while a previous command submission is blocked in the kernel, then IMHO
> > that should be fixed and not worked around.
>
> It's not about workarounds. Your suggestion *blocks the hw* while
> waiting for vsync. My patch doesn't do that, it lets other clients
> submit rendering to pixmap or backbuffer BOs that aren't involved in
> the pending page flip. If you're willing to block the command stream,
> GEM can keep the buffer pinned for you until the swap happens just
> fine. Just like it does for any other command - it's not about that.
> In fact, I think using the scheduler to keep buffers pinned for
> scanout is conflating things. The scheduler pulls buffers in and out
> of the aperture so that they are there for the GPU when it needs to
> access them. Pinning and unpinning buffers for scanout is a different
> matter.

How is scanout not GPU access? :)

I'd look at it the other way around: pinning a scanout buffer is a
workaround which is only needed while there are no outstanding fences
on it.

> > If the plan is to eliminate DRI2GetBuffers() once per frame, what will
> > then be used to block clients rendering to the old back buffer?
>
> There'll be an event that's sent back after each DRI2SwapBuffers and
> the clients will block on receiving that event.

Are you referring to a DRI2 protocol event or a DRM event?

> We still need to send a request to the xserver and receive confirmation
> that the xserver has received it before we can render again.
> DRI2GetBuffers is a request that expects a reply and will block the
> client on the xserver when we call it.
> DRI2SwapBuffers is an async request, i.e. there's no reply, and
> calling it won't necessarily block the client. We still have to wait
> for the new event before we can go on rendering, but doing it this way
> makes the client and server less tightly coupled. We may end up doing
> the roundtrip between client and server at a point where the client
> was going to block anyway (like disk i/o or something), saving a
> context switch.

It's still a bit fuzzy how this is all supposed to work. To me it seems
like (ab)using DRI2GetBuffers for this will only allow clients to avoid
synchronous rendering with triple buffering, which would be a shame. If
you disagree, can you explain why that isn't the case?

-- 
Earthling Michel Dänzer           |             http://www.vmware.com
Libre software enthusiast         |       Debian, X and DRI developer