From: Jakob B. <ja...@vm...> - 2010-03-01 15:32:14
|
On 1 mar 2010, at 15.23, Jose Fonseca wrote: > Module: Mesa > Branch: gallium-format-cleanup > Commit: 4c3bfc9778d9a0a75bf93b15303a4839f971f695 > URL: http://cgit.freedesktop.org/mesa/mesa/commit/?id=4c3bfc9778d9a0a75bf93b15303a4839f971f695 > > Author: José Fonseca <jfo...@vm...> > Date: Mon Mar 1 15:17:41 2010 +0000 > > gallium: Remove inexisting formats. > > Can't find these formats used in any state tracker or any API. > > For some of these probably the reverse notation was meant, for which > formats already exist. src/gallium/state_trackers/xorg/xorg_exa.c:62 currently they aren't translated to Gallium formats and I wonder if the new swizzle state on the sampler views will solve this instead. Cheers Jakob. |
From: Jerome G. <gl...@fr...> - 2010-03-01 15:30:38
|
On Mon, Mar 01, 2010 at 04:21:45PM +0100, Marek Olšák wrote: > On Mon, Mar 1, 2010 at 3:02 PM, Jerome Glisse <gl...@fr...>wrote: > > > On Mon, Mar 01, 2010 at 12:24:19PM +0000, Keith Whitwell wrote: > > > On Mon, 2010-03-01 at 04:07 -0800, Keith Whitwell wrote: > > > > On Mon, 2010-03-01 at 03:55 -0800, Jerome Glisse wrote: > > > > > On Mon, Mar 01, 2010 at 11:46:08AM +0000, Keith Whitwell wrote: > > > > > > On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote: > > > > > > > On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote: > > > > > > > > Hi, > > > > > > > > > > > > > > > > I am a bit puzzled, how a pipe driver should handle > > > > > > > > draw callback failure ? On radeon (pretty sure nouveau > > > > > > > > or intel hit the same issue) we can only know when one > > > > > > > > of the draw_* context callback is call if we can do > > > > > > > > the rendering or not. > > > > > > > > > > > > > > > > The failure here is dictated by memory constraint, ie > > > > > > > > if user bind big texture, big vbo ... we might not have > > > > > > > > enough GPU address space to bind all the desired object > > > > > > > > (even for drawing a single triangle) ? > > > > > > > > > > > > > > > > What should we do ? None of the draw callback can return > > > > > > > > a value ? Maybe for a GL stack tracker we should report > > > > > > > > GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > > > > > > > > is i think pipe driver are missing something here. Any > > > > > > > > idea ? Thought ? Is there already a plan to address that ? :) > > > > > > > > > > > > > > Gallium draw calls had return codes before. They were used for > > the > > > > > > > fallover driver IIRC and were recently deleted. > > > > > > > > > > > > > > Either we put the return codes back, or we add a new > > > > > > > pipe_context::validate() that would ensure that all necessary > > conditions > > > > > > > to draw successfully are met. 
> > > > > > > > > > > > > > Putting return codes on bind time won't work, because one can't > > set all > > > > > > > atoms simultaneously -- atoms are set one by one, so when one's > > setting > > > > > > > the state there are state combinations which may exceed the > > available > > > > > > > resources but that are never drawn with. E.g. you have a huge VB > > you > > > > > > > finished drawing, and then you switch to drawing with a small VB > > with a > > > > > > > huge texture, but in between it may happen that you have both > > bound > > > > > > > simultaneously. > > > > > > > > > > > > > > If ignoring is not an alternative, then I'd prefer a validate > > call. > > > > > > > > > > > > > > Whether to fallback to software or not -- it seems to me it's > > really a > > > > > > > problem that must be decided case by case. Drivers are supposed > > to be > > > > > > > useful -- if hardware is so limited that it can't do anything > > useful > > > > > > > then falling back to software is sensible. I don't think that a > > driver > > > > > > > should support every imaginable thing -- apps should check > > errors, and > > > > > > > users should ensure they have enough hardware resources for the > > > > > > > workloads they want. > > > > > > > > > > > > > > Personally I think state trackers shouldn't emulate anything with > > CPU > > > > > > > beyond unsupported pixel formats. If a hardware is so limited > > that in > > > > > > > need CPU assistence this should taken care transparently by the > > pipe > > > > > > > driver. Nevertheless we can and should provide auxiliary > > libraries like > > > > > > > draw to simplify the pipe driver implementation. > > > > > > > > > > > > > > > > > > My opinion on this is similar: the pipe driver is responsible for > > > > > > getting the rendering done. If it needs to pull in a fallback > > module to > > > > > > achieve that, it is the pipe driver's responsibility to do so. 
> > > > > > > > > > > > Understanding the limitations of hardware and the best ways to work > > > > > > around those limitations is really something that the driver itself > > is > > > > > > best positioned to handle. > > > > > > > > > > > > The slight quirk of OpenGL is that there are some conditions where > > > > > > theoretically the driver is allowed to throw an OUT_OF_MEMORY error > > (or > > > > > > similar) and not render. This option isn't really available to > > gallium > > > > > > drivers, mainly because we don't know inside gallium whether the > > API > > > > > > permits this. Unfortunately, even in OpenGL, very few applications > > > > > > actually check the error conditions, or do anything sensible when > > they > > > > > > fail. > > > > > > > > > > > > I don't really like the idea of pipe drivers being able to fail > > render > > > > > > calls, as it means that every state tracker and every bit of > > utility > > > > > > code that issues a pipe->draw() call will have to check the return > > code > > > > > > and hook in fallback code on failure. > > > > > > > > > > > > One interesting thing would be to consider creating a layer that > > exposes > > > > > > a pipe_context interface to the state tracker, but revives some of > > the > > > > > > failover ideas internally - maybe as a first step just lifting the > > draw > > > > > > module usage up to a layer above the actual hardware driver. > > > > > > > > > > > > Keith > > > > > > > > > > > > > > > > So you don't like the pipe_context::validate() of Jose ? My > > > > > taste goes to the pipe_context::validate() and having state > > > > > tracker setting the proper flag according to the API they > > > > > support (GL_OUT_OF_MEMORY for GL), this means just drop > > > > > rendering command that we can't do. 
> > > > > > > > I think it's useful as a method for implementing GL_OUT_OF_MEMORY, but > > > > the pipe driver should: > > > > > > > > a) not rely on validate() being called - ie it is just a query, not a > > > > mandatory prepare-to-render notification. > > > > > > > > b) make a best effort to render in subsequent draw() calls, even if > > > > validate has been called - ie. it is just a query, does not modify pipe > > > > driver behaviour. > > > > > > > > > I am not really interested in doing software fallback. What > > > > > would be nice is someone testing with closed source driver > > > > > what happen when you try to draw somethings the GPU can't > > > > > handle. Maybe even people from closed source world can give > > > > > us a clue on what they are doing in front of such situation :) > > > > > > > > I've seen various things, but usually they try to render something even > > > > if its incorrect. > > > > > > It's always interesting to think about the OpenGL mechanisms and > > > understand why they do things a particular way. > > > > > > In this case we bump into a fairly interesting bit of OpenGL -- the > > > asynchronous error mechanism. Why doesn't OpenGL just return > > > OUT_OF_MEMORY from glBegin() or glDrawElemenets()? Basically GL is > > > preserving asynchronous operations between the application and the GL > > > implementation - eg for indirect contexts, but also for cases where the > > > errors are generated by the memory manager or even the hardware long > > > after the actual draw() call itself. > > > > > > I think we probably will face the same issues in gallium. Nobody has > > > tried to do a "remote gallium" yet, but any sort of synchronous > > > round-trip query (like validate, or return codes from draw calls) will > > > be a pain to accommodate in that environment. Likewise errors that are > > > raised by TTM at command-buffer submission would be better handled by an > > > asynchronous error mechanism. 
> > > > > > For now, validate() sounds fine, but at some point in the future a > > > less-synchronous version may be appealing. > > > > > > Keith > > > > > > > Note that i likely didn't described well my error case, in my pipe > > driver i can very easily split rendering operation so vbo size is > > virtualy not a problem. The error case i was thinking about is > > for instance binding 16 3d 8096x8096x8096 textures ie something > > the hw doesn't have enough memory to achieve no matter in how > > small piece we break draw command. > > > > > Wouldn't it be better and easier to just fail to allocate such large > textures? I think the memory manager should made more humble when exceeding > the VRAM size. > > > I guess i should first fine tune the texture size limit i am The point is allocating 1 or 2, 3, ... of such texture might be doable but using more than 1 at a time might not be doable. So you can't know at alloc time if you will be able to use or not the texture until you know what are the others states. Cheers, Jerome |
From: Marek O. <ma...@gm...> - 2010-03-01 15:21:57
|
On Mon, Mar 1, 2010 at 3:02 PM, Jerome Glisse <gl...@fr...>wrote: > On Mon, Mar 01, 2010 at 12:24:19PM +0000, Keith Whitwell wrote: > > On Mon, 2010-03-01 at 04:07 -0800, Keith Whitwell wrote: > > > On Mon, 2010-03-01 at 03:55 -0800, Jerome Glisse wrote: > > > > On Mon, Mar 01, 2010 at 11:46:08AM +0000, Keith Whitwell wrote: > > > > > On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote: > > > > > > On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote: > > > > > > > Hi, > > > > > > > > > > > > > > I am a bit puzzled, how a pipe driver should handle > > > > > > > draw callback failure ? On radeon (pretty sure nouveau > > > > > > > or intel hit the same issue) we can only know when one > > > > > > > of the draw_* context callback is call if we can do > > > > > > > the rendering or not. > > > > > > > > > > > > > > The failure here is dictated by memory constraint, ie > > > > > > > if user bind big texture, big vbo ... we might not have > > > > > > > enough GPU address space to bind all the desired object > > > > > > > (even for drawing a single triangle) ? > > > > > > > > > > > > > > What should we do ? None of the draw callback can return > > > > > > > a value ? Maybe for a GL stack tracker we should report > > > > > > > GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > > > > > > > is i think pipe driver are missing something here. Any > > > > > > > idea ? Thought ? Is there already a plan to address that ? :) > > > > > > > > > > > > Gallium draw calls had return codes before. They were used for > the > > > > > > fallover driver IIRC and were recently deleted. > > > > > > > > > > > > Either we put the return codes back, or we add a new > > > > > > pipe_context::validate() that would ensure that all necessary > conditions > > > > > > to draw successfully are met. 
> > > > > > > > > > > > Putting return codes on bind time won't work, because one can't > set all > > > > > > atoms simultaneously -- atoms are set one by one, so when one's > setting > > > > > > the state there are state combinations which may exceed the > available > > > > > > resources but that are never drawn with. E.g. you have a huge VB > you > > > > > > finished drawing, and then you switch to drawing with a small VB > with a > > > > > > huge texture, but in between it may happen that you have both > bound > > > > > > simultaneously. > > > > > > > > > > > > If ignoring is not an alternative, then I'd prefer a validate > call. > > > > > > > > > > > > Whether to fallback to software or not -- it seems to me it's > really a > > > > > > problem that must be decided case by case. Drivers are supposed > to be > > > > > > useful -- if hardware is so limited that it can't do anything > useful > > > > > > then falling back to software is sensible. I don't think that a > driver > > > > > > should support every imaginable thing -- apps should check > errors, and > > > > > > users should ensure they have enough hardware resources for the > > > > > > workloads they want. > > > > > > > > > > > > Personally I think state trackers shouldn't emulate anything with > CPU > > > > > > beyond unsupported pixel formats. If a hardware is so limited > that in > > > > > > need CPU assistence this should taken care transparently by the > pipe > > > > > > driver. Nevertheless we can and should provide auxiliary > libraries like > > > > > > draw to simplify the pipe driver implementation. > > > > > > > > > > > > > > > My opinion on this is similar: the pipe driver is responsible for > > > > > getting the rendering done. If it needs to pull in a fallback > module to > > > > > achieve that, it is the pipe driver's responsibility to do so. 
> > > > > > > > > > Understanding the limitations of hardware and the best ways to work > > > > > around those limitations is really something that the driver itself > is > > > > > best positioned to handle. > > > > > > > > > > The slight quirk of OpenGL is that there are some conditions where > > > > > theoretically the driver is allowed to throw an OUT_OF_MEMORY error > (or > > > > > similar) and not render. This option isn't really available to > gallium > > > > > drivers, mainly because we don't know inside gallium whether the > API > > > > > permits this. Unfortunately, even in OpenGL, very few applications > > > > > actually check the error conditions, or do anything sensible when > they > > > > > fail. > > > > > > > > > > I don't really like the idea of pipe drivers being able to fail > render > > > > > calls, as it means that every state tracker and every bit of > utility > > > > > code that issues a pipe->draw() call will have to check the return > code > > > > > and hook in fallback code on failure. > > > > > > > > > > One interesting thing would be to consider creating a layer that > exposes > > > > > a pipe_context interface to the state tracker, but revives some of > the > > > > > failover ideas internally - maybe as a first step just lifting the > draw > > > > > module usage up to a layer above the actual hardware driver. > > > > > > > > > > Keith > > > > > > > > > > > > > So you don't like the pipe_context::validate() of Jose ? My > > > > taste goes to the pipe_context::validate() and having state > > > > tracker setting the proper flag according to the API they > > > > support (GL_OUT_OF_MEMORY for GL), this means just drop > > > > rendering command that we can't do. > > > > > > I think it's useful as a method for implementing GL_OUT_OF_MEMORY, but > > > the pipe driver should: > > > > > > a) not rely on validate() being called - ie it is just a query, not a > > > mandatory prepare-to-render notification. 
> > > > > > b) make a best effort to render in subsequent draw() calls, even if > > > validate has been called - ie. it is just a query, does not modify pipe > > > driver behaviour. > > > > > > > I am not really interested in doing software fallback. What > > > > would be nice is someone testing with closed source driver > > > > what happen when you try to draw somethings the GPU can't > > > > handle. Maybe even people from closed source world can give > > > > us a clue on what they are doing in front of such situation :) > > > > > > I've seen various things, but usually they try to render something even > > > if its incorrect. > > > > It's always interesting to think about the OpenGL mechanisms and > > understand why they do things a particular way. > > > > In this case we bump into a fairly interesting bit of OpenGL -- the > > asynchronous error mechanism. Why doesn't OpenGL just return > > OUT_OF_MEMORY from glBegin() or glDrawElemenets()? Basically GL is > > preserving asynchronous operations between the application and the GL > > implementation - eg for indirect contexts, but also for cases where the > > errors are generated by the memory manager or even the hardware long > > after the actual draw() call itself. > > > > I think we probably will face the same issues in gallium. Nobody has > > tried to do a "remote gallium" yet, but any sort of synchronous > > round-trip query (like validate, or return codes from draw calls) will > > be a pain to accommodate in that environment. Likewise errors that are > > raised by TTM at command-buffer submission would be better handled by an > > asynchronous error mechanism. > > > > For now, validate() sounds fine, but at some point in the future a > > less-synchronous version may be appealing. > > > > Keith > > > > Note that i likely didn't described well my error case, in my pipe > driver i can very easily split rendering operation so vbo size is > virtualy not a problem. 
The error case i was thinking about is > for instance binding 16 3d 8096x8096x8096 textures ie something > the hw doesn't have enough memory to achieve no matter in how > small piece we break draw command. > > Wouldn't it be better and easier to just fail to allocate such large textures? I think the memory manager should made more humble when exceeding the VRAM size. I guess i should first fine tune the texture size limit i am > reporting, but i don't want to cripple app that use 1 4096x4096 > texture in order to forbid app which use 16 of those. > > Cheers, > Jerome |
From: Jerome G. <gl...@fr...> - 2010-03-01 15:08:47
|
On Mon, Mar 01, 2010 at 03:24:51PM +0100, Olivier Galibert wrote: > On Mon, Mar 01, 2010 at 02:57:08PM +0100, Jerome Glisse wrote: > > validate function i have in mind as virtualy a zero cost (it will > > boil down to a bunch of add followed by a test) and what validate > > would do would be done by draw operation anyway. > > Not "would", "will". You have no way to be sure nothing changed > between validate and draw, unless you're happy with an interface that > will always be unusable for multithreading. So you'll do it twice for > something that will always tell "yes" except once in a blue moon. > > And if you want to be sure that validate passes implies draw will > work, it's often more than a bunch of adds. Allocations can fail even > if the apparent free space is enough. See "fragmentation" and > "alignment", among others. > > Morality: reduce the number of operations in the normal (often called > fast) path *first*, ask questions later. Trying to predict failures > is both unreliable and costly. Xorg/mesa is perceived slow enough as > it is. > > OG. > Do you have solution/proposal/idea on how to handle the situation i am describing ? Cheers, Jerome |
From: <bug...@fr...> - 2010-03-01 14:42:18
|
http://bugs.freedesktop.org/show_bug.cgi?id=26820

Summary: Sharing contexts crashes on Windows
Product: Mesa
Version: 7.6
Platform: All
OS/Version: Windows (All)
Status: NEW
Severity: critical
Priority: high
Component: Mesa core
AssignedTo: mes...@li...
ReportedBy: s.z...@go...

It is quite simple to reproduce on Windows using the GDI driver (I did not test the gallium driver, but it should suffer from the same problem):

initialization:
HGLRC rc_to_share = wglCreateContext( hDC );
HGLRC rc_2 = wglCreateContext( hDC );
wglShareLists(rc_to_share, rc_2);

rendering:
glBegin( GL_TRIANGLES ); => CRASH

The reason seems to be a bug in the _mesa_share_state implementation. It points the rc_2 context's "Shared" attribute to the "Shared" instance of rc_to_share. It then sees that the old "Shared" instance of rc_2 has a refCount of 0 and deletes the associated memory. Unfortunately the GLcontext rc_2 has some attributes that still point to the just-freed memory. When calling glBegin the application "crashes" due to an assert in vbo_exec_vtx_map. There you will find that vbo_exec_context::vtx::bufferobj still points to GLcontext::Shared->NullBufferObj, but of the rc_2 context rather than the rc_to_share context. In other words, the bufferobj in the vbo_exec_context still points to the memory location that has been freed; it should point to rc_to_share's Shared::NullBufferObj. I cannot tell whether other attributes of those structures are affected, but evidently the _mesa_share_state function only takes care of the gl_shared_state contents and does NOT adjust other structures' fields that point to the same addresses as the (then freed) pointers in gl_shared_state, so there might be more problems around the corner which I have not investigated. As a small workaround to share contexts on Windows I have added my own function to the GDI driver. 
The _mesa_initialize_context function (internally used by wglCreateContext via WMesaCreateContext) already has a share_lists parameter, so I wrote a function myWglCreateSharingContext(HDC, HGLRC) to feed it. Apart from that, its implementation is the same as wglCreateContext(HDC). This workaround works without any problems, which suggests that only sharing between two already-existing contexts is affected; creating a new context as a sharing context right from the start (as you usually do on X11) does not suffer from this problem. |
From: Olivier G. <gal...@po...> - 2010-03-01 14:25:01
|
On Mon, Mar 01, 2010 at 02:57:08PM +0100, Jerome Glisse wrote: > validate function i have in mind as virtualy a zero cost (it will > boil down to a bunch of add followed by a test) and what validate > would do would be done by draw operation anyway. Not "would", "will". You have no way to be sure nothing changed between validate and draw, unless you're happy with an interface that will always be unusable for multithreading. So you'll do it twice for something that will always tell "yes" except once in a blue moon. And if you want to be sure that validate passes implies draw will work, it's often more than a bunch of adds. Allocations can fail even if the apparent free space is enough. See "fragmentation" and "alignment", among others. Morality: reduce the number of operations in the normal (often called fast) path *first*, ask questions later. Trying to predict failures is both unreliable and costly. Xorg/mesa is perceived slow enough as it is. OG. |
From: Jakob B. <ja...@vm...> - 2010-03-01 14:21:06
|
On 1 mar 2010, at 13.05, Keith Whitwell wrote: > On Thu, 2010-02-25 at 20:46 -0800, Jakob Bornecrantz wrote: >> Howdy >> >> I'm hoping to merge the gallium-winsys-handle branch to master this >> weekend or early next week. The branch adds two new pipe screen >> functions to be used by the co state tracker (state tracker manager in >> new st_api.h speak). The branch also adds a new >> PIPE_TEXTURE_USAGE_SHARABLE flag telling the driver >> that the texture might be used for sharing. Even more so it also >> renames PIPE_TEXTURE_USAGE_PRIMARY to PIPE_TEXTURE_USAGE_SCANOUT just >> to make things clear. I also ported the egl,xorg,dri state trackers >> to >> the new interface and removed unneeded functions from drm_api, making >> it even smaller; however I only ported i915g, i965 and svga to the >> new >> API. Looking at the commits where I port the other drivers it should >> be pretty clear what to do. > > Jakob, > > I think it's probably best if you make an attempt at the changes to > the > other drivers and then ask people to test your changes. This is how > we've been doing other interface changes and it seems to be a good > balance. Ok I'll take a stab at it. Cheers Jakob. |
From: Jerome G. <gl...@fr...> - 2010-03-01 14:08:18
|
On Mon, Mar 01, 2010 at 12:24:19PM +0000, Keith Whitwell wrote: > On Mon, 2010-03-01 at 04:07 -0800, Keith Whitwell wrote: > > On Mon, 2010-03-01 at 03:55 -0800, Jerome Glisse wrote: > > > On Mon, Mar 01, 2010 at 11:46:08AM +0000, Keith Whitwell wrote: > > > > On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote: > > > > > On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote: > > > > > > Hi, > > > > > > > > > > > > I am a bit puzzled, how a pipe driver should handle > > > > > > draw callback failure ? On radeon (pretty sure nouveau > > > > > > or intel hit the same issue) we can only know when one > > > > > > of the draw_* context callback is call if we can do > > > > > > the rendering or not. > > > > > > > > > > > > The failure here is dictated by memory constraint, ie > > > > > > if user bind big texture, big vbo ... we might not have > > > > > > enough GPU address space to bind all the desired object > > > > > > (even for drawing a single triangle) ? > > > > > > > > > > > > What should we do ? None of the draw callback can return > > > > > > a value ? Maybe for a GL stack tracker we should report > > > > > > GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > > > > > > is i think pipe driver are missing something here. Any > > > > > > idea ? Thought ? Is there already a plan to address that ? :) > > > > > > > > > > Gallium draw calls had return codes before. They were used for the > > > > > fallover driver IIRC and were recently deleted. > > > > > > > > > > Either we put the return codes back, or we add a new > > > > > pipe_context::validate() that would ensure that all necessary conditions > > > > > to draw successfully are met. > > > > > > > > > > Putting return codes on bind time won't work, because one can't set all > > > > > atoms simultaneously -- atoms are set one by one, so when one's setting > > > > > the state there are state combinations which may exceed the available > > > > > resources but that are never drawn with. E.g. 
you have a huge VB you > > > > > finished drawing, and then you switch to drawing with a small VB with a > > > > > huge texture, but in between it may happen that you have both bound > > > > > simultaneously. > > > > > > > > > > If ignoring is not an alternative, then I'd prefer a validate call. > > > > > > > > > > Whether to fallback to software or not -- it seems to me it's really a > > > > > problem that must be decided case by case. Drivers are supposed to be > > > > > useful -- if hardware is so limited that it can't do anything useful > > > > > then falling back to software is sensible. I don't think that a driver > > > > > should support every imaginable thing -- apps should check errors, and > > > > > users should ensure they have enough hardware resources for the > > > > > workloads they want. > > > > > > > > > > Personally I think state trackers shouldn't emulate anything with CPU > > > > > beyond unsupported pixel formats. If a hardware is so limited that in > > > > > need CPU assistence this should taken care transparently by the pipe > > > > > driver. Nevertheless we can and should provide auxiliary libraries like > > > > > draw to simplify the pipe driver implementation. > > > > > > > > > > > > My opinion on this is similar: the pipe driver is responsible for > > > > getting the rendering done. If it needs to pull in a fallback module to > > > > achieve that, it is the pipe driver's responsibility to do so. > > > > > > > > Understanding the limitations of hardware and the best ways to work > > > > around those limitations is really something that the driver itself is > > > > best positioned to handle. > > > > > > > > The slight quirk of OpenGL is that there are some conditions where > > > > theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or > > > > similar) and not render. This option isn't really available to gallium > > > > drivers, mainly because we don't know inside gallium whether the API > > > > permits this. 
Unfortunately, even in OpenGL, very few applications > > > > actually check the error conditions, or do anything sensible when they > > > > fail. > > > > > > > > I don't really like the idea of pipe drivers being able to fail render > > > > calls, as it means that every state tracker and every bit of utility > > > > code that issues a pipe->draw() call will have to check the return code > > > > and hook in fallback code on failure. > > > > > > > > One interesting thing would be to consider creating a layer that exposes > > > > a pipe_context interface to the state tracker, but revives some of the > > > > failover ideas internally - maybe as a first step just lifting the draw > > > > module usage up to a layer above the actual hardware driver. > > > > > > > > Keith > > > > > > > > > > So you don't like the pipe_context::validate() of Jose ? My > > > taste goes to the pipe_context::validate() and having state > > > tracker setting the proper flag according to the API they > > > support (GL_OUT_OF_MEMORY for GL), this means just drop > > > rendering command that we can't do. > > > > I think it's useful as a method for implementing GL_OUT_OF_MEMORY, but > > the pipe driver should: > > > > a) not rely on validate() being called - ie it is just a query, not a > > mandatory prepare-to-render notification. > > > > b) make a best effort to render in subsequent draw() calls, even if > > validate has been called - ie. it is just a query, does not modify pipe > > driver behaviour. > > > > > I am not really interested in doing software fallback. What > > > would be nice is someone testing with closed source driver > > > what happen when you try to draw somethings the GPU can't > > > handle. Maybe even people from closed source world can give > > > us a clue on what they are doing in front of such situation :) > > > > I've seen various things, but usually they try to render something even > > if its incorrect. 
> > It's always interesting to think about the OpenGL mechanisms and > understand why they do things a particular way. > > In this case we bump into a fairly interesting bit of OpenGL -- the > asynchronous error mechanism. Why doesn't OpenGL just return > OUT_OF_MEMORY from glBegin() or glDrawElements()? Basically GL is > preserving asynchronous operations between the application and the GL > implementation - eg for indirect contexts, but also for cases where the > errors are generated by the memory manager or even the hardware long > after the actual draw() call itself. > > I think we probably will face the same issues in gallium. Nobody has > tried to do a "remote gallium" yet, but any sort of synchronous > round-trip query (like validate, or return codes from draw calls) will > be a pain to accommodate in that environment. Likewise errors that are > raised by TTM at command-buffer submission would be better handled by an > asynchronous error mechanism. > > For now, validate() sounds fine, but at some point in the future a > less-synchronous version may be appealing. > > Keith > Note that I likely didn't describe my error case well: in my pipe driver I can very easily split rendering operations, so VBO size is virtually not a problem. The error case I was thinking about is, for instance, binding 16 3D 8096x8096x8096 textures, i.e. something the hardware doesn't have enough memory to achieve no matter how small the pieces we break the draw command into. I guess I should first fine-tune the texture size limit I am reporting, but I don't want to cripple an app that uses one 4096x4096 texture in order to forbid an app which uses 16 of them. Cheers, Jerome |
From: Jerome G. <gl...@fr...> - 2010-03-01 14:03:32
|
On Mon, Mar 01, 2010 at 01:40:37PM +0100, Olivier Galibert wrote: > On Mon, Mar 01, 2010 at 12:55:09PM +0100, Jerome Glisse wrote: > > So you don't like the pipe_context::validate() of Jose ? My > > taste goes to the pipe_context::validate() and having state > > tracker setting the proper flag according to the API they > > support (GL_OUT_OF_MEMORY for GL), this means just drop > > rendering command that we can't do. > > validate-then-do is a race condition waiting to happen. Validate is > also a somewhat costly operation to do, and 99.999% of the time for > nothing. You don't want to optimize for the error case, and that's > what validate *is*. > > OG. > The validate function I have in mind has virtually zero cost (it will boil down to a bunch of adds followed by a test), and what validate would do would be done by the draw operation anyway. Cheers, Jerome |
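Jerome's description of validate() as "a bunch of adds followed by a test" can be pictured roughly as follows. This is a hedged sketch: the struct and field names are invented for illustration and are not part of the real Gallium interface.

```c
/* Sketch of a near-zero-cost validate(): sum the sizes of the currently
 * bound resources and test the total against the GPU address-space
 * budget.  All names here are invented for illustration. */
#include <stdbool.h>
#include <stddef.h>

struct bound_state {
    size_t vbo_bytes;           /* bound vertex buffers      */
    size_t texture_bytes;       /* bound textures            */
    size_t render_target_bytes; /* bound color/depth targets */
};

/* Returns true when the bound state fits the budget, i.e. a subsequent
 * draw call is expected to succeed. */
static bool validate(const struct bound_state *s, size_t budget_bytes)
{
    size_t total = s->vbo_bytes + s->texture_bytes + s->render_target_bytes;
    return total <= budget_bytes;
}
```

Because the driver already tracks these sizes at bind time, the query is just the additions and a single comparison, which is why it can be treated as free relative to the draw itself.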
From: Keith W. <ke...@vm...> - 2010-03-01 13:17:44
|
On Wed, 2010-02-24 at 16:48 -0800, Mike Stroyan wrote: > The ifdef changes to allow building with older libdrm and glproto > header files are working. > But the configure.ac requirements are still aggressive, requiring > LIBDRM_REQUIRED=2.4.15 > and > GLPROTO_REQUIRED=1.4.11 > Could those back down now that the source code itself is more flexible? > Those version requirements are not met by debian testing or ubuntu 9.10. > I'd be fine with this. Keith |
From: Keith W. <ke...@vm...> - 2010-03-01 13:06:06
|
On Thu, 2010-02-25 at 20:46 -0800, Jakob Bornecrantz wrote: > Howdy > > I'm hoping to merge the gallium-winsys-handle branch to master this > weekend or early next week. The branch adds two new pipe screen > functions to be used by the co state tracker (state tracker manager in > new st_api.h speak). The branch also adds a new > PIPE_TEXTURE_USAGE_SHARABLE flag telling the driver that the texture > might be used for sharing. It also > renames PIPE_TEXTURE_USAGE_PRIMARY to PIPE_TEXTURE_USAGE_SCANOUT just > to make things clear. I also ported the egl, xorg and dri state trackers to > the new interface and removed unneeded functions from drm_api, making > it even smaller; however, I only ported i915g, i965 and svga to the new > API. Looking at the commits where I ported those drivers, it should > be pretty clear what to do. Jakob, I think it's probably best if you make an attempt at the changes to the other drivers and then ask people to test your changes. This is how we've been doing other interface changes and it seems to be a good balance. Keith |
From: Luca B. <luc...@gm...> - 2010-03-01 12:45:40
|
Falling back to CPU rendering, while respecting the OpenGL spec, is likely going to be unusably slow in most cases and thus not really better for real usage than just not rendering. I think the only way to have a usable fallback mechanism is to do fallbacks with the GPU, by automatically introducing multiple rendering passes. For instance, if you were to run each fragment shader instruction in a separate pass (using floating point targets), then you would never have more than two texture operands. If the render targets are too large, you can also just split them into multiple portions, and you can limit texture size so that 2 textures plus a render target portion always fit in memory. Alternatively, you can split textures too, try to statically deduce the referenced portion, and KIL if you guessed wrong, combined with occlusion queries to check whether you KILled. Control flow complicates things, but you can probably just put the execution mask in a stencil buffer or secondary render target/texture, and use occlusion queries to find out if it is empty. Of course, this requires writing and testing a very significant amount of complex code (probably including a TGSI->LLVM->TGSI infrastructure, since you likely need nontrivial compiler techniques to do this optimally). However, we may need part of this anyway to support multi-GPU configurations, and it also allows emulating advanced shader capabilities on less capable hardware (e.g. shaders with more instructions or temporaries than the hardware limits, or SM3+/GLSL shaders on SM2 hardware), with some hope of getting usable performance. |
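The simplest of the ideas above, splitting an oversized render target into portions that fit the budget, amounts to little more than a ceiling division. A rough sketch, with all sizes and names invented for illustration:

```c
/* Sketch of splitting a render target into horizontal bands so that
 * each band fits within a memory budget.  Returns the number of bands
 * needed, or 0 when even a single row exceeds the budget. */
#include <stddef.h>

static size_t num_bands(size_t width, size_t height,
                        size_t bytes_per_pixel, size_t budget_bytes)
{
    size_t row_bytes = width * bytes_per_pixel;
    if (row_bytes == 0 || row_bytes > budget_bytes)
        return 0;                          /* nothing fits */
    size_t rows_per_band = budget_bytes / row_bytes;
    return (height + rows_per_band - 1) / rows_per_band; /* ceil div */
}
```

Each band would then be rendered in its own pass with the scissor/viewport adjusted accordingly; the hard parts Luca mentions (splitting textures, execution masks) are not covered by this toy calculation.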
From: Keith W. <ke...@vm...> - 2010-03-01 12:43:36
|
On Mon, 2010-03-01 at 04:07 -0800, Keith Whitwell wrote: > On Mon, 2010-03-01 at 03:55 -0800, Jerome Glisse wrote: > > On Mon, Mar 01, 2010 at 11:46:08AM +0000, Keith Whitwell wrote: > > > On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote: > > > > On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote: > > > > > Hi, > > > > > > > > > > I am a bit puzzled, how a pipe driver should handle > > > > > draw callback failure ? On radeon (pretty sure nouveau > > > > > or intel hit the same issue) we can only know when one > > > > > of the draw_* context callback is call if we can do > > > > > the rendering or not. > > > > > > > > > > The failure here is dictated by memory constraint, ie > > > > > if user bind big texture, big vbo ... we might not have > > > > > enough GPU address space to bind all the desired object > > > > > (even for drawing a single triangle) ? > > > > > > > > > > What should we do ? None of the draw callback can return > > > > > a value ? Maybe for a GL stack tracker we should report > > > > > GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > > > > > is i think pipe driver are missing something here. Any > > > > > idea ? Thought ? Is there already a plan to address that ? :) > > > > > > > > Gallium draw calls had return codes before. They were used for the > > > > fallover driver IIRC and were recently deleted. > > > > > > > > Either we put the return codes back, or we add a new > > > > pipe_context::validate() that would ensure that all necessary conditions > > > > to draw successfully are met. > > > > > > > > Putting return codes on bind time won't work, because one can't set all > > > > atoms simultaneously -- atoms are set one by one, so when one's setting > > > > the state there are state combinations which may exceed the available > > > > resources but that are never drawn with. E.g. 
you have a huge VB you > > > > finished drawing, and then you switch to drawing with a small VB with a > > > > huge texture, but in between it may happen that you have both bound > > > > simultaneously. > > > > > > > > If ignoring is not an alternative, then I'd prefer a validate call. > > > > > > > > Whether to fallback to software or not -- it seems to me it's really a > > > > problem that must be decided case by case. Drivers are supposed to be > > > > useful -- if hardware is so limited that it can't do anything useful > > > > then falling back to software is sensible. I don't think that a driver > > > > should support every imaginable thing -- apps should check errors, and > > > > users should ensure they have enough hardware resources for the > > > > workloads they want. > > > > > > > > Personally I think state trackers shouldn't emulate anything with CPU > > > > beyond unsupported pixel formats. If a hardware is so limited that in > > > > need CPU assistence this should taken care transparently by the pipe > > > > driver. Nevertheless we can and should provide auxiliary libraries like > > > > draw to simplify the pipe driver implementation. > > > > > > > > > My opinion on this is similar: the pipe driver is responsible for > > > getting the rendering done. If it needs to pull in a fallback module to > > > achieve that, it is the pipe driver's responsibility to do so. > > > > > > Understanding the limitations of hardware and the best ways to work > > > around those limitations is really something that the driver itself is > > > best positioned to handle. > > > > > > The slight quirk of OpenGL is that there are some conditions where > > > theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or > > > similar) and not render. This option isn't really available to gallium > > > drivers, mainly because we don't know inside gallium whether the API > > > permits this. 
Unfortunately, even in OpenGL, very few applications > > > actually check the error conditions, or do anything sensible when they > > > fail. > > > > > > I don't really like the idea of pipe drivers being able to fail render > > > calls, as it means that every state tracker and every bit of utility > > > code that issues a pipe->draw() call will have to check the return code > > > and hook in fallback code on failure. > > > > > > One interesting thing would be to consider creating a layer that exposes > > > a pipe_context interface to the state tracker, but revives some of the > > > failover ideas internally - maybe as a first step just lifting the draw > > > module usage up to a layer above the actual hardware driver. > > > > > > Keith > > > > > > > So you don't like the pipe_context::validate() of Jose ? My > > taste goes to the pipe_context::validate() and having state > > tracker setting the proper flag according to the API they > > support (GL_OUT_OF_MEMORY for GL), this means just drop > > rendering command that we can't do. > > I think it's useful as a method for implementing GL_OUT_OF_MEMORY, but > the pipe driver should: > > a) not rely on validate() being called - ie it is just a query, not a > mandatory prepare-to-render notification. > > b) make a best effort to render in subsequent draw() calls, even if > validate has been called - ie. it is just a query, does not modify pipe > driver behaviour. > > > I am not really interested in doing software fallback. What > > would be nice is someone testing with closed source driver > > what happen when you try to draw somethings the GPU can't > > handle. Maybe even people from closed source world can give > > us a clue on what they are doing in front of such situation :) > > I've seen various things, but usually they try to render something even > if its incorrect. It's always interesting to think about the OpenGL mechanisms and understand why they do things a particular way. 
In this case we bump into a fairly interesting bit of OpenGL -- the asynchronous error mechanism. Why doesn't OpenGL just return OUT_OF_MEMORY from glBegin() or glDrawElements()? Basically GL is preserving asynchronous operations between the application and the GL implementation - eg for indirect contexts, but also for cases where the errors are generated by the memory manager or even the hardware long after the actual draw() call itself. I think we probably will face the same issues in gallium. Nobody has tried to do a "remote gallium" yet, but any sort of synchronous round-trip query (like validate, or return codes from draw calls) will be a pain to accommodate in that environment. Likewise errors that are raised by TTM at command-buffer submission would be better handled by an asynchronous error mechanism. For now, validate() sounds fine, but at some point in the future a less-synchronous version may be appealing. Keith |
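The asynchronous error mechanism Keith describes can be pictured as a sticky per-context error flag that any part of the driver may raise long after the draw call, and that the application polls later, much as glGetError() behaves in GL. A minimal sketch with invented names (this is not a real Gallium or GL interface):

```c
/* Sticky, asynchronously raised error flag: the draw path never returns
 * an error synchronously; failures detected later (e.g. at command
 * buffer submission) set the flag, and the application polls it. */
enum ctx_error { CTX_NO_ERROR = 0, CTX_OUT_OF_MEMORY };

struct ctx {
    enum ctx_error first_error; /* keeps only the first error raised */
};

/* Called from deep inside the driver, possibly long after draw(). */
static void ctx_raise(struct ctx *c, enum ctx_error e)
{
    if (c->first_error == CTX_NO_ERROR)
        c->first_error = e;
}

/* Application-visible query; reading clears the flag, as in GL. */
static enum ctx_error ctx_get_error(struct ctx *c)
{
    enum ctx_error e = c->first_error;
    c->first_error = CTX_NO_ERROR;
    return e;
}
```

Nothing here requires a round trip at draw time, which is what makes this style friendlier to indirect/remote operation than a synchronous validate() or a return code from draw().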
From: Olivier G. <gal...@po...> - 2010-03-01 12:40:56
|
On Mon, Mar 01, 2010 at 12:55:09PM +0100, Jerome Glisse wrote: > So you don't like the pipe_context::validate() of Jose ? My > taste goes to the pipe_context::validate() and having state > tracker setting the proper flag according to the API they > support (GL_OUT_OF_MEMORY for GL), this means just drop > rendering command that we can't do. validate-then-do is a race condition waiting to happen. Validate is also a somewhat costly operation to do, and 99.999% of the time for nothing. You don't want to optimize for the error case, and that's what validate *is*. OG. |
From: Keith W. <ke...@vm...> - 2010-03-01 12:07:27
|
On Mon, 2010-03-01 at 03:55 -0800, Jerome Glisse wrote: > On Mon, Mar 01, 2010 at 11:46:08AM +0000, Keith Whitwell wrote: > > On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote: > > > On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote: > > > > Hi, > > > > > > > > I am a bit puzzled, how a pipe driver should handle > > > > draw callback failure ? On radeon (pretty sure nouveau > > > > or intel hit the same issue) we can only know when one > > > > of the draw_* context callback is call if we can do > > > > the rendering or not. > > > > > > > > The failure here is dictated by memory constraint, ie > > > > if user bind big texture, big vbo ... we might not have > > > > enough GPU address space to bind all the desired object > > > > (even for drawing a single triangle) ? > > > > > > > > What should we do ? None of the draw callback can return > > > > a value ? Maybe for a GL stack tracker we should report > > > > GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > > > > is i think pipe driver are missing something here. Any > > > > idea ? Thought ? Is there already a plan to address that ? :) > > > > > > Gallium draw calls had return codes before. They were used for the > > > fallover driver IIRC and were recently deleted. > > > > > > Either we put the return codes back, or we add a new > > > pipe_context::validate() that would ensure that all necessary conditions > > > to draw successfully are met. > > > > > > Putting return codes on bind time won't work, because one can't set all > > > atoms simultaneously -- atoms are set one by one, so when one's setting > > > the state there are state combinations which may exceed the available > > > resources but that are never drawn with. E.g. you have a huge VB you > > > finished drawing, and then you switch to drawing with a small VB with a > > > huge texture, but in between it may happen that you have both bound > > > simultaneously. > > > > > > If ignoring is not an alternative, then I'd prefer a validate call. 
> > > > > > Whether to fallback to software or not -- it seems to me it's really a > > > problem that must be decided case by case. Drivers are supposed to be > > > useful -- if hardware is so limited that it can't do anything useful > > > then falling back to software is sensible. I don't think that a driver > > > should support every imaginable thing -- apps should check errors, and > > > users should ensure they have enough hardware resources for the > > > workloads they want. > > > > > > Personally I think state trackers shouldn't emulate anything with CPU > > > beyond unsupported pixel formats. If a hardware is so limited that in > > > need CPU assistence this should taken care transparently by the pipe > > > driver. Nevertheless we can and should provide auxiliary libraries like > > > draw to simplify the pipe driver implementation. > > > > > > My opinion on this is similar: the pipe driver is responsible for > > getting the rendering done. If it needs to pull in a fallback module to > > achieve that, it is the pipe driver's responsibility to do so. > > > > Understanding the limitations of hardware and the best ways to work > > around those limitations is really something that the driver itself is > > best positioned to handle. > > > > The slight quirk of OpenGL is that there are some conditions where > > theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or > > similar) and not render. This option isn't really available to gallium > > drivers, mainly because we don't know inside gallium whether the API > > permits this. Unfortunately, even in OpenGL, very few applications > > actually check the error conditions, or do anything sensible when they > > fail. > > > > I don't really like the idea of pipe drivers being able to fail render > > calls, as it means that every state tracker and every bit of utility > > code that issues a pipe->draw() call will have to check the return code > > and hook in fallback code on failure. 
> > > > One interesting thing would be to consider creating a layer that exposes > > a pipe_context interface to the state tracker, but revives some of the > > failover ideas internally - maybe as a first step just lifting the draw > > module usage up to a layer above the actual hardware driver. > > > > Keith > > > > So you don't like the pipe_context::validate() of Jose ? My > taste goes to the pipe_context::validate() and having state > tracker setting the proper flag according to the API they > support (GL_OUT_OF_MEMORY for GL), this means just drop > rendering command that we can't do. I think it's useful as a method for implementing GL_OUT_OF_MEMORY, but the pipe driver should: a) not rely on validate() being called - ie it is just a query, not a mandatory prepare-to-render notification. b) make a best effort to render in subsequent draw() calls, even if validate has been called - ie. it is just a query, does not modify pipe driver behaviour. > I am not really interested in doing software fallback. What > would be nice is someone testing with closed source driver > what happen when you try to draw somethings the GPU can't > handle. Maybe even people from closed source world can give > us a clue on what they are doing in front of such situation :) I've seen various things, but usually they try to render something even if its incorrect. Keith |
From: Jerome G. <gl...@fr...> - 2010-03-01 11:55:34
|
On Mon, Mar 01, 2010 at 11:46:08AM +0000, Keith Whitwell wrote: > On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote: > > On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote: > > > Hi, > > > > > > I am a bit puzzled, how a pipe driver should handle > > > draw callback failure ? On radeon (pretty sure nouveau > > > or intel hit the same issue) we can only know when one > > > of the draw_* context callback is call if we can do > > > the rendering or not. > > > > > > The failure here is dictated by memory constraint, ie > > > if user bind big texture, big vbo ... we might not have > > > enough GPU address space to bind all the desired object > > > (even for drawing a single triangle) ? > > > > > > What should we do ? None of the draw callback can return > > > a value ? Maybe for a GL stack tracker we should report > > > GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > > > is i think pipe driver are missing something here. Any > > > idea ? Thought ? Is there already a plan to address that ? :) > > > > Gallium draw calls had return codes before. They were used for the > > fallover driver IIRC and were recently deleted. > > > > Either we put the return codes back, or we add a new > > pipe_context::validate() that would ensure that all necessary conditions > > to draw successfully are met. > > > > Putting return codes on bind time won't work, because one can't set all > > atoms simultaneously -- atoms are set one by one, so when one's setting > > the state there are state combinations which may exceed the available > > resources but that are never drawn with. E.g. you have a huge VB you > > finished drawing, and then you switch to drawing with a small VB with a > > huge texture, but in between it may happen that you have both bound > > simultaneously. > > > > If ignoring is not an alternative, then I'd prefer a validate call. > > > > Whether to fallback to software or not -- it seems to me it's really a > > problem that must be decided case by case. 
Drivers are supposed to be > > useful -- if hardware is so limited that it can't do anything useful > > then falling back to software is sensible. I don't think that a driver > > should support every imaginable thing -- apps should check errors, and > > users should ensure they have enough hardware resources for the > > workloads they want. > > > > Personally I think state trackers shouldn't emulate anything with CPU > > beyond unsupported pixel formats. If a hardware is so limited that in > > need CPU assistence this should taken care transparently by the pipe > > driver. Nevertheless we can and should provide auxiliary libraries like > > draw to simplify the pipe driver implementation. > > > My opinion on this is similar: the pipe driver is responsible for > getting the rendering done. If it needs to pull in a fallback module to > achieve that, it is the pipe driver's responsibility to do so. > > Understanding the limitations of hardware and the best ways to work > around those limitations is really something that the driver itself is > best positioned to handle. > > The slight quirk of OpenGL is that there are some conditions where > theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or > similar) and not render. This option isn't really available to gallium > drivers, mainly because we don't know inside gallium whether the API > permits this. Unfortunately, even in OpenGL, very few applications > actually check the error conditions, or do anything sensible when they > fail. > > I don't really like the idea of pipe drivers being able to fail render > calls, as it means that every state tracker and every bit of utility > code that issues a pipe->draw() call will have to check the return code > and hook in fallback code on failure. 
> > One interesting thing would be to consider creating a layer that exposes > a pipe_context interface to the state tracker, but revives some of the > failover ideas internally - maybe as a first step just lifting the draw > module usage up to a layer above the actual hardware driver. > > Keith > So you don't like Jose's pipe_context::validate(), then? My taste goes to pipe_context::validate() and having the state tracker set the proper flag according to the API it supports (GL_OUT_OF_MEMORY for GL); this means just dropping rendering commands that we can't do. I am not really interested in doing software fallbacks. What would be nice is someone testing with a closed-source driver what happens when you try to draw something the GPU can't handle. Maybe people from the closed-source world can even give us a clue about what they do in such a situation :) Cheers, Jerome |
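Jerome's proposal amounts to the state tracker checking validate() before each draw and translating a failure into the API's error convention. A rough sketch under invented names (only the GL_OUT_OF_MEMORY value is from the GL spec; the rest is illustrative, not the real interface):

```c
/* Sketch of a GL state tracker using a hypothetical validate() query:
 * when validation fails, record GL_OUT_OF_MEMORY and drop the draw. */
#include <stdbool.h>

#define GL_NO_ERROR      0x0000
#define GL_OUT_OF_MEMORY 0x0505  /* value from the GL spec */

struct st_context {
    unsigned gl_error;          /* sticky GL error flag       */
    bool (*validate)(void);     /* hypothetical pipe query    */
    void (*draw)(void);         /* hypothetical pipe draw     */
};

static void st_draw(struct st_context *st)
{
    if (!st->validate()) {
        if (st->gl_error == GL_NO_ERROR)   /* GL keeps the first error */
            st->gl_error = GL_OUT_OF_MEMORY;
        return;                            /* drop the rendering command */
    }
    st->draw();
}
```

Per Keith's a)/b) caveats in this thread, a real driver could not rely on the state tracker calling validate() at all, and should still make a best effort in draw() regardless.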
From: Keith W. <ke...@vm...> - 2010-03-01 11:46:21
|
On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote: > On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote: > > Hi, > > > > I am a bit puzzled, how a pipe driver should handle > > draw callback failure ? On radeon (pretty sure nouveau > > or intel hit the same issue) we can only know when one > > of the draw_* context callback is call if we can do > > the rendering or not. > > > > The failure here is dictated by memory constraint, ie > > if user bind big texture, big vbo ... we might not have > > enough GPU address space to bind all the desired object > > (even for drawing a single triangle) ? > > > > What should we do ? None of the draw callback can return > > a value ? Maybe for a GL stack tracker we should report > > GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > > is i think pipe driver are missing something here. Any > > idea ? Thought ? Is there already a plan to address that ? :) > > Gallium draw calls had return codes before. They were used for the > fallover driver IIRC and were recently deleted. > > Either we put the return codes back, or we add a new > pipe_context::validate() that would ensure that all necessary conditions > to draw successfully are met. > > Putting return codes on bind time won't work, because one can't set all > atoms simultaneously -- atoms are set one by one, so when one's setting > the state there are state combinations which may exceed the available > resources but that are never drawn with. E.g. you have a huge VB you > finished drawing, and then you switch to drawing with a small VB with a > huge texture, but in between it may happen that you have both bound > simultaneously. > > If ignoring is not an alternative, then I'd prefer a validate call. > > Whether to fallback to software or not -- it seems to me it's really a > problem that must be decided case by case. Drivers are supposed to be > useful -- if hardware is so limited that it can't do anything useful > then falling back to software is sensible. 
I don't think that a driver > should support every imaginable thing -- apps should check errors, and > users should ensure they have enough hardware resources for the > workloads they want. > > Personally I think state trackers shouldn't emulate anything with CPU > beyond unsupported pixel formats. If a hardware is so limited that in > need CPU assistence this should taken care transparently by the pipe > driver. Nevertheless we can and should provide auxiliary libraries like > draw to simplify the pipe driver implementation. My opinion on this is similar: the pipe driver is responsible for getting the rendering done. If it needs to pull in a fallback module to achieve that, it is the pipe driver's responsibility to do so. Understanding the limitations of hardware and the best ways to work around those limitations is really something that the driver itself is best positioned to handle. The slight quirk of OpenGL is that there are some conditions where theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or similar) and not render. This option isn't really available to gallium drivers, mainly because we don't know inside gallium whether the API permits this. Unfortunately, even in OpenGL, very few applications actually check the error conditions, or do anything sensible when they fail. I don't really like the idea of pipe drivers being able to fail render calls, as it means that every state tracker and every bit of utility code that issues a pipe->draw() call will have to check the return code and hook in fallback code on failure. One interesting thing would be to consider creating a layer that exposes a pipe_context interface to the state tracker, but revives some of the failover ideas internally - maybe as a first step just lifting the draw module usage up to a layer above the actual hardware driver. Keith |
From: José F. <jfo...@vm...> - 2010-03-01 11:33:05
|
On Sun, 2010-02-28 at 21:35 -0800, Corbin Simpson wrote: > On Sun, Feb 28, 2010 at 9:15 PM, Dave Airlie <ai...@gm...> wrote: > > On Mon, Mar 1, 2010 at 12:43 PM, Joakim Sindholt <ba...@zh...> wrote: > >> On Sun, 2010-02-28 at 20:25 +0100, Jerome Glisse wrote: > >>> Hi, > >>> > >>> I am a bit puzzled, how a pipe driver should handle > >>> draw callback failure ? On radeon (pretty sure nouveau > >>> or intel hit the same issue) we can only know when one > >>> of the draw_* context callback is call if we can do > >>> the rendering or not. > >>> > >>> The failure here is dictated by memory constraint, ie > >>> if user bind big texture, big vbo ... we might not have > >>> enough GPU address space to bind all the desired object > >>> (even for drawing a single triangle) ? > >>> > >>> What should we do ? None of the draw callback can return > >>> a value ? Maybe for a GL stack tracker we should report > >>> GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > >>> is i think pipe driver are missing something here. Any > >>> idea ? Thought ? Is there already a plan to address that ? :) > >>> > >>> Cheers, > >>> Jerome > >> > >> I think a vital point you're missing is: do we even care? If rendering > >> fails because we simply can't render any more, do we even want to fall > >> back? I can see a point in having a cap on how large a buffer can be > >> rendered but apart from that, I'm not sure there even is a problem. > >> > > > > Welcome to GL. If I have a 32MB graphics card, and I advertise > > a maximum texture size of 4096x4096 + cubemapping + 3D textures, > > there is no nice way for the app to get a clue about what it can legally > > ask me to do. Old DRI drivers used to either use texmem which would > > try and scale the limits etc to what it could legally fit in the > > memory available, > > or with bufmgr drivers they would check against a limit from the kernel, > > and in both cases sw fallback if necessary. 
Gallium seemingly can't do this, > > maybe its okay to ignore it but it wasn't an option when we did the > > old DRI drivers. > > GL_ATI_meminfo is unfortunately the best bet. :C > > Also Gallium's API is written so that drivers must never fail on > render calls. This is *incredibly* lame but there's nothing that can > be done. Every single driver is currently encouraged to just drop shit > on the floor if e.g. u_trim_pipe_prim fails, and every driver is > encouraged to call u_trim_pipe_prim, so we have stupidity like: if > (!u_trim_pipe_prim(mode, &count)) { return; } > > In EVERY SINGLE DRIVER. Most uncool. What's the point of a unified API > if it can't do sanity checks? >:T I don't see what sanity checking has to do with the topic of failing draw calls. Would if (!u_trim_pipe_prim(mode, &count)) { return FALSE; } make you any happier? I think we all agree sanity checking should be done by the state trackers. You're confusing the result of the common practice of cut'n'pasting code, and of working around third-party problems, with the encouraged design principles. I'm sure a patch to fix this would be most welcome. Jose |
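For readers unfamiliar with the helper being argued about: u_trim_pipe_prim() clamps the vertex count to something the primitive type can actually consume and reports whether anything is left to draw. A simplified sketch covering just two primitive types (the real Gallium helper in u_prim.h handles every PIPE_PRIM_* value, so this is illustrative only):

```c
/* Simplified u_trim_pipe_prim-style helper: trim the vertex count to a
 * usable multiple for the primitive type, and report whether at least
 * one complete primitive remains. */
#include <stdbool.h>

enum prim { PRIM_LINES, PRIM_TRIANGLES };

static bool trim_prim(enum prim p, unsigned *count)
{
    switch (p) {
    case PRIM_LINES:
        *count -= *count % 2;   /* lines come in pairs of vertices  */
        return *count >= 2;
    case PRIM_TRIANGLES:
        *count -= *count % 3;   /* triangles in triples             */
        return *count >= 3;
    }
    return false;
}
```

The debate above is about what callers should do with the boolean: silently return, or propagate a failure up to the state tracker.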
From: José F. <jfo...@vm...> - 2010-03-01 11:21:40
|
On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote: > Hi, > > I am a bit puzzled, how a pipe driver should handle > draw callback failure ? On radeon (pretty sure nouveau > or intel hit the same issue) we can only know when one > of the draw_* context callback is call if we can do > the rendering or not. > > The failure here is dictated by memory constraint, ie > if user bind big texture, big vbo ... we might not have > enough GPU address space to bind all the desired object > (even for drawing a single triangle) ? > > What should we do ? None of the draw callback can return > a value ? Maybe for a GL stack tracker we should report > GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > is i think pipe driver are missing something here. Any > idea ? Thought ? Is there already a plan to address that ? :) Gallium draw calls had return codes before. They were used for the failover driver IIRC and were recently deleted. Either we put the return codes back, or we add a new pipe_context::validate() that would ensure that all necessary conditions to draw successfully are met. Putting return codes on bind time won't work, because one can't set all atoms simultaneously -- atoms are set one by one, so when one's setting the state there are state combinations which may exceed the available resources but that are never drawn with. E.g. you have a huge VB you finished drawing, and then you switch to drawing with a small VB with a huge texture, but in between it may happen that you have both bound simultaneously. If ignoring is not an alternative, then I'd prefer a validate call. Whether to fall back to software or not -- it seems to me it's really a problem that must be decided case by case. Drivers are supposed to be useful -- if hardware is so limited that it can't do anything useful then falling back to software is sensible. 
I don't think that a driver should support every imaginable thing -- apps should check errors, and users should ensure they have enough hardware resources for the workloads they want. Personally I think state trackers shouldn't emulate anything with CPU beyond unsupported pixel formats. If hardware is so limited that it needs CPU assistance, this should be taken care of transparently by the pipe driver. Nevertheless we can and should provide auxiliary libraries like draw to simplify the pipe driver implementation. Jose |
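The validate() alternative proposed above can be modelled in a few lines. Everything below is invented for illustration — the struct, the single-buffer/single-texture budget, and the names are not the actual Gallium interface — but it shows why a one-shot check at draw time avoids the transient-binding problem that per-bind return codes have:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Invented, simplified context state: just the sizes of the bound
 * vertex buffer and texture, plus the GPU address space available.
 * This is NOT the real pipe_context -- only a model of the idea. */
struct context {
    size_t vb_size;
    size_t tex_size;
    size_t gpu_addr_space;
};

/* The proposed pipe_context::validate(): succeed only if everything
 * currently bound fits simultaneously.  Because it runs once, right
 * before a draw, transient oversized bind combinations that are never
 * actually drawn with can never cause a spurious failure. */
static bool validate(const struct context *ctx)
{
    return ctx->vb_size + ctx->tex_size <= ctx->gpu_addr_space;
}

static bool draw(const struct context *ctx)
{
    if (!validate(ctx))
        return false;  /* state tracker may report GL_OUT_OF_MEMORY */
    /* ... submit the draw ... */
    return true;
}
```

A 16MB VB plus an 8MB texture fits a 32MB address space and draws; a 24MB VB plus a 16MB texture does not, and the state tracker learns about it before anything is submitted.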
From: Corbin S. <mos...@gm...> - 2010-03-01 05:35:38
|
On Sun, Feb 28, 2010 at 9:15 PM, Dave Airlie <ai...@gm...> wrote: > On Mon, Mar 1, 2010 at 12:43 PM, Joakim Sindholt <ba...@zh...> wrote: >> On Sun, 2010-02-28 at 20:25 +0100, Jerome Glisse wrote: >>> Hi, >>> >>> I am a bit puzzled, how a pipe driver should handle >>> draw callback failure ? On radeon (pretty sure nouveau >>> or intel hit the same issue) we can only know when one >>> of the draw_* context callback is call if we can do >>> the rendering or not. >>> >>> The failure here is dictated by memory constraint, ie >>> if user bind big texture, big vbo ... we might not have >>> enough GPU address space to bind all the desired object >>> (even for drawing a single triangle) ? >>> >>> What should we do ? None of the draw callback can return >>> a value ? Maybe for a GL stack tracker we should report >>> GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line >>> is i think pipe driver are missing something here. Any >>> idea ? Thought ? Is there already a plan to address that ? :) >>> >>> Cheers, >>> Jerome >> >> I think a vital point you're missing is: do we even care? If rendering >> fails because we simply can't render any more, do we even want to fall >> back? I can see a point in having a cap on how large a buffer can be >> rendered but apart from that, I'm not sure there even is a problem. >> > > Welcome to GL. If I have a 32MB graphics card, and I advertise > a maximum texture size of 4096x4096 + cubemapping + 3D textures, > there is no nice way for the app to get a clue about what it can legally > ask me to do. Old DRI drivers used to either use texmem which would > try and scale the limits etc to what it could legally fit in the > memory available, > or with bufmgr drivers they would check against a limit from the kernel, > and in both cases sw fallback if necessary. Gallium seemingly can't do this, > maybe its okay to ignore it but it wasn't an option when we did the > old DRI drivers. GL_ATI_meminfo is unfortunately the best bet. 
:C Also Gallium's API is written so that drivers must never fail on render calls. This is *incredibly* lame but there's nothing that can be done. Every single driver is currently encouraged to just drop shit on the floor if e.g. u_trim_pipe_prim fails, and every driver is encouraged to call u_trim_pipe_prim, so we have stupidity like: if (!u_trim_pipe_prim(mode, &count)) { return; } In EVERY SINGLE DRIVER. Most uncool. What's the point of a unified API if it can't do sanity checks? >:T ~ C. -- Only fools are easily impressed by what is only barely beyond their reach. ~ Unknown Corbin Simpson <Mos...@gm...> |
From: Dave A. <ai...@gm...> - 2010-03-01 05:15:27
|
On Mon, Mar 1, 2010 at 12:43 PM, Joakim Sindholt <ba...@zh...> wrote: > On Sun, 2010-02-28 at 20:25 +0100, Jerome Glisse wrote: >> Hi, >> >> I am a bit puzzled, how a pipe driver should handle >> draw callback failure ? On radeon (pretty sure nouveau >> or intel hit the same issue) we can only know when one >> of the draw_* context callback is call if we can do >> the rendering or not. >> >> The failure here is dictated by memory constraint, ie >> if user bind big texture, big vbo ... we might not have >> enough GPU address space to bind all the desired object >> (even for drawing a single triangle) ? >> >> What should we do ? None of the draw callback can return >> a value ? Maybe for a GL stack tracker we should report >> GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line >> is i think pipe driver are missing something here. Any >> idea ? Thought ? Is there already a plan to address that ? :) >> >> Cheers, >> Jerome > > I think a vital point you're missing is: do we even care? If rendering > fails because we simply can't render any more, do we even want to fall > back? I can see a point in having a cap on how large a buffer can be > rendered but apart from that, I'm not sure there even is a problem. > Welcome to GL. If I have a 32MB graphics card, and I advertise a maximum texture size of 4096x4096 + cubemapping + 3D textures, there is no nice way for the app to get a clue about what it can legally ask me to do. Old DRI drivers used to either use texmem, which would try and scale the limits etc. to what could legally fit in the memory available, or with bufmgr drivers they would check against a limit from the kernel, and in both cases sw fallback if necessary. Gallium seemingly can't do this, maybe it's okay to ignore it but it wasn't an option when we did the old DRI drivers. Dave. |
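The texmem-style limit scaling Dave describes can be sketched as a small derating calculation. The policy below (half of VRAM as budget, ~4/3 mip-chain overhead, halve the dimension until it fits) is invented for illustration — the real texmem code was far more elaborate — but it makes his 32MB-card example concrete:

```c
#include <assert.h>
#include <stddef.h>

/* Texmem-style derating, purely illustrative: shrink the advertised
 * maximum square RGBA texture dimension until one mipmapped texture
 * (base level times ~4/3 for the full mip chain) fits in half of the
 * card's memory. */
static unsigned max_tex_dim(size_t vram_bytes, unsigned hw_max_dim)
{
    size_t budget = vram_bytes / 2;  /* leave room for VBs, framebuffer... */
    unsigned dim = hw_max_dim;

    /* dim * dim * 4 bytes per texel, times 4/3 for the mip chain */
    while (dim > 1 && (size_t)dim * dim * 4 * 4 / 3 > budget)
        dim /= 2;
    return dim;
}
```

Under this toy policy a 32MB card that could address 4096x4096 in hardware would only advertise 1024, while a 256MB card keeps the full 4096 — the advertised limit scales with what actually fits, which is exactly what plain Gallium caps could not express.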
From: Joakim S. <ba...@zh...> - 2010-03-01 02:43:22
|
On Sun, 2010-02-28 at 20:25 +0100, Jerome Glisse wrote: > Hi, > > I am a bit puzzled, how a pipe driver should handle > draw callback failure ? On radeon (pretty sure nouveau > or intel hit the same issue) we can only know when one > of the draw_* context callback is call if we can do > the rendering or not. > > The failure here is dictated by memory constraint, ie > if user bind big texture, big vbo ... we might not have > enough GPU address space to bind all the desired object > (even for drawing a single triangle) ? > > What should we do ? None of the draw callback can return > a value ? Maybe for a GL stack tracker we should report > GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line > is i think pipe driver are missing something here. Any > idea ? Thought ? Is there already a plan to address that ? :) > > Cheers, > Jerome I think a vital point you're missing is: do we even care? If rendering fails because we simply can't render any more, do we even want to fall back? I can see a point in having a cap on how large a buffer can be rendered but apart from that, I'm not sure there even is a problem. |
From: Francisco J. <cur...@ri...> - 2010-03-01 02:25:01
|
ran...@gm... writes: > Hello all! > > After an unsuccessful battle with git-send-email I just sent these two patches from Kmail. Both as attachments and inline, but the inline > version probably will be damaged in process..... > Thanks, both patches pushed (after some minor reformatting: please, use git-format-patch next time). > Patch 1: add XRGB8888 into nouveau_fbo.c, makes xmoto actually display its demo, not abort > > diff --git a/src/mesa/drivers/dri/nouveau/nouveau_fbo.c b/src/mesa/drivers/dri/nouveau/nouveau_fbo.c > index 1db8c5d..8464786 100644 > --- a/src/mesa/drivers/dri/nouveau/nouveau_fbo.c > +++ b/src/mesa/drivers/dri/nouveau/nouveau_fbo.c > @@ -215,6 +215,8 @@ get_tex_format(struct gl_texture_image *ti) > switch (ti->TexFormat) { > case MESA_FORMAT_ARGB8888: > return GL_RGBA8; > + case MESA_FORMAT_XRGB8888: > + return GL_RGB8; > case MESA_FORMAT_RGB565: > return GL_RGB5; > default: > > > Patch 2: add two stencil operation cases in nv04_state_raster.c, allowing demos/reflect and demos/dinoshade to actually work. Dinoshade > still visibly broken. Not sure about redbook/stencil, it looks much the same on my modified driver with TNT2 and with swrast. But > tests/stencil is definitely wrong .... So, all cases are in, one just needs to figure out the correct assignment. > > diff --git a/src/mesa/drivers/dri/nouveau/nv04_state_raster.c b/src/mesa/drivers/dri/nouveau/nv04_state_raster.c > index 5e3788d..6d0b262 100644 > --- a/src/mesa/drivers/dri/nouveau/nv04_state_raster.c > +++ b/src/mesa/drivers/dri/nouveau/nv04_state_raster.c > @@ -61,6 +61,10 @@ get_stencil_op(unsigned op) > switch (op) { > case GL_KEEP: > return 0x1; > + case GL_ZERO: > + return 0x2; > + case GL_REPLACE: > + return 0x3; > case GL_INCR: > return 0x4; > case GL_DECR: > > ----- > > Tested-off-by: Andrew Randrianasulu <ran...@gm...> |
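The second patch's stencil-op translation table can be exercised standalone. The hardware codes below are taken verbatim from the quoted hunk and the GL enum values from GL/gl.h; the GL_DECR code (and any wrap-mode cases) are cut off in the mail, so they are deliberately omitted here rather than guessed at:

```c
#include <assert.h>

/* GL enum values (as defined in GL/gl.h) */
#define GL_ZERO    0x0000
#define GL_KEEP    0x1E00
#define GL_REPLACE 0x1E01
#define GL_INCR    0x1E02

/* Hardware codes exactly as they appear in the quoted
 * nv04_state_raster.c hunk; unknown ops fall through to 0 here,
 * where the real driver would assert or pick a safe default. */
static unsigned get_stencil_op(unsigned op)
{
    switch (op) {
    case GL_KEEP:    return 0x1;
    case GL_ZERO:    return 0x2;
    case GL_REPLACE: return 0x3;
    case GL_INCR:    return 0x4;
    default:         return 0;
    }
}
```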
From: Jerome G. <gl...@fr...> - 2010-02-28 19:25:32
|
Hi, I am a bit puzzled: how should a pipe driver handle draw callback failure? On radeon (pretty sure nouveau or intel hit the same issue) we can only know whether we can do the rendering or not when one of the draw_* context callbacks is called. The failure here is dictated by memory constraints, i.e. if the user binds a big texture, big vbo ... we might not have enough GPU address space to bind all the desired objects (even for drawing a single triangle). What should we do? None of the draw callbacks can return a value. Maybe for a GL state tracker we should report GL_OUT_OF_MEMORY all the way up to the app? Anyway, bottom line is I think pipe drivers are missing something here. Any idea? Thoughts? Is there already a plan to address that? :) Cheers, Jerome |
From: Magnus K. <Mag...@gm...> - 2010-02-28 10:44:58
|
Here's an interesting regression caused by Mesa commit c76d4db25260dd68684bf784efacd7323c7cab8b (http://cgit.freedesktop.org/mesa/mesa/commit/?id=c76d4db25260dd68684bf784efacd7323c7cab8b). It shows itself only when Mesa is configured with --enable-debug. In this case the i965_dri.so driver can't be loaded due to a missing symbol CLAMP: libGL error: dlopen /usr/lib64/dri/i965_dri.so failed (/usr/lib64/dri/i965_dri.so: undefined symbol: CLAMP) I tracked this down to the removal of the inclusion of "main/macros.h" in gen6_cc.c. In color_calc_state_create_from_key() (gen6_cc.c line 220), the macro UNCLAMPED_FLOAT_TO_UBYTE is used. This macro is defined in "main/imports.h". Unfortunately, for the debug case, this macro calls upon the CLAMP macro defined in "main/macros.h", which is no longer included into gen6_cc.c. My feeling is that the macro declaration in "main/imports.h" is at fault. This file is included into "main/macros.h", but clearly depends on a macro (CLAMP) in there, causing a circular dependency. To me it sounds like the best way to fix the root cause is to move the UNCLAMPED_FLOAT_TO_UBYTE macro to "main/macros.h" where it would sit next to other similar macros. gen6_cc.c as a user of this macro would then of course include "main/macros.h" again. The minimum fix is just to revert part of c76d4db25260dd68684bf784efacd7323c7cab8b for gen6_cc.c. I hope this analysis is useful to you. Regards, Magnus Kessler |
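The dependency Magnus describes is easy to reproduce in miniature. The two macros below are illustrative stand-ins, not Mesa's actual definitions: the float-to-ubyte conversion expands to a use of CLAMP, so it only works while a CLAMP definition is in scope. Remove it and the preprocessor leaves CLAMP as a bare identifier, which C89 treats as an implicit function call — the code still compiles, but the shared object carries an undefined symbol that only surfaces at dlopen(), exactly the failure mode in the report:

```c
#include <assert.h>

/* Stand-in for the CLAMP macro from main/macros.h. */
#define CLAMP(x, lo, hi) \
    ((x) < (lo) ? (lo) : ((x) > (hi) ? (hi) : (x)))

/* Stand-in for the debug-build float-to-ubyte conversion from
 * main/imports.h: clamp to [0, 1], scale to [0, 255], truncate.
 * Without the CLAMP definition above in scope this would leave an
 * unresolved CLAMP symbol in the compiled driver. */
#define FLOAT_TO_UBYTE(f) \
    ((unsigned char)(CLAMP((f), 0.0f, 1.0f) * 255.0f))
```

Moving both macros into the same header, as suggested, removes the hidden ordering requirement between the two includes.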