From: Ian R. <id...@us...> - 2005-04-05 18:07:13
|
For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI driver interface. There is a *LOT* of crap hanging around in both libGL and in the DRI drivers that exists *only* to maintain backwards compatibility with older versions of the interface. Since it's crap, I would very much like to flush it.

I'd like to cut this stuff out for 7.0 for several main reasons:

- A major release is a logical time to make breaks like this.

- Bit rot. Sure, we /assume/ libGL and the DRI drivers still actually work with older versions, but how often does it actually get tested?

- Code aesthetics. Because of the backwards compatibility mechanisms that are in place, especially in libGL, the code can be a bit hard to follow. Removing that code would, in a WAG estimate, eliminate at least a couple hundred lines of code. It would also eliminate a number of '#ifdef DRI_NEW_INTERFACE_ONLY' blocks.

What I'm proposing goes a bit beyond '-DDRI_NEW_INTERFACE_ONLY=1', but that is a start. In include/GL/internal/dri_interface.h (in the Mesa tree) there are a number of methods that get converted to 'void *' if DRI_NEW_INTERFACE_ONLY is defined. I propose that we completely remove them from the structures and rename some of the remaining methods. For example, __DRIcontextRec::bindContext and __DRIcontextRec::bindContext2 would be removed, and __DRIcontextRec::bindContext3 would be renamed to __DRIcontextRec::bindContext.

Additionally, there are a few libGL-private structures in src/glx/x11/glxclient.h that, due to binary compatibility issues with older versions of the interface, can't be changed. Eliminating support for those older interfaces would allow some significant cleaning in those structures. Basically, all of the stuff in glxclient.h with DEPRECATED in the name would be removed. Other, less important, changes could also be made to __GLXcontextRec. |
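A hypothetical before/after sketch of the renaming described above. The real __DRIcontextRec has many more members, and the signature below is purely illustrative; this only shows the shape of the change:

```c
#include <stddef.h>

/* Illustrative signature only -- not the real bindContext3 prototype. */
typedef int (*BindContextFn)(void *dpy, int scrn, void *draw, void *ctx);

/* Before: two dead generations kept purely for binary compatibility.
 * Under DRI_NEW_INTERFACE_ONLY the old slots already degrade to 'void *'. */
struct __DRIcontextRec_old {
    void *bindContext;          /* obsolete, 'void *' placeholder */
    void *bindContext2;         /* obsolete, 'void *' placeholder */
    BindContextFn bindContext3; /* the only entry point still used */
};

/* After: the dead slots are removed, and bindContext3 takes the plain name. */
struct __DRIcontextRec_new {
    BindContextFn bindContext;
};
```

Removing the dead slots shrinks the struct and makes the one live entry point the first member, which is why it is an ABI break: any driver built against the old layout would look in the wrong slot.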
From: Keith W. <ke...@tu...> - 2005-04-05 19:12:07
|
Ian Romanick wrote:
> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI driver interface. [snip]
>
> - Bit rot. Sure, we /assume/ libGL and the DRI drivers still actually work with older versions, but how often does it actually get tested?

In fact, we know that they don't work, as backwards compatibility was broken in one of the recent 6.8.x releases, wasn't it?

Given that is the case, we might be able to take advantage of that and bring forward some of those changes - the old versions don't work anyway, so there's absolutely no point keeping the code around for them...

Keith |
From: Ian R. <id...@us...> - 2005-04-05 21:51:30
|
Keith Whitwell wrote:
> Ian Romanick wrote:
>> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI driver interface. [snip]
>>
>> - Bit rot. Sure, we /assume/ libGL and the DRI drivers still actually work with older versions, but how often does it actually get tested?
>
> In fact, we know that they don't work, as backwards compatibility was broken in one of the recent 6.8.x releases, wasn't it?
>
> Given that is the case we might be able to take advantage of that and bring forward some of those changes - the old versions don't work anyway, so there's absolutely no point keeping the code around for them...

The 6.8.x break was on the server-side *only*. I made some changes in libglx that slightly broke the interface with the DDX. AFAIK, the client-side interfaces /should/ still work. Like I said, though, I don't know that it has been tested... |
From: Adam J. <aj...@nw...> - 2005-04-05 22:28:38
|
On Tuesday 05 April 2005 14:06, Ian Romanick wrote:
> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI driver interface. There is a *LOT* of crap hanging around in both libGL and in the DRI drivers that exists *only* to maintain backwards compatibility with older versions of the interface. Since it's crap, I would very much like to flush it.
> [snip]

I have another one: Hide all the functions that start with XF86DRI*, and expose them to the driver through a function table or glXGetProcAddress rather than by allowing the driver to call them directly. This will simplify the case where the X server is itself linked against libGL.

Kevin tells me these functions were never intended to be public API anyway.

- ajax |
From: Ian R. <id...@us...> - 2005-04-05 23:03:54
|
Adam Jackson wrote:
> I have another one: Hide all the functions that start with XF86DRI*, and expose them to the driver through a function table or glXGetProcAddress rather than by allowing the driver to call them directly. This will simplify the case where the X server is itself linked against libGL.
>
> Kevin tells me these functions were never intended to be public API anyway.

The only functions that are still used by DRI_NEW_INTERFACE_ONLY drivers are XF86DRICreateDrawable, XF86DRIDestroyDrawable, and XF86DRIDestroyContext. It should be easy enough to eliminate those, but something other than glXGetProcAddress might be preferable. |
From: Adam J. <aj...@nw...> - 2005-04-06 00:23:32
|
On Tuesday 05 April 2005 19:03, Ian Romanick wrote:
> Adam Jackson wrote:
>> I have another one: Hide all the functions that start with XF86DRI*, and expose them to the driver through a function table or glXGetProcAddress rather than by allowing the driver to call them directly. This will simplify the case where the X server is itself linked against libGL.
>>
>> Kevin tells me these functions were never intended to be public API anyway.
>
> The only functions that are still used by DRI_NEW_INTERFACE_ONLY drivers are XF86DRICreateDrawable, XF86DRIDestroyDrawable, and XF86DRIDestroyContext. It should be easy enough to eliminate those, but something other than glXGetProcAddress might be preferable.

Yeah, I just threw out glXGetProcAddress as a suggestion. It's probably better to pass this table into the driver through the create context method.

We can't eliminate the functionality of these calls (I don't think), but they should not be visible API from the perspective of the GL client.

- ajax |
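A rough sketch of the table-passing idea discussed above. The struct and function names here are purely illustrative, not the actual DRI interface; libGL fills in a callback table and hands it to the driver at create-context time, so the driver never resolves XF86DRI* symbols directly:

```c
#include <stddef.h>

/* Hypothetical callback table: the three entry points Ian says drivers
 * still need, exposed through a table rather than public symbols. */
typedef struct {
    int (*createDrawable)(void *dpy, int screen, unsigned long drawable);
    int (*destroyDrawable)(void *dpy, int screen, unsigned long drawable);
    int (*destroyContext)(void *dpy, int screen, unsigned long context);
} DRILoaderFuncs;

/* --- libGL side: stubs standing in for the real protocol calls --- */
static int stub_create(void *dpy, int screen, unsigned long d)
{ (void)dpy; (void)screen; (void)d; return 1; }
static int stub_destroy(void *dpy, int screen, unsigned long d)
{ (void)dpy; (void)screen; (void)d; return 1; }
static int stub_destroy_ctx(void *dpy, int screen, unsigned long c)
{ (void)dpy; (void)screen; (void)c; return 1; }

static const DRILoaderFuncs loader_funcs = {
    stub_create, stub_destroy, stub_destroy_ctx,
};

/* --- driver side: stashes the table instead of calling public symbols --- */
static const DRILoaderFuncs *loader;

void driverCreateContext(const DRILoaderFuncs *funcs)
{
    loader = funcs;  /* passed in at create-context time, as suggested */
}

int driverMakeDrawable(void *dpy, int screen, unsigned long drawable)
{
    return loader->createDrawable(dpy, screen, drawable);
}
```

With this shape, none of the XF86DRI* names need to be visible API from the GL client's perspective, which is the point Adam raises about the X server linking against libGL.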
From: Brian P. <bri...@tu...> - 2005-04-06 00:25:49
|
Adam Jackson wrote: > On Tuesday 05 April 2005 19:03, Ian Romanick wrote: > >>Adam Jackson wrote: >> >>>I have another one: Hide all the functions that start with XF86DRI*, and >>>expose them to the driver through a function table or glXGetProcAddress >>>rather than by allowing the driver to call them directly. This will >>>simplify the case where the X server is itself linked against libGL. >>> >>>Kevin tells me these functions were never intended to be public API >>>anyway. >> >>The only functions that are still used by DRI_NEW_INTERFACE_ONLY drivers >>are XF86DRICreateDrawable, XF86DRIDestroyDrawable, and >>XF86DRIDestroyContext. It should be easy enough to eliminate those, but >>something other than gLXGetProcAddress might be preferable. > > > Yeah, I just threw out glXGetProcAddress as a suggestion. It's probably > better to pass this table into the driver through the create context method. > > We can't eliminate the functionality of these calls (I don't think), but they > should not be visible API from the perspective of the GL client. Right. glXGetProcAddress() should not be used by libGL or the drivers to get internal function pointers. There should be a new function for that, if we're breaking the ABI. -Brian |
From: Ian R. <id...@us...> - 2005-04-06 07:00:09
|
Brian Paul wrote: > Adam Jackson wrote: > >> Yeah, I just threw out glXGetProcAddress as a suggestion. It's >> probably better to pass this table into the driver through the create >> context method. [snip] > Right. glXGetProcAddress() should not be used by libGL or the drivers > to get internal function pointers. There should be a new function for > that, if we're breaking the ABI. Not that I necessarily disagree, but what is your reasoning? |
From: Brian P. <bri...@tu...> - 2005-04-06 13:29:52
|
Ian Romanick wrote: > Brian Paul wrote: > >> Adam Jackson wrote: >> >>> Yeah, I just threw out glXGetProcAddress as a suggestion. It's >>> probably better to pass this table into the driver through the create >>> context method. > > > [snip] > >> Right. glXGetProcAddress() should not be used by libGL or the drivers >> to get internal function pointers. There should be a new function for >> that, if we're breaking the ABI. > > > Not that I necessarily disagree, but what is your reasoning? I think it's poor design to overload a public API function with extra functionality like that. I realize we didn't have much choice originally. -Brian |
From: Roland M. <rol...@nr...> - 2005-04-05 19:15:35
|
Ian Romanick wrote:
> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI driver interface. [snip]

Another item would be to look into what's required to support visuals beyond 24bit RGB (like 30bit TrueColor visuals) ... someone on IRC (AFAIK ajax (if I don't mix up the nicks again :)) said that this may require an ABI change, too...

When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on my wishlist: Would it be possible to increase |MAX_WIDTH| and |MAX_HEIGHT| (and the matching texture limits of the software rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint come to mind)?

----

Bye,
Roland

--
  __ .  . __
 (o.\ \/ /.o) rol...@nr...
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;) |
From: Brian P. <bri...@tu...> - 2005-04-05 20:09:09
|
Roland Mainz wrote:
> Ian Romanick wrote:
>> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI driver interface. [snip]
>
> Another item would be to look into what's required to support visuals beyond 24bit RGB (like 30bit TrueColor visuals) ... someone on IRC (AFAIK ajax (if I don't mix up the nicks again :)) said that this may require an ABI change, too...

I doubt an ABI change would be needed for that.

> When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on my wishlist: Would it be possible to increase |MAX_WIDTH| and |MAX_HEIGHT| (and the matching texture limits of the software rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint come to mind)?

If you increase MAX_WIDTH/HEIGHT too far, you'll start to see interpolation errors in triangle rasterization (the software routines). The full explanation is long, but basically there needs to be enough fractional bits in the GLfixed datatype to accommodate interpolation across the full viewport width/height.

In fact, I'm not sure whether we've already gone too far by setting MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional bits. I haven't heard any reports of bad triangles so far, though. But there probably aren't too many people generating 4Kx4K images.

Before increasing MAX_WIDTH/HEIGHT, someone should do an analysis of the interpolation issues to see what side-effects might pop up.

Finally, Mesa has a number of scratch arrays that get dimensioned to [MAX_WIDTH]. Some of those arrays/structs are rather large already.

-Brian |
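The precision argument above can be made concrete with a toy model. This is not Mesa's actual rasterizer code; it just mimics an interpolant walking across a span using 11 fractional bits, so the truncated per-pixel step loses a fraction of 1/2048 per pixel, and that loss scales with span length:

```c
#include <math.h>

/* Toy model of fixed-point span interpolation with 11 fractional bits
 * (the GLfixed layout mentioned above).  Interpolate a color channel
 * from 0 to 255 across `width` pixels and return the worst deviation
 * from exact floating-point interpolation. */
#define FIXED_FRAC_BITS 11
#define FIXED_ONE (1 << FIXED_FRAC_BITS)

double max_interp_error(int width)
{
    double exact_step = 255.0 / width;
    /* truncated to fixed point, as a rasterizer's setup code would do */
    long fixed_step = (long)(exact_step * FIXED_ONE);
    double worst = 0.0;
    long acc = 0;
    for (int x = 0; x < width; x++) {
        double err = fabs(exact_step * x - (double)acc / FIXED_ONE);
        if (err > worst)
            worst = err;
        acc += fixed_step;
    }
    return worst;
}
```

In this model the worst error across a 1024-pixel span is zero (the step happens to quantize exactly), roughly one color unit across 4096 pixels, and roughly three units across 8192: whenever the step doesn't quantize exactly, doubling the span roughly doubles the accumulated error, which is why bumping MAX_WIDTH without adding fractional bits is risky.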
From: Nicolai H. <pre...@gm...> - 2005-04-05 23:04:08
|
On Tuesday 05 April 2005 22:11, Brian Paul wrote:
> If you increase MAX_WIDTH/HEIGHT too far, you'll start to see interpolation errors in triangle rasterization (the software routines). The full explanation is long, but basically there needs to be enough fractional bits in the GLfixed datatype to accommodate interpolation across the full viewport width/height.
> [snip]
>
> Finally, Mesa has a number of scratch arrays that get dimensioned to [MAX_WIDTH]. Some of those arrays/structs are rather large already.

Slightly off-topic, but a thought that occurred to me in this regard was to tile rendering. Basically, do a logical divide of the framebuffer into rectangles of, say, 64x64 pixels. During rasterization, all primitives are split according to those tiles and rendered separately. This has some advantages:

a) It could help reduce the interpolation issues you mentioned. It's obviously not a magic bullet, but it can avoid the need for insane precision in inner loops.

b) Better control of the size of scratch structures, possibly even better caching behaviour.

c) One could build a multi-threaded rasterizer (where work queues are per framebuffer tile), which is going to become all the more interesting once dualcore CPUs are widespread.

cu,
Nicolai |
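The binning step of the tiling idea above might be sketched like this (hypothetical code, not from Mesa): clip each primitive's screen bounding box against a 64x64 tile grid, then queue the primitive on every tile it overlaps.

```c
#define TILE_SIZE 64

struct bbox { int x0, y0, x1, y1; };  /* inclusive pixel bounds */

/* Compute the inclusive range of tiles a primitive's bounding box
 * touches and return how many tiles that is.  A work-queue rasterizer
 * would push the primitive onto each of those tiles' queues, and a
 * worker thread per tile could then rasterize independently. */
int bin_tiles(struct bbox b, int *tx0, int *ty0, int *tx1, int *ty1)
{
    *tx0 = b.x0 / TILE_SIZE;
    *ty0 = b.y0 / TILE_SIZE;
    *tx1 = b.x1 / TILE_SIZE;
    *ty1 = b.y1 / TILE_SIZE;
    return (*tx1 - *tx0 + 1) * (*ty1 - *ty0 + 1);
}
```

Within each tile, interpolation only has to stay accurate across 64 pixels rather than the full MAX_WIDTH, which is the precision benefit Nicolai mentions; the conformance pitfalls Brian raises in his reply (e.g. line stipples, whose pattern position must carry across tile boundaries) are what make the approach much harder than it looks.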
From: Brian P. <bri...@tu...> - 2005-04-05 23:30:09
|
Nicolai Haehnle wrote:
> Slightly off-topic, but a thought that occurred to me in this regard was to tile rendering. Basically, do a logical divide of the framebuffer into rectangles of, say, 64x64 pixels. During rasterization, all primitives are split according to those tiles and rendered separately. [snip]

This would be FAR more work than simply addressing the interpolation issue. There are lots of subtle conformance issues with the tiling approach you suggest. Consider something simple like line stipples.

-Brian |
From: Adam J. <aj...@nw...> - 2005-04-06 00:32:02
|
On Tuesday 05 April 2005 16:11, Brian Paul wrote:
> Roland Mainz wrote:
>> Another item would be to look into what's required to support visuals beyond 24bit RGB (like 30bit TrueColor visuals) ... someone on IRC (AFAIK ajax (if I don't mix up the nicks again :)) said that this may require an ABI change, too...
>
> I doubt an ABI change would be needed for that.

Are you sure about this?

I thought we treated channels as bytes everywhere, unless GLchan was defined to something bigger, and even then only for OSMesa. Even if it's not an ABI change, I suspect that growing GLchan beyond 8 bits while still preserving performance is non-trivial.

>> When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on my wishlist: Would it be possible to increase |MAX_WIDTH| and |MAX_HEIGHT| (and the matching texture limits of the software rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint come to mind)?
>
> If you increase MAX_WIDTH/HEIGHT too far, you'll start to see interpolation errors in triangle rasterization (the software routines). The full explanation is long, but basically there needs to be enough fractional bits in the GLfixed datatype to accommodate interpolation across the full viewport width/height.
>
> In fact, I'm not sure whether we've already gone too far by setting MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional bits. I haven't heard any reports of bad triangles so far though. But there probably aren't too many people generating 4Kx4K images.

Yet. Big images are becoming a reality. DMX+glxproxy brings this real close to home.

> Before increasing MAX_WIDTH/HEIGHT, someone should do an analysis of the interpolation issues to see what side-effects might pop up.

Definitely.

> Finally, Mesa has a number of scratch arrays that get dimensioned to [MAX_WIDTH]. Some of those arrays/structs are rather large already.

I looked into allocating these dynamically, but there were one or two sticky points (mostly related to making scope act the same) so I dropped it. It could be done though.

- ajax |
From: Brian P. <bri...@tu...> - 2005-04-06 01:11:19
|
Adam Jackson wrote: > On Tuesday 05 April 2005 16:11, Brian Paul wrote: > >>Roland Mainz wrote: >> >>>Another item would be to look into what's required to support visuals >>>beyond 24bit RGB (like 30bit TrueColor visuals) ... someone on IRC >>>(AFAIK ajax (if I don't mix-up the nicks again :)) said that this may >>>require an ABI change, too... >> >>I doubt an ABI change would be needed for that. > > > Are you sure about this? Yup, pretty sure. An ABI change at the libGL / driver interface isn't needed. I don't know of any place in that interface where 8-bit color is an issue. Please let me know if I'm wrong. > I thought we treated channels as bytes everywhere, unless GLchan was defined > to something bigger, and even then only for OSMesa. Even if it's not an ABI > change, I suspect that growing GLchan beyond 8 bits while still preserving > performance is non-trivial. This is separate from Ian's ABI discussion. It's true that core Mesa has to be recompiled to support 8, 16 or 32-bit color channels. That's something I'd like to change in the future. It will be a lot of work but it can be done. Currently, there aren't any hardware drivers that support > 8-bit color channels. If we did want to support deeper channels in a hardware driver we'd have a lot of work to do in any case. One approach would be to compile core Mesa for 16-bit channels, then shift/drop bits in the driver whenever we write to a color buffer. Of course, there's more to it than that, but it would be feasible. As part of the GL_ARB_framebuffer_object work I'm doing, simultaneous support for various channel sizes will be more do-able. >>>When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on >>>my wishlist: Would it be possible to increase |MAX_WIDTH| and >>>|MAX_HEIGHT| (and the matching texture limits of the software >>>rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint >>>come in mind) ? 
>> >>If you increase MAX_WIDTH/HEIGHT too far, you'll start to see >>interpolation errors in triangle rasterization (the software >>routines). The full explanation is long, but basically there needs to >>be enough fractional bits in the GLfixed datatype to accomodate >>interpolation across the full viewport width/height. >> >>In fact, I'm not sure that we've already gone too far by setting >>MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional >>bits. I haven't heard any reports of bad triangles so far though. >>But there probably aren't too many people generating 4Kx4K images. > > > Yet. Big images are becoming a reality. DMX+glxproxy brings this real close > to home. I fully agree that there's need to render larger images. >>Before increasing MAX_WIDTH/HEIGHT, someone should do an analysis of >>the interpolation issues to see what side-effects might pop up. > > > Definitely. > > >>Finally, Mesa has a number of scratch arrays that get dimensioned to >>[MAX_WIDTH]. Some of those arrays/structs are rather large already. > > > I looked into allocating these dynamically, but there were one or two sticky > points (mostly related to making scope act the same) so I dropped it. It > could be done though. A lot of these allocations are on the stack. Changing them to heap allocations might cause some loss of performance too. -Brian |
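A tiny sketch of the shift/drop approach Brian describes (type and function names here are assumptions, not actual Mesa driver code): core Mesa would rasterize at 16 bits per channel, and a driver for 8-bit hardware would narrow each channel on writeback to the color buffer.

```c
typedef unsigned short GLchan16;  /* core compiled with 16-bit channels */
typedef unsigned char  GLubyte8;  /* what an 8-bit framebuffer stores */

/* Drop the low 8 bits when writing a 16-bit channel into an 8-bit
 * color buffer; a 10-bit (30bit TrueColor) target would shift by 6
 * instead, which is how one core build could serve several depths. */
static GLubyte8 narrow_channel(GLchan16 c)
{
    return (GLubyte8)(c >> 8);
}
```

As the thread notes, the real work is everywhere core Mesa touches channel values, not in this conversion itself; the GL_ARB_framebuffer_object work Brian mentions is what makes mixing channel sizes per-buffer more tractable.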
From: Julien L. <jul...@gm...> - 2005-04-06 13:22:00
|
On Apr 5, 2005 10:11 PM, Brian Paul <bri...@tu...> wrote:
> Roland Mainz wrote:
>> When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on my wishlist: Would it be possible to increase |MAX_WIDTH| and |MAX_HEIGHT| (and the matching texture limits of the software rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint come to mind)?
>
> If you increase MAX_WIDTH/HEIGHT too far, you'll start to see interpolation errors in triangle rasterization (the software routines). The full explanation is long, but basically there needs to be enough fractional bits in the GLfixed datatype to accommodate interpolation across the full viewport width/height.

Will increasing MAX_WIDTH/HEIGHT affect applications which run in small windows, or only those which use resolutions exceeding the 4Kx4K limit?

> In fact, I'm not sure whether we've already gone too far by setting MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional bits. I haven't heard any reports of bad triangles so far though.

Do you know of any specific application which may expose bad rendering when the size gets too large?

> But there probably aren't too many people generating 4Kx4K images.

We've been running tests with the glutdemo applications and Xprint at higher resolutions (6Kx8K window size) and did not notice any bad rendering using the software rasterizer.

Julien
--
Julien Lafon
Senior Staff Engineer, Hitachi |
From: Brian P. <bri...@tu...> - 2005-04-06 23:28:14
|
Roland Mainz wrote:
> Brian Paul wrote:
>>>>> Will increasing MAX_WIDTH/HEIGHT affect applications which run in small windows or only those which use resolutions exceeding the 4Kx4K limit?
>>>>
>>>> Increasing MAX_WIDTH/HEIGHT will result in more memory usage regardless of window size.
>>>
>>> Do you know how much memory is additionally allocated? If it is less than 1MB then it may not be worth worrying about...
>>
>> If you do a grep in the sources for MAX_WIDTH you'll see that it's used in lots of places. Some are for stack allocations, others are for heap allocations. It would take some effort to determine exactly how much more memory would be used. I know of at least one structure that contains arrays dimensioned according to MAX_WIDTH that's currently just under 1MB. That's probably the largest.
>
> What about making MAX_WIDTH and MAX_HEIGHT runtime-configurable - would that be possible (for stack allocations the Mesa code then has to depend on |alloca()|)?

Probably do-able, but a lot of work.

>>>> As is, you can't exceed 4K x 4K resolution without increasing MAX_WIDTH/HEIGHT. Your glViewport call will be clamped to those limits if you specify larger.
>>>
>>> Let me reformulate my question: Will increasing MAX_WIDTH/HEIGHT break any existing application at normal video screen sizes?
>>
>> Probably not, but I'm not 100% sure.
>
> What about making an experiment, bumping the value to 8Kx8K, and checking if we see anything which breaks in the following months?

Ordinary applications would probably be fine (I think), but I'm fairly confident that if someone created an 8Kx8K framebuffer and drew large triangles, things would not work properly.

Feel free to increase MAX_WIDTH/HEIGHT in your copy of Mesa and try running various apps.

-Brian |
From: Julien L. <jul...@gm...> - 2005-04-07 05:37:56
|
On Apr 7, 2005 1:30 AM, Brian Paul <bri...@tu...> wrote:
> Feel free to increase MAX_WIDTH/HEIGHT in your copy of Mesa and try running various apps.

We have increased MAX_WIDTH/HEIGHT in our internal builds, but this limit will still hit the average Linux user using either the PostScript or software renderer.

Julien
--
Julien Lafon
Senior Staff Engineer, Hitachi |
From: Roland M. <rol...@nr...> - 2005-04-07 00:23:38
|
Brian Paul wrote:
[snip]
> > What about making MAX_WIDTH and MAX_HEIGHT runtime-configurable - would that be possible (for stack allocations the Mesa code then has to depend on |alloca()|)?
>
> Probably do-able, but a lot of work.

Depends... if |alloca()| can safely be used on all platforms supported by Mesa then this should be no problem to implement. Alternatively the code could simply assume that the C compiler supports the C++ feature (BTW: is this supported in C99, too?) that an array can be dynamically sized at declaration (however that's less portable).

----

Bye,
Roland

--
  __ .  . __
 (o.\ \/ /.o) rol...@nr...
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;) |
From: Brian P. <bri...@tu...> - 2005-04-07 03:59:50
|
Roland Mainz wrote:
> Brian Paul wrote:
> [snip]
>>> What about making MAX_WIDTH and MAX_HEIGHT runtime-configurable - would that be possible (for stack allocations the Mesa code then has to depend on |alloca()|)?
>>
>> Probably do-able, but a lot of work.
>
> Depends... if |alloca()| can safely be used on all platforms supported by Mesa then this should be no problem to implement. Alternatively the code could simply assume that the C compiler supports the C++ feature (BTW: is this supported in C99, too?) that an array can be dynamically sized at declaration (however that's less portable).

I don't want to create a dependency on C99's variable length arrays. I'm also leery of alloca() since it's not in POSIX.

"grep MAX_WIDTH src/mesa/*/*.[ch] | wc" shows there's about 160 occurrences of MAX_WIDTH that would have to be changed for dynamic allocation. A lot of work.

-Brian |
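The portable third option, beyond alloca() and C99 VLAs, is the one implied by "runtime-configurable": allocate the scratch arrays on the heap once per context, sized from the runtime limit. A minimal sketch, with invented names (this is not Mesa's actual context code):

```c
#include <stdlib.h>

/* Hypothetical software-rasterizer context: the span scratch buffer,
 * formerly `GLfloat span[MAX_WIDTH]` on the stack, is allocated once at
 * context creation from a runtime-chosen limit and reused thereafter. */
struct sw_context {
    int max_width;       /* runtime value replacing compile-time MAX_WIDTH */
    float *span_scratch; /* reused by the span functions */
};

struct sw_context *ctx_create(int max_width)
{
    struct sw_context *ctx = malloc(sizeof *ctx);
    if (!ctx)
        return NULL;
    ctx->max_width = max_width;
    ctx->span_scratch = malloc(max_width * sizeof *ctx->span_scratch);
    if (!ctx->span_scratch) {
        free(ctx);
        return NULL;
    }
    return ctx;
}

void ctx_destroy(struct sw_context *ctx)
{
    if (ctx) {
        free(ctx->span_scratch);
        free(ctx);
    }
}
```

This avoids both the alloca() stack-limit hazard Alan quotes from the Solaris man page and the C99 dependency Brian objects to, at the cost Julien's team hit: every one of the ~160 MAX_WIDTH sites (struct span_arrays and its consumers) has to be threaded through a context pointer.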
From: Julien L. <jul...@gm...> - 2005-04-07 05:38:27
|
On Apr 7, 2005 2:23 AM, Roland Mainz <rol...@nr...> wrote:
> Brian Paul wrote:
> [snip]
> > > What about making MAX_WIDTH and MAX_HEIGHT runtime-configurable - would
> > > that be possible (for stack allocations the Mesa code then has to depend
> > > on |alloca()|) ?
> >
> > Probably do-able, but a lot of work.
>
> Depends... if |alloca()| can safely be used on all platforms supported
> by Mesa then this should be no problem to implement. Alternatively the
> code could simply assume that the C compiler supports the C++ feature
> (BTW: Is this supported in C99, too ?) that an array can be dynamically
> sized at declaration (however that's less portable).

We already investigated this option but abandoned the idea after realising that common data types such as struct span_arrays and all their consumers would have to be changed. Without using C++ features it may be too much hassle, which is why bumping the MAX_WIDTH/HEIGHT values is more feasible here.

Julien
--
Julien Lafon
Senior Staff Engineer, Hitachi |
From: Alan C. <Alan.Coopersmith@Sun.COM> - 2005-04-07 05:31:56
|
Roland Mainz wrote:
> Depends... if |alloca()| can safely be used on all platforms supported
> by Mesa then this should be no problem to implement.

I don't think the Solaris implementation of alloca counts as "safe", unfortunately, due to this warning in the man page:

     If the allocated block is beyond the current stack limit, the
     resulting behavior is undefined.

It certainly scares me away from most uses.

--
-Alan Coopersmith-   ala...@su...
Sun Microsystems, Inc. - X Window System Engineering |
From: Brian P. <bri...@tu...> - 2005-04-06 13:35:24
|
Julien Lafon wrote:
> On Apr 5, 2005 10:11 PM, Brian Paul <bri...@tu...> wrote:
>
>>Roland Mainz wrote:
>>
>>>Ian Romanick wrote:
>>>When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on
>>>my wishlist: Would it be possible to increase |MAX_WIDTH| and
>>>|MAX_HEIGHT| (and the matching texture limits of the software
>>>rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint
>>>come to mind) ?
>>
>>If you increase MAX_WIDTH/HEIGHT too far, you'll start to see
>>interpolation errors in triangle rasterization (the software
>>routines). The full explanation is long, but basically there needs to
>>be enough fractional bits in the GLfixed datatype to accommodate
>>interpolation across the full viewport width/height.
>
> Will increasing MAX_WIDTH/HEIGHT affect applications which run in
> small windows or only those which use resolutions exceeding the 4Kx4K
> limit?

Increasing MAX_WIDTH/HEIGHT will result in more memory usage regardless of window size.

As is, you can't exceed 4K x 4K resolution without increasing MAX_WIDTH/HEIGHT. Your glViewport call will be clamped to those limits if you specify something larger.

>>In fact, I'm not sure that we haven't already gone too far by setting
>>MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional
>>bits. I haven't heard any reports of bad triangles so far, though.
>
> Do you know any specific application which may expose bad rendering
> when the size gets too large?

No (there are far too many OpenGL apps out there for me to say).

>>But there probably aren't too many people generating 4Kx4K images.
>
> We've been running tests with the glutdemo applications and Xprint at
> higher resolutions (6Kx8K window size) and did not notice any bad
> rendering using the software rasterizer.

How large are your triangles? The interpolation error will accumulate and be most noticeable with very large triangles.

-Brian |
From: Julien L. <jul...@gm...> - 2005-04-06 13:51:18
|
On Apr 6, 2005 3:37 PM, Brian Paul <bri...@tu...> wrote:
> Julien Lafon wrote:
> > On Apr 5, 2005 10:11 PM, Brian Paul <bri...@tu...> wrote:
> > Will increasing MAX_WIDTH/HEIGHT affect applications which run in
> > small windows or only those which use resolutions exceeding the 4Kx4K
> > limit?
>
> Increasing MAX_WIDTH/HEIGHT will result in more memory usage
> regardless of window size.

Do you know how much memory is additionally allocated? If it is less than 1MB then it may not be worth worrying about...

> As is, you can't exceed 4K x 4K resolution without increasing
> MAX_WIDTH/HEIGHT. Your glViewport call will be clamped to those
> limits if you specify larger.

Let me reformulate my question: Will increasing MAX_WIDTH/HEIGHT break any existing application at normal video screen sizes?

> >>But there probably aren't too many people generating 4Kx4K images.
> >
> > We've been running tests with the glutdemo applications and Xprint at
> > higher resolutions (6Kx8K window size) and did not notice any bad
> > rendering using the software rasterizer.
>
> How large are your triangles?

I thought we had tested all combinations, ranging from very small triangles up to full window size.

> The interpolation error will accumulate and be most noticeable with
> very large triangles.

Can you point me to one of the glutdemo applications which would be likely to fail?

Julien
--
Julien Lafon
Senior Staff Engineer, Hitachi |
From: Brian P. <bri...@tu...> - 2005-04-06 14:08:54
|
Julien Lafon wrote:
> On Apr 6, 2005 3:37 PM, Brian Paul <bri...@tu...> wrote:
>
>>Julien Lafon wrote:
>>
>>>On Apr 5, 2005 10:11 PM, Brian Paul <bri...@tu...> wrote:
>>>Will increasing MAX_WIDTH/HEIGHT affect applications which run in
>>>small windows or only those which use resolutions exceeding the 4Kx4K
>>>limit?
>>
>>Increasing MAX_WIDTH/HEIGHT will result in more memory usage
>>regardless of window size.
>
> Do you know how much memory is additionally allocated? If it is less
> than 1MB then it may not be worth worrying about...

If you do a grep in the sources for MAX_WIDTH you'll see that it's used in lots of places. Some are for stack allocations, others are for heap allocations. It would take some effort to determine exactly how much more memory would be used.

I know of at least one structure that contains arrays dimensioned according to MAX_WIDTH that's currently just under 1MB. That's probably the largest.

>>As is, you can't exceed 4K x 4K resolution without increasing
>>MAX_WIDTH/HEIGHT. Your glViewport call will be clamped to those
>>limits if you specify larger.
>
> Let me reformulate my question: Will increasing MAX_WIDTH/HEIGHT break
> any existing application at normal video screen sizes?

Probably not, but I'm not 100% sure.

>>>>But there probably aren't too many people generating 4Kx4K images.
>>>
>>>We've been running tests with the glutdemo applications and Xprint at
>>>higher resolutions (6Kx8K window size) and did not notice any bad
>>>rendering using the software rasterizer.
>>
>>How large are your triangles?
>
> I thought we had tested all combinations, ranging from very small
> triangles up to full window size.

Try a sliver triangle that goes from the lower-left corner of the viewport to the upper-right. Verify that the two long edges exactly meet at the upper-right.

>>The interpolation error will accumulate and be most noticeable with
>>very large triangles.
>
> Can you point me to one of the glutdemo applications which would be
> likely to fail?

I can't. It would be easier to write a new program that stresses the scenario that's likely to fail.

-Brian |
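[Editor's note] To give a feel for the numbers Brian mentions: a structure of per-pixel arrays dimensioned by MAX_WIDTH grows linearly with the limit. The field layout below is invented for illustration (it is not the real struct span_arrays, which carries many more fields, hence "just under 1MB"), but the arithmetic is the same:

```c
#include <stdint.h>

#define MAX_WIDTH 4096     /* the compile-time limit under discussion */

/* Hypothetical per-fragment span storage: 32 bytes per pixel here,
 * so 4096 * 32 = 128 KB for this one struct. Doubling MAX_WIDTH to
 * 8192 doubles the footprint, allocated regardless of window size. */
struct span_sketch {
    float    z[MAX_WIDTH];            /*  4 bytes/pixel: depth    */
    float    fog[MAX_WIDTH];          /*  4 bytes/pixel           */
    uint8_t  rgba[MAX_WIDTH][4];      /*  4 bytes/pixel: color    */
    float    texcoord[MAX_WIDTH][4];  /* 16 bytes/pixel           */
    uint32_t coverage[MAX_WIDTH];     /*  4 bytes/pixel           */
};
```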