From: Dan <aya...@in...> - 2003-04-07 08:57:43
|
Hi all. Same type of question as before, but this time, substitute 'Unreal Tournament 2003' with 'Neverwinter Nights' :-)

So has anyone had any luck with the DRI drivers, and if so, with which card? I can get into the menu system and create characters, but my system locks hard right at the end of the 'loading' screen when I'm entering the game (I'm currently running XFree86-4.3.0 with the Gatos ati.2 drivers, and using the Beta 3 NWN client). I've tried with the RADEON_NO_TCL=1 and RADEON_NO_VTXFMT=1 environment variables - same problem.

I have seen reports of some people with Voodoo and Matrox cards being able to play the game, so maybe it just affects ATI cards? Or maybe it's not related to the drivers at all. Don't know yet...

Dan
|
From: Roland S. <rsc...@sw...> - 2003-04-07 19:32:52
|
Dan wrote:
> Same type of question as before, but this time, substitute 'Unreal
> Tournament 2003' with 'Neverwinter Nights' :-) So has anyone had any
> luck with the DRI drivers, and if so, with which card?
> [snip]

NWN runs fine for me (you definitely need beta3), but I have a Radeon 9000 and am using DRI trunk. There are some issues, but they aren't that major:
- there are no pictures/wrong pictures when you choose which game to load (probably has nothing to do with the graphics driver)
- no creature shadows, no matter whether the option is switched on or off
- sometimes lighting (?) seems to be wrong, so a texture appears with the wrong colour (at least I think it looked different under W98)
- of course, no shiny water (since this requires the ATI fragment program extension, whatever it is called)
- it's quite a bit slower than under W98
- and last but not least it doesn't use the home directory, so you'd need to be root to play it (probably not a graphics driver bug either ;-))

Roland
|
From: Adam K K. <ad...@vo...> - 2003-04-07 20:11:26
|
On Mon, 7 Apr 2003, Roland Scheidegger wrote:
> [snip]
> NWN runs fine for me (you definitely need beta3), but I have a Radeon
> 9000 and am using DRI trunk. There are some issues, but they aren't that
> major:
> - there are no pictures/wrong pictures when you choose which game to
> load (probably has nothing to do with the graphic driver)

Hmmm... Works for me with my 8500.

> - no creature shadows, doesn't matter if you switch the option on or off
> - sometimes lighting (?) seems to be wrong, so a texture appears with
> the wrong colour (at least I think it looked differently using W98)
> - of course, no shiny water (since this requires the ATI fragment
> program extension, whatever it is called)
> - it's quite a bit slower than using W98
> - and last but not least it doesn't use the home directory, so you'd
> need to be root to play it (probably not a graphic driver bug neither ;-))

No need to be root. You just need to make sure that the user you plan on playing as owns the nwn directory and its subdirectories. In fact, you can even install the game in the user's home directory (i.e. /home/adamk/nwn), as the user, and it'll play fine.
You are missing one big drawback, however. Textures look really poor because S3TC textures can't be used in the game. Something really needs to be done about this lack of S3TC in the DRI drivers.

Adam
|
From: Dan <aya...@in...> - 2003-04-08 07:14:45
|
Adam K Kirchhoff wrote:
> On Mon, 7 Apr 2003, Roland Scheidegger wrote:
>> NWN runs fine for me (you definitely need beta3), but I have a Radeon
>> 9000 and am using DRI trunk. There are some issues, but they aren't
>> that major:
>> - there are no pictures/wrong pictures when you choose which game to
>> load (probably has nothing to do with the graphic driver)
>
> Hmmm... Works for me with my 8500.

Well, it doesn't work at all for me, but I borrowed my best friend's GeForce 2 MX and it works perfectly. Probably something's up with the R100 driver then.

> You are missing one big drawback, however. Textures look really poor
> due to the fact that S3TC textures can't be used in the game.
>
> Something really needs to be done about this lack of S3TC in the DRI
> drivers.

Yes. I have an idea on this. How about I tip off S3 that we ARE implementing S3TC in the DRI and wait for a couple of weeks to see if they start investigating? We'll find out if they're serious about enforcing their patent then...
|
From: Mike A. H. <mh...@re...> - 2003-04-08 10:56:17
|
On Tue, 8 Apr 2003, Dan wrote:
> Yes. I have an idea on this. How about I tip off S3 that we ARE
> implementing S3TC in the DRI and wait for a couple of weeks to see if
> they start investigating? We'll find out if they're serious about
> enforcing their patent then...

Very unlikely. A much more likely scenario is that there would be zero response, and it would continue to be zero response, until the code actually was implemented and some major OS vendor shipped it in an OS product. At some point, possibly years later, that would be a good time to sue the OS vendor for damages due to IP infringement. The actual people who implemented the code would most likely go unharmed. That is much more realistic as to how IP works with open source software.

Of course, in lieu of official permission in writing to use the patented technology with redistribution rights, any smart OS vendor would rip out the offending patent-encumbered code, or rip out the offending piece of software entirely. That is much closer to reality than expecting to strongarm a patent holder as you suggest.

--
Mike A. Harris                  ftp://people.redhat.com/mharris
OS Systems Engineer - XFree86 maintainer - Red Hat
|
From: Ian M. <sp...@f2...> - 2003-04-08 11:22:17
|
On Tue, 8 Apr 2003 07:08:54 -0400 (EDT), "Mike A. Harris" <mh...@re...> wrote:
> in lieu of official permission in writing to use the patented
> technology with redistribution rights, any smart OS vendor would rip
> out the offending patent encumbered code, or rip out the piece of
> offending software entirely.

Isn't it about time we got something like --enable-dodgylegalstatus-s3tc-code then?

--
Do not meddle in the affairs of Dragons for you are tasty and good with ketchup.
|
From: Mike A. H. <mh...@re...> - 2003-04-08 12:30:36
|
On Tue, 8 Apr 2003, Ian Molton wrote:
> isnt it about time we got something like
> --enable-dodgylegalstatus-s3tc-code then?

When enabled, that would create GPL-related issues similar to the MP3 patent issues when such code is linked into GPL code. It also possibly creates a FreeType-like patent-licensing revenue stream for the patent owner, where the patent owner doesn't say anything, but people who want to use the code and avoid patent issues contact S3 and pay royalties just to be safe.

It doesn't matter much to me either way, though. If someone does stick any patent-encumbered code into Mesa, DRI, or XFree86, however, I hope they do it in an easy-to-rip-out manner, so that distribution maintainers don't have to go through hell.

--
Mike A. Harris                  ftp://people.redhat.com/mharris
OS Systems Engineer - XFree86 maintainer - Red Hat
|
From: Alan C. <al...@lx...> - 2003-04-08 12:36:47
|
On Maw, 2003-04-08 at 12:22, Ian Molton wrote:
> isnt it about time we got something like
> --enable-dodgylegalstatus-s3tc-code then?

The vendor would still have to remove the code.
|
From: Marcus M. <mo...@me...> - 2003-04-08 13:07:45
|
Alan Cox writes:
> The vendor would still have to remove the code.

Are you sure? Code is not a use of the patented technique, merely a description of it, which is not prohibited; description is the very purpose of the patent process. As long as you don't provide the actual functionality, you should be OK. Private and academic use of patented technology does not require a license. Isn't a52dec distributed under these assumptions?

In any case, easy removal of the code would probably be preferable, including a warning for those countries where the S3TC patent exists (where does it exist?).

Marcus

--
Dr. Marcus O.C. Metzler | mo...@me... | http://www.metzlerbros.de/
|
From: Alan C. <al...@lx...> - 2003-04-08 14:05:18
|
On Maw, 2003-04-08 at 14:07, Marcus Metzler wrote:
> Are you sure? Code is not a usage of the patent, merely a description,
> which is not prohibited, but the purpose of the patent process.

Code is both. It's the dreaded corner case the US legal process can't cope with. Company lawyers' jobs are to protect companies, not to make the right guess on corner cases.

> As long as you don't provide the actual functionality, you should be
> ok. Private and academic use of patented technology does not require a
> license. Isn't a52dec distributed under these assumptions?

Red Hat don't distribute a52dec.

> In any case, easy removal of the code would probably be preferable,
> including a warning for those countries where the S3TC patent exists
> (where does it exist?).

Probably just the USA/Japan. Or providing a plugin API for the free world to distribute texture converters.

Alan
|
From: Ian R. <id...@us...> - 2003-04-08 16:36:57
|
Marcus Metzler wrote: > In any case, easy removal of the code would probably be preferable, > including a warning for those countries where the S3TC patent exists > (where does it exist?). At the very least, anywhere that US patent 5,956,431 is valid. There may well be other US patents as well as other foreign patents. |
From: Ian R. <id...@us...> - 2003-04-08 17:02:15
|
Ian Romanick wrote: > Marcus Metzler wrote: > >> In any case, easy removal of the code would probably be preferable, >> including a warning for those countries where the S3TC patent exists >> (where does it exist?). > > At the very least, anywhere that US patent 5,956,431 is valid. There > may well be other US patents as well as other foreign patents. Allow me to rephrase that. I am not a lawyer, and I do not speak for IBM in any way, shape, or form. This should not be considered legal advice in any way, shape, or form. The above mentioned patent seems to be related to the issue at hand. |
From: Keith W. <ke...@tu...> - 2003-04-25 07:29:48
|
> Ok, after looking some more at that backtrace and the dri driver, the
> problem seems to be that a visual without a stencil buffer is used, but
> the stencil buffer still gets used (thus the driver thinks we don't have
> hw stencil and will use the software stencil buffer). I've hacked up
> radeon_dri.c so it only exports visuals which do have a stencil buffer.
> This improved frame rates of nwn by about a factor of 5 or so overall
> (unfortunately the R200_NO_TCL needed to get rid of the rendering errors
> cut that in half again, but that's a different topic).
> I've no idea if this is a simple bug of nwn (if it requires the stencil
> buffer, it should request a visual with stencil, right?) or a driver bug
> (maybe the driver could do something requiring a stencil buffer on its
> own, or it incorrectly thinks it needs to clear it?), but I'm sure
> someone can figure it out...

As it stands I don't see why we would ever want to advertise a 32bpp visual without a stencil buffer... Maybe someone on dri-devel can tell me why this happens?

Keith
|
From: Ian R. <id...@us...> - 2003-04-25 20:03:11
|
Keith Whitwell wrote:
> [snip]
> As it stands I don't see why we would ever want to advertise a 32bpp
> visual without a stencil buffer... Maybe someone on dri-devel can tell
> me why this happens?

With the current implementation, there is no reason. However, as soon as fbconfig and the next round of texmem work is available, we should be able to offer 32-bit color with 16-, 24-, and 32-bit depth buffers. In that scenario there are plenty of cases where you have 32-bit color and no stencil buffer.

We can determine whether it's an app bug or a libGL bug pretty easily. Run the app in GDB. When it hits the SSE test, put a breakpoint at glXChooseVisual. It would then be pretty easy to see if the app is asking for a visual with a stencil buffer.

I don't have NWN, so I can't check it myself.
|
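[Ian's recipe, written out as a debugger session. The binary name nwmain and the breakpoint output shown are assumptions, and whether gdb can resolve the attribList parameter by name depends on libGL's debug info; if it can't, the list pointer can be read from the third argument instead.]

```
$ gdb ./nwmain
(gdb) break glXChooseVisual
(gdb) run
...
Breakpoint 1, glXChooseVisual (dpy=..., screen=0, attribList=...)
(gdb) x/16dw attribList
```

The x/16dw command dumps the first 16 words of the attribute list as decimal, which is enough to see whether GLX_STENCIL_SIZE (token value 13) appears before the terminating 0.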
From: Roland S. <rsc...@sw...> - 2003-04-25 23:47:18
|
Ian Romanick wrote:
> [snip]
> We can determine whether it's an app bug or a libGL bug pretty easily.
> Run the app in GDB. When it hits the SSE test, put a breakpoint at
> glXChooseVisual. It would then be pretty easy to see if the app is
> asking for a visual with a stencil buffer.
>
> I don't have NWN, so I can't check it myself.

Ok, done that (though you can't set the breakpoint when the SSE test is hit, too late...). Nwn does not explicitly request a stencil size (it's not in the AttribList).
But in this case, shouldn't it just get no stencil buffer instead of a software-emulated one? Is it possible the app doesn't actually use the stencil buffer but the driver somehow still thinks it needs to clear it?

If you are interested, here are the variables after the AttribList is parsed:

rgba = 1
doublebuffer = 1
stereo = 0
auxBuffers = 0
redSize = 5
greenSize = 5
blueSize = 5
alphaSize = 0
depthSize = 24
stencilSize = 0
accumRedSize = 0
accumGreenSize = 0
accumBlueSize = 0
accumAlphaSize = 0
visualType = 1
visualTypeValue = 32771
transparentPixel = 0
transparentPixelValue = 32768
transparentIndex = 0
transparentIndexValue = 0
transparentRed = 0
transparentRedValue = 0
transparentGreen = 0
transparentGreenValue = 0
transparentBlue = 0
transparentBlueValue = 0

Roland
|
From: Ian R. <id...@us...> - 2003-04-26 00:30:41
|
Roland Scheidegger wrote:
> Ok, done that (though you can't set the breakpoint when the SSE test is
> hit, too late...)

You're right. I'm used to having to debug problems after the context is created. :)

> Nwn does not explicitly request a stencil size (it's not in the
> AttribList). But in this case, shouldn't it just get no stencil buffer
> instead of a software emulated one?
> Is it possible the app doesn't actually use the stencil buffer but the
> driver somehow still thinks it needs to clear it?

Try running with RADEON_DEBUG=fall. Looking at the code, the only way a fallback message will be printed for a stencil fallback is if the app tries to explicitly enable stencil testing (see radeonEnable in lib/GL/mesa/src/drv/radeon/radeon_state.c).

> If you are interested, here are the variables after the AttribList is
> parsed:

If you get a fallback message, then the problem is with the app. It's expecting a stencil buffer but it isn't asking for one. Fortunately, that should take them about 2 minutes to fix. :)
|
From: Roland S. <rsc...@sw...> - 2003-04-26 01:36:03
|
Ian Romanick wrote:
> Try running with RADEON_DEBUG=fall. Looking at the code, the only way a
> fallback message will be printed for a stencil fallback is if the app
> tries to explicitly enable stencil testing (see radeonEnable in
> lib/GL/mesa/src/drv/radeon/radeon_state.c).
>
> If you get a fallback message, then the problem is with the app. It's
> expecting a stencil buffer but it isn't asking for one. Fortunately,
> that should take them about 2 minutes to fix. :)

I've already tried that (that was in the thread on dri-users) and there are no fallback messages (btw I have a Radeon 9000, but it looks like the problem is the same on the original Radeon which the OP has).

Roland
|
From: Keith W. <ke...@tu...> - 2003-04-26 09:04:44
|
Roland Scheidegger wrote:
> [snip]
> Ok, done that (though you can't set the breakpoint when the SSE test is
> hit, too late...)
> Nwn does not explicitly request a stencil size (it's not in the
> AttribList). But in this case, shouldn't it just get no stencil buffer
> instead of a software emulated one?
> Is it possible the app doesn't actually use the stencil buffer but the
> driver somehow still thinks it needs to clear it?

Looking at the backtrace you provided earlier is interesting:

#4  0x409cf1d5 in r200Clear (ctx=0xe12bb40, mask=236108608, all=1 '\001', cx=0, cy=0, cw=1024, ch=768) at r200_ioctl.c:574
#5  0x408bbc85 in _mesa_Clear (mask=0) at buffers.c:130

Look at the value of mask in r200Clear - this is the set of buffers being requested to clear. In hex this is 0xe12bb40 -- in other words, garbage.

Looking at the code in _mesa_Clear:

   GLbitfield ddMask;

   /* don't clear depth buffer if depth writing disabled */
   if (!ctx->Depth.Mask)
      mask &= ~GL_DEPTH_BUFFER_BIT;

   /* Build bitmask to send to driver Clear function */
   ddMask = mask & (GL_DEPTH_BUFFER_BIT |
                    GL_STENCIL_BUFFER_BIT |
                    GL_ACCUM_BUFFER_BIT);
   if (mask & GL_COLOR_BUFFER_BIT) {
      ddMask |= ctx->Color._DrawDestMask;
   }

   ASSERT(ctx->Driver.Clear);
   ctx->Driver.Clear( ctx, ddMask, (GLboolean) !ctx->Scissor.Enabled,
                      x, y, width, height );

The value we're interested in is ddMask - which can only get those garbage bits (assuming the backtrace is correct) if ctx->Color._DrawDestMask is corrupt. Note that the backtrace says that mask in _mesa_Clear is zero (which is also kind of strange).

So, I'm wondering if you can redo the gdb experiment, but print out the value of ctx->Color._DrawDestMask. (Let me know if you need help with gdb to accomplish this.)

Keith
|
From: Roland S. <rsc...@sw...> - 2003-04-26 09:55:38
|
Keith Whitwell wrote:
> [snip]
> So, I'm wondering if you can redo the gdb experiment, but print out the
> value of ctx->Color._DrawDestMask. (Let me know if you need help with
> gdb to accomplish this.)
I hope I got it right (this is from interrupting the game with ctrl-c, so the function which was interrupted was r200SpanRenderFinish, hoping nobody changed ctx in-between). If it's not useful, I can try more, but not before monday.

(gdb) print ctx->Color
$3 = {ClearIndex = 0, ClearColor = {0, 0, 0, 0}, IndexMask = 4294967295,
  ColorMask = "ÿÿÿÿ", DrawBuffer = 1029, _DrawDestMask = 4 '\004',
  AlphaEnabled = 1 '\001', AlphaFunc = 516, AlphaRef = 0.200000003,
  BlendEnabled = 1 '\001', BlendSrcRGB = 770, BlendDstRGB = 771,
  BlendSrcA = 770, BlendDstA = 771, BlendEquation = 32774,
  BlendColor = {0, 0, 0, 0}, LogicOp = 5379, IndexLogicOpEnabled = 0 '\0',
  ColorLogicOpEnabled = 0 '\0', DitherFlag = 1 '\001'}

Roland

btw is this a coincidence that the mask value (236108608) is in hex the same as the ctx address?
|
From: Keith W. <ke...@tu...> - 2003-04-26 20:54:47
|
Roland Scheidegger wrote:
> Keith Whitwell wrote:
>> [snip]
>> So, I'm wondering if you can redo the gdb experiment, but print out
>> the value of ctx->Color._DrawDestMask.
> > > I hope I got it right (this is from interrupting the game with ctrl-c, > so the function which was interrupted was r200SpanRenderFinish, hoping > nobody changed ctx in-between). If it's not useful, I can try more, but > not before monday. > > (gdb) print ctx->Color > $3 = {ClearIndex = 0, ClearColor = {0, 0, 0, 0}, IndexMask = 4294967295, > ColorMask = "ÿÿÿÿ", DrawBuffer = 1029, _DrawDestMask = 4 '\004', > AlphaEnabled = 1 '\001', AlphaFunc = 516, AlphaRef = 0.200000003, > BlendEnabled = 1 '\001', BlendSrcRGB = 770, BlendDstRGB = 771, > BlendSrcA = 770, BlendDstA = 771, BlendEquation = 32774, BlendColor = > {0, 0, > 0, 0}, LogicOp = 5379, IndexLogicOpEnabled = 0 '\0', > ColorLogicOpEnabled = 0 '\0', DitherFlag = 1 '\001'} > > Roland > > btw is this a coincidence that the mask value (236108608) is in hex the > same as the ctx address? > Well spotted, I guess it could just be gcc rearranging its variables after the fact. I'll have to add some debug to see what's really happening. Keith |
From: Roland S. <rsc...@sw...> - 2003-04-28 19:40:47
|
Keith Whitwell wrote:
> Roland Scheidegger wrote:
>
>> btw is this a coincidence that the mask value (236108608) is in hex
>> the same as the ctx address?
>
> Well spotted, I guess it could just be gcc rearranging its variables
> after the fact. I'll have to add some debug to see what's really
> happening.

I think if the game would issue a glClear command with the
GL_STENCIL_BUFFER_BIT set this would explain the behaviour.
Interestingly, shadows indeed work if you feed the game a visual with a
stencil buffer, so I'd suppose it's really a bug of nwn (which, as Ian
mentioned, should be no problem to fix).
Though I'm not sure that the behaviour of the driver is correct either.
If you try to access the stencil buffer within a visual which doesn't
have a stencil buffer, shouldn't it just return an error or something
instead of providing a software emulation? Not to mention that the
emulation doesn't seem to work (there are no shadows; the only emulated
operation wrt the stencil buffer seems to be that clear operation.
Though it might be possible nwn disables shadows if it gets a visual
without a stencil buffer, but the programmers forgot that in this case
it's unnecessary to clear the (non-existent) stencil buffer).

(btw sorry if I'm posting garbage, I'm not familiar with OGL at all
unfortunately :-()

Roland |
From: Allen A. <ak...@po...> - 2003-04-28 19:52:32
|
On Mon, Apr 28, 2003 at 09:44:51PM +0200, Roland Scheidegger wrote:
| If you try to access the stencil buffer within a visual which doesn't
| have a stencil buffer, shouldn't it just return an error or something
| instead of providing a software emulation?

The OpenGL spec says:

    If there is no stencil buffer, no stencil modification can
    occur, and it is as if the stencil tests always pass, regardless
    of any calls to glStencilOp.

Allen |
From: Leif D. <lde...@re...> - 2003-04-28 20:10:30
|
On Mon, 28 Apr 2003, Allen Akin wrote:
> On Mon, Apr 28, 2003 at 09:44:51PM +0200, Roland Scheidegger wrote:
> | If you try to access the stencil buffer within a visual which doesn't
> | have a stencil buffer, shouldn't it just return an error or something
> | instead of providing a software emulation?
>
> The OpenGL spec says:
>
>     If there is no stencil buffer, no stencil modification can
>     occur, and it is as if the stencil tests always pass, regardless
>     of any calls to glStencilOp.
>
> Allen

It also says (for Clears):

    If a buffer is not present, then a Clear directed at that buffer
    has no effect.

--
Leif Delgass
http://www.retinalburn.net |
From: Allen A. <ak...@po...> - 2003-04-28 21:28:30
|
On Mon, Apr 28, 2003 at 03:09:22PM -0500, Leif Delgass wrote:
| On Mon, 28 Apr 2003, Allen Akin wrote:
|
| > If there is no stencil buffer, no stencil modification can
| > occur, and it is as if the stencil tests always pass, regardless
| > of any calls to glStencilOp.
|
| It also says (for Clears):
|
|     If a buffer is not present, then a Clear directed at that buffer
|     has no effect.

Yep. Same effect in the long run: If there is no stencil buffer, then
nothing you attempt to do to the stencil buffer will change it. :-)
The main point was just that it doesn't cause an error.

Allen |
From: Ian R. <id...@us...> - 2003-04-28 20:45:16
|
Roland Scheidegger wrote:
> I think if the game would issue a glClear command with the
> GL_STENCIL_BUFFER_BIT set this would explain the behaviour.
> Interestingly, shadows indeed work if you feed the game a visual with a
> stencil buffer, so I'd suppose it's really a bug of nwn (which, as Ian
> mentioned, should be no problem to fix).
> Though I'm not sure that the behaviour of the driver is correct either.
> If you try to access the stencil buffer within a visual which doesn't
> have a stencil buffer, shouldn't it just return an error or something
> instead of providing a software emulation? Not to mention that the
> emulation doesn't seem to work (there are no shadows, the only emulated
> operation wrt the stencil buffer seems to be that clear operation.

IIRC, if there is no stencil buffer in the visual, the GL is supposed to
ignore any stencil related requests. That is, it's supposed to act like
the app never made any stencil related calls.

> Though it might be possible nwn disables shadows if it gets a visual
> without a stencil buffer, but the programmers forgot that in this case
> it's unnecessary to clear the (non-existant) stencil buffer).

If there was no fallback message for the stencil buffer printed, then
that is quite possibly what's happening. If there is no stencil buffer
in the visual, then the driver *should* ignore a request to clear it.
It sounds like it may be a bug in both the driver and the game. :)
Could you submit a bug report to bugs.xfree86.org? When it comes
through, I'll assign it to myself.

After looking at radeonClear (lib/GL/mesa/src/drv/radeon/radeon_ioctl.c)
it looks like if a stencil clear is requested and there is no *hardware*
stencil buffer, it will do a sw fallback to clear it. However, it
should print a message like

    radeonClear: swrast clear, mask: 400

if 'RADEON_DEBUG=fall' is set. Did you ever see that message? I assume
not.

I modified glxgears to clear GL_STENCIL_BUFFER_BIT and set
R200_DEBUG=fall, and I don't see the message on r200 either, but I do
see glxgears drop from over 1,000 fps to about 110 fps. Hmm... |