Archived messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1999 | | | | | | | | | | | | 1 |
| 2000 | 17 | | | | | 8 | | 7 | 8 | 67 | 32 | 78 |
| 2001 | 20 | 5 | 8 | 9 | 12 | | 2 | 6 | | | | |
| 2002 | | | | | | | | | 1 | | | 1 |
| 2003 | | | | | | | | | 4 | | | |
| 2005 | | | 4 | | 1 | | | | 2 | 5 | 9 | 4 |
| 2006 | | | | 9 | | 4 | 2 | 8 | 25 | 2 | | |
| 2007 | | | 6 | | | | 4 | 3 | | 2 | | 3 |
| 2008 | 2 | 8 | 2 | | | 1 | | 3 | | | 5 | |
| 2009 | 1 | | 5 | 2 | | 4 | 3 | 1 | 6 | | 6 | 9 |
| 2010 | | | | | 2 | | 2 | | | | | |
| 2011 | | 3 | | | | | 1 | | | | | |
| 2012 | | | | | 2 | | | | | | | |
From: Judith L. <ju...@os...> - 2005-09-20 21:18:55
|
Hello,
I had a bit of difficulty getting 'glean' to compile. It would not
compile until I edited:
# diff -Nur glean_cvs/src/glean/tvertattrib.cpp glean/src/glean/tvertattrib.cpp
--- glean_cvs/src/glean/tvertattrib.cpp	2005-09-19 13:16:28.020440201 -0700
+++ glean/src/glean/tvertattrib.cpp	2005-09-20 13:16:16.195697571 -0700
@@ -1524,7 +1524,7 @@
for (int i = 0; i < NUM_2_0_ATTRIB_FUNCS; i++) {
int attribFunc = NUM_NV_ATTRIB_FUNCS + NUM_ARB_ATTRIB_FUNCS+ i;
bool b;
- b = TestAttribs(r, attribFunc, getAttribfv, aliasing, numAttribs);
+ b = TestAttribs(r, attribFunc, getAttribfv, REQUIRED, numAttribs);
if (!b)
result = false;
r.num20tested++;
---------------------------------
The error message said the variable 'aliasing' was not declared in that
function. Is the value 'REQUIRED' okay, or should it be something else
at this point? I pulled from the CVS head on sourceforge. Is that the
latest source?
Thanks,
Judith Lebzelter
OSDL
|
|
From: Neateye <nit...@ao...> - 2005-05-17 22:33:03
|
Call out Gouranga be happy!!! Gouranga Gouranga Gouranga .... That which brings the highest happiness!! |
|
From: Allen A. <ak...@po...> - 2005-03-03 04:14:03
|
On Thu, Mar 03, 2005 at 01:00:59AM +0100, Nicolai Haehnle wrote:
| On Wednesday 02 March 2005 23:18, Allen Akin wrote:
| > Glean tests the special case because it's useful to know if all the bits
| > in the color channel can be written reliably. For example,
| > coloredLitPerf2 and coloredTexPerf2 don't bother running on visuals that
| > fail the color correctness test, because they can't tell whether the
| > final rendered image is correct.
| >
| > You could change things so that the low-order color bits aren't used
| > when they're unreliable. That might be a good small project.
|
| You mean a change to the coloredLitPerf2 and coloredTexPerf2 tests so that
| they compare fewer bits when exactRGBA has failed?

That sounds like the right approach. Though on second thought, since the
problem is caused by rounding rather than truncation, you'd probably have
to modify coloredLitPerf2 and coloredTexPerf2 so that they use the error
computed by exactRGBA as a tolerance. (Rather than just ignoring some
low-order bits, since the effects of rounding could ripple up to any
arbitrary higher-order bit.)

| Alternatively, do you think it would make sense to add a test for color
| correctness when the color values passed to glColor are properly "stable"?
| What I'm thinking of is a test very similar to exactRGBA that stabilizes
| the color values before passing them to glColor. For example, if the
| framebuffer has 8-bit color channels and the expected readback value is
| 0xAB, pass 0xABABABAB to glColorui and 0xABAB to glColorus. The rationale
| behind 0xABABABAB is that it is the mathematically correct 32-bit fixed
| point extension of the 8-bit fixed point 0xAB when 0xFF maps to 0xFFFFFFFF.

That seems like a reasonable thing to expect, but since the spec doesn't
require it, it would be hard to claim that an implementation "fails" if it
doesn't behave that way. (For example, if it uses a low-precision
floating-point representation.)

Another way to approach it would be to add a test that attempts to deduce
the internal floating-point precision that the implementation uses for
color. It wouldn't be PASS/FAIL; it would be a NOTE test.

Allen
|
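A minimal sketch of the tolerance-based comparison Allen suggests; the names are illustrative, not glean's actual code:

```cpp
// Sketch: compare channel values against the worst-case error measured
// by exactRGBA, instead of masking off low-order bits.  All names here
// are illustrative.
#include <cmath>

struct MeasuredError {
    float worstCase;  // worst-case per-channel error reported by exactRGBA
};

bool channelOK(float rendered, float expected, const MeasuredError& e) {
    // Rounding can ripple past any fixed bit position (e.g. 0x7fff
    // rounding up to 0x8000), so a magnitude test is safer than
    // ignoring the bottom n bits.
    return std::fabs(rendered - expected) <= e.worstCase;
}
```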
|
From: Nicolai H. <pre...@gm...> - 2005-03-03 00:01:08
|
On Wednesday 02 March 2005 23:18, Allen Akin wrote:
> On Wed, Mar 02, 2005 at 09:27:25PM +0100, Nicolai Haehnle wrote:
> | So whose fault is this? Mesa, glean or even the spec itself?
> | Personally, I'd say the truncation behaviour described in the second
> | paragraph of 2.14.9 is wrong; it should allow for upward rounding by the
> | implementation. However, I'd really appreciate your opinion in this matter.
>
> In the olden days, it was surprisingly common for applications to demand
> absolute control over the values written to the color buffer. (Often
> because they were going to use exclusive-or for reversible drawing, or
> direct-color style color tables for color correction or animation, or
> multiple color layers for ECAD, or things even more tricky.)
>
> It also turns out to be very useful for testing the GL implementation if
> tests can guarantee that the color values they specify will be written
> into the color buffer without any modification.
>
> The language in section 2.14.9 is there specifically to support those
> uses. However, it is a special case, and it requires special code paths
> in most GL implementations. Not everyone wants to bother with those.
>
> The compromise that most developers seem to have made is that 8-bit
> color channels have the special behavior, but deeper color channels
> don't. Implementing that is simple, and it preserves compatibility for
> the few legacy applications that need it nowadays.

Thanks for the explanation.

> Glean tests the special case because it's useful to know if all the bits
> in the color channel can be written reliably. For example,
> coloredLitPerf2 and coloredTexPerf2 don't bother running on visuals that
> fail the color correctness test, because they can't tell whether the
> final rendered image is correct.
>
> You could change things so that the low-order color bits aren't used
> when they're unreliable. That might be a good small project.

You mean a change to the coloredLitPerf2 and coloredTexPerf2 tests so that
they compare fewer bits when exactRGBA has failed?

Alternatively, do you think it would make sense to add a test for color
correctness when the color values passed to glColor are properly "stable"?
What I'm thinking of is a test very similar to exactRGBA that stabilizes
the color values before passing them to glColor. For example, if the
framebuffer has 8-bit color channels and the expected readback value is
0xAB, pass 0xABABABAB to glColorui and 0xABAB to glColorus. The rationale
behind 0xABABABAB is that it is the mathematically correct 32-bit fixed
point extension of the 8-bit fixed point 0xAB when 0xFF maps to 0xFFFFFFFF.

cu,
Nicolai
|
|
From: Allen A. <ak...@po...> - 2005-03-02 22:18:43
|
On Wed, Mar 02, 2005 at 09:27:25PM +0100, Nicolai Haehnle wrote:
| So whose fault is this? Mesa, glean or even the spec itself?
| Personally, I'd say the truncation behaviour described in the second
| paragraph of 2.14.9 is wrong; it should allow for upward rounding by the
| implementation. However, I'd really appreciate your opinion in this matter.

In the olden days, it was surprisingly common for applications to demand
absolute control over the values written to the color buffer. (Often
because they were going to use exclusive-or for reversible drawing, or
direct-color style color tables for color correction or animation, or
multiple color layers for ECAD, or things even more tricky.)

It also turns out to be very useful for testing the GL implementation if
tests can guarantee that the color values they specify will be written
into the color buffer without any modification.

The language in section 2.14.9 is there specifically to support those
uses. However, it is a special case, and it requires special code paths
in most GL implementations. Not everyone wants to bother with those.

The compromise that most developers seem to have made is that 8-bit
color channels have the special behavior, but deeper color channels
don't. Implementing that is simple, and it preserves compatibility for
the few legacy applications that need it nowadays.

Glean tests the special case because it's useful to know if all the bits
in the color channel can be written reliably. For example,
coloredLitPerf2 and coloredTexPerf2 don't bother running on visuals that
fail the color correctness test, because they can't tell whether the
final rendered image is correct.

You could change things so that the low-order color bits aren't used
when they're unreliable. That might be a good small project.

Allen
|
|
From: Nicolai H. <pre...@gm...> - 2005-03-02 20:27:37
|
Hi,
while playing around with a Mesa-based driver, I noticed that pure Mesa
software rendering (LIBGL_FORCE_XMESA) fails the "exactRGBA" (in
treadpix.cpp) test of glean:

exactRGBA: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 44
    Unsigned short worst-case error was 0x100 at (0, 0)
        expected (0xeb00, 0x5e00, 0xb300, 0x0)
        got (0xea00, 0x5e00, 0xb300, 0x0)
    Unsigned int worst-case error was 0x1000000 at (2, 0)
        expected (0xd1000000, 0x66000000, 0x8000000, 0x0)
        got (0xd1000000, 0x66000000, 0x9000000, 0x0)

As you can see, the least significant bit in the framebuffer isn't what
glean expects.

After some investigation, this seems to be due to a subtle rounding issue.
The exactRGBA test does not round color values before submitting them via
glColorusv/uiv. Mesa converts those unrounded fixed point values to floats
using the USHORT_TO_FLOAT and UINT_TO_FLOAT macros, which divide by (2^n - 1)
where n = 16 or 32.

Glean, on the other hand, expects a rounding behaviour where the lower bits
are basically truncated. But this "truncation" is exactly what the quoted
paragraph from section 2.14.9 of the OpenGL 2.0 specification expects, even
though the *same* section of the spec requires the division behaviour that
Mesa implements.

So whose fault is this? Mesa, glean or even the spec itself?
Personally, I'd say the truncation behaviour described in the second
paragraph of 2.14.9 is wrong; it should allow for upward rounding by the
implementation. However, I'd really appreciate your opinion in this matter.

cu,
Nicolai
|
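The mismatch above can be reproduced with a few lines of arithmetic. A minimal sketch, assuming Mesa's divide-by-(2^16 - 1) conversion and a round-to-nearest scale back to 8 bits; this is illustrative, not Mesa or glean source:

```cpp
// Illustrative arithmetic only: how dividing by 2^16 - 1 instead of
// truncating moves the top byte by one LSB, reproducing the
// 0xeb00-vs-0xea00 mismatch reported by exactRGBA.
#include <cstdio>

int main() {
    unsigned c = 0xeb00;                    // unrounded 16-bit red value
    float f = float(c) / 65535.0f;          // Mesa-style USHORT_TO_FLOAT: ~0.917983
    int viaFloat = int(f * 255.0f + 0.5f);  // back to 8 bits: 234 == 0xea
    int truncated = c >> 8;                 // glean's expectation: 235 == 0xeb
    std::printf("via float: 0x%02x, truncated: 0x%02x\n", viaFloat, truncated);
}
```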
|
From: Allen A. <ak...@po...> - 2003-09-24 22:15:26
|
On Tue, Sep 23, 2003 at 11:12:39PM -0700, Eric Anholt wrote:
| The SiS DRI driver is failing on the paths test. For some reason the
| value after the modulated-texture-only test is 0.996 (1/255 short of the
| 1.0 it's supposed to be), and changing it to GL_REPLACE worked fine. I
| assume a 1-bit error here isn't that big of an issue, and must just be a
| hardware limitation?

It's hard to generalize -- it could be symptomatic of a blending
arithmetic problem, or of a pixel value conversion problem. Might be in
either driver or hardware.

| Attached is a patch to have the paths failure message also include the
| value of the pixel, so others can see what's happening in case of
| error. Also, the wrong arg was being passed to FailMessage in the
| always-fail case. If anyone can think of a better wording, that would
| be great. It's a bit awkward at the moment.

Looks OK to me, so I'll go ahead and apply it.

Allen
|
|
From: Eric A. <et...@lc...> - 2003-09-24 06:13:11
|
The SiS DRI driver is failing on the paths test. For some reason the
value after the modulated-texture-only test is 0.996 (1/255 short of the
1.0 it's supposed to be), and changing it to GL_REPLACE worked fine. I
assume a 1-bit error here isn't that big of an issue, and must just be a
hardware limitation?

Attached is a patch to have the paths failure message also include the
value of the pixel, so others can see what's happening in case of
error. Also, the wrong arg was being passed to FailMessage in the
always-fail case. If anyone can think of a better wording, that would
be great. It's a bit awkward at the moment.

--
Eric Anholt    et...@lc...
http://people.freebsd.org/~anholt/    an...@Fr...
|
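One plausible source of such a 1-LSB error, as a hedged sketch (illustrative arithmetic, not the SiS driver's actual code): a fixed-point GL_MODULATE that multiplies two 8-bit channels and shifts truncates toward zero:

```cpp
// Illustrative: 0xff * 0xff >> 8 truncates to 0xfe, landing one LSB
// short of 1.0, which matches the 0.996 value reported above.
#include <cstdio>

int main() {
    unsigned frag = 0xff, texel = 0xff;        // both channels at "1.0"
    unsigned modulated = (frag * texel) >> 8;  // 0xfe01 >> 8 = 0xfe
    std::printf("%.3f\n", modulated / 255.0);  // prints 0.996
    // Dividing by 255 (or adding a rounding term) gives 0xff exactly,
    // which is why switching to GL_REPLACE hides the error.
}
```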
|
From: Allen A. <ak...@po...> - 2003-09-24 05:39:31
|
On Tue, Sep 23, 2003 at 08:52:56PM -0700, Eric Anholt wrote:
| While working on the SiS driver, I've noticed something annoying with
| glean. When a test that has many combinations fails, it usually quits
| testing rather than also testing the other combinations. ...
|
| Attached is a patch for the texEnv test. If you feel this is
| appropriate, I may end up with patches for other tests, too.

Looks good to me. I'll apply it shortly.

I've been thinking it's time to re-energize the project, and have a few
ideas in mind. Your patch is a good start.

Allen
|
|
From: Eric A. <et...@lc...> - 2003-09-24 03:53:33
|
While working on the SiS driver, I've noticed something annoying with
glean. When a test that has many combinations fails, it usually quits
testing rather than also testing the other combinations. I noticed this
in particular on the texEnv test, which failed for seven different cases
of env mode and format. It was much easier to see the list of seven, and
as I fixed cases see the list shrink, than to notice that I had fixed the
GL_REPLACE/GL_LUMINANCE case but not GL_REPLACE/GL_INTENSITY.

Attached is a patch for the texEnv test. If you feel this is
appropriate, I may end up with patches for other tests, too.

--
Eric Anholt    et...@lc...
http://people.freebsd.org/~anholt/    an...@Fr...
|
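The pattern the patch introduces, as a minimal sketch; all names are hypothetical stand-ins (see ttexenv.cpp for the real code):

```cpp
// Report every failing mode/format combination instead of returning on
// the first failure.  All names here are hypothetical.
#include <cstdio>

const int numEnvModes = 5;  // e.g. REPLACE, MODULATE, DECAL, BLEND, ADD
const int numFormats  = 6;  // e.g. RGBA ... INTENSITY

bool testCombination(int mode, int fmt) {  // stub for the real per-case test
    return !(mode == 0 && fmt == 3);
}

bool testAllCombinations() {
    bool allPassed = true;
    for (int mode = 0; mode < numEnvModes; ++mode)
        for (int fmt = 0; fmt < numFormats; ++fmt)
            if (!testCombination(mode, fmt)) {
                std::printf("FAIL: env mode %d, format %d\n", mode, fmt);
                allPassed = false;  // keep going; don't bail out
            }
    return allPassed;
}

int main() { return testAllCombinations() ? 0 : 1; }
```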
|
From: Brian P. <br...@tu...> - 2002-12-04 23:23:06
|
I just checked in some updates to the texCombine test to exercise the GL_ARB_texture_env_crossbar extension. The test is far from exhaustive - it just tests texture source routing with simple combiner modes. -Brian |
|
From: Allen A. <ak...@po...> - 2001-08-07 17:56:36
|
On Tue, Aug 07, 2001 at 11:55:58AM -0600, Brian Paul wrote:
| The test looks at the frame buffer RGBA bit depths as well as the
| texture component bit depths when using a non-packed RGBA internal
| format. The minimum of those depths is used to compute the tolerance.

Sounds like the right approach. (The best we can do, anyway, since
there's no query for the precision of internal arithmetic.)

Allen
|
|
From: Brian P. <br...@va...> - 2001-08-07 17:53:09
|
Allen Akin wrote:
> On Tue, Aug 07, 2001 at 10:51:04AM -0500, Kent Miller wrote:
> | > I'd be happy to loosen the tolerance a little. Can you send a patch
> | > that fixes it for you?
> |
> | I just added another bit - instead of 8.0/256.0, I made it 16.0/256.0.
>
> Of course the risk here is that 16/256 is a huge amount of slop for a
> machine (or a software renderer) that supports color channels deeper
> than 8 bits. I haven't looked -- does the test check the color
> channel depth, or does it just assume the channel is 8 bits deep?

The test looks at the frame buffer RGBA bit depths as well as the
texture component bit depths when using a non-packed RGBA internal
format. The minimum of those depths is used to compute the tolerance.

-Brian
|
|
From: Allen A. <ak...@po...> - 2001-08-07 17:47:01
|
On Tue, Aug 07, 2001 at 10:51:04AM -0500, Kent Miller wrote:
| > I'd be happy to loosen the tolerance a little. Can you send a patch
| > that fixes it for you?
|
| I just added another bit - instead of 8.0/256.0, I made it 16.0/256.0.

Of course the risk here is that 16/256 is a huge amount of slop for a
machine (or a software renderer) that supports color channels deeper
than 8 bits. I haven't looked -- does the test check the color
channel depth, or does it just assume the channel is 8 bits deep?

Allen
|
|
From: Brian P. <br...@va...> - 2001-08-07 16:04:05
|
Kent Miller wrote:
> > I'd be happy to loosen the tolerance a little. Can you send a patch
> > that fixes it for you?
>
> I just added another bit - instead of 8.0/256.0, I made it 16.0/256.0. If
> you want, I can change it since I have modified the test to support the ARB
> version (along with the EXT) of the extension anyway.

Sure, go for it.

-Brian
|
|
From: Kent M. <kpm...@ap...> - 2001-08-07 15:53:06
|
> I'd be happy to loosen the tolerance a little. Can you send a patch
> that fixes it for you?

I just added another bit - instead of 8.0/256.0, I made it 16.0/256.0. If
you want, I can change it since I have modified the test to support the ARB
version (along with the EXT) of the extension anyway.

Kent
|
|
From: Brian P. <br...@va...> - 2001-08-07 15:36:24
|
Kent Miller wrote:
> Hi,
>
> I am looking at the texCombine test. The calculated tolerance given to the
> test is 8.0 / (1<<rBits), which in my case turns out to be 8/256, or .03125.
>
> Our OpenGL implementation stores texture components as unsigned bytes, so
> for example passing in a texture with a component value of 0.5 would yield
> an internal storage value of 0x7F, obtained by truncation (instead of 0x80,
> which could be obtained by rounding and would technically be closer to
> correct).
>
> So my question is what criterion was used to pick the tolerance of 8.0 /
> (1<<rBits)? If I add another bit of error (chosen by the fact that we have
> an additional source of error with the truncation described above) I do
> pass the test.
>
> Can someone enlighten me here?

The tolerance I chose is rather arbitrary. I half-expected someone would
find a problem with it someday. As Allen said, it's hard to choose a good
tolerance given the many possible variations in OpenGL implementations.

I'd be happy to loosen the tolerance a little. Can you send a patch
that fixes it for you?

-Brian
|
|
From: Allen A. <ak...@po...> - 2001-07-30 22:34:42
|
On Mon, Jul 30, 2001 at 05:11:10PM -0500, Kent Miller wrote:
|
| So my question is what criterion was used to pick the tolerance of 8.0 /
| (1<<rBits)?

I'll have to leave the answer to Brian, since he's the author of that
test.

But as a general observation, it may be impossible to pick a simple
tolerance value for tests that involve arithmetic in multiple pipeline
stages. For example, there are systems in which the texture component
size, texture filtering arithmetic unit width, and framebuffer component
size are all different. Although you can query the component sizes,
there's no way to determine the precision of internal arithmetic.

In such cases a "fail" indication is best thought of as advice to the
application programmer: it means "don't count on having as much precision
as the depth of the framebuffer or the texture seems to imply." This is
the philosophy behind the blendFunc test, for example. An implementation
that "fails" blendFunc is not necessarily flawed; it may be behaving
exactly as designed, just not as an application programmer might guess.

One way to design tests to get around such things is to make them return
a NOTE result, and store the precision with which the computations appear
to have been performed. This doesn't give you a clean pass/fail
indication, but it does allow you to compare implementations to see which
is more precise.

The key to making the test useful is to choose the audience for the test
carefully, match the test to the expectations of that audience, and then
document it.

Allen
|
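A minimal sketch of the NOTE-style approach described above, assuming a hypothetical measureWorstError() helper in place of the real render-and-readback step:

```cpp
// Estimate how many bits of precision survived the pipeline and record
// that, rather than passing or failing.  measureWorstError() is a
// hypothetical stand-in, not glean code.
#include <cmath>
#include <cstdio>

float measureWorstError() { return 1.0f / 4096.0f; }  // stub result

int main() {
    float err = measureWorstError();
    // An error of about 2^-n implies roughly n usable bits.
    int bits = (err > 0.0f) ? int(-std::log2(err)) : 32;
    std::printf("NOTE: arithmetic appears good to ~%d bits\n", bits);
}
```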
|
From: Kent M. <kpm...@ap...> - 2001-07-30 22:13:22
|
Hi,

I am looking at the texCombine test. The calculated tolerance given to the
test is 8.0 / (1<<rBits), which in my case turns out to be 8/256, or .03125.

Our OpenGL implementation stores texture components as unsigned bytes, so
for example passing in a texture with a component value of 0.5 would yield
an internal storage value of 0x7F, obtained by truncation (instead of 0x80,
which could be obtained by rounding and would technically be closer to
correct).

So my question is what criterion was used to pick the tolerance of 8.0 /
(1<<rBits)? If I add another bit of error (chosen by the fact that we have
an additional source of error with the truncation described above) I do
pass the test.

Can someone enlighten me here?

Thanks,
Kent
|
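The arithmetic in question, spelled out (illustrative values):

```cpp
// Glean's texCombine tolerance and Kent's loosened value, for 8-bit
// channels.
#include <cstdio>

int main() {
    int rBits = 8;                          // channel depth (GL_RED_BITS)
    float tol      = 8.0f  / (1 << rBits);  // 8/256  = 0.03125
    float loosened = 16.0f / (1 << rBits);  // 16/256 = 0.0625
    // One extra bit of slack absorbs one more truncation step, such as
    // the texture-upload truncation Kent describes.
    std::printf("%g %g\n", tol, loosened);
}
```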
|
From: Brian P. <br...@va...> - 2001-05-31 19:11:53
|
Jeffrey Mehlhorn wrote:
> Brian,
>
> Could you send me the two lines of code that you
> added to your glext.h file so that I can get
> glean to build. I guess the correct values of
> DOT3_RGB_EXT and DOT3_RGBA_EXT are not that much
> of a concern, but I figure if you know what they
> should be it wouldn't hurt.

#ifndef GL_ARB_texture_env_dot3
#define GL_DOT3_RGB_ARB   0x86AE
#define GL_DOT3_RGBA_ARB  0x86AF
#endif

#ifndef GL_EXT_texture_env_dot3
#define GL_DOT3_RGB_EXT   0x8740
#define GL_DOT3_RGBA_EXT  0x8741
#endif

You should also have:

#ifndef GL_ARB_texture_env_dot3
#define GL_ARB_texture_env_dot3 1
#endif

#ifndef GL_EXT_texture_env_dot3
#define GL_EXT_texture_env_dot3 1
#endif

-Brian
|
|
From: Jeffrey M. <Jef...@3D...> - 2001-05-31 16:23:49
|
Brian,

Could you send me the two lines of code that you added to your glext.h
file so that I can get glean to build? I guess the correct values of
DOT3_RGB_EXT and DOT3_RGBA_EXT are not that much of a concern, but I
figure if you know what they should be it wouldn't hurt.

Jeff

-----Original Message-----
From: Brian Paul [mailto:br...@va...]
Sent: Thursday, May 31, 2001 11:10 AM
To: gle...@li...
Subject: Re: [glean-dev] Missing Extension Constants

Jeffrey Mehlhorn wrote:
> Glean Developers,
>
> I have recently pulled the glean source tree, actually
> it's been a couple of weeks, but in any event, when I
> compile the glean executable, I get the following error:
>
> ttexcombine.cpp(176) : error C2065: 'GL_DOT3_RGB_EXT' : undeclared
> identifier
> ttexcombine.cpp(192) : error C2065: 'GL_DOT3_RGBA_EXT' : undeclared
> identifier
>
> I pulled the glext.h header file today and the constants
> denoted in the error messages above were not there.
>
> Does anyone have a copy of glext.h that contains these
> constants?

SGI's been slow to update glext.h with the latest ARB extension tokens.
I just hacked my copy of glext.h. Otherwise it would be easy to add the
appropriate #ifndef/#define/#endif statements in Glean to work around
this too.

> Also, ttexenv.cpp contains a call to assert which is
> undefined in my Windows based build environment. This
> doesn't present too much of an obstacle as I can easily
> comment out the call, but does anyone know what library
> I need to link with to resolve the assert call.

Just need to #include <assert.h>. I've checked in the fix.

-Brian
|
|
From: Brian P. <br...@va...> - 2001-05-31 16:06:07
|
Jeffrey Mehlhorn wrote:
> Glean Developers,
>
> I have recently pulled the glean source tree, actually
> it's been a couple of weeks, but in any event, when I
> compile the glean executable, I get the following error:
>
> ttexcombine.cpp(176) : error C2065: 'GL_DOT3_RGB_EXT' : undeclared
> identifier
> ttexcombine.cpp(192) : error C2065: 'GL_DOT3_RGBA_EXT' : undeclared
> identifier
>
> I pulled the glext.h header file today and the constants
> denoted in the error messages above were not there.
>
> Does anyone have a copy of glext.h that contains these
> constants?

SGI's been slow to update glext.h with the latest ARB extension tokens.
I just hacked my copy of glext.h. Otherwise it would be easy to add the
appropriate #ifndef/#define/#endif statements in Glean to work around
this too.

> Also, ttexenv.cpp contains a call to assert which is
> undefined in my Windows based build environment. This
> doesn't present too much of an obstacle as I can easily
> comment out the call, but does anyone know what library
> I need to link with to resolve the assert call.

Just need to #include <assert.h>. I've checked in the fix.

-Brian
|
|
From: Jeffrey M. <Jef...@3D...> - 2001-05-31 15:13:44
|
Glean Developers,
I have recently pulled the glean source tree, actually
it's been a couple of weeks, but in any event, when I
compile the glean executable, I get the following error:
ttexcombine.cpp(176) : error C2065: 'GL_DOT3_RGB_EXT' : undeclared
identifier
ttexcombine.cpp(192) : error C2065: 'GL_DOT3_RGBA_EXT' : undeclared
identifier
I pulled the glext.h header file today and the constants
denoted in the error messages above were not there.
Does anyone have a copy of glext.h that contains these
constants?
Also, ttexenv.cpp contains a call to assert which is
undefined in my Windows based build environment. This
doesn't present too much of an obstacle as I can easily
comment out the call, but does anyone know what library
I need to link with to resolve the assert call?
Jeff
|
|
From: Brian P. <br...@va...> - 2001-05-10 17:11:39
|
Brian Paul wrote:
> I've just checked in a new RGBA-mode glLogicOp() test. It's based
> on Allen's blendFunc test.
>
> The OpenGL conformance tests only test glLogicOp() in CI mode, not
> RGBA mode, so this fills in a hole.
>
> I added tlogicop.obj to the Windows makefile but have only tested
> the code on Linux w/ gcc. Hopefully it'll compile/run alright on
> Windows.

PS: The test uses GLubyte images so the test probably won't work right
for frame buffers with deeper than 8-bit channels. I'm still pondering
the best solution for that.

-Brian
|
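One possible direction, as a hedged sketch using the standard GL_*_BITS queries; this is illustrative, not glean's actual solution:

```cpp
// Before assuming GLubyte images round-trip exactly, check the
// drawable's channel depths; channels deeper than 8 bits can hold
// values no 8-bit image can represent, so those need a tolerance.
#include <GL/gl.h>

bool ubyteImagesAreExact() {
    GLint r, g, b;
    glGetIntegerv(GL_RED_BITS,   &r);
    glGetIntegerv(GL_GREEN_BITS, &g);
    glGetIntegerv(GL_BLUE_BITS,  &b);
    return r <= 8 && g <= 8 && b <= 8;
}
```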