From: Nicolai H. <pre...@gm...> - 2005-03-02 20:27:37

Hi,
while playing around with a Mesa-based driver, I noticed that pure Mesa
software rendering (LIBGL_FORCE_XMESA) fails the "exactRGBA" test (in
treadpix.cpp) of glean:

  exactRGBA: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 44
        Unsigned short worst-case error was 0x100 at (0, 0)
                expected (0xeb00, 0x5e00, 0xb300, 0x0)
                got (0xea00, 0x5e00, 0xb300, 0x0)
        Unsigned int worst-case error was 0x1000000 at (2, 0)
                expected (0xd1000000, 0x66000000, 0x8000000, 0x0)
                got (0xd1000000, 0x66000000, 0x9000000, 0x0)

As you can see, the least significant bit in the framebuffer isn't what
glean expects.

After some investigation, this seems to be due to a subtle rounding issue.
The exactRGBA test does not round color values before submitting them via
glColorusv/uiv. Mesa converts those unrounded fixed-point values to floats
using the USHORT_TO_FLOAT and UINT_TO_FLOAT macros, which divide by
(2^n - 1), where n = 16 or 32.
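
For reference, those two conversions boil down to something like the
following (a sketch from memory; the exact definitions in Mesa's macros.h
may be written slightly differently, e.g. as a multiplication by the
reciprocal):

/* Fixed-point-to-float conversion: divide by 2^n - 1 instead of
 * simply dropping the low-order bits. */
#define USHORT_TO_FLOAT(S)  ((float) (S) / 65535.0F)        /* n = 16 */
#define UINT_TO_FLOAT(U)    ((float) (U) / 4294967295.0F)   /* n = 32 */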

Glean, on the other hand, expects a rounding behaviour where the lower bits
are basically truncated. But this "truncation" is exactly what the second
paragraph of section 2.14.9 of the OpenGL 2.0 specification expects, even
though the *same* section of the spec requires the division behaviour that
Mesa implements.
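
To make the discrepancy concrete, here is a small round-trip for the red
component from the failure above. USHORT_TO_FLOAT is the macro mentioned
earlier; FLOAT_TO_UBYTE is a hypothetical stand-in for the float-to-ubyte
step into an rgb8 framebuffer, assuming round-to-nearest (Mesa's actual
conversion is more involved; round-to-nearest simply reproduces the
reported result here):

#include <stdio.h>

#define USHORT_TO_FLOAT(S)  ((float) (S) / 65535.0F)

/* Hypothetical float-to-ubyte step, assuming round-to-nearest. */
#define FLOAT_TO_UBYTE(F)   ((unsigned char) ((F) * 255.0F + 0.5F))

int main(void)
{
    /* Illustrative unrounded component with top byte 0xeb;
     * glean submits random values of this kind. */
    unsigned short u = 0xeb00;

    /* Division path (Mesa): 0xeb00 / 65535 * 255 = 234.08... -> 0xea */
    unsigned char via_division = FLOAT_TO_UBYTE(USHORT_TO_FLOAT(u));

    /* Truncation path (what glean expects): keep the top 8 bits -> 0xeb */
    unsigned char via_truncation = (unsigned char) (u >> 8);

    printf("division: 0x%02x, truncation: 0x%02x\n",
           via_division, via_truncation);
    return 0;
}

The division path comes out one LSB low; after glean expands the byte back
to 16 bits on readback, that is exactly the 0xea00 vs. 0xeb00 difference
reported above.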

So whose fault is this? Mesa, glean, or even the spec itself?

Personally, I'd say the truncation behaviour described in the second
paragraph of 2.14.9 is wrong; it should allow for upward rounding by the
implementation. However, I'd really appreciate your opinion on this matter.

cu,
Nicolai