From: Ross G. M. <rg...@cs...> - 2002-03-01 18:35:47
Hey All,

I'm getting some bad pixels when I try to do some rendering, and I need a little help sorting out what's wrong. Here's a pseudocode representation of what I'm trying to do:

    <Initial Mesa setup (lighting, model-view and projection matrices, etc...)>
    <clear the buffers>

    glDisable( GL_LIGHTING );
    glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
    glDepthFunc( GL_LESS );
    glDepthMask( GL_TRUE );

    <Draw lots of polygons>

    glEnable( GL_LIGHTING );
    glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
    glDepthFunc( GL_EQUAL );
    glDepthMask( GL_FALSE );

    <Draw those same polygons again>

    glFlush();

Is there some reason why this shouldn't work? (I know, I know - it would be a lot more efficient to update the depth and color buffers in one pass. For various and sundry reasons - having to do with the fact that I'm eventually going to be doing this rendering on a Beowulf cluster - I can't.)

What I'm getting back is an image with several bad pixels: they're black when they should be colored. There aren't very many - maybe two dozen out of 250,000 - but it's enough to make the image look bad.

I did some further checking and compared the depth buffer from that algorithm (obtained by doing a glReadPixels with unsigned short as the data type) against one obtained after drawing the same scene with a single-pass algorithm. What I found was that each bad pixel had a depth value exactly 1 less than its corresponding value in the depth buffer from the single-pass algorithm.

The off-by-1 difference makes me think this is some kind of roundoff error. What really puzzles me is why enabling/disabling lighting would affect the values that are written into the depth buffer.

Anyone have any thoughts?

Thanks,
-Ross
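[Editor's illustration] The failure mode described above can be sketched without any GL context. This is a minimal model, not Mesa's actual rasterizer: it assumes depth values in [0, 1] are quantized to a 16-bit integer buffer, and shows how a depth that lands one integer step lower in the second pass causes the GL_EQUAL test to reject the fragment, leaving it black. The function names are hypothetical.

    def quantize_depth(z, bits=16):
        """Map a depth value in [0, 1] to an integer depth-buffer value
        (round-to-nearest, as a 16-bit depth buffer would store it)."""
        return int(z * ((1 << bits) - 1) + 0.5)

    def second_pass_passes(z_pass1, z_pass2):
        """Simulate the two-pass scheme: pass 1 writes depth with GL_LESS,
        pass 2 writes color only where the GL_EQUAL test succeeds."""
        stored = quantize_depth(z_pass1)          # pass 1: depth-only write
        return quantize_depth(z_pass2) == stored  # pass 2: GL_EQUAL test

    # Identical interpolated depths survive the GL_EQUAL test...
    print(second_pass_passes(0.5, 0.5))                # True

    # ...but a second-pass depth perturbed by one quantization step
    # fails the test, and that pixel keeps the (black) clear color.
    print(second_pass_passes(0.5, 0.5 - 1.0 / 65535))  # False

If the lit and unlit paths interpolate depth with even slightly different arithmetic, a handful of pixels will straddle a quantization boundary like this, which matches the two-dozen-in-250,000 symptom.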
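[Editor's illustration] The depth-buffer comparison described above (two glReadPixels readbacks, diffed pixel by pixel) can be sketched like this. The buffers here are made-up stand-ins for GL_UNSIGNED_SHORT readback data, and the function name is hypothetical:

    def find_bad_pixels(two_pass, single_pass):
        """Return (index, delta) for every pixel whose depth value differs
        between the two-pass and single-pass readbacks."""
        return [(i, a - b)
                for i, (a, b) in enumerate(zip(two_pass, single_pass))
                if a != b]

    two_pass    = [100, 200, 299, 400]   # made-up 16-bit depth values
    single_pass = [100, 200, 300, 400]
    print(find_bad_pixels(two_pass, single_pass))  # [(2, -1)]

A delta of -1 at every bad pixel is exactly the off-by-one pattern reported above.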