From: Brian P. <br...@pr...> - 2000-06-18 21:20:06
Linux Development Account wrote:
>
> While I have been working with OpenGL since its inception, I am
> relatively new to Mesa and the SourceForge development system here,
> so bear with me.
>
> An initial question: if I find what I think is a bug, do I add it
> directly to the SourceForge bug tracker, or do we discuss it here
> first to make sure a bug has really been found?
>
> I recently compiled a fairly large visualization system under
> Linux-i386 and Mesa OpenGL.  Overall I am very impressed - kudos to
> the developers.  However, I did discover several bugs, one of which
> I think is quite serious.  These tests were done with a dataset of
> approximately 50 million polygons.  Here are the problems that I
> found.
>
> ----
>
> #1 Incorrect rendering of subpixel quad strips - serious problem?
>
> Polygons less than one pixel in size don't display at all.  Take a
> very complex polygonal surface, then slowly pull back.  Given the
> huge number of polygons, eventually the size of a polygon will be
> mathematically smaller than one pixel.  It appears that at this
> stage no on-screen pixel representation is drawn.  Thus, as you pull
> back, the surface disintegrates and vanishes even though the overall
> surface might cover thousands of pixels.
>
> Note that this problem does not happen on the SGI, Sun, or Windows
> OpenGL implementations, so I guess there is some aspect of the
> OpenGL standard that Mesa does not correctly implement.  In this
> case the surface is a digital terrain model rendered as a series of
> quad strips.

A long time ago I added code to cull triangles smaller than a certain
threshold size in order to prevent fp/int overflows in the rasterizer
code.  A few people have complained about this now, so I guess it's
time I reexamined the problem.
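The check in question is roughly like the sketch below (a simplified
illustration, not the actual Mesa code; the threshold constant,
vertex struct, and function name here are made up):

  #include <math.h>

  struct vertex {
     float x, y;      /* window coordinates */
  };

  /* Hypothetical cutoff, in square pixels. */
  #define TINY_TRIANGLE_AREA  0.0025f

  /* Return nonzero if the triangle should be skipped.  The cull
   * protects the rasterizer's setup math from fp/int overflow, but
   * it is also why subpixel triangles vanish entirely, as in #1. */
  static int cull_tiny_triangle(const struct vertex *v0,
                                const struct vertex *v1,
                                const struct vertex *v2)
  {
     /* Twice the signed area of the triangle in window coords. */
     float area2 = (v1->x - v0->x) * (v2->y - v0->y)
                 - (v2->x - v0->x) * (v1->y - v0->y);

     return fabs(area2) < 2.0f * TINY_TRIANGLE_AREA;
  }

Simply lowering or removing the threshold would trade one artifact
for the other, so the real fix probably has to happen in the
rasterizer setup rather than in the constant.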
> #2 Front & back buffer drawing + blending = crash
>
> With a double-buffered visual, if I draw polygons (quad strips in
> this case) normally, or use glDrawBuffer(GL_FRONT) and draw the
> polygons, there is no problem.  However, if I use
> glDrawBuffer(GL_FRONT_AND_BACK), the application crashes.  Here's
> some output from gdb:
>
> Reading symbols from /lib/ld-linux.so.2...done.
> #0  0x400552ac in blend_transparency (ctx=Cannot access memory at address 0x3d
> ) at blend.c:286
> 286     blend.c: No such file or directory.
> (gdb) where
> #0  0x400552ac in blend_transparency (ctx=Cannot access memory at address 0x3d
> ) at blend.c:286
> #1  0x842ed10 in ?? ()
> Cannot access memory at address 0x1
>
> Note that the quad strips are drawn with GL_BLEND enabled, although
> I am not sure if that has anything to do with the problem.

I hadn't heard of this problem.  I'll look into it (though I'll be
out of town most of this week, so be patient).

> #3 Rendering performance issue - not a bug, but an observation.
>
> Again using a double-buffered (+ Z-buffer) visual: if I draw the
> same quad strip scene described in bug #1 normally into the back
> buffer and swap it to the front with the usual SwapBuffers command,
> I get excellent performance.  However, if I set
> glDrawBuffer(GL_FRONT) and render the same scene, the performance is
> absolutely horrible in comparison - I would say 10-12 times slower.
> I would expect the two situations to have similar performance, with
> only glDrawBuffer(GL_FRONT_AND_BACK) suffering a performance
> penalty.  Would anyone with more Mesa knowledge care to comment - is
> this a problem, or just an optimization waiting for an implementor?

Drawing to the front buffer (the X window) is generally done with
XDrawPoint.  That's inherently slow.  Drawing to the back buffer is
either done with direct writes to the XImage or with XPutPixel.
That's much faster than XDrawPoint.

> I plan to look into #2 myself as a first experiment working with the
> Mesa code, but for #1 and #3 it would be nice to see some discussion
> from the more experienced Mesa developers.  Should I add these bugs
> to SourceForge?

Yes, please file bug reports, otherwise I'm likely to forget about
them.

-Brian
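PS: To illustrate the two paths for #3 (a rough sketch, not the
actual Mesa code; the function names are made up):

  #include <X11/Xlib.h>
  #include <X11/Xutil.h>

  /* Front buffer: every pixel becomes an Xlib protocol request. */
  static void write_span_front(Display *dpy, Window win, GC gc,
                               int x, int y, int n,
                               const unsigned long *pixels)
  {
     int i;
     for (i = 0; i < n; i++) {
        XSetForeground(dpy, gc, pixels[i]);
        XDrawPoint(dpy, win, gc, x + i, y);
     }
  }

  /* Back buffer: pixels go straight into a client-side XImage; the
   * image is sent to the server once per frame with XPutImage when
   * SwapBuffers is called. */
  static void write_span_back(XImage *backimage, int x, int y, int n,
                              const unsigned long *pixels)
  {
     int i;
     for (i = 0; i < n; i++)
        XPutPixel(backimage, x + i, y, pixels[i]);
  }

That per-pixel protocol traffic is why GL_FRONT rendering is so much
slower than back buffer rendering.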