From: Linux D. A. <li...@iv...> - 2000-06-18 03:37:54
While I have been working with OpenGL since its inception, I am relatively new to Mesa and to the SourceForge development system here, so bear with me.

An initial question: if I find what I think is a bug, do I add it directly to the SourceForge bug tracker, or do we discuss it here first to make sure a bug has really been found?

I recently compiled a fairly large visualization system under Linux-i386 and Mesa OpenGL. Overall I am very impressed - kudos to the developers. However, I did discover several bugs, one of which I think is quite serious. These tests were done with a dataset of approximately 50 million polygons. Here are the problems that I found.

----

#1 Incorrect rendering of subpixel quad strips - serious problem?

Polygons less than one pixel in size don't display at all. Take a very complex polygonal surface and slowly pull back. Given the huge number of polygons, eventually the size of a polygon becomes mathematically smaller than one pixel. At that stage no on-screen pixel representation appears to be drawn, so as you pull back the surface disintegrates and vanishes even though the overall surface might still cover thousands of pixels.

Note that this problem does not happen on the SGI, Sun, or Windows OpenGL implementations, so I guess there is some aspect of the OpenGL standard that Mesa does not correctly implement. In this application the surface is a digital terrain model rendered as a series of quad strips.

----

#2 Front & back buffer drawing + blending = crash

With a double-buffered visual, if I draw polygons (quad strips in this case) normally, or use glDrawBuffer(GL_FRONT) and draw the polygons, there is no problem. However, if I use glDrawBuffer(GL_FRONT_AND_BACK), the application crashes. Here's some output from gdb:

Reading symbols from /lib/ld-linux.so.2...done.
#0  0x400552ac in blend_transparency (ctx=Cannot access memory at address 0x3d) at blend.c:286
286     blend.c: No such file or directory.
(gdb) where
#0  0x400552ac in blend_transparency (ctx=Cannot access memory at address 0x3d) at blend.c:286
#1  0x842ed10 in ?? ()
Cannot access memory at address 0x1

Note that the quad strips are drawn with GL_BLEND enabled, although I am not sure whether that has anything to do with the problem.

----

#3 Rendering performance issue - not a bug, but an observation

Again with a double-buffered visual (+ Z-buffer): if I draw the same quad-strip scene described in #1 normally into the back buffer and swap it to the front with the usual SwapBuffers command, I get excellent performance. However, if I set glDrawBuffer(GL_FRONT) and render the same scene, the performance is absolutely horrible in comparison - I would say 10-12 times slower. I would expect the two situations to have similar performance, with only glDrawBuffer(GL_FRONT_AND_BACK) suffering a performance penalty. Would anyone with more Mesa knowledge care to comment - is this a problem, or just an optimization waiting for an implementor?

----

This testing was done with a C++ application using Mesa 3.2, Red Hat 6.2 with the stock kernel, XFree86 3.3.6, Metrolink Motif, and the SGI Motif OpenGL drawing widget from Mesa (GLwMDrawingA.h).

I plan to look into #2 myself as a first experiment working with the Mesa code, but for #1 and #3 it would be nice to see some discussion from the more experienced Mesa developers. Should I add these bugs to SourceForge?

Talk to you later,
Mark

-------------------------------------
Mark Paton
Interactive Visualization Systems Inc.
http://www.ivs.unb.ca
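For readers who want to reproduce #2, a minimal stand-alone sketch of the kind of setup described above might look like the following. This is not the reporter's application - the window setup, sizes, and vertex data are all illustrative - but it exercises the same combination: a double-buffered visual, GL_BLEND enabled, glDrawBuffer(GL_FRONT_AND_BACK), and a quad strip.

    /* Hypothetical minimal repro (not the reporter's code).
     * Build with:  cc repro.c -o repro -lGL -lX11 */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <GL/gl.h>
    #include <GL/glx.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
        XVisualInfo *vi;
        XSetWindowAttributes swa;
        Window win;
        GLXContext ctx;

        if (!dpy) { fprintf(stderr, "no display\n"); return 1; }
        vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi) { fprintf(stderr, "no double-buffered RGBA visual\n"); return 1; }

        swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                       vi->visual, AllocNone);
        swa.event_mask = ExposureMask;
        win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 300, 300, 0,
                            vi->depth, InputOutput, vi->visual,
                            CWColormap | CWEventMask, &swa);
        ctx = glXCreateContext(dpy, vi, NULL, GL_TRUE);
        XMapWindow(dpy, win);
        XSync(dpy, False);
        glXMakeCurrent(dpy, win, ctx);

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDrawBuffer(GL_FRONT_AND_BACK);       /* the mode that triggers the crash */

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glColor4f(1.0f, 0.0f, 0.0f, 0.5f);     /* translucent, so blending runs */
        glBegin(GL_QUAD_STRIP);
        glVertex2f(-0.5f, -0.5f); glVertex2f(-0.5f, 0.5f);
        glVertex2f( 0.0f, -0.5f); glVertex2f( 0.0f, 0.5f);
        glVertex2f( 0.5f, -0.5f); glVertex2f( 0.5f, 0.5f);
        glEnd();
        glFinish();

        glXMakeCurrent(dpy, None, NULL);
        glXDestroyContext(dpy, ctx);
        XCloseDisplay(dpy);
        return 0;
    }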
From: Randall F. <rjf...@ho...> - 2000-06-18 17:27:52
To add my $0.02: I would love to see (#1) addressed as well. Generally, we comment out the size check in the code to work around this, but a proper fix still needs to be found. As far as (#3) is concerned, there were some comments in the README for X11 that discuss this; I assume they are still there.

Linux Development Account wrote:
> #1 Incorrect rendering of subpixel quad strips - serious problem?
>
> Polygons less than one pixel in size don't display at all.
> [...]
> #3 Rendering performance issue - not a bug, but an observation
>
> However, if I set glDrawBuffer(GL_FRONT) and render the same scene, the
> performance is absolutely horrible in comparison - I would say 10-12
> times slower.
> [...]

--
rjf.
Randy Frank                            | ASCI Visualization
Lawrence Livermore National Laboratory | rj...@ll...
B451 Room 2039 L-561                   | Voice: (925) 423-9399
Livermore, CA 94550                    | Fax: (925) 423-8704
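The "size check" mentioned here is the small-triangle cull in Mesa's triangle setup. The real code lives in the macro-generated rasterizer, so the following is only a paraphrase of its general shape - the function name, variable names, and threshold value are all invented - to show what "commenting it out" means in practice:

    #include <math.h>

    /* Paraphrased sketch of a small-triangle cull - NOT the actual Mesa 3.2
     * code; names and the threshold are illustrative. */
    #define TINY_TRIANGLE_AREA 0.0025f    /* hypothetical cutoff, in pixels^2 */

    static int setup_triangle(float area)   /* signed screen-space area */
    {
        /* The workaround described above is to comment out (or shrink) this
         * early return, so subpixel triangles still reach the rasterizer. */
        if (fabsf(area) < TINY_TRIANGLE_AREA)
            return 0;                        /* culled: "too small to matter" */

        /* ... compute 1/area and the interpolation deltas, then rasterize ... */
        return 1;
    }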
From: Brian P. <br...@pr...> - 2000-06-18 21:20:06
Linux Development Account wrote:
> [...]
> #1 Incorrect rendering of subpixel quad strips - serious problem?
>
> Polygons less than one pixel in size don't display at all. Take a very
> complex polygonal surface and slowly pull back. Given the huge number of
> polygons, eventually the size of a polygon becomes mathematically smaller
> than one pixel. At that stage no on-screen pixel representation appears to
> be drawn, so as you pull back the surface disintegrates and vanishes even
> though the overall surface might still cover thousands of pixels.
>
> Note that this problem does not happen on the SGI, Sun, or Windows OpenGL
> implementations, so I guess there is some aspect of the OpenGL standard
> that Mesa does not correctly implement. In this application the surface is
> a digital terrain model rendered as a series of quad strips.

A long time ago I added code to cull triangles smaller than a certain threshold size in order to prevent fp/int overflows in the rasterizer code. A few people have complained about this now, so I guess it's time I reexamined the problem.

> #2 Front & back buffer drawing + blending = crash
>
> With a double-buffered visual, if I draw polygons (quad strips in this
> case) normally, or use glDrawBuffer(GL_FRONT) and draw the polygons, there
> is no problem. However, if I use glDrawBuffer(GL_FRONT_AND_BACK), the
> application crashes.
> [...]
> Note that the quad strips are drawn with GL_BLEND enabled, although I am
> not sure whether that has anything to do with the problem.

I hadn't heard of this problem. I'll look into it (though I'll be out of town most of this week, so be patient).

> #3 Rendering performance issue - not a bug, but an observation
>
> Again with a double-buffered visual (+ Z-buffer): if I draw the same
> quad-strip scene described in #1 normally into the back buffer and swap it
> to the front with the usual SwapBuffers command, I get excellent
> performance. However, if I set glDrawBuffer(GL_FRONT) and render the same
> scene, the performance is absolutely horrible in comparison - I would say
> 10-12 times slower. I would expect the two situations to have similar
> performance, with only glDrawBuffer(GL_FRONT_AND_BACK) suffering a
> performance penalty. Would anyone with more Mesa knowledge care to comment
> - is this a problem, or just an optimization waiting for an implementor?

Drawing to the front buffer (the X window) is generally done with XDrawPoint. That's inherently slow.

Drawing to the back buffer is either done with direct writes to the XImage or with XPutPixel. That's much faster than XDrawPoint.

> I plan to look into #2 myself as a first experiment working with the Mesa
> code, but for #1 and #3 it would be nice to see some discussion from the
> more experienced Mesa developers. Should I add these bugs to SourceForge?

Yes, please file bug reports, otherwise I'm likely to forget about them.

-Brian
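To make that difference concrete, here is an illustrative contrast of the two write paths Brian describes. This is not Mesa's actual span code and the function names are invented; the point is simply that front-buffer writes go to the window through Xlib requests, while back-buffer writes are plain memory stores into a client-side XImage that a later XPutImage (at swap time) copies to the window.

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    /* Front-buffer path: roughly one protocol-level request per pixel. */
    static void write_span_to_window(Display *dpy, Window win, GC gc,
                                     int x, int y, int n,
                                     const unsigned long *pixels)
    {
        int i;
        for (i = 0; i < n; i++) {
            XSetForeground(dpy, gc, pixels[i]);
            XDrawPoint(dpy, win, gc, x + i, y);
        }
    }

    /* Back-buffer path: plain memory writes into a client-side XImage. */
    static void write_span_to_ximage(XImage *backimage,
                                     int x, int y, int n,
                                     const unsigned long *pixels)
    {
        int i;
        for (i = 0; i < n; i++)
            XPutPixel(backimage, x + i, y, pixels[i]);
    }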
From: Brian P. <br...@pr...> - 2000-06-18 22:58:05
OK, I found some time during my X server builds to look into these problems.

Brian Paul wrote:
> Linux Development Account wrote:
> > #1 Incorrect rendering of subpixel quad strips - serious problem?
> >
> > Polygons less than one pixel in size don't display at all.

I've changed the code to simply threshold the 1/area computation in tritemp.h. My tests indicate this works.

> > #2 Front & back buffer drawing + blending = crash

Easily fixed (span.c). I've checked in these changes to both the Mesa 3.2 and 3.3 branches. I'm planning to make a Mesa 3.2.1 release in the coming weeks, since there have been a number of bug fixes since 3.2.

-Brian
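For anyone following along without the source, "threshold the 1/area computation" presumably means something along these lines: clamp the area used for the reciprocal rather than rejecting the triangle outright. This is only a sketch of the idea, not the actual tritemp.h change - the real code is macro-generated and the clamp value here is invented:

    #include <math.h>

    #define MIN_AREA 1.0e-9f    /* hypothetical clamp, not Mesa's real value */

    /* The reciprocal drives the per-pixel interpolation deltas; clamping the
     * area keeps it finite for subpixel triangles, so they are still
     * rasterized (hitting at most a pixel or two) instead of being culled. */
    static float one_over_area(float area)
    {
        float a = fabsf(area);
        if (a < MIN_AREA)
            a = MIN_AREA;
        return (area < 0.0f) ? -1.0f / a : 1.0f / a;
    }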
From: Stephen J B. <sj...@li...> - 2000-06-19 13:35:55
On Sun, 18 Jun 2000, Linux Development Account wrote:

> While I have been working with OpenGL since its inception, I am relatively
> new to Mesa and to the SourceForge development system here, so bear with
> me.
>
> An initial question: if I find what I think is a bug, do I add it directly
> to the SourceForge bug tracker, or do we discuss it here first to make
> sure a bug has really been found?

It's a hard call - I'd discuss it here first.

> #1 Incorrect rendering of subpixel quad strips - serious problem?
>
> Polygons less than one pixel in size don't display at all. Take a very
> complex polygonal surface and slowly pull back. Given the huge number of
> polygons, eventually the size of a polygon becomes mathematically smaller
> than one pixel. At that stage no on-screen pixel representation appears to
> be drawn, so as you pull back the surface disintegrates and vanishes even
> though the overall surface might still cover thousands of pixels.

This has come up a couple of times before.

Mesa does cull very tiny triangles (which is "A BAD THING" IMHO) - we are told that it's necessary for Mesa to do this in order to avoid mathematical instabilities down the line.

The last person to complain about this hacked Mesa to reduce the size limit to something MUCH smaller than it currently is - with no ill effects.

I guess that hack didn't make it into mainstream Mesa...but I think we DO need to address this issue as a bug.

> Note that this problem does not happen on the SGI, Sun, or Windows OpenGL
> implementations, so I guess there is some aspect of the OpenGL standard
> that Mesa does not correctly implement.

I don't know if what Mesa does is in violation of the OpenGL spec or not... I suspect it is a violation - but even if it isn't, it's an evil that needs to be addressed properly at some stage.

> #2 Front & back buffer drawing + blending = crash
>
> With a double-buffered visual, if I draw polygons (quad strips in this
> case) normally, or use glDrawBuffer(GL_FRONT) and draw the polygons, there
> is no problem. However, if I use glDrawBuffer(GL_FRONT_AND_BACK), the
> application crashes.
> [...]

That's a new one on me - but then you may well be the first person ever to try that! It's a pretty esoteric rendering mode.

> #3 Rendering performance issue - not a bug, but an observation
>
> Again with a double-buffered visual (+ Z-buffer): if I draw the same
> quad-strip scene described in #1 normally into the back buffer and swap it
> to the front with the usual SwapBuffers command, I get excellent
> performance. However, if I set glDrawBuffer(GL_FRONT) and render the same
> scene, the performance is absolutely horrible in comparison - I would say
> 10-12 times slower. I would expect the two situations to have similar
> performance, with only glDrawBuffer(GL_FRONT_AND_BACK) suffering a
> performance penalty. Would anyone with more Mesa knowledge care to comment
> - is this a problem, or just an optimization waiting for an implementor?

As always, it's the commonly used modes that get the most optimisation - and glDrawBuffer(GL_FRONT_AND_BACK) is certainly not commonly used.

It's possible that, because each pixel write touches two widely separated chunks of memory, you are causing some nasty cache artifact in the CPU - but I suspect it's more likely that the lack of a specifically optimised code path is your problem. (That assumes you are doing software rendering - if you are doing hardware rendering, then the likely explanation would be that the hardware can't do simultaneous front and back buffer rendering - so you got a software fallback.)

If I were you, I'd alter my code to render each object twice, once into the back buffer and again into the front. This would avoid both the bug and the slowdown - and it'll probably be faster on every platform, not just Mesa.

Of course you should register #2 as a bug, possibly #1 also (although we all understand it already) - I don't think a lack of optimisation in #3 justifies a *bug* report - but opinions will differ on that one.

Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sj...@li...       http://www.link.com
Home: sjb...@ai...     http://web2.airmail.net/sjbaker1
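A sketch of the two-pass workaround Steve suggests (illustrative only; draw_scene() here is a hypothetical stand-in for whatever code actually issues the quad strips):

    #include <GL/gl.h>

    void draw_scene(void);   /* hypothetical: the application's own drawing code */

    /* Instead of glDrawBuffer(GL_FRONT_AND_BACK), draw twice - once into each
     * buffer - keeping both passes on well-optimised single-buffer paths. */
    void draw_to_front_and_back(void)
    {
        glDrawBuffer(GL_BACK);
        draw_scene();

        glDrawBuffer(GL_FRONT);
        draw_scene();

        glDrawBuffer(GL_BACK);   /* restore the usual rendering target */
    }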
From: Mark P. <li...@iv...> - 2000-06-19 18:49:19
I am just starting to learn Mesa at a lower level and don't know too many of the details of the rendering engine yet, but: the basic problem here seems to be that we are culling triangles without regard to their neighbouring triangles. I expect considering the neighbours would be very difficult to do, and it is desirable to treat each triangle independently. However, wouldn't a solution be to say that the smallest a triangle can be is one pixel - i.e. the scan conversion for such a triangle is automatic? If a triangle is mathematically smaller than a pixel in area, its rasterization is just the nearest pixel. The depth buffer test and other rendering details are then processed accordingly.

Then, if you do something crazy like rendering 50 million polygons in a 72x72-pixel box (which I tried), the system will have to render 50 million pixels, and many of them will be the same pixel, but the result will be correct. Is this a valid thing to consider? I am assuming here that the numeric instability problems are caused by attempting to scan convert a mathematically tiny triangle.

In my case I used a quad strip. In this instance it might be possible to consider a higher-level approach: e.g. instead of breaking each quad into two triangles and rendering them, reduce the quad strip before triangulation to generate reasonably sized (one-pixel) triangles.

- Mark

On Mon, 19 Jun 2000, Stephen J Baker wrote:
> On Sun, 18 Jun 2000, Linux Development Account wrote:
>
> > #1 Incorrect rendering of subpixel quad strips - serious problem?
> >
> > Polygons less than one pixel in size don't display at all. Take a very
> > complex polygonal surface and slowly pull back. Given the huge number of
> > polygons, eventually the size of a polygon becomes mathematically
> > smaller than one pixel. At that stage no on-screen pixel representation
> > appears to be drawn, so as you pull back the surface disintegrates and
> > vanishes even though the overall surface might still cover thousands of
> > pixels.
>
> This has come up a couple of times before.
>
> Mesa does cull very tiny triangles (which is "A BAD THING" IMHO) - we are
> told that it's necessary for Mesa to do this in order to avoid
> mathematical instabilities down the line.
>
> The last person to complain about this hacked Mesa to reduce the size
> limit to something MUCH smaller than it currently is - with no ill
> effects.
>
> I guess that hack didn't make it into mainstream Mesa...but I think we DO
> need to address this issue as a bug.
>
> > Note that this problem does not happen on the SGI, Sun, or Windows
> > OpenGL implementations, so I guess there is some aspect of the OpenGL
> > standard that Mesa does not correctly implement.
>
> I don't know if what Mesa does is in violation of the OpenGL spec or
> not... I suspect it is a violation - but even if it isn't, it's an evil
> that needs to be addressed properly at some stage.
From: Allen A. <ak...@po...> - 2000-06-19 19:12:05
On Mon, Jun 19, 2000 at 03:44:57PM -0300, Mark Paton wrote:

| The basic problem here seems to be that we are culling triangles
| without regard to their neighbouring triangles. I expect this would
| be very difficult to do and it is desirable to treat each triangle
| independently. ...

Yes, in fact OpenGL requires that the triangles be treated independently.

| ... However, wouldn't a solution be to say that the smallest
| a triangle can be is one pixel - i.e. the scan conversion for such a
| triangle is automatic? If a triangle is mathematically smaller than a
| pixel in area, its rasterization is just the nearest pixel. ...

OpenGL also requires that filled primitives be "point sampled." Essentially, the image plane is examined at regularly-spaced points. If the projection of a triangle (for example) covers one of those points, then the color, depth, etc. at that point are derived from that triangle, no matter how small the area of the triangle might be.

This behavior is essential for proper antialiasing and visible-surface resolution, among other things -- including avoiding the problem that you first mentioned (gaps/overlaps in dense meshes of small polygons).

However, for all this to work, the rasterization code must be written *very* carefully. When I wrote the original version of the code that eventually became Mesa's triangle rasterizer, I was aware of the issues, but took a few shortcuts for performance and made a few mistakes as well. Many of those have been fixed by now, but a few probably still remain, and the triangle-size cutoff may have been installed to work around them.

It would be worth trying to reduce the cutoff size. My guess is that it would probably work in most cases. The ones least likely to work are those in which the vertex coordinates are far from zero.

| Then if you do something crazy like rendering 50 million polygons in
| a 72x72-pixel box (which I tried), the system will have to render 50
| million pixels, and many of them will be the same pixel, but the result
| will be correct. Is this a valid thing to consider? ...

Assuming the projected images of the polygons don't overlap, then the right behavior is that all 50 million polygons will be considered, but only 72x72 pixels will be written.

Allen
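The point-sampling rule Allen describes can be illustrated with a small coverage test. This is not Mesa's rasterizer - just a sketch for a counter-clockwise triangle, and it omits the tie-breaking rules a real rasterizer needs for samples that land exactly on a shared edge (so that the edge is neither drawn twice nor left with a gap):

    /* A pixel belongs to the triangle iff the sample point at its center,
     * (x + 0.5, y + 0.5), lies inside all three edges - regardless of how
     * small the triangle's area is. */

    /* Signed area term: > 0 when p is to the left of the edge a -> b. */
    static float edge(float ax, float ay, float bx, float by,
                      float px, float py)
    {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    static int triangle_covers_pixel(float x0, float y0, float x1, float y1,
                                     float x2, float y2, int px, int py)
    {
        float sx = px + 0.5f, sy = py + 0.5f;   /* the sample point */
        return edge(x0, y0, x1, y1, sx, sy) >= 0.0f &&
               edge(x1, y1, x2, y2, sx, sy) >= 0.0f &&
               edge(x2, y2, x0, y0, sx, sy) >= 0.0f;
    }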
From: Stephen J B. <sj...@li...> - 2000-06-19 20:31:59
On Mon, 19 Jun 2000, Mark Paton wrote:

> I am just starting to learn Mesa at a lower level and don't know too many
> of the details of the rendering engine yet, but: the basic problem here
> seems to be that we are culling triangles without regard to their
> neighbouring triangles. I expect considering the neighbours would be very
> difficult to do, and it is desirable to treat each triangle independently.
> However, wouldn't a solution be to say that the smallest a triangle can be
> is one pixel - i.e. the scan conversion for such a triangle is automatic?
> If a triangle is mathematically smaller than a pixel in area, its
> rasterization is just the nearest pixel. The depth buffer test and other
> rendering details are then processed accordingly.
>
> Then, if you do something crazy like rendering 50 million polygons in a
> 72x72-pixel box (which I tried), the system will have to render 50 million
> pixels, and many of them will be the same pixel, but the result will be
> correct. Is this a valid thing to consider? I am assuming here that the
> numeric instability problems are caused by attempting to scan convert a
> mathematically tiny triangle.

You'd think that would be OK, wouldn't you...but it's not. :-(

Imagine a single-pixel sphere made from a hundred red triangles, each with an alpha of 0.1, rendered onto a black background. If you rendered that in a truly correct manner, the colour of the resulting pixel would be RGB=(0.1,0,0), but if each triangle is forced to cover the entire pixel, you get nearly 100% red, because each triangle hides only 1/10th of the colour of the one behind it - then adds 0.1 of its own colour. Do that 100 times and you have:

  0.1 + 0.9 * ( 0.1 + 0.9 * ( 0.1 ...a hundred times... ))

...which sums to about 0.99997 - essentially peak red.

What you *should* do is have the triangle be discarded unless it covers the very center of the pixel...no matter how small that triangle is.

This isn't a problem that's unique to very small triangles. Even a large triangle that *touches* a pixel without covering the exact center of the pixel should not render to it, or else you'll get a row of bright 'beads' running along the shared edge of two translucent triangles. Since Mesa already gets that right, using the existing rasterizer should be OK so long as the polygon isn't simply discarded.

You might also want to think about what happens when a bunch of long thin triangles meet at a point. Each one is large - so the small-triangle test won't discard it - yet 360 one-degree-wide triangles meeting at a point still have to make up a solid circle.

The answer to all of these problems is to stop thinking of pixels as being little squares, and instead to think of them as infinitely small points. Instead of thinking about whether a polygon touches some point on a square pixel, think about whether the polygon covers that infinitesimal point at the center of the pixel. Of course, if you are antialiasing, then thinking of pixels as areas becomes sensible again.

Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sj...@li...       http://www.link.com
Home: sjb...@ai...     http://web2.airmail.net/sjbaker1
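A quick numeric check of that compositing argument (illustrative only): composite 100 layers of red at alpha 0.1 over a black background, as would happen if every subpixel triangle were forced to fill the whole pixel.

    #include <stdio.h>

    int main(void)
    {
        double red = 0.0;                  /* black background */
        int i;
        for (i = 0; i < 100; i++)
            red = 0.1 * 1.0 + 0.9 * red;   /* src*alpha + dst*(1 - alpha) */

        /* Prints about 0.999973: essentially full red, versus the ~0.1 that
         * point sampling (only the triangle covering the pixel center
         * contributes) would produce. */
        printf("forced-coverage result: %f\n", red);
        return 0;
    }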