From: Stephen J B. <sj...@li...> - 2000-06-26 13:18:24

On Fri, 23 Jun 2000, Gareth Hughes wrote:
> Okay, here's the deal, for the record:
>
> Brian and I have discussed this, and we have decided to replace Mesa's
> GLU with the SI's one RSN.

I was under the impression that SGI's not-quite-OpenSource license would
be a problem here.

--ooOOoo--
Please note the change in my email address (was hti.com - now link.com)

Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work:  sj...@li...    http://www.link.com
Home:  sjb...@ai...   http://web2.airmail.net/sjbaker1

From: Brian P. <br...@pr...> - 2000-06-23 18:03:40

Scott McMillan wrote:
> So far this only happens on my VAIO Laptop running Mandrake 6.0 with
> a 2.2.11 kernel and a self compiled version of Mesa 3.2. It doesn't
> happen on my desktop (Mandrake 6.0, 2.2.13, forgot to check which
> version of Mesa - oops).

Can you send me the complete program to test?

-Brian

From: Gareth H. <ga...@pr...> - 2000-06-23 01:29:33

Olivier Michel wrote:
> I found two serious bugs in the GLU tessellator. I reported these bugs
> on the SourceForge bug tracking system. They are easy to reproduce on
> Linux using gtklookat-0.8.2 (which uses libvrml97-0.8.2) and loading
> the two VRML files I mention in the bug report.

Okay, here's the deal, for the record:

Brian and I have discussed this, and we have decided to replace Mesa's
GLU with the SI's one RSN. I am still working on my new code, as I know
of at least one obscure bug in SGI's version that I'd like to avoid. I
will continue to work on this until I'm finished, and then Brian and I
will compare the two implementations and decide on which version to use.

I could say I'm rather busy at the moment with other things like DRI
driver development, but that would be a severe understatement. I'm hoping
that one day I will actually have sufficient time to devote a week to
this, as that would get me a long way towards being done.

The current tessellator in Mesa is just plain broken. I actually
understand the problem space now, which is why I'm not fixing the old
code, rather replacing it entirely.

-- Gareth

From: Brian P. <br...@pr...> - 2000-06-22 22:15:38

Holger Waechtler wrote:
> Hi everybody,
>
> it's almost done -- the software rasterizer seems to be working. I'll
> stress and test it a bit over the next few days.

So, this extension is really going to do what you need? I'm expecting/
hoping that people will actually use it.

> Brian, shall I send you a 'preview', which could already be used as a
> template for drivers, or should I send it to you when I've tested it?

Don't send it until you're reasonably finished with it.

-Brian

From: Brian P. <br...@pr...> - 2000-06-22 21:08:16

Olivier Michel wrote:
> Hello,
>
> I found two serious bugs in the GLU tessellator. I reported these bugs
> on the SourceForge bug tracking system. They are easy to reproduce on
> Linux using gtklookat-0.8.2 (which uses libvrml97-0.8.2) and loading
> the two VRML files I mention in the bug report.
>
> I tested these two VRML files on my G3 with MacLookAt, which relies on
> the same libvrml97 library but uses Apple OpenGL, and they both worked
> nicely.
>
> Both gtklookat and libvrml97 (as well as MacLookAt) are available from
> http://www.openvrml.org/projects/ with complete source code.
>
> One bug is very annoying since it makes GLU crash (SEGFAULT, probably
> due to a memory error while attempting to free memory). The other bug
> reverses the order of vertices in some non-convex faces when the normal
> is negative, so the wrong side of the face is rendered (typically the
> top face of a non-convex VRML Extrusion is drawn properly while the
> bottom face is drawn on the wrong side).
>
> I tried to get into the GLU code to understand the problem, but it
> seems not so easy for me. I think you, Gareth, are the author of that
> code and you may help me fix that problem if you can reproduce the bug
> (any hint greatly appreciated).

I suggest building/using SGI's GLU library from the SI release.

-Brian

From: Holger W. <hwa...@ya...> - 2000-06-21 13:27:29

Hi everybody,

it's almost done -- the software rasterizer seems to be working. I'll
stress and test it a bit over the next few days.

Brian, shall I send you a 'preview', which could already be used as a
template for drivers, or should I send it to you when I've tested it?

- Holger

From: Olivier M. <Oli...@cy...> - 2000-06-21 11:59:30

Hello,

I found two serious bugs in the GLU tessellator. I reported these bugs on
the SourceForge bug tracking system. They are easy to reproduce on Linux
using gtklookat-0.8.2 (which uses libvrml97-0.8.2) and loading the two
VRML files I mention in the bug report.

I tested these two VRML files on my G3 with MacLookAt, which relies on
the same libvrml97 library but uses Apple OpenGL, and they both worked
nicely.

Both gtklookat and libvrml97 (as well as MacLookAt) are available from
http://www.openvrml.org/projects/ with complete source code.

One bug is very annoying since it makes GLU crash (SEGFAULT, probably due
to a memory error while attempting to free memory). The other bug
reverses the order of vertices in some non-convex faces when the normal
is negative, so the wrong side of the face is rendered (typically the top
face of a non-convex VRML Extrusion is drawn properly while the bottom
face is drawn on the wrong side).

I tried to get into the GLU code to understand the problem, but it seems
not so easy for me. I think you, Gareth, are the author of that code and
you may help me fix that problem if you can reproduce the bug (any hint
greatly appreciated).

Best,

-Olivier

From: Gareth H. <ga...@pr...> - 2000-06-21 02:22:39

On that NVIDIA demo page, there's a demo that shows off this new
extension. I know the Rage 6 can also do vertex weighting, so we might
want to implement this fairly soon. I volunteer, as I'll be working on
the Rage 6 DRI driver. I'll investigate this and report back.

-- Gareth

From: Gareth H. <ga...@pr...> - 2000-06-21 02:19:26

Here's a link to some new NVIDIA demos, including a dot3 bumpmapping one
with source:

http://www.nvidia.com/marketing/developer/devrel.nsf/TechnicalDemosFrame?OpenPage

-- Gareth

From: Scott M. <mcm...@ca...> - 2000-06-20 14:20:24

So far this only happens on my VAIO Laptop running Mandrake 6.0 with a
2.2.11 kernel and a self compiled version of Mesa 3.2. It doesn't happen
on my desktop (Mandrake 6.0, 2.2.13, forgot to check which version of
Mesa - oops).

Scott McMillan wrote:
> I am trying to implement a multipass rendering technique similar to
> Mark Kilgard's real-time shadow library, that involves multiple
> rendering passes:
>
> 1) render the scene with normal lights, no stencil:
>       glEnable(GL_LIGHT0);
>       glEnable(GL_LIGHTING);
>
> 2) render the front faces of the shadow volumes with the following:
>       glDisable(GL_LIGHTING);
>       glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
>       glEnable(GL_STENCIL_TEST);
>       glDepthMask(GL_FALSE);
>       glStencilFunc(GL_ALWAYS, 0, 0);
>       glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
>
> 3) render the back faces of the shadow volumes with the following:
>       glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
>
> 4) render the original scene with a different color light:
>       glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
>       glDepthFunc(GL_EQUAL);
>       glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
>       glStencilFunc(GL_NOTEQUAL, 0, ~0);
>
>       glDisable(GL_LIGHT0);
>       glEnable(GL_LIGHT1);
>       glEnable(GL_LIGHTING);
>
> With Mesa, everything works except that I get a white translucent (like
> screen door transparency) haze where the shadow volumes are. This
> occurs in areas where the stencil values were incremented and then
> decremented back to zero, so there should have been no rendering into
> the color buffer after the first step.
>
> This does not happen on SGI's or most NT drivers.
>
> Anybody have any ideas? Should I also report this as a bug?
>
> scott

--
Scott McMillan                    mailto:mcm...@ca...
Cambridge Research Associates     http://www.cambridge.com
1430 Spring Hill Road, Ste. 200   Voice: (703) 790-0505 x7235
McLean, VA 22102                  Fax:   (703) 790-0370

From: Holger W. <hwa...@ya...> - 2000-06-20 11:11:11

On Tue, 20 Jun 2000, Gareth Hughes wrote:
> > I agree that it would be pretty simple to add these new modes to the
> > software renderer. Off-hand I don't know which of the cards supported
> > by the DRI could support them. If you implement GL_EXT_texture_env
> > just send me a patch.
>
> Holger, are you planning on doing this? It should be fairly straight
> forward.

I'll implement EXT_texture_env_combine instead. I hope to have something
working by next week.

The EXT_texture_env API can be expressed in terms of
EXT_texture_env_combine, but I'm not sure this makes sense. As far as I
know, Mesa would be the only OpenGL implementing this extension. It's a
bit easier to use, but less flexible.

> > I'd implement GL_EXT_texture_env as-is. Then, perhaps design a new
> > extension.
>
> Brian's idea of a new/separate GL_MESA_texture_env_dot3 extension would
> be the best way to handle this.

I will write this extension so that it defines a new mode for
EXT_texture_env_combine.

> > Don't know. Better find out before writing GL_MESA_texture_env_dot3
> > though.
>
> I'll take a look at some of the hardware specs I have today.

This would be great. Could you check whether all the other modes are
supported, too?

- Holger

From: Gareth H. <ga...@pr...> - 2000-06-20 01:21:19

Brian Paul wrote:
> I have reason to believe that GL_EXT_texture_env_combine will be
> supported by several vendors in the future, if they don't already.

I also believe this is true.

> I agree that it would be pretty simple to add these new modes to the
> software renderer. Off-hand I don't know which of the cards supported
> by the DRI could support them. If you implement GL_EXT_texture_env
> just send me a patch.

Holger, are you planning on doing this? It should be fairly straight
forward.

> I'd implement GL_EXT_texture_env as-is. Then, perhaps design a new
> extension.

Brian's idea of a new/separate GL_MESA_texture_env_dot3 extension would
be the best way to handle this.

> Don't know. Better find out before writing GL_MESA_texture_env_dot3
> though.

I'll take a look at some of the hardware specs I have today.

-- Gareth

From: Stephen J B. <sj...@li...> - 2000-06-19 20:31:59

On Mon, 19 Jun 2000, Mark Paton wrote:
> I am just starting to learn Mesa at a lower level and don't know too
> many of the details of the rendering engine yet, but: the basic problem
> here seems to be that we are culling triangles without regard to their
> neighbouring triangles. I expect this would be very difficult to do,
> and it is desirable to treat each triangle independently. However,
> wouldn't a solution be that the smallest a triangle can be is one pixel
> - e.g. the scan conversion for this triangle is automatic? If a
> triangle is mathematically smaller than a pixel area, its rasterization
> is just the nearest pixel. Then the depth buffer test and other
> rendering details are processed accordingly. Then if you do something
> crazy like rendering 50 million polygons in a 72x72 pixel box (which I
> tried), the system will have to render 50 million pixels, and many of
> them will be the same pixel, but the result will be correct. Is this a
> valid thing to consider? I am assuming here that the numeric
> instability problems are caused by attempting to scan convert a
> mathematically tiny triangle.

You'd think that would be OK wouldn't you...but it's not. :-(

Imagine a single pixel sphere made from a hundred red triangles each with
an alpha of 0.1 - rendered onto a black background. If you rendered that
in a truly correct manner, the colour of the resulting pixel would be
RGB=(0.1,0,0), but if each triangle is forced to cover the entire pixel,
you get nearly 100% red, because each triangle hides only 1/10th of the
colour of the one behind it - then adds 0.1 of its own colour. Do that
100 times and you have:

   0.1 + 0.9 * ( 0.1 + 0.9 * ( 0.1 ...a hundred times... ))

...which sums to 1 - 0.9^100, roughly 0.99997 - essentially peak red.

What you *should* do is to have the triangle be discarded unless it
covers the very center of the pixel...no matter how small that triangle
is.

This isn't a problem that's unique to very small triangles. Even a large
triangle that *touches* a pixel without covering the exact center of the
pixel should not render to it, or else you'll get a row of bright 'beads'
running along the edges of two translucent triangles that share a common
edge. Since Mesa already gets that right, using the existing rasterizer
should be OK so long as the polygon isn't simply discarded.

You might also want to think about what happens when a bunch of long thin
triangles meet at a point. Each one is large - so the small triangle test
won't discard it - yet 360 one-degree-wide triangles meeting at a point
still have to make up a solid circle.

The answer to all of these problems is to stop thinking of pixels as
being little squares - but instead to think of them as infinitely small
points. Instead of thinking about whether a polygon touches some point on
a square pixel, think about whether the polygon covers that infinitesimal
point at the center of the pixel.

Of course if you are antialiasing - then thinking of pixels as areas
becomes sensible again.

Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work:  sj...@li...    http://www.link.com
Home:  sjb...@ai...   http://web2.airmail.net/sjbaker1

From: Allen A. <ak...@po...> - 2000-06-19 19:12:05

On Mon, Jun 19, 2000 at 03:44:57PM -0300, Mark Paton wrote:
| The basic problem here seems to be that we are culling triangles
| without regard to its neighbouring triangles. I expect this would
| be very difficult to do and it is desirable to treat each triangle
| independently. ...

Yes, in fact OpenGL requires that the triangles be treated independently.

| ... However, wouldn't a solution be that the smallest
| a triangle can be is one pixel - e.g. the scan conversion for this
| triangle is automatic. If a triangle is mathematically smaller than a
| pixel area its rasterization is just the nearest pixel. ...

OpenGL also requires that filled primitives be "point sampled."
Essentially, the image plane is examined at regularly-spaced points. If
the projection of a triangle (for example) covers one of those points,
then the color, depth, etc. at that point are derived from that triangle,
no matter how small the area of the triangle might be.

This behavior is essential for proper antialiasing and visible-surface
resolution, among other things -- including avoiding the problem that you
first mentioned (gaps/overlaps in dense meshes of small polygons).

However, for all this to work, the rasterization code must be written
*very* carefully. When I wrote the original version of the code that
eventually became Mesa's triangle rasterizer, I was aware of the issues,
but took a few shortcuts for performance and made a few mistakes as well.
Many of those have been fixed by now, but a few probably still remain,
and the triangle-size cutoff may have been installed to work around them.

It would be worth trying to reduce the cutoff size. My guess is that it
would probably work in most cases. The ones least likely to work are
those in which the vertex coordinates are far from zero.

| Then if you do something crazy like rendering 50 million polygons in
| a 72x72 pixel box (which I tried). The system will have to render 50
| million pixels, and many of them will be the same pixel but the result
| will be correct. Is this a valid thing to consider? ...

Assuming the projected images of the polygons don't overlap, then the
right behavior is that all 50 million polygons will be considered, but
only 72x72 pixels will be written.

Allen

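To illustrate the point-sampling rule Allen describes, here is a generic
edge-function coverage test in C (an illustration, not Mesa's code): a
triangle covers a pixel exactly when its projection contains the sample
point at the pixel's center, regardless of the triangle's area.

    /* Signed area term for edge a->b and point p; positive when p
     * lies to the left of the edge. */
    static float edge(float ax, float ay, float bx, float by,
                      float px, float py)
    {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    /* Point-sample test: does a counter-clockwise triangle cover the
     * center of pixel (x, y)? Triangle size is irrelevant. */
    static int covers_pixel(float v0x, float v0y, float v1x, float v1y,
                            float v2x, float v2y, int x, int y)
    {
        float px = x + 0.5f, py = y + 0.5f;   /* pixel-center sample */
        return edge(v0x, v0y, v1x, v1y, px, py) >= 0.0f &&
               edge(v1x, v1y, v2x, v2y, px, py) >= 0.0f &&
               edge(v2x, v2y, v0x, v0y, px, py) >= 0.0f;
    }

(A real rasterizer also needs a consistent fill rule for samples that
fall exactly on a shared edge, so two adjacent triangles never both claim
-- or both drop -- the same pixel.)
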
From: Mark P. <li...@iv...> - 2000-06-19 18:49:19

I am just starting to learn Mesa at a lower level and don't know too many
of the details of the rendering engine yet, but: the basic problem here
seems to be that we are culling triangles without regard to their
neighbouring triangles. I expect this would be very difficult to do, and
it is desirable to treat each triangle independently. However, wouldn't a
solution be that the smallest a triangle can be is one pixel - e.g. the
scan conversion for this triangle is automatic? If a triangle is
mathematically smaller than a pixel area, its rasterization is just the
nearest pixel. Then the depth buffer test and other rendering details are
processed accordingly. Then if you do something crazy like rendering 50
million polygons in a 72x72 pixel box (which I tried), the system will
have to render 50 million pixels, and many of them will be the same
pixel, but the result will be correct. Is this a valid thing to consider?
I am assuming here that the numeric instability problems are caused by
attempting to scan convert a mathematically tiny triangle.

In my case I used a quadstrip. In this instance it might be possible to
consider a higher level approach, e.g. instead of breaking each quad into
two triangles and rendering, reduce the quad strip before triangulation
to generate reasonable (1 pixel) triangles.

- Mark

On Mon, 19 Jun 2000, Stephen J Baker wrote:
> On Sun, 18 Jun 2000, Linux Development Account wrote:
>
> > #1 Incorrect Rendering of Subpixel quadstrips - Serious problem?
> >
> > Polygons less than one pixel in size don't display at all. Take a
> > very complex polygonal surface then slowly pull back. Given the huge
> > number of polygons, eventually the size of a polygon will be
> > mathematically smaller than one pixel. It appears at this stage that
> > no on screen pixel representation is drawn. Thus as you pull back the
> > surface disintegrates and vanishes even though the overall surface
> > might cover thousands of pixels.
>
> This has come up a couple of times before.
>
> Mesa does cull very tiny triangles (which is "A BAD THING" IMHO) - we
> are told that it's necessary for Mesa to do this in order to avoid
> mathematical instabilities down the line.
>
> The last person to complain about this hacked Mesa to reduce the size
> limit to something MUCH smaller than it currently is - with no ill
> effects.
>
> I guess that hack didn't make it into mainstream Mesa...but I think we
> DO need to address this issue as a bug.
>
> > Note this problem does not happen on SGI, Sun, or Windows OpenGL
> > implementations so I guess there is some aspect of the OpenGL
> > standard that Mesa does not correctly implement.
>
> I don't know if what Mesa does is in violation of the OpenGL spec or
> not... I suspect it is a violation - but even if it isn't, it's an evil
> that needs to be addressed properly at some stage.

From: Scott M. <mcm...@ca...> - 2000-06-19 18:18:42

I am trying to implement a multipass rendering technique similar to Mark
Kilgard's real-time shadow library, that involves multiple rendering
passes:

1) render the scene with normal lights, no stencil:
      glEnable(GL_LIGHT0);
      glEnable(GL_LIGHTING);

2) render the front faces of the shadow volumes with the following:
      glDisable(GL_LIGHTING);
      glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
      glEnable(GL_STENCIL_TEST);
      glDepthMask(GL_FALSE);
      glStencilFunc(GL_ALWAYS, 0, 0);
      glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);

3) render the back faces of the shadow volumes with the following:
      glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);

4) render the original scene with a different color light:
      glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
      glDepthFunc(GL_EQUAL);
      glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
      glStencilFunc(GL_NOTEQUAL, 0, ~0);

      glDisable(GL_LIGHT0);
      glEnable(GL_LIGHT1);
      glEnable(GL_LIGHTING);

With Mesa, everything works except that I get a white translucent (like
screen door transparency) haze where the shadow volumes are. This occurs
in areas where the stencil values were incremented and then decremented
back to zero, so there should have been no rendering into the color
buffer after the first step.

This does not happen on SGI's or most NT drivers.

Anybody have any ideas? Should I also report this as a bug?

scott

--
Scott McMillan                    mailto:mcm...@ca...
Cambridge Research Associates     http://www.cambridge.com
1430 Spring Hill Road, Ste. 200   Voice: (703) 790-0505 x7235
McLean, VA 22102                  Fax:   (703) 790-0370

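For readers following along, here is a condensed C sketch of the four
passes Scott describes, assuming hypothetical application callbacks
draw_scene() and draw_shadow_volumes(); the face-culling calls are one
assumed way to select the front versus back faces of the volumes and are
not part of the original mail:

    /* Pass 1: scene with the normal light, no stencil. */
    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHTING);
    draw_scene();

    /* Pass 2: increment stencil under front faces of shadow volumes. */
    glDisable(GL_LIGHTING);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_STENCIL_TEST);
    glDepthMask(GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 0);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);                 /* draw front faces only */
    draw_shadow_volumes();

    /* Pass 3: decrement stencil under back faces. */
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    glCullFace(GL_FRONT);                /* draw back faces only */
    draw_shadow_volumes();

    /* Pass 4: redraw shadowed pixels (stencil != 0) with the other
     * light, re-enabling the color and depth masks. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_EQUAL);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glStencilFunc(GL_NOTEQUAL, 0, ~0);
    glDisable(GL_LIGHT0);
    glEnable(GL_LIGHT1);
    glEnable(GL_LIGHTING);
    draw_scene();
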
From: Brian P. <br...@pr...> - 2000-06-19 15:45:39

[I think I accidentally deleted part of my reply.]

Brian Paul wrote:
> Holger Waechtler wrote:
> > Hi,
> >
> > I'm still thinking about how to accelerate per pixel lighting and
> > bumpmapping on current hardware.
> >
> > A problem arising in bumpmapping using current OpenGL implementations
> > is at least the missing SUBTRACT texenv mode. The nVidia register
> > combiner extension provides a solution, but it is neither easy to use
> > nor portable to other cards. The other way is defined by
> > GL_EXT_texture_env, but at least Mesa doesn't have it yet.
> >
> > Brian pointed out to me that EXT_texture_env_combine may be a more
> > common solution (the nVidia driver already has it -- it's designed
> > for their cards), but I first have to read this again ...
>
> I have reason to believe that GL_EXT_texture_env_combine will be
> supported by several vendors in the future, if they don't already.
>
> > However, it should be quite easy to add GL_EXT_texture_env to Mesa.
> > It defines the following modes (new are only SUBTRACT,
> > REVERSE_SUBTRACT and REVERSE_BLEND -- I have no idea what COPY is
> > good for ...):
> >
> >    Cv = Cf                COPY
> >    Cv = Ct                REPLACE
> >    Cv = Cf * Ct           MODULATE
> >    Cv = Cf + Ct           ADD
> >    Cv = Cf - Ct           SUBTRACT
> >    Cv = Ct - Cf           REVERSE_SUBTRACT
> >    Cv = aCf + (1-a)Ct     BLEND
> >    Cv = aCt + (1-a)Cf     REVERSE_BLEND
>
> I agree that it would be pretty simple to add these new modes to the
> software renderer. Off-hand I don't know which of the cards supported
> by the DRI could support them. If you implement GL_EXT_texture_env
> just send me a patch.
>
> > Additionally a dot product mode would make per pixel lighting
> > possible (D3D has one, so why don't we? -- it seems to be accelerated
> > by hardware at least on nVidia's new cards):
> >
> >    Cv = Rf*Rt + Gf*Gt + Bf*Bt     DOTPROD3
> >
> > Input components are understood as if they were scaled and biased by
> > 128, so their range [0,255] is mapped to [-1.0,1.0].
> >
> > Results will be scaled as defined in EXT_texture_env, then clamped
> > and written into the R, G and B components of Cv. It can be used now
> > as a light map to modulate another texture.
> >
> > I'd really like to hear your comments before I start any coding.
> > Shall we define a complete new spec as a superset of EXT_texture_env?
> > (Was this ever implemented?)
>
> I'd implement GL_EXT_texture_env as-is. Then, perhaps design a new
> extension.

A new extension for doing the dot product, that is. Perhaps call it
GL_MESA_texture_env_dot3. In fact, you could implement the dot3 extension
without GL_EXT_texture_env since there's no dependency.

> > Can those modes be accelerated by 3dfx, Matrox and ATI cards?
>
> Don't know. Better find out before writing GL_MESA_texture_env_dot3
> though.
>
> -Brian

From: Brian P. <br...@pr...> - 2000-06-19 15:40:08

Holger Waechtler wrote:
> Hi,
>
> I'm still thinking about how to accelerate per pixel lighting and
> bumpmapping on current hardware.
>
> A problem arising in bumpmapping using current OpenGL implementations
> is at least the missing SUBTRACT texenv mode. The nVidia register
> combiner extension provides a solution, but it is neither easy to use
> nor portable to other cards. The other way is defined by
> GL_EXT_texture_env, but at least Mesa doesn't have it yet.
>
> Brian pointed out to me that EXT_texture_env_combine may be a more
> common solution (the nVidia driver already has it -- it's designed for
> their cards), but I first have to read this again ...

I have reason to believe that GL_EXT_texture_env_combine will be
supported by several vendors in the future, if they don't already.

> However, it should be quite easy to add GL_EXT_texture_env to Mesa. It
> defines the following modes (new are only SUBTRACT, REVERSE_SUBTRACT
> and REVERSE_BLEND -- I have no idea what COPY is good for ...):
>
>    Cv = Cf                COPY
>    Cv = Ct                REPLACE
>    Cv = Cf * Ct           MODULATE
>    Cv = Cf + Ct           ADD
>    Cv = Cf - Ct           SUBTRACT
>    Cv = Ct - Cf           REVERSE_SUBTRACT
>    Cv = aCf + (1-a)Ct     BLEND
>    Cv = aCt + (1-a)Cf     REVERSE_BLEND

I agree that it would be pretty simple to add these new modes to the
software renderer. Off-hand I don't know which of the cards supported by
the DRI could support them. If you implement GL_EXT_texture_env just send
me a patch.

> Additionally a dot product mode would make per pixel lighting possible
> (D3D has one, so why don't we? -- it seems to be accelerated by
> hardware at least on nVidia's new cards):
>
>    Cv = Rf*Rt + Gf*Gt + Bf*Bt     DOTPROD3
>
> Input components are understood as if they were scaled and biased by
> 128, so their range [0,255] is mapped to [-1.0,1.0].
>
> Results will be scaled as defined in EXT_texture_env, then clamped and
> written into the R, G and B components of Cv. It can be used now as a
> light map to modulate another texture.
>
> I'd really like to hear your comments before I start any coding.
> Shall we define a complete new spec as a superset of EXT_texture_env?
> (Was this ever implemented?)

I'd implement GL_EXT_texture_env as-is. Then, perhaps design a new
extension.

> Can those modes be accelerated by 3dfx, Matrox and ATI cards?

Don't know. Better find out before writing GL_MESA_texture_env_dot3
though.

-Brian

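To make the mode table above concrete, here is a per-channel software
sketch in C (an illustration, not Mesa code; the semantics are taken
directly from the table, with Cf = incoming fragment color, Ct = texel
color, a = blend factor, all in [0,1]):

    typedef enum {
        ENV_COPY, ENV_REPLACE, ENV_MODULATE, ENV_ADD,
        ENV_SUBTRACT, ENV_REVERSE_SUBTRACT,
        ENV_BLEND, ENV_REVERSE_BLEND
    } EnvMode;

    static float clamp01(float x)
    {
        return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
    }

    /* Apply one GL_EXT_texture_env mode to a single color channel. */
    static float texenv_channel(EnvMode mode, float cf, float ct, float a)
    {
        switch (mode) {
        case ENV_COPY:             return cf;
        case ENV_REPLACE:          return ct;
        case ENV_MODULATE:         return clamp01(cf * ct);
        case ENV_ADD:              return clamp01(cf + ct);
        case ENV_SUBTRACT:         return clamp01(cf - ct);
        case ENV_REVERSE_SUBTRACT: return clamp01(ct - cf);
        case ENV_BLEND:            return clamp01(a * cf + (1.0f - a) * ct);
        case ENV_REVERSE_BLEND:    return clamp01(a * ct + (1.0f - a) * cf);
        }
        return cf;
    }
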
From: Brian P. <br...@pr...> - 2000-06-19 14:09:10

Mark Paton wrote:
> Just a quick question. If I am using anonymous CVS, which tag do I use
> to get the version 3.2 branch that includes any changes since the
> release of version 3.2? Is this the mesa_3_2_dev tag?

Right.

> I assume the main branch is what will eventually be version 3.3.

Right again.

-Brian

From: Stephen J B. <sj...@li...> - 2000-06-19 13:35:55

On Sun, 18 Jun 2000, Linux Development Account wrote:
> While I have been working with OpenGL since its inception, I am
> relatively new to Mesa and the SourceForge development system here, so
> bear with me.
>
> An initial question: if I find what I think is a bug, do I directly try
> to add it in the SourceForge bug tracker, or do we discuss it first
> here to make sure a bug has really been found?

It's a hard call - I'd discuss it here first.

> #1 Incorrect Rendering of Subpixel quadstrips - Serious problem?
>
> Polygons less than one pixel in size don't display at all. Take a very
> complex polygonal surface then slowly pull back. Given the huge number
> of polygons, eventually the size of a polygon will be mathematically
> smaller than one pixel. It appears at this stage that no on screen
> pixel representation is drawn. Thus as you pull back the surface
> disintegrates and vanishes even though the overall surface might cover
> thousands of pixels.

This has come up a couple of times before.

Mesa does cull very tiny triangles (which is "A BAD THING" IMHO) - we are
told that it's necessary for Mesa to do this in order to avoid
mathematical instabilities down the line.

The last person to complain about this hacked Mesa to reduce the size
limit to something MUCH smaller than it currently is - with no ill
effects.

I guess that hack didn't make it into mainstream Mesa...but I think we DO
need to address this issue as a bug.

> Note this problem does not happen on SGI, Sun, or Windows OpenGL
> implementations so I guess there is some aspect of the OpenGL standard
> that Mesa does not correctly implement.

I don't know if what Mesa does is in violation of the OpenGL spec or
not... I suspect it is a violation - but even if it isn't, it's an evil
that needs to be addressed properly at some stage.

> #2 Front & Back buffer drawing+blending = crash
>
> With a double buffered visual, if I draw polygons (quad strips in this
> case) normally or use glDrawBuffer(GL_FRONT) and draw the polygons, no
> problem. However, if I use glDrawBuffer(GL_FRONT_AND_BACK) the
> application crashes. Here's some output from gdb:
>
>    Reading symbols from /lib/ld-linux.so.2...done.
>    #0  0x400552ac in blend_transparency (ctx=Cannot access memory at
>        address 0x3d) at blend.c:286
>    286  blend.c: No such file or directory.
>    (gdb) where
>    #0  0x400552ac in blend_transparency (ctx=Cannot access memory at
>        address 0x3d) at blend.c:286
>    #1  0x842ed10 in ?? ()
>    Cannot access memory at address 0x1

That's a new one on me - but then you may well be the first person ever
to try that! It's a pretty esoteric rendering mode.

> #3 Rendering Performance Issue - Not a bug but an observation.
>
> Again using a double buffered (+Z buffer) visual - if I draw the same
> quad strip scene described in bug #1 normally in the background and
> swap it to the foreground with the usual SwapBuffers command, I get
> excellent performance. However, if I set glDrawBuffer(GL_FRONT) and
> render the same scene, I get absolutely horrible performance in
> comparison - I would say 10-12 times slower. I would expect the two
> situations to have similar performance, with only
> glDrawBuffer(GL_FRONT_AND_BACK) suffering a performance penalty. Anyone
> with more Mesa knowledge care to comment - is this a problem or just an
> optimization waiting for an implementor?

As always, it's the commonly used modes that get the most optimisation -
and glDrawBuffer(GL_FRONT_AND_BACK) is certainly not commonly used.

It's possible that because each pixel is writing to two widely separated
chunks of memory, you are causing some nasty cache artifact in the CPU -
but I suspect it's more likely that a lack of a specifically optimised
code path is your problem. (That assumes you are doing software rendering
- if you are doing hardware rendering then the likely explanation would
be that the hardware can't do simultaneous front and back buffer
rendering - so you got a software fallback.)

If I were you, I'd alter my code to render each object twice, once into
the back buffer and again into the front. This would avoid both the bug
and the slowdown - and it'll probably be faster on every platform - not
just Mesa.

Of course you should register #2 as a bug, possibly #1 also (although we
all understand it already) - I don't think a lack of optimisation in #3
justifies a *bug* report - but opinions will differ on that one.

Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work:  sj...@li...    http://www.link.com
Home:  sjb...@ai...   http://web2.airmail.net/sjbaker1

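A minimal sketch of the render-twice workaround Steve suggests, where
draw_scene() is a hypothetical application callback:

    /* Instead of glDrawBuffer(GL_FRONT_AND_BACK), draw everything
     * twice, once per buffer; this sidesteps both the crash and the
     * unoptimised combined-buffer path. */
    glDrawBuffer(GL_BACK);
    draw_scene();
    glDrawBuffer(GL_FRONT);
    draw_scene();
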
From: Holger W. <hwa...@ya...> - 2000-06-19 05:40:37

Hi,

I'm still thinking about how to accelerate per pixel lighting and
bumpmapping on current hardware.

A problem arising in bumpmapping using current OpenGL implementations is
at least the missing SUBTRACT texenv mode. The nVidia register combiner
extension provides a solution, but it is neither easy to use nor portable
to other cards. The other way is defined by GL_EXT_texture_env, but at
least Mesa doesn't have it yet.

Brian pointed out to me that EXT_texture_env_combine may be a more common
solution (the nVidia driver already has it -- it's designed for their
cards), but I first have to read this again ...

However, it should be quite easy to add GL_EXT_texture_env to Mesa. It
defines the following modes (new are only SUBTRACT, REVERSE_SUBTRACT and
REVERSE_BLEND -- I have no idea what COPY is good for ...):

   Cv = Cf                COPY
   Cv = Ct                REPLACE
   Cv = Cf * Ct           MODULATE
   Cv = Cf + Ct           ADD
   Cv = Cf - Ct           SUBTRACT
   Cv = Ct - Cf           REVERSE_SUBTRACT
   Cv = aCf + (1-a)Ct     BLEND
   Cv = aCt + (1-a)Cf     REVERSE_BLEND

Additionally, a dot product mode would make per pixel lighting possible
(D3D has one, so why don't we? -- it seems to be accelerated by hardware
at least on nVidia's new cards):

   Cv = Rf*Rt + Gf*Gt + Bf*Bt     DOTPROD3

Input components are understood as if they were scaled and biased by 128,
so their range [0,255] is mapped to [-1.0,1.0].

Results will be scaled as defined in EXT_texture_env, then clamped and
written into the R, G and B components of Cv. It can be used now as a
light map to modulate another texture.

I'd really like to hear your comments before I start any coding. Shall we
define a complete new spec as a superset of EXT_texture_env? (Was this
ever implemented?)

Can those modes be accelerated by 3dfx, Matrox and ATI cards?

- Holger

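A small C sketch of the DOTPROD3 arithmetic as described above (an
illustration only; the bias-by-128 mapping and the clamp are taken from
the mail, everything else is assumed):

    #include <stdint.h>

    /* f = fragment RGB, t = texel RGB, both 8-bit. Channels are mapped
     * from [0,255] to [-1,1] by biasing by 128 and scaling by 1/128,
     * then the RGB dot product is clamped to [0,1] and replicated into
     * R, G and B. */
    static uint8_t texenv_dotprod3(const uint8_t f[3], const uint8_t t[3])
    {
        float dot = 0.0f;
        int i;
        for (i = 0; i < 3; i++) {
            float nf = ((float)f[i] - 128.0f) / 128.0f;
            float nt = ((float)t[i] - 128.0f) / 128.0f;
            dot += nf * nt;
        }
        if (dot < 0.0f) dot = 0.0f;
        if (dot > 1.0f) dot = 1.0f;
        return (uint8_t)(dot * 255.0f + 0.5f);
    }
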
From: Mark P. <li...@iv...> - 2000-06-19 02:43:55

Just a quick question. If I am using anonymous CVS, which tag do I use to
get the version 3.2 branch that includes any changes since the release of
version 3.2? Is this the mesa_3_2_dev tag? I assume the main branch is
what will eventually be version 3.3.

Thanks,

Mark

From: Brian P. <br...@pr...> - 2000-06-18 22:58:05

OK, I found some time during my X server builds to look into these
problems.

Brian Paul wrote:
> Linux Development Account wrote:
> > #1 Incorrect Rendering of Subpixel quadstrips - Serious problem?
> >
> > Polygons less than one pixel in size don't display at all.

I've changed the code to simply threshold the 1/area computation in
tritemp.h. My tests indicate this works.

> > #2 Front & Back buffer drawing+blending = crash

Easily fixed (span.c).

I've checked in these changes to both the Mesa 3.2 and 3.3 branches. I'm
planning to make a Mesa 3.2.1 release in the coming weeks since there's
been a number of bug fixes since 3.2.

-Brian

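A rough sketch of what "thresholding the 1/area computation" could look
like (an assumption for illustration, not the actual tritemp.h change;
the epsilon and names are hypothetical):

    #include <GL/gl.h>

    /* Reject only truly degenerate triangles, instead of culling
     * everything below a "smaller than a pixel" size cutoff. Subpixel
     * triangles then still reach the point-sampling rasterizer, and
     * the reciprocal cannot blow up. Returns 0 when there is nothing
     * to rasterize. */
    static int setup_one_over_area(GLfloat area, GLfloat *oneOverArea)
    {
        const GLfloat DEGENERATE_AREA = 1.0e-20f;  /* hypothetical */
        if (area > -DEGENERATE_AREA && area < DEGENERATE_AREA)
            return 0;
        *oneOverArea = 1.0f / area;
        return 1;
    }
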
From: Brian P. <br...@pr...> - 2000-06-18 21:20:06

Linux Development Account wrote:
> While I have been working with OpenGL since its inception, I am
> relatively new to Mesa and the SourceForge development system here, so
> bear with me.
>
> An initial question: if I find what I think is a bug, do I directly try
> to add it in the SourceForge bug tracker, or do we discuss it first
> here to make sure a bug has really been found?
>
> I recently compiled a fairly large visualization system under
> Linux-i386 and Mesa OpenGL. Overall I am very impressed - kudos to the
> developers. However, I did discover several bugs, one of which I think
> is quite serious. These tests were done with a dataset of approx. 50
> million polygons. Here are the problems that I found.
>
> ----
>
> #1 Incorrect Rendering of Subpixel quadstrips - Serious problem?
>
> Polygons less than one pixel in size don't display at all. Take a very
> complex polygonal surface then slowly pull back. Given the huge number
> of polygons, eventually the size of a polygon will be mathematically
> smaller than one pixel. It appears at this stage that no on screen
> pixel representation is drawn. Thus as you pull back the surface
> disintegrates and vanishes even though the overall surface might cover
> thousands of pixels.
>
> Note this problem does not happen on SGI, Sun, or Windows OpenGL
> implementations so I guess there is some aspect of the OpenGL standard
> that Mesa does not correctly implement. In this implementation the
> surface is a digital terrain model rendered as a series of quad strips.

A long time ago I added code to cull triangles smaller than a certain
threshold size in order to prevent fp/int overflows in the rasterizer
code. A few people have complained about this now, so I guess it's time I
reexamined the problem.

> #2 Front & Back buffer drawing+blending = crash
>
> With a double buffered visual, if I draw polygons (quad strips in this
> case) normally or use glDrawBuffer(GL_FRONT) and draw the polygons, no
> problem. However, if I use glDrawBuffer(GL_FRONT_AND_BACK) the
> application crashes. Here's some output from gdb:
>
>    Reading symbols from /lib/ld-linux.so.2...done.
>    #0  0x400552ac in blend_transparency (ctx=Cannot access memory at
>        address 0x3d) at blend.c:286
>    286  blend.c: No such file or directory.
>    (gdb) where
>    #0  0x400552ac in blend_transparency (ctx=Cannot access memory at
>        address 0x3d) at blend.c:286
>    #1  0x842ed10 in ?? ()
>    Cannot access memory at address 0x1
>
> Note that the quad strips are drawn with GL_BLEND enabled, although I
> am not sure if that has anything to do with the problem.

I hadn't heard of this problem. I'll look into it (though I'll be out of
town most of this week, so be patient).

> #3 Rendering Performance Issue - Not a bug but an observation.
>
> Again using a double buffered (+Z buffer) visual - if I draw the same
> quad strip scene described in bug #1 normally in the background and
> swap it to the foreground with the usual SwapBuffers command, I get
> excellent performance. However, if I set glDrawBuffer(GL_FRONT) and
> render the same scene, I get absolutely horrible performance in
> comparison - I would say 10-12 times slower. I would expect the two
> situations to have similar performance, with only
> glDrawBuffer(GL_FRONT_AND_BACK) suffering a performance penalty. Anyone
> with more Mesa knowledge care to comment - is this a problem or just an
> optimization waiting for an implementor?

Drawing to the front buffer (the X window) is generally done with
XDrawPoint. That's inherently slow.

Drawing to the back buffer is either done with direct writes to the
XImage or with XPutPixel. That's much faster than XDrawPoint.

> I plan to look into #2 myself as a first experiment working with the
> Mesa code, but for #1 and #3 it would be nice to see some discussion
> from the more experienced Mesa developers. Should I add these bugs to
> SourceForge?

Yes, please file bug reports, otherwise I'm likely to forget about them.

-Brian

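For context, a generic Xlib sketch of the two code paths Brian contrasts
(illustrative only, not Mesa's actual span code):

    #include <X11/Xlib.h>

    /* Front buffer: one protocol request per pixel - inherently slow. */
    static void write_span_front(Display *dpy, Window win, GC gc,
                                 int x, int y, int n,
                                 const unsigned long *pixels)
    {
        int i;
        for (i = 0; i < n; i++) {
            XSetForeground(dpy, gc, pixels[i]);
            XDrawPoint(dpy, win, gc, x + i, y);
        }
    }

    /* Back buffer: plain memory writes into a client-side XImage,
     * pushed to the screen once per frame at SwapBuffers time. */
    static void write_span_back(XImage *backbuffer, int x, int y, int n,
                                const unsigned long *pixels)
    {
        int i;
        for (i = 0; i < n; i++)
            XPutPixel(backbuffer, x + i, y, pixels[i]);
    }
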
From: Randall F. <rjf...@ho...> - 2000-06-18 17:27:52

To add my $0.02: I would love to see (#1) addressed as well. Generally,
we comment out the size check in the code to work around this, but a
proper fix still needs to be found.

As far as (#3) is concerned, there were some comments in the readme for
X11 that discuss this. I assume they are still there.

Linux Development Account wrote:
> While I have been working with OpenGL since its inception, I am
> relatively new to Mesa and the SourceForge development system here, so
> bear with me.
>
> An initial question: if I find what I think is a bug, do I directly try
> to add it in the SourceForge bug tracker, or do we discuss it first
> here to make sure a bug has really been found?
>
> I recently compiled a fairly large visualization system under
> Linux-i386 and Mesa OpenGL. Overall I am very impressed - kudos to the
> developers. However, I did discover several bugs, one of which I think
> is quite serious. These tests were done with a dataset of approx. 50
> million polygons. Here are the problems that I found.
>
> ----
>
> #1 Incorrect Rendering of Subpixel quadstrips - Serious problem?
>
> Polygons less than one pixel in size don't display at all. Take a very
> complex polygonal surface then slowly pull back. Given the huge number
> of polygons, eventually the size of a polygon will be mathematically
> smaller than one pixel. It appears at this stage that no on screen
> pixel representation is drawn. Thus as you pull back the surface
> disintegrates and vanishes even though the overall surface might cover
> thousands of pixels.
>
> Note this problem does not happen on SGI, Sun, or Windows OpenGL
> implementations so I guess there is some aspect of the OpenGL standard
> that Mesa does not correctly implement. In this implementation the
> surface is a digital terrain model rendered as a series of quad strips.
>
> ----
>
> #2 Front & Back buffer drawing+blending = crash
>
> With a double buffered visual, if I draw polygons (quad strips in this
> case) normally or use glDrawBuffer(GL_FRONT) and draw the polygons, no
> problem. However, if I use glDrawBuffer(GL_FRONT_AND_BACK) the
> application crashes. Here's some output from gdb:
>
>    Reading symbols from /lib/ld-linux.so.2...done.
>    #0  0x400552ac in blend_transparency (ctx=Cannot access memory at
>        address 0x3d) at blend.c:286
>    286  blend.c: No such file or directory.
>    (gdb) where
>    #0  0x400552ac in blend_transparency (ctx=Cannot access memory at
>        address 0x3d) at blend.c:286
>    #1  0x842ed10 in ?? ()
>    Cannot access memory at address 0x1
>
> Note that the quad strips are drawn with GL_BLEND enabled, although I
> am not sure if that has anything to do with the problem.
>
> ----
>
> #3 Rendering Performance Issue - Not a bug but an observation.
>
> Again using a double buffered (+Z buffer) visual - if I draw the same
> quad strip scene described in bug #1 normally in the background and
> swap it to the foreground with the usual SwapBuffers command, I get
> excellent performance. However, if I set glDrawBuffer(GL_FRONT) and
> render the same scene, I get absolutely horrible performance in
> comparison - I would say 10-12 times slower. I would expect the two
> situations to have similar performance, with only
> glDrawBuffer(GL_FRONT_AND_BACK) suffering a performance penalty. Anyone
> with more Mesa knowledge care to comment - is this a problem or just an
> optimization waiting for an implementor?
>
> ----
>
> This testing was done with a C++ application using Mesa 3.2, Redhat 6.2
> with the stock kernel, XFree86 3.3.6, Metrolink Motif, and the SGI
> Motif OpenGL drawing widget from Mesa (GLwMDrawingA.h).
>
> I plan to look into #2 myself as a first experiment working with the
> Mesa code, but for #1 and #3 it would be nice to see some discussion
> from the more experienced Mesa developers. Should I add these bugs to
> SourceForge?
>
> Talk to you later,
> Mark
>
> -------------------------------------
> Mark Paton
> Interactive Visualization Systems Inc.
> http://www.ivs.unb.ca
>
> _______________________________________________
> Mesa3d-dev mailing list
> Mes...@li...
> http://lists.sourceforge.net/mailman/listinfo/mesa3d-dev

--
rjf.

Randy Frank                            | ASCI Visualization
Lawrence Livermore National Laboratory | rj...@ll...
B451 Room 2039  L-561                  | Voice: (925) 423-9399
Livermore, CA 94550                    | Fax:   (925) 423-8704