From: Diogo de Andrade <diogo.andrade@ne...> - 2007-10-18 13:57:09

Yes, the parallel-edge cases are the problematic ones. Are you suggesting that I:

a) Before testing the triangles proper, enclose the two triangles in two boxes, test those boxes for overlap, and only run the rest of the test if they do overlap? Or

b) Before each edge/edge test, enclose the two edges in two boxes, test those for overlap, and only do the edge test proper if they overlap?

That sounds like it could solve my problem, at least this particular one. Of course it probably won't solve another 10000 problems, but it might postpone the problem long enough to buy myself development time to use a good mesh parameterization algorithm of some sort.

Diogo de Andrade

[Quoted messages from Steve McCrea (17 October 2007) and from the original post elided; both appear in full later in the thread.]
From: Diogo de Andrade <diogo.andrade@ne...> - 2007-10-18 13:53:20

As I said in my initial post, curiously enough I don't have problems with the adjacent triangles (especially because I contract the triangles by 5% or so before trying the collision detection, to avoid shared edges yielding a collision)... I have problems with triangles that have some edges parallel...

Diogo de Andrade

[Quoted message from Ben Garney elided; it appears in full later in the thread.]
From: Diogo de Andrade <diogo.andrade@ne...> - 2007-10-18 13:51:45

> Another problem might be using DirectX. If you call into the DirectX
> library (or, in fact, certain other libraries), it may change the
> internal CPU rounding mode and floating point precision bits, if you're
> on x86. Similarly, signal handlers may have that effect on UNIX. The
> only way to fix this is to set the rounding mode and precision bits
> yourself to what you want, either using inline assembler, or using
> _controlfp87() from the MSVC runtime.

That was my bet for the difference in results between the two applications on the same PC/compiler/environment... But I thought that by using the FPU_PRESERVE flag and doubles, that problem wouldn't arise (at least for test purposes); apparently it still does...

> Regarding unwrapping for light maps, that problem has been solved many
> times over :) I would suggest going for some general mesh
> parameterization algorithm, and tweak it to favor good connectivity with
> low distortion over getting the largest possible subcharts. Also, you
> might want to force it to break triangles when the edge angle is greater
> than some tolerance, or where there are material or smoothing group
> breaks, to make sure you get a nice, clean break in the rendered result.

If I can't find a simpler solution (I don't have much time for implementation at this stage), that's what I intend to do (find another way to do the UV unwrap, instead of my simple projection algorithm)... I was expecting D3D's UVAtlas to do that work for me, but I can't seem to understand how to make that work, or whether it does that at all...

> Last: I'm assuming you put gutters around your outer triangle edges in
> the light map? Else you'll have filtering problems with light or shadow
> "leaking."

I'm aware of that problem, but I haven't reached it yet... I was thinking of leaving some space in the generated atlas, and then post-processing the resulting texture to bleed the generated lightmaps into the unused space...

Diogo de Andrade
From: Diogo de Andrade <diogo.andrade@ne...> - 2007-10-18 13:47:58

Thanks for the suggestion, I will check it out and see how complex it is to implement. :)

Thanks,
Diogo de Andrade

[Quoted message from Jason Hughes elided; it appears in full later in the thread.]
From: Simon Fenney <simon.fenney@po...> - 2007-10-18 12:29:13

Gribb, Gil wrote:
> > ...and in fact every graphics spec I know (OGL, DX)
> > *requires* them to
> > snap to fixed point before rasterisation.
> >
> > TomF.
>
> My point was that it is possible to make a triangle overlap
> test that does not give a false positive for a pair of
> triangles sharing an edge.

That is certainly true. If you have an edge in floating-point screen coordinates, you can make sure that it, and the corresponding shared edge on an adjacent triangle, have no repeated pixels nor holes.

> And as a secondary point, if some code using floats doesn't
> work, then using doubles won't work either, because the
> problem wasn't precision per se.

IIRC, it's when you have to compute the signed area of the triangle* that things start to become very unpleasant with floating point. You can easily produce a triangle with an extremely small absolute area, yet the calculation's intermediate values need to be extremely large and require virtually arbitrary levels of precision.

Simon

* E.g. for backface culling and texture mapping.
From: Gribb, Gil <ggribb@ra...> - 2007-10-18 12:10:42

> ...and in fact every graphics spec I know (OGL, DX) *requires* them to
> snap to fixed point before rasterisation.
>
> TomF.

Probably true, but not really my point. The data starts out as floating point, and yet there are no cracks. It is certainly possible to make a pure floating point rasterizer that does not produce cracks.

My point was that it is possible to make a triangle overlap test that does not give a false positive for a pair of triangles sharing an edge. And as a secondary point, if some code using floats doesn't work, then using doubles won't work either, because the problem wasn't precision per se.

Gil
From: Tom Forsyth <tom.forsyth@ee...> - 2007-10-18 06:32:24

...and in fact every graphics spec I know (OGL, DX) *requires* them to snap to fixed point before rasterisation.

TomF.

[Quoted message from Marco Salvi elided; it appears in full later in the thread.]
From: Tom Forsyth <tom.forsyth@ee...> - 2007-10-18 06:32:23

Yes, the geometric test isn't that useful. When you go to actually make your charts, you'll need to handle image-space messiness like quantising to texel boundaries and adding n-texel gutters around all your objects. The best way is to make a bunch of separate charts, one per connected sequence of triangles. Then find the texel values you need for each chart. Then flood-fill the gutters. Then pack the charts together into textures with a 2D bitmap packer.

And certainly check out the link Jason gave; well worth the extra complexity.

TomF.

[Quoted message from Jason Hughes elided; it appears in full later in the thread.]
From: Steve McCrea <SMcCrea@re...> - 2007-10-17 22:19:20

To address your low-level problem:

The edge intersection test used (Graphics Gems III, IV.6) breaks down to:

P* = P1 + a(P2 - P1), a must be in [0,1]
P* = P3 + b(P4 - P3), b must be in [0,1]

A = P2 - P1
B = P3 - P4
C = P1 - P3

a = (B x C) / (A x B)
b = (C x A) / (A x B)

So essentially the test breaks down if the edges are nearly parallel, since A x B approaches zero. Moving the test to double precision should improve it dramatically.

The Graphics Gem mentions that an initial bounding box test on the edges results in a 20% speedup across a range of platforms. This would also fix any parallel edges with a perpendicular split plane, like your problem case.

Steve

[Quoted message from Diogo de Andrade elided; the original post appears in full later in the thread.]
From: Ben Garney <beng@ga...> - 2007-10-17 18:43:53

On 10/17/07, Diogo de Andrade <diogo.andrade@...> wrote:
> floats to doubles had no effect... Then I bore deeper into the tri/tri
> routines and what I found out disturbed me... I created another project (to

I'm guessing this case comes up for adjacent triangles... Have you considered doing a topological check (i.e., adjacent triangles with similar normals don't get checked for collision)?

Ben Garney
GarageGames.com
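Ben's topological filter might look like the following for an indexed triangle mesh (an editorial sketch; it assumes triangles index into a shared vertex pool so adjacency can be read straight off the indices, and it leaves out the normal-similarity half of his suggestion):

```c
#include <stdbool.h>

/* A triangle as three indices into a shared vertex array. */
typedef struct { int v[3]; } Tri;

/* Two indexed triangles are topologically adjacent if they share at
   least one vertex index. Such pairs touch by construction, so a
   geometric overlap test on them would only report the shared edge or
   vertex as a spurious "hit"; skip it instead. */
static bool tris_adjacent(const Tri *a, const Tri *b) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (a->v[i] == b->v[j])
                return true;
    return false;
}
```

In a fuller implementation you would also compare face normals (e.g. require their dot product to exceed some threshold) before skipping the pair, as Ben suggests, so that sharply folded adjacent triangles still get tested.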
From: Jon Watte <hplus@mi...> - 2007-10-17 18:34:44

Another problem might be using DirectX. If you call into the DirectX library (or, in fact, certain other libraries), it may change the internal CPU rounding mode and floating point precision bits, if you're on x86. Similarly, signal handlers may have that effect on UNIX. The only way to fix this is to set the rounding mode and precision bits yourself to what you want, either using inline assembler, or using _controlfp87() from the MSVC runtime.

Regarding unwrapping for light maps, that problem has been solved many times over :) I would suggest going for some general mesh parameterization algorithm, and tweaking it to favor good connectivity with low distortion over getting the largest possible subcharts. Also, you might want to force it to break triangles when the edge angle is greater than some tolerance, or where there are material or smoothing group breaks, to make sure you get a nice, clean break in the rendered result.

Last: I'm assuming you put gutters around your outer triangle edges in the light map? Else you'll have filtering problems with light or shadow "leaking."

Cheers,
/ h+

[Quoted message from Gribb, Gil elided; it appears in full later in the thread.]

--
Revenge is the most pointless and damaging of human desires.
From: Marco Salvi <marcotti@gm...> - 2007-10-17 16:20:05

On 10/17/07, Gribb, Gil <ggribb@...> wrote:
> There probably is a way to do what you want exactly, with floats, but it
> is very tricky. GPUs for example never draw a pixel twice for a well-formed
> mesh, and they use floats, but it takes a lot of care and thought to get
> this right.

As far as I know, all modern rasterization units (on GPUs) work with fixed point math, the only way to get watertight rasterization :)
From: Gribb, Gil <ggribb@ra...> - 2007-10-17 16:10:30

Well, those are the wonders of floating point. It doesn't matter if you use doubles or not; doubles are still floating point. You will see differences with platforms, compilers, optimization, slightly different code, etc. What you are asking floating point to do, it cannot, unless you are extra careful.

Generally, you introduce a tolerance, so it isn't "do these triangles intersect?" but "do these triangles intersect by more than some small number?".

There probably is a way to do what you want exactly, with floats, but it is very tricky. GPUs, for example, never draw a pixel twice for a well-formed mesh, and they use floats, but it takes a lot of care and thought to get this right.

Gil

[Quoted message from Diogo de Andrade elided; the original post appears in full later in the thread.]
From: Jason Hughes <jason_hughes@di...> - 2007-10-17 16:04:14

Diogo,

Interesting problem. I have to wonder, though: if you already have to handle the case where you have collisions, and you're creating separate charts to satisfy the triangular projection's uniqueness criteria... why not abandon the cube-faced charting entirely and build a proper 2D triangle allocator instead? You're already doing half the work.

http://research.microsoft.com/~hoppe/mcgim.pdf

You might find this to be a helpful resource. The partitioning of models is fairly straightforward, and will yield fewer, larger UV shells with higher degrees of connectivity than your current method (which may not matter if this is for tools use only, but should matter for runtime use), because the geometry charts span curvature.

...and you won't need to solve this triangle projection error. :)

Thanks,
JH

Diogo de Andrade wrote:
> Hey all!
>
> I have a triangle/triangle intersection problem which I hope some can
> shed some light on... My purpose is to build UV charts for an
> arbitrary model...
From: Diogo de Andrade <diogo.andrade@ne...>  20071017 15:00:45

Hey all! I have a triangle/triangle intersection problem which I hope someone can shed some light on. My purpose is to build UV charts for an arbitrary model. I'll then use those charts to build lightmaps. My current algorithm just uses the normal of a particular triangle to decide which of the 6 charts (+/-X, +/-Y, +/-Z) it will be on, then I project the proper world coordinates of the model onto the appropriate plane (for instance, if I want to place a certain triangle on the +X plane, I just use the Y,Z coordinates (properly scaled and offset) as U,V). This works well for simple models, as I expected (for instance, a cube, or a sphere, or a simple table, stuff like that). Now, to add support for more complex models (for instance, concave models, in which the projection of two different triangles can lead to the same spot on the chart), I added triangle intersections; that is, for each triangle I project, I check whether a projected triangle is already at that position. If there is, I just create a new chart and place the triangle there. The thing is, this almost works, BUT (there's always a but) some triangles get placed in another chart altogether, although they clearly fit in the "first" chart. Delving deeper into this, I arrived at the conclusion that the triangle/triangle intersection routines are reporting false positives. At first, I was convinced that this was a precision issue, but changing all the code from floats to doubles had no effect. Then I bored deeper into the tri/tri routines, and what I found out disturbed me. I created another project (to simplify my tests) and copied the relevant code straight from my code base to that new project. And to my surprise, using the same input data, I got a false positive in one case, and the (correct) negative in the other. I quickly assigned blame to different compilation flags, but after a lot of hunting around, I couldn't find any differences. 
I finally got the two applications to behave the same way, by changing stuff like: f = Ay*Bx - Ax*By; to f1 = Ay*Bx; f2 = Ax*By; f = f1 - f2; This made the two pieces of code behave the same, BUT in this version, I have false positives again... :\ I'm using the code by Tomas Moller available at http://www.cs.lth.se/home/Tomas_Akenine_Moller/code/ , but I already tried all the tri/tri routines I could find on the net... None of them report the correct "false" response I was expecting on these two triangles: V0[0]=0.85786074f; V0[1]=3.9275737f; V0[2]=0.0f; V1[0]=1.5166174f; V1[1]=3.7277427f; V1[2]=0.0f; V2[0]=2.8088150f; V2[1]=6.8473845f; V2[2]=0.0f; U0[0]=0.075145423f; U0[1]=0.24772120f; U0[2]=0.0f; U1[0]=1.4189863f; U1[1]=3.4920404f; U1[2]=0.0f; U2[0]=0.76022965f; U2[1]=3.6918714f; U2[2]=0.0f; int coplanar; float p1[3],p2[3]; int b=tri_tri_intersect_with_isectline(V0,V1,V2,U0,U1,U2,&coplanar,&p1[0],&p2[0]); I've graphed these two triangles two million times already, but it's pretty clear they don't intersect (I can see the separating axis clearly, since the highest Y value on tri U is 3.6918714 and the lowest Y value on tri V is 3.7277427)... Still, the routines give a false positive on these triangles (and curiously enough, not on the cases where there are triangles just adjacent to each of them)... When I investigated this matter even more, I found out that the problem is that the routine reports an intersection of edge (V1,V2) and (U0,U1), which are almost collinear, so I assume that's the problem... I really don't know what else to try... I don't care if the routine is slow or 2D, because it's for use in a preprocess and the only cases I want to test are 2D, but I really can't understand the behavior I'm seeing... Thanks in advance for any help/hint! Diogo de Andrade 
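Two things may help with the false positive described above, sketched here purely as illustration (function names are mine, not from Moller's code): a cheap per-axis interval rejection, which already separates these two triangles on Y, and an orientation test accumulated in double, so nearly collinear edge pairs like (V1,V2)/(U0,U1) are less likely to flip sign from float rounding:

```c
/* Signed doubled area of triangle (a,b,c). Inputs are float, but the
   products and the subtraction are done in double to reduce the
   cancellation error that flips the sign on nearly collinear points. */
static double orient2d(const float a[2], const float b[2], const float c[2])
{
    double abx = (double)b[0] - a[0], aby = (double)b[1] - a[1];
    double acx = (double)c[0] - a[0], acy = (double)c[1] - a[1];
    return abx * acy - aby * acx;
}

/* Quick interval rejection on one axis (0 = X, 1 = Y): if one
   triangle's max is below the other's min by more than eps, the
   triangles cannot intersect, and the edge tests never need to run. */
static int disjoint_on_axis(float t0[3][2], float t1[3][2],
                            int axis, float eps)
{
    float min0 = t0[0][axis], max0 = t0[0][axis];
    float min1 = t1[0][axis], max1 = t1[0][axis];
    for (int i = 1; i < 3; ++i) {
        if (t0[i][axis] < min0) min0 = t0[i][axis];
        if (t0[i][axis] > max0) max0 = t0[i][axis];
        if (t1[i][axis] < min1) min1 = t1[i][axis];
        if (t1[i][axis] > max1) max1 = t1[i][axis];
    }
    return (max0 < min1 - eps) || (max1 < min0 - eps);
}
```

On the triangles in the post, the Y-axis rejection already reports them disjoint (3.6918714 < 3.7277427). Where edge tests are still needed, treating any `orient2d` magnitude below a tolerance as "collinear, decide by interval overlap" is one common way to avoid the near-collinear false positive; the fully robust route is an exact/adaptive predicate such as Shewchuk's.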
From: Leigh McRae <leigh.mcrae@co...>  20071015 15:18:09

If you're willing to have an artist place waypoints by hand, then I would go with that. You could get this system going very quickly and it would be very light on both CPU and memory. It doesn't sound like you need much. I also like the 2D grid idea, but doing an A* across the level might become a problem. I haven't any experience with multiresolution grids, but I think that's what I would explore. With all the talk of multithreading, you could likely have a system that paints the grid for dynamic obstacles. What do you have to determine whether you need to pathfind at all? Do you have line-of-sight queries? The last problem is painting the grid; I guess that would be a flood fill of sorts, offline. Don't underestimate how hard that will be. Part of me would still do a navmesh though :) I really like how you can do 2D line of sight using the triangles. There are other problems I glossed over though. Determining whether your agent can fit through a triangle can be a pain. I would say any auto-generation will likely be harder than you initially think and take a month to do, plus you will be bug fixing it to the end :) Leigh ----- Original Message ----- From: Jeff Russell To: Game Development Algorithms Sent: Sunday, October 14, 2007 11:29 PM Subject: Re: [Algorithms] pathfinding and nav representation On 10/14/07, Leigh McRae <leigh.mcrae@...> wrote: You really need to give more info about the game. How much detail is there in the level, is it all open fields or is it a city? Do you want to lean on memory or CPU? How much extra AI do you want out of it? A little more info then: - Detail in the level varies by location. Some areas are pretty much open wilderness with a few trees, and some are tight street corners and alleys with debris littered everywhere. Characters are humans and human-sized. We don't really have to worry about vehicles or such for right now. 
I'd rather lean on the memory side for this because our character counts can grow quite large and I'd rather give up a little memory than have something like this start slowing things down a lot. As far as AI out of it - really it just needs to be enough info to mark pass/no pass - we already have mechanisms for marking cover and defining specific move points for scripting etc. Thanks for your response, Jeff ------------------------------------------------------- This SF.net email is sponsored by: Splunk Inc. Still grepping through log files to find problems? Stop. Now Search log events and configuration files using AJAX and a browser. Download your FREE copy of Splunk now >> http://get.splunk.com/ ------------------------------------------------------- _______________________________________________ GDAlgorithms-list mailing list GDAlgorithms-list@... https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list Archives: http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list 
From: Jason Hughes <jason_hughes@di...>  20071015 04:09:27

Jeff Russell wrote: > > A little more info then: > > - Detail in the level varies by location. Some areas are pretty much > open wilderness with a few trees, and some are tight street corners > and alleys with debris littered everywhere. Characters are humans and > human-sized. We don't really have to worry about vehicles or such for > right now. I'd rather lean on the memory side for this because our > character counts can grow quite large and I'd rather give up a little > memory than have something like this start slowing things down a > lot. As far as AI out of it - really it just needs to be enough info > to mark pass/no pass - we already have mechanisms for marking cover > and defining specific move points for scripting etc. > Leigh gave you some very good advice. I recently wrote a middleware pathfinding system for a client that happened to be a navigation mesh system. Given that you're not interested in that for various reasons, the alternatives do have some drawbacks. For instance, a 2D grid thrown over the level can be great, especially if you're OK with lower resolution and using some ray casts for steering assistance... but you need to be acutely aware of the grid cell dimensions, and that no passage is smaller than about 1/2 the size of a cell, or you'll be in pain trying to get it to show up on the grid. Multiresolution maps are better, but require either some degree of artist UV mapping of grid cells in the larger grid, so you can refer to a higher resolution grid for those cells that need it, or an automatic mapping approach which is a bit more tools intensive (but very nice once you have it working: the artist selects passable floor geometry and calls a function that creates UVs and generates an image region for them, etc., etc.). This method tends to handle bridges and multi-story levels very poorly, obviously. Waypoints are great for a lot of things. 
If you give your nodes a bounding volume, they can be every bit as good as a navigation mesh and a lot easier to place. Again, this requires a little tools work, and artists/designers need to be able to place them and visually see that they fit with the level. Supporting just box, sphere, and plane, you can approximate whole areas very quickly. If you can manage swept geometry attached to a spline curve, you can do incredible detail with minimal data (with heavier computational requirements). Great for creating believable paths for individual actors through complex static terrain, i.e. crumbling walls, highways, or twisty mine shafts. As far as auto-generating nav meshes, you may be able to easily adapt a mesh simplification system to output a usable mesh, so long as you have a scene where your levels are fully loaded. Especially if there's some lower-res collision data lying about, it's pretty easy to clip the walls out and reduce the interior fidelity. If your nav system supports a quick thickness property per face and trivially supports queries from slightly off the nav mesh (as my system does), you should get a very usable mesh from relatively rough geometry. Good luck, JH 
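The waypoints-with-bounding-volumes idea above can be sketched as follows. This is a minimal illustration under my own assumptions (spheres and boxes only, no plane volumes, no spatial index); the type and function names are hypothetical:

```c
#include <math.h>

/* A waypoint node with an attached bounding volume: a point query can
   tell you which node an agent is "in", giving the area information a
   bare point waypoint lacks. */
typedef enum { VOL_SPHERE, VOL_BOX } VolType;

typedef struct {
    VolType type;
    float   center[3];
    float   r;          /* sphere radius */
    float   half[3];    /* axis-aligned box half-extents */
} Waypoint;

static int waypoint_contains(const Waypoint *w, const float p[3])
{
    float dx = p[0] - w->center[0];
    float dy = p[1] - w->center[1];
    float dz = p[2] - w->center[2];
    if (w->type == VOL_SPHERE)
        return dx * dx + dy * dy + dz * dz <= w->r * w->r;
    return fabsf(dx) <= w->half[0] &&
           fabsf(dy) <= w->half[1] &&
           fabsf(dz) <= w->half[2];
}

/* Linear scan; a real system would use a spatial index.
   Returns -1 if the point is inside no node's volume. */
static int find_waypoint(const Waypoint *wp, int n, const float p[3])
{
    for (int i = 0; i < n; ++i)
        if (waypoint_contains(&wp[i], p)) return i;
    return -1;
}
```

Edges between nodes whose volumes overlap then give an A*-ready graph, and the volume extents tell steering how much leeway the path has, which addresses Leigh's main objection to bare waypoints.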
From: Jeff Russell <jeffdr@gm...>  20071015 03:29:15

On 10/14/07, Leigh McRae <leigh.mcrae@...> wrote: > > You really need to give more info about the game. How much detail is > there > in the level, is it all open fields or is it a city? Do you want to lean > on > memory or CPU? How much extra AI do you want out of it? > A little more info then: - Detail in the level varies by location. Some areas are pretty much open wilderness with a few trees, and some are tight street corners and alleys with debris littered everywhere. Characters are humans and human-sized. We don't really have to worry about vehicles or such for right now. I'd rather lean on the memory side for this because our character counts can grow quite large and I'd rather give up a little memory than have something like this start slowing things down a lot. As far as AI out of it - really it just needs to be enough info to mark pass/no pass - we already have mechanisms for marking cover and defining specific move points for scripting etc. Thanks for your response, Jeff 
From: Leigh McRae <leigh.mcrae@co...>  20071014 19:54:12

You really need to give more info about the game. How much detail is there in the level, is it all open fields or is it a city? Do you want to lean on memory or CPU? How much extra AI do you want out of it? I think a 2D regular grid can be really good if it fits your game. You really save since you don't need to store vertices for the cells, so you're computing instead of storing. It can also be good for dynamic obstacles since you can paint stuff in. Having nodes that have area always helps. I think with this solution you will be battling resolution and art that doesn't align to the grid all that well. I am not a fan of waypoints, since ultimately the AI doesn't get a good grasp of the lay of the land. There is only so much info a point will give you. Waypoint systems tend to tell you what waypoint to go to and maybe how close you need to get. With grids or navmeshes, you have edges that can be used for steering and that tell you exactly how much leeway you have on the path. This is why I asked for more requirements, because maybe waypoints are good enough. The problem with good enough is that designers tend to come up to your desk much later asking why people don't navigate better... Waypoints are also sometimes associated with low memory, but I have seen waypoint connections explode as more resolution is needed. I wrote an automated triangle navmesh solution that had the artist tag geometry with info for graphics, collision and AI. It fixes problems like a wall being changed from wood to metal but the sound it makes not getting updated. It's also very good for defining AI regions; having a ped follow a sidewalk is pretty easy. You also get automated support, plus the artist can always manually modify the geometry if they have to. The biggest problem you will have with a system like this is working with the bad/hard content that artists dream up. Also, if you are able to share the same vertex buffer with the graphics, I don't think the storage requirements are bad. 
Whatever your solution, I would try to have it automated, or you will end up not being able to make changes to your level because the collision, sound, etc. assets won't be ready in time for the next milestone. Leigh ----- Original Message ----- From: "Jeff Russell" <jeffdr@...> To: "Game Development Algorithms" <gdalgorithms-list@...> Sent: Sunday, October 14, 2007 1:28 PM Subject: [Algorithms] pathfinding and nav representation > I'm looking for a good way to represent our game world for the > purposes of character path finding and navigation (we're replacing > some middleware). > > Our levels are about 1 square mile or so at most, and we need enough > resolution to still allow characters to move very close to obstacles > (for cover, such as fences, boulders, etc). A big problem with the > middleware package that we had been using was that it required the use > of navigation meshes, which were sort of ok when in actual use but > ended up being an enormous pain in the butt to generate, and made > reworking of levels difficult because the mesh would then have to be > edited again by hand. One goal we have here is that generation of our > nav data should be almost completely automatic to make level > development quicker. > > Obviously whatever we use needs to be amenable somehow to A* or a > similar algorithm. Our characters will be viewed in first person at a > variety of distances, so this isn't an RTS or something where we can > take shortcuts with collisions. It'd be nice to 'bake' our static > obstacles into this nav structure so that they are free with respect > to path finding and such. > > So, any recommendations? Right now my favorite idea is just using a > big 2d grid (like an image almost) to mask off regions that are > 'passable', 'impassable', or 'steep' and such. > > Jeff Russell 
From: Jeff Russell <jeffdr@gm...>  20071014 17:28:35

I'm looking for a good way to represent our game world for the purposes of character path finding and navigation (we're replacing some middleware). Our levels are about 1 square mile or so at most, and we need enough resolution to still allow characters to move very close to obstacles (for cover, such as fences, boulders, etc). A big problem with the middleware package that we had been using was that it required the use of navigation meshes, which were sort of ok when in actual use but ended up being an enormous pain in the butt to generate, and made reworking of levels difficult because the mesh would then have to be edited again by hand. One goal we have here is that generation of our nav data should be almost completely automatic to make level development quicker. Obviously whatever we use needs to be amenable somehow to A* or a similar algorithm. Our characters will be viewed in first person at a variety of distances, so this isn't an RTS or something where we can take shortcuts with collisions. It'd be nice to 'bake' our static obstacles into this nav structure so that they are free with respect to path finding and such. So, any recommendations? Right now my favorite idea is just using a big 2d grid (like an image almost) to mask off regions that are 'passable', 'impassable', or 'steep' and such. Jeff Russell 
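The big-2D-grid idea above could look something like this. A minimal sketch under my own assumptions (one byte per cell, uniform cell size, a known world origin); in a shipping system you would pack to 2 bits per cell and tile or compress the grid for a square mile of world:

```c
#include <stdint.h>
#include <string.h>

enum { GRID_W = 512, GRID_H = 512 };

/* One byte per cell keeps the sketch simple:
   0 = passable, 1 = impassable, 2 = steep. */
typedef struct {
    uint8_t cell[GRID_H][GRID_W];
    float   origin_x, origin_z, cell_size;
} NavGrid;

static void nav_grid_init(NavGrid *g, float ox, float oz, float cs)
{
    memset(g->cell, 0, sizeof g->cell);
    g->origin_x = ox; g->origin_z = oz; g->cell_size = cs;
}

/* World -> cell mapping. Note: the (int) cast truncates toward zero,
   so positions just below the origin would need floorf in production. */
static int nav_grid_index(const NavGrid *g, float wx, float wz,
                          int *cx, int *cz)
{
    int x = (int)((wx - g->origin_x) / g->cell_size);
    int z = (int)((wz - g->origin_z) / g->cell_size);
    if (x < 0 || x >= GRID_W || z < 0 || z >= GRID_H) return 0;
    *cx = x; *cz = z;
    return 1;
}

static int nav_grid_passable(const NavGrid *g, float wx, float wz)
{
    int x, z;
    if (!nav_grid_index(g, wx, wz, &x, &z)) return 0; /* off-grid = blocked */
    return g->cell[z][x] == 0;
}
```

The baking pass Jeff wants then amounts to rasterizing static obstacle footprints (and slope thresholds) into `cell`, after which A* runs over 4- or 8-connected passable neighbors; dynamic obstacles can be painted in and out, as Leigh suggests.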
From: Jamie Fowlston <jamief@qu...>  20071013 15:06:54

Ignacio Castaño wrote: > On 10/12/07, Jamie Fowlston <jamief@...> wrote: >> I understand the "occlusion" part of POM to refer to the second raytrace >> from the surface point toward the light, which is not compulsory for >> relief mapping. > > Actually, that's not required for POM either. I think "occlusion" > stands for "self-occlusion" which is an effect that regular parallax > mapping does not have. Rereading Tatarchuk's SIGGRAPH 2005 presentation (http://ati.amd.com/developer/SIGGRAPH05/TatarchukParallaxOcclusionMappingSketchprint.pdf), it still looks to me like "occlusion" is only used when referring to the self-occlusion in the self-shadowing ray cast, although there's obviously the possibility of self-occlusion in both ray casts. Jamie 
From: <castano@gm...>  20071013 00:18:41

On 10/12/07, Jamie Fowlston <jamief@...> wrote: > I understand the "occlusion" part of POM to refer to the second raytrace > from the surface point toward the light, which is not compulsory for > relief mapping. Actually, that's not required for POM either. I think "occlusion" stands for "self-occlusion" which is an effect that regular parallax mapping does not have. Ignacio Castaño castano@... 
From: Jamie Fowlston <jamief@qu...>  20071012 10:18:39

Ignacio Castaño wrote: > I personally prefer the term "Relief Mapping", not only to > differentiate it from Parallax Mapping, but also because that's closer > to the name used in the first publication about the topic. Relief > Texture Mapping, in SIGGRAPH 2000. > > I usually use the term POM to refer to ATI's implementation of Relief > Mapping only. They have done a good job evangelizing their approach, > so today many people refer to all Relief Mapping techniques as POM. I understand the "occlusion" part of POM to refer to the second raytrace from the surface point toward the light, which is not compulsory for relief mapping. Jamie 
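For illustration, the "occlusion" ray trace being discussed (marching from the surface point toward the light across the heightfield to test for self-shadowing) can be sketched on the CPU like this. This is a generic fixed-step sketch under my own assumptions, not ATI's actual shader; the grid sampling, names, and conventions (heights in [0,1], Z up out of the surface, point sampling) are all mine:

```c
/* Heightfield point-sampled from a small W x W grid, clamped at edges. */
enum { HF_W = 8 };

static float hf_sample(float hf[HF_W][HF_W], float u, float v)
{
    int x = (int)(u * HF_W), y = (int)(v * HF_W);
    if (x < 0) x = 0; if (x >= HF_W) x = HF_W - 1;
    if (y < 0) y = 0; if (y >= HF_W) y = HF_W - 1;
    return hf[y][x];
}

/* March from surface point (u,v) at height h0 toward the light until
   the ray leaves the height volume (h = 1). If any sample's height
   rises above the ray, the point is self-shadowed. Only the ratio of
   the light direction's components is used, so it need not be
   normalized. Fixed-step search only; a real implementation refines
   the hit with a binary search or weights a soft-shadow term. */
static int self_shadowed(float hf[HF_W][HF_W],
                         float u, float v, float h0,
                         const float light_dir[3], int steps)
{
    if (light_dir[2] <= 0.0f) return 1; /* light at/below horizon */
    float du = light_dir[0] / light_dir[2];
    float dv = light_dir[1] / light_dir[2];
    for (int i = 1; i <= steps; ++i) {
        float t = (1.0f - h0) * (float)i / (float)steps; /* rise so far */
        if (hf_sample(hf, u + du * t, v + dv * t) > h0 + t) return 1;
    }
    return 0;
}
```

The view-ray march that finds the displaced surface point in the first place is the same loop run downward into the heightfield instead of up and out of it, which is why both ray casts can exhibit self-occlusion, as Jamie notes.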
From: <castano@gm...>  20071012 00:12:54

I personally prefer the term "Relief Mapping", not only to differentiate it from Parallax Mapping, but also because that's closer to the name used in the first publication on the topic: Relief Texture Mapping, in SIGGRAPH 2000. I usually use the term POM to refer to ATI's implementation of Relief Mapping only. They have done a good job evangelizing their approach, so today many people refer to all Relief Mapping techniques as POM. Ignacio Castaño castano@... On 10/11/07, Eric Haines <erich666666@...> wrote: > Not to start a flamewar, but which term is the one to use: cascading > shadow maps, or parallel-split shadow maps? My money's on the first term. > > Finally: parallax occlusion mapping (vs. the quick-hack parallax mapping > with offset limiting) or relief mapping? I'm swaying towards relief > mapping, if only to differentiate it from the "quick & dirty" parallax > mapping of Kaneko and Welsh. > > [I've already been swayed by both > http://home.comcast.net/~tom_forsyth/blog.wiki.html#Bitangents and Spike > Hughes that bitangent is the way to go vs. binormal.] > > I'm asking because we're putting out a third edition of "Real-Time > Rendering" (more like a remake than a new edition, in some ways) and we > want to settle on terms. > > > While I'm here, we're going all-color through this next edition of the > book. We are aiming for a SIGGRAPH 2008 release (and maybe a rough > preprint at GDC), which means "all work done by the end of January". > We're updating a bunch of images to newer, or at least color, versions > (and redoing a bunch - embarrassingly, we didn't keep all the original > color images...). If you have any instructive images (projective > textures, ROAM algorithm, self-shadowing surface acne; you name it, we > may need it) or just some stunningly cool images (we'd like to show the > state of the art in various areas) that we can reprint, please contact > me directly. 
Better yet, send on any images (TIFF or PNG, please) and > we'll get a release (non-exclusive use, etc.) out to you to sign. If we > use your image, you get credit in the caption (of course), a chance for > 12,000 or so programmers to be educated or amazed by your work (judging > from past sales), and a 30% discount on the book. [I wish we could send > everyone who contributed a free copy, but at about 180 image sets > contributed so far for the past editions and this book, we'd have to pay > money to publish. We *do* give copies to chapter reviewers, and could > use a few more volunteers...] > > Eric > > p.s. One other bonus opinion question: do you still use the term > "Gouraud shading"? As an old guy, I do, but one co-author considers it a > term we should mention maybe once, purely for historical purposes, but > in the era of programmable shaders it's rarely used "as-is" in its > original meaning. > 
From: Eric Haines <erich666666@gm...>  20071011 22:38:26

The I3D 2008 site is open to submissions: papers, posters, and demos. See http://graphics.cs.williams.edu/i3d08/. Deadline's October 22. A poster or demo is a nice way to get your idea out there and get feedback, without too much work. As a reminder (and blatant promotion - I like this event, it's fun), I3D 2008 itself is a three-day annual symposium (== small conference) of about 100 or so researchers and industry programmers. It presents a mix of research papers, technical talks, posters, demos, and opportunities to meet other attendees. It will be at EA's Redwood City HQ, 30 minutes south of San Francisco, the weekend before GDC, to make it easy for games programmers and others to attend. Gabe Newell and Pat Hanrahan are the keynote/capstone speakers for 2008. We're just settling on a registration fee - it should be affordably low, due to some generous corporate sponsorships. Eric 