From: Sim Dietrich <SDietrich@nv...> - 2002-02-28 17:34:41

You want the Z axis of your local space to always point 'out', like the vertex normal. If you want to handle flipped textures (i.e. a texture mapped left to right in x from 1->0 in uv rather than 0->1 in uv), the easiest way is to use the Normal for your Z axis, as opposed to using SxT, or -SxT if it didn't match the normal's direction (the approach I used to recommend).

As far as being faster: you already have to skin the normal, so using that for your texture-space basis is cheaper than skinning both the normal and the SxT vector.
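A minimal sketch of the trade-off Sim describes (plain Python with tuples; the function name and shape are my illustration, not any poster's code): skin only S and the normal N, then rebuild T from them, flipping it by a handedness sign when the UVs are mirrored so flipped textures still come out right.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def tangent_basis(s, t, n):
    """Use the (already skinned) normal N as the Z axis of tangent space.
    handedness is -1 when the UVs are mirrored (SxT opposes N), +1 otherwise.
    Only S and N need skinning; T is rebuilt instead of transformed."""
    handedness = -1.0 if dot(cross(s, t), n) < 0.0 else 1.0
    t_rebuilt = tuple(handedness * c for c in cross(n, s))
    return s, t_rebuilt, n, handedness
```

With an unmirrored mapping the rebuilt T matches the authored T; with a mirrored one the sign flip recovers it, which is the "helps with flipped textures" point.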
From: Adam Moravanszky <amoravanszky@dp...> - 2002-02-28 16:15:24

Hmm. But if we assume c), then doesn't that imply that Normal == SxT * SIGNBIT, making your point a) meaningless? I also don't understand what you mean by the Normal being 'cheaper' than SxT... what are you talking about here? My app is fill limited at the moment, so neither this nor point b) matters for me. :/

Adam

From: "Sim Dietrich" <SDietrich@...>
> In my experience with this, having helped implement it in several games, I
> can tell you that:
>
> a) You should always use the Normal instead of SxT (this is not only
> cheaper, but helps with flipped textures)
>
> b) You shouldn't bother renormalizing these vectors after transforming
> them, assuming you don't have significant scales
>
> c) The assumption that the S & T & Normal vectors are orthonormal is a
> good assumption if your texels are mapped to be square. Because this is
> usually true, or supposed to be true, it tends to work out if you make
> the assumption that the tangent basis is orthonormal.
From: Sim Dietrich <SDietrich@nv...> - 2002-02-28 15:54:02

In my experience with this, having helped implement it in several games, I can tell you that:

a) You should always use the Normal instead of SxT (this is not only cheaper, but helps with flipped textures).

b) You shouldn't bother renormalizing these vectors after transforming them, assuming you don't have significant scales.

c) The assumption that the S & T & Normal vectors are orthonormal is a good assumption if your texels are mapped to be square. Because this is usually true, or supposed to be true, it tends to work out if you make the assumption that the tangent basis is orthonormal.

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithmslist@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithmslist
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
From: John Harries <jharries@io...> - 2002-02-28 14:58:48

:) That should have read:

Since S, T, SxT IDEALLY form an orthonormal basis, you can transform S and your normal and then perform a cross-product to calculate T. Easy in a vertex shader.

John
From: Adam Moravanszky <amoravanszky@dp...> - 2002-02-28 14:54:51

Yes, this does help, as did Tom's reply.

I don't agree with S, T and SxT forming an orthonormal basis, though. This is only the case when we have 'perfect' texture mapping coordinates with no distortion - not very likely. I believe S and T have to follow the texture mapping directions, even if the UV directions are not orthogonal. On the other hand, S cross T is usually (close to) the normal, so the optimization in my vertex shaders is that I leave the normal away and compute it on the fly as SIGNBIT * (S x T).

Adam
From: Willem H. de Boer <Willem@mu...> - 2002-02-28 14:47:24

I think you could do that. Render the scene a second time into an offscreen buffer in black (vertex colours set to 0 perhaps, although that would mean touching your vertex buffers; maybe use a one-by-one black texture instead), with white depth-based fog. But after you've done that, how would you go about blitting that scene into the frame buffer's destination alpha channel? I haven't done any PC stuff lately, so I don't know whether or not it's possible to mask/shift a colour channel into the destination alpha channel before blitting it. Prolly not. The PS2 version uses some very clever hardware hacks to get the job done.

Maybe you could do some clever pixel shader tricks. So instead of rendering the scene into an offscreen buffer, render it on top of the existing one and have a pixel shader shove one of the colour channels of the black-and-white render into the destination alpha channel of the framebuffer... This is just speculation on my behalf; I have little to no experience when it comes to pixel shader coding.

Cheers,
Willem
MuckyFoot Goat Boy
From: Tom Forsyth <tomf@mu...> - 2002-02-28 14:46:23

Yes, that works well. The way I did my depth-of-field effect was to render the scene a second time into a small (256x128) A8R8G8B8 texture, with the alpha channel based on the Z value, using either a vertex shader or a 1D planar-mapped texture if using the FFP. Alpha = 0 at the focus point, fading to 1 closer and further away. Then blend that texture over the main view, using the texture's alpha with a standard SRCALPHA:INVSRCALPHA. Result - the in-focus depth is razor-sharp, and everything else is foggy. You have to be careful with alpha-blended effects, but otherwise it works well. And yes, you can use lower-LoD shapes for the render to the texture.

Tom Forsyth - purely hypothetical Muckyfoot bloke.

This email is the product of your deranged imagination,
and does not in any way imply existence of the author.
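Tom's alpha ramp (0 at the focus depth, fading to 1 nearer and further away) can be sketched as a tiny per-vertex function. The parameter names and the linear fall-off are my assumptions; his post only specifies the endpoints of the ramp.

```python
def dof_alpha(z, focus_z, sharp_range, blur_range):
    """Blend factor for the blurred low-res texture: 0.0 keeps the
    main view razor-sharp, 1.0 shows the foggy texture fully.
    sharp_range: half-width of the fully in-focus band (assumed).
    blur_range: distance at which blur saturates (assumed)."""
    d = abs(z - focus_z)
    if d <= sharp_range:
        return 0.0
    return min(1.0, (d - sharp_range) / (blur_range - sharp_range))
```

With a standard SRCALPHA:INVSRCALPHA blend, this alpha weights the small blurred texture against the full-resolution frame, so only the in-focus band stays sharp.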
From: Garett Bass <gtbass@st...> - 2002-02-28 14:31:29

I'm curious: on the PC, could you create the same or similar destination alpha effect by rendering the scene into destination alpha in black (or white) with white (or black) depth-based fog? I'm thinking the color of each pixel in the range of black to white would be equivalent to z-based alpha. I'm not familiar with the workings of destination alpha, so I don't really have any idea whether this would work, or whether it could be done in some useful parallel to normal rendering to make it even feasible.

Perhaps you could reduce the LOD ranges dramatically when using depth of field, or even totally alter the LOD regions to minimize the tessellation in unfocussed areas, to make up for the extra draw time the effect requires?

Garett Bass

-----Original Message-----
On PS2 we convert the z-buffer into one big 32-bit RGBA texture, and draw the 8-bit green colour component into the alpha channel of our frame buffer [bilinear filtering off]. The green channel proved to be the most useful one for depth-of-field stuff. We then scale down our framebuffer to a quarter of its original area [half-size offscreen buffer blit with bilinear filtering on] and draw it on top of the original framebuffer with destination alpha testing on.

Willem
MuckyFoot Goat Boy
From: Kaspar Daugaard <kdaugaard@bi...> - 2002-02-28 14:09:15

Animate the tangent vectors. You'll probably find it's a good idea to blend the bone matrices instead of the transformed vectors, as an optimisation. So just use the 3x3 part of the blended matrix to transform the tangents, the same way as the normal.

Best regards,
Kaspar Daugaard
http://www.bigbluebox.com
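A rough sketch of the optimisation Kaspar mentions (pure Python; the row-major 4x4 list-of-lists layout and function names are my own, not Cal3D's API): blend the bone matrices once per vertex, then push tangents and normals through the 3x3 part only.

```python
def blend_matrices(mats, weights):
    """Blend whole 4x4 bone matrices by vertex weights, giving one
    matrix per vertex, instead of transforming by every bone and
    blending the resulting vectors afterwards."""
    out = [[0.0] * 4 for _ in range(4)]
    for m, w in zip(mats, weights):
        for r in range(4):
            for c in range(4):
                out[r][c] += w * m[r][c]
    return out

def transform_dir(m, v):
    """Rotate a direction (normal or tangent) by the 3x3 part only,
    ignoring the translation."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))
```

The win is that the (usually expensive) per-bone transform happens once per matrix blend rather than once per vector per bone.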
From: Tom Forsyth <tomf@mu...> - 2002-02-28 14:02:10

Yes, you need to animate them as well as the normals, and in the same way.

Note that a shortcut that works surprisingly well is to animate normals and tangent vectors by only the most important (usually first-listed) bone, rather than doing the full four-way blending. The difference in lighting is usually very small and hard to see, and it's a lot quicker.

Alternatively, instead of animating the normals and tangent vectors, "de-animate" the light vector. It's often quicker in the simple case because there's only one vector to move, not three; but if you're also doing anisotropic lighting and all that sort of stuff, you have more "light" vectors to transform and it can be slower. Also, depending on what animations you are doing, you may not be able to find the inverse of the animation matrices very easily.

Tom Forsyth - purely hypothetical Muckyfoot bloke.

This email is the product of your deranged imagination,
and does not in any way imply existence of the author.
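The "de-animate the light" alternative can be sketched like this (my own illustration, not Tom's code): instead of rotating N, T and SxT out into world space, rotate the world-space light direction back into the vertex's rest pose with the inverse bone rotation. The transpose-as-inverse shortcut used here is exactly why scaling animations make the inverse hard to find, as Tom notes.

```python
def transpose3(m):
    # For a pure rotation the inverse is just the transpose; once the
    # bone matrix contains scale or shear, this shortcut breaks down.
    return [[m[c][r] for c in range(3)] for r in range(3)]

def deanimate_light(bone_rot3x3, light_dir):
    """Move ONE light vector into rest-pose space instead of moving
    THREE basis vectors out of it."""
    inv = transpose3(bone_rot3x3)
    return tuple(sum(inv[r][c] * light_dir[c] for c in range(3))
                 for r in range(3))
```

With several "light" vectors (half-angle, anisotropy direction, ...) the per-vector cost multiplies, which is Tom's caveat about it becoming slower.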
From: John Harries <jharries@io...> - 2002-02-28 14:01:41

Yes, you need to retransform.

Your tangent space is defined by the vectors S, T and SxT (where SxT is cross_product(S, T)).

Note that SxT is also the vertex normal.

S and T should be derived from the texture coordinate derivatives only once.

Since S, T, SxT forms an orthonormal basis, you can transform S and your normal and then perform a cross-product to calculate T. Easy in a vertex shader.

Does this help?

John
From: Adam Moravanszky <amoravanszky@dp...> - 2002-02-28 13:34:28

Hello. I am completely new to mesh deformation with bones, so this may sound naive:

If I have a mesh which needs tangent vectors per vertex (for bump mapping, for example), should these be animated along with the vertex positions and the normals, or can that be avoided? Regenerating them from scratch every frame from the unchanging texture coordinates is too expensive??!

P.S.: I am doing skinning in software with Cal3D so this need not be hardware friendly. What I would like to avoid is rewriting my shaders to light in any other space than standard tangent space.

P.P.S.: I saw in the archives that this is supposedly covered in the book "Mathematics for 3D Game Programming & Computer Graphics", but alas, I don't have it.

Thanks,
Adam
From: David Zaadstra <mythomania@gm...> - 2002-02-28 13:05:09

That's just about what my thoughts were about the algorithm. I just don't have the knowledge to tell you that it's sigma-shaped ;)

I'll try two things:

1) Your solution. If that doesn't look okay, I'll try it like
2) http://www.people.nnov.ru/fractal/VRML/Terra/terra.htm, with the difference that my triangles are 90°, not equal-sided. I would divide the terrain into 4 parts then and recurse on each part. Tried it on paper; gave me a linear interpolation.

Thanks for your help. I'll post another screenshot when it looks ok and tell you how I did it.

David

> -----Original Message-----
> From: David Zaadstra [mailto:mythomania@...]
> Sent: 27 February 2002 21:35
> Subject: Re: [Algorithms] midpoint displacement problems
>
> hmm...I think I just found out that the algorithm as it is described in
> GPG1 doesn't really work. I can't explain why, but I calculated a small
> 8*8 heightmap on paper, and I get wrong results. The points get
> interpolated like an exponential function from the middle of the square
> to its edges. When I read how it works with a line I thought that the
> algorithm is ok. And it is, but only in 2D, with the line.
> I think the problem with the 3D version is that the 4 points which get
> interpolated don't lie in a plane. Or is it because the distance of the
> points is sqrt(2) in the diamond step and 1 in the square step?
> Can anybody else comment on this? Somebody who knows the algorithm, and
> is better at maths and explaining than me, maybe?
From: <kemen@po...> - 2002-02-28 12:57:55

I've done it similarly as Tom describes, and no strange peaks occur: http://manabove.org/terrain/images/tes26.jpg

> A--B--C
> |  |  |
> D--E--F
> |  |  |
> G--H--I
>
> For which we start with values A, C, G and I. We calculate the midpoint E
> the way I would expect:
>
> E = (A+C+G+I)/4
From: David Zaadstra <mythomania@gm...> - 2002-02-28 12:55:48

Sorry, but your landscape still looks wrong. The peaks are disturbing. Your texturing is good though ;)

----- Original Message -----
From: "Gottfried Chen" <gottfried.chen@...>
Sent: Thursday, February 28, 2002 12:14 PM
Subject: AW: [Algorithms] midpoint displacement problems

> Oops, sorry, mixed this up with fault formation. Actually you have to
> tweak the roughness and FIR filter factors. (In the image I've used a
> roughness of 1 and an FIR filter constant of 0.1f.)
>
> > I've implemented the algorithm a while ago and had the exact same
> > "error" you have. Actually it turned out not to be an error. Just play
> > around a bit with the "number of iterations between smoothing filter"
> > parameter. Then it should look like this:
> > http://www.unet.univie.ac.at/~a9104678/ChengineOrg/Images/Screenshots/TerrainTexture.jpg
> >
> > gottfried
> >
> > > Hello everybody,
> > >
> > > I have a problem with the midpoint displacement algorithm (from GPG1).
> > > I get strange little peaks all around my landscape. I've been looking
> > > for the bug for hours now and am starting to believe that I'm a
> > > complete idiot. Could it be that the peaks are normal? That would
> > > explain why an erosion filter was added to the example file in GPG...
> > > Please take a look at this screenshot to see what I mean:
> > > http://www.gameprogramming.de/screen.jpg (not a very good one, I know)
> > >
> > > Thanks for your help,
> > > David
From: Tom Forsyth <tomf@mu...> - 2002-02-28 11:45:21

I dug out Gems 1 and had a look at the algorithm. I would expect that with no
random displacements and a starting grid, it would produce a series of
bilinear patches. But the algorithm is not the one I was expecting. Given a grid:

  A B C
  D E F
  G H I

for which we start with values A, C, G and I, we calculate the midpoint E the
way I would expect:

  E = (A+C+G+I)/4

To do a bilinear filter, you would then find the edge midpoints this way:

  D = (A+G)/2
  B = (A+C)/2
  F = (C+I)/2
  H = (G+I)/2

...but this is not what the book does. It requires use of an adjacent square:

  A B C +
  D E F X
  G H I +

...and F = (C+I+E+X)/4. This seems wrong to me. I can't really say why, except
that it feels wrong. One odd feature is that with more tessellation (with no
random displacements), the line CI is not a straight line; it is some odd
sigma-shaped thing. Which means the 2D midpoint displacement result (a straight
line) does not appear anywhere in the 3D version. Which seems wrong.

I am by no means an expert on these things (and I would use a weighted bicubic
interpolation scheme rather than a linear one anyway), but is the algorithm
given in the book really the standard algorithm? Whether it is or not, it's
obviously not giving you the results you expect, which is what really matters.
Try calculating using

  D = (A+G)/2
  B = (A+C)/2
  F = (C+I)/2
  H = (G+I)/2

instead; it may be more to your liking. And of course the real trick is to
fiddle with the algorithm until it gives you results you like; there's no
"right" or "wrong" way to do things.

Tom Forsyth - purely hypothetical Muckyfoot bloke.

This email is the product of your deranged imagination,
and does not in any way imply existence of the author.

> -----Original Message-----
> From: David Zaadstra [mailto:mythomania@...]
> Sent: 27 February 2002 21:35
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] midpoint displacement problems
>
> Hmm... I think I just found out that the algorithm as it is described in
> GPG1 doesn't really work.
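To make the contrast concrete, here is a minimal sketch of the two edge-midpoint rules Tom compares, written out for the 3x3 grid above (corners A, C, G, I; centre E). The struct and function names are illustrative assumptions, and this is one subdivision step only, not the full recursive algorithm from the book:

```cpp
// Corners of one square in the heightmap being subdivided.
struct Square { float A, C, G, I; };

// Centre point E = (A+C+G+I)/4 -- both variants agree on this step.
float centre(const Square& s)
{
    return (s.A + s.C + s.G + s.I) / 4.0f;
}

// Bilinear rule Tom suggests: an edge midpoint is the average of the two
// corners of that edge, e.g. D = (A+G)/2 or F = (C+I)/2.
float edgeBilinear(float c0, float c1)
{
    return (c0 + c1) / 2.0f;
}

// Diamond-square rule as printed in the book: the edge midpoint also
// averages in the two adjacent square centres (E of this square and X of
// the neighbouring one), so F = (C+I+E+X)/4.
float edgeDiamond(float c0, float c1, float e, float x)
{
    return (c0 + c1 + e + x) / 4.0f;
}
```

With zero random displacement the two rules already disagree on a single step, which is exactly the divergence from a bilinear patch that Tom observes.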
> I can't explain why, but I calculated a small 8*8 heightmap on paper, and I
> get wrong results. The points get interpolated like an exponential function
> from the middle of the square to its edges. When I read how it works with a
> line, I thought that the algorithm is OK. And it is, but only in 2D, with
> the line.
> I think the problem with the 3D version is that the 4 points which get
> interpolated don't lie in a plane. Or is it because the distance of the
> points is sqrt(2) in the diamond step and 1 in the square step?
> Can anybody else comment on this? Somebody who knows the algorithm, and is
> better at maths and explaining than me, maybe?
>
> ----- Original Message -----
> From: "Tom Forsyth" <tomf@...>
> To: "David Zaadstra" <mythomania@...>; <gdalgorithms-list@...>
> Sent: Wednesday, February 27, 2002 9:24 PM
> Subject: RE: [Algorithms] midpoint displacement problems
>
> > That looks like your midpoint calculation isn't working right; the peaks
> > are the "coarse" divisions, but then the rest of the subdivisions don't
> > interpolate through them properly. Try turning off any random variation
> > after, say, three divisions; you should then get a nice smooth surface.
> > But I don't think you will, because the calculation of the new midpoint
> > isn't working right.
> >
> > Tom Forsyth - purely hypothetical Muckyfoot bloke.
> >
> > This email is the product of your deranged imagination,
> > and does not in any way imply existence of the author.
> >
> > > -----Original Message-----
> > > From: David Zaadstra [mailto:mythomania@...]
> > > Sent: 27 February 2002 20:12
> > > To: gdalgorithms-list@...
> > > Subject: [Algorithms] midpoint displacement problems
> > >
> > > Hello everybody,
> > >
> > > I have a problem with the midpoint displacement algorithm (from GPG1).
> > > I get strange little peaks all around my landscape. I've been looking for
> > > the bug for hours now and am starting to believe that I'm a complete idiot.
> > > Could it be that the peaks are normal? That would explain why an erosion
> > > filter was added to the example file in GPG...
> > > Please take a look at this screenshot to see what I mean:
> > > http://www.gameprogramming.de/screen.jpg (not a very good one, I know)
> > >
> > > Thanks for your help,
> > > David

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
From: Gottfried Chen <gottfried.chen@ne...> - 2002-02-28 11:15:56

Oops, sorry, I mixed this up with fault formation. Actually you have to tweak
the roughness and FIR filter factors. (In the image I've used a roughness of 1
and a FIR filter constant of 0.1f.)

> -----Original Message-----
> From: gdalgorithms-list-admin@...
> [mailto:gdalgorithms-list-admin@...] On behalf of Gottfried Chen
> Sent: Thursday, February 28, 2002 11:02
> To: David Zaadstra; gdalgorithms-list@...
> Subject: AW: [Algorithms] midpoint displacement problems
>
> I implemented the algorithm a while ago and had the exact same "error"
> you have. Actually it turned out not to be an error. Just play around a bit
> with the "number of iterations between smoothing filter" parameter.
> Then it should look like this:
> http://www.unet.univie.ac.at/~a9104678/ChengineOrg/Images/Screenshots/TerrainTexture.jpg
>
> gottfried
>
> > -----Original Message-----
> > From: gdalgorithms-list-admin@...
> > [mailto:gdalgorithms-list-admin@...] On behalf of David Zaadstra
> > Sent: Wednesday, February 27, 2002 21:12
> > To: gdalgorithms-list@...
> > Subject: [Algorithms] midpoint displacement problems
> >
> > Hello everybody,
> >
> > I have a problem with the midpoint displacement algorithm (from GPG1).
> > I get strange little peaks all around my landscape. I've been looking for
> > the bug for hours now and am starting to believe that I'm a complete idiot.
> > Could it be that the peaks are normal? That would explain why an erosion
> > filter was added to the example file in GPG...
> > Please take a look at this screenshot to see what I mean:
> > http://www.gameprogramming.de/screen.jpg (not a very good one, I know)
> >
> > Thanks for your help,
> > David

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
From: Gottfried Chen <gottfried.chen@ne...> - 2002-02-28 10:02:48

I implemented the algorithm a while ago and had the exact same "error" you
have. Actually it turned out not to be an error. Just play around a bit with
the "number of iterations between smoothing filter" parameter. Then it should
look like this:
http://www.unet.univie.ac.at/~a9104678/ChengineOrg/Images/Screenshots/TerrainTexture.jpg

gottfried

> -----Original Message-----
> From: gdalgorithms-list-admin@...
> [mailto:gdalgorithms-list-admin@...] On behalf of David Zaadstra
> Sent: Wednesday, February 27, 2002 21:12
> To: gdalgorithms-list@...
> Subject: [Algorithms] midpoint displacement problems
>
> Hello everybody,
>
> I have a problem with the midpoint displacement algorithm (from GPG1).
> I get strange little peaks all around my landscape. I've been looking for
> the bug for hours now and am starting to believe that I'm a complete idiot.
> Could it be that the peaks are normal? That would explain why an erosion
> filter was added to the example file in GPG...
> Please take a look at this screenshot to see what I mean:
> http://www.gameprogramming.de/screen.jpg (not a very good one, I know)
>
> Thanks for your help,
> David

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
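For reference, the kind of smoothing pass being tuned here is a simple one-pole low-pass filter run across the heightmap every few iterations. This is a minimal sketch under my own naming; the exact traversal and constant in the book's sample code may differ:

```cpp
#include <cstddef>
#include <vector>

// One-pole low-pass ("FIR erosion") smoothing pass over heightmap rows.
// Each sample is pulled toward its left neighbour by the filter constant k:
// k = 0 leaves the terrain untouched, values toward 1 smooth heavily
// (the 0.1f mentioned above is a gentle setting).
void smoothRows(std::vector<float>& h, std::size_t width, std::size_t rows, float k)
{
    for (std::size_t y = 0; y < rows; ++y) {
        float* row = &h[y * width];
        for (std::size_t x = 1; x < width; ++x)
            row[x] = k * row[x - 1] + (1.0f - k) * row[x];
    }
}
```

A full implementation would run this in all four directions (left-to-right, right-to-left, top-to-bottom, bottom-to-top) so the smoothing has no directional bias, which is what knocks down the isolated peaks David is seeing.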
From: Mark DeGeorge <mdegeorg@uc...> - 2002-02-28 08:08:52

A few questions about the technique mentioned...

One, I'm just wondering how the wind-driven noise is supposed to be done. I
guess my thought was to interpolate texture coordinates instead of noise
values. Is this the correct way to get clouds to actually move instead of
just morph from one orientation to another?

Secondly, the article talks a lot about using render-to-texture. I
implemented it in, well, 10 passes (I could bring this down a lot with
multitexturing, but I'm just trying to get it to look right for now). So I
have two passes for each octave, a pass for the subtraction, and a pass to
make the texture the color of the sky. Is this a bad idea?
From: Mark Lee <mark_lee@ch...> - 2002-02-28 06:02:29

> So is using the Z-buffer as a texture to create depth of field the same
> technique used for the smoke blurs that are regularly seen in driving games?

If you are talking about DOF on PC hardware, then it might not be so simple
to retrieve the Z-buffer. Using the hi-res/lo-res DOF technique, you could
always resort to rendering your depth interpolator directly into the alpha
channel of your hi-res image using a 1D projective texture map. This can be
done without using any custom shaders but will use up an extra texture stage.
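The final blend of the hi-res/lo-res technique can be sketched on the CPU: a per-pixel blur factor derived from depth (the "depth interpolator" that would live in the alpha channel) lerps between the sharp and blurred images. Function names and the linear falloff here are my own illustrative assumptions, not the article's exact formulation:

```cpp
#include <cmath>

// Blur factor from depth: 0 at the focal plane, ramping to 1 once the
// pixel is more than `range` units out of focus.
float dofBlurFactor(float z, float focalZ, float range)
{
    float t = std::fabs(z - focalZ) / range;
    return t > 1.0f ? 1.0f : t;
}

// Per-channel composite: lerp(sharp, blurred, blur), i.e. what the alpha
// channel drives during the final blend pass.
float dofComposite(float sharp, float blurred, float blur)
{
    return sharp + (blurred - sharp) * blur;
}
```

On hardware, `dofBlurFactor` is what the 1D projective texture encodes, and `dofComposite` is the alpha-blended combine of the two render targets.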
From: Jon Watte <hplus@mi...> - 2002-02-28 02:07:16

Because the facing vector is arbitrary, you can choose world space (say,
negative Z in a right-handed system). Using that and simple trigonometry (two
atan2() calls, medium-expensive), it's trivial to extract the X and Z rotation
angles for Euler angles that will orient a vector pointing down Z to one that
points in the direction of the normal.

You can also cross the arbitrary facing vector with the normal, which gives
you an axis of rotation; then dot them to get the cosine of the rotation
angle. Going from axis/cos-angle to quaternion or Euler I hope you already
have code for, 'cause I'm too lazy to look it up :)

Cheers,

/ h+

> -----Original Message-----
> From: gdalgorithms-list-admin@...
> [mailto:gdalgorithms-list-admin@...] On Behalf Of Ratcliff, John
> Sent: Wednesday, February 27, 2002 3:36 PM
> To: gdalgorithms-list@...
> Subject: [Algorithms] Oriention from Normal
>
> I thought I had this as a solved problem in my code. In fact I was just
> going to post a question to the list asking if anyone knew of an
> optimization technique. However, now I'm not certain I even have the
> problem solved correctly in the first place.
>
> What I need to be able to do is take the result of an impact event and then
> create a generic orientation, either quaternion or Euler, I don't care
> which, that can be passed on to other routines, like for generating decals
> or oriented effects.
>
> What my current code does is as follows:
>
> (1) Take the triangle that was hit and compute its face normal.
> (2) Build a 3x3 rotation matrix from that vector normal, using some
>     arbitrary 'facing' vector.
> (3) Extract the Euler rotation from this matrix using the published
>     Graphics Gems sample code routines.
>
> Now, this isn't necessarily that efficient, and I would certainly be
> interested in more direct routes to convert a surface normal into either an
> Euler or quaternion rotation that would orient an arbitrary object so that
> it were aligned with this surface.
> That said, I'm not sure my code, as is, is working correctly.
>
> Suggestions on more efficient techniques welcome. I assume this is a very
> common problem that many people have needed to solve.
>
> John

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
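The cross/dot route Jon describes can be sketched directly: build the quaternion that rotates a fixed facing vector +Z onto a unit surface normal. All identifier names are mine, and the sketch assumes the normal is normalised and not pointing exactly down -Z (that degenerate case needs a special-cased 180-degree rotation about any perpendicular axis):

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// Quaternion rotating +Z onto the unit normal (nx, ny, nz).
// axis = cross(+Z, n) = (-ny, nx, 0); cos(theta) = dot(+Z, n) = nz.
Quat quatFromNormal(float nx, float ny, float nz)
{
    // Clamp the cosine so tiny normalisation error can't produce a NaN below.
    float c = nz < -1.0f ? -1.0f : (nz > 1.0f ? 1.0f : nz);

    float axisLen = std::sqrt(nx * nx + ny * ny);   // |cross(+Z, n)|
    if (axisLen < 1e-6f) {
        // n is (anti)parallel to +Z. Return identity for +Z; the -Z case
        // would need the special-cased half-turn mentioned in the lead-in.
        return Quat{1.0f, 0.0f, 0.0f, 0.0f};
    }

    // Half-angle identities: cos(t/2) = sqrt((1+c)/2), sin(t/2) = sqrt((1-c)/2).
    float halfCos = std::sqrt((1.0f + c) * 0.5f);
    float halfSin = std::sqrt((1.0f - c) * 0.5f);

    // q = (cos(t/2), normalize(axis) * sin(t/2))
    return Quat{halfCos, -ny / axisLen * halfSin, nx / axisLen * halfSin, 0.0f};
}
```

The half-angle identities avoid ever calling acos/sin/cos, which is the efficiency win over the matrix-then-extract-Euler route John started with.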
From: Chris Butcher (BUNGIE) <cbutcher@mi...> - 2002-02-28 01:58:35

> -----Original Message-----
> From: Ratcliff, John [mailto:jratcliff@...]
> Sent: Wednesday, February 27, 2002 16:18
> To: Chris Butcher (BUNGIE); gdalgorithms-list@...
> Subject: RE: [Algorithms] Oriention from Normal
>
> Hmmmm... how about just doing this? At least that's what our new AI guy,
> Scott Martin, added to our math libraries today:
>
> euler.x = 0;                          // roll
> euler.y = acosf(normal.z);            // pitch
> euler.z = atan2f(normal.y, normal.x); // yaw
>
> ???
>
> If you are going to be working in Euler coordinates as a generalized
> representation, that is. And going from Euler to quat is easy enough.

Sure, that works, although it's almost certainly slower. Be very wary of
slightly-out-of-range values in normal.z if you're passing it to acosf
directly.

--
Chris Butcher
AI Engineer | Halo
Bungie Studios
butcher@...
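Chris's caution is worth making concrete: a normal that is off unit length by one ulp can push normal.z just past 1.0 and turn acosf into a NaN. A sketch of the snippet with the cosine clamped first (struct names are illustrative, and the yaw term uses the two-argument arctangent so the full quadrant is recovered):

```cpp
#include <cmath>

struct Vec3  { float x, y, z; };
struct Euler { float x, y, z; };   // roll, pitch, yaw

// Euler angles orienting +Z along the unit normal n.
Euler eulerFromNormal(const Vec3& n)
{
    // Clamp before acos: slightly-out-of-range z (e.g. 1.0000001f from
    // normalisation error) would otherwise yield NaN.
    float cz = n.z < -1.0f ? -1.0f : (n.z > 1.0f ? 1.0f : n.z);

    Euler e;
    e.x = 0.0f;                      // roll: unconstrained by a normal, pick 0
    e.y = std::acos(cz);             // pitch
    e.z = std::atan2(n.y, n.x);      // yaw, correct in all four quadrants
    return e;
}
```

Usage: feed it the triangle's face normal and convert the result to a quaternion if the downstream decal/effect code wants one.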
From: Adrian Perez <adrianpe@mi...> - 2002-02-28 00:43:47

The Z-buffer trick is probably useful in finding out where to place the heat
wave effect or how much to apply, I suppose, but the actual effect itself
doesn't use the Z-buffer.

On the PS2 (I'm extrapolating based on my limited knowledge of the hardware
here), they most likely source the framebuffer and render a squarish,
wiggling triangle mesh back on top of it. On the Xbox/PC you do it with a
quad with a dependent dot-product texture program on it, i.e. you source the
framebuffer and have an animated 'wiggle' texture that you use to sample
wiggled points from the framebuffer and render it back on.

Both schemes work really well, and the PS2 way is probably more efficient in
this particular application, since wiggling vertices is cheaper than
animating a texture. If you're doing anything constant (like the /other/ New
Lens Flare, the water-on-the-camera effect), dot-product programs will
probably give you fewer artifacts than the triangle mesh way.

Cuban
@bungie.com

> -----Original Message-----
> From: MHarmon@... [mailto:MHarmon@...]
> Sent: Wednesday, February 27, 2002 8:37 AM
> To: GDAlgorithms-list@...
> Subject: RE: [Algorithms] Depth of Field and Smoke Blurs
>
> So is using the Z-buffer as a texture to create depth of field the same
> technique used for the smoke blurs that are regularly seen in driving games?
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
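Both variants boil down to the same dependent read: the framebuffer is sampled at a position perturbed by a wiggle value. A 1D CPU sketch of that idea, with illustrative names (the real effect does this in 2D, per frame, on the GPU):

```cpp
#include <cstddef>
#include <vector>

// Resample a scanline of the framebuffer through a 'wiggle' offset table,
// clamping to the image bounds. This is the dependent-read at the heart of
// the heat-wave effect: animate the wiggle table and the image shimmers.
std::vector<float> wiggleSample(const std::vector<float>& framebuffer,
                                const std::vector<int>& wiggle)
{
    std::vector<float> out(framebuffer.size());
    for (std::size_t x = 0; x < framebuffer.size(); ++x) {
        long s = static_cast<long>(x) + wiggle[x % wiggle.size()];
        if (s < 0) s = 0;
        if (s >= static_cast<long>(framebuffer.size()))
            s = static_cast<long>(framebuffer.size()) - 1;
        out[x] = framebuffer[static_cast<std::size_t>(s)];
    }
    return out;
}
```

The PS2 variant moves the same perturbation into the mesh vertices instead of the lookup table, which is why it avoids animating a texture.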
From: Mark Duchaineau <duchaine@ll...> - 2002-02-28 00:27:41

For those interested in chunked CLOD: I've just posted a Master's thesis that
Alex Pomeranz put together a couple of years ago at
http://www.cognigraph.com/ROAM_homepage (middle of page).

The idea is to build adaptive sub-triangulations of bintree triangles that
work just like single bintree triangles for purposes of maintaining
crack-free meshes when splitting and merging. The constraints on the triangle
boundaries are minimized to allow much better adaptivity for small chunks.

I'm slowly putting a full open-source runtime system together based on these
ideas, and will let you know when that is available.

Cheers,

Mark D.
From: Ratcliff, John <jratcliff@so...> - 2002-02-28 00:18:06

Hmmmm... how about just doing this? At least that's what our new AI guy,
Scott Martin, added to our math libraries today:

euler.x = 0;                          // roll
euler.y = acosf(normal.z);            // pitch
euler.z = atan2f(normal.y, normal.x); // yaw

???

If you are going to be working in Euler coordinates as a generalized
representation, that is. And going from Euler to quat is easy enough.

John

-----Original Message-----
From: Chris Butcher (BUNGIE) [mailto:cbutcher@...]
Sent: Wednesday, February 27, 2002 6:00 PM
To: gdalgorithms-list@...
Subject: RE: [Algorithms] Oriention from Normal

> -----Original Message-----
> From: Ratcliff, John [mailto:jratcliff@...]
>
> What I need to be able to do is take the result of an impact event and then
> create a generic orientation, either quaternion or Euler, I don't care
> which, that can be passed on to other routines, like for generating decals
> or oriented effects.

If I understand you correctly, you're looking for a quaternion that will
rotate a Z-axis-aligned object to face along a given vector.

Take the cross product of this vector with +Z to get a rotation axis, and the
dot product of this vector with +Z (i.e. the Z component of the vector) to
give the cosine of the angle of the rotation.

That gives you an axis a and a rotation magnitude cos(theta). Use
trigonometric identities to generate cos(theta/2) and sin(theta/2). You can
build the quaternion directly as:

q.w = cos(theta/2)
q.v = sin(theta/2) * normalize(a)

The only real problem with this is that there's no control over the 'roll'
component of the rotation. If you wanted to modify that (e.g. by randomizing
it) you could premultiply by a random quaternion that is a rotation about +Z.

--
Chris Butcher
AI Engineer | Halo
Bungie Studios
butcher@...

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188