From: Phil Teschner <philt@mi...> - 2002-02-20 21:00:34

My own theory of this is similar (I didn't think about using the alpha channel at all, but that's a great idea).

1) First, you render your scene normally without specular lighting.

2) Then you render your light sources and specular reflection maps into another texture using the scene's depth buffer (DX8 lets you do this). This will let you do partially obscured area lights.

3) Convolve the light buffer into another buffer and overlay that buffer onto the frame buffer.

4) Do not clear the convolved buffer; instead, when you convolve the light buffer into the convolved buffer, darken the existing texels during the combining phase. This will lead to light trails, and I believe it also lets you cut down a bit on the number of convolution passes required to get the flare.

The only tricky part in my opinion is the fact that Wreckless's convolution filter seems to be fairly big, and I am not sure how you can do that speedily.

Phil
MS Games

-----Original Message-----
From: Chris Butcher (BUNGIE) [mailto:cbutcher@...]
Sent: Wednesday, February 20, 2002 12:30 PM
To: gdalgorithms-list@...
Subject: RE: [Algorithms] [OT] Wreckless (XBOX) glare filter

> -----Original Message-----
> From: Killpack, Chris [mailto:ChrisK@...]
>
> This is all well and good, but I'm not convinced this is how Wreckless
> does it. I've seen lots of glare coming off buildings and from particle
> systems, so I'm wondering if it is an image post-process. I hacked one
> of these together a while back as well, but it suffered from a lot of
> problems: no occlusion info, single-colour glares on the screen.
> Alternatively they could spawn a glare sprite in the particle systems.

Well, this is all kinda OT (nary an algorithm in sight!), but I'm almost sure Wreckless uses an image-based effect that keys off alpha written into the framebuffer based on the alpha of their environment cube map(s). They look like they may be downsampling the alpha channel (maybe tinting it by the diffuse color in the process) into a smaller off-screen framebuffer and then convolving it a few times before compositing it back in.

Note that they have other off-screen-buffer-based post-process-type effects as well, for lens-flare trails and the big starry lens-flare rendering (not to mention about ten million different camera screen effects).

Not sure if it's interesting, but you could in theory implement your analytic approach in a modern vertex-shader-driven graphics engine by drawing a separate stream of point sprites on the model's surface with some Z fakery. You'd need the normal at each point as well as some other streams containing precomputed information about the local surface neighbourhood in order to work out what the size of the sprites should be. The trick would be to maintain approximately uniform brightness in screen space as the density of sprite positions in screen space changed due to model rotation (otherwise the highlights get all bunched up at the profile edge of models).

--
Chris Butcher
AI Engineer | Halo
Bungie Studios
butcher@...

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
From: Chris Butcher (BUNGIE) <cbutcher@mi...> - 2002-02-20 20:30:33

> -----Original Message-----
> From: Killpack, Chris [mailto:ChrisK@...]
>
> This is all well and good, but I'm not convinced this is how Wreckless
> does it. I've seen lots of glare coming off buildings and from particle
> systems, so I'm wondering if it is an image post-process. I hacked one
> of these together a while back as well, but it suffered from a lot of
> problems: no occlusion info, single-colour glares on the screen.
> Alternatively they could spawn a glare sprite in the particle systems.

Well, this is all kinda OT (nary an algorithm in sight!), but I'm almost sure Wreckless uses an image-based effect that keys off alpha written into the framebuffer based on the alpha of their environment cube map(s). They look like they may be downsampling the alpha channel (maybe tinting it by the diffuse color in the process) into a smaller off-screen framebuffer and then convolving it a few times before compositing it back in.

Note that they have other off-screen-buffer-based post-process-type effects as well, for lens-flare trails and the big starry lens-flare rendering (not to mention about ten million different camera screen effects).

Not sure if it's interesting, but you could in theory implement your analytic approach in a modern vertex-shader-driven graphics engine by drawing a separate stream of point sprites on the model's surface with some Z fakery. You'd need the normal at each point as well as some other streams containing precomputed information about the local surface neighbourhood in order to work out what the size of the sprites should be. The trick would be to maintain approximately uniform brightness in screen space as the density of sprite positions in screen space changed due to model rotation (otherwise the highlights get all bunched up at the profile edge of models).

--
Chris Butcher
AI Engineer | Halo
Bungie Studios
butcher@...
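[Editor's note] The first stage of the image-based effect Butcher conjectures — downsampling the framebuffer's alpha channel into a smaller off-screen buffer before convolving — can be sketched as a plain 2x2 box reduction (the function name and CPU buffer layout are illustrative; the real effect runs on the GPU):

```cpp
#include <vector>

// Average each 2x2 block of the framebuffer's alpha channel into a
// quarter-size off-screen buffer. The real effect would then convolve this
// small buffer a few times and composite it back over the frame.
// Assumes w and h are even.
std::vector<float> downsampleAlpha2x2(const std::vector<float>& alpha, int w, int h) {
    std::vector<float> out((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            float sum = alpha[(2 * y) * w + 2 * x]
                      + alpha[(2 * y) * w + 2 * x + 1]
                      + alpha[(2 * y + 1) * w + 2 * x]
                      + alpha[(2 * y + 1) * w + 2 * x + 1];
            out[y * (w / 2) + x] = sum * 0.25f;  // box average of the 2x2 block
        }
    return out;
}
```

Working at quarter resolution is what makes the repeated convolution passes affordable: each blur touches a sixteenth as many pixels, and the final upsample-and-composite spreads the glare for free.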
From: Gary McTaggart <gary@va...> - 2002-02-20 20:24:56

I believe there's a paper in the 2001 Eurographics Rendering Techniques proceedings on "Virtual Occluders" that goes into how to aggregate occluders to make larger occluders.

Gary
Valve

-----Original Message-----
From: Joe Meenaghan [mailto:joseph.meenaghan@...]
Sent: Tuesday, February 19, 2002 6:56 PM
To: gdalgorithms-list@...
Subject: [Algorithms] occlusion culling

I have been experimenting with occlusion culling techniques recently and had a couple of questions.

1. How would one determine when an occluder (in my case, just a planar quad) currently spans the entire viewing volume? Projecting occluders into screen space or maybe just clip space for testing was about all I could come up with, but I am sure there must be more efficient ways. Any ideas?

2. I would like to figure out a technique for creating occlusion zones if possible when occluders 'overlap'. Does anyone have any idea how to do this or where I might find such information?

Thanks very much for any help you might be able to provide.

Joe Meenaghan
Developer
Game Institute
joe@...
From: Killpack, Chris <ChrisK@ea.com> - 2002-02-20 19:49:19

I played Wreckless last night and was struck by the very, very nice glare effect they have on highlights.

Warning: this email is a stream-of-consciousness brain dump, so it's sporadic and unorganised in places...

My attempts
===========

I had looked into this a couple of years ago, initially trying to solve the problem analytically. Assuming the specular part of your lighting model is standard Phong, I hit a stumbling block: you need to find the maximum of a periodic function (in this case the dot product between the viewing angle and the reflected light ray). Obviously there are many solutions to this problem, and I ditched that idea.

My next idea was to cheat :) I made a texture with a Phong highlight at the very centre of the texture. I calculated UV coords for each model using environment mapping. Then for each triangle in the model I calculated whether the centre of the texture (0.5, 0.5) lay in the triangle or not.* If so, I calculated the local-space 3D coordinates for the centre's location in the triangle and added it to a list.

At the end of the frame I ran through this list, projecting the 3D positions into screen space and then rendering glare sprites at the screen-space locations.

Wreckless
=========

This is all well and good, but I'm not convinced this is how Wreckless does it. I've seen lots of glare coming off buildings and from particle systems, so I'm wondering if it is an image post-process. I hacked one of these together a while back as well, but it suffered from a lot of problems: no occlusion info, single-colour glares on the screen. Alternatively they could spawn a glare sprite in the particle systems.

Perhaps anyone has any ideas? Back to IOP crash bugs.... :)

Chris

* In mathematician's speak, I had restricted the domain of the function to [-PI, PI] and was thus able to find a single solution. However the parametric mapping into this domain (via the environment map calculations) isn't perfect.
From: David Hunt <david@em...> - 2002-02-20 12:26:32

I'd recommend reading the visibility theory guide section in the Umbra dPVS reference manual, probably the most complete and relevant treatise on the subject I've seen. http://www.hybrid.fi/dpvs_download.html

W.r.t. bulk culls, we use an artist-supplied, very coarse occlusion-mesh BSP which is never more than about 8 nodes deep, and test all objects we're considering rendering against it. It hasn't shown up in the profile yet, but it probably isn't highly scalable (it currently copes with <1000 objects without raising a sweat).

David Hunt

----- Original Message -----
From: Conor Stokes
To: gdalgorithms-list@...
Sent: Wednesday, February 20, 2002 4:10 AM
Subject: Re: [Algorithms] occlusion culling

A convex occluder spans the entire view volume if all the edge lines at the sides of the frustum (not counting the ones made by the near and far planes) intersect the occluder and the occluder passes the frustum test. These intersections can be done as simple ray-polygon tests.

Overlapping occluders (occluder fusion) can be done with beam trees in world space, or just by clipping two or more projected volumes. In screen space you can use a 2D BSP, or scanline/edge-buffer methods.

As a bit of side info: you need to select a set of large occluders and do very quick bulk culls to get much speed benefit out of occlusion culling nowadays.

Conor Stokes

----- Original Message -----
From: Joe Meenaghan
To: gdalgorithms-list@...
Sent: Wednesday, February 20, 2002 10:56 AM
Subject: [Algorithms] occlusion culling

I have been experimenting with occlusion culling techniques recently and had a couple of questions.

1. How would one determine when an occluder (in my case, just a planar quad) currently spans the entire viewing volume? Projecting occluders into screen space or maybe just clip space for testing was about all I could come up with, but I am sure there must be more efficient ways. Any ideas?

2. I would like to figure out a technique for creating occlusion zones if possible when occluders 'overlap'. Does anyone have any idea how to do this or where I might find such information?

Thanks very much for any help you might be able to provide.

Joe Meenaghan
Developer
Game Institute
joe@...
From: mark_me <mark_me@so...> - 2002-02-20 06:14:02

I agree that it's not clear, but since he said "I've got two rotation matrices derived from two sensors on a 3D tracking device", I think what he means is actually the X axis of the matrix, or the direction of the vector rotated from the reference-frame X axis, which is all you need in tracking. Here (I think!) all you need to do is take the dot product of the first row of each matrix (or column, if you are using column vectors) to get the cosine of the angle, then use acos to find the angle.

If what you want is the angle between the two axes in the axis-angle representation, then I would recommend finding the axes directly from the matrices: each axis is the eigenvector of its matrix.

A very good book to look into, which is BTW widely ignored by game developers, is "Quaternions and Rotation Sequences". This book covers matrices, axis-angle, Euler angles, and of course quaternions. It also talks about tracking and that sort of thing.

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Melax, Stan
Sent: February 19, 2002 3:57 PM
To: 'Jason Dorie'; Gareth Jones; gdalgorithms-list@...
Subject: RE: [Algorithms] Angles between rotation axes

As was pointed out, the question wasn't quite clear, but let's assume that he really wants the axis and angle that transforms from his first orientation to his second, i.e. find T such that

  A * T = B

Whether A, B, and T are matrices or quats, the algebra is the same:

  T = Inverse(A) * B;

It might be easiest to convert A and B to quaternions first. But you can also get axis and angle from a matrix if you prefer that implementation. The choice is yours. (Please do not respond to this post with any religious quat-vs-matrix replies!!!)

The quat dot product suggestion is unlikely to be what the person really wants. Say I start at my base orientation Quat(0,0,0,w=1) and then I turn my object around 180 degrees to Quat(0,0,z=1,0). The dot product of (0,0,0,1) and (0,0,1,0) is zero, but acos(0) is 90. So what does that mean?

> -----Original Message-----
> From: Jason Dorie [mailto:Jason.Dorie@...]
> Sent: Tuesday, February 19, 2002 3:10 PM
> To: Gareth Jones; gdalgorithms-list@...
> Subject: Re: [Algorithms] Angles between rotation axes
>
> Why not convert them to quaternions, then take the dot of the two quats?
> If the matrices are only rotation, conversion to quat should be simple.
> Taking the dot product of the two quaternions will give you the cosine
> of the angle between them.
>
> Jason Dorie
> BlackBox Games
>
> ----- Original Message -----
> From: Gareth Jones
> To: gdalgorithms-list@...
> Sent: Tuesday, February 19, 2002 2:53 PM
> Subject: [Algorithms] Angles between rotation axes
>
> I've got two rotation matrices derived from two sensors on a 3D tracking
> device and need to find the angles between them. I thought it was a
> trivial problem on first glance, but of course it's not; the best I've
> come up with is projecting the extracted axis vectors from one matrix
> onto planes formed by the vectors from the other. This suffers from
> interdependencies between the axes. Is there a cleverer approach to
> this? I found various references to 'pose' extraction from objects, but
> I already have the orientation information, so it seems a bit of
> overkill.
>
> Perplexedly,
>
> Gareth Jones.
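[Editor's note] Stan Melax's T = Inverse(A) * B suggestion above can be sketched directly with matrices: for a rotation matrix the inverse is the transpose, and the rotation angle of T falls out of its trace, since trace(T) = 1 + 2*cos(angle). A minimal sketch (function and type names are illustrative):

```cpp
#include <cmath>
#include <array>

using Mat3 = std::array<std::array<float, 3>, 3>;

// Angle (radians, in [0, pi]) of the rotation taking orientation A to B.
float relativeRotationAngle(const Mat3& A, const Mat3& B) {
    // T = A^T * B  (transpose of A is its inverse, A being a rotation)
    Mat3 T{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                T[i][j] += A[k][i] * B[k][j];   // note the transposed index on A
    float c = (T[0][0] + T[1][1] + T[2][2] - 1.0f) * 0.5f;
    if (c > 1.0f) c = 1.0f;                     // clamp against round-off
    if (c < -1.0f) c = -1.0f;
    return std::acos(c);
}
```

Note this reproduces Melax's 180-degree example correctly: for A = identity and B a half-turn, the trace of T is -1, so the angle comes out as pi, not the 90 degrees the raw quat-dot suggestion would give.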
From: Conor Stokes <cstokes@tp...> - 2002-02-20 04:05:26

A convex occluder spans the entire view volume if all the edge lines at the sides of the frustum (not counting the ones made by the near and far planes) intersect the occluder and the occluder passes the frustum test. These intersections can be done as simple ray-polygon tests.

Overlapping occluders (occluder fusion) can be done with beam trees in world space, or just by clipping two or more projected volumes. In screen space you can use a 2D BSP, or scanline/edge-buffer methods.

As a bit of side info: you need to select a set of large occluders and do very quick bulk culls to get much speed benefit out of occlusion culling nowadays.

Conor Stokes

----- Original Message -----
From: Joe Meenaghan
To: gdalgorithms-list@...
Sent: Wednesday, February 20, 2002 10:56 AM
Subject: [Algorithms] occlusion culling

I have been experimenting with occlusion culling techniques recently and had a couple of questions.

1. How would one determine when an occluder (in my case, just a planar quad) currently spans the entire viewing volume? Projecting occluders into screen space or maybe just clip space for testing was about all I could come up with, but I am sure there must be more efficient ways. Any ideas?

2. I would like to figure out a technique for creating occlusion zones if possible when occluders 'overlap'. Does anyone have any idea how to do this or where I might find such information?

Thanks very much for any help you might be able to provide.

Joe Meenaghan
Developer
Game Institute
joe@...
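[Editor's note] The ray-polygon tests Conor mentions above can be sketched with a standard Moller-Trumbore ray/triangle intersection (a planar quad is just two triangles; test each of the frustum's four side edge rays against the occluder). This is a hedged sketch, not Conor's actual code:

```cpp
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore: does the ray orig + t*dir (t >= 0) hit triangle abc?
bool rayHitsTriangle(V3 orig, V3 dir, V3 a, V3 b, V3 c) {
    const float kEps = 1e-7f;
    V3 e1 = sub(b, a), e2 = sub(c, a);
    V3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return false;     // ray parallel to triangle
    float inv = 1.0f / det;
    V3 s = sub(orig, a);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;      // outside barycentric range
    V3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * inv;
    return t >= 0.0f;                            // hit in front of the origin
}
```

For the spans-the-view-volume test: run this against the occluder for each of the four corner rays of the frustum (eye through the corners of the near plane); if all four hit and the occluder passes the frustum test, it covers the whole view volume.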
From: Joe Meenaghan <joseph.meenaghan@ve...> - 2002-02-20 02:50:03

I have been experimenting with occlusion culling techniques recently and had a couple of questions.

1. How would one determine when an occluder (in my case, just a planar quad) currently spans the entire viewing volume? Projecting occluders into screen space or maybe just clip space for testing was about all I could come up with, but I am sure there must be more efficient ways. Any ideas?

2. I would like to figure out a technique for creating occlusion zones if possible when occluders 'overlap'. Does anyone have any idea how to do this or where I might find such information?

Thanks very much for any help you might be able to provide.

Joe Meenaghan
Developer
Game Institute
joe@...
From: Joe Meenaghan <joseph.meenaghan@ve...> - 2002-02-20 02:50:01

Hi Guys,

I am doing some culling and clipping routines in view space, and I am trying to determine how to test AABBs that are currently defined by min and max points in world space (I can use a center/axes/extents configuration if necessary). I would rather not wind up having to do a matrix multiply on 8 points, but at this point I can't see any alternative. I would ideally like to be able to do something like Kenny Hoff's AABB/plane tests, but I'll take what I can get :).

Thanks so much for any help you can provide.

Joe Meenaghan
Developer
Game Institute
joe@...
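[Editor's note] The center/axes/extents configuration Joe mentions avoids transforming all 8 corners: transform only the centre and the three extent-scaled box axes, then compare the box's projected radius onto the plane normal against the centre's signed distance (this is the idea behind the Hoff-style AABB/plane test; the sketch below, including its names, is illustrative):

```cpp
#include <cmath>

struct F3 { float x, y, z; };
static float dot3(F3 a, F3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// plane: dot(n, p) + d = 0. axes[i] are the box's axes already scaled by
// the half-extents (for a world-space AABB, just the scaled basis vectors;
// for view space, the transformed ones). Returns +1 if the box is fully on
// the positive side, -1 if fully on the negative side, 0 if it straddles.
int classifyBox(F3 center, const F3 axes[3], F3 n, float d) {
    float s = dot3(n, center) + d;           // signed distance of the centre
    float r = std::fabs(dot3(n, axes[0]))    // projected "radius" of the box
            + std::fabs(dot3(n, axes[1]))
            + std::fabs(dot3(n, axes[2]));
    if (s > r)  return  1;
    if (s < -r) return -1;
    return 0;
}
```

This costs three dot products and one for the centre per plane, versus eight full point transforms; against a six-plane frustum the savings add up quickly.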
From: Charles Bloom <cbloom@cb...> - 2002-02-20 02:15:15

Yeah, perhaps I should be more clear: "A" means "epsilon interpenetration allowed", and "B" means "absolutely no interpenetration allowed".

--
Charles Bloom cb@... http://www.cbloom.com
From: Douglas Cox <ziflin@ho...> - 2002-02-20 01:48:19

Damn, that's cold, Charles ;). Ok, let me explain what I mean by shove: if the object is embedded, the collision test hops out and performs a one-time (meaning recursion is tested for) collision check to 'shove' the object to a position that is away from the embedded triangle just enough to be outside the EPSILON value. I do not simply pick up the object and move it, as that would definitely cause the object to get stuck in certain cases (I know, because for a while I had that in there). Without this 'shove', objects would eventually fall through another object (again, I tested every combination I could think of). What's there now seems to work pretty well, as no one in the last two milestones has complained of getting stuck or falling through any object.

Ok, with that being said, I must say that collision detection has been the most annoying thing I've ever worked on, so if you have any ideas or suggestions, I'm definitely all ears!!

thanks,
doug

> I think it's very important to NOT do this "shove", but I don't have
> time to go into details about why right now...
>
> Charles Bloom cb@... http://www.cbloom.com
From: Michael Pohoreski <MPohoreski@cy...> - 2002-02-20 00:41:18

Gil Gribb has an excellent gem covering this. It covers DX and OpenGL. http://www2.ravensoft.com/users/ggribb/plane%20extraction.pdf

-----Original Message-----
From: Maltez [mailto:maltez@...]
Subject: [Algorithms] How to get clip planes from modelview and projection matrices

I am wondering if the 6 view clip planes (or at least their normal vectors) can be obtained from the different matrices (projection, modelview) or from the resulting matrix product modelview*projection?
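[Editor's note] The technique in the Gribb plane-extraction gem referenced above boils down to this: given the combined row-major matrix M = projection * modelview (so clip = M * v), each frustum plane is the sum or difference of the fourth row and one other row. A minimal sketch (types and function name are illustrative):

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;
struct Plane { float a, b, c, d; };  // a*x + b*y + c*z + d >= 0 means inside

// Extract the six clip planes from a combined row-major matrix.
// Normalize each plane by the length of (a,b,c) if you need true distances.
std::array<Plane, 6> extractFrustumPlanes(const Mat4& m) {
    auto plane = [&](int row, float sign) {
        return Plane{ m[3][0] + sign * m[row][0],
                      m[3][1] + sign * m[row][1],
                      m[3][2] + sign * m[row][2],
                      m[3][3] + sign * m[row][3] };
    };
    return {{ plane(0, +1.0f),    // left:   w + x >= 0
              plane(0, -1.0f),    // right:  w - x >= 0
              plane(1, +1.0f),    // bottom: w + y >= 0
              plane(1, -1.0f),    // top:    w - y >= 0
              plane(2, +1.0f),    // near:   w + z >= 0 (GL-style clip volume)
              plane(2, -1.0f) }}; // far:    w - z >= 0
}
```

The planes come out in whichever space the matrix maps from: feed it the projection matrix alone for view-space planes, or projection * modelview for world-space (or model-space) planes, which is what makes the trick so handy for culling.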
From: Adam Paul Coates <acoates@st...> - 2002-02-20 00:16:25

On Wed, 20 Feb 2002, Maltez wrote:
> Hi all.
>
> I am wondering if the 6 view clip planes (at least normal vectors) could
> be obtained from the different matrices (projection, modelview) or the
> resulting matrix multiplication modelview*projection?

If you have the focal length of the projection equation then you can do what I do. Just take the planes as they are from the viewer's point of view:

plane(-fl, 0, asp, 0)             // left
plane( fl, 0, asp, 0)             // right
plane(0, -fl, 1, 0)               // bottom
plane(0,  fl, 1, 0)               // top
plane(0, 0, 1, hitherDistance)    // hither
plane(0, 0, -1, -yonderDistance)  // yonder

These are just plane equations like Ax + By + Cz + D = 0. fl is the focal length. In my case (and it sounds like in yours) I'm using OpenGL and have the projection set up to map -1 to the bottom of the window at a depth of fl from the viewer and 1 at the top. asp is the aspect ratio of the window = width/height. You might need to change -1, 1, etc. if you use a different setup. Also, these normals point outward; if you want them pointing inward for some reason, just negate the normals. I'm assuming that -Z points into the screen.

If you don't have the focal length you can figure it out by just drawing a diagram using whatever dimensions you've used to set up your projection: it's the "distance" between the eye point and the screen.

Once you have these planes you can multiply them by your camera/modelview matrix to get the world-space planes and then clip away happily.

Hope that helps,
AC
From: Conor Stokes <cstokes@tp...> - 2002-02-20 00:07:04

> I don't know. Mirtich told me that he abandoned his own technique when
> doing the implementation for his Sig 00 paper, which was done the LCP
> way. Maybe Conor has more info on this. If David Baraff had gone into
> some other field, I would most likely prefer Featherstone's method for
> articulated bodies and penalty methods for resting contact. That is not
> to say that I doubt that you have what may be a very robust
> implementation of microcollisions. In my experience, none of these
> theories can be implemented without some very solid engineering. The
> opposite is probably true as well.

From what I heard at MERL, Mirtich has abandoned physical simulation to a large degree. Then again, I haven't heard anything about it in a long while.

Conor Stokes