From: Charles Bloom <cbloom@cb...>  2001-01-19 23:51:57

So, I'm thinking about general modeling of reflectance for realtime graphics. First of all, I think the recent work in realtime BRDFs is great; see the latest Game Developer for a good overview by Kautz et al. The BRDF basically tells you how much light is reflected (on a per-color or per-wavelength basis) in a given view direction given a light direction. The problem for me is that most surfaces for which a BRDF is really noticeable (like metals) are also reflectors. Now, anything that's specular is a reflector, so what's the difference between a good reflector and a bad one? Basically, for "non-reflecting" specular surfaces, the reflected light is being defocused (due to the large spread of the BRDF) and weakened (due to a low peak in the BRDF) so that only the bright spots of actual light sources are visible. By contrast, good reflectors have a sharply peaked BRDF (perfect mirrors just have a delta function at reflection vector == light vector) with a large peak, so that even weak light is imaged reliably by the reflector. We can speak of a reflector just like a lens (it attenuates and defocuses); a non-imaging reflector is like a ground-glass lens or a "glass block": you only see sharp light highlights, not clear images.

So, what's the problem with mixing this with BRDFs? In theory, it's not a problem. To find the light reflected in a given view direction V from a surface element with normal N, you simply integrate over all incoming directions O; the integrand is BRDF(V,N,O) (written with a funny dependency; usually it's written as a four-parameter function of the polar coordinates of V and O in the local surface space where N is {0,0,1}) times the light field in direction O, L(O). Again, for a mirror the BRDF is a delta function, and the output is just L(R), where R is the reflection direction from V and N. For other surfaces you get a blurring (typically peaked around L(R)). This is a convolution of the incoming light field with the BRDF as the kernel.

Now, for point light sources, L(O) is a delta function in the direction of the light source L (in world space) (with magnitude determined by distance attenuation, if you care). In this case the integral drops out and you simply evaluate the BRDF at BRDF(V,N,L). This is what's done for the hardware BRDF demos. The interesting thing here is to have a dependence which is fancier than R*L (reflection vector dot L), or H*N. One really easy way to do this is to find H in the local surface space of N. You can then use the x and y components of H in that local space to look up into a texture map; if x and y are zero, H*N is 1, the traditional Blinn-Phong peak; you get an anisotropic BRDF by having a texture map which encodes an elliptical spot around this peak. This isn't the whole story, but it's pretty darn close, and produces good images with only one pass. Basically, what's happening here is that the spread of the BRDF is not symmetric around R, the reflection vector, so you get more blurring of the incoming light in certain directions. What this then means is that the image of a light source is visible for a greater distance away from the specular spot in the directions of more blurring.

So, what happens if I want a BRDF and a non-point source? I've got this darned integral (I'm ignoring all the solid-angle junk that comes into these integrals):

Lout(V,N) = Integral{dO} Lin(O) * BRDF(O,V,N)

Neither of my terms is a delta function, so I'm stuck with the integral, and I wind up with an output light field which depends on 4 variables: the polar coordinates of the view vector and the normal vector. I should also mention that N is actually implicitly a 3x3 matrix frame for the local coordinate space of the surface. You might try to do something cheesy like just separate these out, and do:

Lout(V,N) = Lin(R) * BRDF( ? )

That is, just take the input light field in the reflection direction; but where do we evaluate our BRDF? If we evaluate it in the R direction it's always on-peak; we might evaluate with O = N, or maybe with O = Norm( N + R ). Either way really makes no sense. Even if we did have somewhere to evaluate it, we would get none of the anisotropic effect, since that comes from reflecting strong lights which are NOT in the R direction, and we've evaluated Lin at R! This seems like a pretty nasty problem; I think reflection is a more convincing effect for many metals than the BRDF, and they seem incompatible (in a practical sense at least). Perhaps Lout(V,N) is separable in some way. I suppose I could always just compute it and then try the separation techniques; it would be key to find a good choice of parametrization, something like:

Lout(V,N) = F(V in local space) * G(R in world space)

You'd end up with F being something like a highlight map and G being something like a filtered environment map.

Charles Bloom
http://www.cbloom.com
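The one-pass trick Bloom describes, finding the half-vector in the surface's local frame and using its x and y components as texture coordinates, can be sketched as below. The vector types, the tangent/bitangent names, and the [-1,1] to [0,1] remapping are my own illustrative choices, not from any particular engine.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x/len, v.y/len, v.z/len};
}

struct UV { float u, v; };

// Half-vector H = normalize(V + L), expressed in the local surface frame
// {T, B, N}.  When H == N the result lands at (0.5, 0.5): the classic
// Blinn-Phong peak.  The pair can index a 2D "highlight" texture; an
// elliptical spot in that texture yields an anisotropic BRDF in one pass.
UV halfVectorUV(Vec3 V, Vec3 L, Vec3 T, Vec3 B, Vec3 N) {
    Vec3 H = normalize(add(normalize(V), normalize(L)));
    float hx = dot(H, T);   // component along the tangent
    float hy = dot(H, B);   // component along the bitangent
    (void)N;                // N completes the frame; the z-component is implicit
    return { 0.5f + 0.5f*hx, 0.5f + 0.5f*hy };  // remap [-1,1] -> [0,1]
}
```

The lookup texture itself is where the anisotropy lives: a round spot reproduces isotropic Blinn-Phong, a stretched spot blurs highlights along one tangent direction.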
From: Chris Hecker <checker@d6...>  2001-01-19 22:12:27

> One other name to look at is Mirtich - he's had several papers in this area:

Oops, yeah, I spaced on Mirtich's name, thanks. He was at the seminar too, so that's pretty rude of me. :) His well-known impulse algorithm is a contact resolver, not a way to determine the contact manifold; impulse only requires a single contact point. He does have an algorithm called V-Clip which is a manifold generator, though, so I definitely should have included him in my list.

Thanks,
Chris

Game Technology Seminars, less than two weeks away! Physics & Graphics
http://www.techsem.com
From: Blake Senftner <bsenftner@ro...>  2001-01-19 21:59:29

Jason,

In addition to what everyone else said, also weight the average of your face normals by the area of each face contributing to a vertex normal. That will prevent small beveled edges from affecting neighboring large-area polys too much.

-Blake

> -----Original Message-----
> From: Jason Zisk [mailto:ziskj@...]
> Sent: Friday, January 19, 2001 12:41 PM
> To: Algorithms List
> Subject: [Algorithms] Normals and lighting
>
> I decided to use an untextured object for testing today and realized the
> lighting on the object is very screwed up. As far as I know I'm
> generating the vertex normals correctly: I grab the normals of all the
> triangles that use that vertex and average them.
>
> Is this wrong? I think it looks right when I have a texture on the
> object but as soon as it's untextured it looks wrong. Here are two shots
> of the problem:
>
> http://www.nfusion.com/normals1.jpg
> http://www.nfusion.com/normals2.jpg
>
> It seems to be creating a strange spiral effect, even on square edges.
>
> Thanks for any pointers,
>
> - Jason Zisk
> - nFusion Interactive LLC
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> http://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
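Blake's area weighting falls out for free if you sum the unnormalized edge cross products, since a triangle's cross product has magnitude equal to twice its area. A minimal sketch, assuming an indexed triangle list; the names are illustrative, not from any particular codebase.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/len, v.y/len, v.z/len};
}

// Area-weighted vertex normals.  Summing the UNNORMALIZED face normals
// weights each face by its area automatically: large faces dominate, and
// tiny bevel faces barely perturb the result.
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<int>& tris) { // 3 indices/tri
    std::vector<Vec3> n(verts.size(), Vec3{0, 0, 0});
    for (size_t t = 0; t + 2 < tris.size(); t += 3) {
        int i0 = tris[t], i1 = tris[t+1], i2 = tris[t+2];
        Vec3 fn = cross(sub(verts[i1], verts[i0]), sub(verts[i2], verts[i0]));
        n[i0].x += fn.x; n[i0].y += fn.y; n[i0].z += fn.z;
        n[i1].x += fn.x; n[i1].y += fn.y; n[i1].z += fn.z;
        n[i2].x += fn.x; n[i2].y += fn.y; n[i2].z += fn.z;
    }
    for (Vec3& v : n) v = normalize(v);   // normalize only at the end
    return n;
}
```

Normalizing the per-face normal before accumulating (as naive averaging does) throws the area weighting away, which is exactly how small bevels end up dominating.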
From: Jason Zisk <ziskj@op...>  2001-01-19 21:33:36

I never even realized that flat shading and Gouraud would need different normals. Of course. Thanks.

- Jason Zisk
- nFusion Interactive LLC

----- Original Message -----
From: "Jeffrey Rainy" <jrainy@...>
To: <gdalgorithms-list@...>
Sent: Friday, January 19, 2001 4:14 PM
Subject: Re: [Algorithms] Normals and lighting

> > Is this wrong? I think it looks right when I have a texture on the
> > object but as soon as it's untextured it looks wrong. Here are two
> > shots of the problem:
>
> The only problem I see is that the model is flat-shaded, and the normals
> are normals for a smoothed object.
>
> Either use glShadeModel( GL_SMOOTH ); and keep your normals that way, or
> use glShadeModel( GL_FLAT ); and have normals that are perpendicular to
> your faces. If you use the latter option, you'll need to have the same
> vertex one time per face, with different normals.
>
> This would look better, IMHO.
>
> Huh. I just realized this is NOT the OpenGL gamedev list. Should you be
> using another API, I guess someone will be able to translate, and you'll
> probably get the idea anyway.... ;)
>
> Jeffrey.
From: Dave Smith <Dave.Smith@sd...>  2001-01-19 21:27:11

Also, you might be sending in reversed normals for adjacent faces. In other words, your vertex ordering between adjacent faces may be wrong. Two adjacent faces should be ordered consistently, either clockwise or counterclockwise, before you pass through and compute all your normals. (If that's what you're currently doing.)

1    3
*----*
|   /|
|  / |
| /  |
|/   |
*----*
2    4

Left:  1,2,3
Right: 3,2,4

DaveS

Kelvin McDowell wrote:
>
> It looks like you're passing vertex normals to the API (OpenGL/D3D,
> whatever you're using) but have the lighting state set to flat shade.
> There isn't any interpolation going on between vertices.
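Dave's consistency condition has a cheap mechanical check: in a consistently wound mesh, an edge shared by two triangles is traversed in opposite directions by each. A sketch, with a made-up function name, assuming triangles given as vertex-index triples.

```cpp
#include <array>
#include <cassert>

// If one triangle contains the directed edge (a, b), a consistently wound
// neighbour must contain (b, a).  Returns true when the two triangles agree
// on the shared edge's direction (or share no edge at all), false when the
// shared edge runs the same way in both, i.e. one face is wound backwards.
bool consistentWinding(const std::array<int,3>& t0, const std::array<int,3>& t1) {
    for (int i = 0; i < 3; ++i) {
        int a = t0[i], b = t0[(i + 1) % 3];
        for (int j = 0; j < 3; ++j) {
            if (t1[j] == a && t1[(j + 1) % 3] == b) return false; // same direction
            if (t1[j] == b && t1[(j + 1) % 3] == a) return true;  // opposite: ok
        }
    }
    return true; // no shared edge: nothing to check
}
```

Dave's quad above passes this test: Left (1,2,3) traverses the diagonal as 2 -> 3, Right (3,2,4) traverses it as 3 -> 2.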
From: Jeffrey Rainy <jrainy@dy...>  2001-01-19 21:16:03

> Is this wrong? I think it looks right when I have a texture on the object
> but as soon as it's untextured it looks wrong. Here are two shots of the
> problem:

The only problem I see is that the model is flat-shaded, and the normals are normals for a smoothed object.

Either use glShadeModel( GL_SMOOTH ); and keep your normals that way, or use glShadeModel( GL_FLAT ); and have normals that are perpendicular to your faces. If you use the latter option, you'll need to have the same vertex one time per face, with different normals.

This would look better, IMHO.

Huh. I just realized this is NOT the OpenGL gamedev list. Should you be using another API, I guess someone will be able to translate, and you'll probably get the idea anyway.... ;)

Jeffrey.
From: Kelvin McDowell <kelvinm@re...>  2001-01-19 21:01:18

It looks like you're passing vertex normals to the API (OpenGL/D3D, whatever you're using) but have the lighting state set to flat shade. There isn't any interpolation going on between vertices.

-----Original Message-----
From: Jason Zisk [mailto:ziskj@...]
Sent: Friday, January 19, 2001 12:41 PM
To: Algorithms List
Subject: [Algorithms] Normals and lighting

I decided to use an untextured object for testing today and realized the lighting on the object is very screwed up. As far as I know I'm generating the vertex normals correctly: I grab the normals of all the triangles that use that vertex and average them.

Is this wrong? I think it looks right when I have a texture on the object but as soon as it's untextured it looks wrong. Here are two shots of the problem:

http://www.nfusion.com/normals1.jpg
http://www.nfusion.com/normals2.jpg

It seems to be creating a strange spiral effect, even on square edges.

Thanks for any pointers,

- Jason Zisk
- nFusion Interactive LLC
From: Jason Zisk <ziskj@op...>  2001-01-19 20:40:25

I decided to use an untextured object for testing today and realized the lighting on the object is very screwed up. As far as I know I'm generating the vertex normals correctly: I grab the normals of all the triangles that use that vertex and average them.

Is this wrong? I think it looks right when I have a texture on the object but as soon as it's untextured it looks wrong. Here are two shots of the problem:

http://www.nfusion.com/normals1.jpg
http://www.nfusion.com/normals2.jpg

It seems to be creating a strange spiral effect, even on square edges.

Thanks for any pointers,

- Jason Zisk
- nFusion Interactive LLC
From: Jason Zisk <ziskj@op...>  2001-01-19 18:13:07

I did take a look at a few other algorithms, but A* really is the best of the bunch. We have tons of nodes on a huge map, so all the time most of the other algos spend going through nearly all the nodes is quite a waste. Plus A* has the nice property of assigning cost. As someone else mentioned, the cost part is also the hardest to figure out; for now I think I'm just going to use the distance between the two nodes as the cost, with a bit added if it's going uphill.

Does anyone have any suggestions for estimating distance to the goal when you are in 3D space and using a graph? I don't really see any way to do it besides the actual distance between them. I know that most of the time it will be a pretty large underestimation.

Thanks,

- Jason Zisk
- nFusion Interactive LLC

----- Original Message -----
From: "Matthew Davies" <MDavies@...>
To: <gdalgorithms-list@...>
Sent: Friday, January 19, 2001 9:08 AM
Subject: RE: [Algorithms] A* with graphs

> Have you thought about other algorithms? Depending on what you're doing,
> you may find Dijkstra's algorithm useful. It's quicker and simpler than
> A* and takes less memory. However, it does have cons. Check it out.
>
> http://odin.ee.uwa.edu.au/~morris/Year2/PLDS210/dijkstra.html
>
> or for a Java animation:
>
> http://carnap.ss.uci.edu/java/dijkstra/DijkstraApplet.html
>
> Cheers!
>
> Matt Davies.
>
> > -----Original Message-----
> > From: Jason Zisk [mailto:ziskj@...]
> > Sent: Friday, January 19, 2001 00:45
> > To: Algorithms List
> > Subject: [Algorithms] A* with graphs
> >
> > Hi everyone, I'm looking for a reference, code, etc. on using the A*
> > algorithm on a graph (nodes with information on how they connect to
> > each other).
> >
> > All the information about A* I could find was related to grids, which
> > is semi-useful but I'd like something more. It's funny, many of the
> > sites mention A* was developed for use on graphs, then later modified
> > to work with grids, but nobody seems to use it on graphs anymore. :)
> >
> > I'm doing this for AI navigation. I have a huge list of nodes with
> > connectivity info and I want a nice fast algo that will give me a path
> > from one node to another.
> >
> > Thanks,
> >
> > - Jason Zisk
> > - nFusion Interactive LLC
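The heuristic Jason describes, straight-line distance to the goal, never overestimates the true path cost, so A* stays admissible even when (as he notes) it is a large underestimate. A sketch of A* over an explicit adjacency-list graph; all names here are illustrative, not from any referenced codebase.

```cpp
#include <cassert>
#include <cmath>
#include <queue>
#include <utility>
#include <vector>

struct Node { float x, y, z; };
struct Edge { int to; float cost; };

// Admissible heuristic: straight-line distance in 3D.
float euclid(const Node& a, const Node& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Returns the cost of the cheapest path from start to goal, or -1 if none
// exists.  The open set is ordered by f = g + h, smallest first; stale
// queue entries are tolerated because g[] only ever improves.
float aStar(const std::vector<Node>& nodes,
            const std::vector<std::vector<Edge>>& adj, int start, int goal) {
    const float INF = 1e30f;
    std::vector<float> g(nodes.size(), INF);
    using QE = std::pair<float, int>;                         // (f, node)
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> open;
    g[start] = 0.0f;
    open.push({euclid(nodes[start], nodes[goal]), start});
    while (!open.empty()) {
        int n = open.top().second;
        open.pop();
        if (n == goal) return g[n];                           // done
        for (const Edge& e : adj[n]) {
            float ng = g[n] + e.cost;
            if (ng < g[e.to]) {                               // found a better g
                g[e.to] = ng;
                open.push({ng + euclid(nodes[e.to], nodes[goal]), e.to});
            }
        }
    }
    return -1.0f;  // open set exhausted: goal unreachable
}
```

Note the heuristic stays admissible only while edge costs are at least the straight-line distance; an added uphill penalty keeps that property, a downhill *discount* below the distance would not.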
From: Matthew Davies <MDavies@ac...>  2001-01-19 17:33:54

No, you don't, and here's why: you can cache previous results, as they do not take up much memory. A single Dijkstra graph has one goal but many start positions (in fact, every point along the paths to the goal), so by caching them up, other objects can use the same graph. Since they are also quite quick to compute, it's not so much of a burden. For a game I worked on, the creation of a new graph was stretched across multiple frames if necessary, sort of simulating the fact that the object was looking for a way to get somewhere. This approach works great for indoor environments.

Best regards,
Matt.

> -----Original Message-----
> From: Charles Bloom [mailto:cbloom@...]
> Sent: Friday, January 19, 2001 17:27
> To: gdalgorithms-list@...
> Subject: RE: [Algorithms] Detecting unfindable paths
>
> But Matt, you must store a different Dijkstra graph for each starting
> location!? I guess this is useful in a "WarCraft"-style game where
> you'll repeatedly be sending units out from the same location to
> different endpoints. It doesn't seem very useful in action games, where
> NPCs and PCs are in different places all the time.
>
> Using a *hierarchical* waypoint graph seems like the best way to go.
> When someone tells you to walk somewhere very far away, you look at
> the largest-scale waypoint graph, and find waypoints whose circles
> or spheres include the endpoints, then find a path in that graph.
> Then each waypoint has a subgraph in the next level of the tree which
> you use to find your path from your actual current location to the
> first waypoint, etc. It seems like the hierarchy should make this
> O(M*log(N)), where M is the actual number of nodes you must walk and
> N is the total number of nodes in the graph. Hmm.. I must be smoking
> crack cuz that's too good to be true.
>
> Is there a way to randomize A* so that it has this order? E.g. A* can
> be O(M) in easy cases, but can also be O(N) in bad cases...
From: Charles Bloom <cbloom@cb...>  2001-01-19 17:25:49

But Matt, you must store a different Dijkstra graph for each starting location!? I guess this is useful in a "WarCraft"-style game where you'll repeatedly be sending units out from the same location to different endpoints. It doesn't seem very useful in action games, where NPCs and PCs are in different places all the time.

Using a *hierarchical* waypoint graph seems like the best way to go. When someone tells you to walk somewhere very far away, you look at the largest-scale waypoint graph, and find waypoints whose circles or spheres include the endpoints, then find a path in that graph. Then each waypoint has a subgraph in the next level of the tree which you use to find your path from your actual current location to the first waypoint, etc. It seems like the hierarchy should make this O(M*log(N)), where M is the actual number of nodes you must walk and N is the total number of nodes in the graph. Hmm.. I must be smoking crack cuz that's too good to be true.

Is there a way to randomize A* so that it has this order? E.g. A* can be O(M) in easy cases, but can also be O(N) in bad cases...

At 04:50 PM 1/19/2001 -0000, you wrote:
>Hi,
>
>Dijkstra's Algorithm makes it really easy to detect unfindable paths. I
>find that generating node values for a particular path-finding session,
>then storing them in a cache gives you a really quick and memory-efficient
>pathfinding solution. Check out the links I put in a previous mail to see
>what I mean. Those links were found very quickly using Google and
>searching for Dijkstra.
>
>Best regards,
>Matt Davies
>
>> -----Original Message-----
>> From: Javier Arevalo [mailto:jare@...]
>> Sent: Friday, January 19, 2001 15:31
>> To: GDAlg
>> Subject: [Algorithms] Detecting unfindable paths
>>
>> Hi!
>>
>> A* can be quite fast, but if you want it to find very large paths in
>> huge maps then it will also take a very long time before it gives up
>> searching.
From: Matthew Davies <MDavies@ac...>  2001-01-19 16:50:46

Hi, Dijkstra's Algorithm makes it really easy to detect unfindable paths. I find that generating node values for a particular path finding session, then storing them in a cache gives you a really quick and memory efficient pathfinding solution. Check out the links I put on a previous mail to see what I mean. Those links were found very quickly using Google and searching for Dijkstra. Best regards, Matt Davies > Original Message > From: Javier Arevalo [mailto:jare@...] > Sent: Friday, January 19, 2001 15:31 > To: GDAlg > Subject: [Algorithms] Detecting unfindable paths > > > Hi! > > A* can be quite fast but if you want it to find very large > paths in huge > maps then it will also take a very long time before it gives > up searching. > > Does anyone have some good references to pathfinding techniques for > detecting the situations, in very large and complex maps, > where there does > not exist a path at all? Imagine Age Of Empires in an islands (or even > worse, a coasttocoast map). Even worse, different units can > pass over > different kinds of terrain. so some units may have a valid > path while others > don't. > > Ideas I have been playing with in my head, all of them with their own > advantages and limitations: > >  Limit the amount of searching A* does, once the limit is > reached choose > the best path so far, go there and try again from there. For > convoluted maps > this may compromise the ability to find the map at all. What > information may > be good to use as a limiter? (Number of open nodes, length of > current path, > amount of exploded nodes, etc). > >  Creating a set of different 'zones'; all nodes in the map > belong some of > these zones simultaneously. We store whether each zone is > reachable from > each other zone, and before trying the path, we check the zones of the > origin and target points. Depending on the complexity of the > map, the number > of zones may vary from a few to quite a lot. 
> >  Storing some kind of 'major waypoint graph' not unlike > those described by > John Ratcliff and others in a previous thread here, and > having reachability > information among the waypoints. But how to select which > waypoints are most > appropriate to use? > >  Doing the search as a background process so the game's not > frozen while it > searches. This can grow very memoryintensive if many units > are finding > paths simultaneously. Also the units will have to hesitate > and change minds > if they start following the (current) best path before the search is > complete. > >  Have a lowerresolution version of the map and use that to > build a coarser > path, then refine it using the highdetail version. I haven't > managed to fit > different terrain types & costs in this idea, or resolution > scales different > from 2:1. > >  Anything totally different? > > Thanks for any ideas and/or pointers, > Javier Arevalo > Pyro Studios > > > > _______________________________________________ > GDAlgorithmslist mailing list > GDAlgorithmslist@... > http://lists.sourceforge.net/lists/listinfo/gdalgorithmslist > 
From: Paul Johnson <Paul@Stainless.co.uk>  2001-01-19 16:39:02

As a quick test, I counted up the current recursion depth, purely for debugging purposes. Making the heuristic return a massive number and stop recursing based on this worked amazingly well. You can check for bigger than this massive number at the back end and decide whether to move at all or not. If so, the path was the best route so far. Paul Johnson Head of Core Tech. Stainless Technologies Ltd. Original Message From: Javier Arevalo [mailto:jare@...] Sent: 19 January 2001 15:31 To: GDAlg Subject: [Algorithms] Detecting unfindable paths Hi! A* can be quite fast but if you want it to find very large paths in huge maps then it will also take a very long time before it gives up searching. Does anyone have some good references to pathfinding techniques for detecting the situations, in very large and complex maps, where there does not exist a path at all? Imagine Age Of Empires in an islands (or even worse, a coasttocoast map). Even worse, different units can pass over different kinds of terrain. so some units may have a valid path while others don't. Ideas I have been playing with in my head, all of them with their own advantages and limitations:  Limit the amount of searching A* does, once the limit is reached choose the best path so far, go there and try again from there. For convoluted maps this may compromise the ability to find the map at all. What information may be good to use as a limiter? (Number of open nodes, length of current path, amount of exploded nodes, etc).  Creating a set of different 'zones'; all nodes in the map belong some of these zones simultaneously. We store whether each zone is reachable from each other zone, and before trying the path, we check the zones of the origin and target points. Depending on the complexity of the map, the number of zones may vary from a few to quite a lot.  
- Storing some kind of 'major waypoint graph' not unlike those described by John Ratcliff and others in a previous thread here, and having reachability information among the waypoints. But how to select which waypoints are most appropriate to use?

- Doing the search as a background process so the game's not frozen while it searches. This can grow very memory-intensive if many units are finding paths simultaneously. Also the units will have to hesitate and change their minds if they start following the (current) best path before the search is complete.

- Have a lower-resolution version of the map and use that to build a coarser path, then refine it using the high-detail version. I haven't managed to fit different terrain types & costs in this idea, or resolution scales different from 2:1.

- Anything totally different?

Thanks for any ideas and/or pointers,
Javier Arevalo
Pyro Studios

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
http://lists.sourceforge.net/lists/listinfo/gdalgorithmslist
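Javier's first idea (cap the search and keep the best partial path, as in Paul's reply above) can be sketched as a node-expansion budget bolted onto a standard A* loop. This is a hypothetical illustration, not code from the thread - the grid encoding (0 = walkable, 1 = blocked), the unit step costs, and the `astar_budget` name are all invented for the example:

```python
import heapq

def astar_budget(grid, start, goal, budget):
    """A* on a 4-connected grid that expands at most `budget` nodes.
    Returns (reached, path); if the budget runs out, `path` leads to the
    explored node closest to the goal - the 'best path so far'."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]      # (f, g, node)
    parent = {start: None}
    g = {start: 0}
    best = start
    expanded = 0
    while open_heap and expanded < budget:
        f, gc, node = heapq.heappop(open_heap)
        if gc > g[node]:
            continue                        # stale heap entry
        expanded += 1
        if h(node) < h(best):
            best = node                     # remember closest approach
        if node == goal:
            return True, _walk(parent, node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                ng = gc + 1
                if ng < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = ng
                    parent[(nx, ny)] = node
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return False, _walk(parent, best)       # budget exhausted (or no path)

def _walk(parent, node):
    """Rebuild the path from the parent links, start first."""
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]
```

As the thread notes, on convoluted maps the returned partial path may head into a dead end, so the caller has to be prepared to re-plan from wherever the partial path led.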
From: Jeff Lander <jeffl@te...> - 2001-01-19 16:33:30

Ville,

That sounds like a very good method. I am now using a similar system that I will be talking about at my GDC session this year. The key difference is that I classify the precalculated edge tables based on the angle between the edges and stored cameras. I sort based on the edge angle relative to the camera. At runtime I check the angle between the real-time camera and the object and select the closest match. The difference in angle determines how deep to look in the sorted edge structure.

Jeff

At 01:49 PM 1/19/2001 +0200, you wrote:

>I've been working quite a bit with various silhouette extraction algorithms recently (as our visibility library is solely based on silhouettes).. I finished yesterday with the testing and optimization of our newest algorithm, and since it works wonders, I'd like to share it. It's a modified version of the algorithm I posted a couple of weeks ago on this list.
>
>The algorithm works solely using _edges_ of meshes, so it does not need triangle data at runtime. As a preprocess, we find all edges of a triangle mesh that are _not_ shared by coplanar surfaces, i.e. all edges that can potentially contribute to a silhouette. For each such edge we store indices to the vertices forming the edge and indices to the plane equations of the 1 or 2 connected surfaces. If there are more than two surfaces connected to an edge (bad modeling!) we just duplicate the edge.
>
>* * *
>
>The actual silhouette extraction works in two parts: caching "incremental" silhouettes from certain viewpoints and "extracting" silhouettes from the incremental representations.
>
>Creating an incremental representation is very straightforward. First, we transform the camera position into the model's space ("invCamPos"). Then we compute the distance from invCamPos to each of the planes of the model (as the plane equations are "normalized" as a preprocess, this work is a dot product per plane).
>After this we traverse all edges of the model and
>
>a) classify them into two groups (see below)
>b) compute the smaller _absolute_ distance to the plane(s) connected to the edge ("visual event distance")
>
>The edges are classified into two groups, the "in" group and the "out" group, based on whether the edge is a silhouette edge as viewed from invCamPos ("in") or a non-silhouette edge ("out").
>
>The smaller absolute distance (i.e. fabs() the plane distance values and select the smaller) indicates the _minimum_ distance from invCamPos at which a "visual event" can happen, i.e. the status of the edge may change from silhouette -> non-silhouette or vice versa.
>
>Then we sort all "in" edges and all "out" edges separately (based on the visual event distance).
>
>After this we select the N closest "out" edges (I use N = number of "in" edges * 2). We allocate memory for all of the "in" edges and N "out" edges and store pointers to the edges and the "visual event distance" values (for a total of 8 bytes per cached edge). Then we compute one more value, the largest "out" edge distance in the selection (we call this "radius"), and store this value and "invCamPos" in the structure.
>
>The data structure above thus contains a silhouette as viewed from invCamPos ("in" edges) and a set of edges along with "visual event distances" up to a certain distance ("radius").
>
>* * *
>
>The silhouette extraction part is trivial and very fast. We transform the camera position into the model's space and find *any* cached representation that satisfies the equation (invCamPos - cached->invCamPos).length() <= cached->radius. If there are multiple cached entries that satisfy the equation, we choose the one with the smallest distance between the camera positions.
>
>We compute the distance between the current camera position and the cached structure's camera position and call this value "dist".
>
>Then all we have to do is to traverse the edges in the cached structure.
>For "in" edges we compare "dist" to the event distance. If the current distance is smaller, we automatically select the edge (as the status of the edge cannot change). If it's greater, we perform two dot products (i.e. camera position vs. plane equations connected to the edge) to classify the edge as silhouette or non-silhouette.
>
>For "out" edges (which are in sorted order, remember?) we have to perform the plane equation tests (two tests per edge). However, we only need to traverse this list until we reach a visual event distance that is >= "dist".
>
>We additionally use a simple vertex cache mechanism for removing duplicate vertices.
>
>The end result is that if we have a cached silhouette, the absolute worst-case work per extraction ("dist" == "radius") == 3*numSilhouetteEdges*2 dot products. The best-case scenario (i.e. "dist" == 0) doesn't require any dot products, just copying the "in" edges.
>
>* * *
>
>The end result is that we get exact silhouettes, with little computation time and a minimal number of vertices (i.e. shared vertices appear only once). I've replaced the silhouette extraction algorithm of Umbra with this and have got very nice results. On an x86 box, we spend an average of ~200 cycles per _output edge_ for the entire caching/extraction process. That's 5M _output edges_ per second on a 1GHz box... The algorithm scales well when there's not that much camera coherence, is capable of sharing silhouettes when models are heavily instanced (or there are mirrors in the scene) and doesn't consume much memory (I've limited the total cache size to 256kB).
>
>Although we use the algorithm for extracting occluder silhouettes, it should work very nicely for shadow frustum extraction or perhaps some cartoon rendering stuff.
>
>cheers,
>wili
>
>Ville Miettinen
>http://surrender3d.com/umbra

Jeff Lander
Game Technology Seminars
Darwin 3D
Jan 30 - Feb 2, 2001
http://www.darwin3d.com
http://www.techsem.com
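The per-edge test Ville and Jeff are discussing boils down to two signed plane distances. A rough sketch, with hypothetical helper names and plane equations assumed pre-normalized as in the post:

```python
def plane_dist(plane, p):
    """Signed distance of point p to plane (a, b, c, d) with a unit normal."""
    a, b, c, d = plane
    return a * p[0] + b * p[1] + c * p[2] + d

def classify_edge(plane_a, plane_b, cam):
    """Two dot products against the edge's connected face planes.
    The edge is a silhouette edge ('in') when the two faces disagree
    about which side the camera is on; the 'visual event distance' is
    the smaller absolute plane distance - the camera must move at least
    that far before the edge's status can change."""
    da = plane_dist(plane_a, cam)
    db = plane_dist(plane_b, cam)
    is_silhouette = (da >= 0.0) != (db >= 0.0)
    event_dist = min(abs(da), abs(db))
    return is_silhouette, event_dist
```

For an edge shared by an up-facing and a side-facing face, a camera below one plane and above the other marks it "in", while a camera on the same side of both marks it "out".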
From: Tom Forsyth <tomf@mu...> - 2001-01-19 16:02:32

> From: Javier Arevalo [mailto:jare@...]
>
> Hi!
>
> A* can be quite fast but if you want it to find very large paths in huge
> maps then it will also take a very long time before it gives up searching.
>
> Does anyone have some good references to pathfinding techniques for
> detecting the situations, in very large and complex maps, where there does
> not exist a path at all? Imagine Age Of Empires in an islands map (or even
> worse, a coast-to-coast map). Even worse, different units can pass over
> different kinds of terrain, so some units may have a valid path while
> others don't.
>
> Ideas I have been playing with in my head, all of them with their own
> advantages and limitations:
>
> - Limit the amount of searching A* does; once the limit is reached, choose
> the best path so far, go there and try again from there. For convoluted
> maps this may compromise the ability to find the path at all. What
> information may be good to use as a limiter? (Number of open nodes, length
> of current path, number of expanded nodes, etc.)

This is probably quite good for short-range navigation. For longer-range stuff, use one of the methods below (which may not give optimal paths at short ranges).

> - Creating a set of different 'zones'; all nodes in the map belong to some
> of these zones simultaneously. We store whether each zone is reachable
> from each other zone, and before trying the path, we check the zones of
> the origin and target points. Depending on the complexity of the map, the
> number of zones may vary from a few to quite a lot.
>
> - Storing some kind of 'major waypoint graph' not unlike those described
> by John Ratcliff and others in a previous thread here, and having
> reachability information among the waypoints. But how to select which
> waypoints are most appropriate to use?

The two above sound like the same thing to me.
Find the closest waypoints to where you are and where you want to be, then A* through the waypoints, and then "smooth" your path between those waypoints.

> - Doing the search as a background process so the game's not frozen while
> it searches. This can grow very memory-intensive if many units are finding
> paths simultaneously. Also the units will have to hesitate and change
> their minds if they start following the (current) best path before the
> search is complete.

I can't see much wrong with that - your units are human (or at least fallible), so allowing them to instantly plot completely optimal paths from one end of the map to the other seems like overkill.

> - Have a lower-resolution version of the map and use that to build a
> coarser path, then refine it using the high-detail version. I haven't
> managed to fit different terrain types & costs in this idea, or resolution
> scales different from 2:1.

Well, this is the same as the zone map idea, but with regularly-sized zones. It runs into problems if there are two areas not reachable from each other in the same big zone (e.g. a thin wall).

> - Anything totally different?

A hierarchical zone map may help with "can I get here at all" questions, though it may not help with "what is the fastest path" questions (it may pick silly paths). So a combo of the two seems like a good idea.

If your map is dynamic, e.g. trees can be cut down to make paths, then a lot of these become difficult to use, because the zones and suchlike need recalculating in-game.

To my mind, you haven't addressed the biggest problem - how to get units to nav around each other correctly. Navving when you are the only unit on a map is a doddle - coping with other units blocking your path and deciding whether or not they will move out of the way in time seems like the hard bit.

> Thanks for any ideas and/or pointers,
> Javier Arevalo
> Pyro Studios

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author. 
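Tom's "can I get here at all" zone map can be precomputed with one flood fill per movement class, which is essentially the zone idea from the original post. A minimal sketch (the tile encoding and the `passable` predicate are invented for the example; a real map would also need incremental updates when terrain changes):

```python
from collections import deque

def label_zones(grid, passable):
    """Flood-fill the connected components ('zones') of a tile map for
    one movement class. `passable` decides whether that class can enter
    a tile. Two tiles are connected by *some* path iff their zone labels
    match, so 'does a path exist?' becomes a constant-time comparison."""
    rows, cols = len(grid), len(grid[0])
    zone = [[-1] * cols for _ in range(rows)]   # -1 = impassable/unvisited
    next_id = 0
    for sx in range(rows):
        for sy in range(cols):
            if zone[sx][sy] != -1 or not passable(grid[sx][sy]):
                continue
            # BFS flood fill from (sx, sy), labelling the whole component
            q = deque([(sx, sy)])
            zone[sx][sy] = next_id
            while q:
                x, y = q.popleft()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < rows and 0 <= ny < cols
                            and zone[nx][ny] == -1 and passable(grid[nx][ny])):
                        zone[nx][ny] = next_id
                        q.append((nx, ny))
            next_id += 1
    return zone

def path_exists(zone, a, b):
    """O(1) reachability query against a precomputed zone map."""
    za, zb = zone[a[0]][a[1]], zone[b[0]][b[1]]
    return za == zb and za != -1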
From: Javier Arevalo <jare@py...> - 2001-01-19 15:28:50

Hi!

A* can be quite fast but if you want it to find very large paths in huge maps then it will also take a very long time before it gives up searching.

Does anyone have some good references to pathfinding techniques for detecting the situations, in very large and complex maps, where there does not exist a path at all? Imagine Age Of Empires in an islands map (or even worse, a coast-to-coast map). Even worse, different units can pass over different kinds of terrain, so some units may have a valid path while others don't.

Ideas I have been playing with in my head, all of them with their own advantages and limitations:

- Limit the amount of searching A* does; once the limit is reached, choose the best path so far, go there and try again from there. For convoluted maps this may compromise the ability to find the path at all. What information may be good to use as a limiter? (Number of open nodes, length of current path, number of expanded nodes, etc.)

- Creating a set of different 'zones'; all nodes in the map belong to some of these zones simultaneously. We store whether each zone is reachable from each other zone, and before trying the path, we check the zones of the origin and target points. Depending on the complexity of the map, the number of zones may vary from a few to quite a lot.

- Storing some kind of 'major waypoint graph' not unlike those described by John Ratcliff and others in a previous thread here, and having reachability information among the waypoints. But how to select which waypoints are most appropriate to use?

- Doing the search as a background process so the game's not frozen while it searches. This can grow very memory-intensive if many units are finding paths simultaneously. Also the units will have to hesitate and change their minds if they start following the (current) best path before the search is complete.

- Have a lower-resolution version of the map and use that to build a coarser path, then refine it using the high-detail version. I haven't managed to fit different terrain types & costs in this idea, or resolution scales different from 2:1.

- Anything totally different?

Thanks for any ideas and/or pointers,
Javier Arevalo
Pyro Studios
From: Matthew Davies <MDavies@ac...> - 2001-01-19 14:08:38

Have you thought about other algorithms? Depending on what you're doing you may find Dijkstra's algorithm useful. It's quicker and simpler than A* and takes less memory. However, it does have cons. Check it out.

http://odin.ee.uwa.edu.au/~morris/Year2/PLDS210/dijkstra.html

or for a Java animation:

http://carnap.ss.uci.edu/java/dijkstra/DijkstraApplet.html

Cheers!
Matt Davies.

> -----Original Message-----
> From: Jason Zisk [mailto:ziskj@...]
> Sent: Friday, January 19, 2001 00:45
> To: Algorithms List
> Subject: [Algorithms] A* with graphs
>
> Hi everyone, I'm looking for a reference, code, etc. on using the A*
> algorithm on a graph (nodes with information on how they connect to each
> other).
>
> All the information about A* I could find was related to grids, which is
> semi-useful but I'd like something more. It's funny, many of the sites
> mention A* was developed for use on graphs then later modified to work
> with grids, but nobody seems to use it on graphs anymore. :)
>
> I'm doing this for AI navigation, I have a huge list of nodes with
> connectivity info and I want a nice fast algo that will give me a path
> from one node to another.
>
> Thanks,
>
> - Jason Zisk
> - nFusion Interactive LLC
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> http://lists.sourceforge.net/lists/listinfo/gdalgorithmslist
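For reference, Dijkstra's algorithm on an explicit adjacency list is short; it behaves like A* with a zero heuristic, which is where the "simpler" part comes from (no estimate of the remaining distance is needed). A sketch, with the graph shape invented for the example:

```python
import heapq

def dijkstra(graph, start, goal):
    """Dijkstra on an adjacency list {node: [(neighbor, cost), ...]}.
    Returns (total_cost, path); (inf, []) when the goal is unreachable."""
    dist = {start: 0}
    parent = {start: None}
    heap = [(0, start)]                      # (distance so far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                         # stale heap entry
        if node == goal:
            path = []
            while node is not None:          # rebuild path via parent links
                path.append(node)
                node = parent[node]
            return d, path[::-1]
        for nbr, cost in graph.get(node, ()):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                parent[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []
```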
From: Ville Miettinen <wili@hy...> - 2001-01-19 11:49:38

I've been working quite a bit with various silhouette extraction algorithms recently (as our visibility library is solely based on silhouettes).. I finished yesterday with the testing and optimization of our newest algorithm, and since it works wonders, I'd like to share it. It's a modified version of the algorithm I posted a couple of weeks ago on this list.

The algorithm works solely using _edges_ of meshes, so it does not need triangle data at runtime. As a preprocess, we find all edges of a triangle mesh that are _not_ shared by coplanar surfaces, i.e. all edges that can potentially contribute to a silhouette. For each such edge we store indices to the vertices forming the edge and indices to the plane equations of the 1 or 2 connected surfaces. If there are more than two surfaces connected to an edge (bad modeling!) we just duplicate the edge.

* * *

The actual silhouette extraction works in two parts: caching "incremental" silhouettes from certain viewpoints and "extracting" silhouettes from the incremental representations.

Creating an incremental representation is very straightforward. First, we transform the camera position into the model's space ("invCamPos"). Then we compute the distance from invCamPos to each of the planes of the model (as the plane equations are "normalized" as a preprocess, this work is a dot product per plane). After this we traverse all edges of the model and

a) classify them into two groups (see below)
b) compute the smaller _absolute_ distance to the plane(s) connected to the edge ("visual event distance")

The edges are classified into two groups, the "in" group and the "out" group, based on whether the edge is a silhouette edge as viewed from invCamPos ("in") or a non-silhouette edge ("out").

The smaller absolute distance (i.e. fabs() the plane distance values and select the smaller) indicates the _minimum_ distance from invCamPos at which a "visual event" can happen, i.e. the status of the edge may change from silhouette -> non-silhouette or vice versa.

Then we sort all "in" edges and all "out" edges separately (based on the visual event distance).

After this we select the N closest "out" edges (I use N = number of "in" edges * 2). We allocate memory for all of the "in" edges and N "out" edges and store pointers to the edges and the "visual event distance" values (for a total of 8 bytes per cached edge). Then we compute one more value, the largest "out" edge distance in the selection (we call this "radius"), and store this value and "invCamPos" in the structure.

The data structure above thus contains a silhouette as viewed from invCamPos ("in" edges) and a set of edges along with "visual event distances" up to a certain distance ("radius").

* * *

The silhouette extraction part is trivial and very fast. We transform the camera position into the model's space and find *any* cached representation that satisfies the equation (invCamPos - cached->invCamPos).length() <= cached->radius. If there are multiple cached entries that satisfy the equation, we choose the one with the smallest distance between the camera positions.

We compute the distance between the current camera position and the cached structure's camera position and call this value "dist".

Then all we have to do is to traverse the edges in the cached structure. For "in" edges we compare "dist" to the event distance. If the current distance is smaller, we automatically select the edge (as the status of the edge cannot change). If it's greater, we perform two dot products (i.e. camera position vs. plane equations connected to the edge) to classify the edge as silhouette or non-silhouette.

For "out" edges (which are in sorted order, remember?) we have to perform the plane equation tests (two tests per edge). However, we only need to traverse this list until we reach a visual event distance that is >= "dist".
We additionally use a simple vertex cache mechanism for removing duplicate vertices.

The end result is that if we have a cached silhouette, the absolute worst-case work per extraction ("dist" == "radius") == 3*numSilhouetteEdges*2 dot products. The best-case scenario (i.e. "dist" == 0) doesn't require any dot products, just copying the "in" edges.

* * *

The end result is that we get exact silhouettes, with little computation time and a minimal number of vertices (i.e. shared vertices appear only once). I've replaced the silhouette extraction algorithm of Umbra with this and have got very nice results. On an x86 box, we spend an average of ~200 cycles per _output edge_ for the entire caching/extraction process. That's 5M _output edges_ per second on a 1GHz box... The algorithm scales well when there's not that much camera coherence, is capable of sharing silhouettes when models are heavily instanced (or there are mirrors in the scene) and doesn't consume much memory (I've limited the total cache size to 256kB).

Although we use the algorithm for extracting occluder silhouettes, it should work very nicely for shadow frustum extraction or perhaps some cartoon rendering stuff.

cheers,
wili

Ville Miettinen
http://surrender3d.com/umbra

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of PeterPike Sloan
Sent: Wednesday, January 10, 2001 22:45
To: 'gdalgorithmslist@...'
Subject: RE: [Algorithms] Silhouettes in dual space

Hi, I have to second Charles' recommendation - I'm one of the authors of the interactive technical illustration paper (in particular I implemented the silhouette stuff in that paper) and Pedro's method is *much* better for the perspective case. He's actually improved it; there will be a paper at I3D this year that talks about it - you don't need all of the edges in the data structure, only the ones that could be silhouettes...
PeterPike Sloan

(Of course for non-static geometry things change - it's not clear if any of the preprocessing that's necessary would be better than brute force...)

-----Original Message-----
From: Charles Bloom [mailto:cbloom@...]
Sent: Wednesday, January 10, 2001 9:12 AM
To: gdalgorithmslist@...
Subject: Re: [Algorithms] Silhouettes in dual space

Hey Pierre, have you read Hoppe et al's paper on Silhouette Clipping? Their algorithm is as good as it gets, except for implementation details. In particular, they address the option of using the "backfacing" technique that you propose, and find that it's competitive with their technique, but requires about 2x as many edge tests in the end. Their algorithm is O(m log(n)) in the number of edges (m) in the silhouette, where (n) is the total number of edges, so log(n) is the expected depth of the tree. Note that m is O(sqrt(n)) in the total number of edges n.

At 11:28 AM 1/10/2001 +0100, you wrote:

>>Could you explain this statement please? If you have found the Silhouette,
>>surely finding the shadow volume is trivial?
>
>This has to do with screen perspective.
>
>To find the edges of a shadow volume you compute something like:
>
> Normal . Source >= 0
>
>(Normal = face normal, Source = light pos, . = dot product)
>
>To find the edges of a silhouette you perform standard backface culling and
>take the perspective into account:
>
> Normal . (Source - Base) >= 0
>
>(Source = camera pos, Base = any vertex on the face)
>
>Hoppe reports computing the second one is harder (cf. silhouette clipping),
>but that's exactly what the paper I mentioned does (and in O(log n), further
>improved by temporal coherence).
>
>This is highly related to backface culling, and it's not surprising to find
>traces of the dual space in other papers such as Kumar & Manocha's
>"Hierarchical Backface Culling". Unfortunately the way to get rid of the
>"point at infinity" problem is unclear to me.
> >By the way, I also found a cool method using Hoff's normal masks: >1) Compute your standard normal masks >2) Group your polygons according to the normal cluster ID >3) Perform your conservative backface culling in O(1). This gives >potentially visible clusters. >4) Invert the viewing direction and perform it again. This gives potentially >visible clusters from the back of the object, still in O(1). >5) Since the culling is conservative, you can expect the silhouette edges to >be in the clusters common to 3) and 4). Those are what I would call the >"silhouette clusters". >6) Now you can search for silhouette edges in a standard way, but only >taking the silhouette clusters into account. > >Alternatively, the silhouette clusters may be used in VDPM to quickly locate >regions of interest. > > >Disclaimer: all of this is theoretical and completely untested.... what do >you think ? > > > >Pierre Terdiman * Home: p.terdiman@... >Coder in the dark * Zappy's Lair: http://www.codercorner.com > > > > > > > > > >_______________________________________________ >GDAlgorithmslist mailing list >GDAlgorithmslist@... >http://lists.sourceforge.net/mailman/listinfo/gdalgorithmslist > >  Charles Bloom cb@... http://www.cbloom.com _______________________________________________ GDAlgorithmslist mailing list GDAlgorithmslist@... http://lists.sourceforge.net/mailman/listinfo/gdalgorithmslist _______________________________________________ GDAlgorithmslist mailing list GDAlgorithmslist@... http://lists.sourceforge.net/mailman/listinfo/gdalgorithmslist 
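The extraction step Ville describes - keep cached "in" edges outright while the camera hasn't moved past their event distance, and scan the sorted "out" list only up to "dist" - might look like this in outline. All names are invented for the sketch, and the `retest` callback stands in for the two plane dot products:

```python
def extract_silhouette(in_edges, out_edges, dist, retest):
    """Sketch of the cached-silhouette extraction from the post.
    `in_edges` / `out_edges` are (edge_id, event_distance) pairs;
    `out_edges` must be sorted by event distance. An edge whose event
    distance exceeds `dist` cannot have changed status since caching,
    so it needs no plane tests at all."""
    silhouette = []
    for edge, event in in_edges:
        # status cannot have flipped yet -> keep without testing;
        # otherwise re-classify with the two plane dot products
        if dist < event or retest(edge):
            silhouette.append(edge)
    for edge, event in out_edges:
        if event >= dist:
            break  # remaining 'out' edges cannot have become silhouettes
        if retest(edge):
            silhouette.append(edge)
    return silhouette
```

With `dist == 0` this degenerates to copying the "in" edges, matching the best-case behaviour described in the post.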
From: Paul Johnson <Paul@Stainless.co.uk> - 2001-01-19 11:15:40

Assuming you have code available (written or borrowed) that works on a grid, the modifications back to graphs are simple enough. There will be one recursive routine in there that calculates a heuristic cost from its own node to its parent. This is typically, as stated, some form of simple distance calculation directly between the two points. After stuffing the answer in the node structure's 'cost', it will call itself 8 more times, once for each neighbouring square. Just make it examine the connectivity in your tree/graph instead, and recurse down each of the connected nodes.

Paul Johnson
Head of Core Tech.
Stainless Technologies Ltd.

-----Original Message-----
From: Javier Arevalo [mailto:jare@...]
Sent: 19 January 2001 10:03
To: gdalgorithmslist@...
Subject: Re: [Algorithms] A* with graphs

The Commandos series uses exactly that technique, and I believe most Quake-style games do too: a graph of connected convex "sectors" is a perfect match for A*. What you need is:

- information on the cost of traversing each graph connection.
- a way to estimate the cost of reaching the target.

Most A* algos in games are used for navigation, and the target cost estimation is implemented in terms of current world coordinates and target world coordinates (either Euclidean distance, Manhattan or some other tweaked variations). Tweaking the way to calculate the cost of traversing nodes in the graph is the actual tricky part.

Javier Arevalo
Pyro Studios

----- Original Message -----
From: "Jason Zisk" <ziskj@...>
To: "Algorithms List" <gdalgorithmslist@...>
Sent: Friday, January 19, 2001 1:44 AM
Subject: [Algorithms] A* with graphs

> Hi everyone, I'm looking for a reference, code, etc. on using the A*
> algorithm on a graph (nodes with information on how they connect to each
> other).
>
> All the information about A* I could find was related to grids, which is
> semi-useful but I'd like something more. It's funny, many of the sites
> mention A* was developed for use on graphs then later modified to work
> with grids, but nobody seems to use it on graphs anymore. :)
>
> I'm doing this for AI navigation, I have a huge list of nodes with
> connectivity info and I want a nice fast algo that will give me a path
> from one node to another.
>
> Thanks,
>
> - Jason Zisk
> - nFusion Interactive LLC

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
http://lists.sourceforge.net/lists/listinfo/gdalgorithmslist
From: Javier Arevalo <jare@py...> - 2001-01-19 10:00:55

The Commandos series uses exactly that technique, and I believe most Quake-style games do too: a graph of connected convex "sectors" is a perfect match for A*. What you need is:

- information on the cost of traversing each graph connection.
- a way to estimate the cost of reaching the target.

Most A* algos in games are used for navigation, and the target cost estimation is implemented in terms of current world coordinates and target world coordinates (either Euclidean distance, Manhattan or some other tweaked variations). Tweaking the way to calculate the cost of traversing nodes in the graph is the actual tricky part.

Javier Arevalo
Pyro Studios

----- Original Message -----
From: "Jason Zisk" <ziskj@...>
To: "Algorithms List" <gdalgorithmslist@...>
Sent: Friday, January 19, 2001 1:44 AM
Subject: [Algorithms] A* with graphs

> Hi everyone, I'm looking for a reference, code, etc. on using the A*
> algorithm on a graph (nodes with information on how they connect to each
> other).
>
> All the information about A* I could find was related to grids, which is
> semi-useful but I'd like something more. It's funny, many of the sites
> mention A* was developed for use on graphs then later modified to work
> with grids, but nobody seems to use it on graphs anymore. :)
>
> I'm doing this for AI navigation, I have a huge list of nodes with
> connectivity info and I want a nice fast algo that will give me a path
> from one node to another.
>
> Thanks,
>
> - Jason Zisk
> - nFusion Interactive LLC
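The two ingredients listed above (a traversal cost per connection, plus a target-cost estimate computed from world coordinates) are all a graph A* needs. A minimal sketch, where the adjacency-list shape, the node positions, and the Euclidean heuristic are assumptions for the example:

```python
import heapq
import math

def astar_graph(edges, pos, start, goal):
    """A* on an explicit graph. `edges` maps node -> [(neighbor, cost)];
    `pos` maps node -> world (x, y), used only by the heuristic.
    Returns (total_cost, path); (inf, []) when unreachable."""
    def h(n):  # estimated cost to the target: straight-line distance
        (x1, y1), (x2, y2) = pos[n], pos[goal]
        return math.hypot(x1 - x2, y1 - y2)

    g = {start: 0.0}
    parent = {start: None}
    heap = [(h(start), 0.0, start)]          # (f = g + h, g, node)
    while heap:
        f, gc, node = heapq.heappop(heap)
        if gc > g[node]:
            continue                         # stale heap entry
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return gc, path[::-1]
        for nbr, cost in edges.get(node, ()):
            ng = gc + cost
            if ng < g.get(nbr, float("inf")):
                g[nbr] = ng
                parent[nbr] = node
                heapq.heappush(heap, (ng + h(nbr), ng, nbr))
    return float("inf"), []
```

A grid is just the special case where `edges` enumerates the 4 or 8 neighbouring squares, which is the point made elsewhere in the thread: there is no conceptual difference between the two.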
From: PeterPike Sloan <ppsloan@mi...> - 2001-01-19 07:51:28

One other name to look at is Mirtich - he's had several papers in this area:

http://www.merl.com/people/mirtich/index.html
http://www.cs.berkeley.edu/~mirtich/impulse.html

PeterPike Sloan

(John Snyder has also done some of this in the past: http://www.gg.caltech.edu/papers/collideabstract.html)

-----Original Message-----
From: Chris Hecker [mailto:checker@...]
Sent: Thursday, January 18, 2001 11:06 AM
To: gdalgorithmslist@...; gdalgorithmslist@...
Subject: Re: [Algorithms] Contact Determination

>I'm currently writing a physics simulator and suffer from the problem of
>contact determination.
...
>But I wasn't able to use the algorithm to compute multiple contact points.

This is an incredibly hard problem, and unfortunately, it's also an unsolved incredibly hard problem. Last year's physics seminar was totally dedicated to this topic, and the consensus from all the researchers and attendees after two 10-hour days is that there's still no algorithm that works very well. :)

Your best bet would be to find somebody who has a copy of the proceedings from the physics seminar from last year, since it has all the relevant papers in it, including some that weren't published anywhere else. Unfortunately, we don't have any left.

The researchers you're going to want to look for are Vanecek, Bouma, Jing Xiao, Baraff (he hints about his algorithm in his contact papers), and Cremer. A trip through CiteSeer should help. Xiao is the only person working on this right now, and I don't think even she's doing it very actively.

The algorithms range from iterative hacks to attempts at solving the full regularized boolean set operations. Baraff's algorithm (used by Maya) is a little unclear, but he says he's just examining local areas and trying to figure things out that way. His works for concave objects.

We're going to talk a bit about calculating manifolds at the seminar this year.
It's mostly going to be manifolds between soft bodies and rigid bodies (since the focus is on fluids and soft bodies for special effects). Obviously soft bodies make it even harder (there's a paper by Snyder on doing this with lots of interval arithmetic).

Chris

Game Technology Seminars, less than two weeks away!
Physics & Graphics
http://www.techsem.com

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
http://lists.sourceforge.net/lists/listinfo/gdalgorithmslist
From: Akbar A. <syedali011@ea...> - 2001-01-19 04:07:35

>However, I'm curious, where else does everyone regularly look for info?

http://www.angelfire.com/ab3/nobody/links.html

each link will follow to other links, etc... i don't have david baraff's home page as a link, but i wouldn't doubt if jos stam had a link to his page on his site...
http://www.cs.cmu.edu/afs/cs.cmu.edu/user/baraff/www/baraffhome.html

fredo durand has a very good links section as well at his site. then there is acm as well.. but google.com can usually track the papers for you.

>directx mailing list not a little while ago,
>that I haven't been able to locate at sourceforge.

for Direct3D stuff, this is the place to go. the api is changing, and the traffic is usually very specific/on topic/a good place to learn tricks for D3D, and the sdk/doc is pretty specific as well.

>any suggestions for other places to check regularly/other lists to
>join? What are your faves? Especially with 3D game programming/dev in mind.

comp.graphics.algorithms, comp.graphics.rendering.renderman, OpenGL and the advanced one, this one and the directX one. http://www.shugashack.com, http://www.voodooextreme.com, and cube.ign.com are usually where i frequently visit. http://unreal.epicgames.com/ isn't bad, cause their team has written articles on their engine, and you can learn a lot from it.. there's a lot of stuff on the web, google.com is your friend, use it, and use it a lot. http://www.nvidia.com/developer is another good place, ati has a good one as well, but the url is very ugly...

hope this helps,
akbar A.
;vertexabuse.cjb.net or ;www.vertexabuse.com cjb goes down once in a while

"necessity is the mother of strange bedfellows" - ninja scroll

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of LordSachiel@...
Sent: Thursday, January 18, 2001 9:29 PM
To: gdalgorithmslist@...
Subject: [Algorithms] Web Resource Request

Hi, I've been reading this mailing list for a while now, and am learning new things with every digest.
And just as often you've given some great material elsewhere on the web. However, I'm curious, where else does everyone regularly look for info? I heard something about a directx mailing list not a little while ago, that I haven't been able to locate at sourceforge. Gamasutra has some great articles, although nothing too interesting lately in the programming category. Gamedev.net has picked up a lot since I started reading it a couple of months ago. But I still want more :) So, any suggestions for other places to check regularly/other lists to join? What are your faves? Especially with 3D game programming/dev in mind. 
From: <LordSachiel@ao...>  20010119 03:28:35

Hi, I've been reading this mailing list for a while now, and am learning new things with every digest. And just as often you've pointed to some great material elsewhere on the web. However, I'm curious, where else does everyone regularly look for info? I heard something about a DirectX mailing list not a little while ago, that I haven't been able to locate at SourceForge. Gamasutra has some great articles, although nothing too interesting lately in the programming category. Gamedev.net has picked up a lot since I started reading it a couple of months ago. But I still want more :) So, any suggestions for other places to check regularly/other lists to join? What are your faves? Especially with 3D game programming/dev in mind.
From: Jason Zisk <ziskj@op...>  20010119 02:32:47

I guess I just don't understand A* well enough. The site Dusty sent has some really nice pseudocode that isn't cluttered up with grid stuff, so I should be able to grasp it now. The way some sites explained the algorithm, it sounded like they were "optimizing" it for grids, and then of course the code was a total mess, so I didn't get any clues from there. Sorry for the blockheadedness.

- Jason Zisk
- nFusion Interactive LLC

----- Original Message -----
From: "David Notario" <dnotario@...>
To: <gdalgorithmslist@...>
Sent: Thursday, January 18, 2001 8:50 PM
Subject: RE: [Algorithms] A* with graphs

> > -----Original Message-----
> > From: Jason Zisk [mailto:ziskj@...]
> > Sent: Thursday, January 18, 2001 4:45 PM
> > To: Algorithms List
> > Subject: [Algorithms] A* with graphs
> >
> > Hi everyone, I'm looking for a reference, code, etc. on using the A*
> > algorithm on a graph (nodes with information on how they connect to
> > each other).
> >
> > All the information about A* I could find was related to grids, which
> > is semi-useful, but I'd like something more.
>
> I don't see the conceptual difference between A* in grids and graphs;
> maybe it's the cost heuristic you want to know more about.
>
> David
From: David Notario <dnotario@mi...>  20010119 02:09:58

> -----Original Message-----
> From: Jason Zisk [mailto:ziskj@...]
> Sent: Thursday, January 18, 2001 4:45 PM
> To: Algorithms List
> Subject: [Algorithms] A* with graphs
>
> Hi everyone, I'm looking for a reference, code, etc. on using the A*
> algorithm on a graph (nodes with information on how they connect to
> each other).
>
> All the information about A* I could find was related to grids, which
> is semi-useful, but I'd like something more.

I don't see the conceptual difference between A* in grids and graphs; maybe it's the cost heuristic you want to know more about.

David
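[Archive note: David's point above can be made concrete. A* needs nothing grid-specific; a grid is just a graph whose nodes happen to have at most 4 or 8 neighbors. A minimal sketch over an explicit adjacency list follows. This is not code from the thread; the graph, node names, and zero heuristic are made-up illustrations, and with h = 0 the search degenerates to Dijkstra's algorithm.]

```python
import heapq

def astar(graph, start, goal, heuristic):
    """A* over an explicit graph.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    heuristic(node): estimated cost from node to goal; it must never
    overestimate the true cost (be admissible) for the result to be optimal.
    Returns (path, cost), or (None, inf) if goal is unreachable.
    """
    # Priority queue ordered by f = g + h; g_cost holds the best known
    # cost from start to each node discovered so far.
    open_heap = [(heuristic(start), start)]
    g_cost = {start: 0}
    came_from = {}

    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            # Reconstruct the path by walking parent links back to start.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return list(reversed(path)), g_cost[goal]
        for neighbor, cost in graph.get(current, []):
            tentative = g_cost[current] + cost
            if tentative < g_cost.get(neighbor, float("inf")):
                # Found a cheaper route to neighbor; record and re-queue it.
                g_cost[neighbor] = tentative
                came_from[neighbor] = current
                heapq.heappush(
                    open_heap, (tentative + heuristic(neighbor), neighbor)
                )
    return None, float("inf")

# A tiny example graph (entirely made up, not a grid):
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 2)],
    "D": [],
}
path, cost = astar(graph, "A", "D", heuristic=lambda n: 0)
# path is ["A", "B", "C", "D"], cost is 4
```

The only place the grid-vs-graph distinction matters is the heuristic: on a grid you get Manhattan or Euclidean distance for free, while on an arbitrary graph you need some domain knowledge (straight-line distance between node positions, say), or you fall back to h = 0 as above.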