Thread: [Algorithms] Player collision detection (Page 2)
From: Dave H. <dh...@ma...> - 2002-04-02 03:50:41
|
Hi all,

I'm working on a collision detection system for my current project, and am wondering how others do this. I've fought with various BSP-based schemes in the past, but am using an AABB scheme for the current approach, and am doing a Nettle/Telemachos swept-ellipsoid sort of thing. After looking at various books/websites/etc., it seems like most of the more advanced collision detection stuff (e.g., all the separating axis stuff) does not deal with sweeping of volumes, but instead just does discrete sampling at different positions to determine whether a collision has occurred or not.

So, on to my questions:

1. Am I right in my interpretation of the more advanced systems?
2. Do people actually use these advanced systems in games?
3. If so, how do you deal with the possibility of missing a collision for a fast-moving object that might pass completely through another object in a single frame?

Thanks in advance,
-dh

________________________
Dave Hill
http://www.dgp.toronto.edu/~dh

----- Original Message -----
From: "Charles Bloom" <cb...@cb...>
To: "Game Dev Algorithms (E-mail)" <gda...@li...>
Sent: Monday, April 01, 2002 3:27 PM
Subject: RE: [Algorithms] object-space bump mapping

> One last note on this : object-space (or world-space) bump mapping
> behaves much better under vertex LOD than local (tangent) space lighting
> does, since the tangent space bases are in the verts. As long as your
> uv-mapping is "smooth", object/world-space normal maps behave extremely
> well under vertex tessellation/LOD change.
>
> ----------------------------------------------------
> Charles Bloom cb...@cb... www.cbloom.com
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_id=6188 |
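The tunneling problem in question 3 is exactly what a swept test avoids: instead of testing the volume at its end position, you solve for the first time of contact along the motion. A minimal sketch of the idea in C++, sweeping a sphere against a single plane (the `Vec3` type and plane representation here are assumptions for illustration, not from any particular engine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Sweep a sphere of radius r from c0 to c1 against the plane
// dot(n, p) = d (n assumed unit length).  Returns true on hit and
// writes the first time of contact in [0, 1] to tHit.  A discrete
// test at c1 alone could miss the hit entirely if the sphere passes
// through the plane within a single frame.
bool sweptSphereVsPlane(const Vec3& c0, const Vec3& c1,
                        float r, const Vec3& n, float d, float& tHit) {
    float dist0 = dot(n, c0) - d;   // signed distance at start
    float dist1 = dot(n, c1) - d;   // signed distance at end
    if (dist0 > r && dist1 > r) return false;     // stays in front
    if (dist0 < -r && dist1 < -r) return false;   // stays behind
    if (std::fabs(dist0) <= r) { tHit = 0.0f; return true; }  // already touching
    // The signed distance is linear in t, so interpolate the time at
    // which it reaches +r (or -r when approaching from behind).
    float denom = dist0 - dist1;    // nonzero here, by the early-outs above
    tHit = (dist0 - (dist0 > 0.0f ? r : -r)) / denom;
    return true;
}
```

The same clip-the-motion idea generalizes to sphere-vs-triangle and to the swept-ellipsoid scheme mentioned above (by working in a space where the ellipsoid becomes a unit sphere).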
From: Brian M. <bri...@bl...> - 2002-04-03 22:04:37
|
DH) After looking at various books/websites/etc., it seems like most of the more advanced collision detection stuff (e.g., all the separating axis stuff) does not deal with sweeping of volumes, but instead just does discrete sampling at different positions to determine whether a collision has occurred or not.

B) It's worth reading Dave Eberly's 'Method of Separating Axes' paper:

http://www.magic-software.com/Intersection3D.html

This covers objects with linear velocities. IIRC it's also in his book, '3D Game Engine Design'.

-Brian. |
From: Adam M. <amo...@dp...> - 2002-03-26 23:46:28
|
Last week I installed Shiny's Messiah game out of curiosity to see how their promise of scalability works out on my gf3. I set a frame rate of 25 fps to be maintained. While the level geometry still looks terrible, the character models are amazingly high detail. They look much better than bump mapped. Of course, when I put the game in wireframe mode, it is visible that the mesh is tessellated so densely that the wireframe makes for a solid white image. So effectively they are doing point-based rendering at that point, and the game reports > 20,000 trigs for a character.

Doesn't that make you think that instead of investing a lot of effort into pixel shading, an interesting alternative would be a system that just renders a lot of trigs with boring shading? Beyond a certain point you won't even need textures. Charles' problem of texture mapping doesn't exist when using vertex colors, at the extreme. Such a quick LOD system certainly sounds easier to implement than the Hoppe normal map approach, and there is no problem with silhouettes. You probably save a lot of render state changes when using simple shading, as opposed to the very expensive advanced shaders I have been working with (that don't let you get anywhere near theoretical max trig throughput). I admit that you will be able to render more characters with higher sum detail with the normal map approach, but with a lot of characters onscreen, you can't really pay that much attention to detail anyway.

--
-- Adam Moravanszky
http://n.ethz.ch/student/adammo/

> Well, one excuse for unique bump maps is you can then simplify your
> geometry, replace the missing polys with bump maps, and still have
> awesome detailed-looking models. Silhouettes are still an issue
> though. Displaced subdivision, anyone?
> |
From: Charles B. <cb...@cb...> - 2002-03-27 00:22:06
|
Really the only problem with this suggestion is that verts take much more memory than textures. To get 1 pixel resolution, you only need 1 texel, which takes 1 byte. But to get 1 pixel with verts, you need at least 1 vertex, which takes at least 16 bytes (3 floats + color)!!! So you have 16x more memory use for the same detail.

At 12:50 AM 3/27/2002 +0100, Adam Moravanszky wrote:
>Last week I installed Shiny's Messiah game out of curiosity to see how their
>promise of scalability works out on my gf3. [snip]

----------------------------------------------------
Charles Bloom cb...@cb... www.cbloom.com |
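The 16x figure above works out as a quick back-of-envelope calculation. A sketch (the 1-byte texel and 16-byte vertex sizes are the numbers quoted in the post, not universal: a palettized texture on one side, 3 floats of position plus a packed color on the other):

```cpp
#include <cstddef>

// Memory to cover a 512x512-pixel patch at one sample per pixel,
// comparing one texel per pixel against one vertex per pixel.
constexpr std::size_t kSamples     = 512 * 512;
constexpr std::size_t kTexelBytes  = 1;    // 1-byte palettized texel
constexpr std::size_t kVertexBytes = 16;   // 12 bytes position + 4 bytes color

constexpr std::size_t textureMemory = kSamples * kTexelBytes;   // 256 KB
constexpr std::size_t vertexMemory  = kSamples * kVertexBytes;  // 4 MB
```

So for the same screen-space detail, the vertex approach needs 4 MB where the texture needs 256 KB.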
From: Andrew P. <aj...@eu...> - 2002-03-28 14:13:30
|
Also, to avoid aliasing problems you would need different LODs anyway, or some strange runtime vertex filtering :)

Andrew

> -----Original Message-----
> From: gda...@li...
> [mailto:gda...@li...]On Behalf Of Charles Bloom
> Sent: 27 March 2002 00:19
> To: gda...@li...
> Subject: Re: [Algorithms] use lots of verts
>
> Really the only problem with this suggestion is that verts take much
> more memory than textures. [snip]

_____________________________________________________________________
This e-mail is confidential and may be privileged. It may be read, copied and used only by the intended recipient. No communication sent by e-mail to or from Eutechnyx is intended to give rise to contractual or other legal liability, apart from liability which cannot be excluded under English law.

This message has been checked for all known viruses by Star Internet delivered through the MessageLabs Virus Control Centre.

www.eutechnyx.com Eutechnyx Limited. Registered in England No: 2172322 |
From: Paul F. <pf...@at...> - 2002-03-27 10:24:34
|
Charles Bloom wrote:
> Really the only problem with this suggestion is that verts take much
> more memory than textures. To get 1 pixel resolution, you only need 1 texel,
> which takes 1 byte. But to get 1 pixel with verts, you need at least 1 vertex,
> which takes at least 16 bytes (3 floats + color)!!! So you have 16x more
> memory use for the same detail.

And the fact that it would be a *lot* slower. You will have moved the bottleneck to the transform stage, whereas you really want to be fill limited. Also, the only shading you can do is diffuse + specular, which is pretty poor. I'd definitely stick with textures for the foreseeable future. :-)

Cheers,
Paul. |
From: Eric H. <er...@ac...> - 2002-03-15 05:19:40
|
At 08:06 PM 3/14/2002 +0100, you wrote:
>I just read the Siggraph'02 paper from Cass Everitt / Mark Kilgard named
>"Practical and Robust Stenciled Shadow Volumes for Hardware-Accelerated
>Rendering".

Wonderful paper! Shadow volumes are now robust. Unfortunately it didn't make it into SIGGRAPH, but their loss is everyone's gain, since now it's out in time for GDC instead. Too practical for SIGGRAPH ;->

Anyway, I like the method, especially the key realization that moving the far plane backwards to infinity does not cost all that much precision normally. Personally, I've been using zfail and just kicking the shadow volume back cap and the far plane both out by the distance of the diagonal of the box bounding the scene - I don't think there's a major practical reason to kick it all the way to infinity; "makes a shorter vertex shader" is the only one I can think of. It's a lot more elegant to go to infinity, though (and extremely cool for parallel light sources, where the points meet at infinity and there's no need for a far cap!). Not going to infinity is actually important for my application, where orthographic views are needed - their technique, as they mention, only works for perspective views.

OK, here's a question to anyone out there: is there *any* way to render shadows onto transparent objects with shadow volumes? Since each pixel's stencil value can store only one shadow state, I can't think of anything good. Maybe Everitt's "depth-peeling" technique can somehow get used? Maybe there's some other way to do it?

>So this is the method based on "Carmack's reverse", etc.

They've updated the paper a bit today, as it took a day for me to fax Mark Kilgard a presentation on zfail from Bill Bilodeau dated May 1999, so I think he predates Carmack on this one. That said, Bilodeau's presentation is extremely hard to find (and zfail is mentioned on one PowerPoint slide in it) - I wish this finding had been publicized a lot more! Carmack definitely understood the problem well.

Eric |
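The "far plane at infinity" trick amounts to taking the limit of the standard perspective matrix as far goes to infinity: the two far-dependent entries collapse to -1 and -2*near. A sketch in OpenGL-style clip-space conventions (the row-major 4x4 layout here is chosen for readability and is an assumption, not taken from the paper):

```cpp
#include <cmath>

// Perspective projection with the far plane at infinity, so that
// shadow-volume far caps projected to infinity still rasterize with
// valid depth.  In the standard frustum matrix the (2,2) entry is
// -(far+near)/(far-near) and (2,3) is -2*far*near/(far-near); as
// far -> infinity these limits are -1 and -2*near respectively.
void perspectiveInfiniteFar(float fovyRadians, float aspect, float zNear,
                            float m[4][4]) {
    float f = 1.0f / std::tan(fovyRadians * 0.5f);
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            m[i][j] = 0.0f;
    m[0][0] = f / aspect;
    m[1][1] = f;
    m[2][2] = -1.0f;          // lim far->inf of -(far+near)/(far-near)
    m[2][3] = -2.0f * zNear;  // lim far->inf of -2*far*near/(far-near)
    m[3][2] = -1.0f;          // w = -z_eye
}
```

Points on the near plane still map to NDC depth -1, while points arbitrarily far away approach depth +1 without ever being clipped, which is the precision observation above: most of the depth range was already spent near the eye.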
From: Pierre T. <p.t...@wa...> - 2002-03-15 06:07:51
|
> Anyway, I like the method, especially the key realization that moving the > far plane backwards to infinity does not cost all that much precision > normally. Except when you're using a W-Buffer.... (sigh) P. |
From: Dan B. <dan...@mi...> - 2002-03-15 02:26:55
|
It's been a while since I've looked at this problem, but it seems to me that there really isn't a 'correct' way to do it. It falls into the category of normal filtering. To make a long story short: cos(A) + cos(B) != cos(A+B). Filtering is a linear operation, but we are doing a non-linear lighting calculation, so the math is really bogus. In order to really blend maps, you'd have to do the first, not the second.

Having said that, this not being the correct thing to do doesn't mean it won't work in your case. Even though the math is totally bogus, it definitely looks better if you turn filtering on with bump maps, and it might look OK just to average the normal.

Strange as it sounds, I think your best bet may be to use EMBOSS bump mapping. You can add heights all day and it will still be correct (in as much as Emboss is correct :) ). Emboss is more expensive and trickier to get good-looking results than dp3, but I think it might work in your case.

I think Sim is right, 1 and 3 are really the same thing. #2 isn't that different. Try all the approaches and see which one looks the best; in the end, that's all that matters...

Dan Baker
Microsoft, Direct3D

-----Original Message-----
From: Sim Dietrich [mailto:SDi...@nv...]
Sent: Wednesday, March 13, 2002 9:56 PM
To: 'Charles Bloom '; 'gda...@li... '
Subject: RE: [Algorithms] combining bump maps?

-----Original Message-----
From: Charles Bloom
To: gda...@li...
Sent: 3/13/2002 6:16 PM
Subject: [Algorithms] combining bump maps?

Is anyone combining bump maps on the fly in the pixel shader? How? To be more clear, what I'd like to do is have a base bump map for a surface, and then "add" on an additional one or more with different uv's. Ideally, if my bump maps were actually signed grey-scale heightmaps, I could just literally add them, but of course they're actually normal maps, so adding isn't so hot. NVidia has this "detail normal maps" demo, but there seems to be no description of what they're doing, and I can't decipher opengl register-combiner markup.

Some thoughts of possible ways to do it:

1. Add two normal maps and renormalize. This is pretty attractive, but also pretty expensive. eg. there's no way I can do this in one pass of generating the new normal map and also using it; I have to make it, write it out, and then read it back and use it in another pass.

** This is what the nvidia demo is using. It does a one-and-a-half-combiner version of a single-iteration Newton-Raphson. The whole thing should be a single pass on GF3+. If you used the final combiner, you may be able to get it single pass on GF1 also. For the calculation, you use the interpolated v vector as the initial guess, and perform one iteration to get really close to a normalized vector, as long as the two v vectors were < ~40 degrees apart. This was developed by Scott Cutler on our d3d driver team.

2. Lerp between two normals and just use as-is. This is cheap, but can be quite off unless the normals are quite similar.

** Well, this is what bilinear filtering a normal map does, so it can't be too bad for many textures.

3. Use some sort of alternate representation of normals, like a "dudv" map, and then just lerp and use as-is.

** a dudv map can be seen as a 2d projection of a 3d normal, so I guess it would reduce to the same problem in this case. |
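The add-and-renormalize trick with a single Newton-Raphson iteration, as described above, can be sketched in plain C++ (the combiner version operates per pixel; this is just the math, with an assumed `Vec3` type). Scaling by 0.5 * (3 - dot(n, n)) is one Newton step toward 1/sqrt(dot(n, n)) from an initial guess of 1, which is accurate when the blended normals are already near unit length:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Average two roughly unit-length normals and renormalize with one
// Newton-Raphson iteration for the reciprocal square root: for
// d = dot(n, n) near 1, n * 0.5 * (3 - d) is very close to n / sqrt(d).
// The approximation degrades as the inputs diverge (the post quotes
// < ~40 degrees apart as the useful range).
Vec3 blendAndRenormalize(const Vec3& a, const Vec3& b) {
    Vec3 n = { 0.5f * (a.x + b.x), 0.5f * (a.y + b.y), 0.5f * (a.z + b.z) };
    float d = n.x * n.x + n.y * n.y + n.z * n.z;  // dot(n, n), <= 1 here
    float s = 0.5f * (3.0f - d);                  // one Newton step for 1/sqrt(d)
    return Vec3{ n.x * s, n.y * s, n.z * s };
}
```

This is why option 1 collapses to a single pass on hardware with enough combiner stages: the "renormalize" is just a dot product, a scale, and a multiply, with no square root or division.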