Re: [Algorithms] N-body processing
From: Pierre T. <p.t...@wa...> - 2000-08-18 22:24:28
> Dave and I are talking about the same thing here. Swept sphere volumes give
> you tighter fits, faster interference rejection, and MUCH faster proximity
> queries than OBBs. Particularly important if you're trying to stave off the
> poly^2 tests. I assumed from your topic you were talking about N-Body
> dynamic simulation, for which fast proximity queries will be extremely
> helpful.
>
> (re-)Read the paper "Fast Proximity Queries with Swept Sphere Volumes" at:
>
> You'll be glad you did, unless I'm missing your point. ;^)

I'm talking about N-Body dynamic simulation, but I'm not concerned with
reducing the poly^2 tests or even the BV^2 tests. Read the original post
again: I already have this part. For example, if I send my list of 1024
bounding boxes to the system, I get the correct list of possibly colliding
objects in output, and the detection already takes only a fraction of a
frame. So far, so good.

Now, my concern is this: how do I track the possibly colliding pairs? How do
I store them? How do I handle them? For the moment Adam seems to be the only
one who actually understood the problem: that's why he introduced the sparse
matrices.

Let's take an example. Assume the system (it could be your SSV system, giving
a fast answer to my proximity query) tells me pair (P,Q) is about to collide
- that is, the bounding volumes overlap. I need a way to:

1) know if they're overlapping for the first time (i.e. did they already
overlap the frame before?)

2) retrieve some cached values for this pair (separating vectors, supporting
vertices, contact features, whatever)

Alternatively, I need to know which pairs do *not* collide anymore!... so
that I can discard the previously cached values.

What I have here looks more like a storage problem, a database problem, a
design problem... it's actually not really a CD-related problem. :)

The possible solutions would be:

1) Allocate an array of N*N entries, each entry containing a pointer to a
temporary cache, or null if the pair is not active. Very fast access,
perfect. But for N=1024, forget it.

2) Put all those pointers in a linear list, using minimal RAM. P active
pairs, P entries in a linear array - I like that. But access becomes the
slow part. I can scan the whole list in search of the current pair, but
that's obviously bad: the number of active pairs can get pretty high. I can
use some better schemes - insertion sort, bisection, or even a hash table to
access the array (see the sketch below) - but all in all it looks painful,
and I was curious about the existing standard ways to deal with it.

...now, the SSV article is very interesting nonetheless :)

Pierre
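[Editor's sketch] Below is a minimal C++ sketch of the hash-table variant of option 2: a
"pair manager" keyed on the unordered pair of object indices. The names
(PairManager, PairCache, touchPair, removeStalePairs) are illustrative
assumptions, not from any particular library. A lookup miss means the pair
overlaps for the first time; any pair not reported during the current frame
has its cache discarded afterwards, which answers both questions 1) and the
"not colliding anymore" case.

// Sketch of a pair manager: caches per-pair data between frames, keyed on
// the (unordered) pair of object indices reported by the broad phase.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <iterator>
#include <unordered_map>
#include <utility>

struct PairCache {
    // Whatever you want to persist between frames for a pair:
    // separating vector, supporting vertices, contact features...
    float separatingAxis[3] = {0.0f, 0.0f, 0.0f};
    std::uint32_t lastFrameTouched = 0;  // used to detect pairs that stopped overlapping
};

class PairManager {
public:
    // Returns the cache for pair (a, b), telling the caller whether the
    // pair is overlapping for the first time (no cache existed before).
    PairCache& touchPair(std::uint32_t a, std::uint32_t b,
                         std::uint32_t frame, bool& isNewPair)
    {
        auto [it, inserted] = mPairs.try_emplace(makeKey(a, b));
        isNewPair = inserted;
        it->second.lastFrameTouched = frame;
        return it->second;
    }

    // Discards caches of pairs that were not reported this frame,
    // i.e. pairs whose bounding volumes no longer overlap.
    void removeStalePairs(std::uint32_t frame)
    {
        for (auto it = mPairs.begin(); it != mPairs.end(); )
            it = (it->second.lastFrameTouched != frame) ? mPairs.erase(it)
                                                        : std::next(it);
    }

    std::size_t activePairCount() const { return mPairs.size(); }

private:
    // Order-independent 64-bit key so (P,Q) and (Q,P) map to the same entry.
    static std::uint64_t makeKey(std::uint32_t a, std::uint32_t b)
    {
        if (a > b) std::swap(a, b);
        return (static_cast<std::uint64_t>(a) << 32) | b;
    }

    std::unordered_map<std::uint64_t, PairCache> mPairs;
};

int main()
{
    PairManager pm;
    std::uint32_t frame = 1;

    // Frame 1: the broad phase reports (3, 7) and (7, 3) -> one logical pair.
    bool isNew = false;
    pm.touchPair(3, 7, frame, isNew);  // isNew == true, first overlap
    pm.touchPair(7, 3, frame, isNew);  // isNew == false, same cached entry
    pm.removeStalePairs(frame);
    std::printf("frame 1: %zu active pair(s)\n", pm.activePairCount());

    // Frame 2: the pair is no longer reported, so its cache is discarded.
    ++frame;
    pm.removeStalePairs(frame);
    std::printf("frame 2: %zu active pair(s)\n", pm.activePairCount());
    return 0;
}

For N=1024 objects this keeps memory proportional to the number of active
pairs P rather than N*N, at the cost of one hash lookup per reported pair.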