Re: [Algorithms] GJK
From: Pierre T. <p.t...@wa...> - 2000-08-15 20:07:15
> Actually, I think your math is right (I had to write it in pencil :) ).
> The reason why it's not a big win is that with this formulation you
> require three multiplies for each term and then a dot product, as opposed
> to just one multiply per term with the dot product table lookups. This is
> assuming, of course, that multiplies are the expensive operations.

Maybe you're right. As always, I think I'll test both ways and pick the
fastest.

I asked some questions on comp.graphics.algorithms, where one can reach
Van Den Bergen (as well as John Nagle). I also searched my archives and
found some old messages by Nagle about GJK. Here's a big copy/paste of the
whole thing.....

Pierre

====================================================================

> Hi,
> I went through various papers in order to implement my own version of
> GJK, and there are some things I don't get:
>
> - in Johnson's distance subalgorithm, where does the Delta formula come
> from? I followed Gilbert's original paper and tried to re-derive it by
> expressing the determinant of matrix As, but there's always something
> wrong in the end. Is the demonstration available somewhere? I expect it
> can be found in Johnson's PhD thesis, but I can't find that one.
>
> - in Van Den Bergen's paper about GJK, page 5:
> "This subset X is the largest of all nonempty Z....". Largest? I would
> have said the smallest...? Typo?

Not a typo. The smallest nonempty Z would be any singleton subset of Y.
Take a pair of points Z = { x0, x1 }; then aff(Z) is a line. Let
v = lambda0 x0 + lambda1 x1, with lambda0 + lambda1 = 1, be the point on
aff(Z) closest to the origin. Iff lambda0 and lambda1 are positive, then
|v(conv(Z))| < |x0| and |v(conv(Z))| < |x1|, i.e. the line segment x0-x1
contains a point that is closer to the origin than either x0 or x1. The
fact that Z has to be a minimal set is expressed in the fact that the
lambdas have to be positive (not equal to zero).

> - The delta formula is something like:
>
> Delta(j)(X+yj) = Sigma(i in IX) (Delta(i)(X) * (yi | yk - yi | yj))
>
> ...where | is a dot product.
>
> Why don't they rewrite it that way:
>
> Delta(j)(X+yj) = (yk - yj) | Sigma(i in IX) (Delta(i)(X) * yi)
>
> ...which saves a lot of dot products, and makes all those caches useless?

Hmmm, this looks OK. It has been a while, so I'll need some more thought
on this one. I have some worries regarding the numerics of this
formulation. By doing the subtraction first you might cancel out a lot of
precision.

Gino
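
As an aside on the exchange above, this is roughly what the two
formulations look like side by side in code. It is only a sketch of the
algebra being discussed, not code from SOLID or any of the papers; Vec3,
the dotTable cache, the delta array and the index set IX are illustrative
names.

    // Sketch only: two ways of evaluating Johnson's Delta recursion.
    struct Vec3 { float x, y, z; };

    inline Vec3 operator+(const Vec3& a, const Vec3& b)
    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    inline Vec3 operator-(const Vec3& a, const Vec3& b)
    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    inline Vec3 operator*(float s, const Vec3& a)
    { return { s * a.x, s * a.y, s * a.z }; }
    inline float dot(const Vec3& a, const Vec3& b)
    { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Delta(j)(X+yj) = Sigma(i in IX) Delta(i)(X) * (yi|yk - yi|yj),
    // using the cache dotTable[i][j] = yi|yj: one multiply per term.
    float deltaCached(const float* delta, const int* IX, int n, int k, int j,
                      const float dotTable[4][4])
    {
        float sum = 0.0f;
        for (int t = 0; t < n; ++t) {
            int i = IX[t];
            sum += delta[i] * (dotTable[i][k] - dotTable[i][j]);
        }
        return sum;
    }

    // The rewrite: Delta(j)(X+yj) = (yk - yj) | Sigma(i in IX) Delta(i)(X) * yi.
    // No dot product cache, but three multiplies per term plus one final
    // dot product, and the early (yk - yj) subtraction Gino is worried about.
    float deltaFactored(const float* delta, const int* IX, int n,
                        const Vec3* y, int k, int j)
    {
        Vec3 acc = { 0.0f, 0.0f, 0.0f };
        for (int t = 0; t < n; ++t) {
            int i = IX[t];
            acc = acc + delta[i] * y[i];
        }
        return dot(y[k] - y[j], acc);
    }

The two agree algebraically because yi|yk - yi|yj = yi | (yk - yj); the
practical difference is exactly the multiply count and the numerical
behaviour discussed above.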
====================================================================

Gerhard wrote:
> I am working on an object-distance routine for collision detection. The
> GJK-algo was recommended to me. I read some very theoretical papers
> about it, and I think I understand how it basically works. I also
> understand the principle of the Minkowski sum, which is used in the
> algorithm. But I wouldn't know how to compute it.
> So my question is, how can I compute the Minkowski sum of two convex
> polyhedra?

Try here for some links on GJK. Check out Cameron's work, and SOLID.

http://www.animats.com/topics/others.html

Personally, I think that introducing the concept of the Minkowski sum into
the design of GJK just complicates the problem. What GJK does can be
envisioned in object space. What GJK is actually doing is trying to find
the closest-points figure, which can be a line, a triangle, a nonplanar
quadrilateral, or a tetrahedron. These represent the point vs. point case,
the point vs. line case, the line vs. line case, and the point vs. face
case.

GJK works by starting with a line between two more or less arbitrary
points, one on each object, and then adds a point to "improve" the
closest-points figure. The new figure is tested to determine whether a
lower-order part of itself is better than the whole figure, in which case
a point is thrown out of the closest-points figure. This process iterates
until no improvement is observed.

A useful project would be to take Gino's code in SOLID, convert it to
Java, and make up an applet that lets you watch the algorithm run. This
will make it much clearer how GJK converges on a result, and would be very
helpful in diagnosing some of the problems that GJK implementations
encounter.

John Nagle
Animats
www.animats.com

====================================================================

ge...@gi... writes:
> Hello,
> Sorry for replying so late.
> I'd like to thank you for your help.
> You told me that computing the Minkowski sum is not necessary.

My point was that the explanation of GJK that uses the Minkowski sum is
unnecessarily confusing. GJK does not, in fact, compute the Minkowski sum
of the two polyhedra. That would be a huge data structure.

> I'm a bit confused about how GJK works in object space.
> I start with two vertices, one on each object. But now, which vectors do
> I use for the two support mappings?
>
> Here's a list of papers I have about GJK (doesn't mean that I understood
> them all completely, they're a bit complicated for a 17-year-old
> student...):
>
> - "A Fast Procedure for Computing the Distance Between Complex Objects
>   in Three-Dimensional Space" (Gilbert, Johnson and Keerthi)
> - "A Fast and Robust GJK Implementation for Collision Detection of
>   Convex Objects" (Gino van den Bergen)
> - "Determining the Minimum Translational Distance between Two Convex
>   Polyhedra" (Cameron)
> - "Enhancing GJK: Computing Minimum and Penetration Distances between
>   Convex Polyhedra" (Cameron)

You have all the right papers. Useful URLs are

http://www.win.tue.nl/cs/tt/gino/solid/
http://www.comlab.ox.ac.uk/oucl/people/stephen.cameron.html

> I have no problem understanding improvement details such as hill
> climbing, but the general algorithm in object space is not yet clear to
> me.

It's not really clear to anybody. Gilbert's original paper is really hard
to follow. And none of the papers have enough pictures.

Early thinking on GJK applied it to point clouds rather than convex
polyhedra, and included extensions to N dimensions. This resulted in a
very abstract view of the problem, which is confusing when you try to
follow what it's doing in 3D.

The key to understanding GJK is understanding what the "simplex" it uses
is. The simplex is a collection of 1 to 4 pairs of points from the two
polyhedra. There will never be more than three unique points from either
polyhedron. The algorithm runs in two main steps:

1. Find a "better" point to add to the simplex.
2. Throw out one or more points in the simplex that aren't helping.

This iterates until no improvement is possible.

Step 1 works by using the previous closest-points vector and examining
each polyhedron, finding the point furthest in the direction along the
closest-points vector that heads towards the other polyhedron. This is
simple.
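
The Step 1 support query is short enough to sketch. Assuming each convex
object is available as a plain array of vertices, a brute-force scan like
the one below is all it takes; the hill climbing mentioned earlier simply
replaces the linear search on large hulls. Names and the sign convention
for v are my own choices, not taken from SOLID.

    #include <vector>

    struct Vec3 { float x, y, z; };
    inline float dot(const Vec3& a, const Vec3& b)
    { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Support mapping: the vertex of 'verts' furthest along direction d.
    Vec3 support(const std::vector<Vec3>& verts, const Vec3& d)
    {
        Vec3  best     = verts[0];
        float bestProj = dot(best, d);
        for (const Vec3& p : verts) {
            float proj = dot(p, d);
            if (proj > bestProj) { bestProj = proj; best = p; }
        }
        return best;
    }

    // Step 1 for the pair (A, B): with v the current closest-points vector,
    // taken here as pointing roughly from B towards A, pick the point of A
    // furthest towards B and the point of B furthest towards A. This pair
    // is the candidate to add to the simplex.
    struct SupportPair { Vec3 onA, onB; };

    SupportPair findBetterPoint(const std::vector<Vec3>& A,
                                const std::vector<Vec3>& B,
                                const Vec3& v)
    {
        Vec3 minusV = { -v.x, -v.y, -v.z };
        return { support(A, minusV), support(B, v) };
    }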
Step 2 is a table-driven case analysis that looks at all the meaningful
ways to build point vs. point, point vs. face, edge vs. edge, etc. cases
from the points in the simplex and finds the current best. It's hard to
follow what those tables are doing, but you could write an inefficient
exhaustive search that tried all possible combinations and it would work.

The original I-COLLIDE paper at http://www.cs.unc.edu/~geom/I_COLLIDE.html
might be helpful. This is the original Lin-Canny work. It's easier to
understand the basic Lin-Canny algorithm than GJK, but the code is huge.
Lin-Canny works by turning the problem into a linear programming problem
and using an existing linear programming algorithm to solve it. Not only
does this take a lot of code with many special cases, but it is also
subject to numerical precision problems.

John Nagle
Animats
www.animats.com
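
Picking up Nagle's remark above about an "inefficient exhaustive search"
for Step 2: one way to write it is to try every nonempty subset of the (at
most four) simplex points, project the origin onto each subset's affine
hull, discard subsets whose barycentric coordinates are not all positive,
and keep the closest surviving projection. The routine below is a rough
sketch of that idea with my own names and structure; it is not code from
SOLID or the papers, and it is meant to be readable rather than fast.

    #include <cmath>
    #include <utility>
    #include <vector>

    struct Vec3 { double x, y, z; };
    static double dot(const Vec3& a, const Vec3& b)
    { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 sub(const Vec3& a, const Vec3& b)
    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 add(const Vec3& a, const Vec3& b)
    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 mul(double s, const Vec3& a)
    { return { s * a.x, s * a.y, s * a.z }; }

    // Solve the small system A t = b (n <= 3) by Gaussian elimination with
    // partial pivoting. Returns false for a (near-)singular, i.e.
    // degenerate, subset.
    static bool solve(double A[3][3], double b[3], double t[3], int n)
    {
        for (int c = 0; c < n; ++c) {
            int p = c;
            for (int r = c + 1; r < n; ++r)
                if (std::fabs(A[r][c]) > std::fabs(A[p][c])) p = r;
            if (std::fabs(A[p][c]) < 1e-12) return false;
            for (int k = 0; k < n; ++k) std::swap(A[c][k], A[p][k]);
            std::swap(b[c], b[p]);
            for (int r = c + 1; r < n; ++r) {
                double f = A[r][c] / A[c][c];
                for (int k = c; k < n; ++k) A[r][k] -= f * A[c][k];
                b[r] -= f * b[c];
            }
        }
        for (int r = n - 1; r >= 0; --r) {
            double s = b[r];
            for (int k = r + 1; k < n; ++k) s -= A[r][k] * t[k];
            t[r] = s / A[r][r];
        }
        return true;
    }

    // Closest point of conv(pts) to the origin, for 1 to 4 simplex points.
    Vec3 closestOnSimplex(const std::vector<Vec3>& pts)
    {
        Vec3   best     = pts[0];
        double bestLen2 = dot(best, best);
        int    n        = (int)pts.size();

        for (int mask = 1; mask < (1 << n); ++mask) {
            std::vector<Vec3> s;                        // current subset
            for (int i = 0; i < n; ++i)
                if (mask & (1 << i)) s.push_back(pts[i]);
            int m = (int)s.size();

            double lambda[4] = { 1.0, 0.0, 0.0, 0.0 };  // barycentric coords
            Vec3   v         = s[0];
            if (m > 1) {
                // v = s0 + sum_i t_i (s_i - s0); minimising |v|^2 gives the
                // normal equations (D^T D) t = -D^T s0, with D_i = s_i - s0.
                double A[3][3] = {}, b[3] = {}, t[3] = {};
                for (int i = 1; i < m; ++i) {
                    Vec3 di = sub(s[i], s[0]);
                    b[i - 1] = -dot(di, s[0]);
                    for (int j = 1; j < m; ++j)
                        A[i - 1][j - 1] = dot(di, sub(s[j], s[0]));
                }
                if (!solve(A, b, t, m - 1)) continue;   // degenerate subset
                for (int i = 1; i < m; ++i) {
                    lambda[i]  = t[i - 1];
                    lambda[0] -= t[i - 1];
                    v = add(v, mul(t[i - 1], sub(s[i], s[0])));
                }
            }

            // Keep only subsets whose projection lies strictly inside them;
            // boundary cases are covered by their smaller subsets.
            bool inside = true;
            for (int i = 0; i < m; ++i)
                if (lambda[i] <= 0.0) inside = false;
            if (!inside) continue;

            double len2 = dot(v, v);
            if (len2 < bestLen2) { bestLen2 = len2; best = v; }
        }
        return best;
    }

The table-driven Johnson subalgorithm in the papers computes the same
result far more cheaply by reusing cached Delta values, but a brute-force
routine like this is useful as a reference implementation to test against.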