Re: [Algorithms] GJK
From: Pierre T. <p.t...@wa...> - 2000-08-14 04:24:19
> I'm in the middle of writing the code; I find writing code to implement an
> algorithm helps me understand it.

Actually, for *that* one I really would like to understand the whole theory
before coding anything. I can't imagine how I would be able to fix a bug in
the middle of this distance subalgorithm, for example! :) ...brrr...

Now, to tell the truth, I already coded the search for supporting vertices
with hill-climbing (a rough sketch of what I mean follows at the end of this
mail), but I don't think that's the difficult part! All in all, the whole
system seems very easy to implement once you get the ideas. If you look at
the existing implementations, the code is always very small, even with all
the optimizations and caches. I have three of them (do you know of any
others?):

- the enhanced GJK, version 2.4, by Stephen Cameron. Without doubt the
  ugliest code I've ever seen :) But it was made that way on purpose...

- the SOLID 2.0 version. Not much better, ahah! Comments are almost
  nonexistent, and the use of caches and bit-arrays makes the whole thing
  quite unclear.

- the Graphics Gems IV version by Rich Rabbitz. Surprisingly the clearest
  of the three - also the slowest, since it is the basic GJK without any
  optimizations.

I wish I had the book anyway (I just have the code) because it may contain
some clear explanations about that famous Delta equation. I guess some of
you guys have that book, hmmm? Anything useful there?

> I haven't yet because the formula isn't really very clear to me. For
> example, it says that delta_i (y_i) is 1, but it never says what
> delta_(not i) (y_i) is. Perhaps it's zero, but it's certainly not written
> there. Do we even need those values?
> It's pretty easy to use Cramer's rule; I think I'll just rederive it to
> see where the formula comes from. It makes some sense that the cofactors
> can be written in terms of the cofactors from a matrix representing
> smaller simplices. I think it's written to take advantage of the fact
> that you want to test all subsets of Y and save computations if possible,
> which unfortunately makes it less opaque. :)

Agreed. But when I try to rederive it using Cramer's rule and all the
determinant properties I can think of, there's always something wrong in
the end anyway. I guess I'm missing something important, but I don't know
what. I suspect it has something to do with that "k = min(IX)". For the
moment I don't see why Delta(j) doesn't depend on the value of k, and maybe
you have to use that fact to derive the formula. Rough guesses.

There are two other things bugging me:

- In Van den Bergen's paper about GJK, page 5: "This subset X is the
  largest of all nonempty Z...". Largest? I would have said the
  smallest...? Typo?

- The Delta formula is something like:

    Delta(j)(X + yj) = Sigma(i in IX) ( Delta(i)(X) * (yi|yk - yi|yj) )

  ...where | is a dot product. Since k is said to be fixed, why don't they
  rewrite it this way:

    Delta(j)(X + yj) = (yk - yj) | Sigma(i in IX) ( Delta(i)(X) * yi )

  ...which saves a lot of dot products, and maybe makes all those caches
  useless?

Pierre
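
For concreteness, here is a minimal C++ sketch of the hill-climbing
support-vertex search mentioned above. It is only an illustration under
assumed names and data layout (Vec3, verts, adj and SupportHillClimb are
hypothetical, not taken from Cameron's, SOLID's or Rabbitz's code): start
from a cached vertex and keep walking to any adjacent vertex with a larger
dot product against the search direction; on a convex polytope the local
maximum reached this way is also the global supporting vertex.

#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// verts[i] is the position of vertex i of a convex polytope; adj[i] lists
// the indices of the vertices sharing an edge with vertex i (assumed
// layout). Returns the index of the vertex maximizing dot(verts[i], d),
// starting the walk from 'start' (e.g. the result of the previous query).
int SupportHillClimb(const std::vector<Vec3>& verts,
                     const std::vector<std::vector<int> >& adj,
                     const Vec3& d,
                     int start)
{
    int best = start;
    float bestDot = dot(verts[best], d);
    bool improved = true;
    while (improved)
    {
        improved = false;
        for (int candidate : adj[best])
        {
            float candidateDot = dot(verts[candidate], d);
            if (candidateDot > bestDot)   // move to a strictly better neighbour
            {
                best = candidate;
                bestDot = candidateDot;
                improved = true;
            }
        }
    }
    return best;   // also a good vertex to cache for the next call
}

With a cached start vertex and small direction changes between calls, the
walk usually terminates after only a few edges, which is why the support
search is not the hard part of the algorithm.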