Re: [Algorithms] GJK
From: Pierre T. <p.t...@wa...> - 2000-08-12 00:30:48
> I think it's just hard to understand because the recursive solutions
> provided have been simplified already from Cramer's rule. I'm not seeing
> why the new "closest point in simplex to origin" is perpendicular to the
> affine combination of the points in simplex, so I've stopped at that point
> to figure it out. Once I understand that, I think the rest of the math
> should fall out nicely.

Oh, I just re-derived that yesterday. I'm afraid it's the result of a
brute-force calculation. Start from Appendix II in the original Gilbert
paper and take the expression of f(lambda_2, lambda_3, ..., lambda_r). If
you compute df/dlambda_i, you find exactly the expected orthogonality
between the closest point and every (Pi - P0), the Pi being the original
points (I've put a sketch of that calculation at the end of this mail). I
admit it's not a difficult derivation, but it is somewhat delicate, and it
also gives you the matrix (41). Anyway, it's all a lot clearer after having
re-derived it.

There are an awful lot of shortcuts taken in all the GJK-related papers I've
read, which makes the global understanding very painful. For example, Van
den Bergen takes this orthogonality for granted in his GJK paper, and at
first it's very shocking. I think the most complete derivation (the original
one) can be found in Daniel Johnson's PhD thesis... but of course there's
absolutely no way to find that one. [If someone knows where to get it, tell
me.] Even the derivations in Gilbert's original article seem quite
superficial.

While I'm at it: Gilbert writes "Since f is convex...". How do we know that
function is convex?

> At any rate, it's hacky because gino van den bergen (note my ignorance on
> how to properly capitalize his name) saves the relevant dot products for
> points that are in the simplex. I wonder if this level of optimization is
> necessary given the cost of everything else that is typically contained in
> a rigid body physics simulator, but I guess I'll see. :)

Well, I suppose you could invert the matrix (41) with the standard inversion
code of your matrix class, without even using those nasty Delta notations!
But since the algorithm takes a combinatorial approach, it would need to
solve up to 15 small systems (one per non-empty subset of the 4 simplex
points, each of size up to 4x4 - correct me if I'm wrong), with most of the
involved terms being recomputed many times. I suppose those optimisations
are worth the pain - but only once everything else works, of course. A naive
version of that brute-force idea is also sketched at the end of this mail.

BTW I think the intuitive reason for limiting the number of vertices to 4 is
that the closest-point configuration we're looking for can only be something
like:
- point vs point (2 vertices)
- edge vs point (3 vertices)
- edge vs edge (4 vertices)
- face vs point (4 vertices)
Nothing else - unless I'm missing something. That is, 4 points are enough to
catch all the possible configurations. And since we're running the algorithm
as many times as needed to examine all the possible combinations anyway,
there's no need to bother including a fifth point. Moreover, it would imply
inverting a 5x5 matrix (or computing more Deltas), which can be tedious.

All in all, this is my understanding of that little part of the algorithm.
Don't take it for granted - I just figured all of that out yesterday, and
the day before I knew almost nothing about GJK, so it could very well be
totally wrong.

Comments, corrections and ideas are welcome.

Pierre
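
P.S. For what it's worth, here is the little brute-force calculation I mean,
written in my own notation with the simplex points indexed P_1..P_r (so the
P0 above becomes P_1). This is only a sketch of how I re-derived the
orthogonality, not a copy of Gilbert's Appendix II:

\[
  x(\lambda) \;=\; \sum_{i=1}^{r} \lambda_i P_i ,
  \qquad \sum_{i=1}^{r} \lambda_i = 1
  \;\Longrightarrow\;
  x(\lambda) \;=\; P_1 + \sum_{i=2}^{r} \lambda_i \,(P_i - P_1)
\]
\[
  f(\lambda_2, \dots, \lambda_r) \;=\; \lVert x(\lambda) \rVert^2
  \;\Longrightarrow\;
  \frac{\partial f}{\partial \lambda_i}
  \;=\; 2\, x(\lambda) \cdot (P_i - P_1) \;=\; 0 ,
  \qquad i = 2, \dots, r
\]

So the minimiser is orthogonal to every edge (P_i - P_1), and expanding
those r-1 equations together with the affine constraint is what produces the
linear system for the lambdas.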
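
P.P.S. And here is roughly what I meant by inverting the matrix with your
standard matrix code: a completely naive sub-distance step that loops over
every non-empty subset of the (at most 4) simplex points and solves a small
linear system for each one. It's only an illustration of the brute-force
idea under my own made-up names (Vec3, solveLinear, closestToOrigin), not
Gilbert's or van den Bergen's actual code, and it recomputes everything the
Delta notation is designed to share:

// Naive "closest point of a simplex to the origin", for illustration only.
// pts[] holds the current simplex (at most 4 points in the Minkowski
// difference); we try every non-empty subset and keep the best valid answer.

#include <cmath>
#include <cfloat>
#include <utility>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3   sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Solve the n x n system M * out = rhs (n <= 3) by Gaussian elimination with
// partial pivoting.  Returns false when the subset is (nearly) degenerate.
static bool solveLinear(double M[3][3], double rhs[3], double out[3], int n)
{
    for (int col = 0; col < n; ++col) {
        int piv = col;
        for (int r = col + 1; r < n; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[piv][col])) piv = r;
        if (std::fabs(M[piv][col]) < 1e-12) return false;
        for (int c = 0; c < n; ++c) std::swap(M[col][c], M[piv][c]);
        std::swap(rhs[col], rhs[piv]);
        for (int r = col + 1; r < n; ++r) {
            double f = M[r][col] / M[col][col];
            for (int c = col; c < n; ++c) M[r][c] -= f * M[col][c];
            rhs[r] -= f * rhs[col];
        }
    }
    for (int r = n - 1; r >= 0; --r) {
        double s = rhs[r];
        for (int c = r + 1; c < n; ++c) s -= M[r][c] * out[c];
        out[r] = s / M[r][r];
    }
    return true;
}

// For each non-empty subset {s_0..s_{k-1}}: the closest point of its affine
// hull to the origin is x = s_0 + sum_i lambda_i (s_i - s_0), where the
// lambdas solve the normal equations (E^T E) lambda = -E^T s_0, the columns
// of E being the edges (s_i - s_0).  The subset only counts if every
// barycentric coordinate (including lambda_0 = 1 - sum lambda_i) is >= 0;
// the valid candidate closest to the origin is the answer.
static Vec3 closestToOrigin(const Vec3 pts[], int n)   // n <= 4
{
    Vec3 best = pts[0];
    double bestDist2 = DBL_MAX;

    for (int mask = 1; mask < (1 << n); ++mask) {
        Vec3 s[4];
        int k = 0;
        for (int i = 0; i < n; ++i)
            if (mask & (1 << i)) s[k++] = pts[i];

        Vec3 e[3];                                   // edges s_i - s_0
        for (int i = 1; i < k; ++i) e[i - 1] = sub(s[i], s[0]);

        double M[3][3] = {}, rhs[3] = {}, lam[3] = {};
        for (int i = 0; i < k - 1; ++i) {
            for (int j = 0; j < k - 1; ++j) M[i][j] = dot(e[i], e[j]);
            rhs[i] = -dot(e[i], s[0]);
        }
        if (k > 1 && !solveLinear(M, rhs, lam, k - 1)) continue;

        double lam0 = 1.0;
        bool valid = true;
        for (int i = 0; i < k - 1; ++i) { lam0 -= lam[i]; if (lam[i] < 0.0) valid = false; }
        if (lam0 < 0.0) valid = false;
        if (!valid) continue;

        Vec3 x = { lam0 * s[0].x, lam0 * s[0].y, lam0 * s[0].z };
        for (int i = 1; i < k; ++i) {
            x.x += lam[i - 1] * s[i].x;
            x.y += lam[i - 1] * s[i].y;
            x.z += lam[i - 1] * s[i].z;
        }
        double d2 = dot(x, x);
        if (d2 < bestDist2) { bestDist2 = d2; best = x; }
    }
    return best;
}

A subset is accepted when all of its barycentric coordinates come out
non-negative, and among the accepted subsets the candidate with the smallest
norm wins - which is exactly the combinatorial search that the Deltas are
there to make cheap.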