Thread: RE: [Algorithms] Up Vector in Projection Matrix
Brought to you by:
vexxed72
From: Kelmenson, D. <Dan...@br...> - 2000-08-09 21:35:47
|
> -----Original Message-----
> From: Pai-Hung Chen [mailto:pa...@ac...]
>
> > When you specify a point or direction to "look at", you have only
> > specified two of the degrees of freedom of the system. That object
> > could be in the middle of the screen - either upside-down - or right
> > way up - or at any 'roll' angle in between.
> >
> > The 'up' vector only has to be enough to resolve that issue - and only
> > one of its degrees of freedom is needed. Hence, there are a wide
> > variety of vectors that will do the job. The only constraint is that
> > the up vector and the 'look direction' are not parallel.
> >
> > Hence, so long as your character can't look absolutely vertically
> > upwards or absolutely vertically downwards, an 'up' vector of (0,1,0)
> > will do fine.
>
> That explains everything (though I don't know why/how the underlying math
> works). An ever-confusing question got explained! Thanks a lot! :-)

The most likely reason the math worked (the reason the math worked in my program, and most likely in yours...) is that the main use of the 'up' vector is crossing it with the forward/view direction. As long as your character/camera doesn't lean to the side, the true (0,1,0) up vector and the actual perpendicular-to-forward up vector will both be coplanar with the forward vector, and therefore give you the same cross product.

Dan Kelmenson
Mattel Interactive
|
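A quick numeric sketch of Dan's point (hypothetical vectors, plain Python for brevity): removing the forward component from the world up via Gram-Schmidt does not change the cross product with the view direction at all, so the world up and the true perpendicular up yield the same right vector.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

forward = normalize((0.3, 0.4, 1.0))   # hypothetical view direction, tilted up a bit
up_world = (0.0, 1.0, 0.0)

# Gram-Schmidt: subtract the forward component to get the exact perpendicular up
d = dot(up_world, forward)
up_perp = tuple(u - d * f for u, f in zip(up_world, forward))

# cross(up - (up.f)f, f) = cross(up, f) - (up.f)*cross(f, f) = cross(up, f),
# because cross(f, f) = 0: the parallel component drops out entirely.
right1 = normalize(cross(up_world, forward))
right2 = normalize(cross(up_perp, forward))
assert all(abs(a - b) < 1e-9 for a, b in zip(right1, right2))
```

So the only way (0,1,0) fails is the degenerate case the thread already mentions: looking straight up or down, where the cross product collapses to zero.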
From: Ron L. <ro...@do...> - 2000-08-09 23:25:26
|
----- Original Message -----
From: "Kelmenson, Dan" <Dan...@br...>
To: <gda...@li...>
Sent: Wednesday, August 09, 2000 2:34 PM
Subject: RE: [Algorithms] Up Vector in Projection Matrix

[snip]

> The most likely reason the math worked (the reason the math worked in my
> program, and most likely in yours...) is that the main use of the 'up'
> vector is crossing it with the forward/view direction. As long as your
> character/camera doesn't lean to the side, the true (0,1,0) up vector and
> the actual perpendicular-to-forward up vector will both be coplanar with
> the forward vector, and therefore give you the same cross product.

Another way to think about it is that the "look-at" functions in the various APIs and toolkits do not require that the up vector argument be perpendicular to the look-at vector argument, but only that they be linearly independent.
Then it's a fact that there is a unique unit up vector that IS perpendicular to the look-at vector argument, lying in the plane determined by the two vector arguments (which is why they must be linearly independent) and pointing, in that plane, to the same side of the argument look-at vector as the argument up vector. The API look-at routine computes that perpendicular up vector from the argument vectors, either by iterated cross products or, more likely, by the Gram-Schmidt formula, and uses it in constructing the correct view matrix. |
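Ron's description can be sketched in a few lines (hypothetical helper names; sign and handedness conventions vary between APIs, so treat this as illustrative rather than as any particular toolkit's look-at routine):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def look_at_basis(eye, target, up):
    """Build an orthonormal camera basis from a look-at point and a
    (not necessarily perpendicular) up hint. Fails only if the up hint
    is parallel to the view direction, exactly as the thread says."""
    forward = normalize(tuple(t - e for t, e in zip(target, eye)))
    right = cross(forward, up)
    if dot(right, right) < 1e-12:
        raise ValueError("up vector is parallel to the view direction")
    right = normalize(right)
    # The iterated cross product: the unique perpendicular up, on the
    # same side of 'forward' as the hint.
    true_up = cross(right, forward)
    return right, true_up, forward

right, true_up, forward = look_at_basis((0, 0, 0), (0, 0, -5), (0, 1, 0))
assert abs(dot(right, forward)) < 1e-9 and abs(dot(true_up, forward)) < 1e-9
```

The equivalent Gram-Schmidt route subtracts the forward component from the up hint and normalizes; both produce the same basis.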
From: Pierre T. <p.t...@wa...> - 2000-08-10 00:08:39
|
Hi, I'd like to implement my own version of the GJK algorithm involved in collision detection, without just using any of the available packages (Solid, Stephen Cameron's version, etc). But I admit I have some difficulties with the theory behind it. Is there any GJK expert here, who would be kind enough to briefly summarize that algorithm? For example, even Gilbert's underlying layer - that is, Johnson's algorithm to compute the closest point to the origin for a simplex - looks quite weird to me. The original paper by Gilbert is quite painful to read IMHO, and I don't really understand what's going on here. So, to start with, if someone can explain in a few words Johnson's algorithm, it would be cool. I'm familiar with Minkowski sums, supporting vertices and most of the usual terminology, so don't be afraid to use the right words in the right place. Pierre |
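Not a full summary, but one building block of GJK is easy to state: the algorithm never constructs the Minkowski difference explicitly; it only queries it through a support mapping. A sketch (hypothetical function names, brute-force search over vertex lists rather than the hill-climbing an optimized version would use):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def support(vertices, d):
    """Supporting vertex of a convex point cloud: the vertex furthest
    along direction d."""
    return max(vertices, key=lambda v: dot(v, d))

def support_cso(A, B, d):
    """Support of the Minkowski difference A - B along d: combine a
    support of A along d with a support of B along -d. GJK runs its
    distance iteration entirely on points produced this way."""
    a = support(A, d)
    b = support(B, tuple(-x for x in d))
    return tuple(pa - pb for pa, pb in zip(a, b))

# Two unit squares, B shifted right by 3: the support of A - B along +x
# is (max_x of A) - (min_x of B) = 1 - 3 = -2.
A = [(0, 0), (1, 0), (1, 1), (0, 1)]
B = [(3, 0), (4, 0), (4, 1), (3, 1)]
assert support_cso(A, B, (1, 0))[0] == -2
```

Johnson's sub-algorithm is the other half: given a simplex of such support points, find the smallest sub-simplex whose convex hull contains the point closest to the origin.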
From: Pierre T. <p.t...@wa...> - 2000-08-10 04:07:48
|
Is there any known problem with Andrew Woo's Ray-AABB intersection code? (in Graphic Gems I). It seems to miss some intersections... (?) |
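I don't know of a published errata for that code, but a common cross-check is the standard slab test, which is short enough to re-implement independently and compare against. A hedged sketch (this is not Woo's actual code, just the generic slab method):

```python
def ray_hits_aabb(orig, direction, bmin, bmax):
    """Slab test: intersect the ray orig + t*direction (t >= 0) with the
    three axis-aligned slabs and keep the overlapping t interval."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        if direction[i] == 0.0:
            # Parallel to this slab: hit is only possible if the origin
            # already lies between the two slab planes.
            if orig[i] < bmin[i] or orig[i] > bmax[i]:
                return False
        else:
            t1 = (bmin[i] - orig[i]) / direction[i]
            t2 = (bmax[i] - orig[i]) / direction[i]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

box_min, box_max = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
assert ray_hits_aabb((-1.0, 0.5, 0.5), (1.0, 0.0, 0.0), box_min, box_max)
assert not ray_hits_aabb((-1.0, 2.0, 0.5), (1.0, 0.0, 0.0), box_min, box_max)
```

Feeding the same rays to both implementations is a quick way to find out whether the misses are a bug in the code or in the calling convention (e.g. ray parameterization or box representation).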
From: Will P. <wi...@cs...> - 2000-08-10 23:04:25
|
On Thu, 10 Aug 2000, Pierre Terdiman wrote:

> Is there any GJK expert here, who would be kind enough to briefly
> summarize that algorithm? [snip]

I'm actually implementing this algorithm now, and I find that I do not have the background in this area of mathematics yet to understand everything that is going on. So the disclaimer: I'm not an expert on this algorithm. I welcome Ron Levine's input. :)

I assume by Johnson's algorithm you mean the sub-algorithm that computes the minimal convex simplex that contains the new support point that is closer to the origin? I think all descriptions I have read of it have been just hacky ways to determine how to convexly combine (by finding the best lambda values to use) the points in the previous simplex and the new support point to find the minimal convex simplex that contains that new support point. Does that seem correct?

By the way, what is the *intuitive* reason that the simplex can have up to four points (i.e. the tetrahedron)? Is it because the closest points can be from two edges?

Will
|
From: Pierre T. <p.t...@wa...> - 2000-08-11 01:22:52
|
> I assume by Johnson's algorithm you mean the sub-algorithm that computes
> the minimal convex simplex that contains the new support point that is
> closer to the origin?

Absolutely.

> I think all descriptions I have read of it have been just hacky ways to
> determine how to convexly combine (by finding the best lambda values to
> use) the points in the previous simplex and the new support point to find
> the minimal convex simplex that contains that new support point. Does
> that seem correct?

I think so - as far as I can tell, and including the word "hacky" :)

> By the way, what is the *intuitive* reason that the simplex can have up
> to four points (i.e. the tetrahedron)? Is it because the closest points
> can be from two edges?

Sorry, I'm a newbie to GJK and for the moment there's nothing intuitive in the whole thing. I'll try to dig some more into it before giving you my probably totally wrong answers...

Pierre
|
From: Pai-Hung C. <pa...@ac...> - 2000-08-11 04:07:47
|
Hi, I tried to search the list in the archive but I ended up with nothing no matter what info I provided. Am I doing something wrong? Thanks for the help, Pai-Hung Chen |
From: <fa...@no...> - 2000-08-11 09:52:41
|
I also tried myself to find something in the archives about "VIPM" or "ROAM" for instance, and nothing was found. Isn't it strange?

Fred

----- Original Message -----
From: Pai-Hung Chen <pa...@ac...>
To: <gda...@li...>
Sent: Friday, August 11, 2000 6:03 AM
Subject: [Algorithms] Help with Archive Search

> Hi,
>
> I tried to search the list in the archive but I ended up with nothing no
> matter what info I provided. Am I doing something wrong?
>
> Thanks for the help,
>
> Pai-Hung Chen
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
|
From: Akbar A. <sye...@ea...> - 2000-08-11 10:29:31
|
hey, i am looking for this paper, preferably somewhere that doesn't charge. J. Kajiya, "Rendering Fur With Three Dimensional Textures." i scoured the net, but ended up in failure. it would be really cool if somebody could point me to this one if they have seen it lying around. if any of you know of any other fur papers, please let me know.

http://www.research.microsoft.com/profiles/kajiya.htm

that was the closest i found, and apparently kajiya does not have any papers up on his site; all i could find was his "profile" :-| this was rather odd cause _most_ of the other ms research guys have "web sites". i even tried www.research.microsoft.com/~kajiya in conformance with the _other_ "websites", it popped up the "profile"; maybe it's cause he is the Assistant Director. oh well... it was rather humorous skimming through the profile though. "Now, Kajiya is working on an equally dramatic project, the convergence of the television and the computer." does this mean ms is going to sell tv's?

peace.
akbar A.
|
From: Pierre T. <p.t...@wa...> - 2000-08-11 11:17:00
|
> if any of you know of any other fur papers, please let me know.

From the ACM Digital Library:

Fake fur rendering; Dan B. Goldman; Proceedings of the 24th annual conference on Computer graphics & interactive techniques, 1997, Pages 127-134

Art-based rendering of fur, grass, and trees; Michael A. Kowalski, Lee Markosian, J. D. Northrup, Lubomir Bourdev, Ronen Barzel, Loring S. Holden and John F. Hughes; Proceedings of the SIGGRAPH 1999 annual conference on Computer graphics, 1999, Pages 433-438

Wet and messy fur; Armin Bruderlin; Proceedings of the conference on SIGGRAPH 99: conference abstracts and applications, 1999, Page 284

Rendering fur with three dimensional textures; J. T. Kajiya and T. L. Kay; Conference proceedings on Computer graphics, 1989, Pages 271-280

I have Kajiya's one but only as a paper version. Want some ugly scanned pics? :)

Pierre
|
From: Akbar A. <sye...@ea...> - 2000-08-11 15:09:21
|
>I have Kajiya's one but only as a paper version.
>Want some ugly scanned pics? :)

i suppose, is it worth it?

peace,
akbar A.

-----Original Message-----
From: gda...@li... [mailto:gda...@li...]On Behalf Of Pierre Terdiman
Sent: Friday, August 11, 2000 4:09 AM
To: gda...@li...
Subject: Re: [Algorithms] a fur- paper

[snip]
|
From: Kamil B. <no...@te...> - 2000-08-26 05:53:23
|
Hello Akbar, Friday, August 11, 2000, 7:22:05 PM, you wrote: >>I have Kajiya's one but only as a paper version. >>Want some ugly scanned pics? :) > i suppose, is it worth it? I'm also interested in them :) -- Best regards, Kamil Burzynski |
From: Will P. <wi...@cs...> - 2000-08-11 17:32:37
|
> > I think all descriptions I have read of it have been just hacky ways to
> > determine how to convexly combine (by finding the best lambda values to
> > use) the points in the previous simplex and the new support point to
> > find the minimal convex simplex that contains that new support point.
> > Does that seem correct?
>
> I think so - as far as I can tell, and including the word "hacky" :)

I think it's just hard to understand because the recursive solutions provided have been simplified already from Cramer's rule. I'm not seeing why the new "closest point in simplex to origin" is perpendicular to the affine combination of the points in the simplex, so I've stopped at that point to figure it out. Once I understand that, I think the rest of the math should fall out nicely.

At any rate, it's hacky because gino van den bergen (note my ignorance on how to properly capitalize his name) saves the relevant dot products for points that are in the simplex. I wonder if this level of optimization is necessary given the cost of everything else that is typically contained in a rigid body physics simulator, but I guess I'll see. :)

Will
----
Will Portnoy
http://www.cs.washington.edu/homes/will
|
From: Pierre T. <p.t...@wa...> - 2000-08-12 00:30:48
|
> I think it's just hard to understand because the recursive solutions
> provided have been simplified already from Cramer's rule. I'm not seeing
> why the new "closest point in simplex to origin" is perpendicular to the
> affine combination of the points in the simplex, so I've stopped at that
> point to figure it out. Once I understand that, I think the rest of the
> math should fall out nicely.

Oh, I just re-derived that yesterday. I'm afraid this is the result of a brute-force calculation. Start from Appendix II in the original Gilbert paper. Take the expression of f(lambda_2, lambda_3, ..., lambda_r). Then, if you compute df/dlambda_i you just find the expected orthogonality between the expression of the closest point and any (Pi - P0), assuming Pi are the original points. Well, I admit this is not a difficult one, but it sure is somewhat delicate. It also gives you the matrix (41). Anyway it's a lot clearer after having rederived it.

There are an awful lot of shortcuts taken in all the GJK-related papers I've read, which makes the global understanding very painful. For example, van den Bergen in his GJK paper takes this orthogonality as granted, and at first it's very shocking. I think the most complete derivation (the original one) could be found in Daniel Johnson's PhD thesis... but of course there's absolutely no way to find that one. [If someone knows about it... tell me.] Even the derivations in Gilbert's original article seem quite superficial. While I'm at it: Gilbert wrote "Since f is convex...". How do we know that function is convex?

> At any rate, it's hacky because gino van den bergen (note my ignorance on
> how to properly capitalize his name) saves the relevant dot products for
> points that are in the simplex. I wonder if this level of optimization is
> necessary given the cost of everything else that is typically contained
> in a rigid body physics simulator, but I guess I'll see. :)

Well, I suppose you could probably invert the (41) matrix with the standard inversion code of your matrix class without even using those nasty Delta notations! But since the algorithm takes a combinatorial approach, it would need to invert up to 15 4x4 matrices (correct me if I'm wrong), most of the involved terms being recomputed many times. I suppose those optimisations are worth the pain - but once everything else works, of course.

BTW I think the intuitive reason for limiting the number of vertices to 4 is that the closest-point figure we're looking for can be something like:
- point vs point (2 vertices)
- edge vs point (3 vertices)
- edge vs edge (4 vertices)
- face vs point (4 vertices)

Nothing else - unless I'm missing something. That is, 4 points is enough to catch all the possible figures. And since we're running the algorithm as many times as needed to examine all possible combinations anyway, there's no need to bother including a fifth point. Moreover it would imply inverting a 5x5 matrix (or computing more Deltas), which can be tedious.

All in all this is my understanding of that little part of the algorithm. Don't take it as granted, I just figured out all of that yesterday - and the day before I knew almost nothing about GJK, so it could very well be totally wrong. Comments, corrections, ideas are welcome.

Pierre
|
From: Will P. <wi...@cs...> - 2000-08-12 21:06:58
|
> Take the expression of f(lambda_2, lambda_3, ..., lambda_r). Then, if you
> compute df/dlambda_i you just find the expected orthogonality between the
> expression of the closest point and any (Pi - P0), assuming Pi are the
> original points. Well, I admit this is not a difficult one, but it sure
> is somewhat delicate.

I can see that derivation; it's somewhat like finding a Jacobian matrix to do unconstrained optimization. I haven't gotten (yet) where they got the original expression f, though I understand their method for minimizing it.

> For example, van den Bergen in his GJK paper takes this orthogonality as
> granted, and at first it's very shocking.

That's the one I was thinking of: it came out of nowhere.

> While I'm at it: Gilbert wrote "Since f is convex...". How do we know
> that function is convex?

I think it just fits the definition of convex, like the ones given in appendix A of van den Bergen's paper.

> Well, I suppose you could probably invert the (41) matrix with the
> standard inversion code of your matrix class without even using those
> nasty Delta notations! But since the algorithm takes a combinatorial
> approach, it would need to invert up to 15 4x4 matrices (correct me if
> I'm wrong), most of the involved terms being recomputed many times. I
> suppose those optimisations are worth the pain - but once everything
> else works, of course.

I definitely agree that it is worth saving the dot products when running the sub-algorithm once (because of the combinatorial nature), but I'm not sure that it's worth doing between calls to the sub-algorithm (although most of the points in the simplex do not change).

> BTW I think the intuitive reason for limiting the number of vertices to 4
> is that the closest-point figure we're looking for can be something like:
> - point vs point (2 vertices)
> - edge vs point (3 vertices)
> - edge vs edge (4 vertices)
> - face vs point (4 vertices)

That makes sense... I think I saw something similar in the GDC hardcore proceedings.

Will
|
From: Pierre T. <p.t...@wa...> - 2000-08-13 05:22:32
|
> I think it just fits the definition of convex, like the ones given in
> appendix A of van den Bergen's paper.

Hmm, to me the definition of a convex function (not of a polytope) is that the second derivative has a constant sign. That way, to minimize the function we just have to ensure the first derivative is zero. For example, how would we *maximize* the function? ...in the exact same way, yep... The difference comes from the second derivative. There must be a very good reason in our case to assume there's no need to check for it, but I don't find it obvious. You must be right anyway: it must be related to the original definition of the function, but... well, I just don't see why it is obvious :) Anyway this is a detail, really.

> I definitely agree that it is worth saving the dot products when running
> the sub-algorithm once (because of the combinatorial nature), but I'm not
> sure that it's worth doing between calls to the sub-algorithm (although
> most of the points in the simplex do not change).

Oh... I mixed things up, right. I forgot there were some caches inside *and* outside the sub-algorithm. Well, I'll save that for later; for the moment I haven't written a single line of code.

> That makes sense... I think I saw something similar in the GDC hardcore
> proceedings.

Is there a GDC paper about GJK?

> Mathematically, it makes sense. It makes me wonder if there is any
> intuition about it, or they just make that observation about
> perpendicularity (is that a word?) for no particular reason.

Gilbert claims it's "geometrically obvious"... :) But I'm not that good, I can't even visualize the Minkowski difference of the two polytopes.

I have some trouble with the recursive formula used to compute the deltas. Where does it come from? I followed Gilbert's steps, in vain. There's always something wrong in the end. Did you try to re-derive it?

Pierre
|
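A side note on the convexity question, since it has a short answer (sketched here in the thread's notation, not taken verbatim from any of the papers): writing the squared distance as a quadratic form makes the convexity visible.

```latex
f(\lambda) \;=\; \Bigl\| \sum_{i} \lambda_i\, y_i \Bigr\|^2
          \;=\; \lambda^{\mathsf{T}} G\, \lambda,
\qquad G_{ij} \;=\; y_i \cdot y_j .
```

Since \(\lambda^{\mathsf T} G \lambda = \|\sum_i \lambda_i y_i\|^2 \ge 0\) for every \(\lambda\), the Gram matrix \(G\) is positive semidefinite, so \(f\) is a convex quadratic: a point where the first derivative vanishes is automatically a global minimum, and no second-derivative check is needed.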
From: Will P. <wi...@cs...> - 2000-08-13 21:03:40
|
> Oh... I mixed things up, right. I forgot there were some caches inside
> *and* outside the sub-algorithm. Well, I'll save that for later; for the
> moment I haven't written a single line of code.

I'm in the middle of writing the code; I find writing code to implement an algorithm helps me understand it.

> Is there a GDC paper about GJK?

No, it was from http://www.gdconf.com/hardcore/physics.htm. I didn't attend, but my mentor from my summer position was kind enough to purchase the proceedings for me.

> Gilbert claims it's "geometrically obvious"... :) But I'm not that good,
> I can't even visualize the Minkowski difference of the two polytopes.

Everything is obvious after it's understood. I wish scientific papers were written with that in mind. :)

> I have some trouble with the recursive formula used to compute the
> deltas. Where does it come from? I followed Gilbert's steps, in vain.
> There's always something wrong in the end. Did you try to re-derive it?

I haven't yet because the formula isn't really very clear to me. For example, it says that delta_i(y_i) is 1, but it never says what delta_(not i)(y_i) is. Perhaps it's zero, but it's certainly not written there.

It's pretty easy to use Cramer's rule; I think I'll just rederive it to see where the formula comes from. It makes some sense that the cofactors can be written in terms of the cofactors from a matrix representing smaller simplices. I think it's written to take advantage of the fact that you want to test all subsets of Y and save computations if possible, which unfortunately makes it more opaque. :)

Will
----
Will Portnoy
http://www.cs.washington.edu/homes/will
|
From: Pierre T. <p.t...@wa...> - 2000-08-14 04:24:19
|
> I'm in the middle of writing the code; I find writing code to implement
> an algorithm helps me understand it.

Actually for *that* one, I really would like to understand the whole theory before coding anything. I can't imagine how I would be able to fix a bug in the middle of this distance sub-algorithm, for example! :) ...brrr... Now to tell the truth, I already coded the search for supporting vertices with hill-climbing, but I don't think that's the difficult part!

All in all, the whole system seems very easy to implement once you get the ideas. If you look at the existing implementations, the code is always very small, even with all the optimizations and caches. I got 3 of them (do you know some others?):

- the enhanced GJK version 2.4 by Stephen Cameron. Without doubt the ugliest code I've ever seen :) But it was made on purpose...
- the SOLID 2.0 version. Not very much better, ahah! Comments are almost nonexistent, and the use of caches and bit-arrays makes the whole thing quite unclear.
- the Graphics Gems IV version by Rich Rabbitz. Surprisingly the clearest version among those three - also the slowest, since this is the basic GJK without any optimizations. I wish I had the book anyway (I just have the code) because it may contain some clear explanations about that famous Delta equation. I guess some of you guys have that book, hmmm? Anything useful there?

> I haven't yet because the formula isn't really very clear to me. For
> example, it says that delta_i(y_i) is 1, but it never says what
> delta_(not i)(y_i) is. Perhaps it's zero, but it's certainly not written
> there.

Do we even need those values?

> It's pretty easy to use Cramer's rule; I think I'll just rederive it to
> see where the formula comes from. It makes some sense that the cofactors
> can be written in terms of the cofactors from a matrix representing
> smaller simplices. I think it's written to take advantage of the fact
> that you want to test all subsets of Y and save computations if possible,
> which unfortunately makes it more opaque. :)

Agreed. But when I try to rederive it using Cramer's rule and all the determinant properties I can think of, there's always something wrong in the end anyway. I guess I'm missing something important, but I don't know what. I suspect it has something to do with that "k=min(Ix)". For the moment I don't see why Delta(j) doesn't depend on the value of k, and maybe you have to use that fact to derive the formula. Rough guesses.

There are two other things bugging me:

- in van den Bergen's paper about GJK, page 5: "This subset X is the largest of all nonempty Z...". Largest? I would have said the smallest...? Typo?

- The delta formula is something like:

Delta(j)(X+yj) = Sigma(i in IX) (Delta(i)(X) * (yi | yk - yi | yj))

...where | is a dot product. Since k is said to be fixed, why don't they rewrite it that way:

Delta(j)(X+yj) = (yk - yj) | Sigma(i in IX) (Delta(i)(X) * yi)

...which saves a lot of dot products, and maybe makes all those caches useless?

Pierre
|
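For anyone joining the thread: the recursion being discussed, as it appears in Gilbert, Johnson and Keerthi and in van den Bergen's paper, can be restated in cleaner notation (from memory; double-check against the originals):

```latex
\Delta_i(\{y_i\}) = 1, \qquad
\Delta_j\bigl(X \cup \{y_j\}\bigr)
  \;=\; \sum_{i \in I_X} \Delta_i(X)\,\bigl(y_i \cdot y_k \;-\; y_i \cdot y_j\bigr),
\qquad k = \min(I_X),\; j \notin I_X,
```

with the closest point of the affine hull of a supporting subset \(X\) recovered as

```latex
v \;=\; \sum_{i \in I_X} \lambda_i\, y_i, \qquad
\lambda_i \;=\; \frac{\Delta_i(X)}{\Delta(X)}, \qquad
\Delta(X) \;=\; \sum_{i \in I_X} \Delta_i(X).
```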
From: Will P. <wi...@cs...> - 2000-08-14 18:06:05
|
> Actually for *that* one, I really would like to understand the whole
> theory before coding anything. I can't imagine how I would be able to
> fix a bug in the middle of this distance sub-algorithm, for example! :)
> ...brrr... Now to tell the truth, I already coded the search for
> supporting vertices with hill-climbing, but I don't think that's the
> difficult part!

I agree... I'd definitely need to understand an algorithm before I can debug it, but precision in algorithms is important (of course), and I find it easier to be precise in C++ than in the universal language of mathematics (obscure movie reference).

> - the enhanced GJK version 2.4 by Stephen Cameron. Without doubt the
> ugliest code I've ever seen :) But it was made on purpose...

I think even with comments and indenting kept in there it wouldn't be that readable. I'm a big fan of variables that mean something.

> - the SOLID 2.0 version. Not very much better, ahah! Comments are almost
> nonexistent, and the use of caches and bit-arrays makes the whole thing
> quite unclear.

Yeah, the paper is the best description of the algorithm I've seen, but the set operations could have been encapsulated for easier reading with no loss in efficiency.

> - the Graphics Gems IV version by Rich Rabbitz. Surprisingly the clearest
> version among those three - also the slowest, since this is the basic GJK
> without any optimizations. I wish I had the book anyway (I just have the
> code) because it may contain some clear explanations about that famous
> Delta equation. I guess some of you guys have that book, hmmm? Anything
> useful there?

I'll send you the article offline (it's like I walked to the library to make a copy for you, and then walked over to France to hand it to you, right?)

> > I haven't yet because the formula isn't really very clear to me. For
> > example, it says that delta_i(y_i) is 1, but it never says what
> > delta_(not i)(y_i) is. Perhaps it's zero, but it's certainly not
> > written there.
>
> Do we even need those values?

You're right. I think they might not be needed. I just like to see all cases covered in a recursion. :)

> - in van den Bergen's paper about GJK, page 5: "This subset X is the
> largest of all nonempty Z...". Largest? I would have said the
> smallest...? Typo?

It would have to be... otherwise the set could grow very large or even worse: there would be no termination to the algorithm because it could cycle easier.

> - The delta formula is something like:
>
> Delta(j)(X+yj) = Sigma(i in IX) (Delta(i)(X) * (yi | yk - yi | yj))
>
> ...where | is a dot product. Since k is said to be fixed, why don't they
> rewrite it that way:
>
> Delta(j)(X+yj) = (yk - yj) | Sigma(i in IX) (Delta(i)(X) * yi)
>
> ...which saves a lot of dot products, and maybe makes all those caches
> useless?

Using your notation:

Delta(j)(X + yj) = Sigma(i in IX) (Delta(i)(X) * ((yi | yk) - (yi | yj)))
Delta(j)(X + yj) = Sigma(i in IX) (Delta(i)(X) * (yi | (yk - yj)))

I don't think you can do that last step.

Will
----
Will Portnoy
http://www.cs.washington.edu/homes/will
|
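For whoever picks this thread up later: the disputed step only distributes the dot product over a subtraction and then pulls a finite sum inside it, and both moves follow from the bilinearity of the dot product. A quick numeric spot-check with arbitrary hypothetical values (random vectors and coefficients, fixed indices k and j):

```python
import random

random.seed(42)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

ys = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
deltas = [random.uniform(-1, 1) for _ in range(4)]
k, j = 0, 3   # hypothetical choice of the fixed index k and the new index j

# Left-hand side: Sigma_i Delta_i * ((yi | yk) - (yi | yj))
lhs = sum(d * (dot(y, ys[k]) - dot(y, ys[j])) for d, y in zip(deltas, ys))

# Right-hand side: (yk - yj) | Sigma_i (Delta_i * yi)
s = [sum(d * y[c] for d, y in zip(deltas, ys)) for c in range(3)]
rhs = dot([ys[k][c] - ys[j][c] for c in range(3)], s)

assert abs(lhs - rhs) < 1e-12
```

So the factored form is algebraically equivalent; whether the per-term dot-product caches still pay off after factoring is the practical question the thread leaves open.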
From: Graham S. R. <gr...@se...> - 2000-08-14 21:38:10
|
Hello, I am just curious. How many here, if any, are planning to attend the XGDC conference hosted by Xtreme Games? Anyone go last year? I need to decide whether to push hard to get a paper done before mid-September or just focus on preparing something for GDC. Graham Rhodes |
From: Alex D'A. <al...@fu...> - 2000-08-15 01:50:15
|
I'll be going again this year, along with some of the other people here. I went last year and it was pretty cool. Jonathan Blow gave a great talk on lighting in games, Reichart von Wolfshild had an incredible talk on porting games (which really covered all sorts of business aspects), and Andre gave a very polished talk on fuzzy logic. There were a bunch of other lectures, but those stick out in my mind. As a side bonus, the vintage computer fest was going on the other side of the convention center.

-----Original Message-----
From: gda...@li... [mailto:gda...@li...]On Behalf Of Graham S. Rhodes
Sent: Monday, August 14, 2000 2:35 PM
To: gda...@li...
Subject: [Algorithms] XGDC conference

[snip]
|
From: Dave A. <da...@ga...> - 2000-08-15 06:23:45
|
Hi Graham, I went last year, and I'm planning on attending (and speaking) this year as well. Last year's conference had some obvious rough edges, but it was fun and many of the lectures were very useful. In general, I think that the XGDC has some good potential, especially as more experienced/qualified people are willing to present.

Dave Astle
-------------------------------------------------------------------------
Chairman and COO, GameDev.net, LLC: http://www.gamedev.net/
Lead Programmer, Myopic Rhino Games: http://www.myopicrhino.com
Software Engineer, ROI Systems: http://www.roisys.com
-------------------------------------------------------------------------

"Graham S. Rhodes" wrote:
[snip]
|
From: Graham S. R. <gr...@se...> - 2000-08-15 16:17:13
|
Wow,

Lots of attendees here! I appreciate the feedback, folks. Starting to get excited.

I'm planning to propose a talk on predicting and managing numerical error for stable physics simulation for games. The XGDC topic list on xgames3d.com has me listed with a title of "Advanced Physics Programming," but really the idea is to introduce formal techniques for analyzing the errors introduced by numerical techniques, the way that the errors propagate (sometimes leading to instability and blow-ups), and how to control the errors by designing or selecting the right solution scheme. Sounds a bit boring, but just about everything I've seen related to game physics simulation has skipped over this, and it is essential to achieving the most robust physics simulations.

I'm also going to submit at least one proposal to GDC on a related matter. By the time GDC rolls around hopefully I'll have some more interesting examples, such as tricky collision detection examples.

Graham Rhodes
|
From: Jamie F. <j.f...@re...> - 2000-08-16 08:54:55
|
Can you recommend any books on the topic? I avoided the numerical methods lectures while at university, and so far it's been the most useful thing I could have done there....

Jamie

"Graham S. Rhodes" wrote:
>
> Wow,
>
> Lots of attendees here! I appreciate the feedback folks. Starting to get
> excited. I'm planning to propose a talk on predicting and managing numerical
> error for stable physics simulation for games. The XGDC topic list on
> xgames3d.com has me listed with a title of "Advanced Physics Programming,"
> but really the idea is to introduce formal techniques for analyzing the
> errors introduced by numerical techniques, the way that the errors
> propagate (sometimes leading to instability and blow-ups), and how to control
> the errors by designing or selecting the right solution scheme. Sounds a bit
> boring, but just about everything I've seen related to game physics
> simulation has skipped over this, and it is essential to achieving the most
> robust physics simulations.
>
> I'm also going to submit at least one proposal to GDC on a related matter.
> By the time GDC rolls around hopefully I'll have some more interesting
> examples, such as tricky collision detection examples.
>
> Graham Rhodes
|
From: Graham S. R. <gr...@se...> - 2000-08-16 15:14:25
|
Here are a few of the references that I use:

"Computational Dynamics" by Ahmed Shabana is a decent book on, well, computational rigid-body dynamics, with a full discussion of many, many joint constraints but not collision detection. It has a fair discussion of numerical methods, but it does not analyze the error terms sufficiently.

"Computational Geometry: Algorithms and Applications" by M. de Berg et al. has some decent discussions on the development of robust geometric algorithms that handle degenerate cases well. Although I use the book as background, I haven't really tested their algorithms.

One of my favorite references on numerical methods is "Computational Fluid Mechanics and Heat Transfer" by Tannehill, Anderson, and Pletcher. (The order of the authors changes with each edition; there are two so far.) This will be one of my references for the papers I'll be preparing. I know it sounds like a strange reference, but it has one of the best discussions I know of on the fundamental nature of numerical errors in discrete integration schemes for differential equations. Chapters 2 and 3 are introductions to DEs (especially PDEs, given the book's subject matter) and have nothing really to do with fluids. Chapter 4 analyzes truncation error in a bit more detail, and studies the stability of a laundry list of equations using a variety of different difference formulas.

There is some discussion in the book about how to deal with discontinuities. In fluids, discontinuities are shock waves and contact surfaces (two regions of fluid that move at different velocities at a common boundary in an inviscid flow, such as the interface between water and air at the ocean). But some of the rules apply elsewhere, including when you have cracks in a rigid or nonrigid body, and when there are collisions in a dynamics problem. The trick is detecting the discontinuities in an automated and robust manner.
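[Editor's aside: the stability issue discussed above can be seen in miniature with the simplest possible case, forward Euler applied to an exponential decay dx/dt = -k*x. This sketch is not from the thread; the function and variable names are illustrative only.]

```python
# Sketch: stability of explicit (forward) Euler on dx/dt = -k*x.
# The exact solution decays, but the discrete update x += dt*(-k*x)
# multiplies x by (1 - k*dt) each step. If |1 - k*dt| > 1, i.e.
# dt > 2/k, every step amplifies the error and the solution blows up.

def euler_decay(k, dt, steps, x0=1.0):
    """Integrate dx/dt = -k*x with forward Euler; return final x."""
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)
    return x

k = 10.0                               # stability limit is dt = 2/k = 0.2
stable   = euler_decay(k, 0.05, 50)    # dt below the limit: x decays toward 0
unstable = euler_decay(k, 0.25, 50)    # dt above the limit: |x| grows each step

print(stable, unstable)
```

The point Graham makes applies directly: the continuous equation is perfectly well-behaved, and the blow-up comes entirely from the truncation error of the difference scheme interacting with the step size, which is why the analysis has to be done per-scheme, not per-equation.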
Obviously, it is hard to detect collisions in a robust manner while keeping time steps large enough for games. (Well, even without dealing with time steps.) In fluids, shock capture methods are pretty good at finding discontinuities, but as with using penalty methods in dynamics, there tends to be a general mushiness/springiness with oscillations at the discontinuity; second-order accurate methods are required to come close to tightly modeling the discontinuity. Shock fitting methods actually model the geometry of the shock explicitly, and this is similar to restarting the integration of a dynamics problem at the point of collision. Much nicer if you can do it fast enough, harder to code. And you still have the problem of intersecting the geometries. (In fluids, the geometry problem involves moving the shock geometry until the flow properties on both sides satisfy the Rankine-Hugoniot equations, which are required to satisfy the second law of thermodynamics for physically consistent shocks. Enough of this tangent!)

Graham Rhodes

> -----Original Message-----
> From: gda...@li...
> [mailto:gda...@li...]On Behalf Of Jamie Fowlston
> Sent: Wednesday, August 16, 2000 4:55 AM
> To: gda...@li...
> Subject: Re: [Algorithms] XGDC conference
>
> Can you recommend any books on the topic? I avoided the numerical methods
> lectures while at university, and so far it's been the most useful thing
> I could have done there....
>
> Jamie
>
> "Graham S. Rhodes" wrote:
>
> > Wow,
> >
> > Lots of attendees here! I appreciate the feedback folks. Starting to get
> > excited. I'm planning to propose a talk on predicting and managing
> > numerical error for stable physics simulation for games. The XGDC topic
> > list on xgames3d.com has me listed with a title of "Advanced Physics
> > Programming," but really the idea is to introduce formal techniques for
> > analyzing the errors introduced by numerical techniques, the way that
> > the errors propagate (sometimes leading to instability and blow-ups),
> > and how to control the errors by designing or selecting the right
> > solution scheme. Sounds a bit boring, but just about everything I've
> > seen related to game physics simulation has skipped over this, and it
> > is essential to achieving the most robust physics simulations.
> >
> > I'm also going to submit at least one proposal to GDC on a related
> > matter. By the time GDC rolls around hopefully I'll have some more
> > interesting examples, such as tricky collision detection examples.
> >
> > Graham Rhodes
|