Thread: [Algorithms] Numerical methods (Page 2)
From: Jamie F. <j.f...@re...> - 2000-08-16 15:32:43
|
Cheers, I'll look into them when I get time :)

Jamie

"Graham S. Rhodes" wrote:

> Here are a few of the references that I use:
>
> "Computational Dynamics" by Ahmed Shabana is a decent book on, well,
> computational rigid-body dynamics, with full discussion of many, many
> joint constraints but not collision detection. It has a fair discussion
> of numerical methods, but it does not analyze the error terms
> sufficiently.
>
> "Computational Geometry: Algorithms and Applications" by M. de Berg et
> al. This book has some decent discussions on the development of robust
> geometric algorithms that handle degenerate cases well. Although I use
> the book as background, I haven't really tested their algorithms.
>
> One of my favorite references on numerical methods is:
>
> "Computational Fluid Mechanics and Heat Transfer" by Tannehill, Anderson,
> and Pletcher. The order of the authors is random for each edition (there
> are two so far). This will be one of my references for the papers I'll
> be preparing.
>
> I know this sounds like a strange reference, but it has one of the best
> discussions I know of on the fundamental nature of numerical errors in
> discrete integration schemes for differential equations. Chapters 2 and
> 3 are introductions to DE's (especially PDE's, due to the nature of the
> book's subject matter), and have nothing really to do with fluids.
> Chapter 4 analyzes truncation error in a bit more detail, and studies
> the stability of a laundry list of equations, using a variety of
> different difference formulas.
>
> There is some discussion in the book about how to deal with
> discontinuities. In fluids, discontinuities are shock waves and contact
> surfaces (two regions of fluid that move at different velocities at a
> common boundary in an inviscid flow----such as the interface between
> water and air at the ocean). But some of the rules apply elsewhere,
> including when you have cracks in a rigid or nonrigid body, and when
> there are collisions in a dynamics problem. The trick is detecting the
> discontinuities in an automated and robust manner. Obviously, it is
> hard to detect collisions in a robust manner while keeping time steps
> large enough for games. (Well, even without dealing with time steps.)
> In fluids, shock capture methods are pretty good at finding
> discontinuities, but as with penalty methods in dynamics there tends to
> be a general mushiness/springiness with oscillations at the
> discontinuity---second order accurate methods are required to come
> close to tightly modeling the discontinuity. Shock fitting methods
> actually model the geometry of the shock explicitly, and this is
> similar to restarting the integration of a dynamics problem at the
> point of collision. Much nicer if you can do it fast enough, harder to
> code. And you still have the problem of intersecting the geometries.
> (In fluids, the geometry problem involves moving the shock geometry
> until the flow properties on both sides satisfy the "Rankine-Hugoniot"
> equations----required to satisfy the second law of thermodynamics for
> physically consistent shocks. Enough of this tangent!)
>
> Graham Rhodes
>
> > -----Original Message-----
> > From: Jamie Fowlston
> > Sent: Wednesday, August 16, 2000 4:55 AM
> > Subject: Re: [Algorithms] XGDC conference
> >
> > Can you recommend any books on the topic? I avoided the numerical
> > methods lectures while at university, and so far they look like the
> > most useful thing I could have done there....
> >
> > Jamie
> >
> > "Graham S. Rhodes" wrote:
> >
> > > Wow, lots of attendees here! I appreciate the feedback, folks.
> > > Starting to get excited. I'm planning to propose a talk on
> > > predicting and managing numerical error for stable physics
> > > simulation for games. The XGDC topic list on xgames3d.com has me
> > > listed with a title of "Advanced Physics Programming," but really
> > > the idea is to introduce formal techniques for analyzing the errors
> > > introduced by numerical techniques, the way that the errors
> > > propagate (sometimes leading to instability and blow-ups), and how
> > > to control the errors by designing or selecting the right solution
> > > scheme. Sounds a bit boring, but just about everything I've seen
> > > related to game physics simulation has skipped over this, and it is
> > > essential to achieving the most robust physics simulations.
> > >
> > > I'm also going to submit at least one proposal to GDC on a related
> > > matter. By the time GDC rolls around hopefully I'll have some more
> > > interesting examples, such as tricky collision detection examples.
> > >
> > > Graham Rhodes
|
From: Jeff L. <je...@di...> - 2000-08-16 21:57:33
|
Agree with your list, and will add two more:

Faires and Burden, "Numerical Methods"
Engeln, "Numerical Algorithms with C"

Both have very good discussions and coverage of the topic.

-Jeff

At 11:11 AM 8/16/2000 -0400, you wrote:
>Here are a few of the references that I use:
>
>[snip: Graham's reference list and discussion, quoted in full in the
>previous message]
|
From: Pierre T. <p.t...@wa...> - 2000-08-15 06:22:35
|
> I agree... I'd definitely need to understand an algorithm before I can
> debug it, but precision in algorithms is important (of course), and I
> find it easier to be precise in C++ than in the universal language of
> mathematics (obscure movie reference).

Ok, why not. I'd have said the opposite anyway :) The code version is easier to understand IMHO, but I don't see it as more precise. Well, whatever.

> I think even with comments and indenting kept in there it wouldn't be
> that readable.

I'm a big fan of variables that mean something. :)

> I'll send you the article offline (it's like I walked to the library to
> make a copy for you, and then walked over to France to hand it to you,
> right?)

Yeah! Great! Many thanks! (...a 40 MB download, whoops! No problem, I have a fast connection.)

As a matter of fact, there's some fresh meat inside. Rabbitz is the only one, for example, to relate the As matrix to Lagrange multipliers. Not really useful I guess, but worth writing. And he explains things more clearly than Gilbert, for sure.

Now I think I've probably got it, by "brute-forcely" writing down all the possible matrices, determinants and cofactors: I must admit that formula works :) But I'm still very upset about it. I must find some more time to work on it, but I think for the moment I'll just go ahead.

> > using your notation:
> >
> > Delta(j)(X + yj) = Sigma(i in IX) (Delta(i)(X) * ((yi | yk) - (yi | yj)))
> >
> > Delta(j)(X + yj) = Sigma(i in IX) (Delta(i)(X) * (yi | (yk - yj)))
>
> I don't think you can do that last step.

Ok, let's fix that, because it may be important. Either it works and nobody noticed it before (hence, I'm very happy), or it does not work and it probably means I'm not using the right delta formula, which would explain why I can't re-derive it (hence, I'm very happy as well, because I found my bug :)

Let's get rid of the Sigma, painful to write in ASCII. Am I wrong, or do we have the following pattern:

D(j) = d(0) * y0|(yk - yj) + d(1) * y1|(yk - yj) + d(2) * y2|(yk - yj) + ...

with d(i) = scalar values, yi = vectors.

The dot product is compatible with scalar multiplication: (rX)|Y = r(X|Y)

Hence

D(j) = (d(0)*y0)|(yk - yj) + (d(1)*y1)|(yk - yj) + (d(2)*y2)|(yk - yj) + ...

The dot product is commutative: X|Y = Y|X

Hence

D(j) = (yk - yj)|(d(0)*y0) + (yk - yj)|(d(1)*y1) + (yk - yj)|(d(2)*y2) + ...

The dot product is distributive: X|(Y+Z) = X|Y + X|Z

And since yk and yj are fixed (they don't depend on i in the Sigma expression):

D(j) = (yk - yj) | [ (d(0)*y0) + (d(1)*y1) + (d(2)*y2) + ... ]

Unless I'm blind and totally missing a key point somewhere, here it is. What do you think?

Pierre
|
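For what it's worth, the identity is easy to sanity-check numerically. Here is a throwaway C++ snippet (all vectors and coefficients made up) comparing the term-by-term sum against the factored form:

#include <stdio.h>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

int main()
{
    const int n = 3;
    float d[n] = { 0.5f, -1.25f, 2.0f };                 // arbitrary scalars d(i)
    Vec3  y[n] = { {1,2,3}, {-4,0,2}, {0.5f,1,-1} };     // arbitrary vectors yi
    Vec3  yk   = { 2, -1, 0 }, yj = { 1, 1, 1 };
    Vec3  diff = { yk.x - yj.x, yk.y - yj.y, yk.z - yj.z };

    float termByTerm = 0.0f;     // n dot products, as in the original formula
    Vec3  acc = { 0, 0, 0 };     // accumulates Sigma d(i)*yi for the factored form
    for (int i = 0; i < n; ++i)
    {
        termByTerm += d[i] * Dot(y[i], diff);
        acc.x += d[i] * y[i].x;
        acc.y += d[i] * y[i].y;
        acc.z += d[i] * y[i].z;
    }
    float factored = Dot(diff, acc);   // one dot product at the end

    printf("%f %f\n", termByTerm, factored);   // equal up to float round-off
    return 0;
}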
From: Will P. <wi...@cs...> - 2000-08-15 17:45:02
|
> Let's get rid of the Sigma, painful to write in ASCII.

All math is painful to write in ASCII. I'd prefer LaTeX, because I can read and write it. :)

> [snip: the step-by-step factoring of D(j), quoted in full in the
> previous message]
>
> Unless I'm blind and totally missing a key point somewhere, here it is.
> What do you think?

Actually, I think your math is right (I had to write it out in pencil :) ). The reason why it's not a big win is that with this formulation you require three multiplies for each term and then a dot product, as opposed to just one multiply per term with the dot product table lookups. This is assuming, of course, that multiplies are the expensive operations.

Will

----
Will Portnoy
http://www.cs.washington.edu/homes/will
|
From: Pierre T. <p.t...@wa...> - 2000-08-15 20:07:15
|
> Actually, I think your math is right (I had to write it out in pencil
> :) ). The reason why it's not a big win is that with this formulation
> you require three multiplies for each term and then a dot product, as
> opposed to just one multiply per term with the dot product table
> lookups. This is assuming, of course, that multiplies are the expensive
> operations.

Maybe you're right. As always, I think I'll test both ways and pick the fastest.

I asked some questions on comp.graphics.algorithms, where one can reach Van den Bergen (as well as John Nagle). I also searched my archives and found some old messages by Nagle about GJK. Here's a big copy/paste of the whole thing.....

Pierre

====================================================================

> Hi,
> I went through various papers in order to implement my own version of
> GJK, and there are some things I don't get:
>
> - in Johnson's distance subalgorithm, where does the Delta formula
> come from? I followed Gilbert's original paper and tried to re-derive
> it by expressing the determinant of the matrix As, but there's always
> something wrong in the end. Is the proof available somewhere? I expect
> it can be found in Johnson's PhD thesis, but I can't find that one.
>
> - in Van den Bergen's paper about GJK, page 5: "This subset X is the
> largest of all nonempty Z....". Largest? I would have said the
> smallest...? Typo?

Not a typo. The smallest nonempty Z would be any singleton subset of Y. Take a pair of points Z = { x0, x1 }, then aff(Z) is a line. Let v = lambda0 x0 + lambda1 x1, lambda0 + lambda1 = 1, be the point on aff(Z) closest to the origin. Iff the lambda0 and lambda1 are positive, then |v(conv(Z))| < |x0| and |v(conv(Z))| < |x1|, i.e. the line segment x0-x1 contains a point that is closer to the origin than either x0 or x1. The fact that Z has to be a minimal set is expressed in the fact that the lambdas have to be positive (not equal to zero).

> - The delta formula is something like:
>
> Delta(j)(X+yj) = Sigma(i in IX) (Delta(i)(X) * (yi | yk - yi | yj))
>
> ...where | is a dot product.
>
> Why don't they rewrite it that way:
>
> Delta(j)(X+yj) = (yk - yj) | Sigma(i in IX) (Delta(i)(X) * yi)
>
> ...which saves a lot of dot products, and makes all those caches
> useless?

Hmmm, this looks OK. It has been a while, so I'll need some more thought on this one. I have some worries regarding the numerics of this formulation. By doing the subtraction first you might cancel out a lot of precision.

Gino

====================================================================

Gerhard wrote:

> I am working on an object-distance routine for collision detection.
> The GJK algorithm was recommended to me. I read some very theoretical
> papers about it, and I think I understand how it basically works. I
> also understand the principle of the Minkowski sum, which is used in
> the algorithm. But I wouldn't know how to compute it.
> So my question is, how can I compute the Minkowski sum of two convex
> polyhedra?

Try here for some links on GJK. Check out Cameron's work, and SOLID.

http://www.animats.com/topics/others.html

Personally, I think that introducing the concept of the Minkowski sum into the design of GJK just complicates the problem. What GJK does can be envisioned in object space. What GJK is actually doing is trying to find the closest-points figure, which can be a line, a triangle, a nonplanar quadrilateral, or a tetrahedron. These represent the point vs. point case, the point vs. line case, the line vs. line case, and the point vs. face case.

GJK works by starting with a line between two more or less arbitrary points on each object, and then adds a point to "improve" the closest-points figure. The new figure is tested to determine whether a lower-order part of itself is better than the whole figure, in which case a point is thrown out of the closest-points figure. This process iterates until no improvement is observed.

A useful project would be to take Gino's code in SOLID, convert it to Java, and make up an applet that lets you watch the algorithm run. This would make it much clearer how GJK converges on a result, and would be very helpful in diagnosing some of the problems that GJK implementations encounter.

John Nagle
Animats
www.animats.com

====================================================================

ge...@gi... writes:

> Hello,
> Sorry for replying so late. I'd like to thank you for your help.
> You told me that computing the Minkowski sum is not necessary.

My point was that the explanation of GJK that uses the Minkowski sum is unnecessarily confusing. GJK does not, in fact, compute the Minkowski sum of the two polyhedra. That would be a huge data structure.

> I'm a bit confused about how GJK works in object space. I start with
> two vertices, one on each object. But now, which vectors do I use for
> the two support mappings?
>
> Here's a list of papers I have about GJK (doesn't mean that I
> understood them all completely, they're a bit complicated for a
> 17-year-old student...):
>
> - "A Fast Procedure for Computing the Distance Between Complex Objects
>   in Three-Dimensional Space" (Gilbert, Johnson and Keerthi)
> - "A Fast and Robust GJK Implementation for Collision Detection of
>   Convex Objects" (Gino van den Bergen)
> - "Determining the Minimum Translational Distance between Two Convex
>   Polyhedra" (Cameron)
> - "Enhancing GJK: Computing Minimum and Penetration Distances between
>   Convex Polyhedra" (Cameron)

You have all the right papers. Useful URLs are

http://www.win.tue.nl/cs/tt/gino/solid/
http://www.comlab.ox.ac.uk/oucl/people/stephen.cameron.html

> I have no problem understanding improvement details such as hill
> climbing, but the general algorithm in object space is not yet clear
> to me.

It's not really clear to anybody. Gilbert's original paper is really hard to follow. And none of the papers have enough pictures. Early thinking on GJK applied it to point clouds rather than convex polyhedra, and included extensions to N dimensions. This resulted in a very abstract view of the problem, which is confusing when you try to follow what it's doing in 3D.

The key to understanding GJK is understanding what the "simplex" it uses is. The simplex is a collection of from 1 to 4 pairs of points from the two polyhedra. There will never be more than three unique points from either polyhedron. The algorithm runs in two main steps:

1. Find a "better" point to add to the simplex.
2. Throw out one or more points in the simplex that aren't helping.

This iterates until no improvement is possible.

Step 1 works by using the previous closest-points vector and examining each polyhedron, finding the point furthest in the direction along the closest-points vector that heads towards the other polyhedron. This is simple.

Step 2 is a table-driven case analysis that looks at all the meaningful ways to build point vs. point, point vs. face, edge vs. edge, etc. cases from the points in the simplex and finds the current best. It's hard to follow what those tables are doing, but you could write an inefficient exhaustive search that tried all possible combinations and it would work.

The original I-COLLIDE paper at http://www.cs.unc.edu/~geom/I_COLLIDE.html might be helpful. This is the original Lin-Canny work. It's easier to understand the basic Lin-Canny algorithm than GJK, but the code is huge. Lin-Canny works by turning the problem into a linear programming problem and using an existing linear programming algorithm to solve it. Not only does this take a lot of code with many special cases, it's subject to numerical precision problems.

John Nagle
Animats
www.animats.com
|
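Nagle's step 1 is the support mapping mentioned elsewhere in this thread. Before any hill climbing, a brute-force version is just an argmax over the vertices; a sketch, assuming a plain vertex array:

#include <stddef.h>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Brute-force support mapping: index of the vertex farthest in
// direction 'dir'.  O(n) per query; hill climbing over precomputed
// vertex adjacencies makes this nearly constant time in practice.
size_t SupportPoint(const Vec3* verts, size_t count, const Vec3& dir)
{
    size_t best = 0;
    float bestProj = Dot(verts[0], dir);
    for (size_t i = 1; i < count; ++i)
    {
        float proj = Dot(verts[i], dir);
        if (proj > bestProj)
        {
            bestProj = proj;
            best = i;
        }
    }
    return best;
}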
From: Will P. <wi...@cs...> - 2000-08-15 20:39:58
|
> A useful project would be to take Gino's code in SOLID, convert it to
> Java, and make up an applet that lets you watch the algorithm run.
> This would make it much clearer how GJK converges on a result, and
> would be very helpful in diagnosing some of the problems that GJK
> implementations encounter.

I can't help but respond to somebody else's post on another forum. :)

This is actually a great idea. I was thinking of working on some sort of open source physics library, but I was worried about a variety of issues. But by writing it in Java, and in such a form that it's less useful in a commercial game and more useful as a public service, it's a much more interesting project. I know it would be useful for a variety of graphics algorithms as well (i.e. BSP trees, subdivision schemes, and so on), and I'm sure there are already several similar (though not integrated) programs out there.

Anyway, it's an idea.

Will

----
Will Portnoy
http://www.cs.washington.edu/homes/will
|
From: Pierre T. <p.t...@wa...> - 2000-08-17 09:32:39
|
Ok, I have my very first version running - and my head hurts.

Pretty basic for the moment, but it seems to work. The computed distance does not look very accurate, nonetheless. For example, I saw it reported as 0.000261 although the two objects obviously collided. Dunno if it's normal. I skipped the backup procedure (I actually *forgot* about it) but my basic tests worked anyway. I think Gino was right when he discarded it. Running time is correct, provided you use hill-climbing. If you don't.... *grin*, if you don't, just forget it. (Ok, unless your meshes have few triangles.) I used Rabbitz' subroutine for Johnson's algorithm, so that I could build the Gilbert part on a working basis. Now I'm going to recode that one my way.

I think some day I'll also pack all my notes in a ppt file for interested people.

Pierre
|
From: Will P. <wi...@cs...> - 2000-08-17 17:55:00
|
I have one working as well, but I'm not taking hill-climbing into account (I could add it, but for simple shapes like cubes I wouldn't be surprised if it were more expensive to generate the adjacencies at load time and check them in the support point computation routine). I haven't encountered cycling yet, though it's easy enough to detect that issue.

I substituted this GJK implementation in the place of the separating plane stuff I had beforehand, and although it doesn't find a full contact manifold, it looks pretty decent in the context of a rigid body simulation.

Will

----
Will Portnoy
http://www.cs.washington.edu/homes/will

On Thu, 17 Aug 2000, Pierre Terdiman wrote:

> [snip: Pierre's message, quoted in full above]
|
From: Pierre T. <p.t...@wa...> - 2000-08-17 19:08:43
|
Yeah, here we go again !

> I have one working as well, but I'm not taking hill-climbing into
> account (I could add it, but for simple shapes like cubes I wouldn't
> be surprised if it were more expensive to generate the adjacencies at
> load time and check them in the support point computation routine). I
> haven't encountered cycling yet, though it's easy enough to detect
> that issue.

Hill-climbing is a dramatic optimization provided the number of vertices gets "reasonable". Now, it's up to you :)

Talking about cycling, it reminds me of something I forgot to mention so far. What termination condition do you use? Gino's is the same as Gilbert's, and you can read in his paper how much pain he had making it more robust. That's why I discarded it and used an "improved" termination condition presented in the Q-COLLIDE thesis. Basically, you just keep recording the pairs of vertices encountered, and you stop the algorithm the moment you find a reoccurrence. This is a lot more robust than everything else of course, since it no longer depends on numerical accuracy. But is that what you mean by "cycling"?

I don't know where this improved termination condition comes from; I trusted Kelvin Chung on that point - judging by the rest of the thesis, I think I can. Best proof is simply that it does work, and very well. Now, I'm a little upset about one of the last paragraphs in Cameron's paper about his enhanced GJK. I must read it again, but I think he claims this new termination condition doesn't handle all possible cases. Any thoughts regarding that particular, delicate point? I think it's important to improve GJK's robustness; I don't feel like playing with epsilons for ages, as Gino did.

There is also an implementation issue behind that: you must record pairs of vertices. If P has p vertices and Q has q, then you can basically face p*q different pairs. Recording them is easy. But you also must be able to tell whether a new pair has already been recorded or not - and of course in a very quick way. This is not a very big problem, but choices have to be made there. For the moment, and for the sake of laziness, I just scan the array of recorded pairs in search of the current one, in a very stupid linear way. It could be improved using binary search, but I didn't do it. I also could use a stupid array of p*q bits, but that really would waste some RAM for nothing. Comments welcome here as well.

> I substituted this GJK implementation in the place of the separating
> plane stuff I had beforehand, and although it doesn't find a full
> contact manifold, it looks pretty decent in the context of a rigid
> body simulation.

Looks like we're all doing the same. I already had the separating vector from Q-COLLIDE, and I actually started GJK to implement the "combined algorithm" presented at the end. What do you mean by "full contact manifold"?

It actually made me think of a new question! When the algorithm stops, it returns a pair of closest points expressed as:

Cp = l0*P0 + l1*P1 + ...
Cq = l0*Q0 + l1*Q1 + ...

(assuming the li are Lambda(i), Cp belongs to P, Cq belongs to Q)

Now, I didn't notice it before, but the Lambdas are the same for Cp and Cq!? Isn't that a strong limitation regarding the actual positions of Cp and Cq on the convex hulls of P and Q? I mean, when you start looking for two arbitrary points on P and Q, you're supposed to be able to express them with arbitrary Lambda values. There's a little something here I don't get. Weird. Do you find it shocking or normal?... maybe there's an obvious reason I don't see. And maybe that's what you mean when you say it doesn't find a "full contact manifold"?

Pheew, when you think it's finished, well, it's not :)

Pierre
|
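Regarding recording the vertex pairs: one cheap middle ground between the linear scan and a full p*q bit array is to pack each pair into a single integer key and keep the keys in a sorted container. A sketch (class and names made up):

#include <set>

// Termination bookkeeping for the Q-COLLIDE-style test: detects the
// first reoccurrence of a (vertex of P, vertex of Q) pair.
class PairHistory
{
    std::set<unsigned int> m_seen;   // packed keys of pairs seen so far
    unsigned int m_qCount;           // number of vertices of Q
public:
    explicit PairHistory(unsigned int qCount) : m_qCount(qCount) {}

    // Returns true if (p, q) was already recorded -> stop iterating.
    bool RecordAndCheck(unsigned int p, unsigned int q)
    {
        unsigned int key = p * m_qCount + q;   // unique key per pair
        return !m_seen.insert(key).second;     // insert fails => seen before
    }
};

Lookups and inserts are O(log k) in the number of pairs recorded so far, and memory grows only with the pairs actually visited, not with p*q.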
From: Will P. <wi...@cs...> - 2000-08-17 19:53:51
|
> Hill-climbing is a dramatic optimization provided the number of
> vertices gets "reasonable". Now, it's up to you :)

I agree totally (i.e. O(1) is better than O(n)). There are just other things that are slower and worth speeding up at this time, and making it O(1) (however easy it might be) will complicate the code. For example, I'm going to integrate my rigid bodies into a BSP-tree based world and I'll need to do full collision detection there. I expect those algorithms will be costly.

> Talking about cycling, it reminds me of something I forgot to mention
> so far. What termination condition do you use?

I'm using Gino's termination condition (the one that monotonically increases to a tighter lower bound).

> Basically, you just keep recording the pairs of vertices encountered,
> and you stop the algorithm the moment you find a reoccurrence. This is
> a lot more robust than everything else of course, since it no longer
> depends on numerical accuracy. But is that what you mean by "cycling"?

This is what I was thinking that I might have to add in case the simplex went through the same series of sets of points repeatedly. But I haven't seen it yet.

> Any thoughts regarding that particular, delicate point? I think it's
> important to improve GJK's robustness; I don't feel like playing with
> epsilons for ages, as Gino did.

I thought Gino's solution was to look for cycling? That doesn't seem too hard.

> There is also an implementation issue behind that: you must record
> pairs of vertices. [...]

Well, you could just remember which points are removed from the simplex in that time step.... that shouldn't be too hard, and is constant time.

> What do you mean by "full contact manifold"?

Like if two cubes are on top of each other. The full contact manifold (or roughly, the parts that touch) wouldn't just be one closest point as returned by GJK, but a face or some other combination of features. For my purposes, it might not be worth calculating.

> It actually made me think of a new question! When the algorithm stops,
> it returns a pair of closest points expressed as:
>
> Cp = l0*P0 + l1*P1 + ...
> Cq = l0*Q0 + l1*Q1 + ...
>
> [...] Do you find it shocking or normal?... maybe there's an obvious
> reason I don't see.

It's weird, yes, but not that weird considering that the whole algorithm works on the two polyhedra A and B at the same time to find the point in A - B closest to the origin.

> Pheew, when you think it's finished, well, it's not :)

It never is. The harder part is coming up with good collision response/contact forces, and god forbid I decide to go with nonlinear springs for fear of patent violations. :)

Will

----
Will Portnoy
http://www.cs.washington.edu/homes/will
|
From: John M. <j.m...@re...> - 2000-08-18 08:36:51
|
Pierre Terdiman writes: >Hill-climbing... >Talking about cycling... Hill climbing? Cycling? Did I get on the wrong list by mistake? :-) John |
From: Pierre T. <p.t...@wa...> - 2000-08-17 09:40:27
|
In rigid body simulations, one needs a correct estimation of the distance and contact points between two bodies, in order to:

- back up the simulator in search of the exact collision time
- compute an accurate collision response

If you followed the recent GJK thread, you know this algorithm works well for such tasks. But what about other collision detection methods? For example, say I'm just using OBBs, as with RAPID. RAPID will detect collisions, but won't give me a distance or a pair of closest points. Does that mean it can't be used efficiently within a rigid body simulator? I think I'm missing something there.

Pierre
|
From: Jeff L. <je...@di...> - 2000-08-13 23:54:33
|
Sounds like you have it right to me. But normals on a deformable mesh are tricky.

There is one problem I know of with the technique: it only works for rotational joints. If you are using the bones for animation by translating them, the normals will not be correct, since a translation can change the local surface without any way to fix the normals.

For instance, think of using bones in a grid across a planar surface that you want to animate like water. If you translate one bone up, the local surface will change. But nothing in the rotational part of the bone matrix has changed. You would need to fix up the normals by recomputing them.

I have not seen any real serious problems with normals when they are only used with a hierarchical skeleton where the bones only rotate. That doesn't mean there are no problems. Linear interpolation of normals followed by normalizing the result is probably not perfect. It would be interesting to test the accuracy by generating average normals, deforming the mesh and normals, and then comparing those normals with ones that you regenerate.

I also thought once about converting the normal to a quaternion, rotating, and then slerping between them. That may be more accurate; however, it has really never looked bad enough for me to investigate further.

-Jeff

At 06:22 PM 8/13/2000 +0300, you wrote:
>Hi,
>
>How do you handle normals in skeletal animation?
>
>Currently I use this approach:
>After loading the model I take the first frame and calculate all
>normals. Then I use inverted bone matrices to transform the normals
>into some 'pretransformed' state and store them the same as the offset
>vectors (every vertex has a list of pre-normals for affected bones).
>
>At rendering time, for every affected bone I transform these normals
>using the current bone matrix, multiply by the weight, sum it all and
>normalize.
>
>In the result I get something hm.. :)
>In most cases it looks ok, but sometimes it looks like some normals
>were calculated using incorrect bones.
>
>In any case, I think there should be a much better way to do this.
>Can anybody recommend something?
>
>Thanks
>vlad
|
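A minimal sketch of the blend step described above — rotate each pre-transformed normal by the current bone rotation, weight, sum, renormalize. Types and names are made up, and the weights are assumed to sum to one:

#include <math.h>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };    // rotation part of a bone matrix

static Vec3 Rotate(const Mat3& r, const Vec3& v)
{
    Vec3 o;
    o.x = r.m[0][0]*v.x + r.m[0][1]*v.y + r.m[0][2]*v.z;
    o.y = r.m[1][0]*v.x + r.m[1][1]*v.y + r.m[1][2]*v.z;
    o.z = r.m[2][0]*v.x + r.m[2][1]*v.y + r.m[2][2]*v.z;
    return o;
}

// Blend one vertex normal from its affecting bones.
Vec3 BlendNormal(const Mat3* boneRot, const Vec3* preNormals,
                 const float* weights, int boneCount)
{
    Vec3 n = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < boneCount; ++i)
    {
        Vec3 r = Rotate(boneRot[i], preNormals[i]);   // into current pose
        n.x += weights[i] * r.x;
        n.y += weights[i] * r.y;
        n.z += weights[i] * r.z;
    }
    float len = sqrtf(n.x*n.x + n.y*n.y + n.z*n.z);
    if (len > 1e-6f)                                  // guard degenerate blends
    {
        n.x /= len; n.y /= len; n.z /= len;
    }
    return n;
}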
From: Conor S. <cs...@tp...> - 2000-08-14 01:39:24
|
You could always avoid using normals. Often you can use various tricks to get nice lighting without them.

Conor Stokes

> [snip: Jeff's reply and vlad's original question, quoted in full above]
|
From: Pai-Hung C. <pa...@ac...> - 2000-08-14 06:28:31
|
Hi,

I've written a routine to read a bitmap heightfield and construct a terrain mesh from it. I partition each set of four pixels into two triangles along the upper-right-to-lower-left diagonal. Therefore, for example, a 128 x 128 bitmap will produce (128-1) x (128-1) x 2 = 32258 triangles using my method. For each triangle I calculate its unit face normal. For each vertex of a triangle, I calculate its vertex normal by adding and averaging the face normals of all triangles that share the vertex (in my case a vertex can be shared by at most six triangles and at least one triangle) and then normalize the averaged normal, which is used in rendering.

My problem is that there are some *horizontal dimmed banding* effects along the triangle edges of the mesh when rendering in D3D with a light cast on it. Also, there are very visible dimmed seams between triangles when light is cast on it. All these artifacts seem to manifest only when light is cast at large angles (i.e. not directly). Is it because the arrangement of triangles in my mesh is too *uniform* (actually they are extremely uniform)? Or is my calculation of vertex normals incorrect?

Thanks in advance,

Pai-Hung Chen
|
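For reference, here is the accumulate-and-normalize scheme described above in sketch form (indexed triangle list assumed; all names made up). Normalizing each face normal before accumulating matches the "average of unit face normals" description; skipping that step instead weights each face by its area:

#include <math.h>

struct Vec3 { float x, y, z; };

static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    Vec3 c = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return c;
}

static void Normalize(Vec3& v)
{
    float len = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
    if (len > 1e-6f) { v.x /= len; v.y /= len; v.z /= len; }
}

// Accumulate each face normal into its three vertices, then normalize.
// 'indices' holds 3*triCount vertex indices; 'normals' holds vertexCount
// entries.
void BuildVertexNormals(const Vec3* verts, int vertexCount,
                        const int* indices, int triCount, Vec3* normals)
{
    for (int v = 0; v < vertexCount; ++v)
    {
        normals[v].x = normals[v].y = normals[v].z = 0.0f;
    }
    for (int t = 0; t < triCount; ++t)
    {
        const Vec3& a = verts[indices[3*t + 0]];
        const Vec3& b = verts[indices[3*t + 1]];
        const Vec3& c = verts[indices[3*t + 2]];
        Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
        Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
        Vec3 fn = Cross(e1, e2);
        Normalize(fn);                       // unit face normal
        for (int k = 0; k < 3; ++k)
        {
            Vec3& n = normals[indices[3*t + k]];
            n.x += fn.x; n.y += fn.y; n.z += fn.z;
        }
    }
    for (int v = 0; v < vertexCount; ++v)
    {
        Normalize(normals[v]);               // averaged vertex normal
    }
}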
From: Klaus H. <k_h...@os...> - 2000-08-14 12:06:17
|
Not sure what you mean by "horizontal dimmed banding", but I guess you are talking about the typical Gouraud shading artifacts. These can happen if the light intensity at adjacent vertices differs very much (e.g. the two shared vertices of two triangles are white, and the other two non-shared vertices of the two triangles are black).

Personally, I use pre-lit textures or lightmaps (using Phong shading) to eliminate this effect, but the following algorithm to compute the vertex normals should already reduce (or maybe even eliminate) those seams (thanks to Peter Dimov for the algorithm :).

void VertexNormal(Vector3& normal, long col, long row, float gridSpacing)
{
    float nx, nz, denom;

    // Central difference in x in the interior; one-sided (and doubled,
    // to keep the same scale) at the two borders.
    if (col > 0 && col < m_fieldSize - 1)
    {
        nx = GetElev(col - 1, row) - GetElev(col + 1, row);
    }
    else if (col > 0)
    {
        nx = 2.0f * (GetElev(col - 1, row) - GetElev(col, row));
    }
    else nx = 2.0f * (GetElev(col, row) - GetElev(col + 1, row));

    // Same for z.
    if (row > 0 && row < m_fieldSize - 1)
    {
        nz = GetElev(col, row - 1) - GetElev(col, row + 1);
    }
    else if (row > 0)
    {
        nz = 2.0f * (GetElev(col, row - 1) - GetElev(col, row));
    }
    else nz = 2.0f * (GetElev(col, row) - GetElev(col, row + 1));

    gridSpacing *= 2.0f;

    denom = 1.0f / sqrt(nx * nx + gridSpacing * gridSpacing + nz * nz);

    normal.x = nx * denom;
    normal.y = gridSpacing * denom;
    normal.z = nz * denom;
}

"normal" is the unit vertex normal that is returned by the function. "(col, row)" is the data point in the height field for which you want to compute the vertex normal, and "gridSpacing" is the spacing between two adjacent vertices (normally, gridSpacing is 1, unless you decide to scale the height field). Finally, GetElev() returns the elevation of a particular data point in the height field.

HTH,
Niki

----- Original Message -----
From: Pai-Hung Chen <pa...@ac...>
Sent: Monday, August 14, 2000 8:22 AM
Subject: [Algorithms] Terrain Normals

> [snip: question quoted in full above]
|
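Typical usage would just be a loop over the grid, one call per data point (sketch; 'm_normals' is a made-up destination array):

// Fill one vertex normal per height-field sample.
for (long row = 0; row < m_fieldSize; ++row)
    for (long col = 0; col < m_fieldSize; ++col)
        VertexNormal(m_normals[row * m_fieldSize + col], col, row, 1.0f);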
From: Pai-Hung C. <pa...@ac...> - 2000-08-14 16:58:49
|
Hi,

Thanks for the help!

> Personally, I use pre-lit textures or lightmaps (using Phong shading)
> to eliminate this effect, but the following algorithm to compute the
> vertex normals should already reduce (or maybe even eliminate) those
> seams (thanks to Peter Dimov for the algorithm :).

Wouldn't the same artifacts still be visible in the pre-lit lightmaps, since the lightmaps' color/intensity is still affected by the _faulty_ normals when Gouraud shading is used (at least in D3D's case)?

Also, could you briefly explain the reasoning of the normal-generating algorithm you suggested? A high-level, general idea is fine. :-) Thank you.

Pai-Hung Chen

> [snip: VertexNormal() code and earlier discussion, quoted in full
> above]
|
From: Klaus H. <k_h...@os...> - 2000-08-14 20:35:10
|
Hi,

I made two screen shots for you. These images are not supposed to look good, but they show that lightmaps don't introduce those Gouraud shading artifacts. The lightmap contains only a single color (at varying intensities), and two directional light sources have been used. There's exactly one texel per data point in the height field.

Please ignore those 'lines' that look like a bug... these are texture seams, and it's sort of difficult to get rid of them.

http://www.thecore.de/TheCore/img1.jpg
http://www.thecore.de/TheCore/img2.jpg

Niki

> [snip: previous messages, quoted in full above]
|
From: Thomas L. <tho...@gm...> - 2000-08-14 20:45:17
|
> Please ignore those 'lines' that look like a bug... these are texture
> seams, and it's sort of difficult to get rid of them.
>
> http://www.thecore.de/TheCore/img1.jpg
> http://www.thecore.de/TheCore/img2.jpg

BTW, how do you render the water surface in those pictures?

Thanks,
Thomas
|
From: Klaus H. <k_h...@os...> - 2000-08-14 21:14:53
|
That's simply a greenish-blue alpha polygon (no texture). It basically is one big square polygon that cuts through the whole terrain. The reason why it does not look too crappy is that you see the lit terrain beneath the water.

If it's important for you, the color of the polygon is:

r = 0.01
g = 0.2
b = 0.4
a = 0.7

Niki

> [snip: Thomas's question, quoted above]
|
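In OpenGL terms (just one possible way to draw such a polygon; the thread itself is API-agnostic), that is a single alpha-blended quad at the water height, drawn after the terrain so the lit ground shows through. A sketch, where 'waterY' and the extents are placeholders:

float waterY = 0.0f;                    // water height -- placeholder

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);                  // no depth writes in the transparent pass
glColor4f(0.01f, 0.2f, 0.4f, 0.7f);     // the r, g, b, a values above
glBegin(GL_QUADS);
    glVertex3f(-1000.0f, waterY, -1000.0f);
    glVertex3f(-1000.0f, waterY,  1000.0f);
    glVertex3f( 1000.0f, waterY,  1000.0f);
    glVertex3f( 1000.0f, waterY, -1000.0f);
glEnd();
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);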
From: Pai-Hung C. <pa...@ac...> - 2000-08-14 22:15:21
|
> That's simply a greenish-blue alpha polygon (no texture). It basically
> is one big square polygon that cuts through the whole terrain. The
> reason why it does not look too crappy is that you see the lit terrain
> beneath the water.

(1) Is it the fact that "the terrain beneath the water is lit" that makes the water look good? The terrain is lit either with or without a lightmap. Does that make a difference?

(2) How do you make the terrain mesh? It looks like ROAM.

Pai-Hung Chen
|
From: Vladimir K. <vka...@si...> - 2000-08-14 22:10:40
|
Hi All,

I want to use cubemapping to create refractions in transparent objects (a glass vase). There is no ready texgen mode for that, so I need to calculate the texture coordinates myself. How can I do that? I think it should take at least 3 texture coordinates.

Thanks
Vlad
|
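One way to generate the three coordinates per vertex is to compute a refraction vector from the (unit) view direction and vertex normal via Snell's law, and feed it straight in as the 3D cube-map lookup. A sketch, where eta = n1/n2 is an assumed material constant (about 1/1.5 for air-to-glass):

#include <math.h>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Refract unit incident vector I about unit normal N with ratio eta.
// Returns false on total internal reflection; otherwise 'out' is the
// refracted direction, usable directly as a 3D cube-map coordinate.
bool Refract(const Vec3& I, const Vec3& N, float eta, Vec3& out)
{
    float cosI = -Dot(N, I);
    float k = 1.0f - eta * eta * (1.0f - cosI * cosI);
    if (k < 0.0f)
        return false;                 // total internal reflection
    float s = eta * cosI - sqrtf(k);
    out.x = eta * I.x + s * N.x;
    out.y = eta * I.y + s * N.y;
    out.z = eta * I.z + s * N.z;
    return true;
}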
From: gl <gl...@nt...> - 2000-08-14 23:02:47
|
Nvidia have a demo and paper on exactly that. Check out

http://www.nvidia.com/Marketing/Developer/DevRel.nsf/pages/74C794552AB4A65E8825687300757A8F

(watch for line wraps)
--
gl

> [snip: Vladimir's question, quoted in full above]
|
From: Peter D. <pd...@mm...> - 2000-08-15 10:05:39
|
> Also, could you briefly explain the reasoning of the normal-generating
> algorithm you suggested? A high-level, general idea is fine. :-)

Brief explanation: the surface partial derivatives are approximated with central differences; the normal is computed by normalizing the cross product of the tangential vectors.

More elaborate explanation:

Assume that the height field represents a regularly sampled smooth surface y = F(x, z). To compute the tangential vectors to this surface we need to approximate the partial derivatives dF/dx and dF/dz. I have chosen to use central differences

dF/dx ~= ( F(x+a) - F(x-a) ) / (2a)

instead of the more widely used forward differences

dF/dx ~= ( F(x+a) - F(x) ) / a

because the central difference is exact for degree 2 polynomials (ax^2 + bx + c), while the forward/backward version is exact only for linear functions (ax + b), and heightfields (including bumpmaps) are much better approximated by the former.

So, we compute

nx = height(x-1, z) - height(x+1, z)
nz = height(x, z-1) - height(x, z+1)

(note that the signs are reversed) and

dF/dx ~= - nx / 2a
dF/dz ~= - nz / 2a

where a is the distance in world units between two adjacent vertices of the grid, i.e. the grid spacing. The tangential vectors are

vx = (1, - nx / 2a, 0)
vz = (0, - nz / 2a, 1)

and their cross product is

v = (nx / 2a, 1, nz / 2a).

The only thing left is to normalize v. Note that to avoid the two divisions we may compute

v' = 2a v = (nx, 2a, nz)

instead, because it's going to be normalized anyway.

Hope this helps.

--
Peter Dimov
Multi Media Ltd.

> [snip: VertexNormal() code, quoted in full earlier in the thread]
|
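The exactness claim is easy to verify by hand: for F(x) = ax^2 + bx + c, (F(x+h) - F(x-h)) / (2h) = 2ax + b = F'(x), with no error term at all. A quick check with made-up coefficients:

#include <stdio.h>

int main()
{
    // F(x) = 3x^2 + 2x + 1, so F'(x) = 6x + 2 exactly.
    double a = 3, b = 2, c = 1, x = 5, h = 0.25;
    double fp = a*(x+h)*(x+h) + b*(x+h) + c;   // F(x+h)
    double fm = a*(x-h)*(x-h) + b*(x-h) + c;   // F(x-h)
    printf("central: %g   exact: %g\n", (fp - fm) / (2*h), 6*x + 2);
    return 0;
}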
From: Jim O. <j.o...@in...> - 2000-08-14 18:10:35
|
I use a form of bilinear filtering to calculate my vertex normals, like so:

+-----+
|\ 1 /|
| \ / |
|4 X 2|
| / \ |
|/ 3 \|
+-----+

where the normal of vertex X is the average of the normals of triangles 1...4. Note that the above does _not_ represent the actual layout of my terrain mesh, which has the following (probably quite familiar) layout:

+-----+
| /| /|
|/ |/ |
+--+--+
| /| /|
|/ |/ |
+-----+

This seems to work when it comes to avoiding the gray bands you are referring to.

Hope this helps,

Jim Offerman

Innovade
- designing the designer

----- Original Message -----
From: "Pai-Hung Chen" <pa...@ac...>
Sent: Monday, August 14, 2000 8:22 AM
Subject: [Algorithms] Terrain Normals

> [snip: question quoted in full above]
|