Thread: [Algorithms] decompose onto non-orthogonal vectors
From: Discoe, B. <ben...@in...> - 2000-07-15 02:08:53
I thought it would be a simple problem, but all usual sources failed to
answer, so perhaps it will be obvious to someone on this list.

Given a point p and two unit vectors a and b like this:

      b
     /
   v/----p
   /    /
  /    /
 /    /
---------a
     u

how do you get scalars (u,v) such that u*a + v*b = p?
Ie. decompose p onto a and b.

I promise i consulted an academic, a linear algebra textbook and the WWW
(all failed) before resorting to the list :)

Thanks,
 Ben
http://vterrain.org/
From: Ignacio C. <i6...@ho...> - 2000-07-15 02:20:57
Discoe, Ben wrote:
> Given a point p and two unit vectors a and b like this:
>
>       b
>      /
>    v/----p
>    /    /
>   /    /
>  /    /
> ---------a
>      u
>
> how do you get scalars (u,v) such that u*a + v*b = p?
> Ie. decompose p onto a and b.

What you are trying to do is usually called 'change of basis'. Look here
for further explanations:

http://www.math.hmc.edu/calculus/tutorials/changebasis/


Ignacio Castano
ca...@cr...
From: Jonathan B. <jo...@bo...> - 2000-07-15 02:34:19
"Discoe, Ben" wrote:
>
> I thought it would be a simple problem, but all usual sources failed to
> answer, so perhaps it will be obvious to someone on this list.
>
> Given a point p and two unit vectors a and b like this:
>
>       b
>      /
>    v/----p
>    /    /
>   /    /
>  /    /
> ---------a
>      u
>
> how do you get scalars (u,v) such that u*a + v*b = p?
> Ie. decompose p onto a and b.

This is one of those "fundamental linear algebra things" that Ron and I
were just talking about.

Your original coordinates of 'p' are in the coordinate system that we
are used to, that is, two basis vectors that are orthonormal.  We will
call this space R.

You want the coordinates of 'p' in a system in which the vectors are
unit, but not orthogonal.  We will call this space S.

Just build a transformation matrix from R to S, and call that matrix T.
Then p' = Tp and you are done.

For instructions on how to make a matrix that goes from one space to
another, given the basis vectors of each space, consult any one of a
trillion graphics books.

-J.

> I promise i consulted an academic

What kind of academic was that?
From: Will P. <wi...@cs...> - 2000-07-15 03:08:11
I prefer this answer to mine from a theory standpoint, because it
expresses all of the gotchas that I had to list explicitly... code wise,
I wouldn't want to actually make a matrix to solve it (i'm sure you
wouldn't either).

One thing I didn't mention, and I haven't proven it to myself
mathematically yet: if vectors a and b are linearly independent (which
they don't seem to be from the original thread author's drawing), then
there's only one solution for u and v.  For example, in the "normal"
orthonormal basis that we use ((1,0,0),(0,1,0),(0,0,1)), there's only
one linear combination of the basis that will give a point p.  i'm
assuming the basis vectors span the subspace, for those mathematically
inclined people who want to jump on me. :)

Will
----
Will Portnoy
http://www.cs.washington.edu/homes/will

On Fri, 14 Jul 2000, Jonathan Blow wrote:
> This is one of those "fundamental linear algebra things"
> that Ron and I were just talking about.
>
> Your original coordinates of 'p' are in the coordinate
> system that we are used to, that is, two basis vectors
> that are orthonormal.  We will call this space R.
>
> You want the coordinates of 'p' in a system in which
> the vectors are unit, but not orthogonal.  We will call
> this space S.
>
> Just build a transformation matrix from R to S, and
> call that matrix T.  Then p' = Tp and you are done.
>
> For instructions on how to make a matrix that goes from
> one space to another, given the basis vectors of each
> space, consult any one of a trillion graphics books.
>
> -J.
From: Jonathan B. <jo...@bo...> - 2000-07-15 03:39:51
Will Portnoy wrote:
> I prefer this answer to mine from a theory standpoint, because it
> expresses all of the gotchas that I had to list explicitly... code
> wise, I wouldn't want to actually make a matrix to solve it (i'm sure
> you wouldn't either).

There was once a time when I would have thought this.  But once you get
good at understanding transformations, it is *hugely* easier to just say
something like: "Hey, I'll make some matrix, invert it, and transform
something else by that to get my answer."  (I don't mean for this
problem, which requires no inversion, but.)

It is a lot nicer to be able to come back to code like this later and
understand what it's doing, as opposed to some random math that's going
on.  And it is also a lot easier to write this kind of code in a
bug-free manner.

Unless you really *really* care about speed, I'd recommend using a
transformation-style approach if there is one.  And if you do care about
speed, I'd still recommend using this kind of approach until you are
sure the code works; *then* you optimize and keep the old code in a
comment as a backup.

> One thing I didn't mention, and I haven't proven it to myself
> mathematically yet: if vectors a and b are linearly independent (which
> they don't seem to be from the original thread author's drawing), then
> there's only one solution for u and v.

They are linearly independent in the drawing.  Linearly independent does
not mean orthogonal, it just means that the two vectors are not fully
redundant (i.e. one is not a scalar multiple of the other).

However, it is a basic theorem of linear algebra that there is only one
solution (u,v) for any given p, regardless of the vectors, so long as
they are linearly independent.

-J.
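The uniqueness theorem Jonathan cites has a one-line proof (my addition,
not from the thread).  Suppose the same p had two decompositions; in
LaTeX:

\[
u_1 a + v_1 b = p = u_2 a + v_2 b
\;\Longrightarrow\;
(u_1 - u_2)\,a + (v_1 - v_2)\,b = 0.
\]

Linear independence of a and b says the only combination of them equal
to zero is the trivial one, so u_1 = u_2 and v_1 = v_2.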
From: Jim O. <j.o...@in...> - 2000-07-15 06:05:12
> > I prefer this answer to mine from a theory standpoint, because it
> > expresses all of the gotchas that I had to list explicitly... code
> > wise, I wouldn't want to actually make a matrix to solve it (i'm
> > sure you wouldn't either).
>
> There was once a time when I would have thought this.  But once you get
> ...
> this kind of code in a bug-free manner.

I totally agree.  I used to have all sorts of math functions, grabbed
from this tutorial here and that book there, and the end result was that
I didn't even understand what was going on inside my own mathematics
library - which makes it *really* hard to find a bug.

Nowadays, I write down which matrices and matrix multiplications I need
to get the job done and then implement it using 'dumb' matrix
multiplications.  Once I am sure it works, I feed the matrices and
multiplications to this little tool I have written, which filters out
any unnecessary steps in the multiplications (i.e. B.x.x = A.x.x * I.x.x
+ A.x.y * I.x.y would simply become B.x.x = A.x.x, as I.x.x = 1 and
I.x.y = 0, where I is identity and A and B are any given matrix).  Works
like a charm :-)

Jim Offerman

Innovade
- designing the designer
From: <ro...@do...> - 2000-07-15 03:27:22
Jonathan Blow wrote:
>"Discoe, Ben" wrote:
>>
>> I thought it would be a simple problem, but all usual sources failed to
>> answer, so perhaps it will be obvious to someone on this list.
>> ...
>
>This is one of those "fundamental linear algebra things"
>that Ron and I were just talking about.
>
>Your original coordinates of 'p' are in the coordinate
>system that we are used to, that is, two basis vectors
>that are orthonormal.  We will call this space R.

Actually you don't need them expressed in an orthonormal basis, any
basis will do.  (Although the solution is shorter when you have an
orthonormal basis and have learned the concept of "dot product", but you
might not get that until one of the later chapters.)

>You want the coordinates of 'p' in a system in which
>the vectors are unit, but not orthogonal.  We will call
>this space S.
>
>Just build a transformation matrix from R to S, and
>call that matrix T.  Then p' = Tp and you are done.
>
>For instructions on how to make a matrix that goes from
>one space to another, given the basis vectors of each
>space, consult any one of a trillion graphics books.

Again, you don't need matrices (maybe chapter 3) but just high school
algebra.
From: <ro...@do...> - 2000-07-15 15:09:15
I wrote:
>Jonathan Blow wrote:
>
>>Just build a transformation matrix from R to S, and
>>call that matrix T.  Then p' = Tp and you are done.
>>
>>For instructions on how to make a matrix that goes from
>>one space to another, given the basis vectors of each
>>space, consult any one of a trillion graphics books.
>
>Again, you don't need matrices (maybe chapter 3) but just high school
>algebra.

Since I was rushing out the door to a Bastille Day party, I didn't get a
chance to add that your "change of basis" approach DOES have the benefit
that when you get this matrix, then you can solve the same problem for
ANY OTHER vector p, simply by a matrix multiplication.

But again, that was a much more complicated problem than that which was
asked, involving just one given vector p.  And also it would apply only
in the case that Ben was thinking of a 2D problem, and we have learned
from a subsequent post that it is the 3D problem he had in mind.  (It
takes three independent vectors, not two, to form a basis of 3-space.)

P.S. The Bastille Day party was great fun.  It was held in the studio of
a Berkeley company called Scientific Arts, the folks who built the giant
baseball mitt and Coke-bottle kiddy slide that grace the left-field
bleachers of the new SF Giants ballpark.  For the party they'd built a
perfectly working model guillotine, about three feet tall, and used it
to chop up the frogs for the frog-leg barbecue.
From: <ro...@do...> - 2000-07-15 16:55:11
I wrote:
>... For the party they'd built a perfectly working model guillotine,
>about three feet tall, and used it to chop up the frogs for the
>frog-leg barbecue.

I hasten to clarify that the frogs were already dead and skinned,
looking much like plucked chickens in texture, but quite humanoid in
form.
From: Will P. <wi...@cs...> - 2000-07-15 02:41:27
> how do you get scalars (u,v) such that u*a + v*b = p?
> Ie. decompose p onto a and b.

Regardless of what the answer is, you have one equation (ua + vb = p)
and two unknowns (u and v).  You need to come up with an equation
involving u and/or v that is linearly independent of the first equation,
and then you can solve the system of linear equations.

You could make up something like:

u + v = 1, which is the same as v = 1 - u

and then substitute in to the first equation:

ua + vb = p
ua + (1 - u) b = p
ua - ub = p - b
u = (p - b) / (a - b)

and then v = 1 - u... assuming I didn't make any mistakes in this
textual algebra, which is hard for me to write (and I imagine hard to
read).

The thing is, this made-up equation (u + v = 1) doesn't guarantee a
positive u and positive v, so I hope you weren't looking for that. :)

The other problem is you have to make sure vector a and vector b are
linearly independent, or there is likely to be no solution, no matter
what u, v, and the second equation are.

Will
From: <ro...@do...> - 2000-07-15 03:28:25
Will Portnoy wrote:
>> how do you get scalars (u,v) such that u*a + v*b = p?
>> Ie. decompose p onto a and b.
>
>Regardless of what the answer is, you have one equation (ua + vb = p) and
>two unknowns (u and v).

Nah, it's one vector equation, so two scalar equations.

>You need to come up with an equation involving u
>and/or v that is linearly independent of the first equation, and then you
>can solve the system of linear equations.
>
>You could make up something like:
>
>u + v = 1, which is the same as v = 1 - u
>
>and then substitute in to the first equation:

This part and the remainder is utter nonsense.
From: <ro...@do...> - 2000-07-15 03:24:24
Discoe, Ben wrote:
>
>I thought it would be a simple problem, but all usual sources failed to
>answer, so perhaps it will be obvious to someone on this list.
>
>Given a point p and two unit vectors a and b like this:
>
>      b
>     /
>   v/----p
>   /    /
>  /    /
> /    /
>---------a
>     u
>
>how do you get scalars (u,v) such that u*a + v*b = p?
>Ie. decompose p onto a and b.
>
>I promise i consulted an academic, a linear algebra textbook and the WWW
>(all failed) before resorting to the list :)

Sheesh, this is a problem that you ought to be able to solve after doing
Chapter 1 in any linear algebra book.  In fact, under the most natural
interpretation of what you have stated, this is a problem in
intermediate (high school) algebra.

First, this is essentially a 2D problem, because it has no solution
unless p lies in the plane spanned by a and b.  So I suppose first that
it is posed as a 2D problem, then consider what to do if you are dealing
with it posed as a 3 or higher dimensional problem.

So if you are "given" a, b and p, then you must be given their
components with respect to SOME coordinate system (and it doesn't even
have to be an orthonormal coordinate system).  We suppose you are given
a = (a1, a2), b = (b1, b2) and p = (p1, p2), where a1, a2, b1, b2, p1,
p2 are known scalars.  (If you are not given this info, but just a
diagram, then we are reduced to a high school geometry compass and
straight edge solution, which also is fun, but I'll forbear.)

Now when you write out your vector equation ua + vb = p in components,
it is actually a system of two linear equations

u a1 + v b1 = p1
u a2 + v b2 = p2

(Note that I refuse to use * for multiplication in algebra, even in
ASCII algebra.)

That is A SYSTEM OF TWO LINEAR EQUATIONS IN TWO UNKNOWNS u, v, which you
ought to have learned to solve in intermediate algebra; probably the
simplest of all problems in the subject that we call "linear algebra",
no?

Gaussian elimination, the general means of attacking all linear systems,
works fine for this, but most people would have remembered the Cramer's
rule solution in terms of determinants.

u = (p1 b2 - p2 b1)/(a1 b2 - a2 b1)
v = (a1 p2 - a2 p1)/(a1 b2 - a2 b1)

Voilà la solution!  (Where I use the French in honor of July 14.)

If the denominator (a1 b2 - a2 b1) is zero, then it means that a and b
are linearly dependent (i.e. collinear), and either it has no solution
(if p is not also collinear with a and b) or infinitely many different
solutions (if p is collinear with a and b).

Sheesh.  What kind of "academic" did you consult, a cooking teacher?

Now suppose that you are given this as a 3D problem, say
a = (a1, a2, a3) and b = (b1, b2, b3).  Then you have

u a1 + v b1 = p1
u a2 + v b2 = p2
u a3 + v b3 = p3

a system of THREE linear equations in TWO unknowns.  It may or may not
have solutions; in fact, as stated above, it will have a solution only
if p lies in the plane spanned by a and b, which requires that the three
equations be linearly dependent.

Cramer's rule does not apply here, but Gaussian elimination still does.
As you apply your favorite Gaussian elimination algorithm, one of two
things will happen: either

(1) you will get one of the equations into the form 0 = 0, in which case
it is irrelevant and you are left with a system of two equations in two
unknowns, or

(2) you will get one of the equations into the form 0 = 1, in which case
there is no solution, i.e. p does NOT lie in the plane spanned by a
and b.

No, I will not belabor the list with a review of Gaussian elimination
algorithms.  You will find one in Chapter 1 of your favorite linear
algebra book.

Sheesh.  You weren't trolling me were you?
From: Scott J. S. <sjs...@um...> - 2000-07-15 07:15:38
Okay, let me try and explain this as simply as possible.  You want to
find scalars u,v such that u*a + v*b = p.  Since these are 3-dimensional
vectors, you have 3 equations and 2 unknowns:

u*a1 + v*b1 = p1
u*a2 + v*b2 = p2
u*a3 + v*b3 = p3

where p1,p2,p3 are the coordinates of the point p, likewise for a1,a2,a3
and a, and b1,b2,b3 and b.

How do you solve a system of 3 linear equations with 2 unknowns?  If you
don't know Gaussian elimination, this can be done with some simple
algebraic rules and substitution.  If you know your constant values (a,
b, p) now, it's very simple to just do the necessary substitutions by
hand.  On the other hand, if the values are determined at run-time,
coding the algebra isn't as easy, unless you use gaussian elimination,
since gaussian elimination is essentially an algorithm for solving
linear equations.  The problem with "coding" algebra is that there are a
lot of special cases.

So I'll present you with one sample, "stupid" solution that will work,
assuming that a1 is non-zero.  Simply vary the approach if a1 is zero,
and choose the starting equation using any element of a that is non-zero
(at least one of them must be non-zero, or else you reduce to a much
simpler equation, since the point is along b).  This is a dumb approach,
simply meant as a sample illustration of a possible, yet ugly, way to
solve a system like this.  It also assumes a solution exists.

Start with equation 1.  Solve for u in terms of v.  This gives:

u = (p1 - v*b1)/a1

Plug this value of u into equation 2.  This gives you:

a2*(p1 - v*b1)/a1 + v*b2 = p2

Now collect the terms containing v:

v*(b2 - a2*b1/a1) = p2 - a2*p1/a1

Divide by the factor multiplying v, so you have v by itself.  Now you
have v expressed entirely in terms of known values, so you know the
value of v.  With this, it is trivial to find the value of u: just plug
back into the first equation.

By plugging the values of u and v you have determined into the third
equation, you can see if in fact there is a real solution.  If you end
up getting something like 0=1 for the third equation upon substituting
your discovered values of u and v, no solution exists.

Note to Ron: obviously, this isn't rigorous at all, but the goal was
merely to give an impression of how the solution can be obtained.  You'd
really want to use gaussian elimination for anything like this, since
all of the special cases (is a1 zero? start with a different equation;
are a1,a2,a3 all zero? just reduce to one equation; etc.) are neatly
handled by it.  In fact, I like to think of gaussian elimination as a
mathematical "encapsulation" for solving systems of linear equations: it
automatically takes care of the details for you.

Try doing a search on the net for gaussian elimination, you will
definitely come across some hits.  I believe I learned it my sophomore
year of high school, in algebra II class, so it's certainly not anything
remotely unusual.

Hope this helps.

--
Scott Shumaker
sjs...@um...

On Fri, 14 Jul 2000, Discoe, Ben wrote:
> Given a point p and two unit vectors a and b like this:
> [...]
> how do you get scalars (u,v) such that u*a + v*b = p?
> Ie. decompose p onto a and b.
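Scott's recipe translates almost line for line into code.  A sketch in C
(names and tolerance are my own; it keeps his a1 != 0 assumption and
adds the third-equation consistency check he describes):

#include <math.h>

#define EPS 1e-9   /* arbitrary tolerance for the consistency test */

/* Solve u*a + v*b = p in 3D by substitution.
   Returns 1 on success, 0 if no (unique) solution is found. */
int decompose_by_substitution(const double a[3], const double b[3],
                              const double p[3], double *u, double *v)
{
    if (fabs(a[0]) < EPS)
        return 0;                    /* Scott's a1 != 0 assumption */

    /* Substitute u = (p1 - v*b1)/a1 into equation 2:
       a2*(p1 - v*b1)/a1 + v*b2 = p2
       => v*(b2 - a2*b1/a1) = p2 - a2*p1/a1 */
    double denom = b[1] - a[1] * b[0] / a[0];
    if (fabs(denom) < EPS)
        return 0;                    /* degenerate; pick other equations */
    *v = (p[1] - a[1] * p[0] / a[0]) / denom;
    *u = (p[0] - *v * b[0]) / a[0];

    /* Plug (u,v) into the third equation: a "0 = 1" style failure here
       means p is not in the plane spanned by a and b. */
    if (fabs(*u * a[2] + *v * b[2] - p[2]) > EPS)
        return 0;
    return 1;
}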
From: Klaus H. <k_h...@os...> - 2000-07-15 14:04:36
----- Original Message -----
From: Scott Justin Shumaker <sjs...@um...>
To: <gda...@li...>
Sent: Saturday, July 15, 2000 9:15 AM
Subject: Re: [Algorithms] decompose onto non-orthogonal vectors

> Okay, let me try and explain this as simply as possible.  You want to
> find scalars u,v such that u*a + v*b = p.  Since these are
> 3-dimensional vectors, you have 3 equations and 2 unknowns:
>
> u*a1 + v*b1 = p1
> u*a2 + v*b2 = p2
> u*a3 + v*b3 = p3
>
> How do you solve a system of 3 linear equations with 2 unknowns?  If
> you don't know Gaussian elimination, this can be done with some simple
> algebraic rules and substitution.

Okay, this may become very embarrassing for me now... :)

How can you solve a system of 3 linear equations in two unknowns, u, v,
if they cannot be reduced to two linear equations (for example, a2 = a3,
b2 = b3, and p2 = p3)???

Let's assume that we change the 3 linear equations as follows:

u*a1 + v*b1 + w*c1 = p1
u*a2 + v*b2 + w*c2 = p2
u*a3 + v*b3 + w*c3 = p3

Then there's a unique solution, if

      | a1 b1 c1 |
  det | a2 b2 c2 | != 0
      | a3 b3 c3 |

If we set c1 = c2 = c3 = 0, then we have the same system as above
(yours), but with 3 unknowns:

u*a1 + v*b1 + w*0 = p1
u*a2 + v*b2 + w*0 = p2
u*a3 + v*b3 + w*0 = p3

Then

           | a1 b1 0 |
  Dn = det | a2 b2 0 | = 0
           | a3 b3 0 |

Since Dn = 0, we get a division by zero, and thus there's an infinite
number of solutions.

... or did I just miss something???

Niki
From: <ro...@do...> - 2000-07-15 16:21:02
Klaus Hartmann wrote:
>Okay, this may become very embarrassing for me now... :) How can you
>solve a system of 3 linear equations in two unknowns, u, v, if they
>cannot be reduced to two linear equations?

The system of three equations is equivalent to (i.e., has exactly the
same solutions as) a system of two equations if and only if one of the
equations is a linear combination of the other two.  And you determine
whether or not this is the case by Gaussian elimination.

"Elimination" entails eliminating unknowns from equations.  If there are
more equations than unknowns, then Gaussian elimination will get you to
a step where one of the equations has no remaining unknowns.

If this unknownless equation is a TRUE equation, e.g. 0 = 0 or
127 = 127, then it is irrelevant, the three equations were indeed
linearly dependent, and the Gaussian elimination will have left you with
a system of two equations in two unknowns.

If this unknownless equation is a FALSE equation, e.g. 0 = 1, then we
know that the original system of three equations is inconsistent and has
NO solutions.

Each step of Gaussian elimination consists of multiplying an equation
termwise by a constant, or replacing one of the equations by the
termwise sum of itself with a multiple of one of the other equations.
One can easily see that this does not change the set of possible
solutions to the system.  The trick then is to choose the two equations
and the multiplier at each step so that it results in eliminating one of
the unknowns from one of the equations.  It's not a hard trick, and I'm
not going to say any more about it here, except to point out that there
are various versions of the algorithm, motivated by minimizing the
number of steps or minimizing the potential numerical errors in the
result.

This is all VERY ELEMENTARY LINEAR ALGEBRA.  It discourages me that a
mail list supposedly on game algorithms spends so much time on stuff
that everyone ought to have had in their basic formation, and which is
better learned from math courses and text books than from mail lists.
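In code, the procedure Ron describes might look like this sketch in C
(names, tolerance and the pivoting strategy are my choices, not from the
thread):

#include <math.h>

#define EPS 1e-9

static void swap_rows(double m[3][3], int i, int j)
{
    for (int k = 0; k < 3; k++) {
        double t = m[i][k]; m[i][k] = m[j][k]; m[j][k] = t;
    }
}

/* Gauss-Jordan elimination on the augmented rows [a_i  b_i | p_i].
   Returns 1 and fills (u,v) if a unique solution exists; 0 if a and b
   are linearly dependent or if p is not in their plane. */
int solve_3eq_2unknowns(const double a[3], const double b[3],
                        const double p[3], double *u, double *v)
{
    double m[3][3];
    for (int i = 0; i < 3; i++) {
        m[i][0] = a[i]; m[i][1] = b[i]; m[i][2] = p[i];
    }

    for (int col = 0; col < 2; col++) {
        /* Pick the largest remaining pivot in this column. */
        int pivot = col;
        for (int r = col + 1; r < 3; r++)
            if (fabs(m[r][col]) > fabs(m[pivot][col])) pivot = r;
        if (fabs(m[pivot][col]) < EPS)
            return 0;                /* a, b linearly dependent */
        swap_rows(m, col, pivot);

        /* Eliminate this unknown from the other equations. */
        for (int r = 0; r < 3; r++) {
            if (r == col) continue;
            double f = m[r][col] / m[col][col];
            for (int k = col; k < 3; k++) m[r][k] -= f * m[col][k];
        }
    }

    /* The third row is now "0 = c": Ron's case (1) if c is 0,
       case (2) - no solution - otherwise. */
    if (fabs(m[2][2]) > EPS)
        return 0;

    *u = m[0][2] / m[0][0];
    *v = m[1][2] / m[1][1];
    return 1;
}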
From: Pierre T. <p.t...@wa...> - 2000-07-15 12:28:36
      b
     /
   v/----p
   /    /
  /    /
 /    /
c ---------a
     u

- Add a point c to form a triangle abc.
- Now you can use the standard raytracing code which computes (u,v) =
  ray-triangle impact (texture) coordinates, then check if 0<u<1, 0<v<1
  and 0<u+v<1 to keep or discard the impact.

The underlying method is the same as the one already suggested
(expressing p in the abc basis), but in the raytracing field you'll find
less formal and more pragmatic/optimized-to-death answers/code.

/*
 * Pierre Terdiman        * Home: p.t...@wa...
 * Software engineer      * Zappy's Lair: www.codercorner.com
 */

----- Original Message -----
From: Discoe, Ben <ben...@in...>
To: <gda...@li...>
Sent: Saturday, July 15, 2000 4:08 AM
Subject: [Algorithms] decompose onto non-orthogonal vectors

> how do you get scalars (u,v) such that u*a + v*b = p?
> Ie. decompose p onto a and b.
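For reference, here is a compact cross-product formulation of those
triangle coordinates, of the kind found in ray-triangle code.  With
n = a x b, the identities p x b = u n and a x p = v n give u and v
directly.  A sketch in C; the names are mine, and p is assumed to lie in
the plane of a and b:

static void cross(const double x[3], const double y[3], double out[3])
{
    out[0] = x[1]*y[2] - x[2]*y[1];
    out[1] = x[2]*y[0] - x[0]*y[2];
    out[2] = x[0]*y[1] - x[1]*y[0];
}

static double dot3(const double x[3], const double y[3])
{
    return x[0]*y[0] + x[1]*y[1] + x[2]*y[2];
}

/* Computes (u,v) with p = u*a + v*b; returns 1 if the point also lies
   inside triangle abc (c at the origin), i.e. passes Pierre's test. */
int barycentric_uv(const double a[3], const double b[3], const double p[3],
                   double *u, double *v)
{
    double n[3], t[3];
    cross(a, b, n);
    double nn = dot3(n, n);
    if (nn == 0.0) return 0;         /* degenerate: a, b collinear */

    cross(p, b, t);  *u = dot3(t, n) / nn;
    cross(a, p, t);  *v = dot3(t, n) / nn;

    return *u > 0.0 && *v > 0.0 && *u + *v < 1.0;
}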
From: Peter D. <pd...@mm...> - 2000-07-15 13:52:18
> Given a point p and two unit vectors a and b like this:
>
>       b
>      /
>    v/----p
>    /    /
>   /    /
>  /    /
>  ---------a
>       u
>
> how do you get scalars (u,v) such that u*a + v*b = p?
> Ie. decompose p onto a and b.

Easy.  Dot the equation with a and b, correspondingly:

u * (a dot a) + v * (b dot a) = p dot a;
u * (a dot b) + v * (b dot b) = p dot b.

Solve the system.

--
Peter Dimov
Multi Media Ltd.
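A sketch of this in C: dotting with a and b yields a 2x2 system, which a
final Cramer's-rule step solves, in any number of dimensions (DIM and
the names are my choices, not from the thread):

#define DIM 3

static double dotn(const double x[DIM], const double y[DIM])
{
    double s = 0.0;
    for (int i = 0; i < DIM; i++) s += x[i] * y[i];
    return s;
}

/* u*(a.a) + v*(b.a) = p.a
   u*(a.b) + v*(b.b) = p.b
   Returns 1 on success, 0 if a and b are linearly dependent. */
int decompose_by_dots(const double a[DIM], const double b[DIM],
                      const double p[DIM], double *u, double *v)
{
    double aa = dotn(a, a), ab = dotn(a, b), bb = dotn(b, b);
    double pa = dotn(p, a), pb = dotn(p, b);

    double det = aa * bb - ab * ab;   /* zero iff a, b collinear */
    if (det == 0.0) return 0;
    *u = (pa * bb - pb * ab) / det;
    *v = (aa * pb - ab * pa) / det;
    return 1;
}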
From: <ro...@do...> - 2000-07-15 15:55:23
Peter Dimov wrote:
>> how do you get scalars (u,v) such that u*a + v*b = p?
>> Ie. decompose p onto a and b.
>
>Easy.  Dot the equation with a and b, correspondingly:
>
>u * (a dot a) + v * (b dot a) = p dot a;
>u * (a dot b) + v * (b dot b) = p dot b.
>
>Solve the system.

In the 2D case this is more complicated than it need be, but will give
the correct solution (provided, of course, that one knows dot product
and how to solve a 2x2 system).

In the 3D case, this will give the correct solution only if p is indeed
in the plane of a and b.  It will not detect that there is no solution
if p is not coplanar with a and b.
From: Peter D. <pd...@mm...> - 2000-07-15 23:14:52
> >Easy.  Dot the equation with a and b, correspondingly:
> >
> >u * (a dot a) + v * (b dot a) = p dot a;
> >u * (a dot b) + v * (b dot b) = p dot b.
> >
> >Solve the system.
>
> In the 2D case this is more complicated than it need be, but will give
> the correct solution (provided, of course, that one knows dot product
> and how to solve a 2x2 system).
>
> In the 3D case, this will give the correct solution only if p is
> indeed in the plane of a and b.  It will not detect that there is no
> solution if p is not coplanar with a and b.

This works for any number of dimensions.  Whenever a solution exists,
the (u, v) pair determined by this approach will have the property
u * a + v * b = p.  (Easy to prove.)

--
Peter Dimov
Multi Media Ltd.
From: <ro...@do...> - 2000-07-16 06:00:18
Peter Dimov wrote:
>> In the 3D case, this will give the correct solution only if p is
>> indeed in the plane of a and b.  It will not detect that there is no
>> solution if p is not coplanar with a and b.
>
>This works for any number of dimensions.  Whenever a solution exists,
>the (u, v) pair determined by this approach will have the property
>u * a + v * b = p.  (Easy to prove.)

Agree that it works for any number of dimensions, provided that p does
indeed lie in the plane of a and b.  My point was that it will not
detect the case that p does not lie in that plane, that is, will not
detect the case that no solution exists.
From: Peter D. <pd...@mm...> - 2000-07-16 12:11:36
> >This works for any number of dimensions.  Whenever a solution exists,
> >the (u, v) pair determined by this approach will have the property
> >u * a + v * b = p.  (Easy to prove.)
>
> Agree that it works for any number of dimensions, provided that p does
> indeed lie in the plane of a and b.  My point was that it will not
> detect the case that p does not lie in that plane, that is, will not
> detect the case that no solution exists.

Hmm, it must be my English. :)

1. Find the (u, v) pair as above.
2. Check whether u * a + v * b = p.
3. If yes, this (u, v) pair is the solution.
4. If no, no solution exists.

So it does indeed detect whether p lies in the plane formed by a and b.
Whenever p lies in that plane, an (u, v) pair such that p = u * a + v * b
must exist, and vice versa.  (I'm deliberately assuming that this plane
is unique.  Nevertheless, the method "works" even if a and b are
collinear, or one of them is zero.)

A more "formal" proof:

First, assume that a solution does exist, i.e. there exists an (u0, v0)
pair such that u0 * a + v0 * b = p.  Such a pair must satisfy the linear
system of equations I gave, and therefore can be determined by solving
that system.  If this pair is unique (the case we are interested in), it
will be the only solution of this system (*), and therefore will have
the property u * a + v * b = p.

Now, assume that no solution exists, i.e. no (u, v) pair satisfies
u * a + v * b = p.  In this case, obviously, whatever the solution of
the system, it cannot have this property.

QED?

--
Peter Dimov
Multi Media Ltd.

(*) The determinant is zero when a.a * b.b = a.b * b.a, i.e. when a and
b are collinear.

P.S. I'm inclined to note that this method does not even mention
coordinates, relying only on the fact that multiplication, addition, and
dot product are defined on a, b, and p, and therefore you must like it,
by definition. :)
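Peter's four steps, as a thin wrapper over the decompose_by_dots()
sketch shown earlier in the thread (it reuses that function and DIM;
EPS is an arbitrary tolerance of mine for the floating-point check):

#include <math.h>

#define EPS 1e-9

/* Solve, then substitute back: returns 1 with the solution in (u,v),
   or 0 if no solution exists (p outside the plane of a and b, or a and
   b linearly dependent). */
int decompose_checked(const double a[DIM], const double b[DIM],
                      const double p[DIM], double *u, double *v)
{
    if (!decompose_by_dots(a, b, p, u, v))
        return 0;

    /* Step 2: check u * a + v * b = p componentwise. */
    for (int i = 0; i < DIM; i++)
        if (fabs(*u * a[i] + *v * b[i] - p[i]) > EPS)
            return 0;                /* step 4: no solution exists */
    return 1;
}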
From: <ro...@do...> - 2000-07-16 14:38:19
Peter Dimov wrote:
>Hmm, it must be my English. :)
>
>1. Find the (u, v) pair as above.
>2. Check whether u * a + v * b = p.
>3. If yes, this (u, v) pair is the solution.
>4. If no, no solution exists.

OK, you've stated a more complete algorithm now, and I agree that it
works.

In any case, as I stated previously, I think your solution is the best
and have long used it myself.  The Gaussian elimination approach,
however, is based more directly on the most basic and fundamental
concepts of linear algebra, which is why I stressed it.  The dot product
approach is based on somewhat more advanced ideas.
From: <ro...@do...> - 2000-07-16 15:08:13
I wrote:
>Peter Dimov wrote:
>>Hmm, it must be my English. :)
>>
>>1. Find the (u, v) pair as above.
>>2. Check whether u * a + v * b = p.
>>3. If yes, this (u, v) pair is the solution.
>>4. If no, no solution exists.
>
>OK, you've stated a more complete algorithm now, and I agree that it
>works.
>
>In any case, as I stated previously, I think your solution is the best
>and have long used it myself.  The Gaussian elimination approach,
>however, is based more directly on the most basic and fundamental
>concepts of linear algebra, which is why I stressed it.  The dot
>product approach is based on somewhat more advanced ideas.

And further, it appears to me that if you don't already know that p is a
linear combination of a and b, then the Gaussian elimination solution is
somewhat shorter than your algorithm above.
From: Peter D. <pd...@mm...> - 2000-07-16 20:34:08
> And further, it appears to me that if you don't already know that p is
> a linear combination of a and b, then the Gaussian elimination
> solution is somewhat shorter than your algorithm above.

To further beat this dead horse:

Yes, a coordinate-based solution may indeed be shorter algorithmically.
The method I presented wasn't meant to be the most efficient one.  I
wanted to give the OP (Ben?) an idea how to proceed in situations like
this.

When you have one vector equation with several scalar unknowns, as in
this case, the trick is to project the equation along several (linearly
independent) axes and solve the resulting linear system.  The axes
usually chosen are the coordinate axes - this corresponds to dotting the
equation with the basis vectors.  This problem demonstrated a situation
where a different set of axes proves to be useful for the projection,
namely the a and b vectors.

--
Peter Dimov
Multi Media Ltd.