Re: [Algorithms] decompose onto non-orthogonal vectors
From: Jonathan B. <jo...@bo...> - 2000-07-15 03:39:51
Will Portnoy wrote:
> I prefer this answer to mine from a theory standpoint, because it
> expresses all of the gotchas that I had to list explicitly... code-wise, I
> wouldn't want to actually make a matrix to solve it (I'm sure you wouldn't
> either).
There was once a time when I would have thought this. But once you get
good at understanding transformations, it is *hugely* easier to just say
something like: "Hey, I'll make some matrix, invert it, and transform
something else by that to get my answer." (I don't mean for this problem,
which requires no inversion, but in general.) It is a lot nicer to be able to come
back to code like this later and understand what it's doing, as opposed
to some random math that's going on. And it is also a lot easier to write
this kind of code in a bug-free manner.
Unless you really *really* care about speed, I'd recommend using a
transformation-style approach if there is one. And if you do care
about speed, I'd still recommend using this kind of approach, until you
are sure the code works; *then* you optimize and keep the old code
in a comment as a backup.
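A minimal sketch of the transformation-style approach, in Python (all names here are illustrative, and this assumes 2D vectors, or equivalently that p already lies in the plane spanned by a and b): build the matrix whose columns are a and b, invert it, and transform p by the inverse to read off u and v.

```python
def decompose(a, b, p):
    # Solve p = u*a + v*b, i.e. M * (u, v) = p where M has columns a, b.
    ax, ay = a
    bx, by = b
    px, py = p
    det = ax * by - bx * ay  # zero iff a and b are linearly dependent
    if det == 0:
        raise ValueError("a and b are not linearly independent")
    # Apply the inverse of the 2x2 matrix M to p:
    u = (by * px - bx * py) / det
    v = (ax * py - ay * px) / det
    return u, v

# Non-orthogonal but independent basis: p = 1*a + 2*b
u, v = decompose((1.0, 0.0), (1.0, 1.0), (3.0, 2.0))
# u = 1.0, v = 2.0
```

In real code you would likely hand this to your math library's solve/invert routine rather than open-coding the 2x2 inverse; the point is that the intent ("invert the basis matrix, transform p") stays readable.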
> One thing I didn't mention, and I haven't proven it to myself
> mathematically yet: if vectors a and b are linearly independent (which
> they don't seem to be from the original thread author's drawing), then
> there's only one solution for u and v. For example, in the "normal"
> orthonormal basis that we use ((1,0,0), (0,1,0), (0,0,1)), there's only one
> linear combination of the basis that will give a point p.
They are linearly independent in the drawing. Linear independence does
not mean orthogonality; it just means that the two vectors are not
redundant (i.e. neither is a scalar multiple of the other). However,
it is a basic theorem of linear algebra that, for any p, there is
exactly one solution for u and v so long as the vectors are linearly
independent.
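The determinant test makes this concrete (a hypothetical 2D sketch, names illustrative): the matrix with columns a and b is invertible, and hence the decomposition unique, exactly when its determinant is nonzero, which happens exactly when neither vector is a scalar multiple of the other.

```python
def independent(a, b):
    # det of the matrix [a b]: zero iff b is a scalar multiple of a
    return a[0] * b[1] - b[0] * a[1] != 0

# Non-orthogonal but independent -> unique decomposition exists:
independent((1.0, 0.0), (1.0, 1.0))  # True
# Dependent (b = 2a) -> no unique decomposition:
independent((1.0, 2.0), (2.0, 4.0))  # False
```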
-J.