First, a bit of background. For my case an "object" is defined as a connected polygonal mesh with one base texture projected onto it using UVs. These objects are scaled in geometric detail in real-time using progressive mesh techniques. Currently I am also able to generate one lightmap per -face- on these objects. This is less than desirable for a couple of reasons: if I scale the mesh I get holes where the new seams don't meet up exactly, and the enormous number of texture state changes kills performance. So what I'm trying to accomplish is to generate one lightmap per object, with its own set of UVs, built from the face lightmaps, for the best visual quality and performance.
My current thinking is to use the XYZ values of each vertex/face, render/rasterize the lightmap for that face into a new, large texture, and generate a new set of UVs for each vertex. The question is: how do I map XYZ -> UV in the most seamless, space-efficient manner? Seems like a nice linear algebra problem ;)
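To make the idea concrete, here is a rough sketch (in C++) of the simplest XYZ -> UV mapping I can think of: planar-project each face along the dominant axis of its normal, then pack the resulting per-face charts into one big atlas with a naive shelf packer. The Vec3/Face types, the fixed chart size, and the packer itself are all placeholders I made up for illustration, not anything I already have working:

    // Sketch: planar projection per face + naive shelf packing into one atlas.
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Face { Vec3 v[3]; float u[3], w[3]; };   // output atlas UVs go in u/w

    // Project a position onto the plane perpendicular to the dominant
    // component of the face normal (classic planar mapping).
    static void project(const Vec3& n, const Vec3& p, float& s, float& t)
    {
        float ax = std::fabs(n.x), ay = std::fabs(n.y), az = std::fabs(n.z);
        if (ax >= ay && ax >= az)      { s = p.y; t = p.z; }   // project along X
        else if (ay >= ax && ay >= az) { s = p.x; t = p.z; }   // project along Y
        else                           { s = p.x; t = p.y; }   // project along Z
    }

    // Assign each face a rectangle in a lightmapSize x lightmapSize atlas and
    // write normalized UVs per vertex.
    void buildAtlasUVs(std::vector<Face>& faces, int lightmapSize, int chartTexels)
    {
        int penX = 0, penY = 0;

        for (Face& f : faces)
        {
            // Face normal from the cross product of two edges.
            Vec3 e1 = { f.v[1].x - f.v[0].x, f.v[1].y - f.v[0].y, f.v[1].z - f.v[0].z };
            Vec3 e2 = { f.v[2].x - f.v[0].x, f.v[2].y - f.v[0].y, f.v[2].z - f.v[0].z };
            Vec3 n  = { e1.y * e2.z - e1.z * e2.y,
                        e1.z * e2.x - e1.x * e2.z,
                        e1.x * e2.y - e1.y * e2.x };

            // Planar-project the three vertices and find the chart's 2D bounds.
            float s[3], t[3], sMin = 1e30f, sMax = -1e30f, tMin = 1e30f, tMax = -1e30f;
            for (int i = 0; i < 3; ++i)
            {
                project(n, f.v[i], s[i], t[i]);
                if (s[i] < sMin) sMin = s[i];
                if (s[i] > sMax) sMax = s[i];
                if (t[i] < tMin) tMin = t[i];
                if (t[i] > tMax) tMax = t[i];
            }

            // Advance the shelf packer; wrap to a new shelf when the row is full.
            if (penX + chartTexels > lightmapSize) { penX = 0; penY += chartTexels; }
            if (penY + chartTexels > lightmapSize) { /* atlas full: grow or add a page */ }

            // Normalize the chart into its rectangle and emit atlas-space UVs.
            float sRange = (sMax > sMin) ? (sMax - sMin) : 1.0f;
            float tRange = (tMax > tMin) ? (tMax - tMin) : 1.0f;
            for (int i = 0; i < 3; ++i)
            {
                f.u[i] = (penX + (s[i] - sMin) / sRange * chartTexels) / lightmapSize;
                f.w[i] = (penY + (t[i] - tMin) / tRange * chartTexels) / lightmapSize;
            }
            penX += chartTexels;
        }
    }

The obvious weaknesses are that one chart per face wastes atlas space and still leaves a seam on every edge, so presumably the real answer involves merging adjacent, roughly coplanar faces into shared charts before packing. That's the part I'm looking for references on.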
I'm sure this topic must have been covered before, so I'd be most appreciative if anyone could point me to good resources.
Thanks in advance,
Ryan Earl